Updates from: 11/07/2023 02:18:14
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Red Teaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/red-teaming.md
Title: Introduction to red teaming large language models (LLMs)
+ Title: Planning red teaming for large language models (LLMs) and their applications
-description: Learn about how red teaming and adversarial testing is an essential practice in the responsible development of systems and features using large language models (LLMs)
+description: Learn about how red teaming and adversarial testing are an essential practice in the responsible development of systems and features using large language models (LLMs)
Previously updated : 05/18/2023 Last updated : 11/03/2023
recommendations: false
keywords:
-# Introduction to red teaming large language models (LLMs)
+# Planning red teaming for large language models (LLMs) and their applications
+
+This guide offers some potential strategies for planning how to set up and manage red teaming for responsible AI (RAI) risks throughout the large language model (LLM) product life cycle.
+
+## What is red teaming?
The term *red teaming* has historically described systematic adversarial attacks for testing security vulnerabilities. With the rise of LLMs, the term has extended beyond traditional cybersecurity and evolved in common usage to describe many kinds of probing, testing, and attacking of AI systems. With LLMs, both benign and adversarial usage can produce potentially harmful outputs, which can take many forms, including harmful content such as hate speech, incitement or glorification of violence, or sexual content.
-**Red teaming is an essential practice in the responsible development of systems and features using LLMs**. While not a replacement for systematic [measurement and mitigation](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context) work, red teamers help to uncover and identify harms and, in turn, enable measurement strategies to validate the effectiveness of mitigations.
+## Why is RAI red teaming an important practice?
+
+Red teaming is a best practice in the responsible development of systems and features using LLMs. While not a replacement for systematic measurement and mitigation work, red teamers help to uncover and identify harms and, in turn, enable measurement strategies to validate the effectiveness of mitigations.
-Microsoft has conducted red teaming exercises and implemented safety systems (including [content filters](content-filter.md) and other [mitigation strategies](prompt-engineering.md)) for its Azure OpenAI Service models (see this [Responsible AI Overview](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context)). However, the context of your LLM application will be unique and you also should conduct red teaming to:
+While Microsoft has conducted red teaming exercises and implemented safety systems (including [content filters](./content-filter.md) and other [mitigation strategies](./prompt-engineering.md)) for its Azure OpenAI Service models (see this [Overview of responsible AI practices](/legal/cognitive-services/openai/overview)), the context of each LLM application will be unique and you also should conduct red teaming to:
+
+- Test the LLM base model and determine whether there are gaps in the existing safety systems, given the context of your application.
-- Test the LLM base model and determine whether there are gaps in the existing safety systems, given the context of your application system.
-- Identify and mitigate shortcomings in the existing default filters or mitigation strategies.
-- Provide feedback on failures so we can make improvements.
-Here is how you can get started in your process of red teaming LLMs. Advance planning is critical to a productive red teaming exercise.
+- Provide feedback on failures in order to make improvements.
+
+- Note that red teaming is not a replacement for systematic measurement. A best practice is to complete an initial round of manual red teaming before conducting systematic measurements and implementing mitigations. As highlighted above, the goal of RAI red teaming is to identify harms, understand the risk surface, and develop the list of harms that can inform what needs to be measured and mitigated.
-## Getting started
+Here is how you can get started and plan your process of red teaming LLMs. Advance planning is critical to a productive red teaming exercise.
-### Managing your red team
+## Before testing
-**Assemble a diverse group of red teamers.**
+### Plan: Who will do the testing
-LLM red teamers should be a mix of people with diverse social and professional backgrounds, demographic groups, and interdisciplinary expertise that fits the deployment context of your AI system. For example, if you're designing a chatbot to help health care providers, medical experts can help identify risks in that domain.
+**Assemble a diverse group of red teamers**
-**Recruit red teamers with both benign and adversarial mindsets.**
+Determine the ideal composition of red teamers in terms of people's experience, demographics, and expertise across disciplines (for example, experts in AI, social sciences, security) for your product's domain. For example, if you're designing a chatbot to help health care providers, medical experts can help identify risks in that domain.
+
+**Recruit red teamers with both benign and adversarial mindsets**
Having red teamers with an adversarial mindset and security-testing experience is essential for understanding security risks, but red teamers who are ordinary users of your application system and haven't been involved in its development can bring valuable perspectives on harms that regular users might encounter.
-**Remember that handling potentially harmful content can be mentally taxing.**
+**Assign red teamers to harms and/or product features**
+
+- Assign RAI red teamers with specific expertise to probe for specific types of harms (for example, security subject matter experts can probe for jailbreaks, meta prompt extraction, and content related to cyberattacks).
+
+- For multiple rounds of testing, decide whether to switch red teamer assignments in each round to get diverse perspectives on each harm and maintain creativity. If switching assignments, allow time for red teamers to get up to speed on the instructions for their newly assigned harm.
+
+- In later stages, when the application and its UI are developed, you might want to assign red teamers to specific parts of the application (i.e., features) to ensure coverage of the entire application.
+
+- Consider how much time and effort each red teamer should dedicate (for example, those testing for benign scenarios might need less time than those testing for adversarial scenarios).
+
+It can be helpful to provide red teamers with:
+ - Clear instructions that could include:
+ - An introduction describing the purpose and goal of the given round of red teaming; the product and features that will be tested and how to access them; what kinds of issues to test for; red teamers' focus areas, if the testing is more targeted; how much time and effort each red teamer should spend on testing; how to record results; and who to contact with questions.
+ - A file or location for recording their examples and findings, including information such as:
+ - The date an example was surfaced; a unique identifier for the input/output pair if available, for reproducibility purposes; the input prompt; a description or screenshot of the output.
+
+### Plan: What to test
+
+Because an application is developed using a base model, you may need to test at several different layers:
+
+- The LLM base model with its safety system in place to identify any gaps that may need to be addressed in the context of your application system. (Testing is usually done through an API endpoint.)
+
+- Your application. (Testing is best done through a UI.)
+
+- Both the LLM base model and your application, before and after mitigations are in place.
+
+The following recommendations help you choose what to test at various points during red teaming:
+
+- You can begin by testing the base model to understand the risk surface, identify harms, and guide the development of RAI mitigations for your product.
+
+- Test versions of your product iteratively with and without RAI mitigations in place to assess the effectiveness of RAI mitigations. (Note that manual red teaming might not be a sufficient assessment; use systematic measurements as well, but only after completing an initial round of manual red teaming.)
+
+- Conduct testing of application(s) on the production UI as much as possible because this most closely resembles real-world usage.
+
+When reporting results, make clear which endpoints were used for testing. When testing was done in an endpoint other than product, consider testing again on the production endpoint or UI in future rounds.
+
+### Plan: How to test
+
+**Conduct open-ended testing to uncover a wide range of harms.**
+
+The benefit of having RAI red teamers explore and document any problematic content (rather than asking them to find examples of specific harms) is that they can creatively explore a wide range of issues, uncovering blind spots in your understanding of the risk surface.
+
+**Create a list of harms from the open-ended testing.**
+
+- Consider creating a list of harms, with definitions and examples of the harms.
+- Provide this list as a guideline to red teamers in later rounds of testing.
+
+**Conduct guided red teaming and iterate: Continue probing for harms in the list; identify new harms that surface.**
+
+Use a list of harms if available and continue testing for known harms and the effectiveness of their mitigations. In the process, you will likely identify new harms. Integrate these into the list and be open to shifting measurement and mitigation priorities to address the newly identified harms.
+
+Plan which harms to prioritize for iterative testing. Several factors can inform your prioritization, including, but not limited to, the severity of the harms and the context in which they are more likely to surface.
+
+### Plan: How to record data
+
+**Decide what data you need to collect and what data is optional.**
+
+- Decide what data the red teamers will need to record (for example, the input they used; the output of the system; a unique ID, if available, to reproduce the example in the future; and other notes.)
+
+- Be strategic with what data you are collecting to avoid overwhelming red teamers, while not missing out on critical information.
+
+**Create a structure for data collection**
+
+A shared Excel spreadsheet is often the simplest method for collecting red teaming data. A benefit of this shared file is that red teamers can review each other's examples to gain creative ideas for their own testing and avoid duplication of data.
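+
+For example, the spreadsheet columns might mirror the data described earlier (a suggested layout, not a required schema):
+
+| Date surfaced | Example ID | Input prompt | Output (excerpt or screenshot) | Notes |
+|--|--|--|--|--|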
+
+## During testing
-You will need to take care of your red teamers, not only by limiting the amount of time they spend on an assignment, but also by letting them know they can opt out at any time. Also, avoid burnout by switching red teamers' assignments to different focus areas.
+**Plan to be on active standby while red teaming is ongoing**
-### Planning your red teaming
+- Be prepared to assist red teamers with instructions and access issues.
+- Monitor progress on the spreadsheet and send timely reminders to red teamers.
-#### Where to test
+## After each round of testing
-Because a system is developed using a LLM base model, you may need to test at several different layers:
+**Report data**
-- The LLM base model with its [safety system](./content-filter.md) in place to identify any gaps that may need to be addressed in the context of your application system. (Testing is usually through an API endpoint.)
-- Your application system. (Testing is usually through a UI.)
-- Both the LLM base model and your application system before and after mitigations are in place.
+Share a short report on a regular interval with key stakeholders that:
-#### How to test
+1. Lists the top identified issues.
-Consider conducting iterative red teaming in at least two phases:
+2. Provides a link to the raw data.
-1. Open-ended red teaming, where red teamers are encouraged to discover a variety of harms. This can help you develop a taxonomy of harms to guide further testing. Note that developing a taxonomy of undesired LLM outputs for your application system is crucial to being able to measure the success of specific mitigation efforts.
-2. Guided red teaming, where red teamers are assigned to focus on specific harms listed in the taxonomy while staying alert for any new harms that may emerge. Red teamers can also be instructed to focus testing on specific features of a system for surfacing potential harms.
+3. Previews the testing plan for the upcoming rounds.
-Be sure to:
+4. Acknowledges red teamers.
-- Provide your red teamers with clear instructions for what harms or system features they will be testing.
-- Give your red teamers a place for recording their findings. For example, this could be a simple spreadsheet specifying the types of data that red teamers should provide, including basics such as:
- - The type of harm that was surfaced.
- - The input prompt that triggered the output.
- - An excerpt from the problematic output.
- - Comments about why the red teamer considered the output problematic.
-- Maximize the effort of responsible AI red teamers who have expertise for testing specific types of harms or undesired outputs. For example, have security subject matter experts focus on jailbreaks, metaprompt extraction, and content related to aiding cyberattacks.
+5. Provides any other relevant information.
-### Reporting red teaming findings
+**Differentiate between identification and measurement**
-You will want to summarize and report red teaming top findings at regular intervals to key stakeholders, including teams involved in the measurement and mitigation of LLM failures so that the findings can inform critical decision making and prioritizations.
+In the report, be sure to clarify that the role of RAI red teaming is to expose and raise understanding of risk surface and is not a replacement for systematic measurement and rigorous mitigation work. It is important that people do not interpret specific examples as a metric for the pervasiveness of that harm.
-## Next steps
+Additionally, if the report contains problematic content and examples, consider including a content warning.
-[Learn about other mitigation strategies like prompt engineering](./prompt-engineering.md)
+The guidance in this document is not intended to be, and should not be construed as providing, legal advice. The jurisdiction in which you're operating may have various regulatory or legal requirements that apply to your AI system. Be aware that not all of these recommendations are appropriate for every scenario and, conversely, these recommendations may be insufficient for some scenarios.
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
You can modify the following additional settings in the **Data parameters** section:
|Parameter name | Description |
|---|---|
-|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 3. This is the `topNDocuments` parameter in the API. |
+|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 5. This is the `topNDocuments` parameter in the API. |
| **Strictness** | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more of the less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. The default value is 3. |

## Virtual network support & private endpoint support
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md
Previously updated : 11/02/2023 Last updated : 11/06/2023 recommendations: false
curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM
-d '{"input": "Sample Document goes here"}'
```
-# [python](#tab/python)
+# [OpenAI Python 0.28.1](#tab/python)
+
```python
import openai
embeddings = response['data'][0]['embedding']
print(embeddings)
```
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key = os.getenv("AZURE_OPENAI_KEY"),
+ api_version = "2023-05-15",
+    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+response = client.embeddings.create(
+ input = "Your text string goes here",
+ model= "text-embedding-ada-002"
+)
+
+print(response.model_dump_json(indent=2))
+```
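+
+The `model_dump_json` call above prints the full typed response. With the 1.x client, the response is a typed object rather than a dictionary, so you can read the embedding vector itself with `response.data[0].embedding`.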

# [C#](#tab/csharp)

```csharp
using Azure;
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
Previously updated : 10/05/2023 Last updated : 11/06/2023 recommendations: false
The following parameters can be used inside of the `parameters` field inside of
| `indexName` | string | Required | null | The search index to be used. |
| `fieldsMapping` | dictionary | Optional | null | Index data column mapping. |
| `inScope` | boolean | Optional | true | If set, this value will limit responses specific to the grounding data content. |
-| `topNDocuments` | number | Optional | 3 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. |
+| `topNDocuments` | number | Optional | 5 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. |
| `queryType` | string | Optional | simple | Indicates which query option will be used for Azure Cognitive Search. Available types: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, `vectorSemanticHybrid`. |
| `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only required when `queryType` is set to `semantic` or `vectorSemanticHybrid`. |
| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. There's a 100 token limit, which counts towards the overall token limit.|
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
ai-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/embedded-speech.md
Follow these steps to install the Speech SDK for Java using Apache Maven:
<dependency>
    <groupId>com.microsoft.cognitiveservices.speech</groupId>
    <artifactId>client-sdk-embedded</artifactId>
- <version>1.32.1</version>
+ <version>1.33.0</version>
</dependency>
</dependencies>
</project>
Be sure to use the `@aar` suffix when the dependency is specified in `build.gradle`.
```
dependencies {
- implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.32.1@aar'
+ implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.33.0@aar'
}
```
::: zone-end
dependencies {
## Models and voices
-For embedded speech, you'll need to download the speech recognition models for [speech to text](speech-to-text.md) and voices for [text to speech](text-to-speech.md). Instructions will be provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process.
+For embedded speech, you need to download the speech recognition models for [speech to text](speech-to-text.md) and voices for [text to speech](text-to-speech.md). Instructions are provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process.
The following [speech to text](speech-to-text.md) models are available: de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nl-NL, pt-BR, ru-RU, sv-SE, tr-TR, zh-CN, zh-HK, and zh-TW.
-All text to speech locales [here](language-support.md?tabs=tts) (except fa-IR, Persian (Iran)) are available out of box with either 1 selected female and/or 1 selected male voices. We welcome your input to help us gauge demand for additional languages and voices.
+All text to speech locales [here](language-support.md?tabs=tts) (except fa-IR, Persian (Iran)) are available out of the box with one selected female and/or one selected male voice. We welcome your input to help us gauge demand for more languages and voices.
## Embedded speech configuration
Hybrid speech with the `HybridSpeechConfig` object uses the cloud speech service
With hybrid speech configuration for [speech to text](speech-to-text.md) (recognition models), embedded speech is used when connection to the cloud service fails after repeated attempts. Recognition may continue using the cloud service again if the connection is later resumed.
-With hybrid speech configuration for [text to speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the result is selected based on which one gives a faster response. The best result is evaluated on each synthesis request.
+With hybrid speech configuration for [text to speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the final result is selected based on response speed. The best result is evaluated again on each new synthesis request.
## Cloud speech
For cloud speech, you use the `SpeechConfig` object, as shown in the [speech to
## Embedded voices capabilities
-For embedded voices, it is essential to note that certain SSML tags may not be currently supported due to differences in the model structure. For detailed information regarding the unsupported SSML tags, please refer to the table below.
+For embedded voices, note that certain SSML tags aren't currently supported due to differences in the model structure. For details about the unsupported SSML tags, refer to the following table.
| Level 1 | Level 2 | Sub values | Support in embedded NTTS |
|--|--|-|--|
aks Ai Toolchain Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ai-toolchain-operator.md
+
+ Title: Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (Preview)
+description: Learn how to enable the AI toolchain operator add-on on Azure Kubernetes Service (AKS) to simplify OSS AI model management and deployment.
++ Last updated : 11/01/2023++
+# Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (Preview)
+
+The AI toolchain operator (KAITO) is a managed add-on for AKS that simplifies the experience of running OSS AI models on your AKS clusters. The AI toolchain operator automatically provisions the necessary GPU nodes and sets up the associated inference server as an endpoint for your AI models. Using this add-on reduces your onboarding time and enables you to focus on AI model usage and development rather than infrastructure setup.
+
+This article shows you how to enable the AI toolchain operator add-on and deploy an AI model on AKS.
++
+## Before you begin
+
+* This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for AKS](./concepts-clusters-workloads.md).
+* If you aren't familiar with Microsoft Entra Workload Identity, see the [Workload Identity overview](../active-directory/workload-identities/workload-identities-overview.md).
+* For ***all hosted model inference files*** and recommended infrastructure setup, see the [KAITO GitHub repository](https://github.com/Azure/kaito).
+
+## Prerequisites
+
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+ * If you have multiple Azure subscriptions, make sure you select the correct subscription in which the resources will be created and charged using the [`az account set`][az-account-set] command.
+
+ > [!NOTE]
+ > The subscription you use must have GPU VM quota.
+
+* Azure CLI version 2.47.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* Helm v3 installed. For more information, see [Installing Helm](https://helm.sh/docs/intro/install/).
+* The Kubernetes command-line client, kubectl, installed and configured. For more information, see [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
+
+## Enable the Azure CLI preview extension
+
+* Enable the Azure CLI preview extension using the [`az extension add`][az-extension-add] command.
+
+ ```azurecli-interactive
+ az extension add --name aks-preview
+ ```
+
+### Export environment variables
+
+* To simplify the configuration steps in this article, you can define environment variables using the following commands. Make sure to replace the placeholder values with your own.
+
+ ```azurecli-interactive
+ export AZURE_SUBSCRIPTION_ID="mySubscriptionID"
+ export AZURE_RESOURCE_GROUP="myResourceGroup"
+ export CLUSTER_NAME="myClusterName"
+ ```
+
+## Enable the AI toolchain operator add-on on an AKS cluster
+
+1. Create an Azure resource group using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+    az group create --name "${AZURE_RESOURCE_GROUP}" --location eastus
+ ```
+
+2. Create an AKS cluster with the AI toolchain operator add-on enabled using the [`az aks create`][az-aks-create] command with the `--enable-ai-toolchain-operator`, `--enable-workload-identity`, and `--enable-oidc-issuer` flags.
+
+ ```azurecli-interactive
+    az aks create --resource-group "${AZURE_RESOURCE_GROUP}" --name "${CLUSTER_NAME}" --generate-ssh-keys --enable-managed-identity --enable-workload-identity --enable-oidc-issuer --enable-ai-toolchain-operator
+ ```
+
+ > [!NOTE]
+    > AKS creates a managed identity once you enable the AI toolchain operator add-on. The managed identity is used to access the AI toolchain operator workspace custom resource definition (CRD), which is used to create and manage AI toolchain operator workspaces.
+ >
+ > AI toolchain operator enablement requires the enablement of workload identity and OIDC issuer.
+
+## Connect to your cluster
+
+1. Configure `kubectl` to connect to your cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+    az aks get-credentials --resource-group "${AZURE_RESOURCE_GROUP}" --name "${CLUSTER_NAME}"
+ ```
+
+2. Verify the connection to your cluster using the `kubectl get` command.
+
+ ```azurecli-interactive
+ kubectl get nodes
+ ```
+
+3. Export environment variables for the principal ID identity and client ID identity using the following commands:
+
+ ```azurecli-interactive
+    export MC_RESOURCE_GROUP=$(az aks show --resource-group "${AZURE_RESOURCE_GROUP}" --name "${CLUSTER_NAME}" --query nodeResourceGroup -o tsv)
+    export PRINCIPAL_ID=$(az identity show --name "ai-toolchain-operator-${CLUSTER_NAME}" --resource-group "${MC_RESOURCE_GROUP}" --query 'principalId' -o tsv)
+    export CLIENT_ID=$(az identity show --name gpuIdentity --resource-group "${AZURE_RESOURCE_GROUP}" --subscription "${AZURE_SUBSCRIPTION_ID}" --query 'clientId' -o tsv)
+ ```
+
+## Create a role assignment for the principal ID identity
+
+1. Create a new role assignment for the service principal using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```azurecli-interactive
+    az role assignment create --role "Contributor" --assignee "${PRINCIPAL_ID}" --scope "/subscriptions/${AZURE_SUBSCRIPTION_ID}/resourcegroups/${AZURE_RESOURCE_GROUP}/providers/Microsoft.ContainerService/managedClusters/${CLUSTER_NAME}"
+ ```
+
+2. Get the AKS OIDC Issuer URL and export it as an environment variable using the following command:
+
+ ```azurecli-interactive
+ export AKS_OIDC_ISSUER=$(az aks show --resource-group "${AZURE_RESOURCE_GROUP}" --name "${CLUSTER_NAME}" --subscription "${AZURE_SUBSCRIPTION_ID}" --query "oidcIssuerProfile.issuerUrl" -o tsv)
+ ```
+
+## Establish a federated identity credential
+
+* Create the federated identity credential between the managed identity, AKS OIDC issuer, and subject using the [`az identity federated-credential create`][az-identity-federated-credential-create] command. Replace `FEDERATED_IDENTITY` with a name of your choice for the credential.
+
+ ```azurecli-interactive
+    az identity federated-credential create --name "${FEDERATED_IDENTITY}" --identity-name "ai-toolchain-operator-${CLUSTER_NAME}" --resource-group "${AZURE_RESOURCE_GROUP}" --issuer "${AKS_OIDC_ISSUER}" --subject system:serviceaccount:"kube-system":"gpu-provisioner" --audience api://AzureADTokenExchange --subscription "${AZURE_SUBSCRIPTION_ID}"
+ ```
+
+## Deploy a default hosted AI model
+
+1. Deploy the Falcon 7B model YAML file from the GitHub repository using the `kubectl apply` command.
+
+ ```azurecli-interactive
+ kubectl apply -f https://raw.githubusercontent.com/Azure/kaito/main/examples/kaito_workspace_falcon_7b.yaml
+ ```
+
+2. Track the live resource changes in your workspace using the `kubectl get` command.
+
+ ```azurecli-interactive
+ kubectl get workspace workspace-falcon-7b -w
+ ```
+
+3. Check your service and get the service IP address using the `kubectl get svc` command.
+
+ ```azurecli-interactive
+ export SERVICE_IP=$(kubectl get svc workspace-falcon-7b -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+ ```
+
+4. Run the Falcon 7B model with a sample input of your choice using the following `curl` command:
+
+ ```azurecli-interactive
+    curl -X POST "http://${SERVICE_IP}:80/chat" -H "accept: application/json" -H "Content-Type: application/json" -d '{"prompt":"YOUR_PROMPT_HERE"}'
+ ```
+
+## Clean up resources
+
+If you no longer need these resources, you can delete them to avoid incurring extra Azure charges.
+
+* Delete the resource group and its associated resources using the [`az group delete`][az-group-delete] command.
+
+ ```azurecli-interactive
+    az group delete --name "${AZURE_RESOURCE_GROUP}" --yes --no-wait
+ ```
+
+## Next steps
+
+For more inference model options, see the [KAITO GitHub repository](https://github.com/Azure/kaito).
+
+<!-- LINKS -->
+[az-group-create]: /cli/azure/group#az_group_create
+[az-group-delete]: /cli/azure/group#az_group_delete
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az_identity_federated_credential_create
+[az-account-set]: /cli/azure/account#az_account_set
+[az-extension-add]: /cli/azure/extension#az_extension_add
aks Best Practices Performance Scale Large https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-performance-scale-large.md
+
+ Title: Performance and scaling best practices for large workloads in Azure Kubernetes Service (AKS)
+
+description: Learn the best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS).
+ Last updated : 11/03/2023++
+# Best practices for performance and scaling for large workloads in Azure Kubernetes Service (AKS)
+
+> [!NOTE]
+> This article focuses on general best practices for **large workloads**. For best practices specific to **small to medium workloads**, see [Performance and scaling best practices for small to medium workloads in Azure Kubernetes Service (AKS)](./best-practices-performance-scale.md).
+
+As you deploy and maintain clusters in AKS, you can use the following best practices to help you optimize performance and scaling.
+
+Keep in mind that *large* is a relative term. Kubernetes has a multi-dimensional scale envelope, and the scale envelope for your workload depends on the resources you use. For example, a cluster with 100 nodes and thousands of pods or CRDs might be considered large. A 1,000 node cluster with 1,000 pods and various other resources might be considered small from the control plane perspective. The best signal for scale of a Kubernetes control plane is API server HTTP request success rate and latency, as that's a proxy for the amount of load on the control plane.
+
+In this article, you learn about:
+
+> [!div class="checklist"]
+>
+> * AKS and Kubernetes control plane scalability.
+> * Kubernetes Client best practices, including backoff, watches, and pagination.
+> * Azure API and platform throttling limits.
+> * Feature limitations.
+> * Networking and node pool scaling best practices.
+
+## AKS and Kubernetes control plane scalability
+
+In AKS, a *cluster* consists of a set of nodes (physical or virtual machines (VMs)) that run Kubernetes agents and are managed by the Kubernetes control plane hosted by AKS. While AKS optimizes the Kubernetes control plane and its components for scalability and performance, it's still bound by the upstream project limits.
+
+Kubernetes has a multi-dimensional scale envelope with each resource type representing a dimension. Not all resources are alike. For example, *watches* are commonly set on secrets, which result in list calls to the kube-apiserver that add cost and a disproportionately higher load on the control plane compared to resources without watches.
+
+The control plane manages all the resource scaling in the cluster, so the more you scale the cluster within a given dimension, the less you can scale within other dimensions. For example, running hundreds of thousands of pods in an AKS cluster impacts how much pod churn rate (pod mutations per second) the control plane can support.
+
+The size of the envelope is proportional to the size of the Kubernetes control plane. AKS supports two control plane tiers as part of the Base SKU: the Free tier and the Standard tier. For more information, see [Free and Standard pricing tiers for AKS cluster management][free-standard-tier].
+
+> [!IMPORTANT]
+> We highly recommend using the Standard tier for production or at-scale workloads. AKS automatically scales up the Kubernetes control plane to support the following scale limits:
+>
+> * Up to 5,000 nodes per AKS cluster
+> * 200,000 pods per AKS cluster (with Azure CNI Overlay)
+
+In most cases, crossing the scale limit threshold results in degraded performance, but doesn't cause the cluster to immediately fail over. To manage load on the Kubernetes control plane, consider scaling in batches of up to 10-20% of the current scale. For example, for a 5,000 node cluster, scale in increments of 500-1,000 nodes. While AKS does autoscale your control plane, it doesn't happen instantaneously.
+
+You can leverage API Priority and Fairness (APF) to throttle specific clients and request types to protect the control plane during high churn and load.
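+
+As a sketch, a `FlowSchema` can route a specific client's expensive list and watch requests to the built-in `workload-low` priority level. The service account name and namespace below are hypothetical placeholders, and the API group version varies by Kubernetes release, so check the API Priority and Fairness documentation for your version:
+
+```yml
+apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
+kind: FlowSchema
+metadata:
+  name: throttle-noisy-agent
+spec:
+  priorityLevelConfiguration:
+    name: workload-low            # built-in low-priority level
+  matchingPrecedence: 1000        # lower values are matched first
+  distinguisherMethod:
+    type: ByUser
+  rules:
+  - subjects:
+    - kind: ServiceAccount
+      serviceAccount:
+        name: monitoring-agent    # hypothetical client to throttle
+        namespace: monitoring     # hypothetical namespace
+    resourceRules:
+    - verbs: ["list", "watch"]
+      apiGroups: ["*"]
+      resources: ["*"]
+      namespaces: ["*"]
+```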
+
+## Kubernetes clients
+
+Kubernetes clients are the applications clients, such as operators or monitoring agents, deployed in the Kubernetes cluster that need to communicate with the kube-api server to perform read or mutate operations. It's important to optimize the behavior of these clients to minimize the load they add to the kube-api server and Kubernetes control plane.
+
+AKS doesn't expose control plane and API server metrics via Prometheus or through platform metrics. However, you can analyze API server traffic and client behavior through Kube Audit logs. For more information, see [Troubleshoot the Kubernetes control plane](/troubleshoot/azure/azure-kubernetes/troubleshoot-apiserver-etcd).
+
+LIST requests can be expensive. When working with lists that might have more than a few thousand small objects or more than a few hundred large objects, you should consider the following guidelines:
+
+* **Consider the number of objects (CRs) you expect to eventually exist** when defining a new resource type (CRD).
+* **The load on etcd and API server primarily relies on the number of objects that exist, not the number of objects that are returned**. Even if you use a field selector to filter the list and retrieve only a small number of results, these guidelines still apply. The only exception is retrieval of a single object by `metadata.name`.
+* **Avoid repeated LIST calls if possible** if your code needs to maintain an updated list of objects in memory. Instead, consider using the Informer classes provided in most Kubernetes libraries. Informers automatically combine LIST and WATCH functionalities to efficiently maintain an in-memory collection.
+* **Consider whether you need strong consistency** if Informers don't meet your needs. Do you need to see the most recent data, up to the exact moment in time you issued the query? If not, set `ResourceVersion=0`. This causes the API server cache to serve your request instead of etcd.
+* **If you can't use Informers or the API server cache, read large lists in chunks**.
+* **Avoid listing more often than needed**. If you can't use Informers, consider how often your application lists the resources. After you read the last object in a large list, don't immediately re-query the same list; wait a while instead.
+* **Consider the number of running instances of your client application**. There's a big difference between having a single controller listing objects vs. having pods on each node doing the same thing. If you plan to have multiple instances of your client application periodically listing large numbers of objects, your solution won't scale to large clusters.
+
+## Azure API and Platform throttling
+
+The load on a cloud application can vary over time based on factors such as the number of active users or the types of actions that users perform. If the processing requirements of the system exceed the capacity of the available resources, the system can become overloaded and suffer from poor performance and failures.
+
+To handle varying load sizes in a cloud application, you can allow the application to use resources up to a specified limit and then throttle them when the limit is reached. On Azure, throttling happens at two levels. Azure Resource Manager (ARM) throttles requests for the subscription and tenant. If the request is under the throttling limits for the subscription and tenant, ARM routes the request to the resource provider. The resource provider then applies throttling limits tailored to its operations. For more information, see [ARM throttling requests](../azure-resource-manager/management/request-limits-and-throttling.md).
+
+### Manage throttling in AKS
+
+Azure API limits are usually defined at a subscription-region combination level. For example, all clients within a subscription in a given region share API limits for a given Azure API, such as Virtual Machine Scale Sets PUT APIs. Every AKS cluster has several AKS-owned clients, such as cloud provider or cluster autoscaler, or customer-owned clients, such as Datadog or self-hosted Prometheus, that call Azure APIs. When running multiple AKS clusters in a subscription within a given region, all the AKS-owned and customer-owned clients within the clusters share a common set of API limits. Therefore, the number of clusters you can deploy in a subscription region is a function of the number of clients deployed, their call patterns, and the overall scale and elasticity of the clusters.
+
+Keeping the above considerations in mind, customers are typically able to deploy between 20 and 40 small-to-medium-scale clusters per subscription-region. You can maximize your subscription scale using the following best practices:
+
+Always upgrade your Kubernetes clusters to the latest version. Newer versions contain many improvements that address performance and throttling issues. If you're using an upgraded version of Kubernetes and still see throttling due to the actual load or the number of clients in the subscription, you can try the following options:
+
+* **Analyze errors using AKS Diagnose and Solve Problems**: You can use [AKS Diagnose and Solve Problems](./aks-diagnostics.md) to analyze errors, identify the root cause, and get resolution recommendations.
+ * **Increase the Cluster Autoscaler scan interval**: If the diagnostic reports show that [Cluster Autoscaler throttling has been detected](/troubleshoot/azure/azure-kubernetes/429-too-many-requests-errors#analyze-and-identify-errors-by-using-aks-diagnose-and-solve-problems), you can [increase the scan interval](./cluster-autoscaler.md#change-the-cluster-autoscaler-settings) to reduce the number of calls to Virtual Machine Scale Sets from the Cluster Autoscaler.
+ * **Reconfigure third-party applications to make fewer calls**: If you filter by *user agents* in the ***View request rate and throttle details*** diagnostic and see that [a third-party application, such as a monitoring application, makes a large number of GET requests](/troubleshoot/azure/azure-kubernetes/429-too-many-requests-errors#analyze-and-identify-errors-by-using-aks-diagnose-and-solve-problems), you can change the settings of these applications to reduce the frequency of the GET calls. Make sure the application clients use exponential backoff when calling Azure APIs.
+* **Split your clusters into different subscriptions or regions**: If you have a large number of clusters and node pools that use Virtual Machine Scale Sets, you can split them into different subscriptions or regions within the same subscription. Most Azure API limits are shared at the subscription-region level, so you can move or scale your clusters to different subscriptions or regions to get unblocked on Azure API throttling. This option is especially helpful if you expect your clusters to have high activity. There are no generic guidelines for these limits. If you want specific guidance, you can create a support ticket.
+
+## Feature limitations
+
+As you scale your AKS clusters to larger scale points, keep the following feature limitations in mind:
+
+* AKS supports up to a 1,000 node scale in an AKS cluster by default. While AKS doesn't prevent you from scaling further, doing so might result in degraded performance. If you want to scale beyond 1,000 nodes, you can request a limit increase. For more information, see [Best practices for creating and running AKS clusters at scale][run-aks-at-scale].
+* [Azure Network Policy Manager (Azure npm)][azure-npm] only supports up to 250 nodes.
+* You can't use the Stop and Start feature with clusters that have more than 100 nodes. For more information, see [Stop and start an AKS cluster](./start-stop-cluster.md).
+
+## Networking
+
+As you scale your AKS clusters to larger scale points, keep the following networking best practices in mind:
+
+* Use Managed NAT for cluster egress with at least two public IPs on the NAT gateway. For more information, see [Create a managed NAT gateway for your AKS cluster][managed-nat-gateway].
+* Use Azure CNI Overlay to scale up to 200,000 pods and 5,000 nodes per cluster. For more information, see [Configure Azure CNI Overlay networking in AKS][azure-cni-overlay].
+* If your application needs direct pod-to-pod communication across clusters, use Azure CNI with dynamic IP allocation and scale up to 50,000 application pods per cluster with one routable IP per pod. For more information, see [Configure Azure CNI networking for dynamic IP allocation in AKS][azure-cni-dynamic-ip].
+* When using internal Kubernetes services behind an internal load balancer, we recommend creating an internal load balancer or service below a 750 node scale for optimal scaling performance and load balancer elasticity.
+* Azure npm only supports up to 250 nodes. If you want to enforce network policies for larger clusters, consider using [Azure CNI powered by Cilium](./azure-cni-powered-by-cilium.md), which combines the robust control plane of Azure CNI with the Cilium data plane to provide high performance networking and security.
+
+## Node pool scaling
+
+As you scale your AKS clusters to larger scale points, keep the following node pool scaling best practices in mind:
+
+* For system node pools, use the *Standard_D16ds_v5* SKU or an equivalent core/memory VM SKU with ephemeral OS disks to provide sufficient compute resources for kube-system pods.
+* Since AKS has a limit of 1,000 nodes per node pool, we recommend creating at least five user node pools to scale up to 5,000 nodes.
+* When running at-scale AKS clusters, use the cluster autoscaler whenever possible to ensure dynamic scaling of node pools based on the demand for compute resources. For more information, see [Automatically scale an AKS cluster to meet application demands][cluster-autoscaler].
+* If you're scaling beyond 1,000 nodes and are *not* using the cluster autoscaler, we recommend scaling in batches of 500-700 nodes at a time. The scaling operations should have a two-minute to five-minute wait time between scale up operations to prevent Azure API throttling. For more information, see [API management: Caching and throttling policies][throttling-policies].
+
+> [!NOTE]
+> You can't use [Azure Network Policy Manager (Azure NPM)][azure-npm] with clusters that have more than 500 nodes.
+
+<!-- LINKS - Internal -->
+[run-aks-at-scale]: ./operator-best-practices-run-at-scale.md
+[managed-nat-gateway]: ./nat-gateway.md
+[azure-cni-dynamic-ip]: ./configure-azure-cni-dynamic-ip-allocation.md
+[azure-cni-overlay]: ./azure-cni-overlay.md
+[free-standard-tier]: ./free-standard-pricing-tiers.md
+[cluster-autoscaler]: cluster-autoscaler.md
+[azure-npm]: ../virtual-network/kubernetes-network-policies.md
+
+<!-- LINKS - External -->
+[throttling-policies]: https://azure.microsoft.com/blog/api-management-advanced-caching-and-throttling-policies/
aks Best Practices Performance Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-performance-scale.md
+
+ Title: Performance and scaling best practices for small to medium workloads in Azure Kubernetes Service (AKS)
+
+description: Learn the best practices for performance and scaling for small to medium workloads in Azure Kubernetes Service (AKS).
+ Last updated : 11/03/2023++
+# Best practices for performance and scaling for small to medium workloads in Azure Kubernetes Service (AKS)
+
+> [!NOTE]
+> This article focuses on general best practices for **small to medium workloads**. For best practices specific to **large workloads**, see [Performance and scaling best practices for large workloads in Azure Kubernetes Service (AKS)](./best-practices-performance-scale-large.md).
+
+As you deploy and maintain clusters in AKS, you can use the following best practices to help you optimize performance and scaling.
+
+In this article, you learn about:
+
+> [!div class="checklist"]
+>
+> * Tradeoffs and recommendations for autoscaling your workloads.
+> * Managing node scaling and efficiency based on your workload demands.
+> * Networking considerations for ingress and egress traffic.
+> * Monitoring and troubleshooting control plane and node performance.
+> * Capacity planning, surge scenarios, and cluster upgrades.
+> * Storage and networking considerations for data plane performance.
+
+## Application autoscaling vs. infrastructure autoscaling
+
+### Application autoscaling
+
+Application autoscaling is useful when dealing with cost optimization or infrastructure limitations. A well-configured autoscaler maintains high availability for your application while also minimizing costs. You only pay for the resources required to maintain availability, regardless of the demand.
+
+For example, if an existing node has space but not enough IPs in the subnet, it might be able to skip the creation of a new node and instead immediately start running the application on a new pod.
+
+#### Horizontal Pod autoscaling
+
+Implementing [horizontal pod autoscaling](./concepts-scale.md#horizontal-pod-autoscaler) is useful for applications with a steady and predictable resource demand. The Horizontal Pod Autoscaler (HPA) dynamically scales the number of pod replicas, which effectively distributes the load across multiple pods and nodes. This scaling mechanism is typically most beneficial for applications that can be decomposed into smaller, independent components capable of running in parallel.
+
+The HPA provides resource utilization metrics by default. You can also integrate custom metrics or leverage tools like the [Kubernetes Event-Driven Autoscaler (KEDA) (Preview)](./keda-about.md). These extensions allow the HPA to make scaling decisions based on multiple perspectives and criteria, providing a more holistic view of your application's performance. This is especially helpful for applications with varying complex scaling requirements.
+
+> [!NOTE]
+> If maintaining high availability for your application is a top priority, we recommend leaving a slightly higher buffer for the minimum pod number for your HPA to account for scaling time.
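+
+The following is a minimal HPA manifest sketch that scales on average CPU utilization. The Deployment name and thresholds are hypothetical placeholders to adapt to your workload; the minimum replica count reflects the buffer guidance in the note above:
+
+```yml
+apiVersion: autoscaling/v2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: my-app-hpa
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: my-app                 # hypothetical Deployment to scale
+  minReplicas: 3                 # slightly higher minimum as a buffer for scaling time
+  maxReplicas: 10
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 70   # add replicas when average CPU exceeds 70%
+```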
+
+#### Vertical Pod autoscaling
+
+Implementing [vertical pod autoscaling](./vertical-pod-autoscaler.md) is useful for applications with fluctuating and unpredictable resource demands. The Vertical Pod Autoscaler (VPA) allows you to fine-tune resource requests, including CPU and memory, for individual pods, enabling precise control over resource allocation. This granularity minimizes resource waste and enhances the overall efficiency of cluster utilization. The VPA also streamlines application management by automating resource allocation, freeing up resources for critical tasks.
+
+> [!WARNING]
+> You shouldn't use the VPA in conjunction with the HPA on the same CPU or memory metrics. This combination can lead to conflicts, as both autoscalers attempt to respond to changes in demand using the same metrics. However, you can use the VPA for CPU or memory in conjunction with the HPA for custom metrics to prevent overlap and ensure that each autoscaler focuses on distinct aspects of workload scaling.
+
+> [!NOTE]
+> The VPA works based on historical data. We recommend waiting at least *24 hours* after deploying the VPA before applying any changes to give it time to collect recommendation data.
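+
+A minimal VPA manifest sketch follows. It assumes the VPA feature is enabled on the cluster, and the Deployment name and resource bounds are hypothetical placeholders:
+
+```yml
+apiVersion: autoscaling.k8s.io/v1
+kind: VerticalPodAutoscaler
+metadata:
+  name: my-app-vpa
+spec:
+  targetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: my-app              # hypothetical Deployment to tune
+  updatePolicy:
+    updateMode: "Auto"        # apply recommendations by evicting and recreating pods
+  resourcePolicy:
+    containerPolicies:
+    - containerName: "*"
+      minAllowed:
+        cpu: 100m
+        memory: 128Mi
+      maxAllowed:
+        cpu: 2
+        memory: 4Gi
+```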
+
+### Infrastructure autoscaling
+
+#### Cluster autoscaling
+
+Implementing cluster autoscaling is useful if your existing nodes lack sufficient capacity, as it helps with scaling up and provisioning new nodes.
+
+When considering cluster autoscaling, the decision of when to remove a node involves a tradeoff between optimizing resource utilization and ensuring resource availability. Eliminating underutilized nodes enhances cluster utilization but might result in new workloads having to wait for resources to be provisioned before they can be deployed. It's important to find a balance between these two factors that aligns with your cluster and workload requirements and [configure the cluster autoscaler profile settings accordingly](./cluster-autoscaler.md#change-the-cluster-autoscaler-settings).
+
+The Cluster Autoscaler profile settings apply universally to all autoscaler-enabled node pools in your cluster. This means that any scaling actions occurring in one autoscaler-enabled node pool might impact the autoscaling behavior in another node pool. It's important to apply consistent and synchronized profile settings across all relevant node pools to ensure that the autoscaler behaves as expected.
+
+##### Overprovisioning
+
+Overprovisioning is a strategy that helps mitigate the risk of application pressure by ensuring there's an excess of readily available resources. This approach is especially useful for applications that experience highly variable loads and cluster scaling patterns that show frequent scale ups and scale downs.
+
+To determine the optimal amount of overprovisioning, you can use the following formula:
+
+```txt
+(1 - buffer) / (1 + traffic)
+```
+
+For example, let's say you want to avoid hitting 100% CPU utilization in your cluster. You might opt for a 30% buffer to maintain a safety margin. If you anticipate an average traffic growth rate of 40%, you might consider overprovisioning by 50%, as calculated by the formula:
+
+```txt
+(1 - 0.30) / (1 + 0.40) = 0.50
+```
+
+An effective overprovisioning method involves the use of *pause pods*. Pause pods are low-priority deployments that can be easily replaced by high-priority deployments. You create low priority pods that serve the sole purpose of reserving buffer space. When a high-priority pod requires space, the pause pods are removed and rescheduled on another node or a new node to accommodate the high priority pod.
+
+The following YAML shows an example pause pod manifest:
+
+```yml
+apiVersion: scheduling.k8s.io/v1
+kind: PriorityClass
+metadata:
+ name: overprovisioning
+value: -1
+globalDefault: false
+description: "Priority class used by overprovisioning."
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: overprovisioning
+ namespace: kube-system
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ run: overprovisioning
+ template:
+ metadata:
+ labels:
+ run: overprovisioning
+ spec:
+ priorityClassName: overprovisioning
+ containers:
+ - name: reserve-resources
+        image: your-custom-pause-image   # replace with your pause container image
+ resources:
+ requests:
+ cpu: 1
+ memory: 4Gi
+```
+
+## Node scaling and efficiency
+
+> **Best practice guidance**:
+>
+> Carefully monitor resource utilization and scheduling policies to ensure nodes are being used efficiently.
+
+Node scaling allows you to dynamically adjust the number of nodes in your cluster based on workload demands. It's important to understand that adding more nodes to a cluster isn't always the best solution for improving performance. To ensure optimal performance, you should carefully monitor resource utilization and scheduling policies to ensure nodes are being used efficiently.
+
+### Node images
+
+> **Best practice guidance**:
+>
+> Use the latest node image version to ensure that you have the latest security patches and bug fixes.
+
+Using the latest node image version provides the best performance experience. AKS ships performance improvements within the weekly image releases. The latest daemonset images are cached on the latest VHD image, which provide lower latency benefits for node provisioning and bootstrapping. Falling behind on updates might have a negative impact on performance, so it's important to avoid large gaps between versions.
+
+#### Azure Linux
+
+The [Azure Linux Container Host on AKS](../azure-linux/intro-azure-linux.md) uses a native AKS image and provides a single place for Linux development. Every package is built from source and validated, ensuring your services run on proven components.
+
+Azure Linux is lightweight, only including the necessary set of packages to run container workloads. It provides a reduced attack surface and eliminates patching and maintenance of unnecessary packages. At its base layer, it has a Microsoft-hardened kernel tuned for Azure. This image is ideal for performance-sensitive workloads and platform engineers or operators that manage fleets of AKS clusters.
+
+#### Ubuntu 2204
+
+The [Ubuntu 2204 image](https://github.com/Azure/AKS/blob/master/CHANGELOG.md) is the default node image for AKS. It's a lightweight and efficient operating system optimized for running containerized workloads. This means that it can help reduce resource usage and improve overall performance. The image includes the latest security patches and updates, which help ensure that your workloads are protected from vulnerabilities.
+
+The Ubuntu 2204 image is fully supported by Microsoft, Canonical, and the Ubuntu community and can help you achieve better performance and security for your containerized workloads.
+
+### Virtual machines (VMs)
+
+> **Best practice guidance**:
+>
+> When selecting a VM, ensure the size and performance of the OS disk and VM SKU don't have a large discrepancy. A discrepancy in size or performance can cause performance issues and resource contention.
+
+Application performance is closely tied to the VM SKUs you use in your workloads. Larger and more powerful VMs generally provide better performance. For *mission-critical or production workloads*, we recommend using VMs with at least an 8-core CPU. VMs with newer hardware generations, like v4 and v5, can also help improve performance. Keep in mind that create and scale latency might vary depending on the VM SKUs you use.
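+
+As a sketch, the following command adds a node pool that meets the 8-core guidance on a newer-generation SKU. The names are placeholders; validate regional availability and quota for the SKU first:
+
+```azurecli-interactive
+# Add a node pool on a newer-generation, 8-vCPU SKU for demanding
+# workloads.
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name apppool \
+    --node-count 3 \
+    --node-vm-size Standard_D8s_v5
+```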
+
+### Use dedicated system node pools
+
+For scaling performance and reliability, we recommend using a dedicated system node pool. With this configuration, the dedicated system node pool reserves space for critical system resources such as system OS daemons. Your application workload can then run in a user node pool to increase the availability of allocatable resources for your application. This configuration also helps mitigate the risk of resource competition between the system and application.
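+
+One possible setup, using placeholder names: add a pool in *System* mode and taint it so only critical system pods schedule there, keeping application pods on user node pools:
+
+```azurecli-interactive
+# Dedicated system node pool; the CriticalAddonsOnly taint keeps
+# application pods off these nodes.
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name systempool \
+    --node-count 3 \
+    --mode System \
+    --node-taints CriticalAddonsOnly=true:NoSchedule
+```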
+
+### Create operations
+
+Review the extensions and add-ons you enable during cluster provisioning. Extensions and add-ons add latency to the overall duration of create operations. If you don't need an extension or add-on, we recommend removing it to improve create latency.
+
+You can also use availability zones to provide a higher level of availability to protect against potential hardware failures or planned maintenance events. AKS clusters distribute resources across logical sections of underlying Azure infrastructure. Availability zones physically separate nodes from other nodes to help ensure that a single failure doesn't impact the availability of your application. Availability zones are only available in certain regions. For more information, see [Availability zones in Azure](../reliability/availability-zones-overview.md).
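+
+For example, you can spread a cluster's default node pool across three zones at creation time. This sketch uses placeholder names; confirm the target region supports availability zones first:
+
+```azurecli-interactive
+# Create a cluster whose default node pool is spread across three
+# availability zones.
+az aks create \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --node-count 3 \
+    --zones 1 2 3
+```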
+
+## Kubernetes API server
+
+### LIST and WATCH operations
+
+Kubernetes uses the LIST and WATCH operations to interact with the Kubernetes API server and monitor information about cluster resources. These operations are fundamental to how Kubernetes performs resource management.
+
+**The LIST operation retrieves a list of resources that fit within certain criteria**, such as all pods in a specific namespace or all services in the cluster. This operation is useful when you want to get an overview of your cluster resources or you need to operate on multiple resources at once.
+
+The LIST operation can retrieve large amounts of data, especially in large clusters with multiple resources. Be mindful that making unbounded or frequent LIST calls puts a significant load on the API server and can slow down response times.
+
+**The WATCH operation performs real-time resource monitoring**. When you set up a WATCH on a resource, the API server sends you updates whenever there are changes to that resource. This is important for controllers, like the ReplicaSet controller, which rely on WATCH to maintain the desired state of resources.
+
+Be mindful that watching too many mutable resources or making too many concurrent WATCH requests can overwhelm the API server and cause excessive resource consumption.
+
+To avoid potential issues and ensure the stability of the Kubernetes control plane, you can use the following strategies:
+
+**Resource quotas**
+
+Implement resource quotas to limit the number of resources that can be listed or watched by a particular user or namespace to prevent excessive calls.
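+
+Object-count quotas don't throttle API calls directly, but by capping how many objects can exist in a namespace, they bound the size of LIST responses against it. A minimal sketch; the namespace and limits are illustrative:
+
+```bash
+# Cap object counts in a namespace so LIST calls against it can't
+# return arbitrarily large result sets.
+kubectl create quota object-counts \
+    --namespace my-namespace \
+    --hard=pods=100,services=20,secrets=50
+```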
+
+**API Priority and Fairness**
+
+Kubernetes introduced the concept of API Priority and Fairness (APF) to prioritize and manage API requests. You can use APF in Kubernetes to protect the cluster's API server and reduce the number of `HTTP 429 Too Many Requests` responses seen by client applications.
+
+| Custom resource | Key features |
+| --- | --- |
+| PriorityLevelConfigurations | • Define different priority levels for API requests.<br/> • Specify a unique name and assign an integer value representing the priority level. Higher priority levels have lower integer values, indicating they're more critical.<br/> • You can create multiple PriorityLevelConfigurations to categorize requests into different priority levels based on their importance.<br/> • Allow you to specify whether requests at a particular priority level are subject to rate limits. |
+| FlowSchemas | • Define how API requests should be routed to different priority levels based on request attributes.<br/> • Specify rules that match requests based on criteria like API groups, versions, and resources.<br/> • When a request matches a given rule, it's directed to the priority level specified in the associated PriorityLevelConfiguration.<br/> • You can set the order of evaluation when multiple FlowSchemas match a request to ensure certain rules take precedence. |
+
+Configuring APF with PriorityLevelConfigurations and FlowSchemas enables the prioritization of critical API requests over less important requests. This ensures that essential operations don't starve or experience delays because of lower priority requests.
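+
+APF ships with default configuration objects that you can inspect before defining your own. For example:
+
+```bash
+# List the built-in priority levels and the flow schemas that route
+# requests to them.
+kubectl get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io
+kubectl get flowschemas.flowcontrol.apiserver.k8s.io
+```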
+
+**Optimize labeling and selectors**
+
+When using LIST operations, optimize label selectors to narrow down the scope of the resources you want to query to reduce the amount of data returned and the load on the API server.
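+
+For example, instead of listing every pod in the cluster, you can scope a LIST to a namespace and a label selector and page the results. The namespace and label below are illustrative:
+
+```bash
+# Scoped, paginated LIST: a namespace, a label selector, and chunked
+# responses keep the API server's per-request work small.
+kubectl get pods \
+    --namespace my-namespace \
+    --selector app=my-app \
+    --chunk-size=500
+```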
+
+### CREATE and UPDATE operations
+
+In Kubernetes, CREATE and UPDATE operations refer to actions that manage and modify cluster resources.
+
+**The CREATE operation creates new resources in the Kubernetes cluster**, such as pods, services, deployments, configmaps, and secrets. During a CREATE operation, a client, such as `kubectl` or a controller, sends a request to the Kubernetes API server to create the new resource. The API server validates the request, ensures compliance with any admission controller policies, and then creates the resource in the cluster's desired state.
+
+**The UPDATE operation modifies existing resources in the Kubernetes cluster**, including changes to resource specifications, like the number of replicas, container images, environment variables, or labels. During an UPDATE operation, a client sends a request to the API server to update an existing resource. The API server validates the request, applies the changes to the resource definition, and updates the cluster resource.
+
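+For instance, a CREATE followed by an UPDATE might look like the following sketch; the deployment name and images are illustrative:
+
+```bash
+# CREATE: the API server validates the request, runs admission
+# control, and persists the new Deployment.
+kubectl create deployment my-app --image=nginx:1.25 --replicas=3
+
+# UPDATE: change the existing resource definition; here, the container
+# image, which triggers a rolling update.
+kubectl set image deployment/my-app nginx=nginx:1.26
+```
+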
+CREATE and UPDATE operations can impact the performance of the Kubernetes API server under the following conditions:
+
+* **High concurrency**: When multiple users or applications make concurrent CREATE or UPDATE requests, it can lead to a surge in API requests arriving at the server at the same time. This can stress the API server's processing capacity and cause performance issues.
+* **Complex resource definitions**: Resource definitions that are overly complex or involve multiple nested objects can increase the time it takes for the API server to validate and process CREATE and UPDATE requests, which can lead to performance degradation.
+* **Resource validation and admission control**: Kubernetes enforces various admission control policies and validation checks on incoming CREATE and UPDATE requests. Large resource definitions, like ones with extensive annotations or configurations, might require more processing time.
+* **Custom controllers**: Custom controllers that watch for changes in resources, like Deployments or StatefulSet controllers, can generate a significant number of updates when scaling or rolling out changes. These updates can strain the API server's resources.
+
+For more information, see [Troubleshoot API server and etcd problems in AKS](/troubleshoot/azure/azure-kubernetes/troubleshoot-apiserver-etcd).
+
+## Data plane performance
+
+The Kubernetes data plane is responsible for managing network traffic between containers and services. Issues with the data plane can lead to slow response times, degraded performance, and application downtime. It's important to carefully monitor and optimize data plane configurations, such as network latency, resource allocation, container density, and network policies, to ensure your containerized applications run smoothly and efficiently.
+
+### Storage types
+
+AKS recommends and defaults to using ephemeral OS disks. Ephemeral OS disks are created on local VM storage and aren't saved to remote Azure storage like managed OS disks. They have faster reimaging and boot times, enabling faster cluster operations, and they provide lower read/write latency on the OS disk of AKS agent nodes. Ephemeral OS disks work well for stateless workloads, where applications tolerate individual VM failures but are sensitive to VM deployment time or to the time it takes to reimage individual VMs. Only certain VM SKUs support ephemeral OS disks, so you need to ensure that your desired SKU generation and size is compatible. For more information, see [Ephemeral OS disks in Azure Kubernetes Service (AKS)](./cluster-configuration.md#use-ephemeral-os-on-new-clusters).
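+
+If your VM SKU supports it, you can request ephemeral OS disks explicitly when adding a node pool. A sketch with placeholder names; verify the SKU's cache or temp disk is large enough to host the OS disk:
+
+```azurecli-interactive
+# Add a node pool that places the OS disk on local VM storage.
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name ephpool \
+    --node-count 3 \
+    --node-vm-size Standard_DS3_v2 \
+    --node-osdisk-type Ephemeral
+```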
+
+If your workload is unable to use ephemeral OS disks, AKS defaults to using Premium SSD OS disks. If Premium SSD OS disks aren't compatible with your workload, AKS defaults to Standard SSD disks. Currently, the only other available OS disk type is Standard HDD. For more information, see [Storage options in Azure Kubernetes Service (AKS)](./concepts-storage.md).
+
+The following table provides a breakdown of suggested use cases for OS disks supported in AKS:
+
+| OS disk type | Key features | Suggested use cases |
+|---|---|---|
+| Ephemeral OS disks | • Faster reimaging and boot times.<br/> • Lower read/write latency on OS disk of AKS agent nodes.<br/> • High performance and availability. | • Demanding enterprise workloads, such as SQL Server, Oracle, Dynamics, Exchange Server, MySQL, Cassandra, MongoDB, SAP Business Suite, etc.<br/> • Stateless production workloads that require high availability and low latency. |
+| Premium SSD OS disks | • Consistent performance and low latency.<br/> • High availability. | • Demanding enterprise workloads, such as SQL Server, Oracle, Dynamics, Exchange Server, MySQL, Cassandra, MongoDB, SAP Business Suite, etc.<br/> • Input/output (IO) intensive enterprise workloads. |
+| Standard SSD OS disks | • Consistent performance.<br/> • Better availability and latency compared to Standard HDD disks. | • Web servers.<br/> • Low input/output operations per second (IOPS) application servers.<br/> • Lightly used enterprise applications.<br/> • Dev/test workloads. |
+| Standard HDD disks | • Low cost.<br/> • Exhibits variability in performance and latency. | • Backup storage.<br/> • Mass storage with infrequent access. |
+
+#### IOPS and throughput
+
+Input/output operations per second (IOPS) refers to the number of read and write operations that a disk can perform in a second. Throughput refers to the amount of data that can be transferred in a given time period.
+
+OS disks are responsible for storing the operating system and its associated files, and the VMs are responsible for running the applications. When selecting a VM, ensure the size and performance of the OS disk and VM SKU don't have a large discrepancy. A discrepancy in size or performance can cause performance issues and resource contention. For example, if the OS disk is significantly smaller than what the VM's workloads require, it can limit the space available for application data and cause the system to run out of disk space. If the OS disk has lower performance than the VM can drive, it can become a bottleneck and limit the overall performance of the system. Make sure the size and performance are balanced to ensure optimal performance in Kubernetes.
+
+You can use the following steps to monitor IOPS and bandwidth meters on OS disks in the Azure portal:
+
+1. Navigate to the [Azure portal](https://portal.azure.com/).
+2. Search for **Virtual machine scale sets** and select your virtual machine scale set.
+3. Under **Monitoring**, select **Metrics**.
+
+Ephemeral OS disks can provide dynamic IOPS and throughput for your application, whereas managed disks have capped IOPS and throughput. For more information, see [Ephemeral OS disks for Azure VMs](../virtual-machines/ephemeral-os-disks.md).
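+
+You can pull the same meters from the CLI. This sketch assumes the VM-level metric names *OS Disk IOPS Consumed Percentage* and *OS Disk Bandwidth Consumed Percentage*; set `$VMSS_ID` to the resource ID of the scale set backing your node pool:
+
+```azurecli-interactive
+# Query OS disk IOPS and bandwidth consumption for a node pool's
+# underlying scale set.
+az monitor metrics list \
+    --resource $VMSS_ID \
+    --metric "OS Disk IOPS Consumed Percentage" "OS Disk Bandwidth Consumed Percentage" \
+    --interval PT5M
+```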
+
+[Azure Premium SSD v2](../virtual-machines/disks-types.md#premium-ssd-v2) is designed for IO-intensive enterprise workloads that require sub-millisecond disk latencies and high IOPS and throughput at a low cost. It's suited to a broad range of workloads, such as SQL Server, Oracle, MariaDB, SAP, Cassandra, MongoDB, big data/analytics, gaming, and more. This disk type is the highest-performing option currently available for persistent volumes.
+
+### Pod scheduling
+
+The memory and CPU resources allocated to a VM have a direct impact on the performance of the pods running on the VM. When a pod is created, it's assigned a certain amount of memory and CPU resources, which are used to run the application. If the VM doesn't have enough memory or CPU resources available, it can cause the pods to slow down or even crash. If the VM has too much memory or CPU resources available, it can cause the pods to run inefficiently, wasting resources and increasing costs. We recommend monitoring the total pod requests across your workloads against the total allocatable resources for best scheduling predictability and performance. You can also set the maximum pods per node based on your capacity planning using `--max-pods`.
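+
+For example, `--max-pods` is set per node pool at creation time. A sketch with placeholder names:
+
+```azurecli-interactive
+# Cap pod density per node based on your capacity planning.
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name userpool \
+    --node-count 3 \
+    --max-pods 60
+```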
aks Create Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md
Title: Create node pools in Azure Kubernetes Service (AKS)
description: Learn how to create multiple node pools for a cluster in Azure Kubernetes Service (AKS). Previously updated : 07/18/2023 Last updated : 11/06/2023 # Create node pools for a cluster in Azure Kubernetes Service (AKS)
In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped t
To support applications that have different compute or storage demands, you can create *user node pools*. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and `konnectivity`. User node pools serve the primary purpose of hosting your application pods. For example, use more user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage. However, if you wish to have only one pool in your AKS cluster, you can schedule application pods on system node pools. > [!NOTE]
-> This feature enables more control over creating and managing multiple node pools and requires separate commands for create/update/delete operations. Previously, cluster operations through `az aks create` or `az aks update` used the managedCluster API and were the only options to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and requires use of the `az aks nodepool` command set to execute operations on an individual node pool.
+> This feature enables more control over creating and managing multiple node pools and requires separate commands for *create/update/delete* (CRUD) operations. Previously, cluster operations through [`az aks create`][az-aks-create] or [`az aks update`][az-aks-update] used the managedCluster API and were the only options to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and requires use of the [`az aks nodepool`][az-aks-nodepool] command set to execute operations on an individual node pool.
This article shows you how to create one or more node pools in an AKS cluster.
The following limitations apply when you create AKS clusters that support multip
* See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)](quotas-skus-regions.md). * You can delete system node pools if you have another system node pool to take its place in the AKS cluster. * System pools must contain at least one node, and user node pools may contain zero or more nodes.
-* The AKS cluster must use the Standard SKU load balancer to use multiple node pools. The feature isn't supported with Basic SKU load balancers.
+* The AKS cluster must use the Standard SKU load balancer to use multiple node pools. This feature isn't supported with Basic SKU load balancers.
* The AKS cluster must use Virtual Machine Scale Sets for the nodes. * The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. * For Linux node pools, the length must be between 1-11 characters.
A workload may require splitting cluster nodes into separate pools for logical i
* All subnets assigned to node pools must belong to the same virtual network. * System pods must have access to all nodes and pods in the cluster to provide critical functionality, such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
-* If you expand your VNET after creating the cluster, you must update your cluster before adding a subnet outside the original CIDR block. While AKS errors-out on the agent pool add, the `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command performs an update operation without making any changes, which can recover a cluster stuck in a failed state.
-* In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets.
+* If you expand your VNET after creating the cluster, you must update your cluster before adding a subnet outside the original CIDR block. While AKS errors-out on the agent pool add, the `aks-preview` Azure CLI extension (version 0.5.66 and higher) now supports running [`az aks update`][az-aks-update] command with only the required `-g <resourceGroup> -n <clusterName>` arguments. This command performs an update operation without making any changes, which can recover a cluster stuck in a failed state.
+* In clusters with Kubernetes version less than 1.23.3, kube-proxy SNATs traffic from new subnets, which can cause Azure Network Policy to drop the packets.
* Windows nodes SNAT traffic to the new subnets until the node pool is reimaged. * Internal load balancers default to one of the node pool subnets.
Beginning in Kubernetes version 1.20 and higher, you can specify `containerd` as
> When using `containerd` with Windows Server 2019 node pools: > > * Both the control plane and Windows Server 2019 node pools must use Kubernetes version 1.20 or greater.
-> * When you create or update a node pool to run Windows Server containers, the default value for `--node-vm-size` is *Standard_D2s_v3*, which was minimum recommended size for Windows Server 2019 node pools prior to Kubernetes 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the `--node-vm-size` parameter, please check the list of [restricted VM sizes][restricted-vm-sizes].
-> * We highly recommended using [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd` and tolerations or node selectors with your deployments to guarantee your workloads are scheduled correctly.
+> * When you create or update a node pool to run Windows Server containers, the default value for `--node-vm-size` is *Standard_D2s_v3*, which was the minimum recommended size for Windows Server 2019 node pools prior to Kubernetes version 1.20. The minimum recommended size for Windows Server 2019 node pools using `containerd` is *Standard_D4s_v3*. When setting the `--node-vm-size` parameter, check the list of [restricted VM sizes][restricted-vm-sizes].
+> * We recommend using [taints or labels][aks-taints] with your Windows Server 2019 node pools running `containerd` and tolerations or node selectors with your deployments to guarantee your workloads are scheduled correctly.
### Add a Windows Server node pool with `containerd`
In this article, you learned how to create multiple node pools in an AKS cluster
[arm-sku-vm3]: ../virtual-machines/epsv5-epdsv5-series.md [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-update]: /cli/azure/aks#az_aks_update
[az-aks-delete]: /cli/azure/aks#az_aks_delete
+[az-aks-nodepool]: /cli/azure/aks/nodepool
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add [az-aks-nodepool-list]: /cli/azure/aks/nodepool#az_aks_nodepool_list [az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az_aks_nodepool_upgrade
aks Egress Outboundtype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md
Previously updated : 06/06/2023 Last updated : 11/06/2023 #Customer intent: As a cluster operator, I want to define my own egress paths with user-defined routes. Since I define this up front I do not want AKS provided load balancer configurations.
Last updated 06/06/2023
You can customize egress for an AKS cluster to fit specific scenarios. By default, AKS will provision a standard SKU load balancer to be set up and used for egress. However, the default setup may not meet the requirements of all scenarios if public IPs are disallowed or additional hops are required for egress. This article covers the various types of outbound connectivity that are available in AKS clusters.
-how
+
> [!NOTE] > You can now update the `outboundType` after cluster creation. This feature is in preview. See [Updating `outboundType after cluster creation (preview)](#updating-outboundtype-after-cluster-creation-preview).
aks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/faq.md
Title: Frequently asked questions for Azure Kubernetes Service (AKS) description: Find answers to some of the common questions about Azure Kubernetes Service (AKS). Previously updated : 07/20/2022 Last updated : 11/06/2023
Moving or renaming your AKS cluster and its associated resources isn't supported
Most clusters are deleted upon user request. In some cases, especially cases where you bring your own Resource Group or perform cross-RG tasks, deletion can take more time or even fail. If you have an issue with deletes, double-check that you don't have locks on the RG, that any resources outside of the RG are disassociated from the RG, and so on. ## Why is my cluster create/update taking so long?+ If you have issues with create and update cluster operations, make sure you don't have any assigned policies or service constraints that may block your AKS cluster from managing resources like VMs, load balancers, tags, etc. ## Can I restore my cluster after deleting it?
-No, you're unable to restore your cluster after deleting it. When you delete your cluster, the associated resource group and all its resources are deleted. If you want to keep any of your resources, move them to another resource group before deleting your cluster. If you have the **Owner** or **User Access Administrator** built-in role, you can lock Azure resources to protect them from accidental deletions and modifications. For more information, see [Lock your resources to protect your infrastructure][lock-azure-resources].
+No, you cannot restore your cluster after deleting it. When you delete your cluster, the node resource group and all its resources are also deleted. An example of this node resource group name is *MC_myResourceGroup_myAKSCluster_eastus*.
+
+If you want to keep any of your resources, move them to another resource group before deleting your cluster. If you want to protect against accidental deletes, you can lock the AKS managed resource group hosting your cluster resources using [Node resource group lockdown][node-resource-group-lockdown].
## What is platform support, and what does it include?
The AKS Linux Extension is an Azure VM extension that installs and configures mo
- [Node-exporter](https://github.com/prometheus/node_exporter): Collects hardware telemetry from the virtual machine and makes it available using a metrics endpoint. Then, a monitoring tool, such as Prometheus, can scrape these metrics. - [Node-problem-detector](https://github.com/kubernetes/node-problem-detector): Aims to make various node problems visible to upstream layers in the cluster management stack. It's a systemd unit that runs on each node, detects node problems, and reports them to the cluster's API server using Events and NodeConditions.-- [Local-gadget](https://inspektor-gadget.io/docs/v0.18.1): Uses in-kernel eBPF helper programs to monitor events related to syscalls from userspace programs in a pod.
+- [Local-gadget](https://inspektor-gadget.io/docs/): Uses in-kernel eBPF helper programs to monitor events related to syscalls from userspace programs in a pod.
These tools help provide observability around many node health related problems, such as:
The extension **doesn't require additional outbound access** to any URLs, IP add
[az-regions]: ../availability-zones/az-region.md [pricing-tiers]: ./free-standard-pricing-tiers.md [aks-keyvault-provider]: ./csi-secrets-store-driver.md
+[node-resource-group-lockdown]: cluster-configuration.md#create-an-aks-cluster-with-node-resource-group-lockdown
<!-- LINKS - external --> [aks-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service [cordon-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ [admission-controllers]: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
-[lock-azure-resources]: ../azure-resource-manager/management/lock-resources.md
+
aks Gpu Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md
Last updated 04/10/2023
# Use GPUs for compute-intensive workloads on Azure Kubernetes Service (AKS)
-Graphical processing units (GPUs) are often used for compute-intensive workloads, such as graphics and visualization workloads. AKS supports GPU-enabled Linux node pools to run compute-intensive Kubernetes workloads. For more information on available GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6*. The NVv4 series (based on AMD GPUs) aren't supported with AKS.
+Graphics processing units (GPUs) are often used for compute-intensive workloads, such as graphics and visualization workloads. AKS supports GPU-enabled Linux node pools to run compute-intensive Kubernetes workloads. For more information on available GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-skus]. For AKS node pools, we recommend a minimum size of *Standard_NC6s_v3*. The NVv4 series (based on AMD GPUs) isn't supported with AKS.
This article helps you provision nodes with schedulable GPUs on new and existing AKS clusters.
Now that you updated your cluster to use the AKS GPU image, you can add a node p
--cluster-name myAKSCluster \ --name gpunp \ --node-count 1 \
- --node-vm-size Standard_NC6 \
+ --node-vm-size Standard_NC6s_v3 \
--node-taints sku=gpu:NoSchedule \ --aks-custom-headers UseGPUDedicatedVHD=true \ --enable-cluster-autoscaler \
Now that you updated your cluster to use the AKS GPU image, you can add a node p
The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
- * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6*.
+ * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*.
* `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool. * `--aks-custom-headers`: Specifies a specialized AKS GPU image, *UseGPUDedicatedVHD=true*. If your GPU sku requires generation 2 VMs, use *--aks-custom-headers UseGPUDedicatedVHD=true,usegen2vm=true* instead. * `--enable-cluster-autoscaler`: Enables the cluster autoscaler.
You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on eac
--cluster-name myAKSCluster \ --name gpunp \ --node-count 1 \
- --node-vm-size Standard_NC6 \
+ --node-vm-size Standard_NC6s_v3 \
--node-taints sku=gpu:NoSchedule \ --enable-cluster-autoscaler \ --min-count 1 \
You can deploy a DaemonSet for the NVIDIA device plugin, which runs a pod on eac
The previous example command adds a node pool named *gpunp* to *myAKSCluster* in *myResourceGroup* and uses parameters to configure the following node pool settings:
- * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6*.
+ * `--node-vm-size`: Sets the VM size for the node in the node pool to *Standard_NC6s_v3*.
* `--node-taints`: Specifies a *sku=gpu:NoSchedule* taint on the node pool. * `--enable-cluster-autoscaler`: Enables the cluster autoscaler. * `--min-count`: Configures the cluster autoscaler to maintain a minimum of one node in the node pool.
aks Manage Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-node-pools.md
For more information, see [capacity reservation groups][capacity-reservation-gro
You may need to create node pools with different VM sizes and capabilities. For example, you may create a node pool that contains nodes with large amounts of CPU or memory or a node pool that provides GPU support. In the next section, you [use taints and tolerations](#set-node-pool-taints) to tell the Kubernetes scheduler how to limit access to pods that can run on these nodes.
-In the following example, we create a GPU-based node pool that uses the *Standard_NC6* VM size. These VMs are powered by the NVIDIA Tesla K80 card. For information, see [Available sizes for Linux virtual machines in Azure][vm-sizes].
+In the following example, we create a GPU-based node pool that uses the *Standard_NC6s_v3* VM size. These VMs are powered by the NVIDIA Tesla V100 GPU. For information, see [Available sizes for Linux virtual machines in Azure][vm-sizes].
1. Create a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command. Specify the name *gpunodepool* and use the `--node-vm-size` parameter to specify the *Standard_NC6s_v3* size.
In the following example, we create a GPU-based node pool that uses the *Standar
--cluster-name myAKSCluster \ --name gpunodepool \ --node-count 1 \
- --node-vm-size Standard_NC6 \
+ --node-vm-size Standard_NC6s_v3 \
--no-wait ```
In the following example, we create a GPU-based node pool that uses the *Standar
... "provisioningState": "Creating", ...
- "vmSize": "Standard_NC6",
+ "vmSize": "Standard_NC6s_v3",
... }, {
aks Manage Ssh Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md
Last updated 11/01/2023
# Manage SSH for secure access to Azure Kubernetes Service (AKS) nodes
-This article describes how to update the SSH key on your AKS clusters or node pools.
+This article describes how to update the SSH key (preview) on your AKS clusters or node pools.
[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
This article describes how to update the SSH key on your AKS clusters or node po
* You need the Azure CLI version 2.46.0 or later installed and configured. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. * This feature supports Linux, Mariner, and CBLMariner node pools on existing clusters.
-## Update SSH public key on an existing AKS cluster
+## Update SSH public key (preview) on an existing AKS cluster
Use the [az aks update][az-aks-update] command to update the SSH public key on your cluster. This operation updates the key on all node pools. You can either specify the key or a key file using the `--ssh-key-value` argument.
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
aks Tutorial Kubernetes Deploy Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-deploy-application.md
Title: Kubernetes on Azure tutorial - Deploy an application to Azure Kubernetes Service (AKS) description: In this Azure Kubernetes Service (AKS) tutorial, you deploy a multi-container application to your cluster using images stored in Azure Container Registry. Previously updated : 10/23/2023 Last updated : 11/02/2023 #Customer intent: As a developer, I want to learn how to deploy apps to an Azure Kubernetes Service (AKS) cluster so that I can deploy and run my own applications.
In this tutorial, you deployed a sample Azure application to a Kubernetes cluste
In the next tutorial, you learn how to use PaaS services for stateful workloads in Kubernetes. > [!div class="nextstepaction"]
-> Use PaaS services for stateful workloads in AKS
+> [Use PaaS services for stateful workloads in AKS][aks-tutorial-paas]
<!-- LINKS - external --> [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
aks Tutorial Kubernetes Prepare Acr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-prepare-acr.md
Title: Kubernetes on Azure tutorial - Create an Azure Container Registry and build images description: In this Azure Kubernetes Service (AKS) tutorial, you create an Azure Container Registry instance and upload sample application container images. Previously updated : 10/23/2023 Last updated : 11/02/2023 #Customer intent: As a developer, I want to learn how to create and use a container registry so that I can deploy my own applications to Azure Kubernetes Service.
This tutorial requires Azure PowerShell version 5.9.0 or later. Run `Get-Install
Before creating an ACR instance, you need a resource group. An Azure resource group is a logical container into which you deploy and manage Azure resources.
+> [!IMPORTANT]
+> This tutorial uses *myResourceGroup* as a placeholder for the resource group name. If you want to use a different name, replace *myResourceGroup* with your own resource group name.
+ ### [Azure CLI](#tab/azure-cli) 1. Create a resource group using the [`az group create`][az-group-create] command.
Before creating an ACR instance, you need a resource group. An Azure resource gr
az group create --name myResourceGroup --location eastus ```
-2. Create an ACR instance using the [`az acr create`][az-acr-create] command and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses `<acrName>` as a placeholder for the container registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
+2. Create an ACR instance using the [`az acr create`][az-acr-create] command and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses an environment variable, `$ACRNAME`, as a placeholder for the container registry name. You can set this environment variable to your unique ACR name to use in future commands. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
```azurecli-interactive
- az acr create --resource-group myResourceGroup --name <acrName> --sku Basic
+ az acr create --resource-group myResourceGroup --name $ACRNAME --sku Basic
``` ### [Azure PowerShell](#tab/azure-powershell)
Before creating an ACR instance, you need a resource group. An Azure resource gr
New-AzResourceGroup -Name myResourceGroup -Location eastus ```
-2. Create an ACR instance using the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses `<acrName>` as a placeholder for the container registry name. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
+2. Create an ACR instance using the [`New-AzContainerRegistry`][new-azcontainerregistry] cmdlet and provide your own unique registry name. The registry name must be unique within Azure and contain 5-50 alphanumeric characters. The rest of this tutorial uses an environment variable, `$ACRNAME`, as a placeholder for the container registry name. You can set this environment variable to your unique ACR name to use in future commands. The *Basic* SKU is a cost-optimized entry point for development purposes that provides a balance of storage and throughput.
```azurepowershell-interactive
- New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name <acrName> -Location eastus -Sku Basic
+ New-AzContainerRegistry -ResourceGroupName myResourceGroup -Name $ACRNAME -Location eastus -Sku Basic
```
Before creating an ACR instance, you need a resource group. An Azure resource gr
> In the following example, we don't build the `rabbitmq` image. This image is available from the Docker Hub public repository and doesn't need to be built or pushed to your ACR instance. ```azurecli-interactive
- az acr build --registry <acrName> --image aks-store-demo/product-service:latest ./src/product-service/
- az acr build --registry <acrName> --image aks-store-demo/order-service:latest ./src/order-service/
- az acr build --registry <acrName> --image aks-store-demo/store-front:latest ./src/store-front/
+ az acr build --registry $ACRNAME --image aks-store-demo/product-service:latest ./src/product-service/
+ az acr build --registry $ACRNAME --image aks-store-demo/order-service:latest ./src/order-service/
+ az acr build --registry $ACRNAME --image aks-store-demo/store-front:latest ./src/store-front/
``` ## List images in registry
Before creating an ACR instance, you need a resource group. An Azure resource gr
* View the images in your ACR instance using the [`az acr repository list`][az-acr-repository-list] command. ```azurecli-interactive
- az acr repository list --name <acrName> --output table
+ az acr repository list --name $ACRNAME --output table
``` The following example output lists the available images in your registry:
Before creating an ACR instance, you need a resource group. An Azure resource gr
* View the images in your ACR instance using the [`Get-AzContainerRegistryRepository`][get-azcontainerregistryrepository] cmdlet. ```azurepowershell-interactive
- Get-AzContainerRegistryRepository -RegistryName <acrName>
+ Get-AzContainerRegistryRepository -RegistryName $ACRNAME
``` The following example output lists the available images in your registry:
aks Tutorial Kubernetes Upgrade Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/tutorial-kubernetes-upgrade-cluster.md
Title: Kubernetes on Azure tutorial - Upgrade an Azure Kubernetes Service (AKS) cluster description: In this Azure Kubernetes Service (AKS) tutorial, you learn how to upgrade an existing AKS cluster to the latest available Kubernetes version. Previously updated : 10/23/2023 Last updated : 11/02/2023 #Customer intent: As a developer or IT pro, I want to learn how to upgrade an Azure Kubernetes Service (AKS) cluster so that I can use the latest version of Kubernetes and features.
For more information on AKS, see the [AKS overview][aks-intro]. For guidance on
[aks-auto-upgrade]: ./auto-upgrade-cluster.md [auto-upgrade-node-image]: ./auto-upgrade-node-image.md [node-image-upgrade]: ./node-image-upgrade.md
+[az-aks-update]: /cli/azure/aks#az_aks_update
aks Use Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-labels.md
The following labels are AKS reserved labels. *Virtual node usage* specifies if
| kubernetes.azure.com/agentpool | \<agent pool name> | nodepool1 | Same | | kubernetes.io/arch | amd64 | runtime.GOARCH | N/A | | kubernetes.io/os | \<OS Type> | Linux/Windows | Same |
-| node.kubernetes.io/instance-type | \<VM size> | Standard_NC6 | Virtual |
+| node.kubernetes.io/instance-type | \<VM size> | Standard_NC6s_v3 | Virtual |
| topology.kubernetes.io/region | \<Azure region> | westus2 | Same | | topology.kubernetes.io/zone | \<Azure zone> | 0 | Same | | kubernetes.azure.com/cluster | \<MC_RgName> | MC_aks_myAKSCluster_westus2 | Same |
aks Virtual Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes.md
Title: Use virtual nodes
description: Overview of how using virtual node with Azure Kubernetes Services (AKS) Previously updated : 01/18/2023 Last updated : 11/06/2023
Virtual nodes enable network communication between pods that run in Azure Contai
Pods running in Azure Container Instances (ACI) need access to the AKS API server endpoint in order to configure networking.
-## Known limitations
+## Limitations
-Virtual nodes functionality is heavily dependent on ACI's feature set. In addition to the [quotas and limits for Azure Container Instances](../container-instances/container-instances-quotas.md), the following scenarios aren't supported with virtual nodes:
+Virtual nodes functionality is heavily dependent on ACI's feature set. In addition to the [quotas and limits for Azure Container Instances](../container-instances/container-instances-quotas.md), the following scenarios are either not supported with virtual nodes or are deployment considerations:
* Using service principal to pull ACR images. [Workaround](https://github.com/virtual-kubelet/azure-aci/blob/master/README.md#private-registry) is to use [Kubernetes secrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) * [Virtual Network Limitations](../container-instances/container-instances-vnet.md) including VNet peering, Kubernetes network policies, and outbound traffic to the internet with network security groups.
Virtual nodes functionality is heavily dependent on ACI's feature set. In additi
* [Host aliases](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/) * [Arguments](../container-instances/container-instances-exec.md#restrictions) for exec in ACI * [DaemonSets](concepts-clusters-workloads.md#statefulsets-and-daemonsets) won't deploy pods to the virtual nodes
-* Virtual nodes support scheduling Linux pods. You can manually install the open source [Virtual Kubelet ACI](https://github.com/virtual-kubelet/azure-aci) provider to schedule Windows Server containers to ACI.
+* To schedule Windows Server containers to ACI, you need to manually install the open source [Virtual Kubelet ACI](https://github.com/virtual-kubelet/azure-aci) provider.
* Virtual nodes require AKS clusters with Azure CNI networking.
-* Using api server authorized ip ranges for AKS.
+* Using API server authorized IP ranges for AKS.
* Volume mounting Azure Files share support [General-purpose V2](../storage/common/storage-account-overview.md#types-of-storage-accounts) and [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). However, virtual nodes currently don't support [Persistent Volumes](concepts-storage.md#persistent-volumes) and [Persistent Volume Claims](concepts-storage.md#persistent-volume-claims). Follow the instructions for mounting [a volume with Azure Files share as an inline volume](azure-csi-files-storage-provision.md#mount-file-share-as-an-inline-volume). * Using IPv6 isn't supported. * Virtual nodes don't support the [Container hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) feature.
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
application-gateway Configuration Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-frontend-ip.md
Only one public IP address and one private IP address are supported. You choose t
A frontend IP address is associated to a *listener*, which checks for incoming requests on the frontend IP. >[!NOTE]
-> You can create private and public listeners with the same port number (Preview feature). However, be aware of any Network Security Group (NSG) associated with the application gateway subnet. Depending on your NSG's configuration, you may need an allow-inbound rule with **Destination IP addresses** as your application gateway's Public and Private frontend IPs. When using the same port, your application gateway changes the "Destination" of the inbound flow to the frontend IPs of your gateway.
+> You can create private and public listeners with the same port number. However, be aware of any Network Security Group (NSG) associated with the application gateway subnet. Depending on your NSG's configuration, you may need an allow-inbound rule with **Destination IP addresses** as your application gateway's Public and Private frontend IPs. When using the same port, your application gateway changes the "Destination" of the inbound flow to the frontend IPs of your gateway.
> > **Inbound Rule**: > - Source: (as per your requirement)
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
To use NSG with your application gateway, you will need to create or retain some
||||||| |`<as per need>`|Any|`<Subnet IP Prefix>`|`<listener ports>`|TCP|Allow|
-Upon configuring **active public and private listeners** (with Rules) **with the same port number** (in Preview), your application gateway changes the "Destination" of all inbound flows to the frontend IPs of your gateway. This is true even for the listeners not sharing any port. You must thus include your gateway's frontend Public and Private IP addresses in the Destination of the inbound rule when using the same port configuration.
+Upon configuring **active public and private listeners** (with Rules) **with the same port number**, your application gateway changes the "Destination" of all inbound flows to the frontend IPs of your gateway. This is true even for the listeners not sharing any port. You must thus include your gateway's frontend Public and Private IP addresses in the Destination of the inbound rule when using the same port configuration.
| Source | Source ports | Destination | Destination ports | Protocol | Access |
application-gateway Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-portal.md
You'll create the application gateway using the tabs on the **Create application
- **Name**: Enter *myVNet* for the name of the virtual network. - **Subnet name** (Application Gateway subnet): The **Subnets** grid will show a subnet named *default*. Change the name of this subnet to *myAGSubnet*.<br>The application gateway subnet can contain only application gateways. No other resources are allowed. The default IP address range provided is 10.0.0.0/24.
+
+ - **Subnet name** (backend server subnet): In the second row of the **Subnets** grid, enter *myBackendSubnet* in the **Subnet name** column.
+
+ - **Address range** (backend server subnet): In the second row of the **Subnets** grid, enter an address range that doesn't overlap with the address range of *myAGSubnet*. For example, if the address range of *myAGSubnet* is 10.0.0.0/24, enter *10.0.1.0/24* for the address range of *myBackendSubnet*.
Select **OK** to close the **Create virtual network** window and save the virtual network settings.
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
automation Guidance Migration Log Analytics Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md
This article provides guidance to move from Change Tracking and Inventory using
### [Using Azure portal - for single VM](#tab/ct-single-vm)
-1. Sign in to the [Azure portal](https://portal.azure.com) and select your virtual machine
+1. Sign in to the [Azure portal](https://portal.azure.com) and select your virtual machine
1. Under **Operations** , select **Change tracking**. 1. Select **Configure with AMA** and in the **Configure with Azure monitor agent**, provide the **Log analytics workspace** and select **Migrate** to initiate the deployment.
This article provides guidance to move from Change Tracking and Inventory using
1. On the **Onboarding to Change Tracking with Azure Monitoring** page, you can view your automation account and list of machines that are currently on Log Analytics and ready to be onboarded to Azure Monitoring Agent of Change Tracking and inventory. 1. On the **Assess virtual machines** tab, select the machines and then select **Next**. 1. On **Assign workspace** tab, assign a new [Log Analytics workspace resource ID](#obtain-log-analytics-workspace-resource-id) to which the settings of AMA based solution should be stored and select **Next**.
-
+ :::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/assign-workspace-inline.png" alt-text="Screenshot of assigning new Log Analytics resource ID." lightbox="media/guidance-migration-log-analytics-monitoring-agent/assign-workspace-expanded.png":::
-
+ 1. On **Review** tab, you can review the machines that are being onboarded and the new workspace. 1. Select **Migrate** to initiate the deployment.
This article provides guidance to move from Change Tracking and Inventory using
:::image type="content" source="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-inline.png" alt-text="Screenshot that shows switching between log analytics and Azure Monitoring Agent after a successful migration." lightbox="media/guidance-migration-log-analytics-monitoring-agent/switch-versions-expanded.png"::: - ### [Using PowerShell script](#tab/ps-policy) #### Prerequisites -- Ensure to have the Windows PowerShell console installed. Follow the steps to [install Windows PowerShell](https://learn.microsoft.com/powershell/scripting/windows-powershell/install/installing-windows-powershell?view=powershell-7.3).-- We recommend that you use PowerShell version 7.1.3 or higher.
+- Ensure you have the Windows PowerShell console installed. We recommend that you use PowerShell version 7.2 or higher. Follow the steps to [Install PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows).
- Obtain Read access for the specified workspace resources. - Ensure that you have `Az.Accounts` and `Az.OperationalInsights` modules installed. The `Az.PowerShell` module is used to pull workspace agent configuration information. - Ensure you have the Azure credentials to run `Connect-AzAccount` and `Select-AzContext`, which set the context for the script to run.
Follow these steps to migrate using scripts.
#### Onboard at scale Use the [script](https://github.com/mayguptMSFT/AzureMonitorCommunity/blob/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/DCR%20Config%20Generator/CTDcrGenerator/CTWorkSpaceSettingstoDCR.ps1) to migrate Change tracking workspace settings to data collection rule.
-
+ #### Parameters **Parameter** | **Required** | **Description** |
- | | |
+ | | |
`InputWorkspaceResourceId`| Yes | Resource ID of the workspace associated to Change Tracking & Inventory with Log Analytics. | `OutputWorkspaceResourceId`| Yes | Resource ID of the workspace associated to Change Tracking & Inventory with Azure Monitoring Agent. | `OutputDCRName`| Yes | Custom name of the new DCR created. | `OutputDCRLocation`| Yes | Azure location of the output workspace ID. |
-`OutputDCRTemplateFolderPath`| Yes | Folder path where DCR templates are created. |
+`OutputDCRTemplateFolderPath`| Yes | Folder path where DCR templates are created. |
To obtain the Log Analytics Workspace resource ID, follow these steps:
**For single VM and Automation Account** 1. 100 VMs per Automation Account can be migrated in one instance.
-1. Any VM with > 100 file/registry settings for migration via portal isn't supported now.
+1. Migration via the portal isn't currently supported for any VM with more than 100 file/registry settings.
1. Arc VM migration isn't supported with portal, we recommend that you use PowerShell script migration. 1. For File Content changes-based settings, you have to migrate manually from LA version to AMA version of Change Tracking & Inventory. Follow the guidance listed in [Track file contents](manage-change-tracking-monitoring-agent.md#configure-file-content-changes). 1. Alerts that you configure using the Log Analytics Workspace must be [manually configured](configure-alerts.md).
To obtain the Log Analytics Workspace resource ID, follow these steps:
### [Using PowerShell script](#tab/limit-policy) 1. For File Content changes-based settings, you have to migrate manually from LA version to AMA version of Change Tracking & Inventory. Follow the guidance listed in [Track file contents](manage-change-tracking.md#track-file-contents).
-1. Any VM with > 100 file/registry settings for migration via portal isn't supported now.
+1. Migration via the portal isn't currently supported for any VM with more than 100 file/registry settings.
1. Alerts that you configure using the Log Analytics Workspace must be [manually configured](configure-alerts.md).
After you enable management of your virtual machines using Change Tracking and I
The disable method incorporates the following: - [Removes change tracking with LA agent for selected few VMs within Log Analytics Workspace](remove-vms-from-change-tracking.md). - [Removes change tracking with LA agent from the entire Log Analytics Workspace](remove-feature.md).
-
+ ## Next steps - To enable from the Azure portal, see [Enable Change Tracking and Inventory from the Azure portal](../change-tracking/enable-vms-monitoring-agent.md).
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 #
azure-arc Deliver Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md
If any problems occur during the enablement process, see [Troubleshoot delivery
There are some scenarios in which you may be eligible to receive Extended Security Updates patches at no additional cost. Two of these scenarios supported by Azure Arc include the following:

-- Dev/Test (Visual Studio)
-- Disaster Recovery (Entitled benefit DR instances from Software Assurance or subscription only)
+- [Dev/Test (Visual Studio)](/azure/devtest/offer/overview-what-is-devtest-offer-visual-studio)
+- Disaster Recovery ([Entitled benefit DR instances from Software Assurance](https://www.microsoft.com/en-us/licensing/licensing-programs/software-assurance-by-benefits) or subscription only)
To qualify for these scenarios, you must have:
azure-arc Quick Enable Hybrid Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/quick-enable-hybrid-vm.md
Title: Quickstart - Connect hybrid machine with Azure Arc-enabled servers description: In this quickstart, you connect and register a hybrid machine with Azure Arc-enabled servers. Previously updated : 05/04/2023 Last updated : 11/03/2023
Use the Azure portal to create a script that automates the agent download and in
1. [Go to the Azure portal page for adding servers with Azure Arc](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/HybridVmAddBlade). Select the **Add a single server** tile, then select **Generate script**.
- :::image type="content" source="media/quick-enable-hybrid-vm/add-single-server.png" alt-text="Screenshot of Azure portal's add server page." lightbox="media/quick-enable-hybrid-vm/add-single-server-expanded.png":::
+ :::image type="content" source="media/quick-enable-hybrid-vm/add-single-server.png" alt-text="Screenshot of Azure portal's add server page." lightbox="media/quick-enable-hybrid-vm/add-single-server.png":::
> [!NOTE]
> In the portal, you can also reach this page by searching for and selecting "Servers - Azure Arc" and then selecting **+Add**.
-1. Review the information on the **Prerequisites** page, then select **Next**.
-
-1. On the **Resource details** page, provide the following:
+1. On the **Basics** page, provide the following:
    1. Select the subscription and resource group where you want the machine to be managed within Azure.
    1. For **Region**, choose the Azure region in which the server's metadata will be stored.
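For orientation, the script generated by this flow boils down to installing the Connected Machine agent and running `azcmagent connect`. A rough sketch of the Windows flow, with placeholder IDs (the real generated script adds error handling and download verification):

```powershell
# Sketch only: approximates what the portal-generated onboarding script does on Windows.
# Download and install the Connected Machine agent.
Invoke-WebRequest -Uri 'https://aka.ms/AzureConnectedMachineAgent' -OutFile "$env:TEMP\AzureConnectedMachineAgent.msi"
Start-Process msiexec.exe -Wait -ArgumentList '/i', "$env:TEMP\AzureConnectedMachineAgent.msi", '/qn'

# Connect the machine interactively; all IDs and names below are placeholders.
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --resource-group 'myResourceGroup' `
    --tenant-id '00000000-0000-0000-0000-000000000000' `
    --subscription-id '11111111-1111-1111-1111-111111111111' `
    --location 'eastus'
```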
azure-arc Tutorial Assign Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/tutorial-assign-policy-portal.md
Follow the steps below to create a policy assignment and assign the policy defin
For a partial list of available built-in policies, see [Azure Policy samples](../../../governance/policy/samples/index.md).

1. Search through the policy definitions list to find the _\[Preview]: Log Analytics extension should be installed on your Windows Azure Arc machines_
- definition (if you have enabled the Azure Connected Machine agent on a Windows-based machine). For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ policy definition. Click on that policy and click **Select**.
+ definition (if you have enabled the Azure Connected Machine agent on a Windows-based machine). For a Linux-based machine, find the corresponding _\[Preview]: Log Analytics extension should be installed on your Linux Azure Arc machines_ policy definition. Click on that policy and click **Add**.
1. The **Assignment name** is automatically populated with the policy name you selected, but you can change it. For this example, leave the policy name as is, and don't change any of the remaining options on the page.
azure-arc Tutorial Enable Vm Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/learn/tutorial-enable-vm-insights.md
Sign in to the [Azure portal](https://portal.azure.com).
## Enable VM insights
-1. Launch the Azure Arc service in the Azure portal by clicking **All services**, then searching for and selecting **Servers - Azure Arc**.
+1. Launch the Azure Arc service in the Azure portal by clicking **All services**, then searching for and selecting **Machines - Azure Arc**.
:::image type="content" source="./media/quick-enable-hybrid-vm/search-machines.png" alt-text="Screenshot of Azure portal showing search for Servers, Azure Arc." border="false":::
-1. On the **Azure Arc - Servers** page, select the connected machine you created in the [quickstart](quick-enable-hybrid-vm.md) article.
+1. On the **Azure Arc - Machines** page, select the connected machine you created in the [quickstart](quick-enable-hybrid-vm.md) article.
1. From the left-pane under the **Monitoring** section, select **Insights** and then **Enable**.
azure-arc Manage Automatic Vm Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-automatic-vm-extension-upgrade.md
Title: Automatic extension upgrade for Azure Arc-enabled servers description: Learn how to enable automatic extension upgrades for your Azure Arc-enabled servers. Previously updated : 10/14/2022 Last updated : 11/03/2023 # Automatic extension upgrade for Azure Arc-enabled servers
If you continue to have trouble upgrading an extension, you can [disable automat
### Timing of automatic extension upgrades
-When a new version of a VM extension is published, it becomes available for installation and manual upgrade on Arc-enabled servers. For servers that already have the extension installed and automatic extension upgrade enabled, it may take 5 - 8 weeks for every server with that extension to get the automatic upgrade. Upgrades are issued in batches across Azure regions and subscriptions, so you may see the extension get upgraded on some of your servers before others. If you need to upgrade an extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-powershell.md#upgrade-extension) or [Azure CLI](manage-vm-extensions-cli.md#upgrade-extensions).
+When a new version of a VM extension is published, it becomes available for installation and manual upgrade on Arc-enabled servers. For servers that already have the extension installed and automatic extension upgrade enabled, it might take 5 - 8 weeks for every server with that extension to get the automatic upgrade. Upgrades are issued in batches across Azure regions and subscriptions, so you might see the extension get upgraded on some of your servers before others. If you need to upgrade an extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-powershell.md#upgrade-extension) or [Azure CLI](manage-vm-extensions-cli.md#upgrade-extensions).
Extension versions fixing critical security vulnerabilities are rolled out much faster. These automatic upgrades happen using a specialized rollout process that can take 1 - 3 weeks to automatically upgrade every server with that extension. Azure handles identifying which extension version should be rolled out quickly to ensure all servers are protected. If you need to upgrade the extension immediately, follow the guidance to manually upgrade extensions using the [Azure portal](manage-vm-extensions-portal.md#upgrade-extensions), [Azure PowerShell](manage-vm-extensions-powershell.md#upgrade-extension) or [Azure CLI](manage-vm-extensions-cli.md#upgrade-extensions).
Automatic extension upgrade is enabled by default when you install extensions on
Use the following steps to configure automatic extension upgrades using the Azure portal:
-1. Navigate to the [Azure portal](https://portal.azure.com) and type **Servers - Azure Arc** into the search bar.
- :::image type="content" source="media/manage-automatic-vm-extension-upgrade/portal-search-arc-server.png" alt-text="Screenshot of Azure portal showing user typing in Servers - Azure Arc." border="true":::
-1. Select **Servers - Azure Arc** under the Services category, then select the individual server you wish to manage.
-1. In the navigation pane, select the **Extensions** tab to see a list of all extensions installed on the server.
+1. Go to the [Azure portal](https://portal.azure.com) and navigate to **Machines - Azure Arc**.
+1. Select the applicable server.
+1. In the left pane, select the **Extensions** tab to see a list of all extensions installed on the server.
   :::image type="content" source="media/manage-automatic-vm-extension-upgrade/portal-navigation-extensions.png" alt-text="Screenshot of an Azure Arc-enabled server in the Azure portal showing where to navigate to extensions." border="true":::

1. The **Automatic upgrade** column in the table shows whether upgrades are enabled, disabled, or not supported for each extension. Select the checkbox next to the extensions for which you want automatic upgrades enabled, then select **Enable automatic upgrade** to turn on the feature. Select **Disable automatic upgrade** to turn off the feature.
- :::image type="content" source="media/manage-automatic-vm-extension-upgrade/portal-enable-auto-upgrade.png" alt-text="Screenshot of Azure portal showing how to select extensions and enable automatic upgrades." border="true":::
### [Azure CLI](#tab/azure-cli)
Update-AzConnectedMachineExtension -ResourceGroup resourceGroupName -MachineName
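For reference, toggling automatic upgrade for a single extension in PowerShell is a call along these lines. A sketch, assuming the Az.ConnectedMachine module and placeholder resource names (verify the exact parameter names against the module's documentation):

```powershell
# Sketch: disable (or re-enable) automatic upgrade for one extension
# on an Arc-enabled server. All resource names are placeholders.
Update-AzConnectedMachineExtension `
    -ResourceGroupName 'myResourceGroup' `
    -MachineName 'myMachine' `
    -Name 'CustomScriptExtension' `
    -EnableAutomaticUpgrade:$false   # pass $true to re-enable
```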
A machine managed by Arc-enabled servers can have multiple extensions with automatic extension upgrade enabled. The same machine can also have other extensions without automatic extension upgrade enabled.
-If multiple extension upgrades are available for a machine, the upgrades may be batched together, but each extension upgrade is applied individually on a machine. A failure on one extension doesn't impact the other extension(s) to be upgraded. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded.
+If multiple extension upgrades are available for a machine, the upgrades might be batched together, but each extension upgrade is applied individually on a machine. A failure on one extension doesn't impact the other extension(s) to be upgraded. For example, if two extensions are scheduled for an upgrade, and the first extension upgrade fails, the second extension will still be upgraded.
## Check automatic extension upgrade history
azure-arc Manage Vm Extensions Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-portal.md
VM extensions can be applied to your Azure Arc-enabled server-managed machine vi
1. From your browser, go to the [Azure portal](https://portal.azure.com).
-2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list.
+2. In the portal, browse to **Machines - Azure Arc** and select your machine from the list.
-3. Choose **Extensions**, then select **Add**. Choose the extension you want from the list of available extensions and follow the instructions in the wizard. In this example, we will deploy the Log Analytics VM extension.
+3. Choose **Extensions**, then select **Add**.
- ![Select VM extension for selected machine](./media/manage-vm-extensions/add-vm-extensions.png)
-
- The following example shows the installation of the Log Analytics VM extension from the Azure portal:
+4. Choose the extension you want from the list of available extensions and follow the instructions in the wizard. In this example, we will deploy the Log Analytics VM extension.
   ![Install Log Analytics VM extension](./media/manage-vm-extensions/mma-extension-config.png)

   To complete the installation, you are required to provide the workspace ID and primary key. If you are not familiar with how to find this information, see [obtain workspace ID and key](../../azure-monitor/agents/agent-windows.md#workspace-id-and-key).
-4. After confirming the required information provided, select **Review + Create**. A summary of the deployment is displayed and you can review the status of the deployment.
+5. After confirming the required information, select **Review + Create**. A summary of the deployment is displayed and you can review the status of the deployment.
> [!NOTE]
> While multiple extensions can be batched together and processed, they are installed serially. Once the first extension installation is complete, installation of the next extension is attempted.
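If you'd rather look up the workspace ID and primary key from the command line instead of the portal, a minimal Az PowerShell sketch (assuming the Az.OperationalInsights module; names are placeholders):

```powershell
# Sketch: fetch the workspace ID (CustomerId) and primary shared key
# needed by the Log Analytics VM extension. Placeholder names.
$rg = 'myResourceGroup'
$ws = 'myWorkspace'

$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName $rg -Name $ws
$keys      = Get-AzOperationalInsightsWorkspaceSharedKey -ResourceGroupName $rg -Name $ws

"Workspace ID : $($workspace.CustomerId)"
"Primary key  : $($keys.PrimarySharedKey)"
```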
You can get a list of the VM extensions on your Azure Arc-enabled server from th
1. From your browser, go to the [Azure portal](https://portal.azure.com).
-2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list.
+2. In the portal, browse to **Machines - Azure Arc** and select your machine from the list.
3. Choose **Extensions**, and the list of installed extensions is returned.
You can upgrade one, or select multiple extensions eligible for an upgrade from
1. From your browser, go to the [Azure portal](https://portal.azure.com).
-2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list.
+2. In the portal, browse to **Machines - Azure Arc** and select your hybrid machine from the list.
3. Choose **Extensions**, and review the status of extensions under the **Update available** column.
You can remove one or more extensions from an Azure Arc-enabled server from the
1. From your browser, go to the [Azure portal](https://portal.azure.com).
-2. In the portal, browse to **Servers - Azure Arc** and select your hybrid machine from the list.
+2. In the portal, browse to **Machines - Azure Arc** and select your hybrid machine from the list.
3. Choose **Extensions**, and then select an extension from the list of installed extensions.
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
Title: Connect hybrid machines to Azure at scale description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using a service principal. Previously updated : 05/23/2022 Last updated : 11/03/2023
The script to automate the download and installation, and to establish the conne
1. From your browser, go to the [Azure portal](https://portal.azure.com).
-1. On the **Servers - Azure Arc** page, select **Add** at the upper left.
+1. On the **Machines - Azure Arc** page, select **Add/Create** at the upper left, then select **Add a machine** from the drop-down menu.
-1. On the **Select a method** page, select the **Add multiple servers** tile, and then select **Generate script**.
+1. On the **Add servers with Azure Arc** page, select the **Add multiple servers** tile, and then select **Generate script**.
-1. On the **Generate script** page, select the subscription and resource group where you want the machine to be managed within Azure. Select an Azure location where the machine metadata will be stored. This location can be the same or different, as the resource group's location.
+1. On the **Basics** page, provide the following:
-1. On the **Prerequisites** page, review the information and then select **Next: Resource details**.
-
-1. On the **Resource details** page, provide the following:
-
- 1. In the **Resource group** drop-down list, select the resource group the machine will be managed from.
- 1. In the **Region** drop-down list, select the Azure region to store the servers metadata.
+ 1. Select the **Subscription** and **Resource group** for the machines.
+ 1. In the **Region** drop-down list, select the Azure region to store the servers' metadata.
    1. In the **Operating system** drop-down list, select the operating system that the script is configured to run on.
    1. If the machine is communicating through a proxy server to connect to the internet, specify the proxy server IP address or the name and port number that the machine will use to communicate with the proxy server. Using this configuration, the agent communicates through the proxy server using the HTTP protocol. Enter the value in the format `http://<proxyURL>:<proxyport>`.
- 1. Select **Next: Authentication**.
-
-1. On the **Authentication** page, under the **service principal** drop-down list, select **Arc-for-servers**. Then select, **Next: Tags**.
+ 1. Select **Next**.
+ 1. In the **Authentication** section, under the **Service principal** drop-down list, select **Arc-for-servers**. Then select **Next**.
1. On the **Tags** page, review the default **Physical location tags** suggested and enter a value, or specify one or more **Custom tags** to support your standards.
-1. Select **Next: Download and run script**.
+1. Select **Next**.
1. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**.
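Stripped to its essentials, the connection step in the downloaded script is an `azcmagent connect` call authenticated with the service principal. A sketch with placeholder values (the generated script also handles agent download, proxy settings, and tags):

```powershell
# Sketch: non-interactive onboarding with a service principal.
# Every ID and name below is a placeholder.
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --service-principal-id 'appId-guid' `
    --service-principal-secret 'client-secret' `
    --resource-group 'myResourceGroup' `
    --tenant-id 'tenant-guid' `
    --subscription-id 'subscription-guid' `
    --location 'eastus'
```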
After you install the agent and configure it to connect to Azure Arc-enabled ser
![Screenshot showing a successful server connection in the Azure portal.](./media/onboard-portal/arc-for-servers-successful-onboard.png)

---

## Next steps

- Review the [Planning and deployment guide](plan-at-scale-deployment.md) to plan for deploying Azure Arc-enabled servers at any scale and implement centralized management and monitoring.
azure-arc Onboard Update Management Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-update-management-machines.md
Title: Connect machines from Azure Automation Update Management description: In this article, you learn how to connect hybrid machines to Azure Arc managed by Automation Update Management. Previously updated : 11/01/2023 Last updated : 11/06/2023
Perform the following steps to configure the hybrid machine with Arc-enabled ser
1. From your browser, go to the [Azure portal](https://portal.azure.com).
-1. Navigate to the **Servers - Azure Arc** page, and then select **Add** at the upper left.
+1. Navigate to the **Machines - Azure Arc** page, select **Add/Create**, and then select **Add a machine** from the drop-down menu.
-1. On the **Select a method** page, select the **Add managed servers from Update Management (preview)** tile, and then select **Add servers**.
+1. On the **Add servers with Azure Arc** page, select **Add servers** from the **Add managed servers from Update Management** tile.
-1. On the **Basics** page, configure the following:
+1. On the **Resource details** page, configure the following:
- 1. In the **Resource group** drop-down list, select the resource group the machine will be managed from.
+ 1. Select the **Subscription** and **Resource group** where you want the server to be managed within Azure.
    1. In the **Region** drop-down list, select the Azure region to store the servers' metadata.
    1. If the machine is communicating through a proxy server to connect to the internet, specify the proxy server IP address or the name and port number that the machine will use to communicate with the proxy server. Enter the value in the format `http://<proxyURL>:<proxyport>`.
- 1. Select **Next: Machines**.
+ 1. Select **Next**.
-1. On the **Machines** page, select the **Subscription** and **Automation account** from the drop-down list that has the Update Management feature enabled and includes the machines you want to onboard to Azure Arc-enabled servers.
+1. On the **Servers** page, select **Add Servers**, then select the **Subscription** and **Automation account** from the drop-down list that has the Update Management feature enabled and includes the machines you want to onboard to Azure Arc-enabled servers.
   After specifying the Automation account, the list below returns non-Azure machines managed by Update Management for that Automation account. Both Windows and Linux machines are listed; for each one, select **add**. You can review your selection by selecting **Review selection**, and if you want to remove a machine, select **remove** from under the **Action** column.
- Once you confirm your selection, select **Next: Tags**.
+ Once you confirm your selection, select **Next**.
1. On the **Tags** page, specify one or more **Name**/**Value** pairs to support your standards. Select **Next: Review + add**.
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
azure-arc Prepare Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prepare-extended-security-updates.md
# Prepare to deliver Extended Security Updates for Windows Server 2012
-With Windows Server 2012 and Windows Server 2012 R2 reaching end of support on October 10, 2023, Azure Arc-enabled servers lets you enroll your existing Windows Server 2012/2012 R2 machines in [Extended Security Updates (ESUs)](/windows-server/get-started/extended-security-updates-overview). Affording both cost flexibility and an enhanced delivery experience, Azure Arc better positions you to migrate to Azure.
+With Windows Server 2012 and Windows Server 2012 R2 having reached end of support on October 10, 2023, Azure Arc-enabled servers lets you enroll your existing Windows Server 2012/2012 R2 machines in [Extended Security Updates (ESUs)](/windows-server/get-started/extended-security-updates-overview). Affording both cost flexibility and an enhanced delivery experience, Azure Arc better positions you to migrate to Azure.
The purpose of this article is to help you understand the benefits and how to prepare to use Arc-enabled servers to enable delivery of ESUs.
Delivering ESUs to your Windows Server 2012/2012 R2 machines provides the follow
For Azure Arc-enabled servers enrolled in WS2012 ESUs enabled by Azure Arc, free access is provided to these Azure services from October 10, 2023:

* [Azure Update Manager](../../update-center/overview.md) - Unified management and governance of update compliance that includes not only Azure and hybrid machines, but also ESU update compliance for all your Windows Server 2012/2012 R2 machines.
+ Enrollment in ESUs does not impact Azure Update Manager. After enrollment in ESUs through Azure Arc, the server becomes eligible for ESU patches. These patches can be delivered through Azure Update Manager or any other patching solution. You'll still need to configure updates from Microsoft Updates or Windows Server Update Services.
* [Azure Automation Change Tracking and Inventory](/azure/automation/change-tracking/overview?tabs=python-2) - Track changes in virtual machines hosted in Azure, on-premises, and other cloud environments.
* [Azure Policy Guest Configuration](/azure/cloud-adoption-framework/manage/azure-server-management/guest-configuration-policy) - Audit the configuration settings in a virtual machine. Guest configuration supports Azure VMs natively and non-Azure physical and virtual servers through Azure Arc-enabled servers.
Other Azure services through Azure Arc-enabled servers are available as well, wi
* [Microsoft Sentinel](scenario-onboard-azure-sentinel.md) - Collect security-related events and correlate them with other data sources.

>[!NOTE]
- >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Azure Update Manager (preview) and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines are also planned for the third quarter.
+ >Activation of ESU is planned for the third quarter of 2023. Using Azure services such as Azure Update Manager and Azure Policy to support managing ESU-eligible Windows Server 2012/2012 R2 machines is also planned for the third quarter.
## Prepare delivery of ESUs

To prepare for this new offer, you need to plan and prepare to onboard your machines to Azure Arc-enabled servers through the installation of the [Azure Connected Machine agent](agent-overview.md) (version 1.34 or higher) and establishing a connection to Azure. Windows Server 2012 Extended Security Updates supports Windows Server 2012 and R2 Standard and Datacenter editions. Windows Server 2012 Storage is not supported.
-We recommend you deploy your machines to Azure Arc in preparation for when the related Azure services deliver supported functionality to manage ESU. Once these machines are onboarded to Azure Arc-enabled servers, you'll have visibility into their ESU coverage and enroll through the Azure portal or using Azure Policy one month before Windows Server 2012 end of support. Billing for this service starts from October 2023, after Windows Server 2012 end of support.
+We recommend you deploy your machines to Azure Arc in preparation for when the related Azure services deliver supported functionality to manage ESU. Once these machines are onboarded to Azure Arc-enabled servers, you'll have visibility into their ESU coverage and enroll through the Azure portal or using Azure Policy. Billing for this service starts from October 2023 (i.e., after Windows Server 2012 end of support).
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Azure Arc supports the following Windows and Linux operating systems. Only x86-6
* Both Desktop and Server Core experiences are supported
* Azure Editions are supported on Azure Stack HCI
-The Azure Connected Machine agent can't currently be installed on systems hardened by the Center for Information Security (CIS) Benchmark.
+The Azure Connected Machine agent hasn't been tested on operating systems hardened by the Center for Information Security (CIS) Benchmark.
### Client operating system guidance
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/private-link-security.md
Last updated 06/20/2023
# Use Azure Private Link to securely connect servers to Azure Arc
-[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. For many services, you just set up an endpoint per resource. This means you can connect your on-premises or multi-cloud servers with Azure Arc and send all traffic over an Azure [ExpressRoute](../../expressroute/expressroute-introduction.md) or site-to-site [VPN connection](../../vpn-gateway/vpn-gateway-about-vpngateways.md) instead of using public networks.
+[Azure Private Link](../../private-link/private-link-overview.md) allows you to securely link Azure PaaS services to your virtual network using private endpoints. For many services, you just set up an endpoint per resource. This means you can connect your on-premises or multicloud servers with Azure Arc and send all traffic over an Azure [ExpressRoute](../../expressroute/expressroute-introduction.md) or site-to-site [VPN connection](../../vpn-gateway/vpn-gateway-about-vpngateways.md) instead of using public networks.
Starting with Azure Arc-enabled servers, you can use a Private Link Scope model to allow multiple servers or machines to communicate with their Azure Arc resources using a single private endpoint.
There are two ways you can achieve this:
|Priority |150 (must be lower than any rules that block internet access) |151 (must be lower than any rules that block internet access) |
|Name |AllowAADOutboundAccess |AllowAzOutboundAccess |

-- Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Microsoft Entra ID and Azure using the downloadable service tag files. The [JSON file](https://www.microsoft.com/en-us/download/details.aspx?id=56519) contains all the public IP address ranges used by Microsoft Entra ID and Azure and is updated monthly to reflect any changes. Azure ADs service tag is `AzureActiveDirectory` and Azure's service tag is `AzureResourceManager`. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
+- Configure the firewall on your local network to allow outbound TCP 443 (HTTPS) access to Microsoft Entra ID and Azure using the downloadable service tag files. The [JSON file](https://www.microsoft.com/en-us/download/details.aspx?id=56519) contains all the public IP address ranges used by Microsoft Entra ID and Azure and is updated monthly to reflect any changes. Azure AD's service tag is `AzureActiveDirectory` and Azure's service tag is `AzureResourceManager`. Consult with your network administrator and network firewall vendor to learn how to configure your firewall rules.
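If you manage these rules programmatically, outbound rules matching the table above can be added with Az PowerShell. A sketch assuming the Az.Network module; the NSG and resource group names are placeholders, and priorities should be checked against your existing rules:

```powershell
# Sketch: outbound HTTPS rules for the Microsoft Entra ID and Azure
# Resource Manager service tags, mirroring the priorities in the table above.
Get-AzNetworkSecurityGroup -Name 'myNsg' -ResourceGroupName 'myResourceGroup' |
  Add-AzNetworkSecurityRuleConfig -Name 'AllowAADOutboundAccess' `
    -Access Allow -Protocol Tcp -Direction Outbound -Priority 150 `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix 'AzureActiveDirectory' -DestinationPortRange 443 |
  Add-AzNetworkSecurityRuleConfig -Name 'AllowAzOutboundAccess' `
    -Access Allow -Protocol Tcp -Direction Outbound -Priority 151 `
    -SourceAddressPrefix '*' -SourcePortRange '*' `
    -DestinationAddressPrefix 'AzureResourceManager' -DestinationPortRange 443 |
  Set-AzNetworkSecurityGroup
```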
See the visual diagram under the section [How it works](#how-it-works) for the network traffic flows.
Once your Azure Arc Private Link Scope is created, you need to connect it with o
a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Arc-enabled server.
- b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones may be different from what is shown in the screenshot below.
+ b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones might be different from what is shown in the screenshot below.
> [!NOTE]
> If you choose **No** and prefer to manage DNS records manually, first complete setting up your Private Link - including this Private Endpoint and the Private Scope configuration. Then, configure your DNS according to the instructions in [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md). Make sure not to create empty records as preparation for your Private Link setup. The DNS records you create can override existing settings and impact your connectivity with Azure Arc-enabled servers.
If you opted out of using Azure private DNS zones during private endpoint creati
### Single server scenarios
-If you're only planning to use Private Links to support a few machines or servers, you may not want to update your entire network's DNS configuration. In this case, you can add the private endpoint hostnames and IP addresses to your operating systems **Hosts** file. Depending on the OS configuration, the Hosts file can be the primary or alternative method for resolving hostname to IP address.
+If you're only planning to use Private Links to support a few machines or servers, you might not want to update your entire network's DNS configuration. In this case, you can add the private endpoint hostnames and IP addresses to your operating system's **Hosts** file. Depending on the OS configuration, the Hosts file can be the primary or alternative method for resolving hostname to IP address.
#### Windows
If you're only planning to use Private Links to support a few machines or server
1. Add the private endpoint IPs and hostnames as shown in the table from step 3 under [Manual DNS server configuration](#manual-dns-server-configuration). The hosts file requires the IP address first followed by a space and then the hostname.
-1. Save the file with your changes. You may need to save to another directory first, then copy the file to the original path.
+1. Save the file with your changes. You might need to save to another directory first, then copy the file to the original path.
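On Windows, the edit can also be scripted. A sketch; the endpoint names and IP addresses below are illustrative placeholders, not your actual private endpoint records:

```powershell
# Sketch: append host entries for the private endpoint records.
# Replace the IPs and hostnames with the values from your own private endpoint.
$entries = @(
    '10.0.0.4 gbl.his.arc.azure.com'
    '10.0.0.5 agentserviceapi.guestconfiguration.azure.com'
)
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value $entries
```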
#### Linux
When connecting a machine or server with Azure Arc-enabled servers for the first
1. From your browser, go to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Servers -Azure Arc**.
+1. Navigate to **Machines - Azure Arc**.
-1. On the **Servers - Azure Arc** page, select **Add** at the upper left.
+1. On the **Machines - Azure Arc** page, select **Add/Create** at the upper left, and then select **Add a machine** from the drop-down menu.
1. On the **Add servers with Azure Arc** page, select either **Add a single server** or **Add multiple servers** depending on your deployment scenario, and then select **Generate script**.
1. On the **Generate script** page, select the subscription and resource group where you want the machine to be managed within Azure. Select an Azure location where the machine metadata will be stored. This location can be the same as or different from the resource group's location.
-1. On the **Prerequisites** page, review the information and then select **Next: Resource details**.
+1. On the **Basics** page, provide the following:
-1. On the **Resource details** page, provide the following:
-
- 1. In the **Resource group** drop-down list, select the resource group the machine will be managed from.
+ 1. Select the **Subscription** and **Resource group** for the machine.
    1. In the **Region** drop-down list, select the Azure region to store the machine or server metadata.
    1. In the **Operating system** drop-down list, select the operating system that the script is configured to run on.
- 1. Under **Network Connectivity**, select **Private endpoint** and select the Azure Arc Private Link Scope created in Part 1 from the drop-down list.
+ 1. Under **Connectivity method**, select **Private endpoint** and select the Azure Arc Private Link Scope created in Part 1 from the drop-down list.
:::image type="content" source="./media/private-link-security/arc-enabled-servers-create-script.png" alt-text="Selecting Private Endpoint connectivity option" border="true":::
When connecting a machine or server with Azure Arc-enabled servers for the first
1. On the **Download and run script** page, review the summary information, and then select **Download**. If you still need to make changes, select **Previous**.
-After downloading the script, you have to run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you may need to download the agent from a computer with internet access and transfer it to your machine or server, and then modify the script with the path to the agent.
+After downloading the script, you have to run it on your machine or server using a privileged (administrator or root) account. Depending on your network configuration, you might need to download the agent from a computer with internet access and transfer it to your machine or server, and then modify the script with the path to the agent.
The Windows agent can be downloaded from [https://aka.ms/AzureConnectedMachineAgent](https://aka.ms/AzureConnectedMachineAgent) and the Linux agent can be downloaded from [https://packages.microsoft.com](https://packages.microsoft.com). Look for the latest version of the **azcmagent** under your OS distribution directory and install it with your local package manager. The script will return status messages letting you know if onboarding was successful after it completes.

> [!TIP]
-> Network traffic from the Azure Connected Machine agent to Microsoft Entra ID and Azure Resource Manager will continue to use public endpoints. If your server needs to communicate through a proxy server to reach these endpoints, [configure the agent with the proxy server URL](manage-agent.md#update-or-remove-proxy-settings) before connecting it to Azure. You may also need to [configure a proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) for the Azure Arc services if your private endpoint is not accessible from your proxy server.
+> Network traffic from the Azure Connected Machine agent to Microsoft Entra ID and Azure Resource Manager will continue to use public endpoints. If your server needs to communicate through a proxy server to reach these endpoints, [configure the agent with the proxy server URL](manage-agent.md#update-or-remove-proxy-settings) before connecting it to Azure. You might also need to [configure a proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) for the Azure Arc services if your private endpoint is not accessible from your proxy server.
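The proxy settings mentioned in the tip are applied through the agent's `config` command. A sketch with a placeholder proxy URL, assuming agent version 1.13 or later; check the agent documentation for the full list of supported bypass values:

```powershell
# Sketch: route agent traffic through a proxy, but bypass it for the
# Azure Arc endpoints served by the private endpoint. Placeholder URL.
azcmagent config set proxy.url 'http://myproxy.contoso.com:8080'
azcmagent config set proxy.bypass 'Arc'   # 'Arc' is one documented bypass value; verify for your agent version
```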
### Configure an existing Azure Arc-enabled server
For Azure Arc-enabled servers that were set up prior to your private link scope,
:::image type="content" source="./media/private-link-security/select-servers-private-link-scope.png" lightbox="./media/private-link-security/select-servers-private-link-scope.png" alt-text="Selecting Azure Arc resources" border="true":::
-It may take up to 15 minutes for the Private Link Scope to accept connections from the recently associated server(s).
+It might take up to 15 minutes for the Private Link Scope to accept connections from the recently associated server(s).
## Troubleshooting
azure-arc Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/disaster-recovery.md
To recover from Arc resource bridge VM deletion, you need to deploy a new resour
1. Copy the Azure region and resource IDs of the Arc resource bridge, custom location, and SCVMM Azure resources.
-2. Find and delete the old Arc resource bridge template from your SCVMM.
+2. Find and delete the old Arc resource bridge resource under the [Resource Bridges tab from the Azure Arc center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/resourceBridges).
3. Download the [onboarding script](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc#download-the-onboarding-script) from the Azure portal and update the following section in the script, using the same information as the original resources in Azure.
To recover from Arc resource bridge VM deletion, you need to deploy a new resour
5. [Provide the inputs](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc#script-runtime) as prompted.
-6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again.
+6. In the same machine, run the following scripts, as applicable:
+ - [Download the script](https://download.microsoft.com/download/6/b/4/6b4a5009-fed8-46c2-b22b-b24a4d0a06e3/arcvmm-appliance-dr.ps1) if you are running the script from a Windows machine
+ - [Download the script](https://download.microsoft.com/download/0/5/c/05c2bcb8-87f8-4ead-9757-a87a0759071c/arcvmm-appliance-dr.sh) if you are running the script from a Linux machine
+
+7. Once the script runs successfully, the old resource bridge is recovered and the connection is re-established to the existing Azure-enabled SCVMM resources.
## Next steps
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
This QuickStart shows you how to connect your SCVMM management server to Azure A
| **Requirement** | **Details** |
| --- | --- |
| **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |
-| **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud with minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](https://learn.microsoft.com/system-center/vmm/network-pool?view=sc-vmm-2022) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP is not supported. |
+| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud with minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](https://learn.microsoft.com/system-center/vmm/network-pool?view=sc-vmm-2022) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP is not supported. |
| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of local administrator account in the SCVMM server. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. |
| **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend executing the helper script directly in the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. |
azure-arc Administer Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/administer-arc-vmware.md
Title: Perform ongoing administration for Arc-enabled VMware vSphere description: Learn how to perform administrator operations related to Azure Arc-enabled VMware vSphere Previously updated : 08/18/2023 Last updated : 11/06/2023
# Perform ongoing administration for Arc-enabled VMware vSphere
-In this article, you learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere (preview):
+In this article, you learn how to perform various administrative operations related to Azure Arc-enabled VMware vSphere:
-- Upgrading the Azure Arc resource bridge (preview)
+- Upgrading the Azure Arc resource bridge
- Updating the credentials - Collecting logs from the Arc resource bridge
azure-arc Azure Arc Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-agent.md
Title: Azure Arc agent description: Learn about Azure Arc agent Previously updated : 10/31/2023 Last updated : 11/06/2023
azure-arc Enable Guest Management At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/enable-guest-management-at-scale.md
Title: Install Arc agent at scale for your VMware VMs description: Learn how to enable guest management at scale for Arc enabled VMware vSphere VMs. Previously updated : 08/21/2023 Last updated : 11/06/2023
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
Title: What is Azure Arc-enabled VMware vSphere (preview)?
+ Title: What is Azure Arc-enabled VMware vSphere?
description: Azure Arc-enabled VMware vSphere extends Azure governance and management capabilities to VMware vSphere infrastructure and delivers a consistent management experience across both platforms. Previously updated : 10/31/2023 Last updated : 11/06/2023
-# What is Azure Arc-enabled VMware vSphere (preview)?
+# What is Azure Arc-enabled VMware vSphere?
-Azure Arc-enabled VMware vSphere (preview) is an [Azure Arc](../overview.md) service that helps you simplify management of hybrid IT estate distributed across VMware vSphere and Azure. It does so by extending the Azure control plane to VMware vSphere infrastructure and enabling the use of Azure security, governance, and management capabilities consistently across VMware vSphere and Azure.
+Azure Arc-enabled VMware vSphere is an [Azure Arc](../overview.md) service that helps you simplify management of hybrid IT estate distributed across VMware vSphere and Azure. It does so by extending the Azure control plane to VMware vSphere infrastructure and enabling the use of Azure security, governance, and management capabilities consistently across VMware vSphere and Azure.
-Arc-enabled VMware vSphere (preview) allows you to:
+Arc-enabled VMware vSphere allows you to:
- Discover your VMware vSphere estate (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register resources with Arc at scale.
Arc-enabled VMware vSphere extends Azure's control plane (Azure Resource Manager
## How does it work?
-Arc-enabled VMware vSphere provides these capabilities by integrating with your VMware vCenter Server. To connect your VMware vCenter Server to Azure Arc, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) (preview) in your vSphere environment. Azure Arc resource bridge is a virtual appliance that hosts the components that communicate with your vCenter Server and Azure.
+Arc-enabled VMware vSphere provides these capabilities by integrating with your VMware vCenter Server. To connect your VMware vCenter Server to Azure Arc, you need to deploy the [Azure Arc resource bridge](../resource-bridge/overview.md) in your vSphere environment. Azure Arc resource bridge is a virtual appliance that hosts the components that communicate with your vCenter Server and Azure.
When a VMware vCenter Server is connected to Azure, an automatic discovery of the inventory of vSphere resources is performed. This inventory data is continuously kept in sync with the vCenter Server.
You have the flexibility to start with either option, and incorporate the other
## Supported VMware vSphere versions
-Azure Arc-enabled VMware vSphere (preview) currently works with vCenter Server versions 7 and 8.
+Azure Arc-enabled VMware vSphere currently works with vCenter Server versions 7 and 8.
> [!NOTE]
-> Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend you to use Arc-enabled VMware vSphere with it at this point.
+> Azure Arc-enabled VMware vSphere supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, we don't recommend you to use Arc-enabled VMware vSphere with it at this point.
## Supported regions
-You can use Azure Arc-enabled VMware vSphere (preview) in these supported regions:
-- Australia East
-- Canada Central
+You can use Azure Arc-enabled VMware vSphere in these supported regions:
- East US
-- East US 2
-- North Europe
-- Southeast Asia
+- East US2
+- West US2
+- West US3
+- South Central US
+- Canada Central
- UK South
+- North Europe
- West Europe
-- West US 2
-- West US 3
+- Sweden Central
+- Southeast Asia
+- Australia East
For the most up-to-date information about region availability of Azure Arc-enabled VMware vSphere, see the [Azure Products by Region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-arc&regions=all) page.
azure-arc Perform Vm Ops Through Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/perform-vm-ops-through-azure.md
Title: Perform VM operations on VMware VMs through Azure description: Learn how to view the operations that you can do on VMware virtual machines and install the Log Analytics agent. Previously updated : 08/18/2023 Last updated : 11/06/2023 # Manage VMware VMs in Azure through Arc-enabled VMware vSphere
-In this article, you learn how to perform various operations on the Azure Arc-enabled VMware vSphere (preview) VMs such as:
+In this article, you learn how to perform various operations on the Azure Arc-enabled VMware vSphere VMs such as:
- Start, stop, and restart a VM
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
Title: Connect VMware vCenter Server to Azure Arc by using the helper script
description: In this quickstart, you learn how to use the helper script to connect your VMware vCenter Server instance to Azure Arc. Previously updated : 10/31/2023 Last updated : 11/06/2023
To start using the Azure Arc-enabled VMware vSphere features, you need to connect your VMware vCenter Server instance to Azure Arc. This quickstart shows you how to connect your VMware vCenter Server instance to Azure Arc by using a helper script.
-First, the script deploys a virtual appliance called [Azure Arc resource bridge (preview)](../resource-bridge/overview.md) in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between vCenter Server and Azure Arc.
+First, the script deploys a virtual appliance called [Azure Arc resource bridge](../resource-bridge/overview.md) in your vCenter environment. Then, it installs a VMware cluster extension to provide a continuous connection between vCenter Server and Azure Arc.
> [!IMPORTANT]
> This article describes a way to connect a generic vCenter Server to Azure Arc. If you're trying to enable Arc for Azure VMware Solution (AVS) private cloud, please follow this guide instead - [Deploy Arc for Azure VMware Solution](../../azure-vmware/deploy-arc-for-azure-vmware-solution.md). With the Arc for AVS onboarding process you need to provide fewer inputs and Arc capabilities are better integrated into the AVS private cloud portal experience.
You need a vSphere account that can:
- Read all inventory.
- Deploy and update VMs to all the resource pools (or clusters), networks, and VM templates that you want to use with Azure Arc.
-This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere and the deployment of the Azure Arc resource bridge (preview) VM.
+This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere and the deployment of the Azure Arc resource bridge VM.
### Workstation
A typical onboarding that uses the script takes 30 to 60 minutes. During the pro
| **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge VM should be deployed. |
| **Network selection** | Select the name of the virtual network or segment to which the Azure Arc resource bridge VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). |
| **Static IP / DHCP** | For deploying Azure Arc resource bridge, the preferred configuration is to use Static IP. Enter **n** to select static IP configuration. While not recommended, if you have DHCP server in your network and want to use it instead, enter **y**. If you're using a DHCP server, reserve the IP address assigned to the Azure Arc Resource Bridge VM (Appliance VM IP). If you use DHCP, the cluster configuration IP address still needs to be a static IP address. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc resource bridge VM for DNS resolution. Azure Arc resource bridge VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the Azure Arc resource bridge VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br>|
-| **Control Plane IP address** | Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane always requires a static IP address. Provide an IP address that meets the following requirements: <br> - The IP address must have internet access. <br> - The IP address must be within the subnet defined by IP address prefix. <br> - If you're using static IP address option for resource bridge VM IP address, the control plane IP address must be outside of the IP address range provided for the VM (Start range IP - End range IP). <br> - If there's a DHCP service on the network, the IP address must be outside of DHCP range.|
+| **Control Plane IP address** | Azure Arc resource bridge runs a Kubernetes cluster, and its control plane always requires a static IP address. Provide an IP address that meets the following requirements: <br> - The IP address must have internet access. <br> - The IP address must be within the subnet defined by IP address prefix. <br> - If you're using static IP address option for resource bridge VM IP address, the control plane IP address must be outside of the IP address range provided for the VM (Start range IP - End range IP). <br> - If there's a DHCP service on the network, the IP address must be outside of DHCP range.|
| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge VM will be deployed. |
| **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge VM. |
| **Folder** | Select the name of the vSphere VM and the template folder where the Azure Arc resource bridge's VM will be deployed. |
azure-arc Quick Start Create A Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-create-a-vm.md
Title: Create a virtual machine on VMware vCenter using Azure Arc description: In this quickstart, you learn how to create a virtual machine on VMware vCenter using Azure Arc Previously updated : 10/23/2023 Last updated : 11/06/2023
azure-arc Recover From Resource Bridge Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion.md
Title: Perform disaster recovery operations
description: Learn how to perform recovery operations for the Azure Arc resource bridge VM in Azure Arc-enabled VMware vSphere disaster scenarios. Previously updated : 08/18/2023 Last updated : 11/06/2023 # Recover from accidental deletion of resource bridge VM
-In this article, you learn how to recover the Azure Arc resource bridge (preview) connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc will fail.
+In this article, you learn how to recover the Azure Arc resource bridge connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc fail.
-## Recovering the Arc resource bridge in case of VM deletion
+## Recovering the Arc resource bridge after VM deletion
To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps.
To recover from Arc resource bridge VM deletion, you need to deploy a new resour
5. [Provide the inputs](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md#inputs-for-the-script) as prompted.
-6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources will be manageable in Azure again.
+6. Once the script successfully finishes, the resource bridge should be recovered, and the previously disconnected Arc-enabled resources are manageable in Azure again.
## Next steps
-[Troubleshoot Azure Arc resource bridge (preview) issues](../resource-bridge/troubleshoot-resource-bridge.md)
+[Troubleshoot Azure Arc resource bridge issues](../resource-bridge/troubleshoot-resource-bridge.md)
If the recovery steps mentioned above are unsuccessful in restoring Arc resource bridge to its original state, try one of the following channels for support:
azure-arc Remove Vcenter From Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md
description: This article explains the steps to cleanly remove your VMware vCent
Previously updated : 03/28/2022 Last updated : 11/06/2023
azure-arc Setup And Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/setup-and-manage-self-service-access.md
Title: Set up and manage self-service access to VMware resources through Azure RBAC description: Learn how to manage access to your on-premises VMware resources through Azure role-based access control (Azure RBAC). Previously updated : 08/21/2023 Last updated : 11/06/2023 # Customer intent: As a VI admin, I want to manage access to my vCenter resources in Azure so that I can keep environments secure
To provision VMware VMs and change their size, add disks, change network interfa
You must assign this role on each resource pool (or cluster or host), network, datastore, and template that a user or a group needs to access.
-1. Go to the [**VMware vCenters (preview)** list in Arc center](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/vCenter).
+1. Go to the [**VMware vCenters** list in Arc center](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/vCenter).
2. Search and select your vCenter.
azure-arc Support Matrix For Arc Enabled Vmware Vsphere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md
Title: Plan for deployment description: Learn about the support matrix for Arc-enabled VMware vSphere including vCenter Server versions supported, network requirements, and more. Previously updated : 10/31/2023 Last updated : 11/06/2023 # Customer intent: As a VI admin, I want to understand the support matrix for Arc-enabled VMware vSphere.
-# Support matrix for Azure Arc-enabled VMware vSphere (preview)
+# Support matrix for Azure Arc-enabled VMware vSphere
-This article documents the prerequisites and support requirements for using [Azure Arc-enabled VMware vSphere (preview)](overview.md) to manage your VMware vSphere VMs through Azure Arc.
+This article documents the prerequisites and support requirements for using [Azure Arc-enabled VMware vSphere](overview.md) to manage your VMware vSphere VMs through Azure Arc.
-To use Arc-enabled VMware vSphere, you must deploy an Azure Arc resource bridge (preview) in your VMware vSphere environment. The resource bridge provides an ongoing connection between your VMware vCenter Server and Azure. Once you've connected your VMware vCenter Server to Azure, components on the resource bridge discover your vCenter inventory. You can enable them in Azure and start performing virtual hardware and guest OS operations on them using Azure Arc.
+To use Arc-enabled VMware vSphere, you must deploy an Azure Arc resource bridge in your VMware vSphere environment. The resource bridge provides an ongoing connection between your VMware vCenter Server and Azure. Once you've connected your VMware vCenter Server to Azure, components on the resource bridge discover your vCenter inventory. You can enable them in Azure and start performing virtual hardware and guest OS operations on them using Azure Arc.
## VMware vSphere requirements
The following requirements must be met in order to use Azure Arc-enabled VMware
### Supported vCenter Server versions
-Azure Arc-enabled VMware vSphere (preview) works with vCenter Server versions 7 and 8.
+Azure Arc-enabled VMware vSphere works with vCenter Server versions 7 and 8.
> [!NOTE]
-> Azure Arc-enabled VMware vSphere (preview) currently supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it's not recommended to use Arc-enabled VMware vSphere with it at this point.
+> Azure Arc-enabled VMware vSphere currently supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it's not recommended to use Arc-enabled VMware vSphere with it at this point.
### Required vSphere account privileges
You need a vSphere account that can:
- Read all inventory.
- Deploy and update VMs to all the resource pools (or clusters), networks, and VM templates that you want to use with Azure Arc.
-This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere (preview) and the deployment of the Azure Arc resource bridge (preview) VM.
+This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere and the deployment of the Azure Arc resource bridge VM.
### Resource bridge resource requirements
azure-arc Switch To New Preview Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-preview-version.md
- Title: Switch to the new preview version
-description: Learn to switch to the new preview version and use its capabilities
- Previously updated : 08/22/2023------
-# Customer intent: As a VI admin, I want to switch to the new preview version of Arc-enabled VMware vSphere and leverage the associated capabilities
--
-# Switch to the new preview version
-
-On August 21, 2023, we rolled out major changes to Azure Arc-enabled VMware vSphere preview. We're now announcing a new preview. By switching to the new preview version, you can use all the Azure management services that are available for Arc-enabled Servers.
-
-> [!NOTE]
-> If you're new to Arc-enabled VMware vSphere (preview), you will be able to leverage the new capabilities by default. To get started with the new preview, see [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](quick-start-connect-vcenter-to-arc-using-script.md).
--
-## Switch to the new preview version (Existing preview customer)
-
-If you're an existing **Azure Arc-enabled VMware** customer, for VMs that are Azure-enabled, follow these steps to switch to the new preview version:
-
->[!Note]
->If you had enabled guest management on any of the VMs, remove [VM extensions](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-1-remove-vm-extensions) and [disconnect agents](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-2-disconnect-the-agent-from-azure-arc).
-
-1. From your browser, go to the vCenters blade on [Azure Arc Center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview) and select the vCenter resource.
-
-2. Select all the virtual machines that are Azure enabled with the older preview version.
-
-3. Select **Remove from Azure**.
-
- :::image type="VM Inventory view" source="media/switch-to-new-preview-version/vm-inventory-view-inline.png" alt-text="Screenshot of VM Inventory view." lightbox="media/switch-to-new-preview-version/vm-inventory-view-expanded.png":::
-
-4. After successful removal from Azure, enable the same resources again in Azure.
-
-5. Once the resources are re-enabled, the VMs are auto switched to the new preview version. The VM resources will now be represented as **Machine - Azure Arc (VMware)**.
-
- :::image type=" New VM browse view" source="media/switch-to-new-preview-version/new-vm-browse-view-inline.png" alt-text="Screenshot of New VM browse view." lightbox="media/switch-to-new-preview-version/new-vm-browse-view-expanded.png":::
-
-## Next steps
-
-[Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](/azure/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script).
azure-arc Switch To New Version Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/switch-to-new-version-vmware.md
+
+ Title: Switch to the new version of VMware vSphere
+description: Learn to switch to the new version of VMware vSphere and use its capabilities
+ Last updated : 11/06/2023++++++
+# Customer intent: As a VI admin, I want to switch to the new version of Arc-enabled VMware vSphere and leverage the associated capabilities.
++
+# Switch to the new version of VMware vSphere
+
+On August 21, 2023, we rolled out major changes to **Azure Arc-enabled VMware vSphere**. By switching to the new version, you can use all the Azure management services that are available for Arc-enabled Servers.
+
+> [!NOTE]
+> If you're new to Arc-enabled VMware vSphere, you'll be able to leverage the new capabilities by default. To get started with the new version, see [Quickstart: Connect VMware vCenter Server to Azure Arc by using the helper script](quick-start-connect-vcenter-to-arc-using-script.md).
++
+## Switch to the new version (Existing customer)
+
+If you onboarded to **Azure Arc-enabled VMware** before August 21, 2023, follow these steps to switch your Azure-enabled VMs to the new version:
+
+>[!Note]
+>If you had enabled guest management on any of the VMs, remove [VM extensions](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-1-remove-vm-extensions) and [disconnect agents](/azure/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware#step-2-disconnect-the-agent-from-azure-arc).
+
+1. From your browser, go to the vCenters blade on [Azure Arc Center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview) and select the vCenter resource.
+
+2. Select all the virtual machines that are Azure enabled with the older version.
+
+3. Select **Remove from Azure**.
+
+ :::image type="VM Inventory view" source="media/switch-to-new-version-vmware/vm-inventory-view-inline.png" alt-text="Screenshot of VM Inventory view." lightbox="media/switch-to-new-version-vmware/vm-inventory-view-expanded.png":::
+
+4. After successful removal from Azure, enable the same resources again in Azure.
+
+5. Once the resources are re-enabled, the VMs are automatically switched to the new version. The VM resources are now represented as **Machine - Azure Arc (VMware)**.
+
+ :::image type=" New VM browse view" source="media/switch-to-new-version-vmware/new-vm-browse-view-inline.png" alt-text="Screenshot of New VM browse view." lightbox="media/switch-to-new-version-vmware/new-vm-browse-view-expanded.png":::
+
+## Next steps
+
+[Create a virtual machine on VMware vCenter using Azure Arc](/azure/azure-arc/vmware-vsphere/quick-start-create-a-vm).
azure-arc Troubleshoot Guest Management Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/troubleshoot-guest-management-issues.md
Title: Troubleshoot Guest Management Issues description: Learn about how to troubleshoot the guest management issues for Arc-enabled VMware vSphere. Previously updated : 08/18/2023 Last updated : 11/06/2023
azure-cache-for-redis Cache Best Practices Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-performance.md
redis-benchmark -h yourcache.redis.cache.windows.net -a yourAccesskey -t GET -n
>These numbers might change as we post newer results periodically. >
+>[!IMPORTANT]
+>Microsoft periodically updates the underlying VM used in cache instances. This can change the performance characteristics from cache to cache and from region to region. The example benchmarking values on this page reflect older generation cache hardware in a single region. You might see better or different results in practice.
+>
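For context, a complete `redis-benchmark` invocation for this kind of measurement might look like the following sketch. The host name and access key are placeholders, and the `-p 6379`, `-d 1024`, `-n 1000000`, and `-c 50` values are illustrative assumptions, not recommended settings.

```bash
# Measure GET throughput with 1-kB values against the non-TLS port:
# 50 parallel clients issuing 1,000,000 requests in total.
redis-benchmark -h yourcache.redis.cache.windows.net -p 6379 -a yourAccesskey \
  -t GET -d 1024 -n 1000000 -c 50
```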
+ ### Standard tier

| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
azure-functions Durable Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-bindings.md
Make sure to choose your Durable Functions development language at the top of th
## Python v2 programming model
-Durable Functions provides preview support of the new [Python v2 programming model](../functions-reference-python.md?pivots=python-mode-decorators). To use the v2 model, you must install the Durable Functions SDK, which is the PyPI package `azure-functions-durable`, version `1.2.2` or a later version. During the preview, you can provide feedback and suggestions in the [Durable Functions SDK for Python repo](https://github.com/Azure/azure-functions-durable-python/issues).
-
-Using [Extension Bundles](../functions-bindings-register.md#extension-bundles) isn't currently supported for the v2 model with Durable Functions. You'll instead need to manage your extensions manually as follows:
-
-1. Remove the `extensionBundle` section of your `host.json` file.
-
-1. Run the `func extensions install --package Microsoft.Azure.WebJobs.Extensions.DurableTask --version 2.9.1` command on your terminal. This installs the Durable Functions extension for your app, which allows you to use the v2 model preview. For more information, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install).
+Durable Functions is supported in the new [Python v2 programming model](../functions-reference-python.md?pivots=python-mode-decorators). To use the v2 model, you must install the Durable Functions SDK, which is the PyPI package `azure-functions-durable`, version `1.2.2` or a later version. You must also check `host.json` to make sure your app is referencing [Extension Bundles](../functions-bindings-register.md#extension-bundles) version 4.x to use the v2 model with Durable Functions.
+You can provide feedback and suggestions in the [Durable Functions SDK for Python repo](https://github.com/Azure/azure-functions-durable-python/issues).
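As a minimal sketch, a `host.json` that references Extension Bundles version 4.x looks like this:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
```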
::: zone-end

## Orchestration trigger
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The Map Control API is a convenient client library. This API allows you to easil
4. Save your changes to the file and open the HTML page in a browser. The map shown is the most basic map that you can make by calling `atlas.Map` using your account key.
- :::image type="content" source="./media/tutorial-search-location/basic-map.png" alt-text="A screenshot showing the most basic map that you can make by calling atlas.Map using your Azure Maps account key.":::
+ :::image type="content" source="./media/tutorial-search-location/basic-map.png" lightbox="./media/tutorial-search-location/basic-map.png" alt-text="A screenshot showing the most basic map that you can make by calling `atlas.Map` using your Azure Maps account key.":::
5. In the `GetMap` function, after initializing the map, add the following JavaScript code.
This section shows how to use the Maps [Search API] to find a point of interest
3. Save the **MapSearch.html** file and refresh your browser. You should see the map centered on Seattle with round-blue pins for locations of gas stations in the area.
- :::image type="content" source="./media/tutorial-search-location/pins-map.png" alt-text="A screenshot showing the map resulting from the search, which is a map showing Seattle with round-blue pins at locations of gas stations.":::
+ :::image type="content" source="./media/tutorial-search-location/pins-map.png" lightbox="./media/tutorial-search-location/pins-map.png" alt-text="A screenshot showing the map resulting from the search, which is a map showing Seattle with round-blue pins at locations of gas stations.":::
4. You can see the raw data that the map is rendering by entering the following HTTPRequest in your browser. Replace `<Your Azure Maps Subscription Key>` with your subscription key.
The map that we've made so far only looks at the longitude/latitude data for the
3. Save the file and refresh your browser. Now the map in the browser shows information popups when you hover over any of the search pins.
- :::image type="content" source="./media/tutorial-search-location/popup-map.png" alt-text="A screenshot of a map with information popups that appear when you hover over a search pin.":::
+ :::image type="content" source="./media/tutorial-search-location/popup-map.png" lightbox="./media/tutorial-search-location/popup-map.png" alt-text="A screenshot of a map with information popups that appear when you hover over a search pin.":::
* For the completed code used in this tutorial, see the [search tutorial] on GitHub.
* To view this sample live, see [Search for points of interest] on the **Azure Maps Code Samples** site.
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
This document focuses on best practices for the Azure Maps Web SDK, however, many of the best practices and optimizations outlined can be applied to all other Azure Maps SDKs.
-The Azure Maps Web SDK provides a powerful canvas for rendering large spatial data sets in many different ways. In some cases, there are multiple ways to render data the same way, but depending on the size of the data set and the desired functionality, one method may perform better than others. This article highlights best practices and tips and tricks to maximize performance and create a smooth user experience.
+The Azure Maps Web SDK provides a powerful canvas for rendering large spatial data sets in many different ways. In some cases, there are multiple ways to render data the same way, but depending on the size of the data set and the desired functionality, one method might perform better than others. This article highlights best practices and tips and tricks to maximize performance and create a smooth user experience.
Generally, when looking to improve performance of the map, look for ways to reduce the number of layers and sources, and the complexity of the data sets and rendering styles being used.
Often apps want to load the map to a specific location or style. Sometimes devel
The Web SDK has two data sources,
-* **GeoJSON source**: The `DataSource` class, manages raw location data in GeoJSON format locally. Good for small to medium data sets (upwards of hundreds of thousands of features).
-* **Vector tile source**: The `VectorTileSource` class, loads data formatted as vector tiles for the current map view, based on the maps tiling system. Ideal for large to massive data sets (millions or billions of features).
+* **GeoJSON source**: The `DataSource` class manages raw location data in GeoJSON format locally. Good for small to medium data sets (upwards of hundreds of thousands of features).
+* **Vector tile source**: The `VectorTileSource` class loads data formatted as vector tiles for the current map view, based on the map's tiling system. Ideal for large to massive data sets (millions or billions of features); see the sketch after this list.
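A rough sketch of both source types, assuming an existing `map` instance; the data URL and tile endpoint are placeholders:

```javascript
// GeoJSON source: the raw data is loaded and managed locally in the browser.
var pointSource = new atlas.source.DataSource();
map.sources.add(pointSource);
pointSource.importDataFromUrl('/data/points.geojson'); // placeholder URL

// Vector tile source: only the tiles covering the current view are requested.
var tileSource = new atlas.source.VectorTileSource(null, {
    tiles: ['https://example.com/tiles/{z}/{x}/{y}.pbf'], // placeholder endpoint
    maxZoom: 14
});
map.sources.add(tileSource);
```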
### Use tile-based solutions for large datasets
It's possible to store GeoJSON objects inline inside of JavaScript, however this
## Optimize rendering layers
-Azure maps provides several different layers for rendering data on a map. There are many optimizations you can take advantage of to tailor these layers to your scenario the increase performances and the overall user experience.
+Azure Maps provides several different layers for rendering data on a map. There are many optimizations you can take advantage of to tailor these layers to your scenario to increase performance and improve the overall user experience.
### Create layers once and reuse them
Unlike most layers in the Azure Maps Web control that use WebGL for rendering, H
The [Reusing Popup with Multiple Pins] code sample shows how to create a single popup and reuse it by updating its content and position. For the source code, see [Reusing Popup with Multiple Pins sample code].
-That said, if you only have a few points to render on the map, the simplicity of HTML markers may be preferred. Additionally, HTML markers can easily be made draggable if needed.
+That said, if you only have a few points to render on the map, the simplicity of HTML markers might be preferred. Additionally, HTML markers can easily be made draggable if needed.
### Combine layers
The symbol layer has two options that exist for both icon and text called `allow
### Cluster large point data sets
-When working with large sets of data points you may find that when rendered at certain zoom levels, many of the points overlap and are only partial visible, if at all. Clustering is process of grouping points that are close together and representing them as a single clustered point. As the user zooms in the map, clusters break apart into their individual points. This can significantly reduce the amount of data that needs to be rendered, make the map feel less cluttered, and improve performance. The `DataSource` class has options for clustering data locally. Additionally, many tools that generate vector tiles also have clustering options.
+When working with large sets of data points, you might find that when rendered at certain zoom levels, many of the points overlap and are only partially visible, if at all. Clustering is the process of grouping points that are close together and representing them as a single clustered point. As the user zooms in on the map, clusters break apart into their individual points. This can significantly reduce the amount of data that needs to be rendered, make the map feel less cluttered, and improve performance. The `DataSource` class has options for clustering data locally, as shown in the sketch that follows. Additionally, many tools that generate vector tiles also have clustering options.
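A minimal sketch of enabling clustering on a `DataSource`; the radius and zoom values are illustrative only:

```javascript
// Group points within 45 pixels of each other into a single cluster,
// and stop clustering once the map is zoomed past level 15.
var source = new atlas.source.DataSource(null, {
    cluster: true,
    clusterRadius: 45,
    clusterMaxZoom: 15
});
map.sources.add(source);
```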
Additionally, increase the size of the cluster radius to improve performance. The larger the cluster radius, the fewer clustered points there are to keep track of and render. For more information, see [Clustering point data in the Web SDK].

### Use weighted clustered heat maps
-The heat map layer can render tens of thousands of data points easily. For larger data sets, consider enabling clustering on the data source and using a small cluster radius and use the clusters `point_count` property as a weight for the height map. When the cluster radius is only a few pixels in size, there's little visual difference in the rendered heat map. Using a larger cluster radius improves performance more but may reduce the resolution of the rendered heat map.
+The heat map layer can render tens of thousands of data points easily. For larger data sets, consider enabling clustering on the data source, using a small cluster radius, and using the cluster's `point_count` property as a weight for the heat map. When the cluster radius is only a few pixels in size, there's little visual difference in the rendered heat map. Using a larger cluster radius improves performance more but might reduce the resolution of the rendered heat map.
```javascript var layer = new atlas.layer.HeatMapLayer(source, null, {
var layer = new atlas.layer.BubbleLayer(source, null, {
}); ```
-The above code functions fine if all features in the data source have a `myColor` property, and the value of that property is a color. This may not be an issue if you have complete control of the data in the data source and know for certain all features have a valid color in a `myColor` property. That said, to make this code safe from errors, a `case` expression can be used with the `has` expression to check that the feature has the `myColor` property. If it does, the `to-color` type expression can then be used to try to convert the value of that property to a color. If the color is invalid, a fallback color can be used. The following code demonstrates how to do this and sets the fallback color to green.
+The above code functions fine if all features in the data source have a `myColor` property, and the value of that property is a color. This might not be an issue if you have complete control of the data in the data source and know for certain all features have a valid color in a `myColor` property. That said, to make this code safe from errors, a `case` expression can be used with the `has` expression to check that the feature has the `myColor` property. If it does, the `to-color` type expression can then be used to try to convert the value of that property to a color. If the color is invalid, a fallback color can be used. The following code demonstrates how to do this and sets the fallback color to green.
```javascript var layer = new atlas.layer.BubbleLayer(source, null, {
Things to check:
* Ensure that you complete your authentication options in the map. Without authentication, the map loads a blank canvas and returns a 401 error in the network tab of the browser's developer tools.
* Ensure that you have an internet connection.
-* Check the console for errors of the browser's developer tools. Some errors may cause the map not to render. Debug your application.
+* Check the console in the browser's developer tools for errors. Some errors might cause the map not to render. Debug your application.
* Ensure you're using a [supported browser].

**All my data is showing up on the other side of the world, what's going on?**
Things to check:
**Why are icons or text in the symbol layer appearing in the wrong place?**

Check that the `anchor` and the `offset` options are configured correctly to align with the part of your image or text that you want to have aligned with the coordinate on the map.
-If the symbol is only out of place when the map is rotated, check the `rotationAlignment` option. By default, symbols rotate with the maps viewport, appearing upright to the user. However, depending on your scenario, it may be desirable to lock the symbol to the map's orientation by setting the `rotationAlignment` option to `map`.
+If the symbol is only out of place when the map is rotated, check the `rotationAlignment` option. By default, symbols rotate with the maps viewport, appearing upright to the user. However, depending on your scenario, it might be desirable to lock the symbol to the map's orientation by setting the `rotationAlignment` option to `map`.
-If the symbol is only out of place when the map is pitched/tilted, check the `pitchAlignment` option. By default, symbols stay upright in the maps viewport when the map is pitched or tilted. However, depending on your scenario, it may be desirable to lock the symbol to the map's pitch by setting the `pitchAlignment` option to `map`.
+If the symbol is only out of place when the map is pitched/tilted, check the `pitchAlignment` option. By default, symbols stay upright in the maps viewport when the map is pitched or tilted. However, depending on your scenario, it might be desirable to lock the symbol to the map's pitch by setting the `pitchAlignment` option to `map`.
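A short sketch of both alignment options, assuming a data source named `source` that's already added to the map:

```javascript
// Lock the symbols to the map's orientation and pitch so they rotate
// and tilt with the map instead of staying screen-aligned.
var layer = new atlas.layer.SymbolLayer(source, null, {
    iconOptions: {
        rotationAlignment: 'map',
        pitchAlignment: 'map'
    }
});
map.layers.add(layer);
```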
**Why isn't any of my data appearing on the map?**
Things to check:
* Check the console in the browser's developer tools for errors.
* Ensure that a data source has been created and added to the map, and that the data source has been connected to a rendering layer that has also been added to the map.
* Add break points in your code and step through it. Ensure data is added to the data source and the data source and layers are added to the map.
-* Try removing data-driven expressions from your rendering layer. It's possible that one of them may have an error in it that is causing the issue.
+* Try removing data-driven expressions from your rendering layer. It's possible that one of them might have an error in it that is causing the issue.
**Can I use the Azure Maps Web SDK in a sandboxed iframe?**
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Operating system | Azure Monitor agent <sup>1</sup> | Log Analytics agent (legacy) <sup>1</sup> | Diagnostics extension <sup>2</sup>|
|:|::|::|::|
+| AlmaLinux 9 | ✓<sup>3</sup> | | |
| AlmaLinux 8 | ✓<sup>3</sup> | ✓ | |
| Amazon Linux 2017.09 | | ✓ | |
| Amazon Linux 2 | ✓ | ✓ | |
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Debian 9 | ✓ | ✓ | ✓ |
| Debian 8 | | ✓ | |
| OpenSUSE 15 | ✓ | | |
+| Oracle Linux 9 | ✓ | | |
| Oracle Linux 8 | ✓ | ✓ | |
| Oracle Linux 7 | ✓ | ✓ | ✓ |
| Oracle Linux 6.4+ | | | ✓ |
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Red Hat Enterprise Linux Server 8.0-8.5 | ✓ | ✓ | ✓<sup>2</sup> |
| Red Hat Enterprise Linux Server 7 | ✓ | ✓ | ✓ |
| Red Hat Enterprise Linux Server 6.7+ | | | ✓ |
+| Rocky Linux 9 | ✓ | | |
| Rocky Linux 8 | ✓ | ✓ | |
| SUSE Linux Enterprise Server 15 SP4 | ✓<sup>3</sup> | | |
| SUSE Linux Enterprise Server 15 SP3 | ✓ | | |
azure-monitor Alerts Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-plan.md
You want to create alerts for any important information in your environment. But
Typically, you'll want to alert on issues for all your critical Azure applications and resources. Use the following methods for creating alert rules at scale:

-- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Monitoring at scale using metric alerts in Azure Monitor](alerts-metric-overview.md#monitoring-at-scale-using-metric-alerts-in-azure-monitor).
+- Azure Monitor supports monitoring multiple resources of the same type with one metric alert rule for resources that exist in the same Azure region. For a list of Azure services that are currently supported for this feature, see [Supported resources for metric alerts in Azure Monitor](alerts-metric-near-real-time.md).
- For metric alert rules for Azure services that don't support multiple resources, use automation tools such as the Azure CLI and PowerShell with Resource Manager templates to create the same alert rule for multiple resources. For samples, see [Resource Manager template samples for metric alert rules in Azure Monitor](resource-manager-alerts-metric.md).
- To return data for multiple resources, write queries in log query alert rules. Use the **Split by dimensions** setting in the rule to create separate alerts for each resource.
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
The information in this table can help you decide when to use each type of alert
|Alert type |When to use |Pricing information| |||| |Metric alert|Metric data is stored in the system already pre-computed. Metric alerts are useful when you want to be alerted about data that requires little or no manipulation. Use metric alerts if the data you want to monitor is available in metric data.|Each metric alert rule is charged based on the number of time series that are monitored. |
-|Log alert|You can use log alerts to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of Kusto Query Language (KQL) for data manipulation by using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log alerts configured for [at-scale monitoring](#splitting-by-dimensions-in-log-alert-rules), the cost also depends on the number of time series created by the dimensions resulting from your query. |
+|Log alert|You can use log alerts to perform advanced logic operations on your data. If the data you want to monitor is available in logs, or requires advanced logic, you can use the robust features of Kusto Query Language (KQL) for data manipulation by using log alerts.|Each log alert rule is billed based on the interval at which the log query is evaluated. More frequent query evaluation results in a higher cost. For log alerts configured for at-scale monitoring using splitting by dimensions, the cost also depends on the number of time series created by the dimensions resulting from your query. |
|Activity log alert|Activity logs provide auditing of all actions that occurred on resources. Use activity log alerts to be alerted when a specific event happens to a resource like a restart, a shutdown, or the creation or deletion of a resource. Service Health alerts and Resource Health alerts let you know when there's an issue with one of your services or resources.|For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/).| |Prometheus alerts|Prometheus alerts are used for alerting on Prometheus metrics stored in [Azure Monitor managed services for Prometheus](../essentials/prometheus-metrics-overview.md). The alert rules are based on the PromQL open-source query language. |Prometheus alert rules are only charged on the data queried by the rules. For more information, see the [pricing page](https://azure.microsoft.com/pricing/details/monitor/). |
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
These properties are client specific, so you can configure `appInsights.defaultC
| correlationIdRetryIntervalMs | The time to wait before retrying to retrieve the ID for cross-component correlation. (Default is `30000`.) | | correlationHeaderExcludedDomains| A list of domains to exclude from cross-component correlation header injection. (Default. See [Config.ts](https://github.com/Microsoft/ApplicationInsights-node.js/blob/develop/Library/Config.ts).)|
-## How do I customize logs collection?
-
-By default, Application Insights Node.js SDK logs at warning level to console.
-
-To spot and diagnose issues with Application Insights, "Self-diagnostics" can be enabled. This means collection of internal logging from the Application Insights Node.js SDK.
-
-The following code demonstrates how to enable debug logging as well as generate telemetry for internal logs.
-
-```
-let appInsights = require("applicationinsights");
-appInsights.setup("<YOUR_CONNECTION_STRING>")
- .setInternalLogging(true, true) // Enable both debug and warning logging
- .setAutoCollectConsole(true, true) // Generate Trace telemetry for winston/bunyan and console logs
- .start();
-
-Logs could be put into local file using APPLICATIONINSIGHTS_LOG_DESTINATION environment variable, supported values are file and file+console, a file named applicationinsights.log will be generated on tmp folder by default, including all logs, /tmp for *nix and USERDIR\\AppData\\Local\\Temp for Windows. Log directory could be configured using APPLICATIONINSIGHTS_LOGDIR environment variable.
-
-process.env.APPLICATIONINSIGHTS_LOG_DESTINATION = "file+console";
-process.env.APPLICATIONINSIGHTS_LOGDIR = "C:\\applicationinsights\\logs";
-
-// Application Insights SDK setup....
-```
- ## Troubleshooting -
-For more information, see [Troubleshoot Application Insights monitoring of Node.js apps and services](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-app-insights-nodejs).
+For troubleshooting information, including "no data" scenarios and customizing logs, see [Troubleshoot Application Insights monitoring of Node.js apps and services](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-app-insights-nodejs).
## Next steps
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Sampling is a feature in [Application Insights](./app-insights-overview.md). It'
When metric counts are presented in the portal, they're renormalized to take into account sampling. Doing so minimizes any effect on the statistics.
+> [!NOTE]
+> - If you've adopted our OpenTelemetry Distro and are looking for configuration options, see [Enable Sampling](opentelemetry-configuration.md#enable-sampling).
++

## Brief summary

* There are three different types of sampling: adaptive sampling, fixed-rate sampling, and ingestion sampling.
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
All custom tables created with or migrated to the [data collection rule (DCR)-ba
| Managed Lustre | [AFSAuditLogs](/azure/azure-monitor/reference/tables/AFSAuditLogs) |
| Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) |
| Monitor | [AzureMetricsV2](/azure/azure-monitor/reference/tables/AzureMetricsV2) |
+| Network managers | [AVNMConnectivityConfigurationChange](/azure/azure-monitor/reference/tables/AVNMConnectivityConfigurationChange) |
| Nexus Clusters | [NCCKubernetesLogs](/azure/azure-monitor/reference/tables/NCCKubernetesLogs)<br>[NCCVMOrchestrationLogs](/azure/azure-monitor/reference/tables/NCCVMOrchestrationLogs) |
| Nexus Storage Appliances | [NCSStorageLogs](/azure/azure-monitor/reference/tables/NCSStorageLogs)<br>[NCSStorageAlerts](/azure/azure-monitor/reference/tables/NCSStorageAlerts) |
| Redis cache | [ACRConnectedClientList](/azure/azure-monitor/reference/tables/ACRConnectedClientList) |
azure-monitor Manage Logs Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-logs-tables.md
Reduce costs and analysis effort by using data collection rules to [filter out a
## View table properties
+> [!NOTE]
+> The table name is case sensitive.
+ # [Portal](#tab/azure-portal)

To view and set table properties in the Azure portal:
To view table properties using PowerShell, run:
Invoke-AzRestMethod -Path "/subscriptions/ContosoSID/resourcegroups/ContosoRG/providers/microsoft.operationalinsights/workspaces/ContosoWorkspace/tables/Heartbeat?api-version=2021-12-01-preview" -Method GET ```
-> [!NOTE]
-> The table name used in the `-Path` parameter is case sensitive.
- **Sample response** ```json
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
> > !["An rss icon"](./media//whats-new/rss.png) https://aka.ms/azmon/rss
+## October 2023
+
+|Subservice | Article | Description |
+||||
+General|[Best practices for monitoring Kubernetes with Azure Monitor](best-practices-containers.md)|New article.|
+General|[Estimate Azure Monitor costs](cost-estimate.md)|New article describing use of Azure Monitor pricing calculator.|
+General|[Azure Monitor billing meter names](cost-meters.md)|Billing meters moved into dedicated reference article.|
+General|[Azure Monitor cost and usage](cost-usage.md)|Rewritten.|
+Agents|[Collect logs from a text or JSON file with Azure Monitor Agent](agents/data-collection-text-log.md)|Added the ability to collect logs from a JSON file with Azure Monitor Agent.|
+Alerts|[Create or edit an alert rule](alerts/alerts-create-new-alert-rule.md)|Custom properties for Azure Monitor alerts are now located in the Details tab when creating or editing an alert rule. |
+Alerts|[Create or edit an alert rule](alerts/alerts-create-new-alert-rule.md)|Added note clarifying the limitations of setting the frequency of alert rules to one minute. |
+Application-Insights|[IP addresses used by Azure Monitor](app/ip-addresses.md)|A logic model diagram is available to assist with troubleshooting scenarios.|
+Application-Insights|[Application Insights Overview dashboard](app/overview-dashboard.md)|All of the Application Insights experiences are now defined in a manner that mirrors the Azure portal experience. We've included a logic model diagram to visually convey how Application Insights works at a high level.|
+Application-Insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python, and Java applications](app/opentelemetry-enable.md)|Our OpenTelemetry Distro released for .NET, Java, Python, and Node.js. This is a replacement for classic Application Insights SDKs.|
+Essentials|[Collect IIS logs with Azure Monitor Agent](agents/data-collection-iis.md)|Added guidance on setting up data collection endpoints based on deployment.|
+Logs|[Restore logs in Azure Monitor](logs/restore.md)|Updated information about the cost of restoring logs. |
+Logs|[Log Analytics workspace data export in Azure Monitor](logs/logs-data-export.md)|Billing for Data Export was enabled in early October 2023.|
+Logs|[Analyze usage in a Log Analytics workspace](logs/analyze-usage.md)|Added support for querying data volume from events directly, and by computer.|
++

## September 2023

|Subservice | Article | Description |
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references to SAP on Azure solutions.
* [SAP HANA Azure virtual machine storage configurations](../virtual-machines/workloads/sap/hana-vm-operations-storage.md)
* [SAP on Azure NetApp Files Sizing Best Practices](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/sap-on-azure-netapp-files-sizing-best-practices/ba-p/3895300)
* [Optimize HANA deployments with Azure NetApp Files application volume group for SAP HANA](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/optimize-hana-deployments-with-azure-netapp-files-application/ba-p/3683417)
+* [Configuring Azure NetApp Files Application Volume Group (AVG) for zonal SAP HANA deployment](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/configuring-azure-netapp-files-anf-application-volume-group-avg/ba-p/3943801)
* [Using Azure NetApp Files AVG for SAP HANA to deploy HANA with multiple partitions (MP)](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/using-azure-netapp-files-avg-for-sap-hana-to-deploy-hana-with/ba-p/3742747)
* [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md)
* [High availability of SAP HANA Scale-up with Azure NetApp Files on Red Hat Enterprise Linux](../virtual-machines/workloads/sap/sap-hana-high-availability-netapp-files-red-hat.md)
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
Azure NetApp Files backup is supported for the following regions:
* East US * East US 2 * France Central
+* Germany North
* Germany West Central * Japan East * Japan West
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
Title: Resource providers by Azure services description: Lists all resource provider namespaces for Azure Resource Manager and shows the Azure service for that namespace. Previously updated : 08/02/2023 Last updated : 11/06/2023 content_well_notification: - AI-contribution
content_well_notification:
# Resource providers for Azure services
-This article shows how resource provider namespaces map to Azure services. If you don't know the resource provider, see [Find resource provider](#find-resource-provider).
+This article connects resource provider namespaces to Azure services. If you don't know the resource provider, see [Find resource provider](#find-resource-provider).
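For instance, you can list provider namespaces and check their registration state with Azure PowerShell; `Microsoft.Compute` below is just an example namespace:

```powershell
# List all available resource providers with their registration state.
Get-AzResourceProvider -ListAvailable |
    Select-Object ProviderNamespace, RegistrationState

# Inspect a single provider namespace, for example Microsoft.Compute.
Get-AzResourceProvider -ProviderNamespace Microsoft.Compute
```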
-## Match resource provider to service
+## AI and machine learning resource providers
-The resources providers that are marked with **- registered** are registered by default for your subscription. For more information, see [Registration](#registration).
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.AutonomousSystems | [Autonomous Systems](https://www.microsoft.com/ai/autonomous-systems) |
+| Microsoft.BotService | [Azure Bot Service](/azure/bot-service/) |
+| Microsoft.CognitiveServices | [Cognitive Services](../../ai-services/index.yml) |
+| Microsoft.EnterpriseKnowledgeGraph | Enterprise Knowledge Graph |
+| Microsoft.MachineLearning | [Machine Learning Studio](../../machine-learning/classic/index.yml) |
+| Microsoft.MachineLearningServices | [Azure Machine Learning](../../machine-learning/index.yml) |
+| Microsoft.Search | [Azure Cognitive Search](../../search/index.yml) |
+
+## Analytics resource providers
| Resource provider namespace | Azure service |
| | - |
-| Microsoft.AAD | [Microsoft Entra Domain Services](../../active-directory-domain-services/index.yml) |
-| Microsoft.Addons | core |
-| Microsoft.App | [Azure Container Apps](../../container-apps/index.yml) |
-| Microsoft.ADHybridHealthService - [registered](#registration) | [Microsoft Entra ID](../../active-directory/index.yml) |
-| Microsoft.Advisor | [Azure Advisor](../../advisor/index.yml) |
-| Microsoft.AlertsManagement | [Azure Monitor](../../azure-monitor/index.yml) |
| Microsoft.AnalysisServices | [Azure Analysis Services](../../analysis-services/index.yml) |
-| Microsoft.ApiManagement | [API Management](../../api-management/index.yml) |
-| Microsoft.AppConfiguration | [Azure App Configuration](../../azure-app-configuration/index.yml) |
+| Microsoft.Databricks | [Azure Databricks](/azure/azure-databricks/) |
+| Microsoft.DataCatalog | [Data Catalog](../../data-catalog/index.yml) |
+| Microsoft.DataFactory | [Data Factory](../../data-factory/index.yml) |
+| Microsoft.DataLakeAnalytics | [Data Lake Analytics](../../data-lake-analytics/index.yml) |
+| Microsoft.DataLakeStore | [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md) |
+| Microsoft.DataShare | [Azure Data Share](../../data-share/index.yml) |
+| Microsoft.HDInsight | [HDInsight](../../hdinsight/index.yml) |
+| Microsoft.Kusto | [Azure Data Explorer](/azure/data-explorer/) |
+| Microsoft.PowerBI | [Power BI](/power-bi/power-bi-overview) |
+| Microsoft.PowerBIDedicated | [Power BI Embedded](/azure/power-bi-embedded/) |
+| Microsoft.ProjectBabylon | [Azure Data Catalog](../../data-catalog/overview.md) |
+| Microsoft.Purview | [Microsoft Purview](/purview/purview) |
+| Microsoft.StreamAnalytics | [Azure Stream Analytics](../../stream-analytics/index.yml) |
+| Microsoft.Synapse | [Azure Synapse Analytics](/azure/sql-data-warehouse/) |
+
+## Blockchain resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.Blockchain | [Azure Blockchain Service](../../blockchain/workbench/index.yml) |
+| Microsoft.BlockchainTokens | [Azure Blockchain Tokens](https://azure.microsoft.com/services/blockchain-tokens/) |
+
+## Compute resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
| Microsoft.AppPlatform | [Azure Spring Apps](../../spring-apps/overview.md) |
-| Microsoft.Attestation | Azure Attestation Service |
-| Microsoft.Authorization - [registered](#registration) | [Azure Resource Manager](../index.yml) |
-| Microsoft.Automation | [Automation](../../automation/index.yml) |
-| Microsoft.AutonomousSystems | [Autonomous Systems](https://www.microsoft.com/ai/autonomous-systems) |
| Microsoft.AVS | [Azure VMware Solution](../../azure-vmware/index.yml) |
-| Microsoft.AzureActiveDirectory | [Microsoft Entra ID B2C](../../active-directory-b2c/index.yml) |
-| Microsoft.AzureArcData | Azure Arc-enabled data services |
-| Microsoft.AzureData | SQL Server registry |
-| Microsoft.AzureStack | core |
-| Microsoft.AzureStackHCI | [Azure Stack HCI](/azure-stack/hci/overview) |
| Microsoft.Batch | [Batch](../../batch/index.yml) |
-| Microsoft.Billing - [registered](#registration) | [Cost Management and Billing](/azure/billing/) |
-| Microsoft.BingMaps | [Bing Maps](/BingMaps/#pivot=main&panel=BingMapsAPI) |
-| Microsoft.Blockchain | [Azure Blockchain Service](../../blockchain/workbench/index.yml) |
-| Microsoft.BlockchainTokens | [Azure Blockchain Tokens](https://azure.microsoft.com/services/blockchain-tokens/) |
-| Microsoft.Blueprint | [Azure Blueprints](../../governance/blueprints/index.yml) |
-| Microsoft.BotService | [Azure Bot Service](/azure/bot-service/) |
-| Microsoft.Cache | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) |
-| Microsoft.Capacity | core |
-| Microsoft.Cdn | [Content Delivery Network](../../cdn/index.yml) |
-| Microsoft.CertificateRegistration | [App Service Certificates](../../app-service/configure-ssl-app-service-certificate.md) |
-| Microsoft.ChangeAnalysis | [Azure Monitor](../../azure-monitor/index.yml) |
| Microsoft.ClassicCompute | Classic deployment model virtual machine |
-| Microsoft.ClassicInfrastructureMigrate | Classic deployment model migration |
-| Microsoft.ClassicNetwork | Classic deployment model virtual network |
-| Microsoft.ClassicStorage | Classic deployment model storage |
-| Microsoft.ClassicSubscription - [registered](#registration) | Classic deployment model |
-| Microsoft.CognitiveServices | [Cognitive Services](../../ai-services/index.yml) |
-| Microsoft.Commerce - [registered](#registration) | core |
-| Microsoft.Communication | [Azure Communication Services](../../communication-services/overview.md) |
| Microsoft.Compute | [Virtual Machines](../../virtual-machines/index.yml)<br />[Virtual Machine Scale Sets](../../virtual-machine-scale-sets/index.yml) |
-| Microsoft.Consumption - [registered](#registration) | [Cost Management](/azure/cost-management/) |
+| Microsoft.DesktopVirtualization | [Azure Virtual Desktop](../../virtual-desktop/index.yml) |
+| Microsoft.DevTestLab | [Azure Lab Services](../../lab-services/index.yml) |
+| Microsoft.HanaOnAzure | [SAP HANA on Azure Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) |
+| Microsoft.LabServices | [Azure Lab Services](../../lab-services/index.yml) |
+| Microsoft.Maintenance | [Azure Maintenance](../../virtual-machines/maintenance-configurations.md) |
+| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/overview.md) |
+| Microsoft.Quantum | [Azure Quantum](https://azure.microsoft.com/services/quantum/) |
+| Microsoft.SerialConsole - [registered by default](#registration) | [Azure Serial Console for Windows](/troubleshoot/azure/virtual-machines/serial-console-windows) |
+| Microsoft.ServiceFabric | [Service Fabric](../../service-fabric/index.yml) |
+| Microsoft.VirtualMachineImages | [Azure Image Builder](../../virtual-machines/image-builder-overview.md) |
+| Microsoft.VMware | [Azure VMware Solution](../../azure-vmware/index.yml) |
+| Microsoft.VMwareCloudSimple | [Azure VMware Solution by CloudSimple](../../vmware-cloudsimple/index.md) |
+
+## Container resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.App | [Azure Container Apps](../../container-apps/index.yml) |
| Microsoft.ContainerInstance | [Container Instances](../../container-instances/index.yml) |
| Microsoft.ContainerRegistry | [Container Registry](../../container-registry/index.yml) |
| Microsoft.ContainerService | [Azure Kubernetes Service (AKS)](../../aks/index.yml) |
-| Microsoft.CostManagement - [registered](#registration) | [Cost Management](/azure/cost-management/) |
-| Microsoft.CostManagementExports | [Cost Management](/azure/cost-management/) |
-| Microsoft.CustomerLockbox | [Customer Lockbox for Microsoft Azure](../../security/fundamentals/customer-lockbox-overview.md) |
-| Microsoft.CustomProviders | [Azure Custom Providers](../custom-providers/overview.md) |
-| Microsoft.DataBox | [Azure Data Box](../../databox/index.yml) |
-| Microsoft.DataBoxEdge | [Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md) |
-| Microsoft.Databricks | [Azure Databricks](/azure/azure-databricks/) |
-| Microsoft.DataCatalog | [Data Catalog](../../data-catalog/index.yml) |
-| Microsoft.DataFactory | [Data Factory](../../data-factory/index.yml) |
-| Microsoft.DataLakeAnalytics | [Data Lake Analytics](../../data-lake-analytics/index.yml) |
-| Microsoft.DataLakeStore | [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md) |
-| Microsoft.DataMigration | [Azure Database Migration Service](../../dms/index.yml) |
-| Microsoft.DataProtection | Data Protection |
-| Microsoft.DataShare | [Azure Data Share](../../data-share/index.yml) |
+| Microsoft.RedHatOpenShift | [Azure Red Hat OpenShift](../../virtual-machines/linux/openshift-get-started.md) |
+
+## Core resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.Addons | core |
+| Microsoft.AzureStack | core |
+| Microsoft.Capacity | core |
+| Microsoft.Commerce - [registered by default](#registration) | core |
+| Microsoft.Marketplace | core |
+| Microsoft.MarketplaceApps | core |
+| Microsoft.MarketplaceOrdering - [registered by default](#registration) | core |
+| Microsoft.SaaS | core |
+| Microsoft.Services | core |
+| Microsoft.Subscription | core |
+| microsoft.support - [registered by default](#registration) | core |
+
+## Database resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.AzureData | SQL Server registry |
+| Microsoft.Cache | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) |
| Microsoft.DBforMariaDB | [Azure Database for MariaDB](../../mariadb/index.yml) |
| Microsoft.DBforMySQL | [Azure Database for MySQL](../../mysql/index.yml) |
| Microsoft.DBforPostgreSQL | [Azure Database for PostgreSQL](../../postgresql/index.yml) |
-| Microsoft.DesktopVirtualization | [Azure Virtual Desktop](../../virtual-desktop/index.yml) |
-| Microsoft.Devices | [Azure IoT Hub](../../iot-hub/index.yml)<br />[Azure IoT Hub Device Provisioning Service](../../iot-dps/index.yml) |
-| Microsoft.DeviceUpdate | [Device Update for IoT Hub](../../iot-hub-device-update/index.yml)
-| Microsoft.DevSpaces | [Azure Dev Spaces](/previous-versions/azure/dev-spaces/) |
-| Microsoft.DevTestLab | [Azure Lab Services](../../lab-services/index.yml) |
-| Microsoft.DigitalTwins | [Azure Digital Twins](../../digital-twins/overview.md) |
| Microsoft.DocumentDB | [Azure Cosmos DB](../../cosmos-db/index.yml) |
-| Microsoft.DomainRegistration | [App Service](../../app-service/index.yml) |
-| Microsoft.DynamicsLcs | [Lifecycle Services](https://lcs.dynamics.com/Logon/Index) |
-| Microsoft.ElasticSan | [Elastic SAN Preview](../../storage/elastic-san/index.yml) |
-| Microsoft.EnterpriseKnowledgeGraph | Enterprise Knowledge Graph |
+| Microsoft.Sql | [Azure SQL Database](/azure/azure-sql/database/index)<br /> [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/index) <br />[Azure Synapse Analytics](/azure/sql-data-warehouse/) |
+| Microsoft.SqlVirtualMachine | [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) |
+
+## Developer tools resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.AppConfiguration | [Azure App Configuration](../../azure-app-configuration/index.yml) |
+| Microsoft.DevSpaces | [Azure Dev Spaces](/previous-versions/azure/dev-spaces/) |
+| Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) |
+| Microsoft.Notebooks | [Azure Notebooks](https://notebooks.azure.com/help/introduction) |
+
+## DevOps resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| microsoft.visualstudio | [Azure DevOps](/azure/devops/) |
+| Microsoft.VSOnline | [Azure DevOps](/azure/devops/) |
+
+## Hybrid resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.AzureArcData | Azure Arc-enabled data services |
+| Microsoft.AzureStackHCI | [Azure Stack HCI](/azure-stack/hci/overview) |
+| Microsoft.HybridCompute | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) |
+| Microsoft.Kubernetes | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) |
+| Microsoft.KubernetesConfiguration | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) |
+
+## Identity resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.AAD | [Microsoft Entra Domain Services](../../active-directory-domain-services/index.yml) |
+| Microsoft.ADHybridHealthService - [registered by default](#registration) | [Microsoft Entra ID](../../active-directory/index.yml) |
+| Microsoft.AzureActiveDirectory | [Microsoft Entra ID B2C](../../active-directory-b2c/index.yml) |
+| Microsoft.ManagedIdentity | [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/index.yml) |
+| Microsoft.Token | Token |
+
+## Integration resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.ApiManagement | [API Management](../../api-management/index.yml) |
+| Microsoft.Communication | [Azure Communication Services](../../communication-services/overview.md) |
| Microsoft.EventGrid | [Event Grid](../../event-grid/index.yml) |
| Microsoft.EventHub | [Event Hubs](../../event-hubs/index.yml) |
-| Microsoft.Features - [registered](#registration) | [Azure Resource Manager](../index.yml) |
-| Microsoft.GuestConfiguration | [Azure Policy](../../governance/policy/index.yml) |
-| Microsoft.HanaOnAzure | [SAP HANA on Azure Large Instances](../../virtual-machines/workloads/sap/hana-overview-architecture.md) |
-| Microsoft.HardwareSecurityModules | [Azure Dedicated HSM](../../dedicated-hsm/index.yml) |
-| Microsoft.HDInsight | [HDInsight](../../hdinsight/index.yml) |
| Microsoft.HealthcareApis (Azure API for FHIR) | [Azure API for FHIR](../../healthcare-apis/azure-api-for-fhir/index.yml) |
| Microsoft.HealthcareApis (Healthcare APIs) | [Healthcare APIs](../../healthcare-apis/index.yml) |
-| Microsoft.HybridCompute | [Azure Arc-enabled servers](../../azure-arc/servers/index.yml) |
-| Microsoft.HybridData | [StorSimple](../../storsimple/index.yml) |
-| Microsoft.HybridNetwork | [Network Function Manager](../../network-function-manager/index.yml) |
-| Microsoft.ImportExport | [Azure Import/Export](../../import-export/storage-import-export-service.md) |
-| Microsoft.Insights | [Azure Monitor](../../azure-monitor/index.yml) |
+| Microsoft.Logic | [Logic Apps](../../logic-apps/index.yml) |
+| Microsoft.NotificationHubs | [Notification Hubs](../../notification-hubs/index.yml) |
+| Microsoft.PowerPlatform | [Power Platform](/power-platform/) |
+| Microsoft.Relay | [Azure Relay](../../azure-relay/relay-what-is-it.md) |
+| Microsoft.ServiceBus | [Service Bus](/azure/service-bus/) |
+
+## IoT resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.Devices | [Azure IoT Hub](../../iot-hub/index.yml)<br />[Azure IoT Hub Device Provisioning Service](../../iot-dps/index.yml) |
+| Microsoft.DeviceUpdate | [Device Update for IoT Hub](../../iot-hub-device-update/index.yml) |
+| Microsoft.DigitalTwins | [Azure Digital Twins](../../digital-twins/overview.md) |
| Microsoft.IoTCentral | [Azure IoT Central](../../iot-central/index.yml) |
| Microsoft.IoTSpaces | [Azure Digital Twins](../../digital-twins/index.yml) |
-| Microsoft.Intune | [Azure Monitor](../../azure-monitor/index.yml) |
-| Microsoft.KeyVault | [Key Vault](../../key-vault/index.yml) |
-| Microsoft.Kubernetes | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) |
-| Microsoft.KubernetesConfiguration | [Azure Arc-enabled Kubernetes](../../azure-arc/kubernetes/index.yml) |
-| Microsoft.Kusto | [Azure Data Explorer](/azure/data-explorer/) |
-| Microsoft.LabServices | [Azure Lab Services](../../lab-services/index.yml) |
-| Microsoft.Logic | [Logic Apps](../../logic-apps/index.yml) |
-| Microsoft.MachineLearning | [Machine Learning Studio](../../machine-learning/classic/index.yml) |
-| Microsoft.MachineLearningServices | [Azure Machine Learning](../../machine-learning/index.yml) |
-| Microsoft.Maintenance | [Azure Maintenance](../../virtual-machines/maintenance-configurations.md) |
-| Microsoft.ManagedIdentity | [Managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/index.yml) |
-| Microsoft.ManagedNetwork | Virtual networks managed by PaaS services |
+| Microsoft.TimeSeriesInsights | [Azure Time Series Insights](../../time-series-insights/index.yml) |
+| Microsoft.WindowsIoT | [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) |
+
+## Management resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.Advisor | [Azure Advisor](../../advisor/index.yml) |
+| Microsoft.Authorization - [registered by default](#registration) | [Azure Resource Manager](../index.yml) |
+| Microsoft.Automation | [Automation](../../automation/index.yml) |
+| Microsoft.Billing - [registered by default](#registration) | [Cost Management and Billing](/azure/billing/) |
+| Microsoft.Blueprint | [Azure Blueprints](../../governance/blueprints/index.yml) |
+| Microsoft.ClassicSubscription - [registered by default](#registration) | Classic deployment model |
+| Microsoft.Consumption - [registered by default](#registration) | [Cost Management](/azure/cost-management/) |
+| Microsoft.CostManagement - [registered by default](#registration) | [Cost Management](/azure/cost-management/) |
+| Microsoft.CostManagementExports | [Cost Management](/azure/cost-management/) |
+| Microsoft.CustomProviders | [Azure Custom Providers](../custom-providers/overview.md) |
+| Microsoft.DynamicsLcs | [Lifecycle Services](https://lcs.dynamics.com/Logon/Index) |
+| Microsoft.Features - [registered by default](#registration) | [Azure Resource Manager](../index.yml) |
+| Microsoft.GuestConfiguration | [Azure Policy](../../governance/policy/index.yml) |
| Microsoft.ManagedServices | [Azure Lighthouse](../../lighthouse/index.yml) |
| Microsoft.Management | [Management Groups](../../governance/management-groups/index.yml) |
-| Microsoft.Maps | [Azure Maps](../../azure-maps/index.yml) |
-| Microsoft.Marketplace | core |
-| Microsoft.MarketplaceApps | core |
-| Microsoft.MarketplaceOrdering - [registered](#registration) | core |
+| Microsoft.PolicyInsights | [Azure Policy](../../governance/policy/index.yml) |
+| Microsoft.Portal - [registered by default](#registration) | [Azure portal](../../azure-portal/index.yml) |
+| Microsoft.RecoveryServices | [Azure Site Recovery](../../site-recovery/index.yml) |
+| Microsoft.ResourceGraph - [registered by default](#registration) | [Azure Resource Graph](../../governance/resource-graph/index.yml) |
+| Microsoft.ResourceHealth | [Azure Service Health](../../service-health/index.yml) |
+| Microsoft.Resources - [registered by default](#registration) | [Azure Resource Manager](../index.yml) |
+| Microsoft.Scheduler | [Scheduler](../../scheduler/index.yml) |
+| Microsoft.SoftwarePlan | License |
+| Microsoft.Solutions | [Azure Managed Applications](../managed-applications/index.yml) |
+
+## Media resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
| Microsoft.Media | [Media Services](/azure/media-services/) |
-| Microsoft.Microservices4Spring | [Azure Spring Apps](../../spring-apps/overview.md) |
-| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) |
-| Microsoft.MixedReality | [Azure Spatial Anchors](../../spatial-anchors/index.yml) |
-| Microsoft.MobileNetwork | [Azure Private 5G Core](../../private-5g-core/index.yml) |
-| Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) |
-| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Azure Route Server](../../route-server/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> |
-| Microsoft.Notebooks | [Azure Notebooks](https://notebooks.azure.com/help/introduction) |
-| Microsoft.NotificationHubs | [Notification Hubs](../../notification-hubs/index.yml) |
-| Microsoft.ObjectStore | Object Store |
+
+## Migration resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.ClassicInfrastructureMigrate | Classic deployment model migration |
+| Microsoft.DataBox | [Azure Data Box](../../databox/index.yml) |
+| Microsoft.DataBoxEdge | [Azure Stack Edge](../../databox-online/azure-stack-edge-overview.md) |
+| Microsoft.DataMigration | [Azure Database Migration Service](../../dms/index.yml) |
| Microsoft.OffAzure | [Azure Migrate](../../migrate/migrate-services-overview.md) |
+| Microsoft.Migrate | [Azure Migrate](../../migrate/migrate-services-overview.md) |
+
+## Monitoring resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.AlertsManagement | [Azure Monitor](../../azure-monitor/index.yml) |
+| Microsoft.ChangeAnalysis | [Azure Monitor](../../azure-monitor/index.yml) |
+| Microsoft.Insights | [Azure Monitor](../../azure-monitor/index.yml) |
+| Microsoft.Intune | [Azure Monitor](../../azure-monitor/index.yml) |
| Microsoft.OperationalInsights | [Azure Monitor](../../azure-monitor/index.yml) |
| Microsoft.OperationsManagement | [Azure Monitor](../../azure-monitor/index.yml) |
+| Microsoft.WorkloadMonitor | [Azure Monitor](../../azure-monitor/index.yml) |
+
+## Network resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.Cdn | [Content Delivery Network](../../cdn/index.yml) |
+| Microsoft.ClassicNetwork | Classic deployment model virtual network |
+| Microsoft.ManagedNetwork | Virtual networks managed by PaaS services |
+| Microsoft.Network | [Application Gateway](../../application-gateway/index.yml)<br />[Azure Bastion](../../bastion/index.yml)<br />[Azure DDoS Protection](../../ddos-protection/ddos-protection-overview.md)<br />[Azure DNS](../../dns/index.yml)<br />[Azure ExpressRoute](../../expressroute/index.yml)<br />[Azure Firewall](../../firewall/index.yml)<br />[Azure Front Door Service](../../frontdoor/index.yml)<br />[Azure Private Link](../../private-link/index.yml)<br />[Azure Route Server](../../route-server/index.yml)<br />[Load Balancer](../../load-balancer/index.yml)<br />[Network Watcher](../../network-watcher/index.yml)<br />[Traffic Manager](../../traffic-manager/index.yml)<br />[Virtual Network](../../virtual-network/index.yml)<br />[Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md)<br />[Virtual WAN](../../virtual-wan/index.yml)<br />[VPN Gateway](../../vpn-gateway/index.yml)<br /> |
| Microsoft.Peering | [Azure Peering Service](../../peering-service/index.yml) |
-| Microsoft.PolicyInsights | [Azure Policy](../../governance/policy/index.yml) |
-| Microsoft.Portal - [registered](#registration) | [Azure portal](../../azure-portal/index.yml) |
-| Microsoft.PowerBI | [Power BI](/power-bi/power-bi-overview) |
-| Microsoft.PowerBIDedicated | [Power BI Embedded](/azure/power-bi-embedded/) |
-| Microsoft.PowerPlatform | [Power Platform](/power-platform/) |
-| Microsoft.ProjectBabylon | [Azure Data Catalog](../../data-catalog/overview.md) |
-| Microsoft.Quantum | [Azure Quantum](https://azure.microsoft.com/services/quantum/) |
-| Microsoft.RecoveryServices | [Azure Site Recovery](../../site-recovery/index.yml) |
-| Microsoft.RedHatOpenShift | [Azure Red Hat OpenShift](../../virtual-machines/linux/openshift-get-started.md) |
-| Microsoft.Relay | [Azure Relay](../../azure-relay/relay-what-is-it.md) |
-| Microsoft.ResourceGraph - [registered](#registration) | [Azure Resource Graph](../../governance/resource-graph/index.yml) |
-| Microsoft.ResourceHealth | [Azure Service Health](../../service-health/index.yml) |
-| Microsoft.Resources - [registered](#registration) | [Azure Resource Manager](../index.yml) |
-| Microsoft.SaaS | core |
-| Microsoft.Scheduler | [Scheduler](../../scheduler/index.yml) |
-| Microsoft.Search | [Azure Cognitive Search](../../search/index.yml) |
+
+## Security resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.Attestation | [Azure Attestation Service](../../attestation/overview.md) |
+| Microsoft.CustomerLockbox | [Customer Lockbox for Microsoft Azure](../../security/fundamentals/customer-lockbox-overview.md) |
+| Microsoft.DataProtection | Data Protection |
+| Microsoft.HardwareSecurityModules | [Azure Dedicated HSM](../../dedicated-hsm/index.yml) |
+| Microsoft.KeyVault | [Key Vault](../../key-vault/index.yml) |
| Microsoft.Security | [Security Center](../../security-center/index.yml) |
| Microsoft.SecurityInsights | [Microsoft Sentinel](../../sentinel/index.yml) |
-| Microsoft.SerialConsole - [registered](#registration) | [Azure Serial Console for Windows](/troubleshoot/azure/virtual-machines/serial-console-windows) |
-| Microsoft.ServiceBus | [Service Bus](/azure/service-bus/) |
-| Microsoft.ServiceFabric | [Service Fabric](../../service-fabric/index.yml) |
-| Microsoft.Services | core |
-| Microsoft.SignalRService | [Azure SignalR Service](../../azure-signalr/index.yml) |
-| Microsoft.SoftwarePlan | License |
-| Microsoft.Solutions | [Azure Managed Applications](../managed-applications/index.yml) |
-| Microsoft.Sql | [Azure SQL Database](/azure/azure-sql/database/index)<br /> [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/index) <br />[Azure Synapse Analytics](/azure/sql-data-warehouse/) |
-| Microsoft.SqlVirtualMachine | [SQL Server on Azure Virtual Machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview) |
+| Microsoft.WindowsDefenderATP | [Microsoft Defender Advanced Threat Protection](../../security-center/security-center-wdatp.md) |
+| Microsoft.WindowsESU | Extended Security Updates |
+
+## Storage resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.ClassicStorage | Classic deployment model storage |
+| Microsoft.ElasticSan | [Elastic SAN Preview](../../storage/elastic-san/index.yml) |
+| Microsoft.HybridData | [StorSimple](../../storsimple/index.yml) |
+| Microsoft.ImportExport | [Azure Import/Export](../../import-export/storage-import-export-service.md) |
+| Microsoft.NetApp | [Azure NetApp Files](../../azure-netapp-files/index.yml) |
+| Microsoft.ObjectStore | Object Store |
| Microsoft.Storage | [Storage](../../storage/index.yml) |
| Microsoft.StorageCache | [Azure HPC Cache](../../hpc-cache/index.yml) |
| Microsoft.StorageSync | [Storage](../../storage/index.yml) |
| Microsoft.StorSimple | [StorSimple](../../storsimple/index.yml) |
-| Microsoft.StreamAnalytics | [Azure Stream Analytics](../../stream-analytics/index.yml) |
-| Microsoft.Subscription | core |
-| microsoft.support - [registered](#registration) | core |
-| Microsoft.Synapse | [Azure Synapse Analytics](/azure/sql-data-warehouse/) |
-| Microsoft.TimeSeriesInsights | [Azure Time Series Insights](../../time-series-insights/index.yml) |
-| Microsoft.Token | Token |
-| Microsoft.VirtualMachineImages | [Azure Image Builder](../../virtual-machines/image-builder-overview.md) |
-| microsoft.visualstudio | [Azure DevOps](/azure/devops/) |
-| Microsoft.VMware | [Azure VMware Solution](../../azure-vmware/index.yml) |
-| Microsoft.VMwareCloudSimple | [Azure VMware Solution by CloudSimple](../../vmware-cloudsimple/index.md) |
-| Microsoft.VSOnline | [Azure DevOps](/azure/devops/) |
+
+## Web resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.BingMaps | [Bing Maps](/BingMaps/#pivot=main&panel=BingMapsAPI) |
+| Microsoft.CertificateRegistration | [App Service Certificates](../../app-service/configure-ssl-app-service-certificate.md) |
+| Microsoft.DomainRegistration | [App Service](../../app-service/index.yml) |
+| Microsoft.Maps | [Azure Maps](../../azure-maps/index.yml) |
+| Microsoft.SignalRService | [Azure SignalR Service](../../azure-signalr/index.yml) |
| Microsoft.Web | [App Service](../../app-service/index.yml)<br />[Azure Functions](../../azure-functions/index.yml) |
-| Microsoft.WindowsDefenderATP | [Microsoft Defender Advanced Threat Protection](../../security-center/security-center-wdatp.md) |
-| Microsoft.WindowsESU | Extended Security Updates |
-| Microsoft.WindowsIoT | [Windows 10 IoT Core Services](/windows-hardware/manufacture/iot/iotcoreservicesoverview) |
-| Microsoft.WorkloadMonitor | [Azure Monitor](../../azure-monitor/index.yml) |
+
+## 5G & Space resource providers
+
+| Resource provider namespace | Azure service |
+| | - |
+| Microsoft.HybridNetwork | [Network Function Manager](../../network-function-manager/index.yml) |
+| Microsoft.MobileNetwork | [Azure Private 5G Core](../../private-5g-core/index.yml) |
+| Microsoft.Orbital | [Azure Orbital Ground Station](../../orbital/overview.md) |
## Registration
-Resource providers marked with **- registered** in the previous section are automatically registered for your subscription. For other resource providers, you need to [register them](resource-providers-and-types.md). However, many resource providers are registered automatically when you perform specific actions. For example, when you create resources through the portal or by deploying an [Azure Resource Manager template](../templates/overview.md), Azure Resource Manager automatically registers any required unregistered resource providers.
+Resource providers marked with **- registered by default** in the previous section are automatically registered for your subscription. For other resource providers, you need to [register them](resource-providers-and-types.md). However, many resource providers are registered automatically when you perform specific actions. For example, when you create resources through the portal or by deploying an [Azure Resource Manager template](../templates/overview.md), Azure Resource Manager automatically registers any required unregistered resource providers.
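As a concrete sketch of that manual registration step, the following Azure CLI commands register one of the namespaces from the tables above and then check its registration state. The namespace here is only an example; any unregistered provider works the same way.

```azurecli
# Register a resource provider namespace in the current subscription.
az provider register --namespace Microsoft.ElasticSan

# Registration is asynchronous; poll until the state shows "Registered".
az provider show --namespace Microsoft.ElasticSan --query registrationState --output tsv
```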
> [!IMPORTANT]
> Register a resource provider only when you're ready to use it. This registration step helps maintain least privileges within your subscription. A malicious user can't use unregistered resource providers.
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
azure-vmware Azure Vmware Solution Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-known-issues.md
Refer to the table to find details about resolution dates or possible workaround
|Issue | Date discovered | Workaround | Date resolved |
| :- | :- | :- | :- |
+| The AV64 SKU currently supports RAID-1 FTT1, RAID-5 FTT1, and RAID-1 FTT2 vSAN storage policies. For more information, see [AV64 supported RAID configuration](introduction.md#av64-supported-raid-configuration) |Nov 2023 |N/A|N/A|
| [VMSA-2021-002 ESXiArgs](https://www.vmware.com/security/advisories/VMSA-2021-0002.html) OpenSLP vulnerability publicized in February 2023 | 2021 | [Disable OpenSLP service](https://kb.vmware.com/s/article/76372) | February 2021 - Resolved in [ESXi 7.0 U3c](concepts-private-clouds-clusters.md#vmware-software-versions) |
| After my private cloud NSX-T Data Center upgrade to version [3.2.2](https://docs.vmware.com/en/VMware-NSX/3.2.2/rn/vmware-nsxt-data-center-322-release-notes/index.html), the NSX-T Manager **DNS - Forwarder Upstream Server Timeout** alarm is raised | February 2023 | [Enable private cloud internet Access](concepts-design-public-internet-access.md), alarm is raised because NSX-T Manager cannot access the configured CloudFlare DNS server. Otherwise, [change the default DNS zone to point to a valid and reachable DNS server.](configure-dns-azure-vmware-solution.md) | February 2023 |
| When first logging into the vSphere Client, the **Cluster-n: vSAN health alarms are suppressed** alert is active in the vSphere Client | 2021 | This alert should be considered an informational message, since Microsoft manages the service. Select the **Reset to Green** link to clear it. | 2021 |
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
Title: Deploy Arc for Azure VMware Solution (Preview)
+ Title: Deploy Arc-enabled Azure VMware Solution
description: Learn how to set up and enable Arc for your Azure VMware Solution private cloud. Previously updated : 08/28/2023 Last updated : 11/03/2023
+# Deploy Arc-enabled Azure VMware Solution
-# Deploy Arc for Azure VMware Solution (Preview)
+In this article, learn how to deploy Arc for Azure VMware Solution. Once you set up the components needed for this public preview, you're ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Arc-enabled Azure VMware Solution allows you to perform the following actions:
-In this article, you'll learn how to deploy Arc for Azure VMware Solution. Once you've set up the components needed for this public preview, you'll be ready to execute operations in Azure VMware Solution vCenter Server from the Azure portal. Operations are related to Create, Read, Update, and Delete (CRUD) virtual machines (VMs) in an Arc-enabled Azure VMware Solution private cloud. Users can also enable guest management and install Azure extensions once the private cloud is Arc-enabled.
+- Identify your VMware vSphere resources (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register them with Arc at scale.
+- Perform virtual machine (VM) operations directly from Azure, such as create, resize, delete, and power cycle (start/stop/restart), on VMware VMs consistently with Azure.
+- Permit developers and application teams to use VM operations on demand with [role-based access control (RBAC)](https://learn.microsoft.com/azure/role-based-access-control/overview).
+- Install the Arc-connected machine agent to [govern, protect, configure, and monitor](https://learn.microsoft.com/azure/azure-arc/servers/overview#supported-cloud-operations) your VMs.
+- Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure.
-Before you begin checking off the prerequisites, verify the following actions have been done:
-
-- You deployed an Azure VMware Solution private cluster. -- You have a connection to the Azure VMware Solution private cloud through your on-premises environment or your native Azure Virtual Network. -- There should be an isolated NSX-T Data Center network segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T Data Center network segment doesn't exist, one will be created.
-## Prerequisites
+## How Arc-enabled VMware vSphere differs from Arc-enabled servers
-The following items are needed to ensure you're set up to begin the onboarding process to deploy Arc for Azure VMware Solution (Preview).
+You can start with either option: Arc-enabled servers or Arc-enabled VMware vSphere. With both options, you receive the same consistent experience, and regardless of which option you choose first, you can incorporate the other later without disruption. The following information helps you understand the difference between the two options:
-- A jump box virtual machine (VM) with network access to the Azure VMware Solution vCenter.
- - From the jump-box VM, verify you have access to [vCenter Server and NSX-T Manager portals](./tutorial-configure-networking.md).
-- Verify that your Azure subscription has been enabled or you have connectivity to Azure end points, mentioned in the [Appendices](#appendices).-- Resource group in the subscription where you have owner or contributor role. -- A minimum of three free non-overlapping IPs addresses. -- Verify that your vCenter Server version is 6.7 or higher. -- A resource pool with minimum-free capacity of 16 GB of RAM, 4 vCPUs. -- A datastore with minimum 100 GB of free disk space that is available through the resource pool. -- On the vCenter Server, allow inbound connections on TCP port 443, so that the Arc resource bridge and VMware vSphere cluster extension can communicate with the vCenter Server.-- Please validate the regional support before starting the onboarding. Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For more details, see [Azure Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/overview).-- The firewall and proxy URLs below must be allowlisted in order to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs.
-[Azure Arc resource bridge (preview) network requirements](../azure-arc/resource-bridge/network-requirements.md)
+**Arc-enabled servers**
+Azure Arc-enabled servers interact at the guest operating system level, with no awareness of the underlying infrastructure or the virtualization platform they're running on. Since Arc-enabled servers support bare-metal machines, there might not be a host hypervisor in some cases.
-> [!NOTE]
-> Only the default port of 443 is supported. If you use a different port, Appliance VM creation will fail.
+**Arc-enabled VMware vSphere**
+Arc-enabled VMware vSphere is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself, providing lifecycle management and CRUD (Create, Read, Update, Delete) operations on a VMware vSphere VM. These lifecycle management capabilities are exposed in the Azure portal with a look and feel just like a regular Azure VM. Azure Arc-enabled VMware vSphere provides guest operating system management that uses the same components as Azure Arc-enabled servers.
+
+## Deploy Arc
-At this point, you should have already deployed an Azure VMware Solution private cloud. You need to have a connection from your on-premises environment or your native Azure Virtual Network to the Azure VMware Solution private cloud.
+There should be an isolated NSX-T Data Center network segment for deploying the Arc for Azure VMware Solution Open Virtualization Appliance (OVA). If an isolated NSX-T Data Center network segment doesn't exist, one is created.
+
+### Prerequisites
+
+> [!IMPORTANT]
+> You can't create the resources in a separate resource group. Ensure you use the same resource group from where the Azure VMware Solution private cloud was created to create your resources.
+
+You need the following items to ensure you're set up to begin the onboarding process to deploy Arc for Azure VMware Solution.
+
+- Validate the regional support before you start the onboarding process. Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For details, see [Azure Arc-enabled VMware vSphere](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/overview#supported-regions).
+- A jump box virtual machine (VM) or a [management VM](https://learn.microsoft.com/azure/azure-arc/resource-bridge/system-requirements#management-machine-requirements) with internet access that has a direct line of sight to the vCenter Server.
+ - From the jump box VM, verify you have access to [vCenter Server and NSX-T manager portals](https://learn.microsoft.com/azure/azure-vmware/tutorial-access-private-cloud#connect-to-the-vcenter-server-of-your-private-cloud).
+- A resource group in the subscription where you have an owner or contributor role.
+- An unused, isolated [NSX Data Center network segment](https://learn.microsoft.com/azure/azure-vmware/tutorial-nsx-t-network-segment), that is, a static network segment with static IP assignment and a /28 CIDR, for deploying the Arc for Azure VMware Solution OVA. If an isolated NSX-T Data Center network segment doesn't exist, one gets created.
+- Verify your Azure subscription is enabled and has connectivity to Azure end points.
+- The firewall and proxy URLs must be allowlisted to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. See the [Azure Arc resource bridge (preview) network requirements](https://learn.microsoft.com/azure/azure-arc/resource-bridge/network-requirements).
+- Verify your vCenter Server version is 6.7 or higher.
+- A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs.
+- A datastore with a minimum of 100 GB of free disk space that is available through the resource pool or cluster.
+- On the vCenter Server, allow inbound connections on TCP port 443. This action ensures that the Arc resource bridge and VMware vSphere cluster extension can communicate with the vCenter Server.
+> [!NOTE]
+> - Private endpoint is currently not supported.
+> - DHCP support isn't available to customers at this time; only static IP addresses are currently supported.
-For Network planning and setup, use the [Network planning checklist - Azure VMware Solution | Microsoft Docs](./tutorial-network-checklist.md)
-### Registration to Arc for Azure VMware Solution feature set
+## Registration to Arc for Azure VMware Solution feature set
The following **Register features** are for provider registration using Azure CLI.
```azurecli
az provider register --namespace Microsoft.AVS
```
Alternately, users can sign into their Subscription, navigate to the **Resource providers** tab, and register themselves on the resource providers mentioned previously.
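For a CLI check equivalent to browsing the **Resource providers** tab, something like the following sketch lists the namespaces that are already registered, using a standard JMESPath query:

```azurecli
# List all provider namespaces currently registered in the subscription.
az provider list --query "[?registrationState=='Registered'].namespace" --output table
```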
-For feature registration, users will need to sign into their **Subscription**, navigate to the **Preview features** tab, and search for 'Azure Arc for Azure VMware Solution'. Once registered, no other permissions are required for users to access Arc.
+For feature registration, users need to sign into their **Subscription**, navigate to the **Preview features** tab, and search for 'Azure Arc for Azure VMware Solution'. Once registered, no other permissions are required for users to access Arc.
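That portal step can also be scripted. A minimal sketch, assuming the same `AzureArcForAVS` feature name used by the verification command that follows:

```azurecli
# Request access to the preview feature, then re-register the provider so the
# feature registration propagates (the standard az feature workflow).
az feature register --name AzureArcForAVS --namespace Microsoft.AVS
az provider register --namespace Microsoft.AVS
```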
-Users need to ensure they've registered themselves to **Microsoft.AVS/earlyAccess**. After registering, use the following feature to verify registration.
```azurecli
az feature show --name AzureArcForAVS --namespace Microsoft.AVS
```
## Onboard process to deploy Azure Arc
-Use the following steps to guide you through the process to onboard Azure Arc for Azure VMware Solution (Preview).
+Use the following steps to guide you through the process to onboard Azure Arc for Azure VMware Solution.
1. Sign into the jumpbox VM and extract the contents from the compressed file from the following [location](https://github.com/Azure/ArcOnAVS/releases/latest). The extracted file contains the scripts to install the preview software.
-1. Open the 'config_avs.json' file and populate all the variables.
+2. Open the 'config_avs.json' file and populate all the variables.
**Config JSON** ```json
Use the following steps to guide you through the process to onboard Azure Arc fo
   - Populate the `subscriptionId`, `resourceGroup`, and `privateCloud` names respectively.
   - `isStatic` is always true.
- - `networkForApplianceVM` is the name for the segment for Arc appliance VM. One will be created if it doesn't already exist.
+ - `networkForApplianceVM` is the name for the segment for Arc appliance VM. One gets created if it doesn't already exist.
   - `networkCIDRForApplianceVM` is the IP CIDR of the segment for Arc appliance VM. It should be unique and not affect other networks of Azure VMware Solution management IP CIDR.
   - `GatewayIPAddress` is the gateway for the segment for Arc appliance VM.
- - `applianceControlPlaneIpAddress` is the IP address for the Kubernetes API server that should be part of the segment IP CIDR provided. It shouldn't be part of the k8s node pool IP range.
+ - `applianceControlPlaneIpAddress` is the IP address for the Kubernetes API server that should be part of the segment IP CIDR provided. It shouldn't be part of the K8s node pool IP range.
- `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd` are the starting and ending IP of the pool of IPs to assign to the appliance VM. Both need to be within the `networkCIDRForApplianceVM`.
- - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd`, `gatewayIPAddress` ,`applianceControlPlaneIpAddress` are optional. You may choose to skip all the optional fields or provide values for all. If you choose not to provide the optional fields, then you must use /28 address space for `networkCIDRForApplianceVM`
+   - `k8sNodeIPPoolStart`, `k8sNodeIPPoolEnd`, `gatewayIPAddress`, and `applianceControlPlaneIpAddress` are optional. You can choose to skip all the optional fields or provide values for all. If you choose not to provide the optional fields, then you must use a /28 address space for `networkCIDRForApplianceVM`.
**Json example** ```json
Use the following steps to guide you through the process to onboard Azure Arc fo
} ```
-1. Run the installation scripts. We've provided you with the option to set up this preview from a Windows or Linux-based jump box/VM.
+3. Run the installation scripts. You can optionally set up this preview from a Windows or Linux-based jump box/VM.
Run the following commands to execute the installation script.
Use the following steps to guide you through the process to onboard Azure Arc fo
```
-4. You'll notice more Azure Resources have been created in your resource group.
+4. More Azure resources are created in your resource group.
- Resource bridge
- Custom location
- VMware vCenter
> [!IMPORTANT]
-> You can't create the resources in a separate resource group. Make sure you use the same resource group from where the Azure VMware Solution private cloud was created to create the resources.
-
-## Discover and project your VMware vSphere infrastructure resources to Azure
-
-When Arc appliance is successfully deployed on your private cloud, you can do the following actions.
--- View the status from within the private cloud under **Operations > Azure Arc**, located in the left navigation. -- View the VMware vSphere infrastructure resources from the private cloud left navigation under **Private cloud** then select **Azure Arc vCenter resources**.-- Discover your VMware vSphere infrastructure resources and project them to Azure using the same browser experience, **Private cloud > Arc vCenter resources > Virtual Machines**.-- Similar to VMs, customers can enable networks, templates, resource pools, and data-stores in Azure.
+> After the successful installation of Azure Arc resource bridge, retain a copy of the resource bridge config .yaml files and the kubeconfig file, and store them securely in a place that facilitates easy retrieval. These files could be needed later to run commands that perform management operations on the resource bridge. You can find the three .yaml files (config files) and the kubeconfig file in the same folder where you ran the script.
-After you've enabled VMs to be managed from Azure, you can install guest management and do the following actions.
+When the script is run successfully, check the status to see if Azure Arc is now configured. To verify if your private cloud is Arc-enabled, do the following actions:
-- Enable customers to install and use extensions.
- - To enable guest management, customers will be required to use admin credentials
- - VMtools should already be running on the VM
-> [!NOTE]
-> Azure VMware Solution vCenter Server will be available in global search but will NOT be available in the list of vCenter Servers for Arc for VMware.
--- Customers can view the list of VM extensions available in public preview.
- - Change tracking
- - Log analytics
- - Azure policy guest configuration
-
- **Azure VMware Solution private cloud with Azure Arc**
-
-When the script has run successfully, you can check the status to see if Azure Arc has been configured. To verify if your private cloud is Arc-enabled, do the following action:
- In the left navigation, locate **Operations**.-- Choose **Azure Arc (preview)**. Azure Arc state will show as **Configured**.-
- :::image type="content" source="media/deploy-arc-for-azure-vmware-solution/arc-private-cloud-configured.png" alt-text="Image showing navigation to Azure Arc state to verify it's configured."lightbox="media/deploy-arc-for-azure-vmware-solution/arc-private-cloud-configured.png":::
-
-**Arc enabled VMware vSphere resources**
-
-After the private cloud is Arc-enabled, vCenter resources should appear under **Virtual machines**.
-- From the left navigation, under **Azure Arc VMware resources (preview)**, locate **Virtual machines**.-- Choose **Virtual machines** to view the vCenter Server resources.-
-### Manage access to VMware resources through Azure Role-Based Access Control
-
-After your Azure VMware Solution vCenter Server resources have been enabled for access through Azure, there's one final step in setting up a self-service experience for your teams. You'll need to provide your teams with access to: compute, storage, networking, and other vCenter Server resources used to configure VMs.
-
-This section will demonstrate how to use custom roles to manage granular access to VMware vSphere resources through Azure.
-
-#### Arc-enabled VMware vSphere built-in roles
-
-There are three built-in roles to meet your Role-based access control (RBAC) requirements. You can apply these roles to a whole subscription, resource group, or a single resource.
-
-**Azure Arc VMware Administrator role** - is used by administrators
-
-**Azure Arc VMware Private Cloud User role** - is used by anyone who needs to deploy and manage VMs
+- Choose **Azure Arc**.
+- Azure Arc state shows as **Configured**.
-**Azure Arc VMware VM Contributor role** - is used by anyone who needs to deploy and manage VMs
+### Recover from failed deployments
-**Azure Arc Azure VMware Solution Administrator role**
+If the Azure Arc resource bridge deployment fails, consult the [Azure Arc resource bridge troubleshooting](https://learn.microsoft.com/azure/azure-arc/resource-bridge/troubleshoot-resource-bridge) guide. While there can be many reasons why the Azure Arc resource bridge deployment fails, one of them is the KVA timeout error. Learn more about the [KVA timeout error](https://learn.microsoft.com/azure/azure-arc/resource-bridge/troubleshoot-resource-bridge#kva-timeout-error) and how to troubleshoot it.
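When gathering diagnostics for a failed deployment, the appliance logs are usually the first thing to collect. The following is an assumption-laden sketch, presuming the `arcappliance` CLI extension and the kubeconfig file retained during onboarding; verify the exact flags against your extension version:

```azurecli
# Collect logs from the Arc resource bridge appliance (hypothetical invocation;
# confirm the flag names with `az arcappliance logs vmware --help`).
az arcappliance logs vmware --kubeconfig ./kubeconfig
```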
-This role provides permissions to perform all possible operations for the Microsoft.ConnectedVMwarevSphere resource provider. Assign this role to users or groups that are administrators managing Azure Arc enabled VMware vSphere deployment.
+## Discover and project your VMware vSphere infrastructure resources to Azure
-**Azure Arc Azure VMware Solution Private Cloud User role**
+When the Arc appliance is successfully deployed on your private cloud, you can do the following actions:
-This role gives the user permission to use the Arc-enabled Azure VMware Solutions vSphere resources that have been made accessible through Azure. This role should be assigned to any users or groups that need to deploy, update, or delete VMs.
+- View the status from within the private cloud left navigation under **Operations > Azure Arc**.
+- View the VMware vSphere infrastructure resources from the private cloud left navigation under **Private cloud** then select **Azure Arc vCenter resources**.
+- Discover your VMware vSphere infrastructure resources and project them to Azure by navigating to **Private cloud > Arc vCenter resources > Virtual Machines**.
+- Similar to VMs, customers can enable networks, templates, resource pools, and datastores in Azure.
-We recommend assigning this role at the individual resource pool (host or cluster), virtual network, or template that you want the user to deploy VMs with.
+## Enable resource pools, clusters, hosts, datastores, networks, and VM templates in Azure
-**Azure Arc Azure VMware Solution VM Contributor role**
+Once you've connected your Azure VMware Solution private cloud to Azure, you can browse your vCenter inventory from the Azure portal. This section shows you how to enable resource pools, networks, and other non-VM resources in Azure.
-This role gives the user permission to perform all VMware VM operations. This role should be assigned to any users or groups that need to deploy, update, or delete VMs.
+> [!NOTE]
+> Enabling Azure Arc on a VMware vSphere resource is a read-only operation on vCenter. It doesn't make changes to your resource in vCenter.
-We recommend assigning this role at the subscription level or resource group you want the user to deploy VMs with.
+1. On your Azure VMware Solution private cloud, in the left navigation, locate **vCenter Inventory**.
+2. Select the resource(s) you want to enable, then select **Enable in Azure**.
+3. Select your Azure **Subscription** and **Resource Group**, then select **Enable**.
-**Assign custom roles to users or groups**
+ The enable action starts a deployment and creates a resource in Azure that represents your VMware vSphere resources. It allows you to granularly manage who can access those resources through role-based access control (RBAC), as shown in the sketch after these steps.
-1. Navigate to the Azure portal.
-1. Locate the subscription, resource group, or the resource at the scope you want to provide for the custom role.
-1. Find the Arc-enabled Azure VMware Solution vCenter Server resources.
- 1. Navigate to the resource group and select the **Show hidden types** checkbox.
- 1. Search for "Azure VMware Solution".
-1. Select **Access control (IAM)** in the table of contents located on the left navigation.
-1. Select **Add role assignment** from the **Grant access to this resource**.
- :::image type="content" source="media/deploy-arc-for-azure-vmware-solution/assign-custom-role-user-groups.png" alt-text="Image showing navigation to access control IAM and add role assignment."lightbox="media/deploy-arc-for-azure-vmware-solution/assign-custom-role-user-groups.png":::
-1. Select the custom role you want to assign, Azure Arc VMware Solution: **Administrator**, **Private Cloud User**, or **VM Contributor**.
-1. Search for **AAD user** or **group name** that you want to assign this role to.
-1. Select the **AAD user** or **group name**. Repeat this step for each user or group you want to give permission to.
-1. Repeat the above steps for each scope and role.
+4. Repeat the previous steps for one or more network, resource pool, and VM template resources.
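As noted in step 3, access to the enabled resources is managed through RBAC. A hedged sketch of a role assignment, assuming the built-in **Azure Arc VMware VM Contributor** role name mentioned in this article; the assignee and scope values are placeholders:

```azurecli
# Grant a team the built-in Azure Arc VMware VM Contributor role at resource
# group scope. The assignee and IDs below are hypothetical placeholders.
az role assignment create \
  --assignee "devteam@contoso.com" \
  --role "Azure Arc VMware VM Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```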
+## Enable guest management and extension installation
-## Create Arc-enabled Azure VMware Solution virtual machine
+Before you install an extension, you need to enable guest management on the VMware VM.
-This section shows users how to create a virtual machine (VM) on VMware vCenter Server using Azure Arc. Before you begin, check the following prerequisite list to ensure you're set up and ready to create an Arc-enabled Azure VMware Solution VM.
+### Prerequisite
-### Prerequisites
+Before you can install an extension, ensure your target machine meets the following conditions:
-- An Azure subscription and resource group where you have an Arc VMware VM **Contributor role**.-- A resource pool resource that you have an Arc VMware private cloud resource **User role**.-- A virtual machine template resource that you have an Arc private cloud resource **User role**.-- (Optional) a virtual network resource on which you have Arc private cloud resource **User role**.-
-### Create VM flow
--- Open the [Azure portal](https://portal.azure.com/)-- On the **Home** page, search for **virtual machines**. Once you've navigated to **Virtual machines**, select the **+ Create** drop down and select **Azure VMware Solution virtual machine**.
- :::image type="content" source="media/deploy-arc-for-azure-vmware-solution/deploy-vm-arc-1.2.png" alt-text="Image showing the location of the plus Create drop down menu and Azure VMware Solution virtual machine selection option."lightbox="media/deploy-arc-for-azure-vmware-solution/deploy-vm-arc-1.2.png":::
-
-Near the top of the **Virtual machines** page, you'll find five tabs labeled: **Basics**, **Disks**, **Networking**, **Tags**, and **Review + create**. Follow the steps or options provided in each tab to create your Azure VMware Solution virtual machine.
-
-**Basics**
-1. In **Project details**, select the **Subscription** and **Resource group** where you want to deploy your VM.
-1. In **Instance details**, provide the **virtual machine name**.
-1. Select a **Custom location** that your administrator has shared with you.
-1. Select the **Resource pool/cluster/host** where the VM should be deployed.
-1. For **Template details**, pick a **Template** based on the VM you plan to create.
- - Alternately, you can check the **Override template defaults** box that allows you to override the CPU and memory specifications set in the template.
- - If you chose a Windows template, you can provide a **Username** and **Password** for the **Administrator account**.
-1. For **Extension setup**, the box is checked by default to **Enable guest management**. If you don't want guest management enabled, uncheck the box.
-1. The connectivity method defaults to **Public endpoint**. Create a **Username**, **Password**, and **Confirm password**.
-
-**Disks**
- - You can opt to change the disks configured in the template, add more disks, or update existing disks. These disks will be created on the default datastore per the VMware vCenter Server storage policies.
- - You can change the network interfaces configured in the template, add Network interface cards (NICs), or update existing NICs. You can also change the network that the NIC will be attached to provided you have permissions to the network resource.
-
-**Networking**
- - A network configuration is automatically created for you. You can choose to keep it or override it and add a new network interface instead.
- - To override the network configuration, find and select **+ Add network interface** and add a new network interface.
-
-**Tags**
- - In this section, you can add tags to the VM resource.
-
-**Review + create**
- - Review the data and properties you've set up for your VM. When everything is set up how you want it, select **Create**. The VM should be created in a few minutes.
-
-## Enable guest management and extension installation
+- Is running a [supported operating system](https://learn.microsoft.com/azure/azure-arc/servers/prerequisites#supported-operating-systems).
+- Is able to connect through the firewall to communicate over the internet and these [URLs](https://learn.microsoft.com/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked.
+- Has VMware tools installed and running.
+- Is powered on and the resource bridge has network connectivity to the host running the VM.
-The guest management must be enabled on the VMware vSphere virtual machine (VM) before you can install an extension. Use the following prerequisite steps to enable guest management.
+### Enable guest management
-**Prerequisite**
+You need to enable guest management on the VMware VM before you can install an extension. Use the following steps to enable guest management.
1. Navigate to [Azure portal](https://portal.azure.com/).
+1. From the left navigation, locate **vCenter Server Inventory** and choose **Virtual Machines** to view the list of VMs.
+1. Select the VM you want to install the guest management agent on.
+1. Select **Enable guest management** and provide the administrator username and password to enable guest management, then select **Apply**.
1. Locate the VMware vSphere VM you want to check for guest management and install extensions on, then select the name of the VM.
1. Select **Configuration** from the left navigation for a VMware VM.
-1. Verify **Enable guest management** has been checked.
-
->[!NOTE]
-> The following conditions are necessary to enable guest management on a VM.
--- The machine must be running a [Supported operating system](../azure-arc/servers/agent-overview.md).-- The machine needs to connect through the firewall to communicate over the internet. Make sure the [URLs](../azure-arc/servers/agent-overview.md) listed aren't blocked.-- The machine can't be behind a proxy, it's not supported yet.-- If you're using a Linux VM, the account must not prompt to sign in on sudo commands.
-
- Avoid sudo password prompts by following these steps:
-
- 1. Sign into Linux VM.
- 1. Open terminal and run the following command: `sudo visudo`.
- 1. Add the line `username` `ALL=(ALL) NOPASSWD:ALL` at the end of the file.
- 1. Replace `username` with the appropriate user-name.
-
-If your VM template already has these changes incorporated, you won't need to do the steps for the VM created from that template.
+1. Verify **Enable guest management** is now checked.
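If you prefer to script the portal flow above, the `connectedvmware` Azure CLI extension exposes a guest-agent command. The following is a sketch under that assumption; verify the command shape and flags with `az connectedvmware vm guest-agent enable --help` for your extension version:

```azurecli
# Enable the guest management agent on an Arc-enabled VMware VM (assumed
# command shape; requires the connectedvmware CLI extension).
az connectedvmware vm guest-agent enable \
  --resource-group <resource-group> \
  --vm-name <vm-name> \
  --username <admin-username> \
  --password <admin-password>
```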
-**Extension installation steps**
+### Install the LogAnalytics extension
1. Go to Azure portal.
1. Find the Arc-enabled Azure VMware Solution VM that you want to install an extension on and select the VM name.
-1. Navigate to **Extensions** in the left navigation, select **Add**.
+1. Locate **Extensions** from the left navigation and select **Add**.
1. Select the extension you want to install.
- 1. Based on the extension, you'll need to provide details. For example, `workspace Id` and `key` for LogAnalytics extension.
+ 1. Based on the extension, you need to provide details. For example, `workspace Id` and `key` for LogAnalytics extension.
1. When you're done, select **Review + create**.

When the extension installation steps are completed, they trigger deployment and install the selected extension on the VM.
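Because guest management projects the VM as an Arc connected machine, the same extension can likely be installed with the `connectedmachine` CLI extension. A sketch, assuming that equivalence; the machine name, resource group, region, and workspace values are placeholders:

```azurecli
# Install the Log Analytics agent extension on a Linux machine (sketch; the
# workspace ID and key come from your Log Analytics workspace).
az connectedmachine extension create \
  --machine-name <vm-name> \
  --resource-group <resource-group> \
  --location <region> \
  --name OmsAgentForLinux \
  --type OmsAgentForLinux \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --settings '{"workspaceId": "<workspace-id>"}' \
  --protected-settings '{"workspaceKey": "<workspace-key>"}'
```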
-## Change Arc appliance credential
-
-When **cloudadmin** credentials are updated, use the following steps to update the credentials in the appliance store.
-
-1. Log in to the jumpbox VM from where onboarding was performed. Change the directory to **onboarding directory**.
-1. Run the following command for Windows-based jumpbox VM.
-
- `./.temp/.env/Scripts/activate`
-1. Run the following command.
-
- `az arcappliance update-infracredentials vmware --kubeconfig <kubeconfig file>`
-
-1. Run the following command
-
-`az connectedvmware vcenter connect --debug --resource-group {resource-group} --name {vcenter-name-in-azure} --location {vcenter-location-in-azure} --custom-location {custom-location-name} --fqdn {vcenter-ip} --port {vcenter-port} --username cloudadmin@vsphere.local --password {vcenter-password}`
-
-> [!NOTE]
-> Customers need to ensure kubeconfig and SSH keys remain available as they will be required for log collection, appliance Upgrade, and credential rotation. These parameters will be required at the time of upgrade, log collection, and credential update scenarios.
-
-**Parameters**
-
-Required parameters
-
-`-kubeconfig # kubeconfig of Appliance resource`
-
-**Examples**
-
-The following command invokes the set credential for the specified appliance resource.
-
-` az arcappliance setcredential <provider> --kubeconfig <kubeconfig>`
-
-## Manual appliance upgrade
-
-Use the following steps to perform a manual upgrade for Arc appliance virtual machine (VM).
-
-1. Log into vCenter Server.
-1. Locate the Arc appliance VM, which should be in the resource pool that was configured during onboarding.
- 1. Power off the VM.
- 1. Delete the VM.
-1. Delete the download template corresponding to the VM.
-1. Delete the resource bridge Azure Resource Manager resource.
-1. Get the previous script `Config_avs` file and add the following configuration item:
- 1. `"register":false`
-1. Download the latest version of the Azure VMware Solution onboarding script.
-1. Run the new onboarding script with the previous `config_avs.json` from the jump box VM, without changing other config items.
-
-## Off board from Azure Arc-enabled Azure VMware Solution
-
-This section demonstrates how to remove your VMware vSphere virtual machines (VMs) from Azure management services.
-
-If you've enabled guest management on your Arc-enabled Azure VMware Solution VMs and onboarded them to Azure management services by installing VM extensions on them, you'll need to uninstall the extensions to prevent continued billing. For example, if you installed an MMA extension to collect and send logs to an Azure Log Analytics workspace, you'll need to uninstall that extension. You'll also need to uninstall the Azure Connected Machine agent to avoid any problems installing the agent in future.
-
-Use the following steps to uninstall extensions from the portal.
-
->[!NOTE]
->**Steps 2-5** must be performed for all the VMs that have VM extensions installed.
-
-1. Log in to your Azure VMware Solution private cloud.
-1. Select **Virtual machines** in **Private cloud**, found in the left navigation under "vCenter Server Inventory Page"
-1. Search and select the virtual machine where you have **Guest management** enabled.
-1. Select **Extensions**.
-1. Select the extensions and select **Uninstall**.
-
-To avoid problems onboarding the same VM to **Guest management**, we recommend you do the following steps to cleanly disable guest management capabilities.
-
->[!NOTE]
->**Steps 2-3** must be performed for **all VMs** that have **Guest management** enabled.
-
-1. Sign into the virtual machine using administrator or root credentials and run the following command in the shell.
- 1. `azcmagent disconnect --force-local-only`.
-1. Uninstall the `ConnectedMachine agent` from the machine.
-1. Set the **identity** on the VM resource to **none**.
-
-## Remove Arc-enabled Azure VMware Solution vSphere resources from Azure
-
-When you activate Arc-enabled Azure VMware Solution resources in Azure, a representation is created for them in Azure. Before you can delete the vCenter Server resource in Azure, you'll need to delete all of the Azure resource representations you created for your vSphere resources. To delete the Azure resource representations you created, do the following steps:
-
-1. Go to the Azure portal.
-1. Choose **Virtual machines** from Arc-enabled VMware vSphere resources in the private cloud.
-1. Select all the VMs that have an Azure Enabled value as **Yes**.
-1. Select **Remove from Azure**. This step will start deployment and remove these resources from Azure. The resources will remain in your vCenter Server.
- 1. Repeat steps 2, 3 and 4 for **Resourcespools/clusters/hosts**, **Templates**, **Networks**, and **Datastores**.
-1. When the deletion completes, select **Overview**.
- 1. Note the Custom location and the Azure Arc Resource bridge resources in the Essentials section.
-1. Select **Remove from Azure** to remove the vCenter Server resource from Azure.
-1. Go to vCenter Server resource in Azure and delete it.
-1. Go to the Custom location resource and select **Delete**.
-1. Go to the Azure Arc Resource bridge resources and select **Delete**.
-
-At this point, all of your Arc-enabled VMware vSphere resources have been removed from Azure.
-
-## Delete Arc resources from vCenter Server
-
-For the final step, you'll need to delete the resource bridge VM and the VM template that were created during the onboarding process. Login to vCenter Server and delete resource bridge VM and the VM template from inside the arc-folder. Once that step is done, Arc won't work on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it won't affect the Azure VMware Solution private cloud for the customer.
-
-## Preview FAQ
-
-**Region support for Azure VMware Solution**
-
-Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For more details, see [Azure Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/overview).
-
-**How does support work?**
-
-The standard support process for Azure VMware Solution is used to support customers.
-
-**Does Arc for Azure VMware Solution support private endpoint?**
-
-Private endpoint is currently not supported.
-
-**Is enabling internet the only option to enable Arc for Azure VMware Solution?**
-
-Yes, the Azure VMware Solution private cloud and jumpbox VM must have internet access for Arc to function.
-
-**Is DHCP support available?**
-
-DHCP support isn't available to customers at this time. Only static IP addresses are supported.
-
-## Debugging tips for known issues
-
-Use the following tips as a self-help guide.
-
-**What happens if I face an error related to Azure CLI?**
-
-- For Windows jumpbox, if you have 32-bit Azure CLI installed, verify that your current version of Azure CLI has been uninstalled. Verification can be done from the Control Panel.
-- To ensure it's uninstalled, run `az version` to check whether it's still installed.
-- If you already installed Azure CLI using MSI, `az` installed by MSI and pip will conflict on PATH. In this case, it's recommended that you uninstall the current Azure CLI version.
-
-**My script stopped because it timed-out, what should I do?**
-
-- Retry the script for `create`. A prompt will ask you to select **Y** and rerun it.
-- It could be a cluster extension issue that would result in adding the extension in the pending state.
-- Verify you have the correct script version.
-- Verify the VMware pod is running correctly on the system in running state.
-
-**Basic trouble-shooting steps if the script run was unsuccessful.**
-
-- Follow the directions provided in the [Prerequisites](#prerequisites) section of this article to verify that the feature and resource providers are registered.
-
-**What happens if the Arc for VMware section shows no data?**
-
-- If the Azure Arc VMware resources in the Azure UI show no data, verify your subscription was added in the global default subscription filter.
-
-**I see the error:** "`ApplianceClusterNotRunning` Appliance Cluster: `<resource-bridge-id>` expected states to be Succeeded found: Succeeded and expected status to be Running and found: Connected".
-
-- Run the script again.
-
-**I'm unable to install extensions on my virtual machine.**
-
-- Check that **guest management** has been successfully installed.
-- **VMware Tools** should be installed on the VM.
-
-**I'm facing Network related issues during on-boarding.**
-
-- Look for an IP conflict. You need IPs with no conflict or from a free pool.
-- Verify the internet is enabled for the network segment.
-
-**Where can I find more information related to Azure Arc resource bridge?**
-
-- For more information, go to [Azure Arc resource bridge (preview) overview](../azure-arc/resource-bridge/overview.md).
-
-## Appendices
-
-Appendix 1 shows the proxy URLs required by the Azure Arc-enabled private cloud. The URLs get prefixed when the script runs, and you can ping them from the jumpbox VM. The firewall and proxy URLs below must be allowlisted to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs.
-[Azure Arc resource bridge (preview) network requirements](../azure-arc/resource-bridge/network-requirements.md)
-
-**Additional URL resources**
+## Supported extensions and management services
-- [Google Container Registry](http://gcr.io/)
-- [Red Hat Quay.io](http://quay.io/)
-- [Docker](https://hub.docker.com/)
-- [Harbor](https://goharbor.io/)
-- [Container Registry](https://container-registry.com/)
+Perform VM operations on VMware VMs through Azure using [supported extensions and management services](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/perform-vm-ops-through-azure#supported-extensions-and-management-services).
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction
description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Previously updated : 6/20/2023 Last updated : 10/16/2023
Azure VMware Solution is a VMware validated solution with ongoing validation and
The diagram shows the adjacency between private clouds and VNets in Azure, Azure services, and on-premises environments. Network access from private clouds to Azure services or VNets provides SLA-driven integration of Azure service endpoints. ExpressRoute Global Reach connects your on-premises environment to your Azure VMware Solution private cloud. ## AV36P and AV52 node sizes available in Azure VMware Solution
For pricing and region availability, see the [Azure VMware Solution pricing page
You can deploy new or scale existing private clouds through the Azure portal or Azure CLI.
+## Azure VMware Solution private cloud extension with AV64 node size
+
+The AV64 is a new Azure VMware Solution host SKU that you can use to expand (not create) an Azure VMware Solution private cloud built with the existing AV36, AV36P, or AV52 SKU. Use the [Microsoft documentation](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-vmware) to check for availability of the AV64 SKU in your region.
+
+### Prerequisite for AV64 usage
+
+See the following prerequisites for AV64 cluster deployment.
+
+- An Azure VMware Solution private cloud is created using AV36, AV36P, or AV52 in an AV64-supported [region/AZ](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=azure-vmware).
+
+- You need one /23 or three (contiguous or noncontiguous) /25 address blocks for AV64 cluster management.
++
+### Supportability for customer scenarios
+
+**Customer with existing Azure VMware Solution private cloud**:
+When a customer has a deployed Azure VMware Solution private cloud, they can scale the private cloud by adding a separate AV64 vCenter node cluster to that private cloud. In this scenario, customers should use the following steps:
+
+1. Get an AV64 [quota approval from Microsoft](/azure/azure-vmware/request-host-quota-azure-vmware-solution) with a minimum of three nodes. Add other details about the Azure VMware Solution private cloud that you plan to extend using AV64.
+2. Use an existing Azure VMware Solution add-cluster workflow with AV64 hosts to expand.
+
+**Customer plans to create a new Azure VMware Solution private cloud**: When a customer wants a new Azure VMware Solution private cloud, the AV64 SKU can be used only for expansion. To meet the prerequisite, the customer first builds the private cloud with the AV36, AV36P, or AV52 SKU, buying a minimum of three nodes of that SKU, and then expands using AV64. For this scenario, use the following steps:
+
+1. Get AV36, AV36P, AV52, and AV64 [quota approval from Microsoft](/azure/azure-vmware/request-host-quota-azure-vmware-solution) with a minimum of three nodes each.
+2. Create an Azure VMware Solution private cloud using AV36, AV36P, or AV52 SKU.
+3. Use an existing Azure VMware Solution add-cluster workflow with AV64 hosts to expand.
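+
+If you script your deployments, the add-cluster workflow in the last step can also be driven from the `az vmware` CLI extension. The following is a minimal sketch, not the documented workflow itself; the resource names are placeholders, and the flag and SKU spellings should be verified against your extension version:
+
+```bash
+# Install the Azure VMware Solution CLI extension (one-time).
+az extension add --name vmware
+
+# Add an AV64 cluster to an existing private cloud (hypothetical names).
+az vmware cluster create \
+  --resource-group myResourceGroup \
+  --private-cloud myPrivateCloud \
+  --name av64-cluster-1 \
+  --sku AV64 \
+  --cluster-size 4   # older extension versions use --size; check with --help
+```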
+
+**Azure VMware Solution stretched cluster private cloud**: The AV64 SKU isn't supported with Azure VMware Solution stretched cluster private cloud. This means that an AV64-based expansion isn't possible for an Azure VMware Solution stretched cluster private cloud.
+
+### AV64 Cluster vSAN fault domain (FD) design and recommendations
+
+The traditional Azure VMware Solution host clusters don't have an explicit vSAN FD configuration because the host allocation logic ensures that no two hosts in a cluster reside in the same physical fault domain within an Azure region. This feature inherently brings the resilience and high availability for storage that a vSAN FD configuration is supposed to bring. More information on vSAN FDs can be found in the [VMware documentation](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan.doc/GUID-8491C4B0-6F94-4023-8C7A-FD7B40D0368D.html).
+
+The Azure VMware Solution AV64 host clusters have an explicit vSAN fault domain (FD) configuration. Azure VMware Solution control plane configures five vSAN fault domains for AV64 clusters, and hosts are balanced evenly across these five FDs, as users scale up the hosts in a cluster from three nodes to 16 nodes.
+
+### Cluster size recommendation
+
+The minimum vSphere node cluster size that Azure VMware Solution supports is three. vSAN data redundancy is handled by ensuring that the minimum of three hosts are placed in different vSAN FDs. In a vSAN cluster with three hosts, each in a different FD, if an FD fails (for example, a top-of-rack switch failure), the vSAN data remains protected, but operations such as object creation (new VM, VMDK, and others) fail. The same is true of any maintenance activity where an ESXi host is placed into maintenance mode or rebooted. To avoid scenarios such as these, it's recommended to deploy vSAN clusters with a minimum of four ESXi hosts.
+
+### AV64 host removal workflow and best practices
+
+Because of the AV64 cluster vSAN fault domain (FD) configuration and the need to keep hosts balanced across all FDs, host removal from an AV64 cluster differs from host removal in traditional Azure VMware Solution clusters built with other SKUs.
+
+Currently, a user can select one or more hosts to remove from a cluster using the portal or the API, provided the cluster keeps a minimum of three hosts. However, because AV64 clusters use vSAN FDs, an AV64 cluster behaves differently in certain scenarios: any host removal request is checked against the potential for vSAN FD imbalance. If a host removal request would create an imbalance, the request is rejected with an HTTP 409 Conflict response, which indicates that the request conflicts with the current state of the target resource (the hosts).
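+
+For scripted scale-down, the `az vmware` CLI extension can reduce the cluster size and let the control plane choose which hosts to remove, which avoids creating an FD imbalance. A hedged sketch with placeholder names (selecting specific hosts to remove is done through the portal or the REST API):
+
+```bash
+# Scale an AV64 cluster from six hosts down to five (hypothetical names).
+# A request that would unbalance the vSAN FDs is rejected with HTTP 409 Conflict.
+az vmware cluster update \
+  --resource-group myResourceGroup \
+  --private-cloud myPrivateCloud \
+  --name av64-cluster-1 \
+  --cluster-size 5   # older extension versions use --size; check with --help
+```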
+
+The following three scenarios show examples of instances that would normally error out and demonstrate different methods that can be used to remove hosts without creating a vSAN fault domain (FD) imbalance.
+
+- When removing a host would create a vSAN FD imbalance where the difference in host count between the most and least populated FDs is more than one.
+ In the following example, users need to remove one of the hosts from FD 1 before removing hosts from other FDs.
+
+ :::image type="content" source="media/introduction/remove-host-scenario-1.png" alt-text="Diagram showing how users need to remove one of the hosts from FD 1 before removing hosts from other FDs." border="false":::
+
+- When multiple host removal requests are made at the same time and certain host removals would create an imbalance. In this scenario, the Azure VMware Solution control plane removes only the hosts that don't create an imbalance.
+ In the following example, users can't remove both hosts from the same FD unless they're reducing the cluster size to four or lower.
+
+ :::image type="content" source="media/introduction/remove-host-scenario-2.png" alt-text="Diagram showing how users can't take both of the hosts from the same FDs unless they're reducing the cluster size to four or lower." border="false":::
+
+- When a selected host removal would leave fewer than three active vSAN FDs. This scenario isn't expected to occur, given that all AV64 regions have five FDs and, while adding hosts, the Azure VMware Solution control plane adds hosts from all five FDs evenly.
+ In the following example, users can remove one of the hosts from FD 1, but not from FD 2 or 3.
+
+ :::image type="content" source="media/introduction/remove-host-scenario-3.png" alt-text="Diagram showing how users can remove one of the hosts from FD 1, but not from FD 2 or 3." border="false":::
+
+**How to identify a host that can be removed without causing a vSAN FD imbalance**: Go to the vSphere user interface to get the current state of the vSAN FDs and the hosts associated with each of them. This helps you identify hosts (based on the previous examples) that can be removed without affecting the vSAN FD balance, and avoids errors in the removal operation.
+
+### AV64 supported RAID configuration
+
+The following table lists the supported RAID configurations and host requirements for AV64 clusters. The RAID-6/FTT2 and RAID-1/FTT3 policies will be supported on the AV64 SKU in the future. Microsoft allows customers to use the RAID-5 FTT1 vSAN storage policy for AV64 clusters with six or more nodes to meet the service-level agreement.
+
+|RAID configuration |Failures to tolerate (FTT) | Minimum hosts required |
+|---|---|---|
+|RAID-1 (Mirroring), default setting | 1 | 3 |
+|RAID-5 (Erasure Coding) | 1 | 4 |
+|RAID-1 (Mirroring) | 2 | 5 |
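+
+Before applying the RAID-5 FTT1 policy, you can confirm that a cluster meets the six-node minimum. A small sketch using the `az vmware` extension; the names are placeholders and the output shape can vary by CLI version:
+
+```bash
+# Show the current host count of a cluster (hypothetical names).
+# Depending on CLI version, the property may be nested as properties.clusterSize.
+az vmware cluster show \
+  --resource-group myResourceGroup \
+  --private-cloud myPrivateCloud \
+  --name av64-cluster-1 \
+  --query clusterSize
+```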
+
## Networking [!INCLUDE [avs-networking-description](includes/azure-vmware-solution-networking-description.md)]
Azure VMware Solution implements a shared responsibility model that defines dist
The shared responsibility matrix table outlines the main tasks that customers and Microsoft each handle in deploying and managing both the private cloud and customer application workloads. The following table provides a detailed list of roles and responsibilities between the customer and Microsoft, which encompasses the most frequent tasks and definitions. For further questions, contact Microsoft. | **Role** | **Task/details** | | -- | - |
-| Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Backup and restore VMware vCenter Server</li><li>Backup and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX |
-| Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>Additional Tier -1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using VMware HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>VMware HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure VMware HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises VMware HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure VMware HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, VNET, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions |
+| Microsoft - Azure VMware Solution | Physical infrastructure<ul><li>Azure regions</li><li>Azure availability zones</li><li>Express Route/Global Reach</ul></li>Compute/Network/Storage<ul><li>Rack and power Bare Metal hosts</li><li>Rack and power network equipment</ul></li>Software defined Data Center (SDDC) deploy/lifecycle<ul><li>VMware ESXi deploy, patch, and upgrade</li><li>VMware vCenter Servers deploy, patch, and upgrade</li><li>VMware NSX-T Data Centers deploy, patch, and upgrade</li><li>VMware vSAN deploy, patch, and upgrade</ul></li>SDDC Networking - VMware NSX-T Data Center provider config<ul><li>Microsoft Edge node/cluster, VMware NSX-T Data Center host preparation</li><li>Provider Tier-0 and Tenant Tier-1 Gateway</li><li>Connectivity from Tier-0 (using BGP) to Azure Network via Express Route</ul></li>SDDC Compute - VMware vCenter Server provider config<ul><li>Create default cluster</li><li>Configure virtual networking for vMotion, Management, vSAN, and others</ul></li>SDDC backup/restore<ul><li>Back up and restore VMware vCenter Server</li><li>Back up and restore VMware NSX-T Data Center NSX-T Manager</ul></li>SDDC health monitoring and corrective actions, for example: replace failed hosts</br><br>(optional) VMware HCX deploys with fully configured compute profile on cloud side as add-on</br><br>(optional) SRM deploys, upgrade, and scale up/down</br><br>Support - SDDC platforms and VMware HCX |
+| Customer | Request Azure VMware Solution host quote with Microsoft<br>Plan and create a request for SDDCs on Azure portal with:<ul><li>Host count</li><li>Management network range</li><li>Other information</ul></li>Configure SDDC network and security (VMware NSX-T Data Center)<ul><li>Network segments to host applications</li><li>More Tier -1 routers</li><li>Firewall</li><li>VMware NSX-T Data Center LB</li><li>IPsec VPN</li><li>NAT</li><li>Public IP addresses</li><li>Distributed firewall/gateway firewall</li><li>Network extension using VMware HCX or VMware NSX-T Data Center</li><li>AD/LDAP config for RBAC</ul></li>Configure SDDC - VMware vCenter Server<ul><li>AD/LDAP config for RBAC</li><li>Deploy and lifecycle management of Virtual Machines (VMs) and application<ul><li>Install operating systems</li><li>Patch operating systems</li><li>Install antivirus software</li><li>Install backup software</li><li>Install configuration management software</li><li>Install application components</li><li>VM networking using VMware NSX-T Data Center segments</ul></li><li>Migrate Virtual Machines (VMs)<ul><li>VMware HCX configuration</li><li>Live vMotion</li><li>Cold migration</li><li>Content library sync</ul></li></ul></li>Configure SDDC - vSAN<ul><li>Define and maintain vSAN VM policies</li><li>Add hosts to maintain adequate 'slack space'</ul></li>Configure VMware HCX<ul><li>Download and deploy HCA connector OVA in on-premises</li><li>Pairing on-premises VMware HCX connector</li><li>Configure the network profile, compute profile, and service mesh</li><li>Configure VMware HCX network extension/MON</li><li>Upgrade/updates</ul></li>Network configuration to connect to on-premises, virtual network, or internet</br><br>Add or delete hosts requests to cluster from Portal</br><br>Deploy/lifecycle management of partner (third party) solutions |
| Partner ecosystem | Support for their product/solution. For reference, the following are some of the supported Azure VMware Solution partner solution/product:<ul><li>BCDR - SRM, JetStream, Zerto, and others</li><li>Backup - Veeam, Commvault, Rubrik, and others</li><li>VDI - Horizon/Citrix</li><li>Multitenancy - VMware Cloud director service (CDs), VMware Cloud director availability(VCDA)</li><li>Security solutions - BitDefender, TrendMicro, Checkpoint</li><li>Other VMware products - vRA, vROps, AVI |
The next step is to learn key [private cloud and cluster concepts](concepts-priv
<!-- LINKS - external --> [concepts-private-clouds-clusters]: ./concepts-private-clouds-clusters.md+
azure-vmware Manage Arc Enabled Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/manage-arc-enabled-azure-vmware-solution.md
+
+ Title: Manage Arc-enabled Azure VMware private cloud
+description: Learn how to manage your Arc-enabled Azure VMware private cloud.
++ Last updated : 11/01/2023++++
+# Manage Arc-enabled Azure VMware private cloud
+
+In this article, learn how to update the Arc appliance credentials, upgrade the Arc resource bridge, and collect logs from the Arc resource bridge.
+
+## Update Arc appliance credential
+
+When **cloud admin** credentials are updated, use the following steps to update the credentials in the appliance store.
+
+1. Sign in to the jumpbox VM where the [onboard process](https://learn.microsoft.com/azure/azure-vmware/arc-enabled-azure-vmware-solution?tabs=windows#onboard-process-to-deploy-azure-arc) was performed, and change to the **onboarding** directory.
+1. Run the following command:
+    For a Windows-based jumpbox VM:
+
+    `./.temp/.env/Scripts/activate`
+
+    For a Linux-based jumpbox VM:
+
+    `./.temp/.env/bin/activate`
+
+1. Run the following command:
+
+ `az arcappliance update-infracredentials vmware --kubeconfig <kubeconfig file>`
+
+1. Run the following command:
+
+`az connectedvmware vcenter connect --debug --resource-group {resource-group} --name {vcenter-name-in-azure} --location {vcenter-location-in-azure} --custom-location {custom-location-name} --fqdn {vcenter-ip} --port {vcenter-port} --username cloudadmin@vsphere.local --password {vcenter-password}`
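+
+For example, with hypothetical values substituted for the placeholders:
+
+```bash
+az connectedvmware vcenter connect --debug \
+  --resource-group myResourceGroup \
+  --name myVCenter \
+  --location eastus \
+  --custom-location myCustomLocation \
+  --fqdn 10.0.0.4 \
+  --port 443 \
+  --username cloudadmin@vsphere.local \
+  --password <vcenter-password>
+```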
+
+> [!NOTE]
+> Customers need to ensure that the kubeconfig and SSH keys remain available, because they're required for the appliance upgrade, log collection, and credential update scenarios.
+
+**Parameters**
+
+Required parameters
+
+`--kubeconfig # kubeconfig of the appliance resource`
+
+**Examples**
+
+The following command sets the credential for the specified appliance resource.
+
+`az arcappliance setcredential <provider> --kubeconfig <kubeconfig>`
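+
+For example, for the VMware provider with a kubeconfig file in the working directory (hypothetical path):
+
+`az arcappliance setcredential vmware --kubeconfig ./kubeconfig`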
+
+## Upgrade the Arc resource bridge
+
+Azure Arc-enabled Azure VMware Solution requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of the Arc resource bridge are released to include security and feature updates.
+
+> [!NOTE]
+> To upgrade the Arc resource bridge VM to the latest version, you'll need to perform the onboarding again with the **same resource IDs**. This will cause some downtime as operations that are performed through Arc during this time might fail.
+
+Use the following steps to perform a manual upgrade of the Arc appliance virtual machine (VM).
+
+1. Sign in to vCenter Server.
+1. Locate the Arc appliance VM, which should be in the resource pool that was configured during onboarding.
+1. Power off the VM.
+1. Delete the VM.
+1. Delete the downloaded template corresponding to the VM.
+1. Delete the resource bridge **Azure Resource Manager** resource.
+1. Get the `config_avs.json` file used by the previous onboarding script and add the following configuration item (one way to script this change is shown after these steps):
+
+ `"register":false`
+
+1. Download the latest version of the Azure VMware Solution [onboarding script](https://learn.microsoft.com/azure/azure-vmware/deploy-arc-for-azure-vmware-solution?tabs=windows#onboard-process-to-deploy-azure-arc).
+1. Run the new onboarding script with the previous `config_avs.json` from the jump box VM, without changing other config items.
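+
+One way to make the configuration change in step 7, assuming `jq` is available on the jump box and the file is named `config_avs.json`:
+
+```bash
+# Add "register": false to the existing onboarding configuration.
+jq '. + {"register": false}' config_avs.json > config_avs.tmp && mv config_avs.tmp config_avs.json
+```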
+
+## Collect logs from the Arc resource bridge
+
+Perform ongoing administration for Arc-enabled VMware vSphere by [collecting logs from the Arc resource bridge](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/administer-arc-vmware#collecting-logs-from-the-arc-resource-bridge).
azure-vmware Remove Arc Enabled Azure Vmware Solution Vsphere Resources From Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure.md
+
+ Title: Remove Arc-enabled Azure VMware Solution vSphere resources from Azure
+description: Learn how to remove Arc-enabled Azure VMware Solution vSphere resources from Azure.
++ Last updated : 11/01/2023+++
+# Remove Arc-enabled Azure VMware Solution vSphere resources from Azure
+
+In this article, learn how to cleanly remove your VMware vCenter environment from Azure Arc-enabled VMware vSphere. For VMware vSphere environments that you no longer want to manage with Azure Arc-enabled VMware vSphere, use the information in this article to perform the following actions:
+
+- Remove guest management from VMware virtual machines (VMs).
+- Remove VMware vSphere resource from Azure Arc.
+- Remove Arc resource bridge related items in your vCenter.
+
+## Remove guest management from VMware VMs
+
+Before you remove the vSphere environment from Azure Arc, you must first remove guest management from all Arc-enabled Azure VMware Solution VMs where it was enabled, to prevent continued billing for Azure management services.
+
+When you enable guest management on Arc-enabled Azure VMware Solution VMs, the Arc connected machine agent is installed on them. Once guest management is enabled, you can install VM extensions on the VMs and use Azure management services like Log Analytics on them.
+
+To completely remove guest management, use the following steps to remove any VM extensions from the virtual machine, disconnect the agent, and uninstall the software from your virtual machine. It's important to complete each of the three steps to fully remove all related software components from your virtual machines.
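+
+The sections that follow use the portal; the `az connectedvmware` CLI extension exposes similar operations if you prefer scripting. The following is a hedged sketch only; the command group and flags shown here are assumptions to verify against your CLI version:
+
+```bash
+# List, then remove, extensions on an Arc-enabled VMware VM (hypothetical names;
+# verify these commands exist in your version of the connectedvmware extension).
+az connectedvmware vm extension list \
+  --vm-name myVM --resource-group myResourceGroup --output table
+
+az connectedvmware vm extension delete \
+  --name MicrosoftMonitoringAgent \
+  --vm-name myVM --resource-group myResourceGroup
+```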
+
+### Remove VM extensions
+
+Use the following steps to uninstall extensions from the portal.
+
+> [!NOTE]
+> **Steps 2-5** must be performed for all the VMs that have VM extensions installed.
+
+1. Sign in to your Azure VMware Solution private cloud.
+1. Select **Virtual machines** in **Private cloud**, found in the left navigation under "vCenter Server Inventory Page".
+1. Search and select the virtual machine where you have **Guest management** enabled.
+1. Select **Extensions**.
+1. Select the extensions and select **Uninstall**.
+
+### Disable guest management from Azure Arc
+
+To avoid problems onboarding the same VM to **Guest management**, we recommend you do the following steps to cleanly disable guest management capabilities.
+
+> [!NOTE]
+> **Steps 2-3** must be performed for **all VMs** that have **Guest management** enabled.
+
+1. Sign in to the virtual machine using administrator or root credentials and run the following command in the shell.
+ 1. `azcmagent disconnect --force-local-only`.
+1. Uninstall the `ConnectedMachine agent` from the machine.
+1. Set the **identity** on the VM resource to **none**.
+
+## Uninstall agents from Virtual Machines (VMs)
+
+### Windows VM uninstall
+
+To uninstall the Windows agent from the machine, use the following steps:
+
+1. Sign in to the computer with an account that has administrator permissions.
+2. In **Control Panel**, select **Programs and Features**.
+3. In **Programs and Features**, select **Azure Connected Machine Agent**, select **Uninstall**, then select **Yes**.
+4. Delete the `C:\Program Files\AzureConnectedMachineAgent` folder.
+
+### Linux VM uninstall
+
+To uninstall the Linux agent, the command to use depends on the Linux operating system. You must have `root` access permissions or your account must have elevated rights using sudo.
+
+- For Ubuntu, run the following command:
+
+ ```bash
+ sudo apt purge azcmagent
+ ```
+
+- For RHEL, CentOS, or Oracle Linux, run the following command:
+
+ ```bash
+ sudo yum remove azcmagent
+ ```
+
+- For SLES, run the following command:
+
+ ```bash
+ sudo zypper remove azcmagent
+ ```
+
+## Remove VMware vSphere resources from Azure
+
+When you activate Arc-enabled Azure VMware Solution resources in Azure, a representation is created for them in Azure. Before you can delete the vCenter Server resource in Azure, you need to delete all of the Azure resource representations you created for your vSphere resources. To delete the Azure resource representations you created, do the following steps:
+
+1. Go to the Azure portal.
+1. Choose **Virtual machines** from Arc-enabled VMware vSphere resources in the private cloud.
+1. Select all the VMs that have the **Azure Enabled** value set to **Yes**.
+1. Select **Remove from Azure**. This step starts deployment and removes these resources from Azure. The resources remain in your vCenter Server.
+ 1. Repeat steps 2, 3, and 4 for **Resource pools/clusters/hosts**, **Templates**, **Networks**, and **Datastores**.
+1. When the deletion completes, select **Overview**.
+ 1. Note the Custom location and the Azure Arc Resource bridge resources in the Essentials section.
+1. Select **Remove from Azure** to remove the vCenter Server resource from Azure.
+1. Go to vCenter Server resource in Azure and delete it.
+1. Go to the Custom location resource and select **Delete**.
+1. Go to the Azure Arc Resource bridge resources and select **Delete**.
+
+At this point, all of your Arc-enabled VMware vSphere resources are removed from Azure.
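+
+If you script your cleanup, the custom location and resource bridge deletions can also be done with the `customlocation` and `arcappliance` CLI extensions. A hedged sketch; the names and the configuration file path are placeholders from your onboarding run:
+
+```bash
+# Delete the custom location created during onboarding (hypothetical names).
+az customlocation delete --name myCustomLocation --resource-group myResourceGroup
+
+# Delete the Arc resource bridge using the appliance config generated at onboarding.
+az arcappliance delete vmware --config-file ./myAppliance-appliance.yaml
+```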
+
+## Remove Arc resource bridge related items in your vCenter
+
+During onboarding, an Azure Arc resource bridge is deployed into your VMware vSphere environment to create a connection between your VMware vCenter and Azure. As the last step, you must delete the resource bridge VM as well as the VM template created during onboarding.
+
+As a last step, run the following command:
+
+`az rest --method delete --url https://management.azure.com/subscriptions/{subId}/resourcegroups/{rg`
+
+Once that step is done, Arc no longer works on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it doesn't affect the Azure VMware Solution private cloud for the customer.
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Here are the main scenarios where rooms are useful:
- **Rooms enable scheduled communication experience.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can schedule and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services. - **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. This will allow only a subset of users with assigned Communication Services identities to join a room call. - **Rooms enable structured communications through roles and permissions.** Rooms allow developers to assign predefined roles to users to exercise a higher degree of control and structure in communication. Ensure only presenters can speak and share content in a large meeting or in a virtual conference.-- **Add PSTN participants.** Invite public switched telephone network (PSTN) participants to a call using a number purchased through your subscription or via Azure direct routing to your Session Border Controller (SBC).
+- **Add PSTN participants. (Currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/))** Invite public switched telephone network (PSTN) participants to a call using a number purchased through your subscription or via Azure direct routing to your Session Border Controller (SBC).
## When to use rooms
The tables below provide detailed capabilities mapped to the roles. At a high le
| - Render a video in multiple places (local camera or remote stream) | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> | | - Set/Update video scaling mode | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> | | - Render remote video stream | ✔️ | ✔️ | ✔️ |
-| **Add PSTN participants** | | |
-| - Call participants using phone calls | ✔️ | ❌ | ❌ |
+| **Add PSTN participants** **| | |
+| - Call participants using phone calls | ✔️** | ❌ | ❌ |
-*) Only available on the web calling SDK. Not available on iOS and Android calling SDKs
+\* Only available on the web calling SDK. Not available on iOS and Android calling SDKs.
+
+** Currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
## Event handling
communication-services Enable User Engagement Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/enable-user-engagement-tracking.md
In this quick start, you'll learn about how to enable user engagement tracking f
**Your email domain is now ready to send emails with user engagement tracking. Please be aware that user engagement tracking is applicable to HTML content and will not function if you submit the payload in plaintext.** You can now subscribe to Email User Engagement operational logs - provides information related to 'open' and 'click' user engagement metrics for messages sent from the Email service.-
+> [!IMPORTANT]
+> If you plan to enable open/click tracking for your email links, ensure that you are formatting the email content in HTML correctly. Specifically, make sure your tracking content is properly encapsulated within the payload, as demonstrated below:
+```html
+ <a href="https://www.contoso.com">Contoso Inc.</a>
+```
+ ## Next steps - Access logs for [Email Communication Service](../../concepts/analytics/logs/email-logs.md).
-The following documents may be interesting to you:
+The following documents might be interesting to you:
- Familiarize yourself with the [Email client library](../../concepts/email/sdk-features.md) - [Get started by connecting Email Communication Service with a Azure Communication Service resource](../../quickstarts/email/connect-email-communication-resource.md)
communication-services Get Started Rooms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/get-started-rooms.md
The table below lists the main properties of `room` objects:
| `roomId` | Unique `room` identifier. | | `validFrom` | Earliest time a `room` can be used. | | `validUntil` | Latest time a `room` can be used. |
-| `pstnDialOutEnabled` | Enable or disable dialing out to a PSTN number in a room.|
+| `pstnDialOutEnabled`* | Enable or disable dialing out to a PSTN number in a room.|
| `participants` | List of participants to a `room`. Specified as a `CommunicationIdentifier`. | | `roleType` | The role of a room participant. Can be either `Presenter`, `Attendee`, or `Consumer`. |
+\* `pstnDialOutEnabled` is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
::: zone pivot="platform-azcli" [!INCLUDE[Use rooms with Azure CLI](./includes/rooms-quickstart-az-cli.md)]
communications-gateway Interoperability Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-teams-direct-routing.md
For each customer, you must:
As part of arranging updates to customer tenants, you must create DNS records containing a verification code (provided by Microsoft 365 when the customer updates their tenant with the domain name) on a DNS server that you control. These records allow Microsoft 365 to verify that the customer tenant is authorized to use the domain name. Azure Communications Gateway provides the DNS server that you must use. You must obtain the verification code from the customer and upload it with Azure Communications Gateway's Provisioning API to generate the DNS TXT records that verify the domain. > [!TIP]
-> For a walkthrough of setting up a customer tenant and subdomain for your testing, see [Configure a test customer for Microsoft Teams Direct Routing with Azure Communications Gateway](configure-test-customer-teams-direct-routing.md). When you onboard a real customer, you'll need to follow a similar process, but you'll typically need to ask them to carry out the steps that need access to their tenant.
+> For a walkthrough of setting up a customer tenant and numbers for your testing, see [Configure a test customer for Microsoft Teams Direct Routing with Azure Communications Gateway](configure-test-customer-teams-direct-routing.md) and [Configure test numbers for Microsoft Teams Direct Routing with Azure Communications Gateway](configure-test-numbers-teams-direct-routing.md). When you onboard a real customer, you'll need to follow a similar process, but you'll typically need to ask your customer to carry out the steps that need access to their tenant.
## Support for caller ID screening
communications-gateway Interoperability Zoom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-zoom.md
Azure Communications Gateway can manipulate signaling and media to meet the requ
## Role and position in the network
-Azure Communications Gateway sits at the edge of your fixed line and mobile networks. It connects these networks to Zoom servers, allowing you to support the Zoom Phone Cloud Peering program. The following diagram shows where Azure Communications Gateway sits in your network.
+Azure Communications Gateway sits at the edge of your fixed networks. It connects these networks to Zoom servers, allowing you to support the Zoom Phone Cloud Peering program. The following diagram shows where Azure Communications Gateway sits in your network.
:::image type="complex" source="media/azure-communications-gateway-architecture-zoom.svg" alt-text="Architecture diagram for Azure Communications Gateway for Microsoft Teams Direct Routing." lightbox="media/azure-communications-gateway-architecture-zoom.svg"::: Architecture diagram showing Azure Communications Gateway connecting to Zoom servers and a fixed operator network over SIP and RTP. Azure Communications Gateway and Zoom Phone Cloud Peering connect multiple customers to the operator network. Azure Communications Gateway also has a provisioning API to which a BSS client in the operator's management network must connect. Azure Communications Gateway contains certified SBC function. :::image-end:::
+You provide a trunk towards Zoom (via Azure Communications Gateway) for your customers. Calls flow from Zoom clients through the Zoom servers and Azure Communications Gateway into your network. [!INCLUDE [communications-gateway-multitenant](includes/communications-gateway-multitenant.md)].
-You provide a trunk towards Zoom (via Azure Communications Gateway) for your customers. Calls flow from Zoom clients through the Zoom servers and Azure Communications Gateway into your network.
+You must provision Azure Communications Gateway with the details of the numbers that you upload to Zoom. This provisioning allows Azure Communications Gateway to route calls correctly. For more information, see [Identifying Zoom calls](#identifying-zoom-calls).
-
-Azure Communications Gateway does not support Premises Peering (where each customer has an eSBC) for Zoom Phone.
+Azure Communications Gateway doesn't support Premises Peering (where each customer has an eSBC) for Zoom Phone.
## SIP signaling
The Zoom Phone Cloud Peering program requires SRTP for media. Azure Communicatio
### Media handling for calls
-Azure Communications Gateway can use Opus, G.722 and G.711 towards Zoom servers, with a packetization time of 20ms. You must select the codecs that you want to support when you deploy Azure Communications Gateway.
+Azure Communications Gateway can use Opus, G.722 and G.711 towards Zoom servers, with a packetization time of 20 ms. You must select the codecs that you want to support when you deploy Azure Communications Gateway.
-If your network cannot support a packetization time of 20ms, you must contact your onboarding team or raise a support request to discuss your requirements for transrating (changing packetization time).
+If your network can't support a packetization time of 20 ms, you must contact your onboarding team or raise a support request to discuss your requirements for transrating (changing packetization time).
### Media interworking options
Azure Communications Gateway offers multiple media interworking options. For exa
For full details of the media interworking features available in Azure Communications Gateway, raise a support request.
+## Identifying Zoom calls
+
+You must provision Azure Communications Gateway with all the numbers that you upload to Zoom and indicate that these numbers are enabled for Zoom service. This provisioning allows Azure Communications Gateway to route calls to and from Zoom. It requires [Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md).
+
+> [!IMPORTANT]
+> If numbers that you upload to Zoom aren't configured on Azure Communications Gateway, calls involving those numbers fail.
+>
+> [Configure test numbers for Zoom Phone Cloud Peering with Azure Communications Gateway](configure-test-numbers-zoom.md) explains how to set up test numbers for integration testing. You will need to follow a similar process for real customer numbers.
+
+Optionally, you can indicate to your network that calls are from Zoom by:
+
+- Using the Provisioning API to add a header to calls associated with Zoom numbers.
+- Configuring Zoom to add a header with custom contents to SIP INVITEs (as part of uploading numbers to Zoom). For more information on this header, see Zoom's _Zoom Phone Provider Exchange Solution Reference Guide_.
+ ## Next steps - Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
container-apps Deploy Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio-code.md
Previously updated : 09/01/2022 Last updated : 10/29/2023
In this tutorial, you'll deploy a containerized application to Azure Container A
## Clone the project
-1. Begin by cloning the [sample repository](https://github.com/azure-samples/containerapps-albumapi-javascript) to your machine using the following command.
+1. Open a new Visual Studio Code window.
+
+1. Select <kbd>F1</kbd> to open the command palette.
+
+1. Enter **Git: Clone** and press **Enter**.
+
+1. Enter the following URL to clone the sample project:
```git
- git clone https://github.com/Azure-Samples/containerapps-albumapi-javascript.git
+ https://github.com/Azure-Samples/containerapps-albumapi-javascript.git
``` > [!NOTE] > This tutorial uses a JavaScript project, but the steps are language agnostic.
-1. Open Visual Studio Code.
-
-1. Select **F1** to open the command palette.
+1. Select a folder to clone the project into.
-1. Select **File > Open Folder...** and select the folder where you cloned the sample project.
+1. Select **Open** to open the project in Visual Studio Code.
## Sign in to Azure
-1. Select **F1** to open the command palette.
+1. Select <kbd>F1</kbd> to open the command palette.
1. Select **Azure: Sign In** and follow the prompts to authenticate. 1. Once signed in, return to Visual Studio Code.
-## Create the container registry and Docker image
-
-Docker images contain the source code and dependencies necessary to run an application. This sample project includes a Dockerfile used to build the application's container. Since you can build and publish the image for your app directly in Azure, a local Docker installation isn't required.
-
-Container images are stored inside container registries. You can create a container registry and upload an image of your app in a single workflow using Visual Studio Code.
-
-1. In the _Explorer_ window, expand the _src_ folder to reveal the Dockerfile.
-
-1. Right-click the Dockerfile, and select **Build Image in Azure**.
-
- This action opens the command palette and prompts you to define a container tag.
-
-1. Enter a tag for the container. Accept the default, which is the project name with a run ID suffix.
-
-1. Select the Azure subscription that you want to use.
-
-1. Select **+ Create new registry**, or if you already have a registry you'd like to use, select that item and skip to creating and deploying to the container app.
-
-1. Enter a unique name for the new registry such as `msdocscapps123`, where `123` are unique numbers of your own choosing, and then select enter.
-
- Container registry names must be globally unique across all of Azure.
-
-1. Select **Basic** as the SKU.
-
-1. Choose **+ Create new resource group**, or select an existing resource group you'd like to use.
-
- For a new resource group, enter a name such as `msdocscontainerapps`, and press enter.
-
-1. Select the location that is nearest to you. Select **Enter** to finalize the workflow, and Azure begins creating the container registry and building the image.
-
- This process may take a few moments to complete.
-
-1. Select **Linux** as the image base operating system (OS).
-
-Once the registry is created and the image is built successfully, you're ready to create the container app to host the published image.
-
-## Create and deploy to the container app
+## Create and deploy to Azure Container Apps
The Azure Container Apps extension for Visual Studio Code enables you to choose existing Container Apps resources, or create new ones to deploy your applications to. In this scenario, you create a new Container App environment and container app to host your application. After installing the Container Apps extension, you can access its features under the Azure control panel in Visual Studio Code.
-### Create the Container Apps environment
+1. Select <kbd>F1</kbd> to open the command palette and run the **Azure Container Apps: Deploy Project from Workspace** command.
-Every container app must be part of a Container Apps environment. An environment provides an isolated network for one or more container apps, making it possible for them to easily invoke each other. You'll need to create an environment before you can create the container app itself.
-
-1. Select <kbd>F1</kbd> to open the command palette.
-
-1. Enter **Azure Container Apps: Create Container Apps Environment...** and enter the following values as prompted by the extension.
+1. Enter the following values as prompted by the extension.
| Prompt | Value | |--|--|
- | Name | Enter **my-aca-environment** |
- | Region | Select the region closest to you |
-
-Once you issue this command, Azure begins to create the environment for you. This process may take a few moments to complete. Creating a container app environment also creates a log analytics workspace for you in Azure.
-
-### Create the container app and deploy the Docker image
-
-Now that you have a container app environment in Azure you can create a container app inside of it. You can also publish the Docker image you created earlier as part of this workflow.
-
-1. Select <kbd>F1</kbd> to open the command palette.
-
-1. Enter **Azure Container Apps: Create Container App...** and enter the following values as prompted by the extension.
-
- | Prompt | Value | Remarks |
- |--|--|--|
- | Environment | Select **my-aca-environment** | |
- | Name | Enter **my-container-app** | |
- | Container registry | Select **Azure Container Registries**, then select the registry you created as you published the container image. | |
- | Repository | Select the container registry repository where you published the container image. | |
- | Tag | Select **latest** | |
- | Environment variables | Select **Skip for now** | |
- | Ingress | Select **Enable** | |
- | HTTP traffic type | Select **External** | |
- | Port | Enter **3500** | You set this value to the port number that your container uses. |
+ | Select subscription | Select the Azure subscription you want to use. |
+ | Select a container apps environment | Select **Create new container apps environment**. You're only asked this question if you have existing Container Apps environments. |
+ | Enter a name for the new container app resource(s) | Enter **my-container-app**. |
+ | Select a location | Select an Azure region close to you. |
+ | Would you like to save your deployment configuration? | Select **Save**. |
-During this process, Visual Studio Code and Azure create the container app for you. The published Docker image you created earlier is also be deployed to the app. Once this process finishes, Visual Studio Code displays a notification with a link to browse to the site. Select this link, and to view your app in the browser.
+ The Azure activity log panel opens and displays the deployment progress. This process might take a few minutes to complete.
+1. Once this process finishes, Visual Studio Code displays a notification. Select **Browse** to open the deployed app in a browser.
-You can also append the `/albums` path at the end of the app URL to view data from a sample API request.
+ In the browser's location bar, append the `/albums` path at the end of the app URL to view data from a sample API request.
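+
+   For example, from a terminal (the URL is a placeholder for the one generated for your app):
+
+   ```bash
+   curl https://my-container-app.<unique-identifier>.<region>.azurecontainerapps.io/albums
+   ```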
Congratulations! You successfully created and deployed your first container app using Visual Studio Code.
If you're not going to continue to use this application, you can delete the Azur
Follow these steps in the Azure portal to remove the resources you created:
-1. Select the **msdocscontainerapps** resource group from the *Overview* section.
+1. Select the **my-container-app** resource group from the *Overview* section.
1. Select the **Delete resource group** button at the top of the resource group *Overview*.
-1. Enter the resource group name **msdocscontainerapps** in the *Are you sure you want to delete "my-container-apps"* confirmation dialog.
+1. Enter the resource group name **my-container-app** in the *Are you sure you want to delete "my-container-app"* confirmation dialog.
1. Select **Delete**.
- The process to delete the resource group may take a few minutes to complete.
+ The process to delete the resource group might take a few minutes to complete.
> [!TIP] > Having issues? Let us know on GitHub by opening an issue in the [Azure Container Apps repo](https://github.com/microsoft/azure-container-apps).
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
Title: Built-in policy definitions for Azure Container Apps
description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
container-instances Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md
Previously updated : 11/03/2023 Last updated : 11/06/2023 # Azure Policy built-in definitions for Azure Container Instances
container-registry Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md
Title: Built-in policy definitions for Azure Container Registry
description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
cosmos-db How To Container Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-container-copy.md
This article describes how to create, monitor, and manage intra-account containe
## Prerequisites
-* You may use the portal [Cloud Shell](/azure/cloud-shell/quickstart?tabs=powershell) to run container copy commands. Alternately, you may run the commands locally; make sure you have [Azure CLI](/cli/azure/install-azure-cli) downloaded and installed on your machine.
+* You may use the portal [Cloud Shell](/azure/cloud-shell/get-started?tabs=powershell) to run container copy commands. Alternately, you may run the commands locally; make sure you have [Azure CLI](/cli/azure/install-azure-cli) downloaded and installed on your machine.
* Currently, container copy is only supported in [these regions](intra-account-container-copy.md#supported-regions). Make sure your account's write region belongs to this list.
cosmos-db How To Develop Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-develop-emulator.md
The certificate for the emulator is available in the `_explorer/emulator.pem` pa
> [!NOTE] > You may need to change the host (or IP address) and port number if you have previously modified those values.
-1. Install the certificate according to the process typically used for your operating system. For example, in Linux you would copy the certificate to the `/usr/local/share/ca-certificats/` path.
+1. Install the certificate according to the process typically used for your operating system. For example, in Linux you would copy the certificate to the `/usr/local/share/ca-certificates/` path.
```bash cp ~/emulatorcert.crt /usr/local/share/ca-certificates/
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Title: Azure Cosmos DB – Unified AI Database-+ description: Azure Cosmos DB is a global multi-model database and ideal database for AI applications requiring speed, elasticity and availability with native support for NoSQL and relational data.
cosmos-db Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md
Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
cosmos-db How To Configure Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/how-to-configure-authentication.md
Last updated 09/19/2023
> [!IMPORTANT] > Microsoft Entra authentication in Azure Cosmos DB for PostgreSQL is currently in preview. > This preview version is provided without a service level agreement, and it's not recommended
-> for production workloads. Certain features might not be supported or might have constrained
+> for production workloads. Certain features might not be supported or might have constrained
> capabilities. > > You can see a complete list of other new features in [preview features](product-updates.md#features-in-preview). In this article, you configure authentication methods for Azure Cosmos DB for PostgreSQL. You manage Microsoft Entra admin users and native PostgreSQL roles for authentication with Azure Cosmos DB for PostgreSQL. You also learn how to use a Microsoft Entra token with Azure Cosmos DB for PostgreSQL.
-An Azure Cosmos DB for PostgreSQL cluster is created with one built-in native PostgreSQL role named 'citus'. You can add more native PostgreSQL roles after cluster provisioning is completed.
+An Azure Cosmos DB for PostgreSQL cluster is created with one built-in native PostgreSQL role named 'citus'. You can add more native PostgreSQL roles after cluster provisioning is completed.
You can also configure Microsoft Entra authentication for Azure Cosmos DB for PostgreSQL. You can enable Microsoft Entra authentication in addition or instead of the native PostgreSQL authentication on your cluster. You can change authentication methods enabled on cluster at any point after the cluster is provisioned. When Microsoft Entra authentication is enabled, you can add multiple Microsoft Entra users to an Azure Cosmos DB for PostgreSQL cluster and make any of them administrators. Microsoft Entra user can be a user or a service principal.
Once done proceed with [configuring Microsoft Entra authentication](#configure-a
To add or remove Microsoft Entra roles on cluster, follow these steps on **Authentication** page:
-1. In **Microsoft Entra authentication (preview)** section, select **Add Microsoft Entra admins**.
+1. In **Microsoft Entra authentication (preview)** section, select **Add Microsoft Entra admins**.
1. In **Select Microsoft Entra Admins** panel, select one or more valid Microsoft Entra user or enterprise application in the current AD tenant to be a Microsoft Entra administrator on your Azure Cosmos DB for PostgreSQL cluster. 1. Use **Select** to confirm your choice. 1. In the **Authentication** page, select **Save** in the toolbar to save changes or proceed with adding native PostgreSQL roles.
-
+ ## Configure native PostgreSQL authentication To add Postgres roles on cluster, follow these steps on **Authentication** page:
We've tested the following clients:
- **Other libpq-based clients**: Examples include common application frameworks and object-relational mappers (ORMs). - **pgAdmin**: Clear **Connect now** at server creation.
-Use the following procedures to authenticate with Microsoft Entra ID as an Azure Cosmos DB for PostgreSQL user. You can follow along in [Azure Cloud Shell](./../../cloud-shell/quickstart.md), on an Azure virtual machine, or on your local machine.
+Use the following procedures to authenticate with Microsoft Entra ID as an Azure Cosmos DB for PostgreSQL user. You can follow along in [Azure Cloud Shell](./../../cloud-shell/get-started.md), on an Azure virtual machine, or on your local machine.
### Sign in to the user's Azure subscription
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --quer
> [!NOTE]
-> Make sure PGPASSWORD variable is set to the Microsoft Entra access token for your
-> subscription for Microsoft Entra authentication. If you need to do Postgres role authentication
-> from the same session you can set PGPASSWORD to the Postgres role password
-> or clear the PGPASSWORD variable value to enter the password interactively.
+> Make sure PGPASSWORD variable is set to the Microsoft Entra access token for your
+> subscription for Microsoft Entra authentication. If you need to do Postgres role authentication
+> from the same session you can set PGPASSWORD to the Postgres role password
+> or clear the PGPASSWORD variable value to enter the password interactively.
> Authentication would fail with the wrong value in PGPASSWORD. Now you can initiate a connection with Azure Cosmos DB for PostgreSQL as you usually would (without 'password' parameter in the command line):
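For example, a connection might look like the following sketch; the cluster host name and user are placeholders, and the `az account get-access-token` call completes the truncated command shown above:

```bash
# Store the Microsoft Entra access token as the PostgreSQL password
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)

# Connect without a 'password' parameter; psql reads PGPASSWORD (host and user are placeholders)
psql "host=<cluster-name>.postgres.cosmos.azure.com port=5432 dbname=citus user=user@tenant.onmicrosoft.com sslmode=require"
```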
For example, to allow PostgreSQL `db_user` to read `mytable`, grant the permissi
GRANT SELECT ON mytable TO db_user; ```
-To grant the same permissions to Microsoft Entra role `user@tenant.onmicrosoft.com` use the following command:
+To grant the same permissions to Microsoft Entra role `user@tenant.onmicrosoft.com` use the following command:
```sql GRANT SELECT ON mytable TO "user@tenant.onmicrosoft.com";
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/autoscale.md
The script in this article creates an Azure Cosmos DB for Apache Cassandra accou
- This script requires Azure CLI version 2.12.1 or later.
- - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
[![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
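If you run the script locally instead, you can verify that your installed Azure CLI meets the minimum version with standard commands:

```azurecli
# Show the installed Azure CLI version
az --version

# Upgrade in place if it's older than the required minimum
az upgrade
```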
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/autoscale.md
The script in this article creates an Azure Cosmos DB for Gremlin account, datab
- This script requires Azure CLI version 2.30 or later.
- - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
[![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/serverless.md
The script in this article creates an Azure Cosmos DB for Gremlin serverless acc
- This script requires Azure CLI version 2.30 or later.
- - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure to select **Bash** in the environment field at the upper left of the shell window. Cloud Shell has the latest version of Azure CLI.
[![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/autoscale.md
The script in this article creates an Azure Cosmos DB for NoSQL account, databas
- This script requires Azure CLI version 2.0.73 or later.
- - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
[![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
The script in this article creates an Azure Cosmos DB for NoSQL account, databas
```azurecli subscription="<subscriptionId>" # add subscription here
-
+ az account set -s $subscription # ...or use 'az login' ```
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/autoscale.md
The script in this article creates an Azure Cosmos DB for Table account and tabl
- This script requires Azure CLI version 2.12.1 or later.
- - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
[![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
The script in this article creates an Azure Cosmos DB for Table account and tabl
```azurecli subscription="<subscriptionId>" # add subscription here
-
+ az account set -s $subscription # ...or use 'az login' ```
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/lock.md
[!INCLUDE[Table](../../../includes/appliesto-table.md)]
-The script in this article demonstrates performing resource lock operations for a API for Table table.
+The script in this article demonstrates performing resource lock operations for an API for Table table.
> [!IMPORTANT] > To enable resource locking, the Azure Cosmos DB account must have the `disableKeyBasedMetadataWriteAccess` property enabled. This property prevents any changes to resources from clients that connect via account keys, such as the Azure Cosmos DB Table SDK, Azure Storage Table SDK, or Azure portal. For more information, see [Preventing changes from SDKs](../../../role-based-access-control.md#prevent-sdk-changes).
The script in this article demonstrates performing resource lock operations for
- This script requires Azure CLI version 2.12.1 or later.
- - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
[![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
The script in this article demonstrates performing resource lock operations for
```azurecli subscription="<subscriptionId>" # add subscription here
-
+ az account set -s $subscription # ...or use 'az login' ```
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/serverless.md
The script in this article creates an Azure Cosmos DB for Table serverless accou
- This script requires Azure CLI version 2.12.1 or later.
- - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/quickstart.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
+ - You can run the script in the Bash environment in [Azure Cloud Shell](../../../../cloud-shell/get-started.md). When Cloud Shell opens, make sure **Bash** appears in the environment field at the upper left of the shell window. Cloud Shell always has the latest version of Azure CLI.
[![Launch Cloud Shell in a new window](../../../../../includes/media/cloud-shell-try-it/hdi-launch-cloud-shell.png)](https://shell.azure.com)
The script in this article creates an Azure Cosmos DB for Table serverless accou
```azurecli subscription="<subscriptionId>" # add subscription here
-
+ az account set -s $subscription # ...or use 'az login' ```
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 07/21/2023 Last updated : 11/07/2023
This article explains the common tasks that an Enterprise Agreement (EA) administrator accomplishes in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_GTM/ModernBillingMenuBlade/AllBillingScopes). A direct enterprise agreement is signed between Microsoft and an enterprise agreement customer. Conversely, an indirect EA is one where a customer signs an agreement with a Microsoft partner. This article is applicable for both direct and indirect EA customers. > [!NOTE]
-> We recommend that both direct and indirect EA Azure customers use Cost Management + Billing in the Azure portal to manage their enrollment and billing instead of using the EA portal. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
->
-> As of February 20, 2023 indirect EA customers no longer manage their billing account in the EA portal. Instead, they use the Azure portal.
+> On November 15, 2023, the Azure Enterprise portal is retiring for EA enrollments in the Commercial cloud and is becoming read-only for EA enrollments in the Azure Government cloud.
+> Customers and Partners should use Cost Management + Billing in the Azure portal to manage their enrollments. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
>
-> Until August 14, 2023, this change doesn't affect customers with Azure Government EA enrollments. They continue using the EA portal to manage their enrollment until then. However, after August 14, 2023, EA customers won't be able to manage their Azure Government EA enrollments from the [Azure portal](https://portal.azure.com). Instead, they can manage it from the Azure Government portal at [https://portal.azure.us](https://portal.azure.us). The functionality mentioned in this article the same as the Azure Government portal.
+> Since August 14, 2023, EA customers haven't been able to manage their Azure Government EA enrollments from the [Azure portal](https://portal.azure.com). Instead, they can manage them from the Azure Government portal at [https://portal.azure.us](https://portal.azure.us). The functionality mentioned in this article is the same as in the Azure Government portal.
## Manage your enrollment
cost-management-billing Direct Ea Azure Usage Charges Invoices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-azure-usage-charges-invoices.md
Title: View your Azure usage summary details and download reports for EA enrollm
description: This article explains how enterprise administrators of direct and indirect Enterprise Agreement (EA) enrollments can view a summary of their usage data, Azure Prepayment consumed, and charges associated with other usage in the Azure portal. Previously updated : 07/14/2023 Last updated : 11/06/2023
To review and verify the charges on your invoice, you must be an Enterprise Admi
## Review usage charges
-To view detailed usage for specific accounts, download the usage detail report. Usage files may be large. If you prefer, you can use the exports feature to get the same data exported to an Azure Storage account. For more information, see [Export usage details to a storage account](../costs/tutorial-export-acm-data.md).
+To view detailed usage for specific accounts, download the usage detail report. Usage files can be large. If you prefer, you can use the exports feature to get the same data exported to an Azure Storage account. For more information, see [Export usage details to a storage account](../costs/tutorial-export-acm-data.md).
As an enterprise administrator:
Enterprise administrators and partner administrators can also view an overall su
## Download or view your Azure billing invoice
-An EA administrator can download the invoice from the [Azure portal](https://portal.azure.com) or have it sent in email. Invoices are sent to whoever is set up to receive invoices for the enrollment. If someone other than an EA administrator needs an email copy of the invoice, an EA administrator can send them a copy.
+An EA administrator can download the invoice from the [Azure portal](https://portal.azure.com) or send it by email. Invoices are sent to whoever is set up to receive invoices for the enrollment. If someone other than an EA administrator needs an email copy of the invoice, an EA administrator can send them a copy.
Only an Enterprise Administrator has permission to view and download the billing invoice. To learn more about getting access to billing information, see [Manage access to Azure billing using roles](manage-billing-access.md).
You receive an Azure invoice when any of the following events occur during your
- Visual Studio Professional (Annual) - **Marketplace charges** - Azure Marketplace purchases and usage aren't covered by your organization's credit. So, you're invoiced for Marketplace charges despite your credit balance. In the Azure portal, an Enterprise Administrator can enable and disable Marketplace purchases.
-Your invoice displays Azure usage charges with costs associated to them first, followed by any Marketplace charges. If you have a credit balance, it's applied to Azure usage. Your invoice shows Azure usage and Marketplace usage without any cost last.
+Your invoice displays Azure usage charges with costs associated to them first, followed by any Marketplace charges. If you have a credit balance, it gets applied to Azure usage. Your invoice shows Azure usage and Marketplace usage without any cost last.
+
+### Advanced report download
+
+You can use the **Download Advanced report** option to get reports that cover specific date ranges for the selected accounts. The output file is in CSV format to accommodate large record sets.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Search for **Cost Management + Billing** and select it.
+1. Select **Billing scopes** from the navigation menu and then select the billing account that you want to work with.
+1. In the left navigation menu, select **Billing profiles** and select the billing profile that you want to work with.
+1. In the navigation menu, select **Usage + Charges**.
+1. At the top of the Usage + charges page, select **Download Advanced report**.
+1. Select a date range and the accounts to include in the report.
+1. Select **Download**.
+1. You can also download files from the **Report History**. It shows the latest reports that you downloaded.
+ ### Download your Azure invoices (.pdf)
However, you *should* see:
The formatting issue occurs because of default settings in Excel's import functionality. Excel imports all fields as *General* text and assumes that a number is separated in the mathematical standard. For example: *1,000.00*.
-If your currency uses a period (**.**) for the thousandth place separator and a comma (**,**) for the decimal place separator, it's displayed incorrectly. For example: *1.000,00*. The import results may vary depending on your regional language setting.
+If your currency uses a period (**.**) for the thousandth place separator and a comma (**,**) for the decimal place separator, it gets displayed incorrectly. For example: *1.000,00*. The import results might vary depending on your regional language setting.
To import the CSV file without formatting issues: 1. In Microsoft Excel, go to **File** > **Open**. The Text Import Wizard appears. 1. Under **Original Data Type**, choose **delimited**. Default is **Fixed Width**. 1. Select **Next**.
-1. Under **Delimiters**, select the box for **Comma**. Clear **Tab** if it's selected.
+1. Under **Delimiters**, select the box for **Comma**. Clear **Tab** if selected.
1. Select **Next**. 1. Scroll over to the **ResourceRate** and **ExtendedCost** columns. 1. Select the **ResourceRate** column. It appears highlighted in black.
cost-management-billing Ea Portal Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-administration.md
Title: Azure EA portal administration
description: This article explains the common tasks that an administrator accomplishes in the Azure EA portal. Previously updated : 07/28/2023 Last updated : 11/07/2023
This article explains the common tasks that an administrator accomplishes in the Azure EA portal (https://ea.azure.com). The Azure EA portal is an online management portal that helps customers manage the cost of their Azure EA services. For introductory information about the Azure EA portal, see the [Get started with the Azure EA portal](ea-portal-get-started.md) article.
-> [!IMPORTANT]
-> The Azure EA portal is getting deprecated. Direct and indirect EA Azure customers now use Cost Management + Billing features in the Azure portal to manage their enrollment and billing *instead of using the EA portal*. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
+> [!NOTE]
+> On November 15, 2023, the Azure Enterprise portal is retiring for EA enrollments in the Commercial cloud and is becoming read-only for EA enrollments in the Azure Government cloud.
+> Customers and Partners should use Cost Management + Billing in the Azure portal to manage their enrollments. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](ea-direct-portal-get-started.md).
## Activate your enrollment
data-factory Connector Microsoft Fabric Lakehouse Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse-table.md
- Title: Copy and Transform data in Microsoft Fabric Lakehouse Table (Preview) -
-description: Learn how to copy and transform data to and from Microsoft Fabric Lakehouse Table (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines.
- Previously updated : 11/01/2023
-# Copy and Transform data in Microsoft Fabric Lakehouse Table (Preview) using Azure Data Factory or Azure Synapse Analytics
--
-The Microsoft Fabric Lakehouse serves as a data architecture platform designed to store, manage, and analyse both structured and unstructured data within a single location. This article outlines how to use Copy Activity to copy data from and to Microsoft Fabric Lakehouse Table (Preview) and use Data Flow to transform data in Microsoft Fabric Lakehouse Files (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
-
-> [!IMPORTANT]
-> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
-
-## Supported capabilities
-
-This Microsoft Fabric Lakehouse Table connector is supported for the following capabilities:
-
-| Supported capabilities|IR | Managed private endpoint|
-| -- | -- | -- |
-|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
-|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ |
-
-<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
-
-## Get started
--
-## Create a Microsoft Fabric Lakehouse linked service using UI
-
-Use the following steps to create a Microsoft Fabric Lakehouse linked service in the Azure portal UI.
-
-1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
-
- # [Azure Data Factory](#tab/data-factory)
-
- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
-
- # [Azure Synapse](#tab/synapse-analytics)
-
- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
-
-2. Search for Microsoft Fabric Lakehouse and select the connector.
-
- :::image type="content" source="media/connector-microsoft-fabric-lakehouse/microsoft-fabric-lakehouse-connector.png" alt-text="Screenshot showing select Microsoft Fabric Lakehouse connector.":::
-
-1. Configure the service details, test the connection, and create the new linked service.
-
- :::image type="content" source="media/connector-microsoft-fabric-lakehouse/configure-microsoft-fabric-lakehouse-linked-service.png" alt-text="Screenshot of configuration for Microsoft Fabric Lakehouse linked service.":::
--
-## Connector configuration details
-
-The following sections provide details about properties that are used to define Data Factory entities specific to Microsoft Fabric Lakehouse.
-
-## Linked service properties
-
-The Microsoft Fabric Lakehouse connector supports the following authentication types. See the corresponding sections for details:
--- [Service principal authentication](#service-principal-authentication)-
-### Service principal authentication
-
-To use service principal authentication, follow these steps.
-
-1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service:
-
- - Application ID
- - Application key
- - Tenant ID
-
-2. Grant the service principal at least the **Contributor** role in Microsoft Fabric workspace. Follow these steps:
- 1. Go to your Microsoft Fabric workspace, select **Manage access** on the top bar. Then select **Add people or groups**.
-
- :::image type="content" source="media/connector-microsoft-fabric-lakehouse/fabric-workspace-manage-access.png" alt-text="Screenshot shows selecting Fabric workspace Manage access.":::
-
- :::image type="content" source="media/connector-microsoft-fabric-lakehouse/manage-access-pane.png" alt-text=" Screenshot shows Fabric workspace Manage access pane.":::
-
- 1. In **Add people** pane, enter your service principal name, and select your service principal from the drop-down list.
-
- 1. Specify the role as **Contributor** or higher (Admin, Member), then select **Add**.
-
- :::image type="content" source="media/connector-microsoft-fabric-lakehouse/select-workspace-role.png" alt-text="Screenshot shows adding Fabric workspace role.":::
-
- 1. Your service principal is displayed on **Manage access** pane.
-
-These properties are supported for the linked service:
-
-| Property | Description | Required |
-|: |: |: |
-| type | The type property must be set to **Lakehouse**. |Yes |
-| workspaceId | The Microsoft Fabric workspace ID. | Yes |
-| artifactId | The Microsoft Fabric Lakehouse object ID. | Yes |
-| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
-| servicePrincipalId | Specify the application's client ID. | Yes |
-| servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
-| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
-| azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Azure Active Directory application is registered. <br/> Allowed values are **AzurePublic**, **AzureChina**, **AzureUsGovernment**, and **AzureGermany**. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
-| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
-
-**Example: using service principal key authentication**
-
-You can also store service principal key in Azure Key Vault.
-
-```json
-{
- "name": "MicrosoftFabricLakehouseLinkedService",
- "properties": {
- "type": "Lakehouse",
- "typeProperties": {
- "workspaceId": "<Microsoft Fabric workspace ID>",
- "artifactId": "<Microsoft Fabric Lakehouse object ID>",
- "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
- "servicePrincipalId": "<service principal id>",
- "servicePrincipalCredentialType": "ServicePrincipalKey",
- "servicePrincipalCredential": {
- "type": "SecureString",
- "value": "<service principal key>"
- }
- },
- "connectVia": {
- "referenceName": "<name of Integration Runtime>",
- "type": "IntegrationRuntimeReference"
- }
- }
-}
-```
-
-## Dataset properties
-
-For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article.
-
-The following properties are supported for Microsoft Fabric Lakehouse Table dataset:
-
-| Property | Description | Required |
-| :-- | :-- | :-- |
-| type | The **type** property of the dataset must be set to **LakehouseTable**. | Yes |
-| schema | Name of the schema. |No for source. Yes for sink |
-| table | Name of the table/view. |No for source. Yes for sink |
-
-### Dataset properties example
-
-```json
-{
-    "name": "LakehouseTableDataset",
-    "properties": {
-        "type": "LakehouseTable",
-        "linkedServiceName": {
-            "referenceName": "<Microsoft Fabric Lakehouse linked service name>",
-            "type": "LinkedServiceReference"
-        },
-        "typeProperties": {
- "table": "<table_name>"
-        },
-        "schema": [< physical schema, optional, retrievable during authoring >]
-    }
-}
-```
-
-## Copy activity properties
-
-For a full list of sections and properties available for defining activities, see [Copy activity configurations](copy-activity-overview.md#configuration) and [Pipelines and activities](concepts-pipelines-activities.md). This section provides a list of properties supported by the Microsoft Fabric Lakehouse Table source and sink.
-
-### Microsoft Fabric Lakehouse Table as a source type
-
-To copy data from Microsoft Fabric Lakehouse Table, set the **type** property in the Copy Activity source to **LakehouseTableSource**. The following properties are supported in the Copy Activity **source** section:
-
-| Property | Description | Required |
-| : | :-- | :- |
-| type | The **type** property of the Copy Activity source must be set to **LakehouseTableSource**. | Yes |
-| timestampAsOf | The timestamp to query an older snapshot. | No |
-| versionAsOf | The version to query an older snapshot. | No |
-
-**Example: Microsoft Fabric Lakehouse Table source**
-
-```json
-"activities":[
- {
- "name": "CopyFromLakehouseTable",
- "type": "Copy",
- "inputs": [
- {
- "referenceName": "<Microsoft Fabric Lakehouse Table input dataset name>",
- "type": "DatasetReference"
- }
- ],
- "outputs": [
- {
- "referenceName": "<output dataset name>",
- "type": "DatasetReference"
- }
- ],
- "typeProperties": {
- "source": {
- "type": "LakehouseTableSource",
- "timestampAsOf": "2023-09-23T00:00:00.000Z",
- "versionAsOf": 2
- },
- "sink": {
- "type": "<sink type>"
- }
- }
- }
-]
-```
-
-### Microsoft Fabric Lakehouse Table as a sink type
-
-To copy data from Microsoft Fabric Lakehouse Table, set the **type** property in the Copy Activity source to **LakehouseTableSink**. The following properties are supported in the Copy activity **sink** section:
-
-| Property | Description | Required |
-| : | :-- | :- |
-| type | The **type** property of the Copy Activity source must be set to **LakehouseTableSink**. | Yes |
-| tableActionOption | The way to write data to the sink table. Allowed values are `Append` and `Overwrite`. | No |
-| partitionOption | Allowed values are `None` and `PartitionByKey`. Create partitions in folder structure based on one or multiple columns when the value is `PartitionByKey`. Each distinct column value (pair) will be a new partition (e.g. year=2000/month=01/file). It supports insert-only mode and requires an empty directory in sink. | No |
-| partitionNameList | The destination columns in schemas mapping. Supported data types are string, integer, boolean and datetime. Format respects type conversion settings under "Mapping" tab. | No |
-
-**Example: Microsoft Fabric Lakehouse Table sink**
-
-```json
-"activities":[
- {
- "name": "CopyToLakehouseTable",
- "type": "Copy",
- "inputs": [
- {
- "referenceName": "<input dataset name>",
- "type": "DatasetReference"
- }
- ],
- "outputs": [
- {
- "referenceName": "<Microsoft Fabric Lakehouse Table output dataset name>",
- "type": "DatasetReference"
- }
- ],
- "typeProperties": {
- "source": {
- "type": "<source type>"
- },
- "sink": {
- "type": "LakehouseTableSink",
- "tableActionOption ": "Append"
- }
- }
- }
-]
-```
-## Mapping data flow properties
-
-When transforming data in mapping data flow, you can read and write to tables in Microsoft Fabric Lakehouse. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows.
-
-### Microsoft Fabric Lakehouse Table as a source type
-
-There are no configurable properties under source options.
-
-### Microsoft Fabric Lakehouse Table as a sink type
-
-The following properties are supported in the Mapping Data Flows **sink** section:
-
-| Name | Description | Required | Allowed values | Data flow script property |
-| - | -- | -- | -- | - |
-| Update method | When you select "Allow insert" alone or when you write to a new delta table, the target receives all incoming rows regardless of the Row policies set. If your data contains rows of other Row policies, they need to be excluded using a preceding Filter transform. <br><br> When all Update methods are selected a Merge is performed, where rows are inserted/deleted/upserted/updated as per the Row Policies set using a preceding Alter Row transform. | yes | `true` or `false` | insertable <br> deletable <br> upsertable <br> updateable |
-| Optimized Write | Achieve higher throughput for write operation via optimizing internal shuffle in Spark executors. As a result, you may notice fewer partitions and files that are of a larger size | no | `true` or `false` | optimizedWrite: true |
-| Auto Compact | After any write operation has completed, Spark will automatically execute the ```OPTIMIZE``` command to re-organize the data, resulting in more partitions if necessary, for better reading performance in the future | no | `true` or `false` | autoCompact: true |
-| Merge Schema | Merge schema option allows schema evolution, i.e. any columns that are present in the current incoming stream but not in the target Delta table is automatically added to its schema. This option is supported across all update methods. | no | `true` or `false` | mergeSchema: true |
-
-**Example: Microsoft Fabric Lakehouse Table sink**
-
-```
-sink(allowSchemaDrift: true,
-    validateSchema: false,
-    input(
-        CustomerID as string,
-        NameStyle as string,
-        Title as string,
-        FirstName as string,
-        MiddleName as string,
-        LastName as string,
-        Suffix as string,
-        CompanyName as string,
-        SalesPerson as string,
-        EmailAddress as string,
-        Phone as string,
-        PasswordHash as string,
-        PasswordSalt as string,
-        rowguid as string,
-        ModifiedDate as string
-    ),
-    deletable:false,
-    insertable:true,
-    updateable:false,
-    upsertable:false,
-    optimizedWrite: true,
-    mergeSchema: true,
-    autoCompact: true,
-    skipDuplicateMapInputs: true,
-    skipDuplicateMapOutputs: true) ~> CustomerTable
-
-```
--
-## Next steps
-
-For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Microsoft Fabric Lakehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse.md
+
+ Title: Copy and transform data in Microsoft Fabric Lakehouse (Preview)
+
+description: Learn how to copy and transform data to and from Microsoft Fabric Lakehouse (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines.
+ Last updated : 11/03/2023
+# Copy and transform data in Microsoft Fabric Lakehouse (Preview) using Azure Data Factory or Azure Synapse Analytics
++
+Microsoft Fabric Lakehouse is a data architecture platform for storing, managing, and analyzing structured and unstructured data in a single location. To learn how it achieves seamless data access across all compute engines in Microsoft Fabric, see [Lakehouse and Delta Tables](/fabric/data-engineering/lakehouse-and-delta-tables).
+
+This article outlines how to use Copy activity to copy data from and to Microsoft Fabric Lakehouse (Preview) and use Data Flow to transform data in Microsoft Fabric Lakehouse (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+> [!IMPORTANT]
+> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
+
+## Supported capabilities
+
+This Microsoft Fabric Lakehouse connector is supported for the following capabilities:
+
+| Supported capabilities|IR | Managed private endpoint|
+| -- | -- | -- |
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ |
+
+<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
+
+## Get started
++
+## Create a Microsoft Fabric Lakehouse linked service using UI
+
+Use the following steps to create a Microsoft Fabric Lakehouse linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+1. Search for Microsoft Fabric Lakehouse and select the connector.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/microsoft-fabric-lakehouse-connector.png" alt-text="Screenshot showing select Microsoft Fabric Lakehouse connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/configure-microsoft-fabric-lakehouse-linked-service.png" alt-text="Screenshot of configuration for Microsoft Fabric Lakehouse linked service.":::
+
+## Connector configuration details
+
+The following sections provide details about properties that are used to define Data Factory entities specific to Microsoft Fabric Lakehouse.
+
+## Linked service properties
+
+The Microsoft Fabric Lakehouse connector supports the following authentication types. See the corresponding sections for details:
+
+- [Service principal authentication](#service-principal-authentication)
+
+### Service principal authentication
+
+To use service principal authentication, follow these steps.
+
+1. Register an application with the Microsoft Identity platform. To learn how, see [Quickstart: Register an application with the Microsoft identity platform](../active-directory/develop/quickstart-register-app.md). Make note of these values, which you use to define the linked service:
+
+ - Application ID
+ - Application key
+ - Tenant ID
+
+2. Grant the service principal at least the **Contributor** role in Microsoft Fabric workspace. Follow these steps:
+ 1. Go to your Microsoft Fabric workspace, select **Manage access** on the top bar. Then select **Add people or groups**.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/fabric-workspace-manage-access.png" alt-text="Screenshot shows selecting Fabric workspace Manage access.":::
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/manage-access-pane.png" alt-text=" Screenshot shows Fabric workspace Manage access pane.":::
+
+ 1. In **Add people** pane, enter your service principal name, and select your service principal from the drop-down list.
+
+ 1. Specify the role as **Contributor** or higher (Admin, Member), then select **Add**.
+
+ :::image type="content" source="media/connector-microsoft-fabric-lakehouse/select-workspace-role.png" alt-text="Screenshot shows adding Fabric workspace role.":::
+
+ 1. Your service principal is displayed on **Manage access** pane.
+
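As an alternative to the portal flow in step 1, the app registration and service principal can be created with the Azure CLI. A sketch, with a hypothetical display name; the Fabric workspace role in step 2 still has to be granted in the Fabric UI:

```azurecli
# Create an app registration plus service principal; the output includes
# appId (client ID), password (application key), and tenant ID.
az ad sp create-for-rbac --display-name "adf-fabric-lakehouse-sp"
```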
+These properties are supported for the linked service:
+
+| Property | Description | Required |
+|: |: |: |
+| type | The type property must be set to **Lakehouse**. |Yes |
+| workspaceId | The Microsoft Fabric workspace ID. | Yes |
+| artifactId | The Microsoft Fabric Lakehouse object ID. | Yes |
+| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
+| servicePrincipalId | Specify the application's client ID. | Yes |
+| servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
+| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's key. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
+
+**Example: using service principal key authentication**
+
+You can also store service principal key in Azure Key Vault.
+
+```json
+{
+ "name": "MicrosoftFabricLakehouseLinkedService",
+ "properties": {
+ "type": "Lakehouse",
+ "typeProperties": {
+ "workspaceId": "<Microsoft Fabric workspace ID>",
+ "artifactId": "<Microsoft Fabric Lakehouse object ID>",
+ "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
+ "servicePrincipalId": "<service principal id>",
+ "servicePrincipalCredentialType": "ServicePrincipalKey",
+ "servicePrincipalCredential": {
+ "type": "SecureString",
+ "value": "<service principal key>"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
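A sketch of the same credential referenced from Azure Key Vault instead of an inline secure string, following the pattern in [store credentials in Key Vault](store-credentials-in-key-vault.md); the Key Vault linked service name and secret name are placeholders:

```json
"servicePrincipalCredential": {
    "type": "AzureKeyVaultSecret",
    "store": {
        "referenceName": "<Azure Key Vault linked service name>",
        "type": "LinkedServiceReference"
    },
    "secretName": "<secret name>"
}
```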
+
+## Dataset properties
+
+The Microsoft Fabric Lakehouse connector supports two types of datasets: the Microsoft Fabric Lakehouse Files dataset and the Microsoft Fabric Lakehouse Table dataset. See the corresponding sections for details.
+
+- [Microsoft Fabric Lakehouse Files dataset](#microsoft-fabric-lakehouse-files-dataset)
+- [Microsoft Fabric Lakehouse Table dataset](#microsoft-fabric-lakehouse-table-dataset)
+
+For a full list of sections and properties available for defining datasets, see [Datasets](concepts-datasets-linked-services.md).
+
+### Microsoft Fabric Lakehouse Files dataset
+
+Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Binary format](format-binary.md)
+- [Delimited text format](format-delimited-text.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+
+The following properties are supported under `location` settings in the format-based Microsoft Fabric Lakehouse Files dataset:
+
+| Property | Description | Required |
+| - | -- | -- |
+| type | The type property under `location` in the dataset must be set to **LakehouseLocation**. | Yes |
+| folderPath | The path to a folder. If you want to use a wildcard to filter folders, skip this setting and specify it in activity source settings. | No |
+| fileName | The file name under the given folderPath. If you want to use a wildcard to filter files, skip this setting and specify it in activity source settings. | No |
+
+**Example:**
+
+```json
+{
+ "name": "DelimitedTextDataset",
+ "properties": {
+ "type": "DelimitedText",
+ "linkedServiceName": {
+ "referenceName": "<Microsoft Fabric Lakehouse linked service name>",
+ "type": "LinkedServiceReference"
+ },
+ "typeProperties": {
+ "location": {
+ "type": "LakehouseLocation",
+ "fileName": "<file name>",
+ "folderPath": "<folder name>"
+ },
+ "columnDelimiter": ",",
+ "compressionCodec": "gzip",
+ "escapeChar": "\\",
+ "firstRowAsHeader": true,
+ "quoteChar": "\""
+ },
+ "schema": [ < physical schema, optional, auto retrieved during authoring > ]
+ }
+}
+```
+
+### Microsoft Fabric Lakehouse Table dataset
+
+The following properties are supported for Microsoft Fabric Lakehouse Table dataset:
+
+| Property | Description | Required |
+| :-- | :-- | :-- |
+| type | The **type** property of the dataset must be set to **LakehouseTable**. | Yes |
+| table | The name of your table. | Yes |
+
+**Example:**
+
+```json
+{
+    "name": "LakehouseTableDataset",
+    "properties": {
+        "type": "LakehouseTable",
+        "linkedServiceName": {
+            "referenceName": "<Microsoft Fabric Lakehouse linked service name>",
+            "type": "LinkedServiceReference"
+        },
+        "typeProperties": {
+ "table": "<table_name>"
+        },
+        "schema": [< physical schema, optional, retrievable during authoring >]
+    }
+}
+```
+
+## Copy activity properties
+
+The copy activity properties for Microsoft Fabric Lakehouse Files dataset and Microsoft Fabric Lakehouse Table dataset are different. See the corresponding sections for details.
+
+- [Microsoft Fabric Lakehouse Files in Copy activity](#microsoft-fabric-lakehouse-files-in-copy-activity)
+- [Microsoft Fabric Lakehouse Table in Copy activity](#microsoft-fabric-lakehouse-table-in-copy-activity)
+
+For a full list of sections and properties available for defining activities, see [Copy activity configurations](copy-activity-overview.md#configuration) and [Pipelines and activities](concepts-pipelines-activities.md).
+
+### Microsoft Fabric Lakehouse Files in Copy activity
+
+To use the Microsoft Fabric Lakehouse Files dataset type as a source or sink in Copy activity, see the following sections for detailed configuration.
+
+#### Microsoft Fabric Lakehouse Files as a source type
+
+Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Binary format](format-binary.md)
+- [Delimited text format](format-delimited-text.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+
+You have several options to copy data from Microsoft Fabric Lakehouse using the Microsoft Fabric Lakehouse Files dataset:
+
+- Copy from the given path specified in the dataset.
+- Wildcard filter against folder path or file name, see `wildcardFolderPath` and `wildcardFileName`.
+- Copy the files defined in a given text file as file set, see `fileListPath`.
+
+The following properties are under `storeSettings` settings in format-based copy source when using Microsoft Fabric Lakehouse Files dataset:
+
+| Property | Description | Required |
+| -- | -- | -- |
+| type | The type property under `storeSettings` must be set to **LakehouseReadSettings**. | Yes |
+| ***Locate the files to copy:*** | | |
+| OPTION 1: static path<br> | Copy from the folder/file path specified in the dataset. If you want to copy all files from a folder, additionally specify `wildcardFileName` as `*`. | |
+| OPTION 2: wildcard<br>- wildcardFolderPath | The folder path with wildcard characters to filter source folders. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual folder name has wildcard or this escape char inside. <br>See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | No |
+| OPTION 2: wildcard<br>- wildcardFileName | The file name with wildcard characters under the given folderPath/wildcardFolderPath to filter source files. <br>Allowed wildcards are: `*` (matches zero or more characters) and `?` (matches zero or single character); use `^` to escape if your actual file name has wildcard or this escape char inside. See more examples in [Folder and file filter examples](#folder-and-file-filter-examples). | Yes |
+| OPTION 3: a list of files<br>- fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset.<br/>When using this option, don't specify file name in dataset. See more examples in [File list examples](#file-list-examples). |No |
+| ***Additional settings:*** | | |
+| recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. <br>Allowed values are **true** (default) and **false**.<br>This property doesn't apply when you configure `fileListPath`. |No |
+| deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from source store after successfully moving to the destination store. The file deletion is per file, so when copy activity fails, you'll see some files have already been copied to the destination and deleted from source, while others are still remaining on source store. <br/>This property is only valid in binary files copy scenario. The default value: false. |No |
+| modifiedDatetimeStart | Files filter based on the attribute: Last Modified. <br>The files will be selected if their last modified time is greater than or equal to `modifiedDatetimeStart` and less than `modifiedDatetimeEnd`. The time is applied to UTC time zone in the format of "2018-12-01T05:00:00Z". <br> The properties can be NULL, which means no file attribute filter will be applied to the dataset. When `modifiedDatetimeStart` has datetime value but `modifiedDatetimeEnd` is NULL, it means the files whose last modified attribute is greater than or equal with the datetime value will be selected. When `modifiedDatetimeEnd` has datetime value but `modifiedDatetimeStart` is NULL, it means the files whose last modified attribute is less than the datetime value will be selected.<br/>This property doesn't apply when you configure `fileListPath`. | No |
+| modifiedDatetimeEnd | Same as above. | No |
+| enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as additional source columns.<br/>Allowed values are **false** (default) and **true**. | No |
+| partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns.<br/><br/>If it isn't specified, by default,<br/>- When you use file path in dataset or list of files on source, partition root path is the path configured in dataset.<br/>- When you use wildcard folder filter, partition root path is the sub-path before the first wildcard.<br/><br/>For example, assuming you configure the path in dataset as "root/folder/year=2020/month=08/day=27":<br/>- If you specify partition root path as "root/folder/year=2020", copy activity will generate two more columns `month` and `day` with value "08" and "27" respectively, in addition to the columns inside the files.<br/>- If partition root path isn't specified, no extra column will be generated. | No |
+| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
+
+**Example:**
+
+```json
+"activities": [
+ {
+ "name": "CopyFromLakehouseFiles",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<Delimited text input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "DelimitedTextSource",
+ "storeSettings": {
+ "type": "LakehouseReadSettings",
+ "recursive": true,
+ "enablePartitionDiscovery": false
+ },
+ "formatSettings": {
+ "type": "DelimitedTextReadSettings"
+ }
+ },
+ "sink": {
+ "type": "<sink type>"
+ }
+ }
+ }
+]
+```
++
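A variant of the same `source` fragment using the wildcard options from the table above; the folder and file patterns are illustrative and match the filter examples later in this article:

```json
"storeSettings": {
    "type": "LakehouseReadSettings",
    "recursive": true,
    "wildcardFolderPath": "Folder*",
    "wildcardFileName": "*.csv"
}
```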
+#### Microsoft Fabric Lakehouse Files as a sink type
+
+Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Binary format](format-binary.md)
+- [Delimited text format](format-delimited-text.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+
+The following properties are under `storeSettings` settings in format-based copy sink when using Microsoft Fabric Lakehouse Files dataset:
+
+| Property | Description | Required |
+| -- | -- | -- |
+| type | The type property under `storeSettings` must be set to **LakehouseWriteSettings**. | Yes |
+| copyBehavior | Defines the copy behavior when the source is files from a file-based data store.<br/><br/>Allowed values are:<br/><b>- PreserveHierarchy (default)</b>: Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder.<br/><b>- FlattenHierarchy</b>: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. <br/><b>- MergeFiles</b>: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |
+| blockSizeInMB | Specify the block size in MB used to write data to Microsoft Fabric Lakehouse. Learn more [about Block Blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-block-blobs). <br/>Allowed value is **between 4 MB and 100 MB**. <br/>By default, ADF automatically determines the block size based on your source store type and data. For non-binary copy into Microsoft Fabric Lakehouse, the default block size is 100 MB, fitting at most approximately 4.75 TB of data. This might not be optimal when your data isn't large, especially when you use a Self-hosted Integration Runtime over a poor network, resulting in operation timeouts or performance issues. You can explicitly specify a block size, while ensuring that blockSizeInMB*50000 is big enough to store the data; otherwise the copy activity run fails. | No |
+| maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections.| No |
+| metadata | Set custom metadata when copying to the sink. Each object under the `metadata` array represents an extra column. The `name` defines the metadata key name, and the `value` indicates the data value of that key. If the [preserve attributes feature](./copy-activity-preserve-metadata.md#preserve-metadata) is used, the specified metadata is unioned with, and overwrites, the source file metadata.<br/><br/>Allowed data values are:<br/>- `$$LASTMODIFIED`: a reserved variable that indicates storing the source files' last modified time. Applies to file-based sources with binary format only.<br/>- <b>Expression</b><br/>- <b>Static value</b> | No |
+
+**Example:**
+
+```json
+"activities": [
+ {
+ "name": "CopyToLakehouseFiles",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<Parquet output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "<source type>"
+ },
+ "sink": {
+ "type": "ParquetSink",
+ "storeSettings": {
+ "type": "LakehouseWriteSettings",
+ "copyBehavior": "PreserveHierarchy",
+ "metadata": [
+ {
+ "name": "testKey1",
+ "value": "value1"
+ },
+ {
+ "name": "testKey2",
+ "value": "value2"
+ }
+ ]
+ },
+ "formatSettings": {
+ "type": "ParquetWriteSettings"
+ }
+ }
+ }
+ }
+]
+```
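+As a variation on the example above, the following illustrative sink `storeSettings` sketch merges all source files into one target file, pins an explicit block size, and stores each source file's last modified time through the reserved `$$LASTMODIFIED` variable (which, per the table above, applies to binary-format sources only; the `sourceLastModified` key name here is arbitrary). With `blockSizeInMB` set to 8, the resulting capacity is roughly 8 * 50000 = 400,000 MB (about 400 GB), so verify that's large enough for your data:
+
+```json
+"storeSettings": {
+    "type": "LakehouseWriteSettings",
+    "copyBehavior": "MergeFiles",
+    "blockSizeInMB": 8,
+    "metadata": [
+        {
+            "name": "sourceLastModified",
+            "value": "$$LASTMODIFIED"
+        }
+    ]
+}
+```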
+
+#### Folder and file filter examples
+
+This section describes the resulting behavior of the folder path and file name with wildcard filters.
+
+| folderPath | fileName | recursive | Source folder structure and filter result (files in **bold** are retrieved)|
+|:--- |:--- |:--- |:--- |
+| `Folder*` | (Empty, use default) | false | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File2.json**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5.csv<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
+| `Folder*` | (Empty, use default) | true | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File2.json**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File4.json**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
+| `Folder*` | `*.csv` | false | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3.csv<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5.csv<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
+| `Folder*` | `*.csv` | true | FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>AnotherFolderB<br/>&nbsp;&nbsp;&nbsp;&nbsp;File6.csv |
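+Expressed as copy activity source settings, the `Folder*` / `*.csv` combination from the last two rows corresponds roughly to this sketch (again assuming the standard wildcard properties apply to this connector):
+
+```json
+"storeSettings": {
+    "type": "LakehouseReadSettings",
+    "recursive": true,
+    "wildcardFolderPath": "Folder*",
+    "wildcardFileName": "*.csv"
+}
+```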
+
+#### File list examples
+
+This section describes the resulting behavior of using file list path in copy activity source.
+
+Assuming you have the following source folder structure and want to copy the files in bold:
+
+| Sample source structure | Content in FileListToCopy.txt | ADF configuration |
+| ----------------------- | ----------------------------- | ----------------- |
+| filesystem<br/>&nbsp;&nbsp;&nbsp;&nbsp;FolderA<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File1.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File2.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File3.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4.json<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;**File5.csv**<br/>&nbsp;&nbsp;&nbsp;&nbsp;Metadata<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;FileListToCopy.txt | File1.csv<br>Subfolder1/File3.csv<br>Subfolder1/File5.csv | **In dataset:**<br>- File system: `filesystem`<br>- Folder path: `FolderA`<br><br>**In copy activity source:**<br>- File list path: `filesystem/Metadata/FileListToCopy.txt` <br><br>The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line with the relative path to the path configured in the dataset. |
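+In copy activity JSON, the configuration in the last column maps roughly onto the following source sketch (`fileListPath` is the property name used by comparable file-based connectors and is assumed to apply here):
+
+```json
+"source": {
+    "type": "DelimitedTextSource",
+    "storeSettings": {
+        "type": "LakehouseReadSettings",
+        "fileListPath": "filesystem/Metadata/FileListToCopy.txt"
+    },
+    "formatSettings": {
+        "type": "DelimitedTextReadSettings"
+    }
+}
+```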
+
+#### Some recursive and copyBehavior examples
+
+This section describes the resulting behavior of the copy operation for different combinations of recursive and copyBehavior values.
+
+| recursive | copyBehavior | Source folder structure | Resulting target |
+|:--- |:--- |:--- |:--- |
+| true |preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the same structure as the source:<br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 |
+| true |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File5 |
+| true |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 + File3 + File4 + File5 contents are merged into one file with an autogenerated file name. |
+| false |preserveHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
+| false |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
+| false |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 contents are merged into one file with an autogenerated file name.<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
+
+### Microsoft Fabric Lakehouse Table in Copy activity
+
+To use Microsoft Fabric Lakehouse Table dataset as a source or sink dataset in Copy activity, go to the following sections for the detailed configurations.
+
+#### Microsoft Fabric Lakehouse Table as a source type
+
+To copy data from Microsoft Fabric Lakehouse using Microsoft Fabric Lakehouse Table dataset, set the **type** property in the Copy activity source to **LakehouseTableSource**. The following properties are supported in the Copy activity **source** section:
+
+| Property | Description | Required |
+| :------- | :---------- | :------- |
+| type | The **type** property of the Copy Activity source must be set to **LakehouseTableSource**. | Yes |
+| timestampAsOf | The timestamp to query an older snapshot. | No |
+| versionAsOf | The version to query an older snapshot. | No |
+
+**Example:**
+
+```json
+"activities":[
+ {
+ "name": "CopyFromLakehouseTable",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<Microsoft Fabric Lakehouse Table input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "LakehouseTableSource",
+ "timestampAsOf": "2023-09-23T00:00:00.000Z",
+ "versionAsOf": 2
+ },
+ "sink": {
+ "type": "<sink type>"
+ }
+ }
+ }
+]
+```
+
+#### Microsoft Fabric Lakehouse Table as a sink type
+
+To copy data to Microsoft Fabric Lakehouse using Microsoft Fabric Lakehouse Table dataset, set the **type** property in the Copy Activity sink to **LakehouseTableSink**. The following properties are supported in the Copy activity **sink** section:
+
+| Property | Description | Required |
+| :------- | :---------- | :------- |
+| type | The **type** property of the Copy Activity sink must be set to **LakehouseTableSink**. | Yes |
+
+**Example:**
+
+```json
+"activities":[
+ {
+ "name": "CopyToLakehouseTable",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<Microsoft Fabric Lakehouse Table output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "<source type>"
+ },
+ "sink": {
+ "type": "LakehouseTableSink",
+ "tableActionOption ": "Append"
+ }
+ }
+ }
+]
+```
+
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read from and write to files or tables in Microsoft Fabric Lakehouse. See the corresponding sections for details.
+
+- [Microsoft Fabric Lakehouse Files in mapping data flow](#microsoft-fabric-lakehouse-files-in-mapping-data-flow)
+- [Microsoft Fabric Lakehouse Table in mapping data flow](#microsoft-fabric-lakehouse-table-in-mapping-data-flow)
+
+For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows.
+
+### Microsoft Fabric Lakehouse Files in mapping data flow
+
+To use Microsoft Fabric Lakehouse Files dataset as a source or sink dataset in mapping data flow, go to the following sections for the detailed configurations.
+
+#### Microsoft Fabric Lakehouse Files as a source type
+
+Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Delimited text format](format-delimited-text.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+
+#### Microsoft Fabric Lakehouse Files as a sink type
+
+Microsoft Fabric Lakehouse connector supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Delimited text format](format-delimited-text.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+
+### Microsoft Fabric Lakehouse Table in mapping data flow
+
+To use Microsoft Fabric Lakehouse Table dataset as a source or sink dataset in mapping data flow, go to the following sections for the detailed configurations.
+
+#### Microsoft Fabric Lakehouse Table as a source type
+
+There are no configurable properties under source options.
+
+#### Microsoft Fabric Lakehouse Table as a sink type
+
+The following properties are supported in the Mapping Data Flows **sink** section:
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Update method | When you select **Allow insert** alone, or when you write to a new delta table, the target receives all incoming rows regardless of the row policies set. If your data contains rows with other row policies, they need to be excluded using a preceding Filter transformation. <br><br> When all update methods are selected, a merge is performed, where rows are inserted/deleted/upserted/updated according to the row policies set using a preceding Alter Row transformation. | yes | `true` or `false` | insertable <br> deletable <br> upsertable <br> updateable |
+| Optimized Write | Achieve higher throughput for the write operation by optimizing the internal shuffle in Spark executors. As a result, you might notice fewer partitions and files of a larger size. | no | `true` or `false` | optimizedWrite: true |
+| Auto Compact | After any write operation has completed, Spark automatically executes the `OPTIMIZE` command to reorganize the data, resulting in more partitions if necessary, for better read performance in the future. | no | `true` or `false` | autoCompact: true |
+| Merge Schema | The merge schema option allows schema evolution: any columns that are present in the incoming stream but not in the target Delta table are automatically added to its schema. This option is supported across all update methods. | no | `true` or `false` | mergeSchema: true |
+
+**Example: Microsoft Fabric Lakehouse Table sink**
+
+```
+sink(allowSchemaDrift: true,
+    validateSchema: false,
+    input(
+        CustomerID as string,
+        NameStyle as string,
+        Title as string,
+        FirstName as string,
+        MiddleName as string,
+        LastName as string,
+        Suffix as string,
+        CompanyName as string,
+        SalesPerson as string,
+        EmailAddress as string,
+        Phone as string,
+        PasswordHash as string,
+        PasswordSalt as string,
+        rowguid as string,
+        ModifiedDate as string
+    ),
+    deletable:false,
+    insertable:true,
+    updateable:false,
+    upsertable:false,
+    optimizedWrite: true,
+    mergeSchema: true,
+    autoCompact: true,
+    skipDuplicateMapInputs: true,
+    skipDuplicateMapOutputs: true) ~> CustomerTable
+
+```
+
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 10/31/2023 Last updated : 11/06/2023
data-factory Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md
Previously updated : 11/03/2023 Last updated : 11/06/2023 # Azure Policy built-in definitions for Data Factory
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
data-lake-store Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md
Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
databox-online Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md
Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
databox Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md
Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
ddos-protection Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md
Previously updated : 11/03/2023 Last updated : 11/06/2023
ddos-protection Test Through Simulations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/test-through-simulations.md
Previously updated : 10/06/2023 Last updated : 11/06/2023
Simulations help you:
## Azure DDoS simulation testing policy
-You may only simulate attacks using our approved testing partners:
+You can only simulate attacks using our approved testing partners:
- [BreakingPoint Cloud](https://www.ixiacom.com/products/breakingpoint-cloud): a self-service traffic generator where your customers can generate traffic against DDoS Protection-enabled public endpoints for simulations. - [Red Button](https://www.red-button.net/): work with a dedicated team of experts to simulate real-world DDoS attack scenarios in a controlled environment. - [RedWolf](https://www.redwolfsecurity.com/services/#cloud-ddos): a self-service or guided DDoS testing provider with real-time control.
For this tutorial, you'll create a test environment that includes:
- A virtual network - An Azure Bastion host - A load balancer -- Two virtual machines.
+- Two virtual machines
You'll then configure diagnostic logs and alerts to monitor for attacks and traffic patterns. Finally, you'll configure a DDoS attack simulation using one of our approved testing partners.
- An Azure account with an active subscription. - In order to use diagnostic logging, you must first create a [Log Analytics workspace with diagnostic settings enabled](ddos-configure-log-analytics-workspace.md).
+- For this tutorial you'll need to deploy a Load Balancer, a public IP address, Bastion, and two virtual machines. For more information, see [Deploy Load Balancer with DDoS Protection](../load-balancer/tutorial-protect-load-balancer-ddos.md). You can skip the NAT Gateway step in the Deploy Load Balancer with DDoS Protection tutorial.
-## Prepare test environment
-### Create a DDoS protection plan
-
-1. Select **Create a resource** in the upper left corner of the Azure portal.
-1. Search the term *DDoS*. When **DDoS protection plan** appears in the search results, select it.
-1. Select **Create**.
-1. Enter or select the following values.
-
- :::image type="content" source="./media/ddos-attack-simulation/create-ddos-plan.png" alt-text="Screenshot of creating a DDoS protection plan.":::
-
- |Setting |Value |
- | | |
- |Subscription | Select your subscription. |
- |Resource group | Select **Create new** and enter **MyResourceGroup**.|
- |Name | Enter **MyDDoSProtectionPlan**. |
- |Region | Enter **East US**. |
-
-1. Select **Review + create** then **Create**
-
-### Create the virtual network
-
-In this section, you'll create a virtual network, subnet, Azure Bastion host, and associate the DDoS Protection plan. The virtual network and subnet contains the load balancer and virtual machines. The bastion host is used to securely manage the virtual machines and install IIS to test the load balancer. The DDoS Protection plan will protect all public IP resources in the virtual network.
-
-> [!IMPORTANT]
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
->
-
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
-
-1. In **Virtual networks**, select **+ Create**.
-
-1. In **Create virtual network**, enter or select the following information in the **Basics** tab:
-
- | **Setting** | **Value** |
- |||
- | **Project Details** | |
- | Subscription | Select your Azure subscription. |
- | Resource Group | Select **MyResourceGroup** |
- | **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **East US** |
-
-1. Select the **Security** tab.
-
-1. Under **BastionHost**, select **Enable**. Enter this information:
-
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost** |
- | Azure Bastion Public IP Address | Select **myvent-bastion-publicIpAddress**. Select **OK**. |
-
-1. Under **DDoS Network Protection**, select **Enable**. Then from the drop-down menu, select **MyDDoSProtectionPlan**.
-
- :::image type="content" source="./media/ddos-attack-simulation/enable-ddos.png" alt-text="Screenshot of enabling DDoS during virtual network creation.":::
-
-1. Select the **IP Addresses** tab or select **Next: IP Addresses** at the bottom of the page.
-
-1. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
-
-1. Under **Subnets**, select the word **default**. If a subnet isn't present, select **+ Add subnet**.
-
-1. In **Edit subnet**, enter this information, then select **Save**:
-
- | Setting | Value |
- |--|-|
- | Name | Enter **myBackendSubnet** |
- | Starting Address | Enter **10.1.0.0/24** |
-
-1. Under **Subnets**, select **AzureBastionSubnet**. In **Edit subnet**, enter this information,then select **Save**:
-
- | Setting | Value |
- |--|-|
- | Starting Address | Enter **10.1.1.0/26** |
-
-1. Select the **Review + create** tab or select the **Review + create** button, then select **Create**.
-
- > [!NOTE]
- > The virtual network and subnet are created immediately. The Bastion host creation is submitted as a job and will complete within 10 minutes. You can proceed to the next steps while the Bastion host is created.
-
-### Create load balancer
-
-In this section, you'll create a zone redundant load balancer that load balances virtual machines. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
-
-During the creation of the load balancer, you'll configure:
-
-* Frontend IP address
-* Backend pool
-* Inbound load-balancing rules
-* Health probe
-
-1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results. In the **Load balancer** page, select **+ Create**.
-
-1. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
-
- | Setting | Value |
- | | |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **MyResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **myLoadBalancer** |
- | Region | Select **East US**. |
- | SKU | Leave the default **Standard**. |
- | Type | Select **Public**. |
- | Tier | Leave the default **Regional**. |
-
- :::image type="content" source="./media/ddos-attack-simulation/create-standard-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true":::
-
-1. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**, then enter the following information. Leave the rest of the defaults and select **Add**.
-
- | Setting | Value |
- | --| -- |
- | **Name** | Enter **myFrontend**. |
- | **IP Type** | Select *Create new*. In *Add a public IP address*, enter **myPublicIP** for Name |
- | **Availability zone** | Select **Zone-redundant**. |
-
- > [!NOTE]
- > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
-
-1. Select **Next: Backend pools** at the bottom of the page.
-
-1. In the **Backend pools** tab, select **+ Add a backend pool**, then enter the following information. Leave the rest of the defaults and select **Save**.
-
- | Setting | Value |
- | --| -- |
- | **Name** | Enter **myBackendPool**. |
- | **Backend Pool Configuration** | Select **IP Address**. |
-
-
-1. Select **Save**, then select **Next: Inbound rules** at the bottom of the page.
-
-1. Under **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
-
-1. In **Add load balancing rule**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | Name | Enter **myHTTPRule** |
- | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **myFrontend (To be created)**. |
- | Backend pool | Select **myBackendPool**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**. |
- | Backend port | Enter **80**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
- | Session persistence | Select **None**. |
- | Idle timeout (minutes) | Enter or select **15**. |
- | TCP reset | Select the *Enabled* radio. |
- | Floating IP | Select the *Disabled* radio. |
- | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
-
-1. Select **Save**.
-
-1. Select the blue **Review + create** button at the bottom of the page.
-
-1. Select **Create**.
-
-### Create virtual machines
-
-In this section, you'll create two virtual machines that will be load balanced by the load balancer. You'll also install IIS on the virtual machines to test the load balancer.
-
-1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. In the **Virtual machines** page, select **+ Create**.
-
-1. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
-
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **MyResourceGroup** |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM1** |
- | Region | Select **((US) East US)** |
- | Availability Options | Select **Availability zones** |
- | Availability zone | Select **Zone 1** |
- | Security type | Select **Standard**. |
- | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2** |
- | Azure Spot instance | Leave the default of unchecked. |
- | Size | Choose VM size or take default setting |
- | **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None** |
-
-1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-1. In the Networking tab, select or enter the following information:
-
- | Setting | Value |
- | - | -- |
- | **Network interface** | |
- | Virtual network | Select **myVNet** |
- | Subnet | Select **myBackendSubnet** |
- | Public IP | Select **None**. |
- | NIC network security group | Select **Advanced** |
- | Configure network security group | Skip this setting until the rest of the settings are completed. Complete after **Select a backend pool**.|
- | Delete NIC when VM is deleted | Leave the default of **unselected**. |
- | Accelerated networking | Leave the default of **selected**. |
- | **Load balancing** |
- | **Load balancing options** |
- | Load-balancing options | Select **Azure load balancer** |
- | Select a load balancer | Select **myLoadBalancer** |
- | Select a backend pool | Select **myBackendPool** |
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
-
-1. Select **Review + create**.
-
-1. Review the settings, and then select **Create**.
-
-1. Follow the steps 1 through 7 to create another VM with the following values and all the other settings the same as **myVM1**:
-
- | Setting | VM 2
- | - | -- |
- | Name | **myVM2** |
- | Availability zone | **Zone 2** |
- | Network security group | Select the existing **myNSG** |
-
-### Install IIS
-
-1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-
-1. Select **myVM1**.
-
-1. On the **Overview** page, select **Connect**, then **Bastion**.
-
-1. Enter the username and password entered during VM creation.
-
-1. Select **Connect**.
-
-1. On the server desktop, navigate to **Start** > **Windows PowerShell** > **Windows PowerShell**.
-
-1. In the PowerShell Window, run the following commands to:
-
- * Install the IIS server
- * Remove the default iisstart.htm file
- * Add a new iisstart.htm file that displays the name of the VM:
-
- ```powershell
- # Install IIS server role
- Install-WindowsFeature -name Web-Server -IncludeManagementTools
-
- # Remove default htm file
- Remove-Item C:\inetpub\wwwroot\iisstart.htm
-
- # Add a new htm file that displays server name
- Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
-
- ```
-
-1. Close the Bastion session with **myVM1**.
-
-1. Repeat steps 1 to 8 to install IIS and the updated iisstart.htm file on **myVM2**.
- ## Configure DDoS Protection metrics and alerts
-Now we'll configure metrics and alerts to monitor for attacks and traffic patterns.
+In this tutorial, we'll configure DDoS Protection metrics and alerts to monitor for attacks and traffic patterns.
### Configure diagnostic logs
BreakingPoint Cloud offers:
- Predefined DDoS test sizing and test duration profiles enable safer validations by eliminating the potential of configuration errors. > [!NOTE]
-> For BreakingPoint Cloud, you must first [create a BreakingPoint Cloud account](https://www.ixiacom.com/products/breakingpoint-cloud).
+> For BreakingPoint Cloud, you must first [create a BreakingPoint Cloud account](https://www.ixiacom.com/products/breakingpoint-cloud).
Example attack values:
> - For a video demonstration of utilizing BreakingPoint Cloud, see [DDoS Attack Simulation](https://www.youtube.com/watch?v=xFJS7RnX-Sw). - ### Red Button Red Button's [DDoS Testing](https://www.red-button.net/ddos-testing/) service suite includes three stages:
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Microsoft Defender for Servers Plan 2 provides unique detections and alerts, in
[Further details and notes](defender-for-servers-introduction.md)
-| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
-||--|:-:||
-| **A logon from a malicious IP has been detected. [seen multiple times]** | A successful remote authentication for the account [account] and process [process] occurred, however the logon IP address (x.x.x.x) has previously been reported as malicious or highly unusual. A successful attack has probably occurred. Files with the .scr extension are screen saver files that normally reside in and execute from the Windows system directory. | - | High |
-| **Addition of Guest account to Local Administrators group** | Analysis of host data has detected the addition of the built-in Guest account to the Local Administrators group on %{Compromised Host}, which is strongly associated with attacker activity. | - | Medium |
-| **An event log was cleared** | Machine logs indicate a suspicious event log clearing operation by user: '%{user name}' in Machine: '%{CompromisedEntity}'. The %{log channel} log was cleared. | - | Informational |
-| **Antimalware Action Failed** | Microsoft Antimalware has encountered an error when taking an action on malware or other potentially unwanted software. | - | Medium |
-| **Antimalware Action Taken** | Microsoft Antimalware for Azure has taken an action to protect this machine from malware or other potentially unwanted software. | - | Medium |
-| **Antimalware broad files exclusion in your virtual machine**<br>(VM_AmBroadFilesExclusion) | Files exclusion from antimalware extension with broad exclusion rule was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such exclusion practically disabling the Antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | - | Medium |
-| **Antimalware disabled and code execution in your virtual machine**<br>(VM_AmDisablementAndCodeExecution) | Antimalware disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware. | - | High |
-| **Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium |
-| **Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmFileExclusionAndCodeExecution) | File excluded from your antimalware scanner at the same time as code was executed via a custom script extension on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion, Execution | High |
-| **Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmTempFileExclusionAndCodeExecution) | Temporary file exclusion from antimalware extension in parallel to execution of code via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion, Execution | High |
-| **Antimalware file exclusion in your virtual machine**<br>(VM_AmTempFileExclusion) | File excluded from your antimalware scanner on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion | Medium |
-| **Antimalware real-time protection was disabled in your virtual machine**<br>(VM_AmRealtimeProtectionDisabled) | Real-time protection disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
-| **Antimalware real-time protection was disabled temporarily in your virtual machine**<br>(VM_AmTempRealtimeProtectionDisablement) | Real-time protection temporary disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
-| **Antimalware real-time protection was disabled temporarily while code was executed in your virtual machine**<br>(VM_AmRealtimeProtectionDisablementAndCodeExec) | Real-time protection temporary disablement of the antimalware extension in parallel to code execution via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | - | High |
-| **Antimalware scans blocked for files potentially related to malware campaigns on your virtual machine (Preview)**<br>(VM_AmMalwareCampaignRelatedExclusion) | An exclusion rule was detected in your virtual machine to prevent your antimalware extension scanning certain files that are suspected of being related to a malware campaign. The rule was detected by analyzing the Azure Resource Manager operations in your subscription. Attackers might exclude files from antimalware scans to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
-| **Antimalware temporarily disabled in your virtual machine**<br>(VM_AmTemporarilyDisablement) | Antimalware temporarily disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | - | Medium |
-| **Antimalware unusual file exclusion in your virtual machine**<br>(VM_UnusualAmFileExclusion) | Unusual file exclusion from antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
-| **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access, Persistence, Execution, Command And Control, Exploitation | Medium |
-| **Detected actions indicative of disabling and deleting IIS log files** | Analysis of host data detected actions that show IIS log files being disabled and/or deleted. | - | Medium |
-| **Detected anomalous mix of upper and lower case characters in command-line** | Analysis of host data on %{Compromised Host} detected a command line with anomalous mix of upper and lower case characters. This kind of pattern, while possibly benign, is also typical of attackers trying to hide from case-sensitive or hash-based rule matching when performing administrative tasks on a compromised host. | - | Medium |
-| **Detected change to a registry key that can be abused to bypass UAC** | Analysis of host data on %{Compromised Host} detected that a registry key that can be abused to bypass UAC (User Account Control) was changed. This kind of configuration, while possibly benign, is also typical of attacker activity when trying to move from unprivileged (standard user) to privileged (for example administrator) access on a compromised host. | - | Medium |
-| **Detected decoding of an executable using built-in certutil.exe tool** | Analysis of host data on %{Compromised Host} detected that certutil.exe, a built-in administrator utility, was being used to decode an executable instead of its mainstream purpose that relates to manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using a tool such as certutil.exe to decode a malicious executable that will then be subsequently executed. | - | High |
-| **Detected enabling of the WDigest UseLogonCredential registry key** | Analysis of host data detected a change in the registry key HKLM\SYSTEM\ CurrentControlSet\Control\SecurityProviders\WDigest\ "UseLogonCredential". Specifically this key has been updated to allow logon credentials to be stored in clear text in LSA memory. Once enabled, an attacker can dump clear text passwords from LSA memory with credential harvesting tools such as Mimikatz. | - | Medium |
-| **Detected encoded executable in command line data** | Analysis of host data on %{Compromised Host} detected a base-64 encoded executable. This has previously been associated with attackers attempting to construct executables on-the-fly through a sequence of commands, and attempting to evade intrusion detection systems by ensuring that no individual command would trigger an alert. This could be legitimate activity, or an indication of a compromised host. | - | High |
-| **Detected obfuscated command line** | Attackers use increasingly complex obfuscation techniques to evade detections that run against the underlying data. Analysis of host data on %{Compromised Host} detected suspicious indicators of obfuscation on the commandline. | - | Informational |
-| **Detected Petya ransomware indicators** | Analysis of host data on %{Compromised Host} detected indicators associated with Petya ransomware. See <https://aka.ms/petya-blog> for more information. Review the command line associated in this alert and escalate this alert to your security team. | - | High |
-| **Detected possible execution of keygen executable** | Analysis of host data on %{Compromised Host} detected execution of a process whose name is indicative of a keygen tool; such tools are typically used to defeat software licensing mechanisms but their download is often bundled with other malicious software. Activity group GOLD has been known to make use of such keygens to covertly gain back door access to hosts that they compromise. | - | Medium |
-| **Detected possible execution of malware dropper** | Analysis of host data on %{Compromised Host} detected a filename that has previously been associated with one of activity group GOLD's methods of installing malware on a victim host. | - | High |
-| **Detected possible local reconnaissance activity** | Analysis of host data on %{Compromised Host} detected a combination of systeminfo commands that has previously been associated with one of activity group GOLD's methods of performing reconnaissance activity. While 'systeminfo.exe' is a legitimate Windows tool, executing it twice in succession in the way that has occurred here is rare. | - | |
-| **Detected potentially suspicious use of Telegram tool** | Analysis of host data shows installation of Telegram, a free cloud-based instant messaging service that exists both for mobile and desktop system. Attackers are known to abuse this service to transfer malicious binaries to any other computer, phone, or tablet. | - | Medium |
-| **Detected suppression of legal notice displayed to users at logon** | Analysis of host data on %{Compromised Host} detected changes to the registry key that controls whether a legal notice is displayed to users when they log on. Microsoft security analysis has determined that this is a common activity undertaken by attackers after having compromised a host. | - | Low |
-| **Detected suspicious combination of HTA and PowerShell** | mshta.exe (Microsoft HTML Application Host) which is a signed Microsoft binary is being used by the attackers to launch malicious PowerShell commands. Attackers often resort to having an HTA file with inline VBScript. When a victim browses to the HTA file and chooses to run it, the PowerShell commands and scripts that it contains are executed. Analysis of host data on %{Compromised Host} detected mshta.exe launching PowerShell commands. | - | Medium |
-| **Detected suspicious commandline arguments** | Analysis of host data on %{Compromised Host} detected suspicious commandline arguments that have been used in conjunction with a reverse shell used by activity group HYDROGEN. | - | High |
-| **Detected suspicious commandline used to start all executables in a directory** | Analysis of host data has detected a suspicious process running on %{Compromised Host}. The commandline indicates an attempt to start all executables (*.exe) that may reside in a directory. This could be an indication of a compromised host. | - | Medium |
-| **Detected suspicious credentials in commandline** | Analysis of host data on %{Compromised Host} detected a suspicious password being used to execute a file by activity group BORON. This activity group has been known to use this password to execute Pirpi malware on a victim host. | - | High |
-| **Detected suspicious document credentials** | Analysis of host data on %{Compromised Host} detected a suspicious, common precomputed password hash used by malware being used to execute a file. Activity group HYDROGEN has been known to use this password to execute malware on a victim host. | - | High |
-| **Detected suspicious execution of VBScript.Encode command** | Analysis of host data on %{Compromised Host} detected the execution of VBScript.Encode command. This encodes the scripts into unreadable text, making it more difficult for users to examine the code. Microsoft threat research shows that attackers often use encoded VBscript files as part of their attack to evade detection systems. This could be legitimate activity, or an indication of a compromised host. | - | Medium |
-| **Detected suspicious execution via rundll32.exe** | Analysis of host data on %{Compromised Host} detected rundll32.exe being used to execute a process with an uncommon name, consistent with the process naming scheme previously seen used by activity group GOLD when installing their first stage implant on a compromised host. | - | High |
-| **Detected suspicious file cleanup commands** | Analysis of host data on %{Compromised Host} detected a combination of systeminfo commands that has previously been associated with one of activity group GOLD's methods of performing post-compromise self-cleanup activity. While 'systeminfo.exe' is a legitimate Windows tool, executing it twice in succession, followed by a delete command in the way that has occurred here is rare. | - | High |
-| **Detected suspicious file creation** | Analysis of host data on %{Compromised Host} detected creation or execution of a process that has previously indicated post-compromise action taken on a victim host by activity group BARIUM. This activity group has been known to use this technique to download more malware to a compromised host after an attachment in a phishing doc has been opened. | - | High |
-| **Detected suspicious named pipe communications** | Analysis of host data on %{Compromised Host} detected data being written to a local named pipe from a Windows console command. Named pipes are known to be a channel used by attackers to task and communicate with a malicious implant. This could be legitimate activity, or an indication of a compromised host. | - | High |
-| **Detected suspicious network activity** | Analysis of network traffic from %{Compromised Host} detected suspicious network activity. Such traffic, while possibly benign, is typically used by an attacker to communicate with malicious servers for downloading of tools, command-and-control and exfiltration of data. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it. | - | Low |
-| **Detected suspicious new firewall rule** | Analysis of host data detected a new firewall rule has been added via netsh.exe to allow traffic from an executable in a suspicious location. | - | Medium |
-| **Detected suspicious use of Cacls to lower the security state of the system** | Attackers use myriad ways like brute force, spear phishing etc. to achieve initial compromise and get a foothold on the network. Once initial compromise is achieved they often take steps to lower the security settings of a system. Cacls, short for change access control list, is a Microsoft Windows native command-line utility often used for modifying the security permission on folders and files. A lot of time the binary is used by the attackers to lower the security settings of a system. This is done by giving Everyone full access to some of the system binaries like ftp.exe, net.exe, wscript.exe etc. Analysis of host data on %{Compromised Host} detected suspicious use of Cacls to lower the security of a system. | - | Medium |
-| **Detected suspicious use of FTP -s Switch** | Analysis of process creation data from the %{Compromised Host} detected the use of the FTP "-s:filename" switch. This switch is used to specify an FTP script file for the client to run. Malware or malicious processes are known to use this FTP switch (-s:filename) to point to a script file, which is configured to connect to a remote FTP server and download more malicious binaries. | - | Medium |
-| **Detected suspicious use of Pcalua.exe to launch executable code** | Analysis of host data on %{Compromised Host} detected the use of pcalua.exe to launch executable code. Pcalua.exe is component of the Microsoft Windows "Program Compatibility Assistant", which detects compatibility issues during the installation or execution of a program. Attackers are known to abuse functionality of legitimate Windows system tools to perform malicious actions, for example using pcalua.exe with the -a switch to launch malicious executables either locally or from remote shares. | - | Medium |
-| **Detected the disabling of critical services** | The analysis of host data on %{Compromised Host} detected execution of "net.exe stop" command being used to stop critical services like SharedAccess or the Windows Security app. The stopping of either of these services can be indication of a malicious behavior. | - | Medium |
-| **Digital currency mining related behavior detected** | Analysis of host data on %{Compromised Host} detected the execution of a process or command normally associated with digital currency mining. | - | High |
-| **Dynamic PS script construction** | Analysis of host data on %{Compromised Host} detected a PowerShell script being constructed dynamically. Attackers sometimes use this approach of progressively building up a script in order to evade IDS systems. This could be legitimate activity, or an indication that one of your machines has been compromised. | - | Medium |
-| **Executable found running from a suspicious location** | Analysis of host data detected an executable file on %{Compromised Host} that is running from a location in common with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | - | High |
-| **Ransomware indicators detected [seen multiple times]** | Analysis of host data indicates suspicious activity traditionally associated with lock-screen and encryption ransomware. Lock screen ransomware displays a full-screen message preventing interactive use of the host and access to its files. Encryption ransomware prevents access by encrypting data files. In both cases a ransom message is typically displayed, requesting payment in order to restore file access. This behavior was seen [x] times today on the following machines: [Machine names] | - | High |
-| **Ransomware indicators detected** | Analysis of host data indicates suspicious activity traditionally associated with lock-screen and encryption ransomware. Lock screen ransomware displays a full-screen message preventing interactive use of the host and access to its files. Encryption ransomware prevents access by encrypting data files. In both cases a ransom message is typically displayed, requesting payment in order to restore file access. | - | High |
+| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
+|--|--|:-:|--|
+| **A logon from a malicious IP has been detected. [seen multiple times]** | A successful remote authentication for the account [account] and process [process] occurred; however, the logon IP address (x.x.x.x) has previously been reported as malicious or highly unusual. A successful attack has probably occurred. Files with the .scr extension are screen saver files and normally reside in and execute from the Windows system directory. | - | High |
+| **Addition of Guest account to Local Administrators group** | Analysis of host data has detected the addition of the built-in Guest account to the Local Administrators group on %{Compromised Host}, which is strongly associated with attacker activity. | - | Medium |
+| **An event log was cleared** | Machine logs indicate a suspicious event log clearing operation by user: '%{user name}' in Machine: '%{CompromisedEntity}'. The %{log channel} log was cleared. | - | Informational |
+| **Antimalware Action Failed** | Microsoft Antimalware has encountered an error when taking an action on malware or other potentially unwanted software. | - | Medium |
+| **Antimalware Action Taken** | Microsoft Antimalware for Azure has taken an action to protect this machine from malware or other potentially unwanted software. | - | Medium |
+| **Antimalware broad files exclusion in your virtual machine**<br>(VM_AmBroadFilesExclusion) | File exclusion from the antimalware extension with a broad exclusion rule was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Such an exclusion practically disables the antimalware protection.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | - | Medium |
+| **Antimalware disabled and code execution in your virtual machine**<br>(VM_AmDisablementAndCodeExecution) | Antimalware disabled at the same time as code execution on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers disable antimalware scanners to prevent detection while running unauthorized tools or infecting the machine with malware. | - | High |
+| **Antimalware disabled in your virtual machine**<br>(VM_AmDisablement) | Antimalware disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | Defense Evasion | Medium |
+| **Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmFileExclusionAndCodeExecution) | File excluded from your antimalware scanner at the same time as code was executed via a custom script extension on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion, Execution | High |
+| **Antimalware file exclusion and code execution in your virtual machine**<br>(VM_AmTempFileExclusionAndCodeExecution) | Temporary file exclusion from antimalware extension in parallel to execution of code via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion, Execution | High |
+| **Antimalware file exclusion in your virtual machine**<br>(VM_AmTempFileExclusion) | File excluded from your antimalware scanner on your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running unauthorized tools or infecting the machine with malware. | Defense Evasion | Medium |
+| **Antimalware real-time protection was disabled in your virtual machine**<br>(VM_AmRealtimeProtectionDisabled) | Real-time protection disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
+| **Antimalware real-time protection was disabled temporarily in your virtual machine**<br>(VM_AmTempRealtimeProtectionDisablement) | Real-time protection temporary disablement of the antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
+| **Antimalware real-time protection was disabled temporarily while code was executed in your virtual machine**<br>(VM_AmRealtimeProtectionDisablementAndCodeExec) | Real-time protection temporary disablement of the antimalware extension in parallel to code execution via custom script extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might disable real-time protection from the antimalware scan on your virtual machine to avoid detection while running arbitrary code or infecting the machine with malware. | - | High |
+| **Antimalware scans blocked for files potentially related to malware campaigns on your virtual machine (Preview)**<br>(VM_AmMalwareCampaignRelatedExclusion) | An exclusion rule was detected in your virtual machine to prevent your antimalware extension from scanning certain files that are suspected of being related to a malware campaign. The rule was detected by analyzing the Azure Resource Manager operations in your subscription. Attackers might exclude files from antimalware scans to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
+| **Antimalware temporarily disabled in your virtual machine**<br>(VM_AmTemporarilyDisablement) | Antimalware temporarily disabled in your virtual machine. This was detected by analyzing Azure Resource Manager operations in your subscription.<br>Attackers might disable the antimalware on your virtual machine to prevent detection. | - | Medium |
+| **Antimalware unusual file exclusion in your virtual machine**<br>(VM_UnusualAmFileExclusion) | Unusual file exclusion from antimalware extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>Attackers might exclude files from the antimalware scan on your virtual machine to prevent detection while running arbitrary code or infecting the machine with malware. | Defense Evasion | Medium |
+| **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with a suspicious domain was detected by analyzing DNS transactions from your resource and comparing them against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access, Persistence, Execution, Command And Control, Exploitation | Medium |
+| **Detected actions indicative of disabling and deleting IIS log files** | Analysis of host data detected actions that show IIS log files being disabled and/or deleted. | - | Medium |
+| **Detected anomalous mix of upper and lower case characters in command-line** | Analysis of host data on %{Compromised Host} detected a command line with an anomalous mix of upper and lower case characters. This kind of pattern, while possibly benign, is also typical of attackers trying to hide from case-sensitive or hash-based rule matching when performing administrative tasks on a compromised host. | - | Medium |
+| **Detected change to a registry key that can be abused to bypass UAC** | Analysis of host data on %{Compromised Host} detected that a registry key that can be abused to bypass UAC (User Account Control) was changed. This kind of configuration, while possibly benign, is also typical of attacker activity when trying to move from unprivileged (standard user) to privileged (for example administrator) access on a compromised host. | - | Medium |
+| **Detected decoding of an executable using built-in certutil.exe tool** | Analysis of host data on %{Compromised Host} detected that certutil.exe, a built-in administrator utility, was being used to decode an executable, rather than for its mainstream purpose of manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using a tool such as certutil.exe to decode a malicious executable that will then be subsequently executed. | - | High |
+| **Detected enabling of the WDigest UseLogonCredential registry key** | Analysis of host data detected a change in the registry value HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest\UseLogonCredential. Specifically, this value has been updated to allow logon credentials to be stored in clear text in LSA memory. Once enabled, an attacker can dump clear text passwords from LSA memory with credential harvesting tools such as Mimikatz. (A minimal registry-check sketch follows this table.) | - | Medium |
+| **Detected encoded executable in command line data** | Analysis of host data on %{Compromised Host} detected a base-64 encoded executable. This has previously been associated with attackers attempting to construct executables on the fly through a sequence of commands, and attempting to evade intrusion detection systems by ensuring that no individual command would trigger an alert. This could be legitimate activity, or an indication of a compromised host. (A rough illustration of this kind of heuristic follows this table.) | - | High |
+| **Detected obfuscated command line** | Attackers use increasingly complex obfuscation techniques to evade detections that run against the underlying data. Analysis of host data on %{Compromised Host} detected suspicious indicators of obfuscation on the commandline. | - | Informational |
+| **Detected possible execution of keygen executable** | Analysis of host data on %{Compromised Host} detected execution of a process whose name is indicative of a keygen tool; such tools are typically used to defeat software licensing mechanisms but their download is often bundled with other malicious software. Activity group GOLD has been known to make use of such keygens to covertly gain back door access to hosts that they compromise. | - | Medium |
+| **Detected possible execution of malware dropper** | Analysis of host data on %{Compromised Host} detected a filename that has previously been associated with one of activity group GOLD's methods of installing malware on a victim host. | - | High |
+| **Detected possible local reconnaissance activity** | Analysis of host data on %{Compromised Host} detected a combination of systeminfo commands that has previously been associated with one of activity group GOLD's methods of performing reconnaissance activity. While 'systeminfo.exe' is a legitimate Windows tool, executing it twice in succession in the way that has occurred here is rare. | - | |
+| **Detected potentially suspicious use of Telegram tool** | Analysis of host data shows installation of Telegram, a free cloud-based instant messaging service that exists for both mobile and desktop systems. Attackers are known to abuse this service to transfer malicious binaries to any other computer, phone, or tablet. | - | Medium |
+| **Detected suppression of legal notice displayed to users at logon** | Analysis of host data on %{Compromised Host} detected changes to the registry key that controls whether a legal notice is displayed to users when they log on. Microsoft security analysis has determined that this is a common activity undertaken by attackers after having compromised a host. | - | Low |
+| **Detected suspicious combination of HTA and PowerShell** | mshta.exe (Microsoft HTML Application Host), a signed Microsoft binary, is being used by attackers to launch malicious PowerShell commands. Attackers often resort to having an HTA file with inline VBScript. When a victim browses to the HTA file and chooses to run it, the PowerShell commands and scripts that it contains are executed. Analysis of host data on %{Compromised Host} detected mshta.exe launching PowerShell commands. | - | Medium |
+| **Detected suspicious commandline arguments** | Analysis of host data on %{Compromised Host} detected suspicious commandline arguments that have been used in conjunction with a reverse shell used by activity group HYDROGEN. | - | High |
+| **Detected suspicious commandline used to start all executables in a directory** | Analysis of host data has detected a suspicious process running on %{Compromised Host}. The commandline indicates an attempt to start all executables (*.exe) that may reside in a directory. This could be an indication of a compromised host. | - | Medium |
+| **Detected suspicious credentials in commandline** | Analysis of host data on %{Compromised Host} detected a suspicious password being used to execute a file by activity group BORON. This activity group has been known to use this password to execute Pirpi malware on a victim host. | - | High |
+| **Detected suspicious document credentials** | Analysis of host data on %{Compromised Host} detected a suspicious, common precomputed password hash used by malware being used to execute a file. Activity group HYDROGEN has been known to use this password to execute malware on a victim host. | - | High |
+| **Detected suspicious execution of VBScript.Encode command** | Analysis of host data on %{Compromised Host} detected the execution of the VBScript.Encode command. This encodes the scripts into unreadable text, making it more difficult for users to examine the code. Microsoft threat research shows that attackers often use encoded VBScript files as part of their attack to evade detection systems. This could be legitimate activity, or an indication of a compromised host. | - | Medium |
+| **Detected suspicious execution via rundll32.exe** | Analysis of host data on %{Compromised Host} detected rundll32.exe being used to execute a process with an uncommon name, consistent with the process naming scheme previously seen used by activity group GOLD when installing their first stage implant on a compromised host. | - | High |
+| **Detected suspicious file cleanup commands** | Analysis of host data on %{Compromised Host} detected a combination of systeminfo commands that has previously been associated with one of activity group GOLD's methods of performing post-compromise self-cleanup activity. While 'systeminfo.exe' is a legitimate Windows tool, executing it twice in succession, followed by a delete command in the way that has occurred here is rare. | - | High |
+| **Detected suspicious file creation** | Analysis of host data on %{Compromised Host} detected creation or execution of a process that has previously indicated post-compromise action taken on a victim host by activity group BARIUM. This activity group has been known to use this technique to download more malware to a compromised host after an attachment in a phishing doc has been opened. | - | High |
+| **Detected suspicious named pipe communications** | Analysis of host data on %{Compromised Host} detected data being written to a local named pipe from a Windows console command. Named pipes are known to be a channel used by attackers to task and communicate with a malicious implant. This could be legitimate activity, or an indication of a compromised host. | - | High |
+| **Detected suspicious network activity** | Analysis of network traffic from %{Compromised Host} detected suspicious network activity. Such traffic, while possibly benign, is typically used by an attacker to communicate with malicious servers for downloading tools, command-and-control, and exfiltration of data. Typical related attacker activity includes copying remote administration tools to a compromised host and exfiltrating user data from it. | - | Low |
+| **Detected suspicious new firewall rule** | Analysis of host data detected a new firewall rule has been added via netsh.exe to allow traffic from an executable in a suspicious location. | - | Medium |
+| **Detected suspicious use of Cacls to lower the security state of the system** | Attackers use myriad ways, such as brute force and spear phishing, to achieve initial compromise and get a foothold on the network. Once initial compromise is achieved, they often take steps to lower the security settings of a system. Cacls, short for change access control list, is a Microsoft Windows native command-line utility often used for modifying the security permissions on folders and files. Attackers often use this binary to lower the security settings of a system by giving Everyone full access to some of the system binaries, like ftp.exe, net.exe, and wscript.exe. Analysis of host data on %{Compromised Host} detected suspicious use of Cacls to lower the security of a system. | - | Medium |
+| **Detected suspicious use of FTP -s Switch** | Analysis of process creation data from the %{Compromised Host} detected the use of the FTP "-s:filename" switch. This switch is used to specify an FTP script file for the client to run. Malware or malicious processes are known to use this FTP switch (-s:filename) to point to a script file, which is configured to connect to a remote FTP server and download more malicious binaries. | - | Medium |
+| **Detected suspicious use of Pcalua.exe to launch executable code** | Analysis of host data on %{Compromised Host} detected the use of pcalua.exe to launch executable code. Pcalua.exe is a component of the Microsoft Windows "Program Compatibility Assistant", which detects compatibility issues during the installation or execution of a program. Attackers are known to abuse functionality of legitimate Windows system tools to perform malicious actions, for example using pcalua.exe with the -a switch to launch malicious executables either locally or from remote shares. | - | Medium |
+| **Detected the disabling of critical services** | Analysis of host data on %{Compromised Host} detected execution of a "net.exe stop" command being used to stop critical services like SharedAccess or the Windows Security app. The stopping of either of these services can be an indication of malicious behavior. | - | Medium |
+| **Digital currency mining related behavior detected** | Analysis of host data on %{Compromised Host} detected the execution of a process or command normally associated with digital currency mining. | - | High |
+| **Dynamic PS script construction** | Analysis of host data on %{Compromised Host} detected a PowerShell script being constructed dynamically. Attackers sometimes use this approach of progressively building up a script in order to evade intrusion detection systems. This could be legitimate activity, or an indication that one of your machines has been compromised. | - | Medium |
+| **Executable found running from a suspicious location** | Analysis of host data detected an executable file on %{Compromised Host} that is running from a location in common with known suspicious files. This executable could either be legitimate activity, or an indication of a compromised host. | - | High |
+| **Fileless attack behavior detected**<br>(VM_FilelessAttackBehavior.Windows) | The memory of the process specified contains behaviors commonly used by fileless attacks. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Active network connections. See NetworkConnections below for details.<br>3) Function calls to security sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>4) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks. | Defense Evasion | Low |
+| **Fileless attack technique detected**<br>(VM_FilelessAttackTechnique.Windows) | The memory of the process specified below contains evidence of a fileless attack technique. Fileless attacks are used by attackers to execute code while evading detection by security software. Specific behaviors include:<br>1) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>2) Executable image injected into the process, such as in a code injection attack.<br>3) Active network connections. See NetworkConnections below for details.<br>4) Function calls to security sensitive operating system interfaces. See Capabilities below for referenced OS capabilities.<br>5) Process hollowing, which is a technique used by malware in which a legitimate process is loaded on the system to act as a container for hostile code.<br>6) Contains a thread that was started in a dynamically allocated code segment. This is a common pattern for process injection attacks. | Defense Evasion, Execution | High |
+| **Fileless attack toolkit detected**<br>(VM_FilelessAttackToolkit.Windows) | The memory of the process specified contains a fileless attack toolkit: [toolkit name]. Fileless attack toolkits use techniques that minimize or eliminate traces of malware on disk, and greatly reduce the chances of detection by disk-based malware scanning solutions. Specific behaviors include:<br>1) Well-known toolkits and crypto mining software.<br>2) Shellcode, which is a small piece of code typically used as the payload in the exploitation of a software vulnerability.<br>3) Injected malicious executable in process memory. | Defense Evasion, Execution | Medium |
+| **High risk software detected** | Analysis of host data from %{Compromised Host} detected the usage of software that has been associated with the installation of malware in the past. A common technique utilized in the distribution of malicious software is to package it within otherwise benign tools such as the one seen in this alert. When you use these tools, the malware can be silently installed in the background. | - | Medium |
+| **Local Administrators group members were enumerated** | Machine logs indicate a successful enumeration on group %{Enumerated Group Domain Name}\%{Enumerated Group Name}. Specifically, %{Enumerating User Domain Name}\%{Enumerating User Name} remotely enumerated the members of the %{Enumerated Group Domain Name}\%{Enumerated Group Name} group. This could be legitimate activity, or an indication that a machine in your organization has been compromised and used to perform reconnaissance on %{vmname}. | - | Informational |
+| **Malicious firewall rule created by ZINC server implant [seen multiple times]** | A firewall rule was created using techniques that match a known actor, ZINC. The rule was possibly used to open a port on %{Compromised Host} to allow for Command & Control communications. This behavior was seen [x] times today on the following machines: [Machine names] | - | High |
+| **Malicious SQL activity** | Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is considered malicious. | - | High |
+| **Multiple Domain Accounts Queried** | Analysis of host data has determined that an unusual number of distinct domain accounts are being queried within a short time period from %{Compromised Host}. This kind of activity could be legitimate, but can also be an indication of compromise. | - | Medium |
+| **Possible credential dumping detected [seen multiple times]** | Analysis of host data has detected the use of a native Windows tool (for example, sqldumper.exe) in a way that allows credentials to be extracted from memory. Attackers often use these techniques to extract credentials that they then further use for lateral movement and privilege escalation. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium |
+| **Potential attempt to bypass AppLocker detected** | Analysis of host data on %{Compromised Host} detected a potential attempt to bypass AppLocker restrictions. AppLocker can be configured to implement a policy that limits what executables are allowed to run on a Windows system. The command-line pattern similar to that identified in this alert has been previously associated with attacker attempts to circumvent AppLocker policy by using trusted executables (allowed by AppLocker policy) to execute untrusted code. This could be legitimate activity, or an indication of a compromised host. | - | High |
+| **PsExec execution detected**<br>(VM_RunByPsExec) | Analysis of host data indicates that the process %{Process Name} was executed by PsExec utility. PsExec can be used for running processes remotely. This technique might be used for malicious purposes. | Lateral Movement, Execution | Informational |
+| **Rare SVCHOST service group executed**<br>(VM_SvcHostRunInRareServiceGroup) | The system process SVCHOST was observed running a rare service group. Malware often uses SVCHOST to masquerade its malicious activity. | Defense Evasion, Execution | Informational |
+| **Sticky keys attack detected** | Analysis of host data indicates that an attacker may be subverting an accessibility binary (for example sticky keys, onscreen keyboard, narrator) in order to provide backdoor access to the host %{Compromised Host}. | - | Medium |
+| **Successful brute force attack**<br>(VM_LoginBruteForceSuccess) | Several sign-in attempts were detected from the same source. Some successfully authenticated to the host.<br>This resembles a burst attack, in which an attacker performs numerous authentication attempts to find valid account credentials. | Exploitation | Medium/High |
+| **Suspect integrity level indicative of RDP hijacking** | Analysis of host data has detected tscon.exe running with SYSTEM privileges - this can be indicative of an attacker abusing this binary in order to switch context to any other logged on user on this host; it's a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium |
+| **Suspect service installation** | Analysis of host data has detected the installation of tscon.exe as a service: this binary being started as a service potentially allows an attacker to trivially switch to any other logged on user on this host by hijacking RDP connections; it's a known attacker technique to compromise more user accounts and move laterally across a network. | - | Medium |
+| **Suspected Kerberos Golden Ticket attack parameters observed** | Analysis of host data detected commandline parameters consistent with a Kerberos Golden Ticket attack. | - | Medium |
+| **Suspicious Account Creation Detected** | Analysis of host data on %{Compromised Host} detected creation or use of a local account %{Suspicious account name}: this account name closely resembles a standard Windows account or group name '%{Similar To Account Name}'. This is potentially a rogue account created by an attacker, so named in order to avoid being noticed by a human administrator. | - | Medium |
+| **Suspicious Activity Detected**<br>(VM_SuspiciousActivity) | Analysis of host data has detected a sequence of one or more processes running on %{machine name} that have historically been associated with malicious activity. While individual commands may appear benign, the alert is scored based on an aggregation of these commands. This could either be legitimate activity, or an indication of a compromised host. | Execution | Medium |
+| **Suspicious authentication activity**<br>(VM_LoginBruteForceValidUserFailed) | Several sign-in attempts were detected from the same source. Although none of them succeeded, some of the accounts used were recognized by the host. This resembles a dictionary attack, in which an attacker performs numerous authentication attempts using a dictionary of predefined account names and passwords in order to find valid credentials to access the host. This indicates that some of your host account names might exist in a well-known account name dictionary. | Probing | Medium |
+| **Suspicious code segment detected** | Indicates that a code segment has been allocated by using non-standard methods, such as reflective injection and process hollowing. The alert provides more characteristics of the code segment that have been processed to provide context for the capabilities and behaviors of the reported code segment. | - | Medium |
+| **Suspicious double extension file executed** | Analysis of host data indicates an execution of a process with a suspicious double extension. This extension may trick users into thinking files are safe to open and might indicate the presence of malware on the system. (An illustrative filename check follows this table.) | - | High |
+| **Suspicious download using Certutil detected [seen multiple times]** | Analysis of host data on %{Compromised Host} detected the use of certutil.exe, a built-in administrator utility, to download a binary, rather than for its mainstream purpose of manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that will then be subsequently executed. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium |
+| **Suspicious download using Certutil detected** | Analysis of host data on %{Compromised Host} detected the use of certutil.exe, a built-in administrator utility, to download a binary, rather than for its mainstream purpose of manipulating certificates and certificate data. Attackers are known to abuse functionality of legitimate administrator tools to perform malicious actions, for example using certutil.exe to download and decode a malicious executable that will then be subsequently executed. | - | Medium |
+| **Suspicious PowerShell Activity Detected** | Analysis of host data detected a PowerShell script running on %{Compromised Host} that has features in common with known suspicious scripts. This script could either be legitimate activity, or an indication of a compromised host. | - | High |
+| **Suspicious PowerShell cmdlets executed** | Analysis of host data indicates execution of known malicious PowerShell PowerSploit cmdlets. | - | Medium |
+| **Suspicious process executed [seen multiple times]** | Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on the machine, often associated with attacker attempts to access credentials. This behavior was seen [x] times today on the following machines: [Machine names] | - | High |
+| **Suspicious process executed** | Machine logs indicate that the suspicious process: '%{Suspicious Process}' was running on the machine, often associated with attacker attempts to access credentials. | - | High |
+| **Suspicious process name detected [seen multiple times]** | Analysis of host data on %{Compromised Host} detected a process whose name is suspicious, for example corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised. This behavior was seen [x] times today on the following machines: [Machine names] | - | Medium |
+| **Suspicious process name detected** | Analysis of host data on %{Compromised Host} detected a process whose name is suspicious, for example corresponding to a known attacker tool or named in a way that is suggestive of attacker tools that try to hide in plain sight. This process could be legitimate activity, or an indication that one of your machines has been compromised. | - | Medium |
+| **Suspicious process termination burst**<br>(VM_TaskkillBurst) | Analysis of host data indicates a suspicious process termination burst in %{Machine Name}. Specifically, %{NumberOfCommands} processes were killed between %{Begin} and %{Ending}. | Defense Evasion | Low |
+| **Suspicious SQL activity** | Machine logs indicate that '%{process name}' was executed by account: %{user name}. This activity is uncommon with this account. | - | Medium |
+| **Suspicious SVCHOST process executed** | The system process SVCHOST was observed running in an abnormal context. Malware often uses SVCHOST to masquerade its malicious activity. | - | High |
+| **Suspicious system process executed**<br>(VM_SystemProcessInAbnormalContext) | The system process %{process name} was observed running in an abnormal context. Malware often uses this process name to masquerade its malicious activity. | Defense Evasion, Execution | High |
+| **Suspicious Volume Shadow Copy Activity** | Analysis of host data has detected a shadow copy deletion activity on the resource. Volume Shadow Copy (VSC) is an important artifact that stores data snapshots. Some malware, and specifically ransomware, targets VSC to sabotage backup strategies. | - | High |
+| **Suspicious WindowPosition registry value detected** | Analysis of host data on %{Compromised Host} detected an attempted WindowPosition registry configuration change that could be indicative of hiding application windows in nonvisible sections of the desktop. This could be legitimate activity, or an indication of a compromised machine: this type of activity has been previously associated with known adware (or unwanted software) such as Win32/OneSystemCare and Win32/SystemHealer, and malware such as Win32/Creprote. When the WindowPosition value is set to 201329664 (hex 0x0C000C00, corresponding to X-axis = 0x0C00 and Y-axis = 0x0C00), the console app's window is placed in a non-visible section of the user's screen, hidden from view below the visible start menu/taskbar. Known suspect hex values include, but are not limited to, c000c000. (See the worked decoding after this table.) | - | Low |
+| **Suspiciously named process detected** | Analysis of host data on %{Compromised Host} detected a process whose name is very similar to, but different from, a very commonly run process (%{Similar To Process Name}). While this process could be benign, attackers are known to sometimes hide in plain sight by naming their malicious tools to resemble legitimate process names. | - | Medium |
+| **Unusual config reset in your virtual machine**<br>(VM_VMAccessUnusualConfigReset) | An unusual config reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the configuration in your virtual machine and compromise it. | Credential Access | Medium |
+| **Unusual process execution detected** | Analysis of host data on %{Compromised Host} detected the execution of a process by %{User Name} that was unusual. Accounts such as %{User Name} tend to perform a limited set of operations; this execution was determined to be out of character and may be suspicious. | - | High |
+| **Unusual user password reset in your virtual machine**<br>(VM_VMAccessUnusualPasswordReset) | An unusual user password reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the credentials of a local user in your virtual machine and compromise it. | Credential Access | Medium |
+| **Unusual user SSH key reset in your virtual machine**<br>(VM_VMAccessUnusualSSHReset) | An unusual user SSH key reset was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription.<br>While this action may be legitimate, attackers can try utilizing the VM Access extension to reset the SSH key of a user account in your virtual machine and compromise it. | Credential Access | Medium |
+| **VBScript HTTP object allocation detected** | Creation of a VBScript file using Command Prompt has been detected. The following script contains an HTTP object allocation command. This action can be used to download malicious files. | | |
+| **Suspicious installation of GPU extension in your virtual machine (Preview)** <br> (VM_GPUDriverExtensionUnusualExecution) | Suspicious installation of a GPU extension was detected in your virtual machine by analyzing the Azure Resource Manager operations in your subscription. Attackers may use the GPU driver extension to install GPU drivers on your virtual machine via the Azure Resource Manager to perform cryptojacking. | Impact | Low |
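
A few of the Windows alerts above reference host state or encodings that are simple enough to check directly; the short Python sketches that follow illustrate them. They are editor-added illustrations under stated assumptions, not Defender for Cloud's detection logic. First, for the WDigest alert: a minimal sketch that checks the UseLogonCredential registry value, assuming a Windows host and Python's standard-library winreg module (the function name and output format are illustrative):

```python
# Check whether the WDigest UseLogonCredential value is enabled; when it
# is, Windows caches logon credentials in clear text in LSA memory.
# Windows-only: uses Python's standard-library winreg module.
import winreg

WDIGEST_KEY = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest"

def wdigest_cleartext_enabled() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WDIGEST_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "UseLogonCredential")
            return value == 1
    except FileNotFoundError:
        # Key or value absent: clear-text caching is not explicitly enabled.
        return False

if __name__ == "__main__":
    print("UseLogonCredential enabled:", wdigest_cleartext_enabled())
```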
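For the encoded-executable alert, a rough sketch of the kind of heuristic its description suggests: find long base-64 runs in a command line and test whether any decode to an MZ (PE) header. The regex, the 200-character threshold, and the function name are assumptions for this sketch; the actual detection logic isn't public:

```python
import base64
import binascii
import re

# Long runs of base-64 characters in a command line are worth decoding;
# if one decodes to an MZ (PE) header, an executable was likely embedded.
B64_RUN = re.compile(r"[A-Za-z0-9+/]{200,}={0,2}")

def contains_encoded_executable(command_line: str) -> bool:
    for match in B64_RUN.finditer(command_line):
        try:
            decoded = base64.b64decode(match.group(0), validate=True)
        except (binascii.Error, ValueError):
            continue  # not valid base-64 after all
        if decoded[:2] == b"MZ":  # DOS/PE magic bytes
            return True
    return False

# A synthetic example: a fake "encoded payload" that starts with MZ.
payload = base64.b64encode(b"MZ" + b"\x00" * 256).decode()
print(contains_encoded_executable("cmd.exe /c echo " + payload))  # True
```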
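The double-extension alert can likewise be illustrated with a small filename check. Both extension sets below are illustrative assumptions, not the lists the product uses:

```python
# Flag filenames where a benign-looking "decoy" extension hides an
# executable one (e.g. invoice.pdf.exe).
SUSPICIOUS_FINAL = {".exe", ".scr", ".com", ".pif", ".cmd", ".bat"}
DECOY = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".jpg", ".png", ".txt"}

def has_suspicious_double_extension(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # zero or one extension only
    decoy, final = "." + parts[-2], "." + parts[-1]
    return final in SUSPICIOUS_FINAL and decoy in DECOY

print(has_suspicious_double_extension("invoice.pdf.exe"))  # True
print(has_suspicious_double_extension("archive.tar.gz"))   # False
```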
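Finally, the WindowPosition alert pairs the decimal value 201329664 with its hex form; the arithmetic checks out (0x0C000C00 == 201329664), and the packed X/Y split can be worked through in a few lines. Which 16-bit word maps to which axis is an assumption in this sketch, since both halves of the quoted value are 0x0C00:

```python
# Splitting the 32-bit WindowPosition value into 16-bit words yields
# 0x0C00 for each axis: coordinates far enough off-screen to hide a
# console window below the visible start menu/taskbar.

def decode_window_position(value: int) -> tuple[int, int]:
    """Split a packed 32-bit WindowPosition value into two 16-bit words."""
    high = (value >> 16) & 0xFFFF
    low = value & 0xFFFF
    return high, low

assert 201329664 == 0x0C000C00  # the decimal/hex pair quoted in the alert
print(decode_window_position(201329664))   # (3072, 3072) == (0x0C00, 0x0C00)
print(decode_window_position(0xC000C000))  # the other known-suspect value
```
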
## <a name="alerts-linux"></a>Alerts for Linux machines
Microsoft Defender for Containers provides security alerts on the cluster level
[Further details and notes](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters)
-| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
-|--|--|:-:|--|
-| **Exposed Postgres service with trust authentication configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresTrustAuth) | Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer. The service is configured with the trust authentication method, which doesn't require credentials. | InitialAccess | Medium |
-| **Exposed Postgres service with risky configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresBroadIPRange) | Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer with a risky configuration. Exposing the service to a wide range of IP addresses poses a security risk. | InitialAccess | Medium |
-| **Attempt to create a new Linux namespace from a container detected**<br>(K8S.NODE_NamespaceCreation) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container in a Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker is trying to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium |
-| **A history file has been cleared**<br>(K8S.NODE_HistoryFileCleared) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
-| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiActivity) | Analysis of Azure Resource Manager operations detected abnormal behavior of a managed identity used by an AKS addon. The detected activity isn't consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was gained by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium |
-| **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation that isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium |
-| **An uncommon connection attempt detected**<br>(K8S.NODE_SuspectConnection) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected an uncommon connection attempt utilizing the SOCKS protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
-| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a pod deployment that is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation are in relation to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace in which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
-| **Anomalous secret access (Preview)**<br>(K8S_AnomalousSecretAccess) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a secret access request that is anomalous based on previous secret access activity. This activity is considered an anomaly when taking into account how the different features seen in the secret access operation are in relation to one another. The features monitored by this analytic include the user name used, the name of the secret, the name of the namespace, the user agent used in the operation, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | CredentialAccess | Medium |
-| **Attempt to stop apt-daily-upgrade.timer service detected**<br>(K8S.NODE_TimerServiceDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node has detected an attempt to stop the apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
-| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
-| **Behavior similar to Fairware ransomware detected**<br>(K8S.NODE_FairwareMalware) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected the execution of rm -rf commands applied to suspicious locations. As rm -rf will recursively delete files, it is normally used on discrete folders. In this case, it is being used in a location that could remove a lot of data. Fairware ransomware is known to execute rm -rf commands in this folder. | Execution | Medium |
+| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity |
+| - | - | :-: | - |
+| **Exposed Postgres service with trust authentication configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresTrustAuth) | Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer. The service is configured with the trust authentication method, which doesn't require credentials. | InitialAccess | Medium |
+| **Exposed Postgres service with risky configuration in Kubernetes detected (Preview)**<br>(K8S_ExposedPostgresBroadIPRange) | Kubernetes cluster configuration analysis detected exposure of a Postgres service by a load balancer with a risky configuration. Exposing the service to a wide range of IP addresses poses a security risk. | InitialAccess | Medium |
+| **Attempt to create a new Linux namespace from a container detected**<br>(K8S.NODE_NamespaceCreation) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container in a Kubernetes cluster detected an attempt to create a new Linux namespace. While this behavior might be legitimate, it might indicate that an attacker is trying to escape from the container to the node. Some CVE-2022-0185 exploitations use this technique. | PrivilegeEscalation | Medium |
+| **A history file has been cleared**<br>(K8S.NODE_HistoryFileCleared) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected that the command history log file has been cleared. Attackers may do this to cover their tracks. The operation was performed by the specified user account. | DefenseEvasion | Medium |
+| **Abnormal activity of managed identity associated with Kubernetes (Preview)**<br>(K8S_AbnormalMiActivity) | Analysis of Azure Resource Manager operations detected an abnormal behavior of a managed identity used by an AKS addon. The detected activity isn't consistent with the behavior of the associated addon. While this activity can be legitimate, such behavior might indicate that the identity was acquired by an attacker, possibly from a compromised container in the Kubernetes cluster. | Lateral Movement | Medium |
+| **Abnormal Kubernetes service account operation detected**<br>(K8S_ServiceAccountRareOperation) | Kubernetes audit log analysis detected abnormal behavior by a service account in your Kubernetes cluster. The service account was used for an operation that isn't common for this service account. While this activity can be legitimate, such behavior might indicate that the service account is being used for malicious purposes. | Lateral Movement, Credential Access | Medium |
+| **An uncommon connection attempt detected**<br>(K8S.NODE_SuspectConnection) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an uncommon connection attempt using the SOCKS protocol. This is very rare in normal operations, but a known technique for attackers attempting to bypass network-layer detections. | Execution, Exfiltration, Exploitation | Medium |
+| **Anomalous pod deployment (Preview)**<br>(K8S_AnomalousPodDeployment) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a pod deployment that is anomalous based on previous pod deployment activity. This activity is considered an anomaly when taking into account how the different features seen in the deployment operation relate to one another. The features monitored include the container image registry used, the account performing the deployment, day of the week, how often this account performs pod deployments, user agent used in the operation, whether this is a namespace to which pod deployments often occur, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | Execution | Medium |
+| **Anomalous secret access (Preview)**<br>(K8S_AnomalousSecretAccess) <sup>[2](#footnote2)</sup> | Kubernetes audit log analysis detected a secret access request that is anomalous based on previous secret access activity. This activity is considered an anomaly when taking into account how the different features seen in the secret access operation relate to one another. The features monitored by this analytic include the user name used, the name of the secret, the name of the namespace, the user agent used in the operation, and other features. Top contributing reasons for raising this alert as anomalous activity are detailed under the alert's extended properties. | CredentialAccess | Medium |
+| **Attempt to stop apt-daily-upgrade.timer service detected**<br>(K8S.NODE_TimerServiceDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to stop the apt-daily-upgrade.timer service. Attackers have been observed stopping this service to download malicious files and grant execution privileges for their attacks. This activity can also happen if the service is updated through normal administrative actions. | DefenseEvasion | Informational |
+| **Behavior similar to common Linux bots detected (Preview)**<br>(K8S.NODE_CommonBot) | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a process normally associated with common Linux botnets. | Execution, Collection, Command And Control | Medium |
+| **Command within a container running with high privileges**<br>(K8S.NODE_PrivilegedExecutionInContainer) <sup>[1](#footnote1)</sup> | Machine logs indicate that a privileged command was run in a Docker container. A privileged command has extended privileges on the host machine. | PrivilegeEscalation | Low |
+| **Container running in privileged mode**<br>(K8S.NODE_PrivilegedContainerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected the execution of a Docker command that is running a privileged container. The privileged container has full access to the hosting pod or host resource. If compromised, an attacker may use the privileged container to gain access to the hosting pod or host. | PrivilegeEscalation, Execution | Low |
+| **Container with a sensitive volume mount detected**<br>(K8S_SensitiveMount) | Kubernetes audit log analysis detected a new container with a sensitive volume mount. The volume that was detected is a hostPath type, which mounts a sensitive file or folder from the node to the container. If the container gets compromised, the attacker can use this mount to gain access to the node. | Privilege Escalation | Medium |
+| **CoreDNS modification in Kubernetes detected**<br>(K8S_CoreDnsModification) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a modification of the CoreDNS configuration. The configuration of CoreDNS can be modified by overriding its configmap. While this activity can be legitimate, if attackers have permissions to modify the configmap, they can change the behavior of the cluster's DNS server and poison it. | Lateral Movement | Low |
+| **Creation of admission webhook configuration detected**<br>(K8S_AdmissionController) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new admission webhook configuration. Kubernetes has two built-in generic admission controllers: MutatingAdmissionWebhook and ValidatingAdmissionWebhook. The behavior of these admission controllers is determined by an admission webhook that the user deploys to the cluster. The usage of such admission controllers can be legitimate; however, attackers can use such webhooks to modify requests (in the case of MutatingAdmissionWebhook) or to inspect requests and gain sensitive information (in the case of ValidatingAdmissionWebhook). | Credential Access, Persistence | Low |
+| **Detected file download from a known malicious source**<br>(K8S.NODE_SuspectDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a download of a file from a source frequently used to distribute malware. | PrivilegeEscalation, Execution, Exfiltration, Command And Control | Medium |
+| **Detected suspicious file download**<br>(K8S.NODE_SuspectDownloadArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious download of a remote file. | Persistence | Low |
+| **Detected suspicious use of the nohup command**<br>(K8S.NODE_SuspectNohup) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the nohup command. Attackers have been seen using the command nohup to run hidden files from a temporary directory to allow their executables to run in the background. It's rare to see this command run on hidden files located in a temporary directory. | Persistence, DefenseEvasion | Medium |
+| **Detected suspicious use of the useradd command**<br>(K8S.NODE_SuspectUserAddition) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious use of the useradd command. | Persistence | Medium |
+| **Digital currency mining container detected**<br>(K8S_MaliciousContainerImage) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a container that has an image associated with a digital currency mining tool. | Execution | High |
+| **Digital currency mining related behavior detected**<br>(K8S.NODE_DigitalCurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an execution of a process or command normally associated with digital currency mining. | Execution | High |
+| **Docker build operation detected on a Kubernetes node**<br>(K8S.NODE_ImageBuildOnNode) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | DefenseEvasion | Low |
+| **Excessive role permissions assigned in Kubernetes cluster (Preview)**<br>(K8S_ServiceAcountPermissionAnomaly) <sup>[3](#footnote3)</sup> | Analysis of the Kubernetes audit logs detected an excessive permissions role assignment to your cluster. The listed permissions for the assigned roles are uncommon to the specific service account. This detection considers previous role assignments to the same service account across clusters monitored by Azure, volume per permission, and the impact of the specific permission. The anomaly detection model used for this alert takes into account how this permission is used across all clusters monitored by Microsoft Defender for Cloud. | Privilege Escalation | Low |
+| **Executable found running from a suspicious location (Preview)**<br>(K8S.NODE_SuspectExecutablePath) | Analysis of processes running within a container or directly on a Kubernetes node, has detected an executable file that is running from a location associated with known suspicious files. This executable could be either legitimate activity or an indication of a compromised system. | Execution | Medium |
+| **Exposed Kubeflow dashboard detected**<br>(K8S_ExposedKubeflow) | The Kubernetes audit log analysis detected exposure of the Istio Ingress by a load balancer in a cluster that runs Kubeflow. This action might expose the Kubeflow dashboard to the internet. If the dashboard is exposed to the internet, attackers can access it and run malicious containers or code on the cluster. Find more details in the following article: <https://aka.ms/exposedkubeflow-blog> | Initial Access | Medium |
+| **Exposed Kubernetes dashboard detected**<br>(K8S_ExposedDashboard) | Kubernetes audit log analysis detected exposure of the Kubernetes Dashboard by a LoadBalancer service. An exposed dashboard allows unauthenticated access to cluster management and poses a security threat. | Initial Access | High |
+| **Exposed Kubernetes service detected**<br>(K8S_ExposedService) | The Kubernetes audit log analysis detected exposure of a service by a load balancer. This service is related to a sensitive application that allows high impact operations in the cluster such as running processes on the node or creating new containers. In some cases, this service doesn't require authentication. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Medium |
+| **Exposed Redis service in AKS detected**<br>(K8S_ExposedRedis) | The Kubernetes audit log analysis detected exposure of a Redis service by a load balancer. If the service doesn't require authentication, exposing it to the internet poses a security risk. | Initial Access | Low |
+| **Indicators associated with DDOS toolkit detected**<br>(K8S.NODE_KnownLinuxDDoSToolkit) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected file names that are part of a toolkit associated with malware capable of launching DDoS attacks, opening ports and services, and taking full control over the infected system. This could also possibly be legitimate activity. | Persistence, LateralMovement, Execution, Exploitation | Medium |
+| **K8S API requests from proxy IP address detected**<br>(K8S_TI_Proxy) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected API requests to your cluster from an IP address that is associated with proxy services, such as Tor. While this behavior can be legitimate, it's often seen in malicious activities when attackers try to hide their source IP. | Execution | Low |
+| **Kubernetes events deleted**<br>(K8S_DeleteEvents) <sup>[2](#footnote2)</sup> <sup>[3](#footnote3)</sup> | Defender for Cloud detected that some Kubernetes events have been deleted. Kubernetes events are objects in Kubernetes that contain information about changes in the cluster. Attackers might delete those events to hide their operations in the cluster. | Defense Evasion | Low |
+| **Kubernetes penetration testing tool detected**<br>(K8S_PenTestToolsKubeHunter) | Kubernetes audit log analysis detected usage of a Kubernetes penetration testing tool in the AKS cluster. While this behavior can be legitimate, attackers might use such public tools for malicious purposes. | Execution | Low |
+| **Manipulation of host firewall detected**<br>(K8S.NODE_FirewallDisabled) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible manipulation of the on-host firewall. Attackers will often disable this to exfiltrate data. | DefenseEvasion, Exfiltration | Medium |
+| **Microsoft Defender for Cloud test alert (not a threat).**<br>(K8S.NODE_EICAR) <sup>[1](#footnote1)</sup> | This is a test alert generated by Microsoft Defender for Cloud. No further action is needed. | Execution | High |
+| **New container in the kube-system namespace detected**<br>(K8S_KubeSystemContainer) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new container in the kube-system namespace that isn't among the containers that normally run in this namespace. The kube-system namespace shouldn't contain user resources. Attackers can use this namespace for hiding malicious components. | Persistence | Low |
+| **New high privileges role detected**<br>(K8S_HighPrivilegesRole) <sup>[3](#footnote3)</sup> | Kubernetes audit log analysis detected a new role with high privileges. A binding to a role with high privileges gives the user/group high privileges in the cluster. Unnecessary privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **Possible attack tool detected**<br>(K8S.NODE_KnownLinuxAttackTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious tool invocation. This tool is often associated with malicious users attacking others. | Execution, Collection, Command And Control, Probing | Medium |
+| **Possible backdoor detected**<br>(K8S.NODE_LinuxBackdoorArtifact) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious file being downloaded and run. This activity has previously been associated with installation of a backdoor. | Persistence, DefenseEvasion, Execution, Exploitation | Medium |
+| **Possible command line exploitation attempt**<br>(K8S.NODE_ExploitAttempt) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible exploitation attempt against a known vulnerability. | Exploitation | Medium |
+| **Possible credential access tool detected**<br>(K8S.NODE_KnownLinuxCredentialAccessTool) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected that a possible known credential access tool was running on the container, as identified by the specified process and command-line history item. This tool is often associated with attacker attempts to access credentials. | CredentialAccess | Medium |
+| **Possible Cryptocoinminer download detected**<br>(K8S.NODE_CryptoCoinMinerDownload) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a download of a file normally associated with digital currency mining. | DefenseEvasion, Command And Control, Exploitation | Medium |
+| **Possible data exfiltration detected**<br>(K8S.NODE_DataEgressArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible data egress condition. Attackers will often egress data from machines they have compromised. | Collection, Exfiltration | Medium |
+| **Possible Log Tampering Activity Detected**<br>(K8S.NODE_SystemLogRemoval) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a possible removal of files that track user activity during the course of its operation. Attackers often try to evade detection and leave no trace of malicious activities by deleting such log files. | DefenseEvasion | Medium |
+| **Possible password change using crypt-method detected**<br>(K8S.NODE_SuspectPasswordChange) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a password change using the crypt method. Attackers can make this change to continue access and gain persistence after compromise. | CredentialAccess | Medium |
+| **Potential port forwarding to external IP address**<br>(K8S.NODE_SuspectPortForwarding) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an initiation of port forwarding to an external IP address. | Exfiltration, Command And Control | Medium |
+| **Potential reverse shell detected**<br>(K8S.NODE_ReverseShell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a potential reverse shell. These are used to get a compromised machine to call back into a machine an attacker owns. | Exfiltration, Exploitation | Medium |
+| **Privileged container detected**<br>(K8S_PrivilegedContainer) | Kubernetes audit log analysis detected a new privileged container. A privileged container has access to the node's resources and breaks the isolation between containers. If compromised, an attacker can use the privileged container to gain access to the node. | Privilege Escalation | Low |
+| **Process associated with digital currency mining detected**<br>(K8S.NODE_CryptoCoinMinerArtifacts) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected the execution of a process normally associated with digital currency mining. | Execution, Exploitation | Medium |
+| **Process seen accessing the SSH authorized keys file in an unusual way**<br>(K8S.NODE_SshKeyAccess) <sup>[1](#footnote1)</sup> | An SSH authorized_keys file was accessed in a manner similar to known malware campaigns. This access could signify that an actor is attempting to gain persistent access to a machine. | Unknown | Low |
+| **Role binding to the cluster-admin role detected**<br>(K8S_ClusterAdminBinding) | Kubernetes audit log analysis detected a new binding to the cluster-admin role, which gives administrator privileges. Unnecessary administrator privileges might cause privilege escalation in the cluster. | Persistence | Low |
+| **Security-related process termination detected**<br>(K8S.NODE_SuspectProcessTermination) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected an attempt to terminate processes related to security monitoring on the container. Attackers will often try to terminate such processes using predefined scripts post-compromise. | Persistence | Low |
+| **SSH server is running inside a container**<br>(K8S.NODE_ContainerSSH) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected an SSH server running inside the container. | Execution | Medium |
+| **Suspicious file timestamp modification**<br>(K8S.NODE_TimestampTampering) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious timestamp modification. Attackers will often copy timestamps from existing legitimate files to new tools to avoid detection of these newly dropped files. | Persistence, DefenseEvasion | Low |
+| **Suspicious request to Kubernetes API**<br>(K8S.NODE_KubernetesAPI) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes API. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium |
+| **Suspicious request to the Kubernetes Dashboard**<br>(K8S.NODE_KubernetesDashboard) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container indicates that a suspicious request was made to the Kubernetes Dashboard. The request was sent from a container in the cluster. Although this behavior can be intentional, it might indicate that a compromised container is running in the cluster. | LateralMovement | Medium |
+| **Potential crypto coin miner started**<br>(K8S.NODE_CryptoCoinMinerExecution) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a process being started in a way normally associated with digital currency mining. | Execution | Medium |
+| **Suspicious password access**<br>(K8S.NODE_SuspectPasswordFileAccess) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious attempt to access encrypted user passwords. | Persistence | Informational |
+| **Suspicious use of DNS over HTTPS**<br>(K8S.NODE_SuspiciousDNSOverHttps) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected the use of a DNS call over HTTPS in an uncommon fashion. This technique is used by attackers to hide calls out to suspect or malicious sites. | DefenseEvasion, Exfiltration | Medium |
+| **A possible connection to malicious location has been detected.**<br>(K8S.NODE_ThreatIntelCommandLineSuspectDomain) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a connection to a location that has been reported to be malicious or unusual. This is an indicator that a compromise may have occurred. | InitialAccess | Medium |
+| **Possible malicious web shell detected.**<br>(K8S.NODE_Webshell) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected a possible web shell. Attackers will often upload a web shell to a compute resource they have compromised to gain persistence or for further exploitation. | Persistence, Exploitation | Medium |
+| **Burst of multiple reconnaissance commands could indicate initial activity after compromise**<br>(K8S.NODE_ReconnaissanceArtifactsBurst) <sup>[1](#footnote1)</sup> | Analysis of host/device data detected execution of multiple reconnaissance commands related to gathering system or host details performed by attackers after initial compromise. | Discovery, Collection | Low |
+| **Suspicious Download Then Run Activity**<br>(K8S.NODE_DownloadAndRunCombo) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a file being downloaded then run in the same command. While this isn't always malicious, this is a very common technique attackers use to get malicious files onto victim machines. | Execution, CommandAndControl, Exploitation | Medium |
+| **Digital currency mining activity**<br>(K8S.NODE_CurrencyMining) <sup>[1](#footnote1)</sup> | Analysis of DNS transactions detected digital currency mining activity. Such activity, while possibly legitimate user behavior, is frequently performed by attackers following compromise of resources. Typical related attacker activity is likely to include the download and execution of common mining tools. | Exfiltration | Low |
+| **Access to kubelet kubeconfig file detected**<br>(K8S.NODE_KubeConfigAccess) <sup>[1](#footnote1)</sup> | Analysis of processes running on a Kubernetes cluster node detected access to the kubeconfig file on the host. The kubeconfig file, normally used by the kubelet process, contains credentials to the Kubernetes cluster API server. Access to this file is often associated with attackers attempting to access those credentials, or with security scanning tools that check whether the file is accessible. | CredentialAccess | Medium |
+| **Access to cloud metadata service detected**<br>(K8S.NODE_ImdsCall) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container detected access to the cloud metadata service for acquiring an identity token. The container doesn't normally perform such an operation. While this behavior might be legitimate, attackers might use this technique to access cloud resources after gaining initial access to a running container. | CredentialAccess | Medium |
+| **MITRE Caldera agent detected**<br>(K8S.NODE_MitreCalderaTools) <sup>[1](#footnote1)</sup> | Analysis of processes running within a container or directly on a Kubernetes node, has detected a suspicious process. This is often associated with the MITRE 54ndc47 agent which could be used maliciously to attack other machines. | Persistence, PrivilegeEscalation, DefenseEvasion, CredentialAccess, Discovery, LateralMovement, Execution, Collection, Exfiltration, Command And Control, Probing, Exploitation | Medium |
<sup><a name="footnote1"></a>1</sup>: **Preview for non-AKS clusters**: This alert is generally available for AKS clusters, but it is in preview for other environments, such as Azure Arc, EKS and GKE.
| VM_VbScriptHttpObjectAllocation | VBScript HTTP object allocation detected | High |
- [Security alerts in Microsoft Defender for Cloud](alerts-overview.md)
- [Manage and respond to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md)
- [Continuously export Defender for Cloud data](continuous-export.md)
defender-for-cloud Data Aware Security Dashboard Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-aware-security-dashboard-overview.md
description: Learn about the capabilities and functions of the data-aware securi
Previously updated : 10/17/2023 Last updated : 11/06/2023

# Data security dashboard
You can select any element on the page to get more detailed information.
|||
|Release state: | Public Preview |
| Prerequisites: | Defender for CSPM fully enabled, including sensitive data discovery <br/> Workload protection for database and storage to explore active risks |
-| Required roles and permissions: | No other roles needed on top of what is required for the security explorer. |
+| Required roles and permissions: | No other roles needed aside from what is required for the security explorer. <br><br> To access the dashboard with more than 1000 subscriptions, you must have tenant-level permissions, which include one of the following roles: **Global Reader**, **Global Administrator**, **Security Administrator**, or **Security Reader**. |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds <br/> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government <br/> :::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet |

## Prerequisites
-In order to view the dashboard, you must enable Defender CSPM and also enable the sensitive data discovery extensions button underneath. In addition, to receive the alerts for data sensitivity, you must also enable the Defender for Storage plan.
+To view the dashboard, you must enable Defender CSPM and also turn on the sensitive data discovery extension underneath it. In addition, to receive the alerts for data sensitivity, you must also enable the Defender for Storage plan for storage-related alerts or Defender for Databases for database-related alerts.
:::image type="content" source="media/data-aware-security-dashboard/select-sensitive-data-discovery.png" alt-text="Screenshot that shows where to turn on the sensitive data discovery extension." lightbox="media/data-aware-security-dashboard/select-sensitive-data-discovery.png":::
The feature is turned on at the subscription level.
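If you prefer to script this instead of using the portal, one approach is `az rest` against the Microsoft.Security/pricings API. The sketch below is illustrative only: the `api-version`, the `SensitiveDataDiscovery` extension name, and the payload shape are assumptions to confirm against the current pricings API reference.

```bash
# Illustrative sketch: enable Defender CSPM with the sensitive data discovery
# extension on the current subscription. The api-version and the
# "SensitiveDataDiscovery" extension name are assumptions; confirm them
# against the Microsoft.Security/pricings API documentation.
SUBSCRIPTION_ID=$(az account show --query id --output tsv)

az rest --method put \
  --url "https://management.azure.com/subscriptions/${SUBSCRIPTION_ID}/providers/Microsoft.Security/pricings/CloudPosture?api-version=2023-01-01" \
  --body '{"properties": {"pricingTier": "Standard", "extensions": [{"name": "SensitiveDataDiscovery", "isEnabled": "True"}]}}'
```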
## Required permissions and roles

-- To view the dashboard you must have either one of the following scenarios:
+- To view the dashboard, you must meet one of the following scenarios:
- **all of the following permissions**:
You can select **Manage data sensitivity settings** to get to the **Data sen
### Data resources security status
-**Sensitive resources status over time** - displays how data security evolves over time with a graph that shows the number of sensitive resources affected by alerts, attack paths, and recommendations within a defined period (last 30, 14, or 7 days).
+**Sensitive resources status over time** - displays how data security evolves over time with a graph that shows the number of sensitive resources affected by alerts, attack paths, and recommendations within a defined period (last 30, 14, or 7 days).
:::image type="content" source="media/data-aware-security-dashboard/data-resources-security-status.png" alt-text="Screenshot that shows the data resources security status section of the data security view." lightbox="media/data-aware-security-dashboard/data-resources-security-status.png":::
defender-for-cloud Defender For Apis Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-apis-posture.md
This article describes how to investigate API security findings, alerts, and sec
:::image type="content" source="media/defender-for-apis-posture/resource-health.png" alt-text="Screenshot that shows the health of an endpoint." lightbox="media/defender-for-apis-posture/resource-health.png":::
+## Remediate recommendations using workflow automation
+You can remediate recommendations generated by Defender for APIs by using workflow automation.
+1. In an eligible recommendation, select one or more unhealthy resources.
+2. Select **Trigger logic app**.
+3. Confirm the **Selected subscription**.
+4. Select a relevant logic app from the list.
+5. Select **Trigger**.
+
+You can browse the [Microsoft Defender for Cloud GitHub](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workflow%20automation/Defender%20for%20API) repository for available workflow automations.
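Many of the samples in that repository ship as ARM templates, so one way to install them is with `az deployment group create`. The resource group name and template URI below are hypothetical placeholders, not a specific sample from the repository; substitute the `azuredeploy.json` path of the sample you actually choose.

```bash
# Hypothetical example: deploy a sample workflow automation ARM template from
# the repository above. Both the resource group and the template URI are
# placeholders -- replace them with the sample you select.
az deployment group create \
  --resource-group my-defender-automation-rg \
  --template-uri "https://raw.githubusercontent.com/Azure/Microsoft-Defender-for-Cloud/main/Workflow%20automation/Defender%20for%20API/azuredeploy.json"
```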
+ ## Create sample security alerts

In Defender for Cloud, you can use sample alerts to evaluate your Defender for Cloud plans and validate your security configuration. [Follow these instructions](alert-validation.md#generate-sample-security-alerts) to set up sample alerts, and select the relevant APIs within your subscriptions.
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
With a simple agentless setup at scale, you can [enable Defender for Storage](tu
|-|:-|
|Release state:|General Availability (GA)|
|Feature availability:|- Activity monitoring (security alerts) – General Availability (GA)<br>- Malware Scanning – General Availability (GA)<br>- Sensitive data threat detection (Sensitive Data Discovery) – Preview|
-|Pricing:|- Defender for Storage: $10/storage accounts/month\*<br>- Malware Scanning (add-on): $0.15/GB (USD) of data ingested\*\* <br><br>Above pricing applies to commercial clouds. Visit the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn more.<br><br>\* Storage accounts that exceed 73 million monthly transactions will be charged $0.1492 for every 1 million transactions that exceed the threshold.<br>\*\* Billing begins on September 3, 2023. To limit expenses, use the `Monthly capping` feature to set a cap on the amount of GB scanned per month, per storage account to help you control your costs. |
-| Supported storage types:|[Blob Storage](https://azure.microsoft.com/products/storage/blobs/) (Standard/Premium StorageV2, including Data Lake Gen2): Activity monitoring, Malware Scanning, Sensitive Data Discovery<br>Azure Files (over REST API and SMB): Activity monitoring |
+|Pricing:|**Microsoft Defender for Storage** pricing applies to commercial clouds. Learn more about [pricing and availability per region](https://azure.microsoft.com/pricing/details/defender-for-cloud/).|
+|Supported storage types:|[Blob Storage](https://azure.microsoft.com/products/storage/blobs/) (Standard/Premium StorageV2, including Data Lake Gen2): Activity monitoring, Malware Scanning, Sensitive Data Discovery<br>Azure Files (over REST API and SMB): Activity monitoring |
|Required roles and permissions:|For Malware Scanning and sensitive data threat detection at subscription and storage account levels, you need Owner roles (subscription owner/storage account owner) or specific roles with corresponding data actions. To enable Activity Monitoring, you need 'Security Admin' permissions. Read more about the required permissions.|
|Clouds:|:::image type="icon" source="../defender-for-cloud/media/icons/yes-icon.png"::: Commercial clouds\*<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Azure Government (only activity monitoring support on the [classic plan](/azure/defender-for-cloud/defender-for-storage-classic))<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="../defender-for-cloud/media/icons/no-icon.png"::: Connected AWS accounts|
In this article, you learned about Microsoft Defender for Storage.
+
defender-for-cloud Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
AWS Systems Manager manages auto-provisioning by using the SSM Agent. Some Amazo
- [Install SSM Agent for a hybrid and multicloud environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
- [Install SSM Agent for a hybrid and multicloud environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
-Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html), which enables core functionality for the AWS Systems Manager service.
+Ensure that your SSM Agent has the managed policy [AmazonSSMManagedInstanceCore](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonSSMManagedInstanceCore.html), which enables core functionality for the AWS Systems Manager service.
+
+**The SSM Agent is required to auto-provision the Arc agent on EC2 machines. If the SSM Agent doesn't exist, or is removed from the EC2 instance, the Arc provisioning can't proceed.**
+
+> [!NOTE]
+> As part of the CloudFormation template that runs during the onboarding process, an automation process is created and triggered every 30 days over all the EC2 instances that existed during the initial run of the template. The goal of this scheduled scan is to ensure that all the relevant EC2 instances have an IAM profile with the required IAM policy that allows Defender for Cloud to access, manage, and provide the relevant security features (including the Arc agent provisioning). The scan doesn't apply to EC2 instances that were created after the CloudFormation template ran.
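+
+One way to confirm which EC2 instances are reporting to AWS Systems Manager, and are therefore eligible for Arc auto-provisioning, is to query SSM directly. This is a minimal sketch, assuming the AWS CLI is installed and configured for the connected account:
+
+```bash
+# List instances managed by AWS Systems Manager with their SSM Agent connectivity status
+aws ssm describe-instance-information \
+  --query "InstanceInformationList[].[InstanceId,PingStatus,PlatformType]" \
+  --output table
+```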
If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed.
Connecting your AWS account is part of the multicloud experience available in Mi
- Set up your [on-premises machines](quickstart-onboard-machines.md) and [GCP projects](quickstart-onboard-gcp.md).
- Get answers to [common questions](faq-general.yml) about onboarding your AWS account.
- [Troubleshoot your multicloud connectors](troubleshooting-guide.md#troubleshooting-the-native-multicloud-connector).
+
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Today, there are four Service Level 2 names: Azure Defender, Advanced Threat Pro
The change will simplify the process of reviewing Defender for Cloud charges and provide better clarity in cost analysis.
-To ensure a smooth transition, we've taken measures to maintain the consistency of the Product/Service name, SKU, and Meter IDs. Impacted customers will receive an informational Azure Service Notification to communicate the changes. No action is necessary from customers.
+To ensure a smooth transition, we've taken measures to maintain the consistency of the Product/Service name, SKU, and Meter IDs. Impacted customers will receive an informational Azure Service Notification to communicate the changes.
+
+Organizations that retrieve cost data by calling our APIs need to update the values in their calls to accommodate the change. For example, the values in the following filter function will no longer return any information:
+
+```json
+"filter": {
+ "dimensions": {
+ "name": "MeterCategory",
+ "operator": "In",
+ "values": [
+ "Advanced Threat Protection",
+ "Advanced Data Security",
+ "Azure Defender",
+ "Security Center"
+ ]
+ }
+ }
+```
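+
+After the change, a filter updated for the consolidated name might look like the following sketch, which assumes the four legacy values are replaced by the single Service Level 2 name **Microsoft Defender for Cloud**:
+
+```json
+"filter": {
+    "dimensions": {
+        "name": "MeterCategory",
+        "operator": "In",
+        "values": [
+            "Microsoft Defender for Cloud"
+        ]
+    }
+  }
+```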
The change is planned to go into effect on December 1, 2023.
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
If a subscription, account, or project has *any* Defender plan enabled, more sta
| -| | |
| PCI-DSS v3.2.1 **(deprecated)** | CIS AWS Foundations v1.2.0 | CIS GCP Foundations v1.1.0 |
| PCI DSS v4 | CIS AWS Foundations v1.5.0 | CIS GCP Foundations v1.2.0 |
-| SOC TSP | PCI DSS v3.2.1 | PCI DSS v3.2.1 |
+| SOC TSP **(deprecated)** | PCI DSS v3.2.1 | PCI DSS v3.2.1 |
| SOC 2 Type 2 | AWS Foundational Security Best Practices | NIST 800-53 |
| ISO 27001:2013 | | ISO 27001 |
| CIS Azure Foundations v1.1.0 |||
If a subscription, account, or project has *any* Defender plan enabled, more sta
| FedRAMP M |||
| HIPAA/HITRUST |||
| SWIFT CSP CSCF v2020 |||
+| SWIFT CSP CSCF v2022 |||
| UK OFFICIAL and UK NHS |||
| Canada Federal PBMM |||
| New Zealand ISM Restricted |||
defender-for-iot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md
Alert options also differ depending on your location and user role. For more inf
### Enterprise IoT alerts and Microsoft Defender for Endpoint
-Alerts triggered by Enterprise IoT sensors are shown in the Azure portal only.
+If you're using [Enterprise IoT security](eiot-defender-for-endpoint.md) in Microsoft 365 Defender, alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Microsoft 365 Defender only.
-If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) with Microsoft Defender for Endpoint, alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Microsoft 365 Defender only.
+Alerts triggered by [Enterprise IoT sensors](eiot-sensor.md) are shown in the Azure portal only.
For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md) and the [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response).

## Managing OT alerts in a hybrid environment
-Users working in hybrid environments may be managing OT alerts in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, the OT sensor, and an on-premises management console.
+Users working in hybrid environments might be managing OT alerts in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, the OT sensor, and an on-premises management console.
Alert statuses are fully synchronized between the Azure portal and the OT sensor, and between the sensor and the on-premises management console. This means that regardless of where you manage the alert in Defender for IoT, the alert is updated in other locations as well.
Setting an alert status to **Closed** or **Muted** on a sensor or on-premises ma
New alerts are automatically closed if no identical traffic is detected for 90 days after the initial detection. If identical traffic is detected within those first 90 days, the 90-day count is reset.
-In addition to the default behavior, you may want to help your SOC and OT management teams triage and remediate alerts faster. Sign into an OT sensor or an on-premises management console as an **Admin** user to use the following options:
+In addition to the default behavior, you might want to help your SOC and OT management teams triage and remediate alerts faster. Sign into an OT sensor or an on-premises management console as an **Admin** user to use the following options:
- **Create custom alert rules**. OT sensors only.
Use the following table to learn more about each alert status and triage option.
|**Active** | - Azure portal only | Set an alert to *Active* to indicate that an investigation is underway, but that the alert can't yet be closed or otherwise triaged. <br><br>This status has no effect elsewhere in Defender for IoT. |
|**Closed** | - Azure portal <br><br>- OT network sensors <br><br>- On-premises management console | Close an alert to indicate that it's fully investigated, and you want to be alerted again the next time the same traffic is detected.<br><br>Closing an alert adds it to the sensor event timeline.<br><br>On the on-premises management console, *New* alerts are called *Acknowledged*. |
|**Learn** | - Azure portal <br><br>- OT network sensors <br><br>- On-premises management console <br><br>*Unlearning* an alert is available only on the OT sensor. | Learn an alert when you want to close it and add it as allowed traffic, so that you aren't alerted again the next time the same traffic is detected. <br><br>For example, when the sensor detects firmware version changes following standard maintenance procedures, or when a new, expected device is added to the network. <br><br>Learning an alert closes the alert and adds an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when calculating other OT sensor reports. <br><br>Learning alerts is available for selected alerts only, mostly those triggered by *Policy* and *Anomaly* engine alerts. |
-|**Mute** | - OT network sensors <br><br>- On-premises management console <br><br>*Unmuting* an alert is available only on the OT sensor. | Mute an alert when you want to close it and not see again for the same traffic, but without adding the alert allowed traffic. <br><br>For example, when the Operational engine triggers an alert indicating that the PLC Mode was changed on a device. The new mode may indicate that the PLC isn't secure, but after investigation, it's determined that the new mode is acceptable. <br><br>Muting an alert closes it, but doesn't add an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when calculating data for other sensor reports. <br><br>Muting an alert is available for selected alerts only, mostly those triggered by the *Anomaly*, *Protocol Violation*, or *Operational* engines. |
+|**Mute** | - OT network sensors <br><br>- On-premises management console <br><br>*Unmuting* an alert is available only on the OT sensor. | Mute an alert when you want to close it and not see it again for the same traffic, but without adding the alert as allowed traffic. <br><br>For example, when the Operational engine triggers an alert indicating that the PLC Mode was changed on a device. The new mode might indicate that the PLC isn't secure, but after investigation, it's determined that the new mode is acceptable. <br><br>Muting an alert closes it, but doesn't add an item to the sensor event timeline. Detected traffic is included in data mining reports, but not when calculating data for other sensor reports. <br><br>Muting an alert is available for selected alerts only, mostly those triggered by the *Anomaly*, *Protocol Violation*, or *Operational* engines. |
> [!TIP]
> If you know ahead of time which events are irrelevant for you, such as during a maintenance window, or if you don't want to track the event in the event timeline, create an alert exclusion rule on an on-premises management console instead.
defender-for-iot Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/billing.md
Title: Subscription billing
-description: Learn how you're billed for the Microsoft Defender for IoT service on your Azure subscription.
+ Title: Microsoft Defender for IoT billing
+description: Learn how you're billed for the Microsoft Defender for IoT service.
Previously updated : 05/17/2023 Last updated : 09/13/2023
+#CustomerIntent: As a Defender for IoT customer, I want to understand how I'm billed for Defender for IoT services so that I can best plan my deployment.
-# Defender for IoT subscription billing
+# Defender for IoT billing
As you plan your Microsoft Defender for IoT deployment, you typically want to understand the Defender for IoT pricing plans and billing models so you can optimize your costs.
-OT monitoring is billed using site-based licenses, where each license applies to an individual site, based on the site size. A site is a physical location, such as a facility, campus, office building, hospital, rig, and so on. Each site can contain any number of network sensors, all which monitor devices detected in connected networks.
+**OT monitoring** is billed using site-based licenses, where each license applies to an individual site, based on the site size. A site is a physical location, such as a facility, campus, office building, hospital, rig, and so on. Each site can contain any number of network sensors, all of which monitor devices detected in connected networks.
-Enterprise IoT monitoring is billed based on the number of devices covered by your plan.
+**Enterprise IoT** monitoring supports five devices per Microsoft 365 E5 (ME5) or E5 Security license, or is available as standalone, per-device licenses for Microsoft Defender for Endpoint P2 customers.
## Free trial
-If you would like to evaluate Defender for IoT, you can use a trial license:
+To evaluate Defender for IoT, start a free trial as follows:
-- **For OT networks**, use a trial to deploy one or more Defender for IoT sensors on your network to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. An OT trial supports a **Large** site license for 60 days. For more information, see [Start a Microsoft Defender for IoT trial](getting-started.md).
+- **For OT networks**, use a 60-day trial license. Deploy one or more Defender for IoT sensors on your network to monitor traffic, analyze data, generate alerts, learn about network risks and vulnerabilities, and more. An OT trial supports a **Large** site license. For more information, see [Start a Microsoft Defender for IoT trial](getting-started.md).
-- **For Enterprise IoT networks**, use a 30-day trial to view alerts, recommendations, and vulnerabilities in Microsoft 365. An Enterprise IoT trial is not limited to a specific number of devices. For more information, see [Enable Enterprise IoT security with Defender for Endpoint](eiot-defender-for-endpoint.md).
+- **For Enterprise IoT networks**, use a trial, standalone license for 90 days as an add-on to Microsoft Defender for Endpoint. Trial licenses support 100 devices. For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md) and [Enable Enterprise IoT security with Defender for Endpoint](eiot-defender-for-endpoint.md).
## Defender for IoT devices
-When purchasing a Defender for IoT license for an OT plan, or when onboarding or editing a monthly Enterprise IoT plan, we recommend that you have a sense of how many devices you'll want to cover.
+We recommend that you have a sense of how many devices you want to monitor so that you know how many OT sites you need to license, or if you need any standalone licenses for enterprise IoT security.
- **OT monitoring**: Purchase a license for each site that you're planning to monitor. License fees differ based on the site size, each of which covers a different number of devices.
-- **Enterprise IoT monitoring**: Purchase a price plan based on the number of devices you want to monitor.
+- **Enterprise IoT monitoring**: Five devices are supported for each ME5/E5 Security user license. If you have more devices to monitor, and are a Defender for Endpoint P2 customer, purchase extra, standalone licenses for each device you want to monitor.
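+
+  For example, under this model, an organization with 1,000 ME5 licenses is covered for up to 5,000 enterprise IoT devices, and monitoring 5,200 devices would require 200 extra standalone, per-device licenses. (These numbers are illustrative only.)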
[!INCLUDE [devices-inventoried](includes/devices-inventoried.md)]
defender-for-iot Concept Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-enterprise.md
Title: Securing IoT devices in the enterprise with Microsoft Defender for Endpoint
-description: Learn how integrating Microsoft Defender for Endpoint and Microsoft Defender for IoT's security content and network sensors enhances your IoT network security.
+ Title: Securing IoT devices | Microsoft Defender for IoT
+description: Learn how integrating Microsoft Defender for Endpoint and Microsoft Defender for IoT's security content enhances your IoT network security.
Previously updated : 05/31/2023 Last updated : 09/13/2023
+#CustomerIntent: As a Defender for IoT customer, I want to understand how I can secure my enterprise IoT devices with Microsoft Defender for IoT so that I can protect my organization from IoT threats.
# Securing IoT devices in the enterprise
-The number of IoT devices continues to grow exponentially across enterprise networks, such as the printers, Voice over Internet Protocol (VoIP) devices, smart TVs, and conferencing systems scattered around many office buildings.
+The number of IoT devices continues to grow exponentially across enterprise networks, such as the printers, Voice over Internet Protocol (VoIP) devices, smart TVs, and conferencing systems scattered around many office buildings.
While the number of IoT devices continues to grow, they often lack the security safeguards that are common on managed endpoints like laptops and mobile phones. To bad actors, these unmanaged devices can be used as a point of entry for lateral movement or evasion, and too often, the use of such tactics leads to the exfiltration of sensitive information.
-[Microsoft Defender for IoT](./index.yml) seamlessly integrates with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built alerts, recommendations, and vulnerability data.
+[Microsoft Defender for IoT](./index.yml) seamlessly integrates with [Microsoft 365 Defender](/microsoft-365/security/defender) and [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) to provide both IoT device discovery and security value for IoT devices, including purpose-built alerts, recommendations, and vulnerability data.
-## IoT security across Microsoft 365 Defender and Azure
+## Enterprise IoT security in Microsoft 365 Defender
-Defender for IoT provides IoT security functionality across both the Microsoft 365 Defender and [Azure portals](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started).
+Enterprise IoT security in Microsoft 365 Defender provides IoT-specific security value, including alerts, risk and exposure levels, vulnerabilities, and recommendations in Microsoft 365 Defender.
-[Add an Enterprise IoT plan](eiot-defender-for-endpoint.md) in the **Settings** \> **Device discovery** \> **Enterprise IoT** page in Microsoft 365 Defender to view IoT-specific alerts, recommendations, and vulnerability data in Microsoft 365 Defender. The extra security value is provided for IoT devices detected by Defender for Endpoint.
+- If you're a Microsoft 365 E5 (ME5)/E5 Security and Defender for Endpoint P2 customer, [toggle on support](eiot-defender-for-endpoint.md) for **Enterprise IoT Security** in the Microsoft 365 Defender portal.
-Integrating your Enterprise IoT plan with Microsoft 365 Defender requires the following:
+- If you don't have ME5/E5 Security licenses, but you're a Microsoft Defender for Endpoint customer, start with a [free trial](billing.md#free-trial) or purchase standalone, per-device licenses to gain the same IoT-specific security value.
-- A Microsoft Defender for Endpoint P2 license
-- Microsoft 365 Defender access as a [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator)
-- Azure access as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner)
-
-## Security value in Microsoft 365 Defender
-
-Defender for IoT's Enterprise IoT plan adds purpose-built alerts, recommendations, and vulnerability data for the IoT devices discovered by Defender for Endpoint agents. The added security value is available in Microsoft 365 Defender, which is Microsoft's central portal for combined enterprise IT and IoT device security.
-
-For example, use the added security recommendations in Microsoft 365 Defender to open a single IT ticket to patch vulnerable applications on both servers and printers. Or, use a recommendation to request that the network team adds firewall rules that apply for both workstations and cameras communicating with a suspicious IP address.
-
-The following image shows the architecture and extra features added with an Enterprise IoT plan in Microsoft 365 Defender:
+The following image shows the architecture and extra features added with **Enterprise IoT security** in Microsoft 365 Defender:
:::image type="content" source="media/enterprise-iot/architecture-endpoint-only.png" alt-text="Diagram of the service architecture when you have an Enterprise IoT plan added to Defender for Endpoint." border="false":::
-> [!NOTE]
-> Defender for Endpoint doesn't issue IoT-specific alerts, recommendations, and vulnerability data without an Enterprise IoT plan in Microsoft 365 Defender. Use our [quickstart](eiot-defender-for-endpoint.md) to start seeing this extra security value across your network.
->
For more information, see:

-- [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md)
+- [Get started with enterprise IoT monitoring in Microsoft 365 Defender](eiot-defender-for-endpoint.md)
+- [Defender for IoT subscription billing](billing.md)
+- [Device discovery overview](/microsoft-365/security/defender-endpoint/device-discovery)
- [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response)
- [Security recommendations](/microsoft-365/security/defender-vulnerability-management/tvm-security-recommendation)
- [Vulnerabilities in my organization](/microsoft-365/security/defender-vulnerability-management/tvm-weaknesses)
- [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
- [Proactively hunt with advanced hunting in Microsoft 365 Defender](/microsoft-365/security/defender/advanced-hunting-overview)
+## Frequently asked questions
+
+This section provides a list of frequently asked questions about securing Enterprise IoT networks with Microsoft Defender for IoT.
+
+### What is the difference between OT and Enterprise IoT?
+
+- **Operational Technology (OT)**: OT network sensors use agentless, patented technology to discover, learn, and continuously monitor network devices for deep visibility into Operational Technology (OT) / Industrial Control System (ICS) risks. Sensors carry out data collection, analysis, and alerting on-site, making them ideal for locations with low bandwidth or high latency.
+
+- **Enterprise IoT**: Enterprise IoT provides visibility and security for IoT devices in the corporate environment.
+
+ Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment might include printers, cameras, and purpose-built, proprietary devices.
+
+### What extra security value can Enterprise IoT provide Microsoft Defender for Endpoint customers?
+
+Enterprise IoT is designed to help customers secure unmanaged devices throughout the organization and extend IT security to also cover IoT devices.
+
+While Microsoft Defender for Endpoint P2 customers already have visibility for discovered IoT devices in the **Device inventory** page in Defender for Endpoint, they can use enterprise IoT security to gain extra security value from alerts, recommendations, and vulnerabilities for their discovered IoT devices.
+
+### How can I start using Enterprise IoT?
+
+Microsoft 365 E5 (ME5) and E5 Security customers already have devices supported for enterprise IoT security. If you only have a Defender for Endpoint P2 license, you can purchase standalone, per-device licenses for enterprise IoT monitoring, or use a trial.
+
+For more information, see:
+
+- [Get started with enterprise IoT monitoring in Microsoft 365 Defender](eiot-defender-for-endpoint.md)
+- [Manage enterprise IoT monitoring support with Microsoft Defender for IoT](manage-subscriptions-enterprise.md)
+
+### What permissions do I need to use Enterprise IoT security with Defender for IoT?
+
+For information on required permissions, see [Prerequisites](eiot-defender-for-endpoint.md#prerequisites).
+
+### Which devices are billable?
+
+For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot).
+
+### How should I estimate the number of devices I want to monitor?
+
+For more information, see [Calculate monitored devices for Enterprise IoT monitoring](manage-subscriptions-enterprise.md#calculate-monitored-devices-for-enterprise-iot-monitoring).
+
+### How can I cancel Enterprise IoT?
+
+For more information, see [Turn off enterprise IoT security](manage-subscriptions-enterprise.md#turn-off-enterprise-iot-security).
+
+### What happens when the trial ends?
+
+If you haven't added a standalone license by the time your trial ends, your trial is automatically canceled, and you lose access to Enterprise IoT security features.
+
+For more information, see [Defender for IoT subscription billing](billing.md).
+
+### How can I resolve billing issues associated with my Defender for IoT plan?
+
+For any billing or technical issues, open a support ticket for Microsoft 365 Defender.
+ ## Next steps

Start securing your Enterprise IoT network resources by [onboarding to Defender for IoT from Microsoft 365 Defender](eiot-defender-for-endpoint.md).
defender-for-iot Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/device-inventory.md
A *transient* device type indicates a device that was detected for only a short
## Device management options
-The Defender for IoT device inventory is available in the Azure portal, OT network sensor consoles, and the on-premises management console.
-
-While you can view device details from any of these locations, each location also offers extra device inventory support. The following table describes the device inventory support for each location and the extra actions available from that location only:
+Defender for IoT device inventory is available in the following locations:
|Location |Description | Extra inventory support |
||||
-|**Azure portal** | Devices detected from all cloud-connected OT sensors and Enterprise IoT sensors. <br><br> | - If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) on your Azure subscription, the device inventory also includes devices detected by Microsoft Defender for Endpoint agents. <br><br>- If you also use [Microsoft Sentinel](iot-solution.md), incidents in Microsoft Sentinel are linked to related devices in Defender for IoT. <br><br>- Use Defender for IoT [workbooks](workbooks.md) for visibility into all cloud-connected device inventory, including related alerts and vulnerabilities. |
+|**Azure portal** | OT devices detected from all cloud-connected OT sensors. | - If you also use [Microsoft Sentinel](iot-solution.md), incidents in Microsoft Sentinel are linked to related devices in Defender for IoT. <br><br>- Use Defender for IoT [workbooks](workbooks.md) for visibility into all cloud-connected device inventory, including related alerts and vulnerabilities. <br><br>- If you have a [legacy Enterprise IoT plan](whats-new.md#enterprise-iot-protection-now-included-in-microsoft-365-e5-and-e5-security-licenses) on your Azure subscription, the Azure portal also includes devices detected by Microsoft Defender for Endpoint agents. If you have an [Enterprise IoT sensor](eiot-sensor.md), the Azure portal also includes devices detected by the Enterprise IoT sensor. |
+| **Microsoft 365 Defender** | Enterprise IoT devices detected by Microsoft Defender for Endpoint agents | Correlate devices across Microsoft 365 Defender in purpose-built alerts, vulnerabilities, and recommendations. |
|**OT network sensor consoles** | Devices detected by that OT sensor | - View all detected devices across a network device map<br><br>- View related events on the **Event timeline** | |**An on-premises management console** | Devices detected across all connected OT sensors | Enhance device data by importing data manually or via script |
For more information, see:

- [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
+- [Defender for Endpoint device discovery](/microsoft-365/security/defender-endpoint/device-discovery)
- [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md)
- [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)
-> [!NOTE]
-> If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) to [integrate with Microsoft Defender for Endpoint](concept-enterprise.md), devices detected by an Enterprise IoT sensor are also listed in Defender for Endpoint. For more information, see:
->
-> - [Defender for Endpoint device inventory](/microsoft-365/security/defender-endpoint/machines-view-overview)
-> - [Defender for Endpoint device discovery](/microsoft-365/security/defender-endpoint/device-discovery)
->
- ## Automatically consolidated devices

When you've deployed Defender for IoT at scale, with several OT sensors, each sensor might detect different aspects of the same device. To prevent duplicated devices in your device inventory, Defender for IoT assumes that any devices found in the same zone, with a logical combination of similar characteristics, are the same device. Defender for IoT automatically consolidates these devices and lists them only once in the device inventory.
-For example, any devices with the same IP and MAC address detected in the same zone are consolidated and identified as a single device in the device inventory. If you have separate devices from recurring IP addresses that are detected by multiple sensors, you'll want each of these devices to be identified separately. In such cases, [onboard your OT sensors](onboard-sensors.md) to different zones so that each device is identified as a separate and unique device, even if they have the same IP address. Devices that have the same MAC addresses, but different IP addresses are not merged, and continue to be listed as unique devices.
+For example, any devices with the same IP and MAC address detected in the same zone are consolidated and identified as a single device in the device inventory. If you have separate devices from recurring IP addresses that are detected by multiple sensors, you want each of these devices to be identified separately. In such cases, [onboard your OT sensors](onboard-sensors.md) to different zones so that each device is identified as a separate and unique device, even if they have the same IP address. Devices that have the same MAC addresses, but different IP addresses aren't merged, and continue to be listed as unique devices.
A *transient* device type indicates a device that was detected for only a short time. We recommend investigating these devices carefully to understand their impact on your network.
The following table lists the columns available in the Defender for IoT device i
|Name |Description |
|||
-|**Authorization** * |Editable. Determines whether or not the device is marked as *authorized*. This value may need to change as the device security changes. |
+|**Authorization** * |Editable. Determines whether or not the device is marked as *authorized*. This value might need to change as the device security changes. |
|**Business Function** | Editable. Describes the device's business function. |
| **Class** | Editable. The device's class. <br>Default: `IoT` |
|**Data source** | The source of the data, such as a micro agent, OT sensor, or Microsoft Defender for Endpoint. <br>Default: `MicroAgent`|
defender-for-iot Eiot Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-defender-for-endpoint.md
Title: Enable Enterprise IoT security in Microsoft 365 with Defender for Endpoint - Microsoft Defender for IoT
-description: Learn how to start integrating between Microsoft Defender for IoT and Microsoft Defender for Endpoint in Microsoft 365 Defender.
+ Title: Get started with enterprise IoT monitoring in Microsoft 365 Defender | Microsoft Defender for IoT
+description: Learn how to get added value for enterprise IoT devices in Microsoft 365 Defender.
Previously updated : 10/19/2022 Last updated : 09/13/2023
+#CustomerIntent: As a Microsoft 365 administrator, I want to understand how to turn on support for enterprise IoT monitoring in Microsoft 365 Defender and where I can find the added security value so that I can keep my EIoT devices safe.
-# Enable Enterprise IoT security with Defender for Endpoint
+# Get started with enterprise IoT monitoring in Microsoft 365 Defender
-This article describes how [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) customers can add an Enterprise IoT plan in Microsoft 365 Defender, providing extra security value for IoT devices.
+This article describes how [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) customers can monitor enterprise IoT devices in their environment, using added security value in Microsoft 365 Defender.
-While IoT device inventory is already available for Defender for Endpoint P2 customers, adding an Enterprise IoT plan adds alerts, recommendations, and vulnerability data, purpose-built for IoT devices in your enterprise network.
+While IoT device inventory is already available for Defender for Endpoint P2 customers, turning on enterprise IoT security adds alerts, recommendations, and vulnerability data, purpose-built for IoT devices in your enterprise network.
-IoT devices include printers, cameras, VOIP phones, smart TVs, and more. Adding an Enterprise IoT plan means, for example, that you can use a recommendation in Microsoft 365 Defender to open a single IT ticket for patching vulnerable applications across both servers and printers.
+IoT devices include printers, cameras, VoIP phones, smart TVs, and more. Turning on enterprise IoT security means, for example, that you can use a recommendation in Microsoft 365 Defender to open a single IT ticket for patching vulnerable applications across both servers and printers.
## Prerequisites
Before you start the procedures in this article, read through [Secure IoT device
Make sure that you have:

-- A Microsoft Defender for Endpoint P2 license
-
- IoT devices in your network, visible in the Microsoft 365 Defender **Device inventory**

-- An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/).
+- Access to the Microsoft 365 Defender portal as a [Security administrator](../../active-directory/roles/permissions-reference.md#security-administrator)
+
+- One of the following licenses:
+
+ - A Microsoft 365 E5 (ME5) or E5 Security license
+
+ - Microsoft Defender for Endpoint P2, with an extra, standalone **Microsoft Defender for IoT - EIoT Device License - add-on** license, available for purchase or trial from the Microsoft 365 admin center.
+
+ > [!TIP]
+ > If you have a standalone license, you don't need to toggle on **Enterprise IoT Security** and can skip directly to [View added security value in Microsoft 365 Defender](#view-added-security-value-in-microsoft-365-defender).
+ >
-- The following user roles:
+ For more information, see [Enterprise IoT security in Microsoft 365 Defender](concept-enterprise.md#enterprise-iot-security-in-microsoft-365-defender).
- |Identity management |Roles required |
- |||
- |**In Microsoft Entra ID** | [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) for your Microsoft 365 tenant |
- |**In Azure RBAC** | [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) for the Azure subscription that you'll be using for the integration |
-## Onboard a Defender for IoT plan
+## Turn on enterprise IoT monitoring
-1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**.
+This procedure describes how to turn on enterprise IoT monitoring in Microsoft 365 Defender, and is relevant only for ME5/E5 Security customers.
-1. Select the following options for your plan:
+Skip this procedure if you have one of the following types of licensing plans:
- - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription.
+- Customers with a legacy Enterprise IoT pricing plan and an ME5/E5 Security license.
+- Customers with standalone, per-device licenses added on to Microsoft Defender for Endpoint P2. In such cases, the Enterprise IoT security setting is turned on as read-only.
- - **Price plan**: For the sake of this tutorial, select a **Trial** pricing plan. Microsoft Defender for IoT provides a [30-day free trial](billing.md#free-trial) for evaluation purposes.
+**To turn on enterprise IoT monitoring**:
-1. Select the **I accept the terms and conditions** option and then select **Save**.
+1. In [Microsoft 365 Defender](https://security.microsoft.com/), select **Settings** \> **Device discovery** \> **Enterprise IoT**.
-For example:
+1. Toggle the Enterprise IoT security option to **On**. For example:
+ :::image type="content" source="media/enterprise-iot/eiot-toggle-on.png" alt-text="Screenshot of Enterprise IoT toggled on in Microsoft 365 Defender.":::
## View added security value in Microsoft 365 Defender
-This procedure describes how to view related alerts, recommendations, and vulnerabilities for a specific device in Microsoft 365 Defender. Alerts, recommendations, and vulnerabilities are shown for IoT devices only after you've added an Enterprise IoT plan.
+This procedure describes how to view related alerts, recommendations, and vulnerabilities for a specific device in Microsoft 365 Defender, when the **Enterprise IoT security** option is turned on.
**To view added security value**:
-1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Assets** \> **Devices** to open the **Device inventory** page.
+1. In [Microsoft 365 Defender](https://security.microsoft.com/), select **Assets** \> **Devices** to open the **Device inventory** page.
1. Select the **IoT devices** tab and select a specific device **IP** to drill down for more details. For example:

    :::image type="content" source="media/enterprise-iot/select-a-device.png" alt-text="Screenshot of the IoT devices tab in Microsoft 365 Defender." lightbox="media/enterprise-iot/select-a-device.png":::
-1. On the device details page, explore the following tabs to view data added by the Enterprise IoT plan for your device:
+1. On the device details page, explore the following tabs to view data added by enterprise IoT security for your device:
- On the **Alerts** tab, check for any alerts triggered by the device.
defender-for-iot Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-sensor.md
This article describes how to register an Enterprise IoT network sensor in Microsoft Defender for IoT.
-**If you're a Defender for Endpoint customer** with an Enterprise IoT plan for Defender for IoT, adding an Enterprise IoT network sensor extends your network visibility to IoT segments in your corporate network not otherwise covered by Microsoft Defender for Endpoint. For example, if you have a VLAN dedicated to VoIP devices with no other endpoints, Defender for Endpoint may not be able to discover devices on that VLAN.
+Microsoft 365 Defender customers with an Enterprise IoT network sensor can see all discovered devices in the **Device inventory** in either Microsoft 365 Defender or Defender for IoT. They also get extra security value from more alerts, vulnerabilities, and recommendations in Microsoft 365 Defender for the newly discovered devices.
-Customers that have set up an Enterprise IoT network sensor can see all discovered devices in the **Device inventory** in either Microsoft 365 Defender or Defender for IoT. You'll also get extra security value from more alerts, vulnerabilities, and recommendations in Microsoft 365 Defender for the newly discovered devices.
-
-**If you're a Defender for IoT customer** working solely in the Azure portal, an Enterprise IoT network sensor provides extra device visibility to Enterprise IoT devices, such as Voice over Internet Protocol (VoIP) devices, printers, and cameras, which may not be covered by your OT network sensors.
+If you're a Defender for IoT customer working solely in the Azure portal, an Enterprise IoT network sensor provides extra device visibility to Enterprise IoT devices, such as Voice over Internet Protocol (VoIP) devices, printers, and cameras, which might not be covered by your OT network sensors.
Defender for IoT [alerts](how-to-manage-cloud-alerts.md) and [recommendations](recommendations.md) for devices discovered by the Enterprise IoT sensor only are available only in the Azure portal.
This section describes the prerequisites required before deploying an Enterprise
### Azure requirements

-- To view Defender for IoT data in Microsoft 365 Defender, including devices, alerts, recommendations, and vulnerabilities, you must have an Enterprise IoT plan, [onboarded from Microsoft 365 Defender](eiot-defender-for-endpoint.md).
+- To view Defender for IoT data in Microsoft 365 Defender, including devices, alerts, recommendations, and vulnerabilities, you must have **Enterprise IoT security** turned on in [Microsoft 365 Defender](eiot-defender-for-endpoint.md).
- If you only want to view data in the Azure portal, an Enterprise IoT plan isn't required. You can also onboard your Enterprise IoT plan from Microsoft 365 Defender after registering your network sensor to bring [extra device visibility and security value](concept-enterprise.md#security-value-in-microsoft-365-defender) to your organization.
+ If you only want to view data in the Azure portal, you don't need Microsoft 365 Defender. You can also turn on **Enterprise IoT security** in Microsoft 365 Defender after registering your network sensor to bring [extra device visibility and security value](concept-enterprise.md#enterprise-iot-security-in-microsoft-365-defender) to your organization.
- Make sure you can access the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).

### Network requirements

-- Identify the devices and subnets you want to monitor so that you understand where to place an Enterprise IoT sensor in your network. You may want to deploy multiple Enterprise IoT sensors.
+- Identify the devices and subnets you want to monitor so that you understand where to place an Enterprise IoT sensor in your network. You might want to deploy multiple Enterprise IoT sensors.
- Configure traffic mirroring in your network so that the traffic you want to monitor is mirrored to your Enterprise IoT sensor. Supported traffic mirroring methods are the same as for OT monitoring. For more information, see [Choose a traffic mirroring method for traffic monitoring](best-practices/traffic-mirroring-methods.md).
This procedure describes how to prepare your physical appliance or VM to install
The system displays a list of all monitored interfaces.
- Identify the interfaces that you want to monitor, which are usually the interfaces with no IP address listed. Interfaces with incoming traffic will show an increasing number of RX packets.
+ Identify the interfaces that you want to monitor, which are usually the interfaces with no IP address listed. Interfaces with incoming traffic show an increasing number of RX packets.
1. For each interface you want to monitor, run the following command to enable *Promiscuous mode* in the network adapter:
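   The full article shows the exact command to run; a typical form on a Linux sensor host looks like the following sketch, where the interface name `eth0` is only an example:

   ```bash
   # Enable promiscuous mode on the monitoring interface (replace eth0 with your interface name)
   ip link set eth0 promisc on
   ```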
This procedure describes how to prepare your physical appliance or VM to install
## Register an Enterprise IoT sensor in Defender for IoT
-This section describes how to register an Enterprise IoT sensor in Defender for IoT. When you're done registering your sensor, you'll continue on with installing the Enterprise IoT monitoring software on your sensor machine.
+This section describes how to register an Enterprise IoT sensor in Defender for IoT. When you're done registering your sensor, continue by installing the Enterprise IoT monitoring software on your sensor machine.
**To register a sensor in the Azure portal**:
This section describes how to register an Enterprise IoT sensor in Defender for
:::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise IoT sensor.":::
-1. Copy the command to a safe location, where you'll be able to copy it to your physical appliance or VM in order to [install sensor software](#install-enterprise-iot-sensor-software).
+1. Copy the command to a safe location, from which you can copy it to your physical appliance or VM to [install sensor software](#install-enterprise-iot-sensor-software).
## Install Enterprise IoT sensor software
This procedure describes how to install Enterprise IoT monitoring software on [y
1. In the **Set up proxy server?** screen, select whether to set up a proxy server for your sensor. For example:
- :::image type="content" source="media/tutorial-get-started-eiot/proxy.png" alt-text="Screenshot of the Set up a proxy server? screen.":::
+ :::image type="content" source="media/tutorial-get-started-eiot/proxy.png" alt-text="Screenshot of the Set up a proxy server screen.":::
If you're setting up a proxy server, select **Yes**, and then define the proxy server host, port, username, and password, selecting **Ok** after each option.
In the **Sites and sensors** page, Enterprise IoT sensors are all automatically
Once you've validated your setup, the Defender for IoT **Device inventory** page will start to populate with new devices detected by your sensor after 15 minutes.
-If you're a Defender for Endpoint customer with an Enterprise IoT plan, you'll be able to view all detected devices in the **Device inventory** pages, in both Defender for IoT and Microsoft 365 Defender. Detected devices include both devices detected by Defender for Endpoint and devices detected by the Enterprise IoT sensor.
+If you're a Defender for Endpoint customer with a [legacy Enterprise IoT plan](whats-new.md#enterprise-iot-protection-now-included-in-microsoft-365-e5-and-e5-security-licenses), you're able to view all detected devices in the **Device inventory** pages, in both Defender for IoT and Microsoft 365 Defender. Detected devices include both devices detected by Defender for Endpoint and devices detected by the Enterprise IoT sensor.
For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md) and [Microsoft 365 Defender device discovery](/microsoft-365/security/defender-endpoint/machines-view-overview).
-If you're on a monthly commitment, you may want to edit the number of devices covered by your Enterprise IoT plan. For more information, see:
--- [Calculate monitored devices for Enterprise IoT monitoring](manage-subscriptions-enterprise.md#calculate-monitored-devices-for-enterprise-iot-monitoring)
-- [Defender for IoT subscription billing](billing.md)

## Delete an Enterprise IoT network sensor
For more information, see [Manage sensors with Defender for IoT in the Azure por
> [!TIP]
> You can also remove your sensor manually from the CLI. For more information, see [Extra steps and samples for Enterprise IoT deployment](extra-deploy-enterprise-iot.md#remove-an-enterprise-iot-network-sensor-optional).
-If you want to cancel your Enterprise IoT plan and stop the integration with Defender for Endpoint, do so from [Microsoft 365 Defender](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
-
-## Move existing sensors to a different subscription
-
-If you've registered an Enterprise IoT network sensor, you may need to apply it to a different subscription than the one you're currently using.
-
-**To apply an existing sensor to a different subscription**:
-
-1. Onboard a new plan to the new subscription
-1. Register the sensors under the new subscription
-1. Remove the sensors from the previous subscription
-
-Billing changes will take effect one hour after cancellation of the previous subscription, and will be reflected on the next month's bill. Devices will be synchronized from the sensor to the new subscription automatically.
-
-**To switch to a new subscription**:
-
-1. In Defender for Endpoint, onboard a new Enterprise IoT plan to the new subscription you want to use. For more information, see [Onboard a Defender for IoT plan](eiot-defender-for-endpoint.md#onboard-a-defender-for-iot-plan).
-
-1. In the Azure portal, register your Enterprise IoT sensor under the new subscription and run the activation command. For more information, see [Enhance IoT security monitoring with an Enterprise IoT network sensor (Public preview)](eiot-sensor.md).
-
-1. Delete the legacy sensor from the previous subscription. In Defender for IoT, go to the **Sites and sensors** page and locate the legacy sensor on the previous subscription.
-
-1. In the row for your sensor, from the options (**...**) menu, select **Delete** to delete the sensor from the previous subscription.
-
-1. If relevant, cancel the Defender for IoT plan from the previous subscription. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
+If you want to cancel enterprise IoT security with Microsoft 365 Defender, do so from the Microsoft 365 Defender portal. For more information, see [Turn off enterprise IoT security](manage-subscriptions-enterprise.md#turn-off-enterprise-iot-security).
## Next steps
defender-for-iot Faqs Eiot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-eiot.md
- Title: FAQs for Enterprise IoT networks - Microsoft Defender for IoT
-description: Find answers to the most frequently asked questions about Microsoft Defender for IoT Enterprise IoT networks.
- Previously updated : 06/05/2023---
-# Enterprise IoT network security frequently asked questions
-
-This article provides a list of frequently asked questions about securing Enterprise IoT networks with Microsoft Defender for IoT.
-
-## What is the difference between OT and Enterprise IoT?
-
-### Operational Technology (OT)
-
-OT network sensors use agentless, patented technology to discover, learn, and continuously monitor network devices for a deep visibility into Operational Technology (OT) / Industrial Control System (ICS) risks. Sensors carry out data collection, analysis, and alerting on-site, making them ideal for locations with low bandwidth or high latency.
-
-### Enterprise IoT
-
-Enterprise IoT provides visibility and security for IoT devices in the corporate environment.
-
-Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment may include printers, cameras, and purpose-built, proprietary, devices.
-
-## What additional security value can Enterprise IoT provide Microsoft Defender for Endpoint customers?
-
-Enterprise IoT is designed to help customers secure un-managed devices throughout the organization and extend IT security to also cover IoT devices. The solution leverages multiple means in order to ensure optimal coverage.
--- **In the Microsoft Defender for Endpoint portal**: This is the GA offering for Enterprise IoT. Microsoft 365 P2 customers already have visibility for discovered IoT devices in the **Device inventory** page in Defender for Endpoint. Customers can onboard an Enterprise IoT plan in the same portal and gain security value by viewing alerts, recommendations and vulnerabilities for their discovered IoT devices.-
- For more information, see [Onboard with Microsoft Defender for IoT](eiot-defender-for-endpoint.md).
--- **In the Azure portal**: Defender for IoT customers can view their discovered IoT devices in the **Device inventory** page in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal.-
- For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md).
-
-## How can I start using Enterprise IoT?
-
-To get started, Microsoft 365 P2 customers need to [add a Defender for IoT plan with Enterprise IoT](eiot-defender-for-endpoint.md) to an Azure subscription from the Microsoft Defender for Endpoint portal.
-
-If you're a Defender for Endpoint customer, when adding your Defender for IoT plan, take care to exclude any devices already [managed by Defender for Endpoint](/microsoft-365/security/defender-endpoint/device-discovery) from your count of devices you want to monitor.
-
-## What permissions do I need to add a Defender for IoT plan? Can I use any Azure subscription?
-
-For information on required permissions, see [Prerequisites](eiot-defender-for-endpoint.md#prerequisites).
-
-## Which devices are billable?
-
-For more information about billable devices, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot).
-
-## How should I estimate the number of devices I want to monitor?
-
-In the **Device inventory** in Defender for Endpoint:
-
-Add the total number of discovered network devices with the total number of discovered IoT devices. Round that up to a multiple of 100, and that is the number of devices to enter.
-
-For more information, see [Devices monitored by Defender for IoT](architecture.md#devices-monitored-by-defender-for-iot).
-
-## How does the integration between Microsoft Defender for Endpoint and Microsoft Defender for IoT work?
-
-Once you've [added a Defender for IoT plan with Enterprise IoT to an Azure subscription in Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#onboard-a-defender-for-iot-plan), integration between the two products takes place seamlessly.
-
-Discovered IoT devices can be viewed in both Defender for IoT and Defender for Endpoint. For more information, see [Defender for IoT integration](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
-
-## Can I change the subscription I'm using for Defender for IoT?
-
-To change the subscription you're using for your Defender for IoT plan, you'll need to cancel your plan on the existing subscription, and then onboard a new plan to a new subscription. Your existing data won't be migrated to the new subscription. For more information, see [Manage Defender for IoT plans for Enterprise IoT security monitoring](manage-subscriptions-enterprise.md).
-
-## How can I edit my plan in Defender for Endpoint?
-
-To make any changes to an existing plan, you'll need to cancel your existing plan and onboard a new plan with the new details. Changes might include moving billing charges from one subscription to another, changing the number of devices you want to cover, or changing the plan commitment from a trial to a monthly commitment.
-
-## How can I cancel Enterprise IoT?
-
-To remove only Enterprise IoT from your plan, cancel your plan from Microsoft Defender for Endpoint. For more information, see [Cancel your Defender for IoT plan](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration#cancel-your-defender-for-iot-plan).
-
-To cancel the plan and remove all Defender for IoT services from the associated subscription, cancel the plan in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
-
-## What happens when the 30-day trial ends?
-
-If you haven't changed your plan from a trial to a monthly commitment by the time your trial ends, your plan is automatically canceled, and you'll lose access to Defender for IoT security features.
-
-To change your plan from a trial to a monthly commitment before the end of the trial, you'll need to cancel your trial plan and onboard a new plan in Defender for Endpoint. For more information, see [Defender for IoT integration](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
-
-## How can I resolve billing issues associated with my Defender for IoT plan?
-
-For any billing or technical issues, create a support request in the Azure portal.
-
-## Next steps
-
-For more information on getting started with Enterprise IoT, see:
--- [Securing IoT devices in the enterprise](concept-enterprise.md)-- [Enable Enterprise IoT security in Defender for Endpoint](eiot-defender-for-endpoint.md)-- [Manage Defender for IoT plans for Enterprise IoT security monitoring](manage-subscriptions-enterprise.md)
defender-for-iot Faqs General https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/faqs-general.md
To learn more about how to get started with Defender for IoT, see the following
- Read the Defender for IoT [overview](overview.md) - [Get started with Defender for IoT](getting-started.md) - [OT Networks frequently asked questions](faqs-ot.md)-- [Enterprise IoT networks frequently asked questions](faqs-eiot.md)
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
Microsoft Defender for IoT alerts enhance your network security and operations w
- [Integrate with Microsoft Sentinel](iot-solution.md) to view Defender for IoT alerts in Microsoft Sentinel and manage them together with security incidents. -- If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) with Microsoft Defender for Endpoint, alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Defender for Endpoint only.
+- If you have **Enterprise IoT security** [turned on in Microsoft 365 Defender](eiot-defender-for-endpoint.md), alerts for Enterprise IoT devices detected by Microsoft Defender for Endpoint are available in Defender for Endpoint only.
For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md) and the [Alerts queue in Microsoft 365 Defender](/microsoft-365/security/defender-endpoint/alerts-queue-endpoint-detection-response).
Microsoft Defender for IoT alerts enhance your network security and operations w
## Prerequisites -- **To have alerts in Defender for IoT**, you must have an [OT](onboard-sensors.md) or [Enterprise IoT sensor](eiot-sensor.md) on-boarded, and network data streaming into Defender for IoT.
+- **To have alerts in Defender for IoT**, you must have an [OT sensor](onboard-sensors.md) onboarded, and network data streaming into Defender for IoT.
- **To view alerts on the Azure portal**, you must have access as a [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner)
For more information, see [Azure user roles and permissions for Defender for IoT
| **Destination device** | The destination IP or MAC address, or the destination device name. |
| **First detection** | The first time the alert was detected in the network. |
| **ID** | The unique alert ID. |
- | **Last activity** | The last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert de-duplication |
+ | **Last activity** | The last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert deduplication |
| **Protocol** | The protocol detected in the network traffic for the alert. |
| **Sensor** | The sensor that detected the alert. |
| **Zone** | The zone assigned to the sensor that detected the alert. |
For example, filter alerts by **Category**:
Use the **Group by** menu at the top-right to collapse the grid into subsections according to specific parameters.
-For example, while the total number of alerts appears above the grid, you may want more specific information about alert count breakdown, such as the number of alerts with a specific severity, protocol, or site.
+For example, while the total number of alerts appears above the grid, you might want more specific information about alert count breakdown, such as the number of alerts with a specific severity, protocol, or site.
Supported grouping options include *Engine*, *Name*, *Sensor*, *Severity*, and *Site*.
Downloading the PCAP file can take several minutes, depending on the quality of
## Export alerts to a CSV file
-You may want to export a selection of alerts to a CSV file for offline sharing and reporting.
+You might want to export a selection of alerts to a CSV file for offline sharing and reporting.
1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, select the **Alerts** page on the left.
defender-for-iot How To Manage Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-subscriptions.md
Your Microsoft Defender for IoT deployment for OT monitoring is managed through a site-based license, purchased in the Microsoft 365 admin center. After you've purchased your license, apply that license to your OT plan in the Azure portal.
-If you're looking to manage Enterprise IoT plans, see [Manage Defender for IoT plans for Enterprise IoT security monitoring](manage-subscriptions-enterprise.md).
+If you're looking to manage support for enterprise IoT security, see [Manage enterprise IoT monitoring support with Microsoft Defender for IoT](manage-subscriptions-enterprise.md).
-This article is relevant for commercial Defender for IoT customers. If you're a government cusetomer, contact your Microsoft sales representative for more information.
+This article is relevant for commercial Defender for IoT customers. If you're a government customer, contact your Microsoft sales representative for more information.
## Prerequisites
Before performing the procedures in this article, make sure that you have:
- An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/). -- A [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user role for the Azure subscription that you'll be using for the integration
+- A [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user role for the Azure subscription that you're using for the integration
- An understanding of your site size. For more information, see [Calculate devices in your network](best-practices/plan-prepare-deploy.md#calculate-devices-in-your-network).
This procedure describes how to add an OT plan for Defender for IoT in the Azure
1. In [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started), select **Plans and pricing** > **Add plan**.
-1. In the **Plan settings** pane, select the Azure subscription where you want to add a plan. You can only add a single subscription, and you'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the selected subscription.
+1. In the **Plan settings** pane, select the Azure subscription where you want to add a plan. You can only add a single subscription, and you need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the selected subscription.
> [!NOTE] > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner. Also make sure that you have the right subscriptions selected in your Azure settings > **Directories + subscriptions** page.
This procedure describes how to add an OT plan for Defender for IoT in the Azure
- Select the terms and conditions. - If you're working with an on-premises management console, select **Download OT activation file (Optional)**.
- When you're finished, select **Save**. If you've selected to download the on-premises management console activation file, the file is downloaded and you're prompted to save it locally. You'll use it later, when [activating your on-premises management console](ot-deploy/activate-deploy-management.md#activate-the-on-premises-management-console).
+ When you're finished, select **Save**. If you've selected to download the on-premises management console activation file, the file is downloaded and you're prompted to save it locally. You use it later, when [activating your on-premises management console](ot-deploy/activate-deploy-management.md#activate-the-on-premises-management-console).
Your new plan is listed under the relevant subscription on the **Plans and pricing** > **Plans** page.
-## Cancel a Defender for IoT plan
+## Cancel a Defender for IoT plan for OT networks
-You may need to cancel a Defender for IoT plan from your Azure subscription, for example, if you need to work with a different subscription, or if you no longer need the service.
-
-> [!IMPORTANT]
-> Canceling a plan removes all Defender for IoT services from the subscription, including both OT and Enterprise IoT services. If you have an Enterprise IoT plan on your subscription, do this with care.
->
-> To cancel only an Enterprise IoT plan, do so from Microsoft 365. For more information, see [Cancel your Enterprise IoT plan](manage-subscriptions-enterprise.md#cancel-your-enterprise-iot-plan).
->
+You might need to cancel a Defender for IoT plan from your Azure subscription, for example, if you need to work with a different subscription, or if you no longer need the service.
**Prerequisites**: Before canceling your plan, make sure to delete any sensors that are associated with the subscription. For more information, see [Sensor management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal).
-**To cancel a Defender for IoT plan for OT networks**:
+**To cancel an OT network plan**:
1. In the Azure portal, go to **Defender for IoT** > **Plans and pricing**.
Existing customers can continue to use any legacy OT plan, with no changes in fu
### Warnings for exceeding committed devices
-If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, you may see a warning message in the Azure portal and on your OT sensor that you have exceeded the maximum number of devices for your subscription.
+If the number of actual devices detected by Defender for IoT exceeds the number of committed devices currently listed on your subscription, you might see a warning message in the Azure portal and on your OT sensor that you have exceeded the maximum number of devices for your subscription.
-This warning indicates you need to update the number of committed devices on the relevant subscription to the actual number of devices being monitored. Click the link in the warning message to take you to the **Plans and pricing** page, with the **Edit plan** pane already open.
+This warning indicates you need to update the number of committed devices on the relevant subscription to the actual number of devices being monitored. Select the link in the warning message to go to the **Plans and pricing** page, with the **Edit plan** pane already open.
### Move existing sensors to a different subscription
-If you have multiple legacy subscriptions and are migrating to a Microsoft 365 plan, you'll first need to consolidate your sensors to a single subscription. To do this, you'll need to register the sensors under the new subscription and remove them from the original subscription.
+If you have multiple legacy subscriptions and are migrating to a Microsoft 365 plan, you'll first need to consolidate your sensors to a single subscription. To do this, you need to register the sensors under the new subscription and remove them from the original subscription.
- Devices are synchronized from the sensor to the new subscription automatically.
If you have multiple legacy subscriptions and are migrating to a Microsoft 365 p
- Replicate site and sensor hierarchy as is.
- - For sensors monitoring overlapping network segments, create the activation file under the same zone. Identical devices that are detected in more than one sensor in a zone, will be merged into one device.
+ - For sensors monitoring overlapping network segments, create the activation file under the same zone. Identical devices that are detected in more than one sensor in a zone are merged into one device.
1. On your sensor, upload the new activation file. 1. Delete the sensor identities from the previous subscription. For more information, see [Site management options from the Azure portal](how-to-manage-sensors-on-the-cloud.md#site-management-options-from-the-azure-portal).
-1. If relevant, cancel the Defender for IoT plan from the previous subscription. For more information, see [Cancel a Defender for IoT plan](#cancel-a-defender-for-iot-plan).
+1. If relevant, cancel the Defender for IoT plan from the previous subscription. For more information, see [Cancel a Defender for IoT plan for OT networks](#cancel-a-defender-for-iot-plan-for-ot-networks).
### Edit a legacy plan on the Azure portal
If you have multiple legacy subscriptions and are migrating to a Microsoft 365 p
1. If you have an on-premises management console, make sure to upload a new activation file, which reflects the changes made. For more information, see [Upload a new activation file](how-to-manage-the-on-premises-management-console.md#upload-a-new-activation-file).
-Changes to your plan will take effect one hour after confirming the change. This change will appear on your next monthly statement, and you'll be charged based on the length of time each plan was in effect.
+Changes to your plan will take effect one hour after confirming the change. This change appears on your next monthly statement, and you're charged based on the length of time each plan was in effect.
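As a rough illustration of that last point, here's a minimal Python sketch of time-based proration. It assumes a simple days-active over days-in-month model and a hypothetical `monthly_price`; the article doesn't state the exact billing formula, so treat this only as a sketch of the idea, not the actual calculation.

```python
from datetime import date

def prorated_charge(monthly_price: float, start: date, end: date,
                    days_in_month: int = 30) -> float:
    """Charge for a plan in effect from `start` to `end`, prorated by the
    number of days it was active (assumed model, not the billed formula)."""
    days_active = (end - start).days
    return monthly_price * days_active / days_in_month

# Example: a plan canceled 12 days into a billing month.
print(prorated_charge(1000.0, date(2023, 11, 1), date(2023, 11, 13)))  # 400.0
```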
## Next steps For more information, see:
defender-for-iot Manage Subscriptions Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md
Title: Manage Enterprise IoT plans on Azure subscriptions
-description: Manage Defender for IoT plans for Enterprise IoT monitoring on your Azure subscriptions.
Previously updated : 05/17/2023
+ Title: Manage EIoT monitoring support | Microsoft Defender for IoT
+description: Learn how to manage your EIoT monitoring support with Microsoft Defender for IoT.
Last updated : 09/13/2023
+#CustomerIntent: As a Defender for IoT customer, I want to understand how to manage my EIoT monitoring support with Microsoft Defender for IoT so that I can best plan my deployment.
-# Manage Defender for IoT plans for Enterprise IoT security monitoring
+# Manage enterprise IoT monitoring support with Microsoft Defender for IoT
-Enterprise IoT security monitoring with Defender for IoT is managed by an Enterprise IoT plan on your Azure subscription. While you can view your plan in Microsoft Defender for IoT, onboarding and canceling a plan is done with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/) in Microsoft 365 Defender.
+Enterprise IoT security monitoring with Defender for IoT is supported by a Microsoft 365 E5 (ME5) or E5 Security license, or extra standalone, per-device licenses purchased as add-ons to Microsoft Defender for Endpoint.
-For each monthly price plan, you'll be asked to define an approximate number of [devices](billing.md#defender-for-iot-devices) that you want to monitor and cover by your plan.
+This article describes how to:
+
+- Calculate the devices detected in your environment so that you can understand whether you need extra, standalone licenses.
+- Cancel support for enterprise IoT monitoring with Microsoft Defender for IoT.
If you're looking to manage OT plans, see [Manage Defender for IoT plans for OT security monitoring](how-to-manage-subscriptions.md).
If you're looking to manage OT plans, see [Manage Defender for IoT plans for OT
Before performing the procedures in this article, make sure that you have: -- A Microsoft Defender for Endpoint P2 license
+- One of the following sets of licenses:
+
+ - A Microsoft 365 E5 (ME5) or E5 Security license and a Microsoft Defender for Endpoint P2 license
+ - A Microsoft Defender for Endpoint P2 license alone
+
+ For more information, see [Enterprise IoT security in Microsoft 365 Defender](concept-enterprise.md#enterprise-iot-security-in-microsoft-365-defender).
+
+- Access to the Microsoft 365 Defender portal as a [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator)
+
+## Obtain a standalone, Enterprise IoT trial license
+
+This procedure describes how to start using a standalone trial license for enterprise IoT monitoring, for customers who have a Microsoft Defender for Endpoint P2 license only.
+
+Customers with ME5/E5 Security plans have support for enterprise IoT monitoring turned on by default, and don't need to start a trial. For more information, see [Get started with enterprise IoT monitoring in Microsoft 365 Defender](eiot-defender-for-endpoint.md).
-- An Azure subscription. If you need to, [sign up for a free account](https://azure.microsoft.com/free/).
+Start your enterprise IoT trial using the [Microsoft Defender for IoT - EIoT Device License - add-on wizard](https://signup.microsoft.com/get-started/signup?products=b2f91841-252f-4765-94c3-75802d7c0ddb&ali=1&bac=1) or via the Microsoft 365 admin center.
-- The following user roles:
- - **In Microsoft Entra ID**: [Global administrator](../../active-directory/roles/permissions-reference.md#global-administrator) for your Microsoft 365 tenant
+**To start an Enterprise IoT trial**:
- - **In Azure RBAC**: [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) for the Azure subscription that you'll be using for the integration
+1. Go to the [Microsoft 365 admin center](https://portal.office.com/AdminPortal/Home#/catalog) > **Marketplace**.
-### Calculate monitored devices for Enterprise IoT monitoring
+1. Search for the **Microsoft Defender for IoT - EIoT Device License - add-on** and filter the results by **Other services**. For example:
+
+ :::image type="content" source="media/enterprise-iot/eiot-standalone.png" alt-text="Screenshot of the Marketplace search results for the EIoT Device License.":::
+
+ > [!IMPORTANT]
+ > The prices shown in this image are for example purposes only and are not intended to reflect actual prices.
+ >
+
+1. Under **Microsoft Defender for IoT - EIoT Device License - add-on**, select **Details**.
+
+1. On the **Microsoft Defender for IoT - EIoT Device License - add-on** page, select **Start free trial**. On the **Check out** page, select **Try now**.
+
+> [!TIP]
+> Make sure to [assign your licenses to specific users](/microsoft-365/admin/manage/assign-licenses-to-users) to start using them.
+>
-If you're working with a monthly commitment, you'll need to periodically update the number of devices covered by your plan as your network grows.
+For more information, see [Free trial](billing.md#free-trial).
+
+## Calculate monitored devices for Enterprise IoT monitoring
+
+Use the following procedure to calculate how many devices you need to monitor if:
+
+- You're an ME5/E5 Security customer and think you need to monitor more devices than the number allocated with your ME5/E5 Security licenses
+- You're a Defender for Endpoint P2 customer who's purchasing standalone enterprise IoT licenses
**To calculate the number of devices you're monitoring**:
-1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Assets** \> **Devices** to open the **Device inventory** page.
+1. In [Microsoft 365 Defender](https://security.microsoft.com/), select **Assets** \> **Devices** to open the **Device inventory** page.
1. Add the total number of devices listed on both the **Network devices** and **IoT devices** tabs.
If you're working with a monthly commitment, you'll need to periodically update
:::image type="content" source="media/how-to-manage-subscriptions/eiot-calculate-devices.png" alt-text="Screenshot of network device and IoT devices in the device inventory in Microsoft Defender for Endpoint." lightbox="media/how-to-manage-subscriptions/eiot-calculate-devices.png":::
-1. Round up your total to a multiple of 100.
+1. Round up your total to a multiple of 100 and compare it against the number of licenses you have.
For example:

- In the Microsoft 365 Defender **Device inventory**, you have *473* network devices and *1206* IoT devices.
- Added together, the total is *1679* devices.
-- Rounded up to a multiple of 100 is **1700**.
+- You have 320 ME5 licenses, which cover **1600** devices.
-Use **1700** as the estimated number of devices in your plan
+You need **79** standalone licenses to cover the gap.
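The arithmetic behind this example can be sketched in a few lines of Python. This assumes the five-devices-per-ME5/E5-Security-license allocation described earlier, treats standalone licenses as covering one device each, and uses the illustrative inventory counts from this article rather than real data.

```python
import math

# Illustrative counts from the Device inventory example above.
network_devices = 473
iot_devices = 1206
total_devices = network_devices + iot_devices        # 1679

# Each ME5/E5 Security license covers up to five devices.
me5_licenses = 320
covered_by_me5 = me5_licenses * 5                    # 1600

# Standalone licenses are per-device, so the gap is the uncovered remainder.
standalone_needed = max(0, total_devices - covered_by_me5)   # 79

# Rounding up to a multiple of 100 gives a quick planning estimate.
rounded_total = math.ceil(total_devices / 100) * 100         # 1700

print(standalone_needed)  # 79
```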
For more information, see the [Defender for Endpoint Device discovery overview](/microsoft-365/security/defender-endpoint/device-discovery). > [!NOTE] > Devices listed on the **Computers & Mobile** tab, including those managed by Defender for Endpoint or otherwise, are not included in the number of [devices](billing.md#defender-for-iot-devices) monitored by Defender for IoT.
-## Onboard an Enterprise IoT plan
-
-This procedure describes how to add an Enterprise IoT plan to your Azure subscription from Microsoft 365 Defender.
-
-**To add an Enterprise IoT plan**:
-
-1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**.
+## Purchase standalone licenses
-1. Select the following options for your plan:
+Purchase standalone, per-device licenses if you're an ME5/E5 Security customer who needs more than the five devices allocated per license, or if you're a Defender for Endpoint customer who wants to add enterprise IoT security to your organization.
- - **Select an Azure subscription**: Select the Azure subscription that you want to use for the integration. You'll need a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) role for the subscription.
+**To purchase standalone licenses**:
- > [!TIP]
- > If your subscription isn't listed, check your account details and confirm your permissions with the subscription owner.
+1. Go to the [Microsoft 365 admin center](https://portal.office.com/AdminPortal/Home#/catalog) > **Billing > Purchase services**. If you don't have this option, select **Marketplace** instead.
- - **Price plan**: Select a trial or monthly commitment.
+1. Search for the **Microsoft Defender for IoT - EIoT Device License - add-on** and filter the results by **Other services**. For example:
- Microsoft Defender for IoT provides a [30-day free trial](billing.md#free-trial) for evaluation purposes, with an unlimited number of devices.
+ :::image type="content" source="media/enterprise-iot/eiot-standalone.png" alt-text="Screenshot of the Marketplace search results for the EIoT Device License.":::
- Monthly commitments require that you enter the number of [devices](#calculate-monitored-devices-for-enterprise-iot-monitoring) that you'd calculated earlier.
+ > [!IMPORTANT]
+ > The prices shown in this image are for example purposes only and are not intended to reflect actual prices.
+ >
-1. Select the **I accept the terms and conditions** option and then select **Save**.
+1. On the **Microsoft Defender for IoT - EIoT Device License - add-on** page, enter your selected license quantity, select a billing frequency, and then select **Buy**.
- For example:
-
- :::image type="content" source="media/enterprise-iot/defender-for-endpoint-onboard.png" alt-text="Screenshot of the Enterprise IoT tab in Defender for Endpoint." lightbox="media/enterprise-iot/defender-for-endpoint-onboard.png":::
+For more information, see the [Microsoft 365 admin center help](/microsoft-365/admin/).
-After you've onboarded your plan, you'll see it listed in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) in the Azure portal. Go to the Defender for IoT **Plans and pricing** page and find your subscription with the new **Enterprise IoT** plan listed. For example:
+## Turn off enterprise IoT security
+This procedure describes how to turn off enterprise IoT monitoring in Microsoft 365 Defender, and is supported only for customers who don't have any standalone, per-device licenses added on to Microsoft 365 Defender.
-## Edit your Enterprise IoT plan
+Turn off the **Enterprise IoT security** option if you're no longer using the service.
-To edit your plan, such as to edit your commitment level or the number of devices covered by your plan, first [cancel the plan](#cancel-your-enterprise-iot-plan) and then [onboard a new plan](#onboard-an-enterprise-iot-plan).
+**To turn off enterprise IoT monitoring**:
-## Cancel your Enterprise IoT plan
+1. In [Microsoft 365 Defender](https://security.microsoft.com/), select **Settings** \> **Device discovery** \> **Enterprise IoT**.
-You'll need to cancel your plan if you want to edit the details of your plan, such as the price plan or the number of devices covered by your plan, or if you no longer need the service.
+1. Toggle the option to **Off**.
-You'd also need to cancel your plan and onboard again if you need to work with a new payment entity or Azure subscription.
+You stop getting security value in Microsoft 365 Defender, including purpose-built alerts, vulnerabilities, and recommendations.
-**To cancel your Enterprise IoT plan**:
+### Cancel a legacy Enterprise IoT plan
-1. In the navigation pane of the [https://security.microsoft.com](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**.
+If you have a legacy Enterprise IoT plan, are *not* an ME5/E5 Security customer, and no longer want to use the service, cancel your plan as follows:
-1. Select **Cancel plan**. For example:
+1. In the [Microsoft 365 Defender](https://security.microsoft.com/) portal, select **Settings** \> **Device discovery** \> **Enterprise IoT**.
- :::image type="content" source="media/enterprise-iot/defender-for-endpoint-cancel-plan.png" alt-text="Screenshot of the Cancel plan option on the Microsoft 365 Defender page.":::
+1. Select **Cancel plan**. This option is available only for legacy Enterprise IoT plan customers.
After you cancel your plan, the integration stops and you'll no longer get added security value in Microsoft 365 Defender, or detect new Enterprise IoT devices in Defender for IoT.
-The cancellation takes effect one hour after confirming the change. This change will appear on your next monthly statement, and you will be charged based on the length of time the plan was in effect.
-
-If you're canceling your plan as part of an [editing procedure](#edit-your-enterprise-iot-plan), make sure to [onboard a new plan](#onboard-an-enterprise-iot-plan) back with the new details.
+The cancellation takes effect one hour after confirming the change. This change appears on your next monthly statement, and you're charged based on the length of time the plan was in effect.
> [!IMPORTANT] >
If you're canceling your plan as part of an [editing procedure](#edit-your-enter
For more information, see:
+- [Securing IoT devices in the enterprise](concept-enterprise.md)
- [Defender for IoT subscription billing](billing.md)- - [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)- - [Create an additional Azure subscription](../../cost-management-billing/manage/create-subscription.md)- - [Upgrade your Azure subscription](../../cost-management-billing/manage/upgrade-azure-subscription.md)--
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
> [!NOTE] > OT monitoring with Microsoft Defender for IoT is now available for purchase with site-based licenses, purchased in the Microsoft 365 admin center.
-The Internet of Things (IoT) supports billions of connected devices that use both operational technology (OT) and IoT networks. IoT/OT devices and networks are often built using specialized protocols, and may prioritize operational challenges over security.
+The Internet of Things (IoT) supports billions of connected devices that use both operational technology (OT) and IoT networks. IoT/OT devices and networks are often built using specialized protocols, and might prioritize operational challenges over security.
When IoT/OT devices can't be protected by traditional security monitoring systems, each new wave of innovation increases the risk and possible attack surfaces across those IoT devices and OT networks.
-Microsoft Defender for IoT is a unified security solution built specifically to identify IoT and OT devices, vulnerabilities, and threats. Use Defender for IoT to secure your entire IoT/OT environment, including existing devices that may not have built-in security agents.
+Microsoft Defender for IoT is a unified security solution built specifically to identify IoT and OT devices, vulnerabilities, and threats. Use Defender for IoT to secure your entire IoT/OT environment, including existing devices that might not have built-in security agents.
Defender for IoT provides agentless, network layer monitoring, and integrates with both industrial equipment and security operation center (SOC) tools.
Defender for IoT provides agentless, network layer monitoring, and integrates wi
## Agentless device monitoring
-If your IoT and OT devices don't have embedded security agents, they may remain unpatched, misconfigured, and invisible to IT and security teams. Un-monitored devices can be soft targets for threat actors looking to pivot deeper into corporate networks.
+If your IoT and OT devices don't have embedded security agents, they might remain unpatched, misconfigured, and invisible to IT and security teams. Unmonitored devices can be soft targets for threat actors looking to pivot deeper into corporate networks.
Defender for IoT uses agentless monitoring to provide visibility and security across your network, and identifies specialized protocols, devices, or machine-to-machine (M2M) behaviors.
Defender for IoT uses agentless monitoring to provide visibility and security ac
- Run searches in historical traffic across all relevant dimensions and protocols. Access full-fidelity PCAPs to drill down further.
- - Detect advanced threats that you may have missed by static indicators of compromise (IOCs), such as zero-day malware, fileless malware, and living-off-the-land tactics.
+ - Detect advanced threats that you might have missed by static indicators of compromise (IOCs), such as zero-day malware, fileless malware, and living-off-the-land tactics.
- **Respond to threats** by integrating with Microsoft services such as Microsoft Sentinel, other partner systems, and APIs. Integrate with security information and event management (SIEM) services, security operations and response (SOAR) services, extended detection and response (XDR) services, and more.
Install OT network sensors on-premises, at strategic locations in your network t
- **Hybrid services**:
- You may have hybrid network requirements where you can deliver some data to the cloud and other data must remain on-premises.
+ You might have hybrid network requirements where you can deliver some data to the cloud and other data must remain on-premises.
In this case, set up your system in a flexible and scalable configuration to fit your needs. Connect some of your OT sensors to the cloud and view data on the Azure portal, and keep other sensors managed on-premises only.
For more information, see [System architecture for OT system monitoring](archite
## Extend support to proprietary OT protocols
-IoT and industrial control system (ICS) devices can be secured using both embedded protocols and proprietary, custom, or non-standard protocols. If you have devices that run on protocols that aren't supported by Defender for IoT out-of-the-box, use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins to decode network traffic for your protocols.
+IoT and industrial control system (ICS) devices can be secured using both embedded protocols and proprietary, custom, or nonstandard protocols. If you have devices that run on protocols that aren't supported by Defender for IoT out-of-the-box, use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins to decode network traffic for your protocols.
Create custom alerts for your plugin to pinpoint specific network activity and effectively update your security, IT, and operational teams. For example, have alerts triggered when:
For more information, see [Manage proprietary protocols with Horizon plugins](re
## Protect enterprise IoT networks
-Extend Defender for IoT's agentless security features beyond OT environments to enterprise IoT devices. Add an Enterprise IoT plan in Microsoft Defender for Endpoint for added alerts, vulnerabilities, and recommendations for IoT devices in Defender for Endpoint. An Enterprise IoT plan also provides a shared device inventory across the Azure portal and Microsoft 365 Defender.
+Extend Defender for IoT's agentless security features beyond OT environments to enterprise IoT devices by using enterprise IoT security with Microsoft Defender for Endpoint, and view related alerts, vulnerabilities, and recommendations for IoT devices in Microsoft 365 Defender.
Enterprise IoT devices can include devices such as printers, smart TVs, and conferencing systems, as well as purpose-built, proprietary devices.
defender-for-iot Roles Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md
Title: Azure user roles and permissions for Microsoft Defender for IoT description: Learn about the Azure user roles and permissions available for OT and Enterprise IoT monitoring with Microsoft Defender for IoT on the Azure portal. Previously updated : 09/19/2022 Last updated : 10/22/2023
# Azure user roles and permissions for Defender for IoT
-Microsoft Defender for IoT uses [Azure Role-Based Access Control (RBAC)](../../role-based-access-control/index.yml) to provide access to Enterprise IoT monitoring services and data on the Azure portal.
+Microsoft Defender for IoT uses [Azure Role-Based Access Control (RBAC)](../../role-based-access-control/index.yml) to provide access to Defender for IoT monitoring services and data on the Azure portal.
The built-in Azure [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader), [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), and [Owner](../../role-based-access-control/built-in-roles.md#owner) roles are relevant for use in Defender for IoT.
Permissions are applied to user roles across an entire Azure subscription, or in
| Action and scope | [Security Reader](../../role-based-access-control/built-in-roles.md#security-reader) | [Security Admin](../../role-based-access-control/built-in-roles.md#security-admin) | [Contributor](../../role-based-access-control/built-in-roles.md#contributor) | [Owner](../../role-based-access-control/built-in-roles.md#owner) |
|--|--|--|--|--|
| **[Grant permissions to others](manage-users-portal.md)**<br>Apply per subscription or site | - | - | - | ✔ |
-| **Onboard [OT](onboard-sensors.md) or [Enterprise IoT sensors](eiot-sensor.md)** [*](#enterprise-iot-security) <br>Apply per subscription only | - | ✔ | ✔ | ✔ |
+| **Onboard [OT](onboard-sensors.md) or [Enterprise IoT sensors](eiot-sensor.md)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ |
| **[Download OT sensor and on-premises management console software](update-ot-software.md#download-the-update-package-from-the-azure-portal)**<br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ |
| **[Download sensor endpoint details](how-to-manage-sensors-on-the-cloud.md#endpoint)** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ |
| **[Download sensor activation files](how-to-manage-sensors-on-the-cloud.md#reactivate-an-ot-sensor)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ |
-| **[View values on the Plans and pricing page](how-to-manage-subscriptions.md)** [*](#enterprise-iot-security) <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ |
-| **[Modify values on the Plans and pricing page](how-to-manage-subscriptions.md)** [*](#enterprise-iot-security) <br>Apply per subscription only | - | ✔ | ✔ | ✔ |
-| **[View values on the Sites and sensors page](how-to-manage-sensors-on-the-cloud.md)** [*](#enterprise-iot-security)<br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ |
-| **[Modify values on the Sites and sensors page](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal)** [*](#enterprise-iot-security), including remote OT sensor updates<br>Apply per subscription only | - | ✔ | ✔ | ✔ |
+| **[View values on the Plans and pricing page](how-to-manage-subscriptions.md)** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ |
+| **[Modify values on the Plans and pricing page](how-to-manage-subscriptions.md)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ |
+| **[View values on the Sites and sensors page](how-to-manage-sensors-on-the-cloud.md)** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ |
+| **[Modify values on the Sites and sensors page](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal)**, including remote OT sensor updates<br>Apply per subscription only | - | ✔ | ✔ | ✔ |
| **[Recover on-premises management console passwords](how-to-manage-sensors-on-the-cloud.md#sensor-management-options-from-the-azure-portal)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ |
| **[Download OT threat intelligence packages](how-to-work-with-threat-intelligence-packages.md#manually-update-locally-managed-sensors)** <br>Apply per subscription only | ✔ | ✔ | ✔ | ✔ |
| **[Push OT threat intelligence updates](how-to-work-with-threat-intelligence-packages.md#manually-push-updates-to-cloud-connected-sensors)** <br>Apply per subscription only | - | ✔ | ✔ | ✔ |
-| **[Onboard an Enterprise IoT plan from Microsoft 365 Defender](eiot-defender-for-endpoint.md)** [*](#enterprise-iot-security)<br>Apply per subscription only | - | ✔ | - | - |
| **[View Azure alerts](how-to-manage-cloud-alerts.md)** <br>Apply per subscription or site | ✔ | ✔ | ✔ | ✔ |
| **[Modify Azure alerts](how-to-manage-cloud-alerts.md) (write access - change status, learn, download PCAP)** <br>Apply per subscription or site | - | ✔ | ✔ | ✔ |
| **[View Azure device inventory](how-to-manage-device-inventory-for-organizations.md)** <br>Apply per subscription or site | ✔ | ✔ | ✔ | ✔ |
Permissions are applied to user roles across an entire Azure subscription, or in
| **[View Defender for IoT settings](configure-sensor-settings-portal.md)** <br>Apply per subscription | ✔ | ✔ | ✔ | ✔ |
| **[Configure Defender for IoT settings](configure-sensor-settings-portal.md)** <br>Apply per subscription | - | ✔ | ✔ | ✔ |
-
-## Enterprise IoT security
-
-Add, edit, or cancel an Enterprise IoT plan with [Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) from Microsoft 365 Defender. Alerts, vulnerabilities, and recommendations for Enterprise IoT networks are also only available from Microsoft 365 Defender.
-
-In addition to the permissions listed above, Enterprise IoT security with Defender for IoT has the following requirements:
--- **To add an Enterprise IoT plan**, you'll need an E5 license and specific permissions in your Microsoft 365 Defender tenant.-- **To view Enterprise IoT devices in your Azure device inventory**, you'll need an Enterprise IoT network sensor registered.-
-For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md).
- ## Next steps For more information, see:
For more information, see:
- [On-premises user roles for OT monitoring with Defender for IoT](roles-on-premises.md) - [Create and manage users on an OT network sensor](manage-users-sensor.md) - [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)--
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Title: What's new in Microsoft Defender for IoT description: This article describes new features available in Microsoft Defender for IoT, including both OT and Enterprise IoT networks, and both on-premises and in the Azure portal. Previously updated : 10/23/2023 Last updated : 11/01/2023
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates |
|--|--|
+| **Enterprise IoT networks** | [Enterprise IoT protection now included in Microsoft 365 E5 and E5 Security licenses](#enterprise-iot-protection-now-included-in-microsoft-365-e5-and-e5-security-licenses) |
| **OT networks** | [Updated security stack integration guidance](#updated-security-stack-integration-guidance)|
+### Enterprise IoT protection now included in Microsoft 365 E5 and E5 Security licenses
+
+Enterprise IoT (EIoT) security with Defender for IoT discovers unmanaged IoT devices and also provides added security value, including continuous monitoring, vulnerability assessments, and tailored recommendations specifically designed for Enterprise IoT devices. Seamlessly integrated with Microsoft 365 Defender, Microsoft Defender Vulnerability Management, and Microsoft Defender for Endpoint on the Microsoft 365 Defender portal, it ensures a holistic approach to safeguarding an organization's network.
+
+Defender for IoT EIoT monitoring is now automatically supported as part of the Microsoft 365 E5 (ME5) and E5 Security plans, covering up to five devices per user license. For example, if your organization possesses 500 ME5 licenses, you can use Defender for IoT to monitor up to 2500 EIoT devices. This integration represents a significant leap toward fortifying your IoT ecosystem within the Microsoft 365 environment.
+
+- **Customers who have ME5 or E5 Security plans but aren't yet using Defender for IoT for their EIoT devices** must [toggle on support](eiot-defender-for-endpoint.md) in the Microsoft 365 Defender portal.
+
+- **New customers** without an ME5 or E5 Security plan can purchase a standalone, **Microsoft Defender for IoT - EIoT Device License - add-on** license, as an add-on to Microsoft Defender for Endpoint P2. Purchase standalone licenses from the Microsoft 365 admin center.
+
+- **Existing customers with both legacy Enterprise IoT plans and ME5/E5 Security plans** are automatically switched to the new licensing method. Enterprise IoT monitoring is now bundled into your license, at no extra charge, and with no action item required from you.
+
+- **Customers with legacy Enterprise IoT plans and no ME5/E5 Security plans** can continue to use their existing plans until the plans expire.
+
+Trial licenses are available for Defender for Endpoint P2 customers as standalone licenses. Trial licenses support up to 100 devices for 90 days.
+
+For more information, see:
+
+- [Securing IoT devices in the enterprise](concept-enterprise.md)
+- [Enable Enterprise IoT security with Defender for Endpoint](eiot-defender-for-endpoint.md)
+- [Defender for IoT subscription billing](billing.md)
+ ### Updated security stack integration guidance Defender for IoT is refreshing its security stack integrations to improve the overall robustness, scalability, and ease of maintenance of various security solutions.
dev-box How To Authenticate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-authenticate.md
Last updated 09/07/2023
## Using Microsoft Entra authentication for REST APIs
-Use the following procedures to authenticate with Microsoft Entra ID. You can follow along in [Azure Cloud Shell](../../articles/cloud-shell/quickstart.md), on an Azure virtual machine, or on your local machine.
+Use the following procedures to authenticate with Microsoft Entra ID. You can follow along in [Azure Cloud Shell](../cloud-shell/get-started.md), on an Azure virtual machine, or on your local machine.
### Sign in to the user's Azure subscription
dms Concepts Migrate Azure Mysql Replicate Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/concepts-migrate-azure-mysql-replicate-changes.md
To complete the replicate changes migration successfully, ensure that the follow
- When performing a replicate changes migration, the name of the database on the target server must be the same as the name on the source server. - Support is limited to the ROW binlog format.-- DDL changes replication is supported only when you have selected the option for migrating entire server on DMS UI.
+- DDL changes replication is supported only when migrating to a v8.0 Azure Database for MySQL Flexible Server target server and when you have selected the **Replicate data definition and administration statements for selected objects** option in the DMS UI. The replication feature replicates to the target any data definition and administration statements that occur after the initial load and are logged in the binary log.
- Renaming databases or tables is not supported when replicating changes. ## Next steps
dms Tutorial Mysql Azure External To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-external-to-flex-online-portal.md
As you prepare for the migration, be sure to consider the following limitations.
* Currently, DMS doesn't support migrating the DEFINER clause for objects. All object types with definers on the source are dropped and after the migration, the default definer for all objects that support a definer clause and that are created during schema migration, will be set to the login used to run the migration. * Currently, DMS only supports migrating a schema as part of data movement. If nothing is selected for data movement, the schema migration won't occur. Note that selecting a table for schema migration also selects it for data movement. * Online migration support is limited to the ROW binlog format.
-* Online migration only replicates DML changes; replicating DDL changes isn't supported. Don't make any schema changes to the source while replication is in progress, if DMS detects DDL while replicating, it will generate a warning that can be viewed in the Azure portal.
+* Online migration now supports DDL statement replication when migrating to a v8.0 Azure Database for MySQL Flexible Server target server. For target server engine version v5.x, DDL statement replication isn't currently supported.
+    * Statement replication is supported for databases, tables, and schema objects (views, routines, triggers) selected for schema migration when configuring an Azure DMS migration activity. Data definition and administration statements for databases, tables, and schema objects that aren't selected won't be replicated. Selecting an entire server for migration will replicate statements for any tables, databases, and schema objects that are created on the source server after the initial load has completed.
+    * Azure DMS statement replication supports all of the Data Definition statements listed [here](https://dev.mysql.com/doc/refman/8.0/en/sql-data-definition-statements.html), with the exception of the following commands:
+        * LOGFILE GROUP statements
+        * SERVER statements
+        * SPATIAL REFERENCE SYSTEM statements
+        * TABLESPACE statements
+    * Azure DMS statement replication supports all of the Data Administration – Account Management statements listed [here](https://dev.mysql.com/doc/refman/8.0/en/account-management-statements.html), with the exception of the following commands:
+        * SET DEFAULT ROLE
+        * SET PASSWORD
+    * Azure DMS statement replication supports all of the Data Administration – Table Maintenance statements listed [here](https://dev.mysql.com/doc/refman/8.0/en/table-maintenance-statements.html), with the exception of the following commands:
+        * REPAIR TABLE
+        * ANALYZE TABLE
+        * CHECKSUM TABLE
## Best practices for creating a flexible server for faster data loads using DMS
dms Tutorial Mysql Azure Single To Flex Online Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-single-to-flex-online-portal.md
As you prepare for the migration, be sure to consider the following limitations.
* Currently, DMS doesn't support migrating the DEFINER clause for objects. All object types with definers on the source are dropped and after the migration, the default definer for all objects that support a definer clause and that are created during schema migration, will be set to the login used to run the migration. * Currently, DMS only supports migrating a schema as part of data movement. If nothing is selected for data movement, the schema migration won't occur. Note that selecting a table for schema migration also selects it for data movement. * Online migration support is limited to the ROW binlog format.
-* Online migration only replicates DML changes; replicating DDL changes isn't supported. Don't make any schema changes to the source while replication is in progress, if DMS detects DDL while replicating, it will generate a warning that can be viewed in the Azure portal.
+* Online migration now supports DDL statement replication when migrating to a v8.0 Azure Database for MySQL Flexible Server target server. For target server engine version v5.x, DDL statement replication isn't currently supported.
+    * Statement replication is supported for databases, tables, and schema objects (views, routines, triggers) selected for schema migration when configuring an Azure DMS migration activity. Data definition and administration statements for databases, tables, and schema objects that aren't selected won't be replicated. Selecting an entire server for migration will replicate statements for any tables, databases, and schema objects that are created on the source server after the initial load has completed.
+    * Azure DMS statement replication supports all of the Data Definition statements listed [here](https://dev.mysql.com/doc/refman/8.0/en/sql-data-definition-statements.html), with the exception of the following commands:
+        * LOGFILE GROUP statements
+        * SERVER statements
+        * SPATIAL REFERENCE SYSTEM statements
+        * TABLESPACE statements
+    * Azure DMS statement replication supports all of the Data Administration – Account Management statements listed [here](https://dev.mysql.com/doc/refman/8.0/en/account-management-statements.html), with the exception of the following commands:
+        * SET DEFAULT ROLE
+        * SET PASSWORD
+    * Azure DMS statement replication supports all of the Data Administration – Table Maintenance statements listed [here](https://dev.mysql.com/doc/refman/8.0/en/table-maintenance-statements.html), with the exception of the following commands:
+        * REPAIR TABLE
+        * ANALYZE TABLE
+        * CHECKSUM TABLE
## Best practices for creating a flexible server for faster data loads using DMS
event-grid Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md
Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
event-hubs Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md
Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
resource, an updated resource, or an existing resource.
These effects are currently supported in a policy definition:
+- [AddToNetworkGroup](#addtonetworkgroup)
- [Append](#append) - [Audit](#audit) - [AuditIfNotExists](#auditifnotexists)
These effects are currently supported in a policy definition:
- [Disabled](#disabled) - [Manual](#manual) - [Modify](#modify)
+- [Mutate](#mutate-preview)
## Interchanging effects
manages the evaluation and outcome and reports the results back to Azure Policy.
- **Disabled** is checked first to determine whether the policy rule should be evaluated. - **Append** and **Modify** are then evaluated. Since either could alter the request, a change made
- may prevent an audit or deny effect from triggering. These effects are only available with a
+ might prevent an audit or deny effect from triggering. These effects are only available with a
  Resource Manager mode.
- **Deny** is then evaluated. By evaluating deny before audit, double logging of an undesired resource is prevented.
logging or action is required.
`PATCH` requests that only modify `tags` related fields restrict policy evaluation to policies containing conditions that inspect `tags` related fields.
+## AddToNetworkGroup
+
+AddToNetworkGroup is used in Azure Virtual Network Manager to define dynamic network group membership. This effect is specific to _Microsoft.Network.Data_ [policy mode](./definition-structure.md#resource-provider-modes) definitions only.
+
+With network groups, your policy definition includes a conditional expression for matching virtual networks that meet your criteria, and specifies a destination network group. The addToNetworkGroup effect then places any matching resources into that destination network group.
+
+To learn more, go to [Configuring Azure Policy with network groups in Azure Virtual Network Manager](../../../virtual-network-manager/concept-azure-policy-integration.md).
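As a rough sketch of what such a definition can look like (the definition name, the match condition, and the network group resource ID are hypothetical placeholders, so check them against your own Virtual Network Manager setup):

```bash
# Minimal sketch: a Microsoft.Network.Data mode definition that places virtual
# networks whose name contains "prod" into a destination network group.
az policy definition create \
  --name "add-prod-vnets-to-group" \
  --mode "Microsoft.Network.Data" \
  --display-name "Add production virtual networks to network group" \
  --rules '{
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.Network/virtualNetworks" },
        { "field": "name", "contains": "prod" }
      ]
    },
    "then": {
      "effect": "addToNetworkGroup",
      "details": {
        "networkGroupId": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/networkManagers/<manager>/networkGroups/<group>"
      }
    }
  }'
```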
+
## Append

Append is used to add more fields to the requested resource during creation or update. A
related resources to match.
  complete, regardless of outcome. If provisioning takes longer than 6 hours, it's treated as a failure when determining _AfterProvisioning_ evaluation delays.
  - Default is `PT10M` (10 minutes).
- - Specifying a long evaluation delay may cause the recorded compliance state of the resource to
+ - Specifying a long evaluation delay might cause the recorded compliance state of the resource to
  not update until the next [evaluation trigger](../how-to/get-compliance-data.md#evaluation-triggers).
- **ExistenceCondition** (optional)
related resources to match and the template deployment to execute.
  complete, regardless of outcome. If provisioning takes longer than 6 hours, it's treated as a failure when determining _AfterProvisioning_ evaluation delays.
  - Default is `PT10M` (10 minutes).
- - Specifying a long evaluation delay may cause the recorded compliance state of the resource to
+ - Specifying a long evaluation delay might cause the recorded compliance state of the resource to
  not update until the next [evaluation trigger](../how-to/get-compliance-data.md#evaluation-triggers).
- **ExistenceCondition** (optional)
is applied only when evaluating requests with API version greater than or equal to `
} } ```
+## Mutate (preview)
+
+Mutation is used in Azure Policy for Kubernetes to remediate AKS cluster components, like pods. This effect is specific to _Microsoft.Kubernetes.Data_ [policy mode](./definition-structure.md#resource-provider-modes) definitions only.
+
+To learn more, go to [Understand Azure Policy for Kubernetes clusters](./policy-for-kubernetes.md).
+
+### Mutate properties
+- **mutationInfo** (optional)
+ - Can't be used with `constraint`, `constraintTemplate`, `apiGroups`, or `kinds`.
+ - Can't be parameterized.
+ - **sourceType** (required)
+ - Defines the type of source for the mutation template. Allowed values: _PublicURL_ or _Base64Encoded_.
+ - If _PublicURL_, paired with property `url` to provide location of the mutation template. The location must be publicly accessible.
+ > [!WARNING]
+ > Don't use SAS URIs or tokens in `url` or anything else that could expose a secret.
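As an illustrative sketch of this preview syntax (the definition name, cluster-type condition, and template URL below are hypothetical placeholders):

```bash
# Minimal sketch: a Microsoft.Kubernetes.Data mode definition whose then-block
# points the mutate effect at a publicly hosted Gatekeeper mutation template.
az policy definition create \
  --name "mutate-example" \
  --mode "Microsoft.Kubernetes.Data" \
  --rules '{
    "if": {
      "field": "type",
      "in": [
        "Microsoft.ContainerService/managedClusters",
        "Microsoft.Kubernetes/connectedClusters"
      ]
    },
    "then": {
      "effect": "mutate",
      "details": {
        "mutationInfo": {
          "sourceType": "PublicURL",
          "url": "https://example.com/mutations/assign-seccomp.yaml"
        }
      }
    }
  }'
```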
+ ## Layering policy definitions
-A resource may be affected by several assignments. These assignments may be at the same scope or at
+A resource can be affected by several assignments. These assignments might be at the same scope or at
different scopes. Each of these assignments is also likely to have a different effect defined. The condition and effect for each policy is independently evaluated. For example:
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
# Understand Azure Policy for Kubernetes clusters

Azure Policy extends [Gatekeeper](https://open-policy-agent.github.io/gatekeeper) v3, an _admission
-controller webhook_ for [Open Policy Agent](https://www.openpolicyagent.org/) (OPA), to apply
-at-scale enforcements and safeguards on your clusters in a centralized, consistent manner. Azure
-Policy makes it possible to manage and report on the compliance state of your Kubernetes clusters
-from one place. The add-on enacts the following functions:
+controller webhook_ for [Open Policy Agent](https://www.openpolicyagent.org/) (OPA), to apply at-scale enforcements and safeguards on your cluster components in a centralized, consistent manner. Cluster components include pods, containers, and namespaces.
-- Checks with Azure Policy service for policy assignments to the cluster.
-- Deploys policy definitions into the cluster as
- [constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates) and
- [constraint](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraints) custom resources.
-- Reports auditing and compliance details back to Azure Policy service.
+Azure Policy makes it possible to manage and report on the compliance state of your Kubernetes cluster components from one place. Azure Policy's Add-On or Extension also enhances governance of your cluster components with Azure Policy features, like the ability to use [selectors](./assignment-structure.md#resource-selectors-preview) and [overrides](./assignment-structure.md#overrides-preview) for safe policy rollout and rollback.
Azure Policy for Kubernetes supports the following cluster environments:

-- [Azure Kubernetes Service (AKS)](../../../aks/intro-kubernetes.md)
-- [Azure Arc enabled Kubernetes](../../../azure-arc/kubernetes/overview.md)
+- [Azure Kubernetes Service (AKS)](../../../aks/intro-kubernetes.md), through **Azure Policy's Add-On for AKS**
+- [Azure Arc enabled Kubernetes](../../../azure-arc/kubernetes/overview.md), through **Azure Policy's Extension for Arc**
> [!IMPORTANT]
> The Azure Policy Add-on Helm model and the add-on for AKS Engine have been _deprecated_. Follow the instructions to [remove the add-ons](#remove-the-add-on).

## Overview
+By installing Azure Policy's add-on or extension on your Kubernetes clusters, Azure Policy enacts the following functions:
+
+- Checks with Azure Policy service for policy assignments to the cluster.
+- Deploys policy definitions into the cluster as [constraint template](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates) and [constraint](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraints) custom resources or as a mutation template resource (depending on policy definition content).
+- Reports auditing and compliance details back to Azure Policy service.
+ To enable and use Azure Policy with your Kubernetes cluster, take the following actions:
-1. Configure your Kubernetes cluster and install the [Azure Kubernetes Service (AKS)](#install-azure-policy-add-on-for-aks) add-on
+1. Configure your Kubernetes cluster and install the [Azure Kubernetes Service (AKS)](#install-azure-policy-add-on-for-aks) add-on or Azure Policy's Extension for [Arc-enabled Kubernetes clusters](#install-azure-policy-extension-for-azure-arc-enabled-kubernetes) (depending on your cluster type).
> [!NOTE]
> For common issues with installation, see
> [Troubleshoot - Azure Policy Add-on](../troubleshoot/general.md#add-on-for-kubernetes-installation-errors).
-2. [Understand the Azure Policy language for Kubernetes](#policy-language)
-
-3. [Assign a definition to your Kubernetes cluster](#assign-a-policy-definition)
-
-4. [Wait for validation](#policy-evaluation)
-
-## Limitations
-
-The following general limitations apply to the Azure Policy Add-on for Kubernetes clusters:
-
-- Azure Policy Add-on for Kubernetes [supported Kubernetes versions in Azure Kubernetes Service (AKS)](../../../aks/supported-kubernetes-versions.md).
-- Azure Policy Add-on for Kubernetes can only be deployed to Linux node pools.
-- Maximum number of pods supported by the Azure Policy Add-on per cluster: **10,000**
-- Maximum number of Non-compliant records per policy per cluster: **500**
-- Maximum number of Non-compliant records per subscription: **1 million**
-- Installations of Gatekeeper outside of the Azure Policy Add-on aren't supported. Uninstall any
- components installed by a previous Gatekeeper installation before enabling the Azure Policy
- Add-on.
-- [Reasons for non-compliance](../how-to/determine-non-compliance.md#compliance-reasons) aren't
- available for the `Microsoft.Kubernetes.Data`
- [Resource Provider mode](./definition-structure.md#resource-provider-modes). Use
- [Component details](../how-to/determine-non-compliance.md#component-details-for-resource-provider-modes).
-- Component-level [exemptions](./exemption-structure.md) aren't supported for
- [Resource Provider modes](./definition-structure.md#resource-provider-modes).
-
-The following limitations apply only to the Azure Policy Add-on for AKS:
+1. [Create or use a sample Azure Policy definition for Kubernetes](#create-a-policy-definition)
-- [AKS Pod security policy](../../../aks/use-pod-security-policies.md) and the Azure Policy Add-on
- for AKS can't both be enabled. For more information, see
- [AKS pod security limitation](../../../aks/use-azure-policy.md).
-- Namespaces automatically excluded by Azure Policy Add-on for evaluation: _kube-system_ and
- _gatekeeper-system_.
+1. [Assign a definition to your Kubernetes cluster](#assign-a-policy-definition)
-## Recommendations
-
-The following are general recommendations for using the Azure Policy Add-on:
-
-- The Azure Policy Add-on requires three Gatekeeper components to run: One audit pod and two webhook
- pod replicas. These components consume more resources as the count of Kubernetes resources and
- policy assignments increases in the cluster, which requires audit and enforcement operations.
-
- - For fewer than 500 pods in a single cluster with a max of 20 constraints: two vCPUs and 350 MB
- of memory per component.
- - For more than 500 pods in a single cluster with a max of 40 constraints: three vCPUs and 600 MB
- of memory per component.
-
-- Open ports for the Azure Policy Add-On. The Azure Policy Add-On uses these domains and ports to fetch policy
- definitions and assignments and report compliance of the cluster back to Azure Policy.
-
- |Domain |Port |
- |||
- |`data.policy.core.windows.net` |`443` |
- |`store.policy.core.windows.net` |`443` |
- |`login.windows.net` |`443` |
- |`dc.services.visualstudio.com` |`443` |
-
-- Windows pods
- [don't support security contexts](https://kubernetes.io/docs/concepts/security/pod-security-standards/#what-profiles-should-i-apply-to-my-windows-pods).
- Thus, some of the Azure Policy definitions, such as disallowing root privileges, can't be
- escalated in Windows pods and only apply to Linux pods.
-
-The following recommendation applies only to AKS and the Azure Policy Add-on:
-
-- Use system node pool with `CriticalAddonsOnly` taint to schedule Gatekeeper pods. For more
- information, see
- [Using system node pools](../../../aks/use-system-pools.md#system-and-user-node-pools).
-- Secure outbound traffic from your AKS clusters. For more information, see
- [Control egress traffic for cluster nodes](../../../aks/limit-egress-traffic.md).
-- If the cluster has `aad-pod-identity` enabled, Node Managed Identity (NMI) pods modify the nodes'
- iptables to intercept calls to the Azure Instance Metadata endpoint. This configuration means any
- request made to the Metadata endpoint is intercepted by NMI even if the pod doesn't use
- `aad-pod-identity`. AzurePodIdentityException CRD can be configured to inform `aad-pod-identity`
- that any requests to the Metadata endpoint originating from a pod that matches labels defined in
- CRD should be proxied without any processing in NMI. The system pods with
- `kubernetes.azure.com/managedby: aks` label in _kube-system_ namespace should be excluded in
- `aad-pod-identity` by configuring the AzurePodIdentityException CRD. For more information, see
- [Disable aad-pod-identity for a specific pod or application](https://azure.github.io/aad-pod-identity/docs/configure/application_exception).
- To configure an exception, install the
- [mic-exception YAML](https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/mic-exception.yaml).
+1. [Wait for validation](#policy-evaluation)
+1. [Logging](#logging) and [troubleshooting](#troubleshooting-the-add-on)
+1. Review [limitations](#limitations) and [recommendations in our FAQ section](#frequently-asked-questions)
## Install Azure Policy Add-on for AKS
-Before you install the Azure Policy Add-on or enabling any of the service features, your subscription
-must enable the `Microsoft.PolicyInsights` resource providers.
-
-1. You need the Azure CLI version 2.12.0 or later installed and configured. Run `az --version` to
- find the version. If you need to install or upgrade, see
- [Install the Azure CLI](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-cli).
+### Prerequisites
1. Register the resource providers and preview features.
must enable the `Microsoft.PolicyInsights` resource providers.
# Provider register: Register the Azure Policy provider
az provider register --namespace Microsoft.PolicyInsights
```
-
-1. If limited preview policy definitions were installed, remove the add-on with the **Disable**
- button on your AKS cluster under the **Policies** page.
-
+1. You need the Azure CLI version 2.12.0 or later installed and configured. Run `az --version` to
+ find the version. If you need to install or upgrade, see
+ [Install the Azure CLI](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-cli).
+
1. The AKS cluster must be a [supported Kubernetes version in Azure Kubernetes Service (AKS)](../../../aks/supported-kubernetes-versions.md). Use the following script to validate your AKS cluster version:
must enable the `Microsoft.PolicyInsights` resource providers.
az aks list
```
-1. Install version _2.12.0_ or higher of the Azure CLI. For more information, see
- [Install the Azure CLI](../../../azure-resource-manager/management/resource-providers-and-types.md#azure-cli).
+1. Open ports for the Azure Policy extension. The Azure Policy extension uses these domains and ports to fetch policy
+ definitions and assignments and report compliance of the cluster back to Azure Policy.
+
+ |Domain |Port |
+ |---|---|
+ |`data.policy.core.windows.net` |`443` |
+ |`store.policy.core.windows.net` |`443` |
+ |`login.windows.net` |`443` |
+ |`dc.services.visualstudio.com` |`443` |
After the prerequisites are completed, install the Azure Policy Add-on in the AKS cluster you want to manage.
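For reference, enabling the add-on on an existing cluster is a single Azure CLI call; the cluster and resource group names below are placeholders:

```bash
# Enable the Azure Policy add-on on an existing AKS cluster.
az aks enable-addons \
  --addons azure-policy \
  --name <cluster-name> \
  --resource-group <resource-group>

# Verify that the add-on and Gatekeeper pods are running.
kubectl get pods -n kube-system | grep azure-policy
kubectl get pods -n gatekeeper-system
```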
similar to the following output:
```

## <a name="install-azure-policy-extension-for-azure-arc-enabled-kubernetes"></a>Install Azure Policy Extension for Azure Arc enabled Kubernetes
-[Azure Policy for Kubernetes](./policy-for-kubernetes.md) makes it possible to manage and report on the compliance state of your Kubernetes clusters from one place.
+[Azure Policy for Kubernetes](./policy-for-kubernetes.md) makes it possible to manage and report on the compliance state of your Kubernetes clusters from one place. With Azure Policy's Extension for Arc-enabled Kubernetes clusters, you can govern your Arc-enabled Kubernetes cluster components, like pods and containers.
This article describes how to [create](#create-azure-policy-extension), [show extension status](#show-azure-policy-extension), and [delete](#delete-azure-policy-extension) the Azure Policy for Kubernetes extension.
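As a sketch of the create step, mirroring the delete command shown below (the cluster, resource group, and extension instance names are placeholders):

```bash
# Install the Azure Policy extension on an Arc-enabled Kubernetes cluster.
az k8s-extension create \
  --cluster-type connectedClusters \
  --cluster-name <CLUSTER_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --extension-type Microsoft.PolicyInsights \
  --name <EXTENSION_INSTANCE_NAME>
```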
To delete the extension instance, run the following command substituting `<>` with your values:
az k8s-extension delete --cluster-type connectedClusters --cluster-name <CLUSTER_NAME> --resource-group <RESOURCE_GROUP> --name <EXTENSION_INSTANCE_NAME> ```
-## Policy language
+## Create a policy definition
The Azure Policy language structure for managing Kubernetes follows that of existing policy
-definitions. With a [Resource Provider mode](./definition-structure.md#resource-provider-modes) of
-`Microsoft.Kubernetes.Data`, the effects [audit](./effects.md#audit) and [deny](./effects.md#deny)
-are used to manage your Kubernetes clusters. _Audit_ and _deny_ must provide **details** properties
+definitions. There are sample definition files available to assign in [Azure Policy's built-in policy library](../samples/built-in-policies.md) that can be used to govern your cluster components.
+
+Azure Policy for Kubernetes also supports custom definition creation at the component level for both Azure Kubernetes Service clusters and Azure Arc-enabled Kubernetes clusters. Constraint template and mutation template samples are available in the [Gatekeeper community library](https://github.com/open-policy-agent/gatekeeper-library/tree/master). [Azure Policy's VS Code Extension](../how-to/extension-for-vscode.md#create-policy-definition-from-constraint-template) can be used to help translate an existing constraint template or mutation template to a custom Azure Policy definition.
+
+With a [Resource Provider mode](./definition-structure.md#resource-provider-modes) of
+`Microsoft.Kubernetes.Data`, the effects [audit](./effects.md#audit), [deny](./effects.md#deny), [disabled](./effects.md#disabled), and [mutate](./effects.md#mutate-preview) are used to manage your Kubernetes clusters.
+
+_Audit_ and _deny_ must provide **details** properties
specific to working with [OPA Constraint Framework](https://github.com/open-policy-agent/frameworks/tree/master/constraint) and Gatekeeper v3.
-As part of the _details.templateInfo_, _details.constraint_, or _details.constraintTemplate_
-properties in the policy definition, Azure Policy passes the URI or Base64Encoded value of these
-[CustomResourceDefinitions](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates)
-(CRD) to the add-on. Rego is the language that OPA and Gatekeeper support to validate a request to
+As part of the _details.templateInfo_ or _details.constraintInfo_ properties in the policy definition, Azure Policy passes the URI or Base64Encoded value of these [CustomResourceDefinitions](https://open-policy-agent.github.io/gatekeeper/website/docs/howto/#constraint-templates) (CRD) to the add-on. Rego is the language that OPA and Gatekeeper support to validate a request to
the Kubernetes cluster. By supporting an existing standard for Kubernetes management, Azure Policy makes it possible to reuse existing rules and pair them with Azure Policy for a unified cloud compliance reporting experience. For more information, see
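Putting those pieces together, here's a sketch of a custom component-level definition; the definition name, cluster-type condition, and template URL are hypothetical placeholders:

```bash
# Minimal sketch: an audit definition whose details.templateInfo points at a
# publicly hosted constraint template, scoped to Namespace resources.
az policy definition create \
  --name "audit-required-labels" \
  --mode "Microsoft.Kubernetes.Data" \
  --rules '{
    "if": {
      "field": "type",
      "in": [
        "Microsoft.ContainerService/managedClusters",
        "Microsoft.Kubernetes/connectedClusters"
      ]
    },
    "then": {
      "effect": "audit",
      "details": {
        "templateInfo": {
          "sourceType": "PublicURL",
          "url": "https://example.com/templates/k8srequiredlabels.yaml"
        },
        "apiGroups": [""],
        "kinds": ["Namespace"]
      }
    }
  }'
```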
Some other considerations:
If constraint templates have the same resource metadata name, but the policy definition references the source at different locations, the policy definitions are considered to be in conflict. Example: Two policy definitions reference the same `template.yaml` file stored at different source locations
-such as the Azure Policy template store (`store.policy.core.windows.net`) and GitHub.
+like the Azure Policy template store (`store.policy.core.windows.net`) and GitHub.
When policy definitions and their constraint templates are assigned but aren't already installed on the cluster and are in conflict, they're reported as a conflict and aren't installed into the
artifacts, use the following steps:
To view constraint templates downloaded by the add-on, run `kubectl get constrainttemplates`. Constraint templates that start with `k8sazure` are the ones installed by the add-on.
+### View the add-on mutation templates
+
+To view mutation templates downloaded by the add-on, run `kubectl get assign`, `kubectl get assignmetadata`, and `kubectl get modifyset`.
+
### Get Azure Policy mappings

To identify the mapping between a constraint template downloaded to the cluster and the policy
For Azure Policy related issues, go to:
- [Inspect Azure Policy logs](#logging)
- [General troubleshooting for Azure Policy on Kubernetes](../troubleshoot/general.md#add-on-for-kubernetes-general-errors)
+## Azure Policy Add-On for AKS Changelog
+Azure Policy's Add-On for AKS has a version number that indicates the image version of the add-on. As new feature support is introduced on the Add-On, the version number is incremented.
+
+This section helps you identify which Add-On version is installed on your cluster and provides a historical table of the Add-On versions available per AKS cluster version.
+
+### Identify which Add-On version is installed on your cluster
+
+The Azure Policy Add-On uses the standard [Semantic Versioning](https://semver.org/) schema for each version. To identify the Azure Policy Add-On version being used, you can run the following command:
+`kubectl get pod azure-policy-<unique-pod-identifier> -n kube-system -o json | jq '.spec.containers[0].image'`
+
+To identify the Gatekeeper version that your Azure Policy Add-On is using, you can run the following command:
+`kubectl get pod gatekeeper-controller-<unique-pod-identifier> -n gatekeeper-system -o json | jq '.spec.containers[0].image'`
+
+Finally, to identify the AKS cluster version that you're using, follow the AKS guidance for checking your cluster's Kubernetes version.
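One quick way to check, assuming you have the Azure CLI installed (the resource group and cluster names are placeholders):

```bash
# Show the Kubernetes version an AKS cluster is running.
az aks show \
  --resource-group <resource-group> \
  --name <cluster-name> \
  --query kubernetesVersion \
  --output tsv
```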
+
+### Add-On versions available for each AKS cluster version
+
+#### 1.2.1
+- Released October 2023
+- Kubernetes 1.25+
+- Gatekeeper 3.13.3
+
+#### 1.1.0
+- Released July 2023
+- Kubernetes 1.27+
+- Gatekeeper 3.11.1
+
+#### 1.0.1
+- Released June 2023
+- Kubernetes 1.24+
+- Gatekeeper 3.11.1
+
+#### 1.0.0
+Azure Policy for Kubernetes now supports mutation to remediate AKS clusters at scale!
+
## Remove the add-on

### Remove the add-on from AKS
aligns with how the add-on was installed:
```bash
helm uninstall azure-policy-addon
```
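If the add-on was enabled through AKS itself rather than the deprecated Helm model, removal is a single Azure CLI call (names are placeholders):

```bash
# Disable the Azure Policy add-on on an AKS cluster.
az aks disable-addons \
  --addons azure-policy \
  --name <cluster-name> \
  --resource-group <resource-group>
```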
+## Limitations
+ - For general Azure Policy definition and assignment limits, review [Azure Policy's documented limits](../../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-policy-limits).
+ - Azure Policy Add-on for Kubernetes can only be deployed to Linux node pools.
+ - Maximum number of pods supported by the Azure Policy Add-on per cluster: **10,000**
+ - Maximum number of Non-compliant records per policy per cluster: **500**
+ - Maximum number of Non-compliant records per subscription: **1 million**
+ - Installations of Gatekeeper outside of the Azure Policy Add-on aren't supported. Uninstall any components installed by a previous Gatekeeper installation before enabling the Azure Policy Add-on.
+ - [Reasons for non-compliance](../how-to/determine-non-compliance.md#compliance-reasons) aren't available for the Microsoft.Kubernetes.Data [Resource Provider mode](./definition-structure.md#resource-provider-modes). Use [Component details](../how-to/determine-non-compliance.md#component-details-for-resource-provider-modes).
+ - Component-level [exemptions](./exemption-structure.md) aren't supported for [Resource Provider modes](./definition-structure.md#resource-provider-modes). Parameters support is available in Azure Policy definitions to exclude and include particular namespaces.
+
+The following limitations apply only to the Azure Policy Add-on for AKS:
+- [AKS Pod security policy](../../../aks/use-pod-security-policies.md) and the Azure Policy Add-on for AKS can't both be enabled. For more information, see [AKS pod security limitation](../../../aks/use-azure-policy.md).
+- Namespaces automatically excluded by Azure Policy Add-on for evaluation: kube-system and gatekeeper-system.
-## Diagnostic data collected by Azure Policy Add-on
+## Frequently asked questions
+### What does the Azure Policy Add-On / Azure Policy Extension deploy on my cluster upon installation?
+The Azure Policy Add-on requires three Gatekeeper components to run: One audit pod and two webhook pod replicas. One Azure Policy pod and one Azure Policy webhook pod will also be installed.
+
+### How much resource consumption should I expect the Azure Policy Add-On / Extension to use on each cluster?
+The Azure Policy for Kubernetes components that run on your cluster consume more resources as the count of Kubernetes resources and policy assignments in the cluster increases, because this requires more audit and enforcement operations.
+The following are estimates to help you plan:
+ - For fewer than 500 pods in a single cluster with a max of 20 constraints: two vCPUs and 350 MB of memory per component.
+ - For more than 500 pods in a single cluster with a max of 40 constraints: three vCPUs and 600 MB of memory per component.
+
+### Can Azure Policy for Kubernetes definitions be applied on Windows pods?
+Windows pods [don't support security contexts](https://kubernetes.io/docs/concepts/security/pod-security-standards/#what-profiles-should-i-apply-to-my-windows-pods). Thus, some of the Azure Policy definitions, like disallowing root privileges, can't be enforced in Windows pods and only apply to Linux pods.
+
+### What type of diagnostic data gets collected by Azure Policy Add-on?
The Azure Policy Add-on for Kubernetes collects limited cluster diagnostic data. This diagnostic data is vital technical data related to software and performance. It's used in the following ways:
collected:
evaluation - Number of Gatekeeper policy definitions not installed by Azure Policy Add-on
+### What are general best practices to keep in mind when installing the Azure Policy Add-On?
+ - Use system node pool with `CriticalAddonsOnly` taint to schedule Gatekeeper pods. For more information, see [Using system node pools](../../../aks/use-system-pools.md#system-and-user-node-pools).
+ - Secure outbound traffic from your AKS clusters. For more information, see [Control egress traffic for cluster nodes](../../../aks/limit-egress-traffic.md).
+ - If the cluster has `aad-pod-identity` enabled, Node Managed Identity (NMI) pods modify the nodes' iptables to intercept calls to the Azure Instance Metadata endpoint. This configuration means any request made to the Metadata endpoint is intercepted by NMI even if the pod doesn't use `aad-pod-identity`.
+ - AzurePodIdentityException CRD can be configured to inform `aad-pod-identity` that any requests to the Metadata endpoint originating from a pod that matches labels defined in CRD should be proxied without any processing in NMI. The system pods with `kubernetes.azure.com/managedby: aks` label in _kube-system_ namespace should be excluded in `aad-pod-identity` by configuring the AzurePodIdentityException CRD. For more information, see [Disable aad-pod-identity for a specific pod or application](https://azure.github.io/aad-pod-identity/docs/configure/application_exception). To configure an exception, install the [mic-exception YAML](https://github.com/Azure/aad-pod-identity/blob/master/deploy/infra/mic-exception.yaml).
+
## Next steps

- Review examples at [Azure Policy samples](../samples/index.md).
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 11/03/2023 Last updated : 11/06/2023
The name on each built-in links to the initiative definition source on the
[!INCLUDE [azure-policy-reference-policysets-regulatory-compliance](../../../../includes/policy/reference/bycat/policysets-regulatory-compliance.md)]
+## Resilience
+
+
## SDN

[!INCLUDE [azure-policy-reference-policysets-sdn](../../../../includes/policy/reference/bycat/policysets-sdn.md)]
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 11/03/2023 Last updated : 11/06/2023
The name of each built-in links to the policy definition in the Azure portal. Us
[!INCLUDE [azure-policy-reference-policies-portal](../../../../includes/policy/reference/bycat/policies-portal.md)]
+## Resilience
+
+
## Search

[!INCLUDE [azure-policy-reference-policies-search](../../../../includes/policy/reference/bycat/policies-search.md)]
hdinsight Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md
Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
healthcare-apis Concepts Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-machine-learning.md
Title: MedTech service and Azure Machine Learning Service - Azure Health Data Se
description: Learn how to use the MedTech service and the Azure Machine Learning Service -+ Last updated 07/21/2023
# MedTech service and Azure Machine Learning Service
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-
In this article, learn about using the MedTech service and the Azure Machine Learning Service.

## The MedTech service and Azure Machine Learning Service reference architecture
-The MedTech service enables IoT devices to seamlessly integrate with FHIR services. This reference architecture is designed to accelerate adoption of Internet of Things (IoT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure Machine Learning Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment.
+The MedTech service enables IoT devices to seamlessly integrate with FHIR&reg; services. This reference architecture is designed to accelerate adoption of Internet of Things (IoT) projects. This solution uses Azure Databricks for the Machine Learning (ML) compute. However, Azure Machine Learning Services with Kubernetes or a partner ML solution could fit into the Machine Learning Scoring Environment.
The four line colors show the different parts of the data journey.
The four line colors show the different parts of the data journey.
12. Azure Databricks sends a payload to an Azure Function (ML Output).
13. RiskAssessment and/or Flag resource submitted to FHIR service.
    1. For each observation window, a RiskAssessment resource is submitted to the FHIR service.
- 2. For observation windows where the risk assessment is outside the acceptable range a Flag resource should also be submitted to the FHIR service.
+ 2. For observation windows where the RiskAssessment is outside the acceptable range, a Flag Resource should also be submitted to the FHIR service.
14. Scored data sent to data repository for routing to appropriate care team. Azure SQL Server is the data repository used in this design because of its native interaction with Power BI.
-15. Power BI Dashboard is updated with Risk Assessment output in under 15 minutes.
+15. Power BI Dashboard is updated with RiskAssessment output in under 15 minutes.
**Warm path**
The four line colors show the different parts of the data journey.
18. Care Coordination through Microsoft Teams for Healthcare Patient App.

## Next steps
-
-In this article, you learned about the MedTech service and Machine Learning service integration.
-
-For an overview of the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [What is the MedTech service?](overview.md)
-
-To learn about the MedTech service device message data transformation, see
-
-> [!div class="nextstepaction"]
-> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-
-To learn about methods for deploying the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
+
+[What is the MedTech service?](overview.md)
+
+[Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
+
healthcare-apis Concepts Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-power-bi.md
Title: MedTech service Microsoft Power BI - Azure Health Data Services
description: Learn how to use the MedTech service and Power BI -+ Last updated 07/21/2023
# MedTech service and Microsoft Power BI
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-
In this article, learn about using the MedTech service and Microsoft Power Business Intelligence (Power BI).

## The MedTech service and Power BI reference architecture
-This reference architecture shows the basic components of using the Microsoft cloud services to enable Power BI on top of Internet of Things (IoT) and FHIR data.
+This reference architecture shows the basic components of using the Microsoft cloud services to enable Power BI on top of Internet of Things (IoT) and FHIR&reg; data.
You can even embed Power BI dashboards inside the Microsoft Teams client to further enhance care team coordination. For more information on embedding Power BI in Teams, see [Embed Power BI content in Microsoft Teams](/power-bi/collaborate-share/service-embed-report-microsoft-teams).
Azure IoT Edge can be used with IoT Hub to create an on-premises endpoint for
## Next steps
-In this article, you've learned about the MedTech service and Power BI integration.
-
-For an overview of the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [What is the MedTech service?](overview.md)
-
-To learn about the MedTech service device message data transformation, see
-
-> [!div class="nextstepaction"]
-> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+[What is the MedTech service?](overview.md)
-To learn about methods for deploying the MedTech service, see
+[Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Concepts Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-teams.md
Title: MedTech service and Teams notifications - Azure Health Data Services
description: Learn how to use the MedTech service and Teams notifications -+ Last updated 07/21/2023
# MedTech service and Microsoft Teams notifications
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-
In this article, learn about using the MedTech service and Microsoft Teams for notifications.

## The MedTech service and Teams notifications reference architecture
-When combining the MedTech service, the FHIR service, and Teams, you can enable multiple care solutions.
+When combining the MedTech service, the FHIR&reg; service, and Teams, you can enable multiple care solutions.
The diagram is a MedTech service to Teams notifications conceptual architecture for enabling the MedTech service, the FHIR service, and the Teams Patient App.
Azure IoT Edge can be used with IoT Hub to create an on-premises endpoint for
## Next steps
-In this article, you've learned about the MedTech service and Teams notifications integration.
-
-For an overview of the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [What is the MedTech service?](overview.md)
-
-To learn about the MedTech service device message data transformation, see
-
-> [!div class="nextstepaction"]
-> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+[What is the MedTech service?](overview.md)
-To learn about methods for deploying the MedTech service, see
+[Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-arm-template.md
Title: Deploy the MedTech service using an Azure Resource Manager template - Azu
description: Learn how to deploy the MedTech service using an Azure Resource Manager template. -+ Last updated 07/05/2023
# Quickstart: Deploy the MedTech service using an Azure Resource Manager template
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a [JavaScript Object Notation (JSON)](https://www.json.org/) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. In this quickstart, learn how to:
When deployment is completed, the following resources and access roles are creat
* Health Data Services workspace.
-* Health Data Services Fast Healthcare Interoperability Resources FHIR service.
+* Health Data Services FHIR&reg; service.
* Health Data Services MedTech service with the [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) enabled and granted the following access roles:
After you have successfully deployed an instance of the MedTech service, you'll
## Next steps
-In this quickstart, you learned how to deploy the MedTech service in the Azure portal using an ARM template with the **Deploy to Azure** button.
-
-To learn about other methods of deploying the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-
-For an overview of the MedTech service device data processing stages, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
-For frequently asked questions (FAQs) about the MedTech service, see
+[Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+[Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Bicep Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-bicep-powershell-cli.md
Title: Deploy the MedTech service using a Bicep file and Azure PowerShell or the
description: Learn how to deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI. -+ Last updated 07/12/2023
# Quickstart: Deploy the MedTech service using a Bicep file and Azure PowerShell or the Azure CLI
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. In a Bicep file, you define the infrastructure you want to deploy to Azure, and then use that file throughout the development lifecycle to repeatedly deploy your infrastructure. Your resources are deployed in a consistent manner. Bicep provides concise syntax, reliable type safety, and support for code reuse, and offers a first-class authoring experience for your infrastructure-as-code solutions in Azure.

In this quickstart, learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file.
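As a sketch of the CLI path (the resource group name matches the cleanup example later in this article; the Bicep file name is a placeholder for whatever the quickstart's template is called):

```bash
# Create a resource group and deploy the MedTech service Bicep file into it.
az group create --name BicepTestDeployment --location eastus
az deployment group create \
  --resource-group BicepTestDeployment \
  --template-file main.bicep
```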
When deployment is completed, the following resources and access roles are creat
* Health Data Services workspace.
-* Health Data Services FHIR service.
+* Health Data Services FHIR&reg; service.
* Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
For example: `az group delete --resource-group BicepTestDeployment`
## Next steps
-In this quickstart, you learned how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using a Bicep file.
-
-To learn about other methods of deploying the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-
-For an overview of the MedTech service device data processing stages, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
-For frequently asked questions (FAQs) about the MedTech service, see
+[Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+[Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Choose Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-choose-method.md
Title: Choose a deployment method for the MedTech service - Azure Health Data Se
description: Learn about the different methods for deploying the MedTech service. -+ Last updated 07/05/2023
# Quickstart: Choose a deployment method for the MedTech service
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- The MedTech service provides multiple methods for deployment into Azure. Each deployment method has different advantages that allow you to customize your deployment to suit your needs and use cases. In this quickstart, learn about these deployment methods:
In this quickstart, learn about these deployment methods:
## Deployment overview
-The following diagram outlines the basic steps of the MedTech service deployment. These steps may help you analyze the deployment options and determine which deployment method is best for you.
+The following diagram outlines the basic steps of the MedTech service deployment. These steps might help you analyze the deployment options and determine which deployment method is best for you.
:::image type="content" source="media/get-started/get-started-with-medtech-service.png" alt-text="Diagram showing MedTech service deployment overview." lightbox="media/get-started/get-started-with-medtech-service.png":::

## ARM template including an Azure IoT Hub using the Deploy to Azure button
-Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment, most configuration steps, and uses the Azure portal. The deployed MedTech service and Azure IoT Hub are fully functional including conforming and valid device and FHIR destination mappings. Use the Azure IoT Hub to create devices and send device messages to the MedTech service.
+Using an ARM template with the **Deploy to Azure** button is an easy and fast deployment method because it automates the deployment, most configuration steps, and uses the Azure portal. The deployed MedTech service and Azure IoT Hub are fully functional including conforming and valid device and FHIR&reg; destination mappings. Use the Azure IoT Hub to create devices and send device messages to the MedTech service.
[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors-with-iothub%2Fazuredeploy.json)
To learn more about deploying the MedTech service using a Bicep file and Azure P
## Azure portal
-Using the Azure portal allows you to see the details of each deployment step. The Azure portal deployment has many steps, but it provides valuable technical information that may be useful for customizing and troubleshooting your MedTech service.
+Using the Azure portal allows you to see the details of each deployment step. The Azure portal deployment has many steps, but it provides valuable technical information that might be useful for customizing and troubleshooting your MedTech service.
To learn more about deploying the MedTech service using the Azure portal, see [Deploy the MedTech service using the Azure portal](deploy-manual-portal.md).
To learn more about deploying the MedTech service using the Azure portal, see [D
## Next steps
-In this quickstart, you learned about the different types of deployment methods for the MedTech service.
-
-To learn about other methods of deploying the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-
-For an overview of the MedTech service device data processing stages, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
-For frequently asked questions (FAQs) about the MedTech service, see
+[Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+[Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Json Powershell Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-json-powershell-cli.md
Title: Deploy the MedTech service using an Azure Resource Manager template and A
description: Learn how to deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI. -+ Last updated 07/05/2023
# Quickstart: Deploy the MedTech service using an Azure Resource Manager template and Azure PowerShell or the Azure CLI
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- To implement infrastructure as code for your Azure solutions, use Azure Resource Manager templates (ARM templates). The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax, which lets you state what you intend to deploy without having to write the sequence of programming commands to create it. In the template, you specify the resources to deploy and the properties for those resources. In this quickstart, learn how to use Azure PowerShell or the Azure CLI to deploy an instance of the MedTech service using an Azure Resource Manager template (ARM template).
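As a sketch of the CLI path (the resource group name matches the cleanup example later in this article; the template URI is a placeholder for wherever the quickstart's ARM template is hosted):

```bash
# Create a resource group and deploy the MedTech service ARM template into it.
az group create --name ArmTestDeployment --location eastus
az deployment group create \
  --resource-group ArmTestDeployment \
  --template-uri <url-to-azuredeploy.json>
```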
When deployment is completed, the following resources and access roles are creat
* Health Data Services workspace.
-* Health Data Services FHIR service.
+* Health Data Services FHIR&reg; service.
* Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
For example: `az group delete --resource-group ArmTestDeployment`
## Next steps
-In this quickstart, you learned how to use Azure PowerShell or Azure CLI to deploy an instance of the MedTech service using an ARM template.
-
-To learn about other methods of deploying the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-
-For an overview of the MedTech service device data processing stages, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
-For frequently asked questions (FAQs) about the MedTech service, see
+[Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+[Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Deploy Manual Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-manual-portal.md
Title: Deploy the MedTech service using the Azure portal - Azure Health Data Ser
description: Learn how to deploy the MedTech service using the Azure portal. -+ Last updated 07/06/2023
# Quickstart: Deploy the MedTech service using the Azure portal
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-
-In this quickstart, learn how to deploy the MedTech service and required resources using the Azure portal.
- The MedTech service deployment using the Azure portal is divided into the following three sections: * [Deploy prerequisite resources](#deploy-prerequisite-resources)
The first step is to deploy the MedTech service prerequisite resources:
* Azure resource group
* Azure Event Hubs namespace and event hub
* Azure Health Data Services workspace
-* Azure Health Data Services FHIR service
+* Azure Health Data Services FHIR&reg; service
Once the prerequisite resources are available, deploy:
The **Destination** tab should now look something like this after you've filled
### Configure the Tags tab (Optional)
-Before you complete your configuration in the **Review + create** tab, you may want to configure tags. You can do this step by selecting the **Next: Tags >** tab.
+Before you complete your configuration in the **Review + create** tab, you might want to configure tags. You can do this step by selecting the **Next: Tags >** tab.
Tags are name and value pairs used for categorizing resources and are an optional step. For more information about tags, see [Use tags to organize your Azure resources and management hierarchy](../../azure-resource-manager/management/tag-resources.md).
If your deployment didn't validate, review the validation failure message(s), an
1. Select the **Create** button to begin the deployment.
-2. The deployment process may take several minutes. The screen displays a message saying that your deployment is in progress.
+2. The deployment process can take several minutes. The screen displays a message saying that your deployment is in progress.
3. When Azure finishes deploying, a "Your Deployment is complete" message appears and also displays the following information:
Valid and conforming device and FHIR destination mappings have to be provided to
## Next steps
-In this article, you learned how to deploy the MedTech service and required resources using the Azure portal.
-
-To learn about other methods of deploying the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-
-For an overview of the MedTech service device data processing stages, see
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+[Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-For frequently asked questions (FAQs) about the MedTech service, see
+[Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
healthcare-apis Device Messages Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md
# Tutorial: Receive device messages through Azure IoT Hub
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-
-The MedTech service can receive messages from devices you create and manage through an IoT hub in [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md). This tutorial uses an Azure Resource Manager template (ARM template) and a **Deploy to Azure** button to deploy a MedTech service. The template also deploys an IoT hub to create and manage devices, and message routes device messages to an event hub for the MedTech service to read and process. After device data processing, the FHIR resources are persisted in the FHIR service, which is also included in the template.
+The MedTech service can receive messages from devices you create and manage through an IoT hub in [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md). This tutorial uses an Azure Resource Manager template (ARM template) and a **Deploy to Azure** button to deploy a MedTech service. The template also deploys an IoT hub to create and manage devices, and a message route that sends device messages to an event hub for the MedTech service to read and process. After device data processing, the FHIR&reg; resources are persisted in the FHIR service, which is also included in the template.
:::image type="content" source="media\device-messages-through-iot-hub\device-message-flow-with-iot-hub.png" border="false" alt-text="Diagram of the IoT device message flow through an IoT hub and event hub, and then into the MedTech service." lightbox="media\device-messages-through-iot-hub\device-message-flow-with-iot-hub.png":::
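For reference, creating a test device and sending a device message can be done with the Azure CLI's `azure-iot` extension; the hub and device names are placeholders, and the payload is an arbitrary example:

```bash
# Requires the azure-iot extension: az extension add --name azure-iot
az iot hub device-identity create --hub-name <iot-hub-name> --device-id <device-id>

# Send a simple device-to-cloud test message.
az iot device send-d2c-message \
  --hub-name <iot-hub-name> \
  --device-id <device-id> \
  --data '{"heartRate": 78}'
```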
To learn how to get a Microsoft Entra access token and view FHIR resources in yo
## Next steps
-In this tutorial, you deployed an ARM template in the Azure portal, connected to your IoT hub, created a device, sent a test message, and reviewed your MedTech service metrics.
-
-To learn about methods of deploying the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-
-For an overview of the MedTech service device data processing stages, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
-For frequently asked questions (FAQs) about the MedTech service, see
+[Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+[Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/frequently-asked-questions.md
description: Learn about the MedTech service frequently asked questions.
+ Last updated 10/11/2023
# Frequently asked questions about the MedTech service
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- ## MedTech service: The basics ## Where is the MedTech service available?
The MedTech service is available in these Azure regions: [Products available by
## Can I use the MedTech service with a different FHIR service other than the Azure Health Data Services FHIR service?
-No. The MedTech service currently only supports the Azure Health Data Services FHIR service for the persistence of transformed device data. The open-source version of the MedTech service supports the use of different FHIR services.
+No. The MedTech service currently only supports the Azure Health Data Services FHIR&reg; service for the persistence of transformed device data. The open-source version of the MedTech service supports the use of different FHIR services.
To learn about the MedTech service open-source projects, see [Open-source projects](git-projects.md). ## What versions of FHIR does the MedTech service support?
-The MedTech service supports the [HL7 FHIR&#174; R4](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491) standard.
+The MedTech service supports the [HL7 FHIR R4](https://www.hl7.org/implement/standards/product_brief.cfm?product_id=491) standard.
## Why do I have to provide device and FHIR destination mappings to the MedTech service?
To learn about the MedTech service open-source projects, see [Open-source projec
## Next steps
-In this article, you learned about the MedTech service frequently asked questions (FAQs).
-
-For an overview of the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [What is the MedTech service?](overview.md)
-
-To learn about the MedTech service device message data transformation, see
-
-> [!div class="nextstepaction"]
-> [Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
+[What is the MedTech service?](overview.md)
-To learn about methods for deploying the MedTech service, see
+[Understand the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md
Title: Get started with the MedTech service - Azure Health Data Services
description: Learn the basic steps for deploying the MedTech service. -+ Last updated 06/06/2023
# Get started with the MedTech service
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- This article and diagram outline the basic steps to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). These steps may help you assess the [MedTech service deployment methods](deploy-choose-method.md) and determine which deployment method is best for you. As a prerequisite, you need an Azure subscription and must have been granted the proper permissions to deploy Azure resource groups and resources. You can follow all the steps, or skip some if you have an existing environment. Also, you can combine all the steps and complete them in Azure PowerShell, Azure CLI, or REST API scripts.
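For example, scripting the first prerequisite (a resource group) in the Azure CLI might look like this; the name and region are placeholders:

```azurecli
az group create --name myMedTechResourceGroup --location eastus2
```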
If you have successfully deployed the prerequisite resources, you're now ready t
## Next steps
-This article described the basic steps needed to get started deploying the MedTech service.
-
-To learn about methods of deploying the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
-
-For an overview of the MedTech service device mapping, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-
-For an overview of the MedTech service FHIR destination mapping, see
+[Choose a deployment method for the MedTech service](deploy-new-choose.md)
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
+[Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Git Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/git-projects.md
description: Learn about the MedTech service open-source software library for in
+ Last updated 04/28/2023 # Open-source projects
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- Check out our open-source software (OSS) projects on GitHub, which provide source code and instructions to deploy services for various use cases with the MedTech service. > [!IMPORTANT]
Health Data Sync
## Next steps
-In this article, you learned about the open-source projects for the MedTech service.
-
-To learn about the different deployment methods for the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-choose-method.md)
+[Choose a deployment method for the MedTech service](deploy-choose-method.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
# How to configure the MedTech service metrics
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- In this article, learn how to configure the MedTech service metrics in the Azure portal. Also learn how to pin the MedTech service metrics tile to an Azure portal dashboard for later viewing. The MedTech service metrics can be used to help determine the health and performance of your MedTech service and can be useful for troubleshooting and for spotting patterns or trends in your MedTech service.
To learn how to create an Azure portal dashboard and pin tiles, see [Create a da
## Next steps
-In this article, you learned about how to configure the MedTech service metrics.
-
-To learn how to enable the MedTech service diagnostic settings to export logs and metrics to another location (for example: Azure Log Analytics workspace) for audit, backup, or troubleshooting, see
-
-> [!div class="nextstepaction"]
-> [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
+[How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
-(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Enable Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-enable-diagnostic-settings.md
# How to enable diagnostic settings for the MedTech service
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- In this article, learn how to enable diagnostic settings for the MedTech service to: * Create a diagnostic setting to export logs and metrics for audit, analysis, or troubleshooting of the MedTech service. * Use the Azure Log Analytics workspace to view the MedTech service logs.
-* Access the MedTech service pre-defined Azure Log Analytics queries.
+* Access the MedTech service predefined Azure Log Analytics queries.
## Create a diagnostic setting for the MedTech service
In this article, learn how to enable diagnostic settings for the MedTech service
8. The **Diagnostic settings** page will open, displaying your newly created diagnostic setting for your MedTech service. You'll have the ability to: 1. **Edit setting**: Edit or delete your saved MedTech service diagnostic setting.
- 2. **+ Add diagnostic setting**: Create more diagnostic settings for your MedTech service (for example: you may also want to send your MedTech service metrics to another destination like a Logs Analytics workspace).
+ 2. **+ Add diagnostic setting**: Create more diagnostic settings for your MedTech service (for example: you might also want to send your MedTech service metrics to another destination like a Logs Analytics workspace).
:::image type="content" source="media/how-to-enable-diagnostic-settings/view-and-edit-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings options." lightbox="media/how-to-enable-diagnostic-settings/view-and-edit-diagnostic-settings.png":::
If you choose to include your Log Analytics workspace as a destination option fo
:::image type="content" source="media/how-to-enable-diagnostic-settings/select-logs-button.png" alt-text="Screenshot of logs option." lightbox="media/how-to-enable-diagnostic-settings/select-logs-button.png":::
-2. Copy the below table query string into your Log Analytics workspace query area and select **Run**. Using the *AHDSMedTechDiagnosticLogs* table will provide you with all logs contained in the entire table for the selected **Time range** setting (the default value is **Last 24 hours**). The MedTech service provides five pre-defined queries that will be addressed in the article section titled [Accessing the MedTech service pre-defined Azure Log Analytics queries](#accessing-the-medtech-service-pre-defined-azure-log-analytics-queries).
+2. Copy the following query into your Log Analytics workspace query area and select **Run**. Querying the *AHDSMedTechDiagnosticLogs* table returns all the logs in the table for the selected **Time range** setting (the default value is **Last 24 hours**). The MedTech service provides five predefined queries that are addressed in the article section titled [Accessing the MedTech service predefined Azure Log Analytics queries](#accessing-the-medtech-service-predefined-azure-log-analytics-queries).
```Kusto
AHDSMedTechDiagnosticLogs
```
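You can also narrow the results with standard Kusto operators. Here's a minimal sketch that assumes only the standard *TimeGenerated* column; the sort order and row limit are illustrative:

```Kusto
AHDSMedTechDiagnosticLogs
| where TimeGenerated > ago(24h)
| sort by TimeGenerated desc
| take 50
```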
If you choose to include your Log Analytics workspace as a destination option fo
> > For assistance troubleshooting MedTech service errors, see [Troubleshoot errors using the MedTech service logs](troubleshoot-errors-logs.md).
-## Accessing the MedTech service pre-defined Azure Log Analytics queries
+## Accessing the MedTech service predefined Azure Log Analytics queries
-The MedTech service comes with pre-defined queries that can be used anytime in your Log Analytics workspace to filter and summarize your logs for more precise investigation. The queries can also be customized and saved/shared.
+The MedTech service comes with predefined queries that can be used anytime in your Log Analytics workspace to filter and summarize your logs for more precise investigation. The queries can also be customized and saved/shared.
-1. To access the pre-defined queries, select **Queries**, type *MedTech* in the **Search** area, select a pre-defined query by using a double-click, and select **Run** to execute the pre-defined query. In this example, we've selected *MedTech healthcheck exceptions*. You'll select a pre-defined query of your own choosing.
+1. To access the predefined queries, select **Queries**, type *MedTech* in the **Search** area, double-click a predefined query to select it, and then select **Run** to execute it. In this example, we've selected *MedTech healthcheck exceptions*. You can select any predefined query of your choosing.
> [!TIP]
- > You can click on each of the MedTech service pre-defined queries to see their description and access different options for running the query or placing it into the Log Analytics workspace query area.
+ > You can click on each of the MedTech service predefined queries to see their description and access different options for running the query or placing it into the Log Analytics workspace query area.
- :::image type="content" source="media/how-to-enable-diagnostic-settings/select-and-run-pre-defined-query.png" alt-text="Screenshot of searching, selecting, and running a MedTech service pre-defined query." lightbox="media/how-to-enable-diagnostic-settings/select-and-run-pre-defined-query.png":::
+ :::image type="content" source="media/how-to-enable-diagnostic-settings/select-and-run-pre-defined-query.png" alt-text="Screenshot of searching, selecting, and running a MedTech service predefined query." lightbox="media/how-to-enable-diagnostic-settings/select-and-run-pre-defined-query.png":::
-2. Multiple pre-defined queries can be selected. In this example, we've additionally selected *Log count per MedTech log or exception type*. You'll select another pre-defined query of your own choosing.
+2. Multiple predefined queries can be selected. In this example, we've additionally selected *Log count per MedTech log or exception type*. You'll select another predefined query of your own choosing.
- :::image type="content" source="media/how-to-enable-diagnostic-settings/select-and-run-additional-pre-defined-query.png" alt-text="Screenshot of searching, selecting, and running a MedTech service and additional pre-defined query." lightbox="media/how-to-enable-diagnostic-settings/select-and-run-additional-pre-defined-query.png":::
+ :::image type="content" source="media/how-to-enable-diagnostic-settings/select-and-run-additional-pre-defined-query.png" alt-text="Screenshot of searching, selecting, and running a MedTech service and additional predefined query." lightbox="media/how-to-enable-diagnostic-settings/select-and-run-additional-pre-defined-query.png":::
-3. Only the highlighted pre-defined query will be executed.
+3. Only the highlighted predefined query will be executed.
- :::image type="content" source="media/how-to-enable-diagnostic-settings/results-of-select-and-run-additional-pre-defined-query.png" alt-text="Screenshot of results of running a MedTech service and additional pre-defined query." lightbox="media/how-to-enable-diagnostic-settings/results-of-select-and-run-additional-pre-defined-query.png":::
+ :::image type="content" source="media/how-to-enable-diagnostic-settings/results-of-select-and-run-additional-pre-defined-query.png" alt-text="Screenshot of results of running a MedTech service and additional predefined query." lightbox="media/how-to-enable-diagnostic-settings/results-of-select-and-run-additional-pre-defined-query.png":::
> [!WARNING]
-> Any changes that you've made to the pre-defined queries are not saved and will have to be recreated if you leave your Log Analytics workspace without saving custom changes you've made to the pre-defined queries.
+> Any changes that you've made to the predefined queries are not saved and will have to be recreated if you leave your Log Analytics workspace without saving custom changes you've made to the predefined queries.
> > To learn how to save a query in Log Analytics, see [Save a query in Azure Monitor Log Analytics](../../azure-monitor/logs/save-query.md)
The MedTech service comes with pre-defined queries that can be used anytime in y
## Next steps
-In this article, you learned how to enable the diagnostics settings for the MedTech service and use the Log Analytics workspace to query and view the MedTech service logs.
-
-To learn about the MedTech service frequently asked questions (FAQs), see
+[Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
healthcare-apis How To Use Calculatedcontent Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculatedcontent-templates.md
Title: How to use CalculatedContent templates with the MedTech service device ma
description: Learn how to use CalculatedContent templates with the MedTech service device mapping. -+ Last updated 08/01/2023
# How to use CalculatedContent templates with the MedTech service device mapping
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- This article provides an overview of how to use CalculatedContent templates within a MedTech service device mapping. ## CalculatedContent template basics
The resulting normalized message will look like this after the normalization sta
## Next steps
-In this article, you learned how to use CalculatedContent templates with the MedTech service device mapping.
-
-To learn how to use the MedTech service custom functions, see
-
-> [!div class="nextstepaction"]
-> [How to use custom functions with the MedTech service device mapping](how-to-use-custom-functions.md)
-
-For an overview of the MedTech service FHIR destination mapping, see
-
-> [!div class="nextstepaction"]
-> [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md)
+[How to use custom functions with the MedTech service device mapping](how-to-use-custom-functions.md)
-For an overview of the MedTech service scenario-based mappings samples, see
+[Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md)
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
+[Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office, and is used with permission.
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
Title: How to use custom functions with the MedTech service device mapping - Azu
description: Learn how to use custom functions with MedTech service device mapping. -+ Last updated 08/01/2023
# How to use custom functions with the MedTech service device mapping
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- Many functions are available when using **JMESPath** as the expression language. Besides the built-in functions available as part of the [JMESPath specification](https://jmespath.org/specification.html#built-in-functions), many more custom functions may also be used. This article describes how to use the MedTech service-specific custom functions with the MedTech service [device mapping](overview-of-device-mapping.md). > [!TIP]
Examples:
## Next steps
-In this article, you learned how to use the MedTech service custom functions within the device mapping.
-
-For an overview of the MedTech service device mapping, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-
-For an overview of the MedTech service FHIR destination mapping, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
+[Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-For an overview of the MedTech service scenario-based mappings samples, see
+[Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
+[Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Iotjsonpathcontent Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iotjsonpathcontent-templates.md
Title: How to use IotJsonPathContent templates with the MedTech service device m
description: Learn how to use IotJsonPathContent templates with the MedTech service device mapping. -+ Last updated 08/01/2023
# How to use IotJsonPathContent templates with the MedTech service device mapping
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- This article provides an overview of how to use IotJsonPathContent templates within a MedTech service device mapping. ## IotJsonPathContent template basics
The resulting normalized message will look like this after the normalization sta
## Next steps
-In this article, you learned how to use IotJsonPathContent templates with the MedTech service device mapping.
-
-To deploy the MedTech service with device message routing enabled through an Azure IoT Hub, see
-
-> [!div class="nextstepaction"]
-> [Receive device messages through Azure IoT Hub](device-messages-through-iot-hub.md)
-
-For an overview of the MedTech service FHIR destination mapping, see
-
-> [!div class="nextstepaction"]
-> [Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md)
+[Receive device messages through Azure IoT Hub](device-messages-through-iot-hub.md)
-For an overview of the MedTech service scenario-based mappings samples, see
+[Overview of the FHIR destination mapping](overview-of-fhir-destination-mapping.md)
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
+[Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Mapping Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md
> [!IMPORTANT] > This feature is currently in Public Preview. See [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- In this article, learn how to use the MedTech service Mapping debugger. The Mapping debugger is a self-service tool that is used for creating, updating, and troubleshooting the MedTech service [device](overview-of-device-mapping.md) and [FHIR destination](overview-of-fhir-destination-mapping.md) mappings. The Mapping debugger enables you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations. > [!TIP]
For this troubleshooting example, we're using a test device message that is [mes
## Next steps
-In this article, you were provided with an overview and learned about how to use the Mapping debugger to edit and troubleshoot the MedTech service device and FHIR destination mappings.
-
-For an overview of the MedTech service device mapping, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-
-For an overview of the MedTech service FHIR destination mapping, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
--
-For an overview of the MedTech service scenario-based mappings samples, see
+[Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
+[Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
+[Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Monitoring And Health Checks Tabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-and-health-checks-tabs.md
# How to use the MedTech service monitoring and health checks tabs
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- In this article, learn how to use the MedTech service monitoring and health check tabs in the Azure portal. The monitoring and health check tabs provide access to crucial MedTech service metrics and health checks. These metrics and health checks can be used to assess the health and performance of your MedTech service and can be useful for spotting patterns or trends and for troubleshooting your MedTech service. ## Use the MedTech service monitoring tab
Metric category|Metric name|Metric description|
## Next steps
-In this article, you learned how to use the MedTech service monitoring and health check tab.
-
-To learn how to configure the MedTech service metrics, see
-
-> [!div class="nextstepaction"]
-> [How to configure the MedTech service metrics](how-to-configure-metrics.md)
-
-To learn how to enable the MedTech service diagnostic settings, see
+[How to configure the MedTech service metrics](how-to-configure-metrics.md)
-> [!div class="nextstepaction"]
-> [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
+[How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Overview Of Device Data Processing Stages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-data-processing-stages.md
# Overview of the MedTech service device data processing stages
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- This article provides an overview of the device data processing stages within the [MedTech service](overview.md). The MedTech service transforms device data into [FHIR Observations](https://www.hl7.org/fhir/observation.html) for persistence in the [FHIR service](../fhir/overview.md). The MedTech service device data processing follows these stages, in this order:
Persist is the final stage where the FHIR Observations from the transform stage
## Next steps
-In this article, you learned about the MedTech service device message processing stages.
-
-For an overview of the MedTech service deployment methods, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-choose-method.md)
-
-For an overview of the MedTech service device mapping, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-
-For an overview of the MedTech service FHIR destination mapping, see
+[Choose a deployment method for the MedTech service](deploy-choose-method.md)
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
+[Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-For an overview of the MedTech service scenario-based mappings samples, see
+[Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
+[Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Overview Of Device Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-device-mapping.md
Title: Overview of the MedTech service device mapping - Azure Health Data Servic
description: Learn about the MedTech service device mapping. -+ Last updated 08/01/2023
When the MedTech service is processing the device message, the templates in the
## Next steps
-In this article, you've been provided an overview of the MedTech service device mapping.
+[How to use CalculatedContent templates with the MedTech service device mapping](how-to-use-calculatedcontent-templates.md)
-To learn how to use CalculatedContent with the MedTech service device mapping, see
+[How to use IotJsonPathContent templates with the MedTech service device mapping](how-to-use-iotjsonpathcontent-templates.md)
-> [!div class="nextstepaction"]
-> [How to use CalculatedContent templates with the MedTech service device mapping](how-to-use-calculatedcontent-templates.md)
+[How to use custom functions with the MedTech service device mapping](how-to-use-custom-functions.md)
-To learn how to use IotJsonPathContent with the MedTech service device mapping, see
+[Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
-> [!div class="nextstepaction"]
-> [How to use IotJsonPathContent templates with the MedTech service device mapping](how-to-use-iotjsonpathcontent-templates.md)
+[Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
-To learn how to use custom functions with the MedTech service device mapping, see
-
-> [!div class="nextstepaction"]
-> [How to use custom functions with the MedTech service device mapping](how-to-use-custom-functions.md)
-
-For an overview of the MedTech service FHIR destination mapping, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
-
-For an overview of the MedTech service scenario-based mappings samples, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
-
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Overview Of Fhir Destination Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-fhir-destination-mapping.md
Title: Overview of the MedTech service FHIR destination mapping - Azure Health D
description: Learn about the MedTech service FHIR destination mapping. -+ Last updated 08/01/2023
# Overview of the MedTech service FHIR destination mapping
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- This article provides an overview of the MedTech service FHIR destination mapping. The MedTech service requires two types of [JSON](https://www.json.org/) mappings that are added to your MedTech service through the Azure portal or Azure Resource Manager API. The [device mapping](overview-of-device-mapping.md) is the first type and controls how values in the device data sent to the MedTech service are mapped to an internal, normalized data object. The device mapping contains expressions that the MedTech service uses to extract types, device identifiers, measurement date and time, and measurement value(s). The FHIR destination mapping is the second type and controls how the normalized data is mapped to [FHIR Observations](https://www.hl7.org/fhir/observation.html).
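As a hedged illustration of the shape of a device mapping (not an excerpt from this article), a minimal CollectionContent mapping with a single CalculatedContent template might look like the following; the payload fields `deviceId`, `measurementDateTime`, and `heartRate` are hypothetical:

```json
{
  "templateType": "CollectionContent",
  "template": [
    {
      "templateType": "CalculatedContent",
      "template": {
        "typeName": "heartrate",
        "typeMatchExpression": "$..[?(@heartRate)]",
        "deviceIdExpression": "$.deviceId",
        "timestampExpression": "$.measurementDateTime",
        "values": [
          {
            "required": true,
            "valueExpression": "$.heartRate",
            "valueName": "hr"
          }
        ]
      }
    }
  ]
}
```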
The resulting FHIR Observation will look like this after the transformation stag
## Next steps
-In this article, you've been provided an overview of the MedTech service FHIR destination mapping.
-
-For an overview of the MedTech service device mapping, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-
-To learn how to use CalculatedContent with the MedTech service device mapping, see
-
-> [!div class="nextstepaction"]
-> [How to use CalculatedContent templates with the MedTech service device mapping](how-to-use-calculatedcontent-templates.md)
-
-To learn how to use IotJsonPathContent with the MedTech service device mapping, see
-
-> [!div class="nextstepaction"]
-> [How to use IotJsonPathContent templates with the MedTech service device mapping](how-to-use-iotjsonpathcontent-templates.md)
+[Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-To learn how to use custom functions with the MedTech service device mapping, see
+[How to use CalculatedContent templates with the MedTech service device mapping](how-to-use-calculatedcontent-templates.md)
-> [!div class="nextstepaction"]
-> [How to use custom functions with the MedTech service device mapping](how-to-use-custom-functions.md)
+[How to use IotJsonPathContent templates with the MedTech service device mapping](how-to-use-iotjsonpathcontent-templates.md)
-For an overview of the MedTech service scenario-based mappings samples, see
+[How to use custom functions with the MedTech service device mapping](how-to-use-custom-functions.md)
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
+[Overview of the MedTech service scenario-based mappings samples](overview-of-samples.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Overview Of Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview-of-samples.md
# Overview of the MedTech service scenario-based mappings samples
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
-- The [MedTech service](overview.md) scenario-based [samples](https://github.com/Azure-Samples/azure-health-data-and-ai-samples/tree/main/samples/medtech-service-mappings) provide conforming and valid [device](overview-of-device-mapping.md) and [FHIR destination](overview-of-fhir-destination-mapping.md) mappings and test device messages. These samples can be used to help with the authoring and troubleshooting of your own MedTech service mappings. ## Sample resources
Each MedTech service scenario-based sample contains the following resources:
## Next steps
-In this article, you learned about the MedTech service scenario-based mappings samples.
-
-* To learn about the MedTech service, see [What is MedTech service?](overview.md)
+[What is MedTech service?](overview.md)
-* To learn about the MedTech service device data processing stages, see [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md).
+[Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-* To learn about the different deployment methods for the MedTech service, see [Choose a deployment method for the MedTech service](deploy-choose-method.md).
+[Choose a deployment method for the MedTech service](deploy-choose-method.md)
-* For an overview of the MedTech service device mapping, see [Overview of the MedTech service device mapping](overview-of-device-mapping.md).
+[Overview of the MedTech service device mapping](overview-of-device-mapping.md)
-* For an overview of the MedTech service FHIR destination mapping, see [Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md).
+[Overview of the MedTech service FHIR destination mapping](overview-of-fhir-destination-mapping.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
description: Learn about the MedTech service, its features, functions, integrati
-+ Last updated 10/19/2023
# What is the MedTech service?
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- The MedTech service is a Platform as a Service (PaaS) within the Azure Health Data Services. The MedTech service enables you to ingest device data, transform it into a unified FHIR format, and store it in an enterprise-scale, secure, and compliant cloud environment.  The MedTech service is built to help customers that are dealing with the challenge of gaining relevant insights from device data coming in from multiple and diverse sources. No matter the device or structure, the MedTech service normalizes that device data into a common format, allowing the end user to then easily capture trends, run analytics, and build Artificial Intelligence (AI) models. In the enterprise healthcare setting, the MedTech service is used in the context of remote patient monitoring, virtual health, and clinical trials.
The following Microsoft solutions can use MedTech service for extra functionalit
## Next steps
-In this article, you learned about the MedTech service and its capabilities.
-
-To learn about how the MedTech service processes device data, see
-
-> [!div class="nextstepaction"]
-> [Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-
-To learn about the different deployment methods for the MedTech service, see
-
-> [!div class="nextstepaction"]
-> [Choose a deployment method for the MedTech service](deploy-choose-method.md)
+[Overview of the MedTech service device data processing stages](overview-of-device-data-processing-stages.md)
-To learn about the MedTech service frequently asked questions (FAQs), see
+[Choose a deployment method for the MedTech service](deploy-choose-method.md)
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+[Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Troubleshoot Errors Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-deployment.md
# Troubleshoot MedTech service deployment errors
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- This article provides troubleshooting steps and fixes for MedTech service deployment errors. > [!TIP]
Here's a list of errors that can be found in the Azure Resource Manager (ARM) AP
## Next steps
-In this article, you learned how to troubleshoot and fix MedTech service deployment errors.
-
-To learn about the MedTech service frequently asked questions (FAQs), see
-
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+[Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Troubleshoot Errors Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors-logs.md
# Troubleshoot errors using the MedTech service logs
-> [!NOTE]
-> [Fast Healthcare Interoperability Resources (FHIR&#174;)](https://www.hl7.org/fhir/) is an open healthcare specification.
- This article provides troubleshooting steps and fixes for errors found in the MedTech service logs. > [!TIP]
The expression and line with the error are specified in the error message.
## Next steps
-In this article, you learned how to troubleshoot and fix errors using the MedTech service logs.
-
-To learn about the MedTech service frequently asked questions (FAQs), see
-
-> [!div class="nextstepaction"]
-> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+[Frequently asked questions about the MedTech service](frequently-asked-questions.md)
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-central Concepts Faq Scalability Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-scalability-availability.md
An incident that requires disaster recovery could range from a subset of service
### Applications created after April 2023
-IoT Central applications created after March 2023 initially have a single IoT hub. If the IoT hub becomes unavailable, the application becomes unavailable. However, IoT Central automatically scales the application and adds a new IoT hub for each 10,000 connected devices. If you require multiple IoT hubs for applications with fewer than 10,000 devices, submit a request to [IoT Central customer support](../../iot/iot-support-help.md?toc=%2Fazure%2Fiot-central%2Ftoc.json&bc=%2Fazure%2Fiot-central%2Fbreadcrumb%2Ftoc.json).
+IoT Central applications created after April 2023 initially have a single IoT hub. If the IoT hub becomes unavailable, the application becomes unavailable. However, IoT Central automatically scales the application and adds a new IoT hub for each 10,000 connected devices. If you require multiple IoT hubs for applications with fewer than 10,000 devices, submit a request to [IoT Central customer support](../../iot/iot-support-help.md?toc=%2Fazure%2Fiot-central%2Ftoc.json&bc=%2Fazure%2Fiot-central%2Fbreadcrumb%2Ftoc.json).
Use the `az iot central device manual-failover` command to check if your application currently uses a single IoT hub. This command returns an error if the application currently has a single IoT hub.
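For example, a manual failover check might look like the following sketch; the app ID and device ID are placeholders, and the command requires the azure-iot extension for the Azure CLI:

```azurecli
az iot central device manual-failover \
  --app-id 00000000-0000-0000-0000-000000000000 \
  --device-id mydevice
```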
iot-hub Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md
Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
key-vault Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/built-in-roles.md
Previously updated : 01/04/2023 Last updated : 11/06/2023
Azure Key Vault Managed HSM local role-based access control (RBAC) has several built-in roles. You can assign these roles to users, service principals, groups, and managed identities.
-To allow a principal to perform an operation, you must assign them a role that grants them permissions to perform that operations. All these roles and operations allow you to manage permissions only for *data plane* operations.
+To allow a principal to perform an operation, you must assign them a role that grants them the permissions to perform those operations. All these roles and operations allow you to manage permissions only for *data plane* operations. For *management plane* operations, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md) and [Secure access to your managed HSMs](secure-your-managed-hsm.md).
To manage control plane permissions for the Managed HSM resource, you must use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). Some examples of control plane operations are to create a new managed HSM, or to update, move, or delete a managed HSM.
key-vault Secure Your Managed Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/secure-your-managed-hsm.md
Previously updated : 11/14/2022 Last updated : 11/06/2023 # Customer intent: As a managed HSM administrator, I want to set access control and configure the Managed HSM, so that I can ensure it's secure and auditors can properly monitor all activities for this Managed HSM.
The following table summarizes the role assignments to teams and resources to ac
The three team roles need access to other resources along with managed HSM permissions. To deploy VMs (or the Web Apps feature of Azure App Service), developers and operators need `Contributor` access to those resource types. Auditors need read access to the Storage account where the managed HSM logs are stored.
-To assign management plane roles (Azure RBAC) you can use Azure portal or any of the other management interfaces such as Azure CLI or Azure PowerShell. To assign managed HSM data plane roles you must use Azure CLI.
+To assign management plane roles (Azure RBAC), you can use the Azure portal or any of the other management interfaces, such as the Azure CLI or Azure PowerShell. To assign Managed HSM data plane roles, you must use the Azure CLI. For more information on management plane roles, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). For more information on Managed HSM data plane roles, see [Local RBAC built-in roles for Managed HSM](built-in-roles.md).
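As a hedged example, a Managed HSM data plane role assignment in the Azure CLI might look like the following; the HSM name, assignee, and scope are placeholders:

```azurecli
az keyvault role assignment create --hsm-name ContosoMHSM \
  --role "Managed HSM Crypto User" \
  --assignee user@contoso.com \
  --scope /keys
```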
The Azure CLI snippets in this section are built with the following assumptions:
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
lab-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md
Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
lighthouse Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md
Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
load-balancer Load Balancer Custom Probe Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-custom-probe-overview.md
The protocol used by the health probe can be configured to one of the following
The interval value determines how frequently the health probe checks for a response from your backend pool instances. If the health probe fails, your backend pool instances are immediately marked as unhealthy. If the next health probe succeeds, Azure Load Balancer marks your backend pool instances as healthy again. The health probe checks the configured health probe port every 5 seconds by default, but the interval can be explicitly set to another value.
-In order to ensure a timely response is received, health probes have built-in timeouts. The following are the timeout durations for TCP and HTTP/S probes:
-* TCP probe timeout duration: 60 seconds
-* HTTP/S probe timeout duration: 30 seconds (60 seconds for establishing a connection)
+In order to ensure a timely response is received, HTTP/S health probes have built-in timeouts. The following are the timeout behaviors for TCP and HTTP/S probes:
+* TCP probe timeout duration: N/A (probes fail once the configured probe interval duration has passed and the next probe has been sent)
+* HTTP/S probe timeout duration: 30 seconds
-If the configured interval is longer than the above timeout period, the health probe will timeout and fail if no response is received during the timeout period. For example, if a TCP health probe is configured with a probe interval of 120 seconds (every 2 minutes), and no probe response is received within the first 60 seconds, the probe will have reached its timeout period and fail.
+For HTTP/S probes, if the configured interval is longer than the above timeout period, the health probe will timeout and fail if no response is received during the timeout period. For example, if an HTTP health probe is configured with a probe interval of 120 seconds (every 2 minutes), and no probe response is received within the first 30 seconds, the probe will have reached its timeout period and fail.
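As a hedged sketch, the probe interval is set when the probe is created; the resource names here are placeholders:

```azurecli
az network lb probe create \
  --resource-group myResourceGroup \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol Http \
  --port 80 \
  --path / \
  --interval 15
```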
## Design guidance
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
Use [Add-AzLoadBalancerInboundNatRuleConfig](/powershell/module/az.network/add-a
To save the configuration to the load balancer, use [Set-AzLoadBalancer](/powershell/module/az.network/set-azloadbalancer).
+Use [Get-AzLoadBalancerInboundNatRuleConfig](/powershell/module/az.network/get-azloadbalancerinboundnatruleconfig) to place the newly created inbound NAT rule information into a variable.
+
+Use [Get-AzNetworkInterface](/powershell/module/az.network/get-aznetworkinterface) to place the network interface information into a variable.
+
+Use [Set-AzNetworkInterfaceIpConfig](/powershell/module/az.network/set-aznetworkinterfaceipconfig) to add the newly created inbound NAT rule to the IP configuration of the network interface.
+
+To save the configuration to the network interface, use [Set-AzNetworkInterface](/powershell/module/az.network/set-aznetworkinterface).
++++
```azurepowershell
## Place the load balancer information into a variable for later use. ##
$slb = @{
$lb | Add-AzLoadBalancerInboundNatRuleConfig @rule
$lb | Set-AzLoadBalancer
+## Add the inbound NAT rule to a virtual machine
+
+$NatRule = @{
+ Name = 'MyInboundNATrule'
+ LoadBalancer = $lb
+}
+
+$NatRuleConfig = Get-AzLoadBalancerInboundNatRuleConfig @NatRule
+
+$NetworkInterface = @{
+ ResourceGroupName = 'myResourceGroup'
+ Name = 'MyNIC'
+ }
+
+ $NIC = Get-AzNetworkInterface @NetworkInterface
+
+ $IPconfig = @{
+ Name = 'Ipconfig'
+ LoadBalancerInboundNatRule = $NatRuleConfig
+}
+
+$NIC | Set-AzNetworkInterfaceIpConfig @IPconfig
+
+$NIC | Set-AzNetworkInterface
+
```
# [**CLI**](#tab/inbound-nat-rule-cli)
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
Install the following tools and versions for your specific operating system: Win
* Azure Functions Core Tools - 4.x version
- * [Windows](https://github.com/Azure/azure-functions-core-tools/releases/tag/4.0.4865): Use the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`.
+ * [Windows](https://github.com/Azure/azure-functions-core-tools/releases): Use the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`.
* [macOS](../azure-functions/functions-run-local.md?tabs=macos#install-the-azure-functions-core-tools) * [Linux](../azure-functions/functions-run-local.md?tabs=linux#install-the-azure-functions-core-tools)
logic-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md
Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023 ms.suite: integration
machine-learning Monitor Azure Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-azure-machine-learning.md
Previously updated : 11/16/2022 Last updated : 11/06/2023 # Monitor Azure Machine Learning
machine-learning Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md
Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
machine-learning How To Bulk Test Evaluate Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-bulk-test-evaluate-flow.md
In prompt flow, we provide multiple built-in evaluation methods to help you meas
| Classification Accuracy Evaluation | Accuracy | Measures the performance of a classification system by comparing its outputs to ground truth. | No | prediction, ground truth | in the range [0, 1]. | | QnA Relevance Scores Pairwise Evaluation | Score, win/lose | Assesses the quality of answers generated by a question answering system. It involves assigning relevance scores to each answer based on how well it matches the user question, comparing different answers to a baseline answer, and aggregating the results to produce metrics such as averaged win rates and relevance scores. | Yes | question, answer (no ground truth or context) | Score: 0-100, win/lose: 1/0 | | QnA Groundedness Evaluation | Groundedness | Measures how grounded the model's predicted answers are in the input source. Even if the LLM's responses are true, if they can't be verified against the source, they're ungrounded. | Yes | question, answer, context (no ground truth) | 1 to 5, with 1 being the worst and 5 being the best. |
-| QnA GPT Similarity Evaluation | GPT Similarity | Measures similarity between user-provided ground truth answers and the model predicted answer using GPT Model. | Yes | question, answer, ground truth (context not needed) | in the range [0, 1]. |
+| QnA GPT Similarity Evaluation | GPT Similarity | Measures similarity between user-provided ground truth answers and the model predicted answer using GPT Model. | Yes | question, answer, ground truth (context not needed) | 1 to 5, with 1 being the worst and 5 being the best. |
| QnA Relevance Evaluation | Relevance | Measures how relevant the model's predicted answers are to the questions asked. | Yes | question, answer, context (no ground truth) | 1 to 5, with 1 being the worst and 5 being the best. | | QnA Coherence Evaluation | Coherence | Measures the quality of all sentences in a model's predicted answer and how they fit together naturally. | Yes | question, answer (no ground truth or context) | 1 to 5, with 1 being the worst and 5 being the best. | | QnA Fluency Evaluation | Fluency | Measures how grammatically and linguistically correct the model's predicted answer is. | Yes | question, answer (no ground truth or context) | 1 to 5, with 1 being the worst and 5 being the best |
machine-learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/overview.md
+
+ Title: The overview of tools in prompt flow
+
+description: The overview of tools in prompt flow displays an index table for tools and the instructions for custom tool package creation and tool package usage.
+++++++ Last updated : 10/24/2023++
+# The overview of tools in prompt flow (preview)
+This table provides an index of tools in prompt flow. If existing tools can't meet your requirements, you can [develop your own custom tool and make a tool package](https://microsoft.github.io/promptflow/how-to-guides/develop-a-tool/create-and-use-tool-package.html).
+
+> [!IMPORTANT]
+> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+| Tool name | Description | Environment | Package Name |
+||--|-|--|
+| [Python](./python-tool.md) | Run Python code. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [LLM](./llm-tool.md) | Use Open AI's Large Language Model for text completion or chat. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Prompt](./prompt-tool.md) | Craft prompt using Jinja as the templating language. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Embedding](./embedding-tool.md) | Use Open AI's embedding model to create an embedding vector representing the input text. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Open Source LLM](./open-source-llm-tool.md) | Use an Open Source model from the Azure Model catalog, deployed to an Azure Machine Learning Online Endpoint for LLM Chat or Completion API calls. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Serp API](./serp-api-tool.md) | Use Serp API to obtain search results from a specific search engine. | Default | [promptflow-tools](https://pypi.org/project/promptflow-tools/) |
+| [Content Safety (Text)](./content-safety-text-tool.md) | Use Azure Content Safety to detect harmful content. | Default | [promptflow-contentsafety](https://pypi.org/project/promptflow-contentsafety/) |
+| [Faiss Index Lookup](./faiss-index-lookup-tool.md) | Search vector based query from the FAISS index file. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector DB Lookup](./vector-db-lookup-tool.md) | Search vector based query from existing Vector Database. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+| [Vector Index Lookup](./vector-index-lookup-tool.md) | Search text or vector based query from Azure Machine Learning Vector Index. | Default | [promptflow-vectordb](https://pypi.org/project/promptflow-vectordb/) |
+
+To discover more custom tools developed by the open-source community, see [more custom tools](https://microsoft.github.io/promptflow/integrations/tools/index.html).
+
+For tools that should be used in a custom environment, see [Custom tool package creation and usage](../how-to-custom-tool-package-creation-and-usage.md#prepare-runtime) to prepare the runtime. The tools are then displayed in the tool list.
+
machine-learning Transparency Note https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/transparency-note.md
+
+ Title: Transparency Note for Auto-Generate Prompt Variants in Prompt Flow
+
+description: Transparency Note for Auto-Generate Prompt Variants in Prompt Flow
+++++ Last updated : 10/20/2023+++
+# Transparency Note for Auto-Generate Prompt Variants in Prompt Flow
+
+## What is a Transparency Note?
+
+An AI system includes not only the technology, but also the people who use it, the people who are affected by it, and the environment in which it's deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, what its capabilities and limitations are, and how to achieve the best performance. Microsoft's Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system.
+
+Microsoft's Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see [Microsoft's AI principles](https://www.microsoft.com/ai/responsible-ai).
+
+## The basics of Auto-Generate Prompt Variants in Prompt Flow
+
+### Introduction
+
+Prompt engineering is at the center of building applications that use large language models. Microsoft's Prompt Flow offers rich capabilities to interactively edit, bulk test, and evaluate prompts with built-in flows to pick the best prompt. With the Auto-Generate Prompt Variants feature in Prompt Flow, we provide the ability to automatically generate variations of a user's base prompt with the help of large language models, and allow users to test them in Prompt Flow to find the prompt best suited to their model and use case.
+
+### Key terms
+
+| **Term** | **Definition** |
+| | |
+| Prompt flow | Prompt Flow offers rich capabilities to interactively edit prompts and bulk test them with built-in evaluation flows to pick the best prompt. More information available at [What is prompt flow](./overview-what-is-prompt-flow.md) |
+| Prompt engineering | The practice of crafting and refining input prompts to elicit more desirable responses from a large language model. |
+| Prompt variants | Different versions or modifications of a given input prompt designed to test or achieve varied responses from a large language model. |
+| Base prompt | The initial or primary prompt that serves as a starting point for eliciting responses from large language models. In this case, it's provided by the user and is modified to create prompt variants. |
+| System prompt | A predefined prompt generated by a system, typically to initiate a task or seek specific information. This is not visible but is used internally to generate prompt variants. |
+
+## Capabilities
+
+### System behavior
+
+The Auto-Generate Prompt Variants feature, as part of the Prompt Flow experience, provides the ability to automatically generate and easily assess prompt variations to quickly find the best prompt for your use case. This feature further empowers Prompt Flow's rich set of capabilities to interactively edit and evaluate prompts, with the goal of simplifying prompt engineering.
+
+When provided with the user's base prompt, the Auto-Generate Prompt Variants feature generates several variations using the generative power of Azure OpenAI models and an internal system prompt. While Azure OpenAI provides content management filters, we recommend verifying any generated prompts before using them in production scenarios.
+
+### Use cases
+
+#### Intended uses
+
+Auto-Generate Prompt Variants can be used in the following scenarios. The system's intended use is:
+
+**Generate new prompts from a provided base prompt**: The "Generate Variants" feature allows prompt flow users to automatically generate variants of their provided base prompt with the help of LLMs (large language models).
+
+#### Considerations when choosing a use case
+
+**Do not use Auto-Generate Prompt Variants for decisions that might have serious adverse impacts.**
+
+Auto-Generate Prompt Variants was not designed or tested to recommend items that require additional considerations related to accuracy, governance, policy, legal, or expert knowledge, as these often exist outside the scope of the usage patterns carried out by regular (non-expert) users. Examples of such use cases include medical diagnostics, banking or financial recommendations, hiring or job placement recommendations, or recommendations related to housing.
+
+## Limitations
+
+In the generation of prompt variants specifically, it is important to understand that while AI systems are incredibly valuable tools, they are **non-deterministic**. This means that perfect **accuracy** (the measure of how well system-generated events correspond to real events) of predictions is not possible. A good model will have high accuracy, but it will occasionally output incorrect predictions. Failure to understand this limitation can lead to over-reliance on the system and unmerited decisions that can impact stakeholders.
+
+Furthermore, the prompt variants that are generated using LLMs are returned to the user as is. We encourage you to evaluate and compare these variants to determine the best prompt for a given scenario. There are **additional concerns** here because many of the evaluations offered in the Prompt Flow ecosystem also depend on LLMs, potentially further decreasing the utility of any given prompt. Manual review is strongly recommended.
+
+### Technical limitations, operational factors, and ranges
+
+As mentioned previously, the Auto-Generate Prompt Variants feature does not provide a measurement or evaluation of the provided prompt variants. It is strongly recommended that the user of this feature evaluates the suggested prompts in the way that best aligns with their specific use case and requirements.
+
+The Auto-Generate Prompt Variants feature is limited to generating a maximum of five variations from a given base prompt. If more are required, additional prompt variants can be generated after modifying the original base prompt.
+
+Auto-Generate Prompt Variants only supports Azure OpenAI models at this time. This limits users to the models supported by Azure OpenAI, and it limits content to what is acceptable under Azure OpenAI's content management policy. Uses outside of this policy are not supported by this feature.
+
+## System performance
+
+Performance for the Auto-Generate Prompt Variants feature is determined by the user's use case in each individual scenario; the feature itself does not evaluate each prompt or generate metrics.
+
+Operating in the Prompt Flow ecosystem, which focuses on prompt engineering, provides a strong story for error handling. Often, retrying the operation resolves an error. One error specific to this feature is response filtering by the Azure OpenAI resource for content or harm detection; this happens when content in the base prompt is determined to be against Azure OpenAI's content management policy. To resolve these errors, update the base prompt in accordance with the guidance at [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter).
+
+### Best practices for improving system performance
+
+To improve performance, there are several parameters that can be modified, depending on your use case and prompt requirements:
+
+- **Model**: The choice of models used with this feature impacts performance. As general guidance, the GPT-4 model is more powerful than GPT-3.5 and can thus be expected to generate more performant prompt variants.
+- **Number of Variants**: This parameter specifies how many variants to generate. A larger number of variants produces more prompts and therefore increases the likelihood of finding the best prompt for the use case.
+- **Base Prompt**: Since this tool generates variants of the provided base prompt, a strong base prompt can set up the tool to provide the maximum value for your case. Review the guidelines in [Prompt engineering techniques with Azure OpenAI](/azure/ai-services/openai/concepts/advanced-prompt-engineering).
+
+## Evaluation of Auto-Generate Prompt Variants
+
+### Evaluation methods
+
+The Auto-Generate Prompt Variants feature has been tested by the internal development team, targeting fit for purpose and harm mitigation.
+
+### Evaluation results
+
+Evaluation of harm management showed strong support for the combination of the system prompt and Azure OpenAI content management policies in actively safeguarding responses. Additional opportunities to minimize the chance and risk of harms can be found in the Microsoft documentation: [Azure OpenAI Service abuse monitoring](/azure/ai-services/openai/concepts/abuse-monitoring) and [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter).
+
+Fit-for-purpose testing supported the quality of generated prompts for creative purposes (poetry) and chat-bot agents. The reader is cautioned against drawing sweeping conclusions given the breadth of possible base prompts and potential use cases. As previously mentioned, use evaluations appropriate to the required use cases, and ensure a human reviewer is part of the process.
+
+## Evaluating and integrating Auto-Generate Prompt Variants for your use
+
+The performance of the Auto-Generate Prompt Variants feature will vary depending on the base prompt and the use case in which it's used. The real-world utility of the generated prompts depends on a combination of the many elements of the system in which each prompt is used.
+
+To ensure optimal performance in their scenarios, customers should conduct their own evaluations of the solutions they implement using Auto-Generate Prompt Variants. Customers should, generally, follow an evaluation process that:
+
+- Uses internal stakeholders to evaluate any generated prompt.
+- Uses internal stakeholders to evaluate results of any system which uses a generated prompt.
+- Incorporates KPIs (key performance indicators) and metrics monitoring to verify that the deployed service using generated prompts meets evaluation targets.
+
+## Learn more about responsible AI
+
+- [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai)
+- [Microsoft responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources)
+- [Microsoft Azure Learning courses on responsible AI](/training/paths/responsible-ai-business-principles/)
+
+## Learn more about Auto-Generate Prompt Variants
+ - [What is prompt flow](./overview-what-is-prompt-flow.md)
+
mariadb Howto Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-privatelink-cli.md
Note the public IP address of the VM. You will use this address to connect to
## Create an Azure Database for MariaDB server
-Create a Azure Database for MariaDB with the az mariadb server create command. Remember that the name of your MariaDB Server must be unique across Azure, so replace the placeholder value in brackets with your own unique value:
+Create an Azure Database for MariaDB with the az mariadb server create command. Remember that the name of your MariaDB Server must be unique across Azure, so replace the placeholder value in brackets with your own unique value:
```azurecli-interactive
# Create a server in the resource group
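# The rest of the command body is truncated in this change log; what follows is
# a minimal sketch. The server name, resource group, location, credentials, and
# SKU are illustrative placeholders, not values from the original article.
az mariadb server create \
  --resource-group myResourceGroup \
  --name mydemoserver \
  --location westeurope \
  --admin-user mylogin \
  --admin-password <server_admin_password> \
  --sku-name GP_Gen5_2
```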
mariadb Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md
Previously updated : 11/03/2023 Last updated : 11/06/2023 # Azure Policy built-in definitions for Azure Database for MariaDB
migrate Concepts Business Case Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-business-case-calculation.md
There are three types of migration strategies that you can choose while building
**Migration Strategy** | **Details** | **Assessment insights** | |
-**Azure recommended to minimize cost** | You can get the most cost efficient and compatible target recommendation in Azure across Azure IaaS and Azure PaaS targets. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - minimize cost from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from Azure App Service assessment is picked. <br/><br/>For general servers, sizing and cost comes from Azure VM assessment.
+**Azure recommended to minimize cost** | You can get the most cost efficient and compatible target recommendation in Azure across Azure IaaS and Azure PaaS targets. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - minimize cost from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from Azure App Service and Azure Kubernetes Service assessments depending on web app readiness and minimum cost. <br/><br/>For general servers, sizing and cost comes from Azure VM assessment.
**Migrate to all IaaS (Infrastructure as a Service)** | You can get a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Instance to SQL Server on Azure VM* report. <br/><br/>For general servers and servers hosting web apps, sizing and cost comes from Azure VM assessment.
-**Modernize to PaaS (Platform as a Service)** | You can get a PaaS preferred recommendation that means, the logic identifies workloads best fit for PaaS targets. <br/><br/>General servers are recommended with a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - *Modernize to PaaS* from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from Azure App Service assessment. For general servers, sizing and cost comes from Azure VM assessment.
+**Modernize to PaaS (Platform as a Service)** | You can get a PaaS-preferred recommendation, meaning the logic identifies workloads best fit for PaaS targets. <br/><br/>General servers are recommended with a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - *Modernize to PaaS* from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from Azure App Service and Azure Kubernetes Service assessments, with a preference for App Service. For general servers, sizing and cost comes from Azure VM assessment.
Although the Business case picks Azure recommendations from certain assessments, you won't be able to access the assessments directly. To deep dive into sizing, readiness, and Azure cost estimates, you can create respective assessments for the servers or workloads.
Cost components for running on-premises servers. For TCO calculations, an annual
| | | | | Compute | Compute (IaaS) | Azure VM, SQL Server on Azure VM | Compute cost (with AHUB) from Azure VM assessment, Compute cost (with AHUB) from Azure SQL assessment | | | Compute (PaaS) | Azure SQL MI or Azure SQL DB | Compute cost (with AHUB) from Azure SQL assessment. |
-| | Compute(PaaS) | Azure App Service | Plan cost from Azure App Service. |
+| | Compute(PaaS) | Azure App Service or Azure Kubernetes Service | Plan cost from Azure App Service and/or Node pool cost from Azure Kubernetes Service. |
| Storage | Storage (IaaS) | Azure VM - Managed disks, Server on Azure VM - Managed disk | Storage cost from Azure VM assessment/Azure SQL assessment. | | | Storage (PaaS) | Azure SQL MI or Azure SQL DB - Managed disks | Storage cost from Azure SQL assessment. | | | Storage (PaaS) | N/A | N/A |
Cost components for running on-premises servers. For TCO calculations, an annual
| | Maintenance | Maintenance | Defaulted to 15% of network hardware and software cost. | | Security | Server security cost | Defender for servers | For servers recommended for Azure VM, if they're ready to run Defender for Server, the Defender for server cost (Plan 2) per server for that region is added | | | SQL security cost | Defender for SQL | For SQL Server instances and DBs recommended for SQL Server on Azure VM, Azure SQL MI or Azure SQL DB, if they're ready to run Defender for SQL, the Defender for SQL per SQL Server instance for that region is added. For DBs recommended to Azure SQL DB, cost is rolled up at instance level. |
+| | Azure App Service security cost | Defender for App Service | For web apps recommended for App Service or App Service containers, the Defender for App Service cost for that region is added. |
| Facilities | Facilities & Infrastructure | DC Facilities - Lease and Power | Facilities cost isn't applicable for Azure cost. | | Labor | Labor | IT admin | DC admin cost = ((Number of virtual machines) / (Avg. # of virtual machines that can be managed by a full-time administrator)) * 730 * 12 |
migrate How To Build A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-build-a-business-case.md
There are three types of migration strategies that you can choose while building
**Migration Strategy** | **Details** | **Assessment insights** | |
-**Azure recommended to minimize cost** | You can get the most cost efficient and compatible target recommendation in Azure across Azure IaaS and Azure PaaS targets. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - minimize cost from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from Azure App Service assessment is picked.<br/><br/> For general servers, sizing and cost comes from Azure VM assessment.
+**Azure recommended to minimize cost** | You can get the most cost efficient and compatible target recommendation in Azure across Azure IaaS and Azure PaaS targets. | For SQL Servers, sizing and cost comes from the *Recommended report* with optimization strategy - minimize cost from Azure SQL assessment.<br/><br/> For web apps, sizing and cost comes from Azure App Service and Azure Kubernetes Service assessments depending on web app readiness and minimum cost.<br/><br/> For general servers, sizing and cost comes from Azure VM assessment.
**Migrate to all IaaS (Infrastructure as a Service)** | You can get a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Instance to SQL Server on Azure VM* report. <br/><br/> For general servers and servers hosting web apps, sizing and cost comes from Azure VM assessment.
-**Modernize to PaaS (Platform as a Service)** | You can get a PaaS preferred recommendation that means, the logic identifies workloads best fit for PaaS targets.<br/><br/> General servers are recommended with a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Instance to Azure SQL MI* report.<br/><br/> For web apps, sizing and cost comes from Azure App Service assessment. <br/><br/> For general servers, sizing and cost comes from Azure VM assessment.<br/><br/>
+**Modernize to PaaS (Platform as a Service)** | You can get a PaaS-preferred recommendation, meaning the logic identifies workloads best fit for PaaS targets.<br/><br/> General servers are recommended with a quick lift and shift recommendation to Azure IaaS. | For SQL Servers, sizing and cost comes from the *Instance to Azure SQL MI* report.<br/><br/> For web apps, sizing and cost comes from Azure App Service and Azure Kubernetes Service assessments, with a preference for App Service. <br/><br/> For general servers, sizing and cost comes from Azure VM assessment.<br/><br/>
> [!Note] > Although the Business case picks Azure recommendations from certain assessments, you won't be able to access the assessments directly. To deep dive into sizing, readiness and Azure cost estimates, you can create respective assessments for the servers or workloads.
migrate How To View A Business Case https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-view-a-business-case.md
This section contains the cost estimate by recommended target (Annual cost and a
- Azure SQL: - **Estimated cost by savings options**: This card includes compute cost for Azure SQL MI. It is recommended that all idle SQL instances are migrated via Pay as you go Dev/Test and others (Active and unknown) are migrated using 3 year Reserved Instance to maximize savings. - **Distribution by recommended service tier** : This card covers the recommended service tier.-- Azure App Service:
+- Azure App Service and App Service Container:
- **Estimated cost by savings options**: This card includes Azure App Service Plans cost. It is recommended that the web apps are migrated using 3 year Reserved Instance or 3 year Savings Plan to maximize savings. - **Distribution by recommended plans** : This card covers the recommended App Service plan.
+- Azure Kubernetes Service:
+ - **Estimated cost by savings options**: This card includes the cost of the recommended AKS node pools. It is recommended that the web apps are migrated using 3 year Reserved Instance or 3 year Savings Plan to maximize savings.
+ - **Distribution by recommended Node pool SKU**: This card covers the recommended SKUs for AKS node pools.
**On-premises tab**
migrate Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md
Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
migrate Troubleshoot Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-dependencies.md
The table below summarizes all errors encountered when gathering dependency data
|--|--|--| | **60001**:UnableToConnectToPhysicalServer | Either the [prerequisites](./migrate-support-matrix-physical.md) to connect to the server have not been met or there are network issues in connecting to the server, for instance some proxy settings.| - Ensure that the server meets the prerequisites and [port access requirements](./migrate-support-matrix-physical.md). <br/> - Add the IP addresses of the remote machines (discovered servers) to the WinRM TrustedHosts list on the Azure Migrate appliance, and retry the operation. This is to allow remote inbound connections on servers - _Windows:_ WinRM port 5985 (HTTP) and _Linux:_ SSH port 22 (TCP). <br/>- Ensure that you have chosen the correct authentication method on the appliance to connect to the server. <br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).| | **60002**:InvalidServerCredentials| Unable to connect to server. Either you have provided incorrect credentials on the appliance or the credentials previously provided have expired.| - Ensure that you have provided the correct credentials for the server on the appliance. You can check that by trying to connect to the server using those credentials.<br/> - If the credentials added are incorrect or have expired, edit the credentials on the appliance and revalidate the added servers. If the validation succeeds, the issue is resolved.<br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).|
-| **60005**:SSHOperationTimeout | The operation took longer than expected either due to network latency issues or due to the lack of latest updates on the server.| - Ensure that the impacted server has the latest kernel and OS updates installed.<br/>- Ensure that there is no network latency between the appliance and the server. It is recommended to have the appliance and source server on the same domain to avoid latency issues.<br/> - Connect to the impacted server from the appliance and run the commands [documented here](./troubleshoot-appliance.md) to check if they return null or empty data.<br/>- If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager). |
+| **60005**:SSHOperationTimeout | The operation took longer than expected either due to network latency issues or due to the lack of latest updates on the server.| - Ensure that the impacted server has the latest kernel and OS updates installed.<br/>- Ensure that there is no network latency between the appliance and the server. It is recommended to have the appliance and source server on the same domain to avoid latency issues.<br/> - Connect to the impacted server from the appliance and run the commands [documented here](./troubleshoot-appliance.md) to check if they return null or empty data.<br/>- If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager). |
| **9000**: VMware tools status on the server can't be detected. | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 are installed and running on the server. | | **9001**: VMware tools aren't installed on the server. | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 are installed and running on the server. | | **9002**: VMware tools aren't running on the server. | VMware tools might not be installed on the server or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.0 are installed and running on the server. |
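The TrustedHosts remediation for error **60001** in the table above can also be applied from the command line. The following is a minimal sketch, assuming an elevated PowerShell session on the Azure Migrate appliance; the IP addresses are illustrative placeholders:

````
# View the current WinRM TrustedHosts list on the appliance.
Get-Item WSMan:\localhost\Client\TrustedHosts

# Append the IP addresses of the discovered servers to the existing list.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "192.168.0.5,192.168.0.6" -Concatenate -Force
````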
The issue happens when the VMware discovery agent in appliance tries to download
The error usually appears for servers running Windows Server 2008 or lower. ### Remediation
-Install the required PowerShell version (2.0 or later) at this location on the server: ($SYSTEMROOT)\System32\WindowsPowershell\v1.0\powershell.exe. [Learn more](/powershell/scripting/windows-powershell/install/installing-windows-powershell) about how to install PowerShell in Windows Server.
+Install Windows PowerShell 5.1 on the server. Follow the instructions in [Install and Configure WMF 5.1](/previous-versions/powershell/scripting/windows-powershell/install/installing-windows-powershell) to install PowerShell in Windows Server.
+ After you install the required PowerShell version, verify if the error was resolved by following the steps on [this website](troubleshoot-dependencies.md#mitigation-verification).
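As a quick check, you can confirm the installed Windows PowerShell version on the server before and after installing WMF 5.1 (a minimal sketch):

````
# Returns the installed Windows PowerShell version; expect Major 5, Minor 1 after WMF 5.1 is installed.
$PSVersionTable.PSVersion
````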
After you install the required PowerShell version, verify if the error was resol
### Remediation Make sure that the user account provided in the appliance has access to the WMI namespace and subnamespaces. To set the access:
-1. Go to the server that's reporting this error.
-1. Search and select **Run** from the **Start** menu. In the **Run** dialog, enter **wmimgmt.msc** in the **Open** text box, and select **Enter**.
-1. The wmimgmt console opens where you can find **WMI Control (Local)** in the left pane. Right-click it, and select **Properties** from the menu.
-1. In the **WMI Control (Local) Properties** dialog, select the **Securities** tab.
-1. On the **Securities** tab, select **Security** to open the **Security for ROOT** dialog.
-1. Select **Advanced** to open the **Advanced Security Settings for Root** dialog.
-1. Select **Add** to open the **Permission Entry for Root** dialog.
-1. Click **Select a principal** to open the **Select Users, Computers, Service Accounts, or Groups** dialog.
+1. Go to the server that's reporting this error.
+1. Search and select **Run** from the **Start** menu. In the **Run** dialog, enter **wmimgmt.msc** in the **Open** text box, and select **Enter**.
+1. The wmimgmt console opens where you can find **WMI Control (Local)** in the left pane. Right-click it, and select **Properties** from the menu.
+1. In the **WMI Control (Local) Properties** dialog, select the **Securities** tab.
+1. On the **Securities** tab, select **Security** to open the **Security for ROOT** dialog.
+1. Select **Advanced** to open the **Advanced Security Settings for Root** dialog.
+1. Select **Add** to open the **Permission Entry for Root** dialog.
+1. Click **Select a principal** to open the **Select Users, Computers, Service Accounts, or Groups** dialog.
1. Select the usernames or groups you want to grant access to the WMI, and select **OK**. 1. Ensure you grant execute permissions, and select **This namespace and subnamespaces** in the **Applies to** dropdown list. 1. Select **Apply** to save the settings and close all dialogs.
After you use the mitigation steps for the preceding errors, verify if the mitig
1. For agentless dependency analysis, run the following commands to see if you get a successful output. - For Windows servers:
-
- ````
- Invoke-VMScript -VM $vm -ScriptText "powershell.exe 'Get-WmiObject Win32_Process'" -GuestCredential $credential
-
- Invoke-VMScript -VM $vm -ScriptText "powershell.exe 'netstat -ano -p tcp'" -GuestCredential $credential
- ````
+
+ ````
+ Invoke-VMScript -VM $vm -ScriptText "powershell.exe 'Get-WmiObject Win32_Process'" -GuestCredential $credential
+
+ Invoke-VMScript -VM $vm -ScriptText "powershell.exe 'netstat -ano -p tcp'" -GuestCredential $credential
+ ````
- For Linux servers: ```` Invoke-VMScript -VM $vm -ScriptText "ps -o pid,cmd | grep -v ]$" -GuestCredential $credential
-
+ Invoke-VMScript -VM $vm -ScriptText "netstat -atnp | awk '{print $4,$5,$7}'" -GuestCredential $credential ````
migrate Troubleshoot Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-discovery.md
This article helps you troubleshoot issues with ongoing server discovery, softwa
## Discovered servers not showing in the portal You get this error when you don't yet see the servers in the portal, and the discovery state is **Discovery in progress**.
-
+ ### Remediation If the servers don't appear in the portal, wait for a few minutes because it takes around 15 minutes for discovery of servers running on a vCenter server. It takes 2 minutes for each Hyper-V host added on the appliance for discovery of servers running on the host and 1 minute for discovery of each server added on the physical appliance.
If the preceding step doesn't work and you're discovering VMware servers:
## Server data not updating in the portal
-You get this error if the discovered servers don't appear in the portal or if the server data is outdated.
+You get this error if the discovered servers don't appear in the portal or if the server data is outdated.
### Remediation
The table below summarizes all errors encountered when gathering software invent
|--|--|--| | **60001**:UnableToConnectToPhysicalServer | Either the [prerequisites](./migrate-support-matrix-physical.md) to connect to the server have not been met or there are network issues in connecting to the server, for instance some proxy settings.| - Ensure that the server meets the prerequisites and [port access requirements](./migrate-support-matrix-physical.md). <br/> - Add the IP addresses of the remote machines (discovered servers) to the WinRM TrustedHosts list on the Azure Migrate appliance, and retry the operation. This is to allow remote inbound connections on servers - _Windows:_ WinRM port 5985 (HTTP) and _Linux:_ SSH port 22 (TCP). <br/>- Ensure that you have chosen the correct authentication method on the appliance to connect to the server. <br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).| | **60002**:InvalidServerCredentials| Unable to connect to server. Either you have provided incorrect credentials on the appliance or the credentials previously provided have expired.| - Ensure that you have provided the correct credentials for the server on the appliance. You can check that by trying to connect to the server using those credentials.<br/> - If the credentials added are incorrect or have expired, edit the credentials on the appliance and revalidate the added servers. If the validation succeeds, the issue is resolved.<br/> - If the issue persists, submit a Microsoft support case, providing the appliance machine ID (available in the footer of the appliance configuration manager).|
-| **60005**:SSHOperationTimeout | The operation took longer than expected either due to network latency issues or due to the lack of latest updates on the server.| - Ensure that the impacted server has the latest kernel and OS updates installed.<br/>- Ensure that there is no network latency between the appliance and the server. It is recommended to have the appliance and source server on the same domain to avoid latency issues.<br/> - Connect to the impacted server from the appliance and run the commands [documented here](./troubleshoot-appliance.md) to check if they return null or empty data.<br/>- If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager). |
+| **60005**:SSHOperationTimeout | The operation took longer than expected either due to network latency issues or due to the lack of latest updates on the server.| - Ensure that the impacted server has the latest kernel and OS updates installed.<br/>- Ensure that there is no network latency between the appliance and the server. It is recommended to have the appliance and source server on the same domain to avoid latency issues.<br/> - Connect to the impacted server from the appliance and run the commands [documented here](./troubleshoot-appliance.md) to check if they return null or empty data.<br/>- If the issue persists, submit a Microsoft support case providing the appliance machine ID (available in the footer of the appliance configuration manager). |
| **9000**: VMware tools status on the server can't be detected. | VMware tools might not be installed on the server, or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 are installed and running on the server. | | **9001**: VMware tools aren't installed on the server. | VMware tools might not be installed on the server, or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.1 are installed and running on the server. | | **9002**: VMware tools aren't running on the server. | VMware tools might not be installed on the server, or the installed version is corrupted. | Ensure that VMware tools later than version 10.2.0 are installed and running on the server. |
The issue happens when the VMware discovery agent in the appliance tries to down
### Remediation - You can test TCP connectivity to the ESXi host _(name provided in the error message)_ on port 443 (required to be open on ESXi hosts to pull dependency data) from the appliance by opening PowerShell on the appliance server and running the following command:
-
+ 
```` Test-NetConnection -ComputerName <IP address of the ESXi host> -Port 443 ````
The issue happens when the VMware discovery agent in the appliance tries to down
The error usually appears for servers running Windows Server 2008 or lower. ### Remediation
-Install the required PowerShell version (2.0 or later) at this location on the server: ($SYSTEMROOT)\System32\WindowsPowershell\v1.0\powershell.exe. [Learn more](/powershell/scripting/windows-powershell/install/installing-windows-powershell) about how to install PowerShell in Windows Server.
+Install Windows PowerShell 5.1 on the server. Follow the instructions in [Install and Configure WMF 5.1](/previous-versions/powershell/scripting/windows-powershell/install/installing-windows-powershell) to install PowerShell in Windows Server.
After you install the required PowerShell version, verify if the error was resolved by following the steps on [this website](troubleshoot-discovery.md#mitigation-verification).
After you install the required PowerShell version, verify if the error was resol
### Remediation Make sure that the user account provided in the appliance has access to WMI namespace and subnamespaces. To set the access:
-1. Go to the server that's reporting this error.
+1. Go to the server that's reporting this error.
1. Search and select **Run** from the **Start** menu. In the **Run** dialog, enter **wmimgmt.msc** in the **Open** text box and select **Enter**. 1. The wmimgmt console opens where you can find **WMI Control (Local)** in the left pane. Right-click it, and select **Properties** from the menu. 1. In the **WMI Control (Local) Properties** dialog, select the **Securities** tab. 1. Select **Security** to open the **Security for ROOT** dialog.
-1. Select **Advanced** to open the **Advanced Security Settings for Root** dialog.
+1. Select **Advanced** to open the **Advanced Security Settings for Root** dialog.
1. Select **Add** to open the **Permission Entry for Root** dialog. 1. Click **Select a principal** to open the **Select Users, Computers, Service Accounts or Groups** dialog. 1. Select the usernames or groups you want to grant access to the WMI, and select **OK**.
After you use the mitigation steps for the preceding errors, verify if the mitig
- For Windows servers:
- ````
+ ````
Invoke-VMScript -VM $vm -ScriptText "powershell.exe 'Get-WMIObject win32_operatingsystem'" -GuestCredential $credential Invoke-VMScript -VM $vm -ScriptText "powershell.exe Get-WindowsFeature" -GuestCredential $credential
- ````
+ ````
- For Linux servers: ```` Invoke-VMScript -VM $vm -ScriptText "ls" -GuestCredential $credential
For Windows servers:
$Server = New-PSSession -ComputerName <IPAddress of Server> -Credential <user_name> ```` and input the server credentials in the prompt.
-
+ 2. Run the following commands to validate for software inventory to see if you get a successful output: ```` Invoke-Command -Session $Server -ScriptBlock {Get-WMIObject win32_operatingsystem}
Typical SQL discovery errors are summarized in the following table.
| **Error** | **Cause** | **Action** | **Guide** |--|--|--|--| |**30000**: Credentials associated with this SQL server didn't work.|Either manually associated credentials are invalid or auto-associated credentials can no longer access the SQL server.|Add credentials for SQL Server on the appliance and wait until the next SQL discovery cycle or force refresh.| - |
-|**30001**: Unable to connect to SQL Server from the appliance.|1. The appliance doesn't have a network line of sight to SQL Server.<br/>2. The firewall is blocking the connection between SQL Server and the appliance.|1. Make SQL Server reachable from the appliance.<br/>2. Allow incoming connections from the appliance to SQL Server.| - |
+|**30001**: Unable to connect to SQL Server from the appliance.|1. The appliance doesn't have a network line of sight to SQL Server.<br/>2. The firewall is blocking the connection between SQL Server and the appliance.|1. Make SQL Server reachable from the appliance.<br/>2. Allow incoming connections from the appliance to SQL Server.| - |
|**30003**: Certificate isn't trusted.|A trusted certificate isn't installed on the computer running SQL Server.|Set up a trusted certificate on the server. [Learn more](/troubleshoot/sql/connect/error-message-when-you-connect).| [View](/troubleshoot/sql/connect/error-message-when-you-connect) | |**30004**: Insufficient permissions.|This error could occur because of the lack of permissions required to scan SQL Server instances. |Grant the sysadmin role to the credentials/ account provided on the appliance for discovering SQL Server instances and databases. [Learn more](/sql/t-sql/statements/grant-server-permissions-transact-sql).| [View](/sql/t-sql/statements/grant-server-permissions-transact-sql) | |**30005**: SQL Server login failed to connect because of a problem with its default master database.|Either the database itself is invalid or the login lacks CONNECT permission on the database.|Use ALTER LOGIN to set the default database to master database.<br/>Grant the sysadmin role to the credentials/ account provided on the appliance for discovering SQL Server instances and databases. [Learn more](/sql/relational-databases/errors-events/mssqlserver-4064-database-engine-error).| [View](/sql/relational-databases/errors-events/mssqlserver-4064-database-engine-error) |
migrate Tutorial App Containerization Aspnet App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-app-service.md
Before you start this tutorial, you should:
**Requirement** | **Details** | **Identify a machine on which to install the tool** | You need a Windows machine on which to install and run the Azure Migrate App Containerization tool. The Windows machine could run a server (Windows Server 2016 or later) or client (Windows 10) operating system. (The tool can run on your desktop.) <br/><br/> The Windows machine running the tool should have network connectivity to the servers or virtual machines hosting the ASP.NET applications that you'll containerize.<br/><br/> Ensure that 6 GB is available on the Windows machine running the Azure Migrate App Containerization tool. This space is for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/> <br/>If the Microsoft Web Deployment tool isn't already installed on the machine running the App Containerization tool and the application server, install it. You can [download the tool](https://aka.ms/webdeploy3.6).
-**Application servers** | Enable PowerShell remoting on the application servers: sign in to the application server and follow [these instructions to turn on PowerShell remoting](/powershell/module/microsoft.powershell.core/enable-psremoting). <br/><br/> Ensure that PowerShell 5.1 is installed on the application server. Follow the instructions [here to download and install PowerShell 5.1](/powershell/scripting/windows-powershell/wmf/setup/install-configure) on the application server. <br/><br/> If the Microsoft Web Deployment tool isn't already installed on the machine running the App Containerization tool and the application server, install it. You can [download the tool](https://aka.ms/webdeploy3.6).
**Application servers** | Enable PowerShell remoting on the application servers: sign in to the application server and follow [these instructions to turn on PowerShell remoting](/powershell/module/microsoft.powershell.core/enable-psremoting). <br/><br/> Ensure that PowerShell 5.1 is installed on the application server. Follow the instructions in [Install and Configure WMF 5.1](/previous-versions/powershell/scripting/windows-powershell/wmf/setup/install-configure) to install it on the application server. (A sketch of enabling remoting follows this table.) <br/><br/> If the Microsoft Web Deployment tool isn't already installed on the machine running the App Containerization tool and the application server, install it. You can [download the tool](https://aka.ms/webdeploy3.6).
**ASP.NET application** | The tool currently supports: <br> <ul><li> ASP.NET applications that use .NET Framework 3.5 or later.<br/> <li>Application servers that run Windows Server 2012 R2 or later. (Application servers should be running PowerShell 5.1.) <br/><li> Applications that run on Internet Information Services 7.5 or later.</ul> <br/><br/> The tool currently doesn't support: <br/> <ul><li>Applications that require Windows authentication. (AKS doesn't currently support gMSA.) <br/> <li> Applications that depend on other Windows services hosted outside of Internet Information Services.
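The PowerShell remoting prerequisite in the table above can be turned on and verified from an elevated PowerShell session on each application server. This is a minimal sketch, not a substitute for the linked instructions:

````
# Enable PowerShell remoting (starts WinRM and adds the required firewall rules).
Enable-PSRemoting -Force

# Confirm that the local WinRM service responds.
Test-WSMan
````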
If you just created a free Azure account, you're the owner of your subscription.
- **Connectivity.** The tool checks whether the Windows machine has internet access. If the machine uses a proxy: 1. Select **Set up proxy** to specify the proxy address (in the form IP address or FQDN) and listening port. 1. Specify credentials if the proxy needs authentication.
-
+ 1. If you've added proxy details or disabled the proxy or authentication, select **Save** to trigger the connectivity check again.
-
+ Only HTTP proxy is supported. - **Install updates.** The tool automatically checks for the latest updates and installs them. You can also [manually install the latest version of the tool](https://go.microsoft.com/fwlink/?linkid=2134571). - **Install Microsoft Web Deploy tool.** The tool checks whether the Microsoft Web Deployment tool is installed on the Windows machine that's running the Azure Migrate App Containerization tool.
The App Containerization tool connects remotely to the application servers by us
![Screenshot that shows the discovered ASP.NET application.](./media/tutorial-containerize-apps-aks/discovered-app-asp.png)
-6. Specify a name for the target container for each selected application. Specify the container name as <*name:tag*>, where *tag* is used for the container image. For example, you can specify the target container name as *appname:v1*.
+6. Specify a name for the target container for each selected application. Specify the container name as <*name:tag*>, where *tag* is used for the container image. For example, you can specify the target container name as *appname:v1*.
### Parameterize application configurations Parameterizing the configuration makes it available as a deploy-time parameter. Parameterization allows you to configure a setting when you deploy the application as opposed to having it hard coded to a specific value in the container image. For example, this option is useful for parameters like database connection strings.
Parameterizing the configuration makes it available as a deploy-time parameter.
3. Select the applications that you want to build images for, and then select **Build**. Selecting **Build** will start the container image build for each application. The tool monitors the build status and will let you continue to the next step when the build finishes.
-4. You can monitor the progress of the build by selecting **Build in Progress** under the status column. The link will become active a couple minutes after you trigger the build process.
+4. You can monitor the progress of the build by selecting **Build in Progress** under the status column. The link will become active a couple of minutes after you trigger the build process.
5. After the build is complete, select **Continue** to specify deployment settings:
After the container image is built, the next step is to deploy the application a
1. Select the Azure App Service plan that the application should use.
- If you don't have an App Service plan or want to create a new App Service plan to use, you can create one by selecting **Create new App Service plan**.
+ If you don't have an App Service plan or want to create a new App Service plan to use, you can create one by selecting **Create new App Service plan**.
1. Select **Continue** after you select the App Service plan. 2. If you parameterized application configurations, specify the secret store to use for the application. You can choose Azure Key Vault or App Service application settings to manage your application secrets. For more information, see [Configure connection strings](../app-service/configure-common.md#configure-connection-strings). - If you selected App Service application settings to manage your secrets, select **Continue**.
- - If you want to use an Azure key vault to manage your application secrets, specify the key vault that you want to use.
- - If you donΓÇÖt have an Azure key vault or want to create a new key vault, you can create one by selecting **Create new Azure Key Vault**.
+ - If you want to use an Azure key vault to manage your application secrets, specify the key vault that you want to use.
+ - If you don't have an Azure key vault or want to create a new key vault, you can create one by selecting **Create new Azure Key Vault**.
- The tool will automatically assign the necessary permissions for managing secrets via the key vault. 3. If you added more folders and selected the Azure file share option for persistent storage, specify the Azure file share to be used by the App Containerization tool during deployment. The tool will copy over the application folders that you configured for Azure Files and mount them on the application container during deployment.
- If you don't have an Azure file share or want to create a new Azure file share, you can create one by selecting **Create new Storage Account and file share**.
+ If you don't have an Azure file share or want to create a new Azure file share, you can create one by selecting **Create new Storage Account and file share**.
4. You now need to specify the deployment configuration for the application. Select **Configure** to customize the deployment for the application. In the configure step, you can provide these customizations: - **Name.** Specify a unique app name for the application. This name will be used to generate the application URL. It will also be used as a prefix for other resources created as part of the deployment.
migrate Tutorial App Containerization Aspnet Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-aspnet-kubernetes.md
Last updated 07/14/2023
# ASP.NET app containerization and migration to Azure Kubernetes Service
-In this article, you'll learn how to containerize ASP.NET applications and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesnΓÇÖt require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS).
+In this article, you'll learn how to containerize ASP.NET applications and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS).
The Azure Migrate: App Containerization tool currently supports:
Before you begin this tutorial, you should:
**Requirement** | **Details** | **Identify a machine to install the tool** | A Windows machine to install and run the Azure Migrate: App Containerization tool. The Windows machine could be a server (Windows Server 2016 or later) or client (Windows 10) operating system, meaning that the tool can run on your desktop as well. <br/><br/> The Windows machine running the tool should have network connectivity to the servers/virtual machines hosting the ASP.NET applications to be containerized.<br/><br/> Ensure that 6-GB space is available on the Windows machine running the Azure Migrate: App Containerization tool for storing application artifacts. <br/><br/> The Windows machine should have internet access, directly or via a proxy. <br/> <br/>Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
-**Application servers** | Enable PowerShell remoting on the application servers: Sign in to the application server and follow [these](/powershell/module/microsoft.powershell.core/enable-psremoting) instructions to turn on PowerShell remoting. <br/><br/> Ensure that PowerShell 5.1 is installed on the application server. Follow the instruction [here](/powershell/scripting/windows-powershell/wmf/setup/install-configure) to download and install PowerShell 5.1 on the application server. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
+**Application servers** | Enable PowerShell remoting on the application servers: Sign in to the application server and follow [these](/powershell/module/microsoft.powershell.core/enable-psremoting) instructions to turn on PowerShell remoting. <br/><br/> Ensure that PowerShell 5.1 is installed on the application server. Follow the instructions in [Install and Configure WMF 5.1](/previous-versions/powershell/scripting/windows-powershell/wmf/setup/install-configure) to download and install PowerShell 5.1 on the application server. <br/><br/> Install the Microsoft Web Deploy tool on the machine running the App Containerization helper tool and application server if not already installed. You can download the tool from [here](https://aka.ms/webdeploy3.6).
**ASP.NET application** | The tool currently supports:<br/> - ASP.NET applications using Microsoft .NET framework 3.5 or later. <br/>- Application servers running Windows Server 2012 R2 or later (application servers should be running PowerShell version 5.1). <br/>- Applications running on Internet Information Services (IIS) 7.5 or later. <br/><br/> The tool currently doesn't support: <br/>- Applications requiring Windows authentication (The App Containerization tool currently doesn't support gMSA). <br/>- Applications that depend on other Windows services hosted outside IIS.
If you just created a free Azure account, you're the owner of your subscription.
Alternately, you can open the app from the desktop by selecting the app shortcut.
-2. If you see a warning stating that says your connection isn’t private, select **Advanced** and choose to proceed to the website. This warning appears as the web interface uses a self-signed TLS/SSL certificate.
+2. If you see a warning that says your connection isn't private, select **Advanced** and choose to proceed to the website. This warning appears because the web interface uses a self-signed TLS/SSL certificate.
3. In the **Sign in** screen, use the local administrator account on the machine to sign in.
4. Select **ASP.NET web apps** as the type of application you want to containerize.
5. To specify the target Azure service, select **Containers on Azure Kubernetes Service**.
The App Containerization helper tool connects remotely to the application server
- For domain accounts (the user must be an administrator on the application server), prefix the username with the domain name in the format *<domain\username>*.
- You can run application discovery for up to five servers at a time.
-2. Select **Validate** to verify that the application server is reachable from the machine running the tool and that the credentials are valid. Upon successful validation, the status column will show the status as **Mapped**.
+2. Select **Validate** to verify that the application server is reachable from the machine running the tool and that the credentials are valid. Upon successful validation, the status column will show the status as **Mapped**.
![Screenshot for server IP and credentials.](./media/tutorial-containerize-apps-aks/discovery-credentials-asp.png)
The App Containerization helper tool connects remotely to the application server
4. Use the checkbox to select the applications to containerize.
-5. **Specify container name**: Specify a name for the target container for each selected application. The container name should be specified as <*name:tag*> where the tag is used for container image. For example, you can specify the target container name as *appname:v1*.
+5. **Specify container name**: Specify a name for the target container for each selected application. The container name should be specified as <*name:tag*> where the tag is used for container image. For example, you can specify the target container name as *appname:v1*.
### Parameterize application configurations

Parameterizing the configuration makes it available as a deployment-time parameter. This allows you to configure this setting while deploying the application, as opposed to having it hard-coded to a specific value in the container image. For example, this option is useful for parameters like database connection strings.
Parameterizing the configuration makes it available as a deployment time paramet
## Build container image

> [!Important]
-> If you're using AKS 1.23+, edit the scripts as shown below before building the docker image, to ensure a seamless migration.
+> If you're using AKS 1.23+, edit the scripts as shown below before building the docker image, to ensure a seamless migration.
>
> Change the script below
>
Parameterizing the configuration makes it available as a deployment time paramet
> # Run entrypoint script.
> COPY ./Entryscript.ps1 c:/Entryscript.ps1
> ENTRYPOINT powershell c:/Entryscript.ps1
-> ```
-> to
+> ```
+> to
>
> ```powershell
-> # Run entrypoint script.
+> # Run entrypoint script.
> COPY ["./Entryscript.ps1", "c:/Entryscript.ps1"]
> ENTRYPOINT ["powershell", "c:/Entryscript.ps1"]
> ```
To build a container image, follow these steps:
3. **Trigger build process**: Select the applications to build images for and select **Build**. Selecting **Build** starts the container image build for each application. The tool monitors the build status continuously and lets you proceed to the next step upon successful completion of the build.
-4. **Track build status**: You can also monitor progress of the build step by selecting the **Build in Progress** link under the status column. The link takes a couple of minutes to be active after you've triggered the build process.
+4. **Track build status**: You can also monitor progress of the build step by selecting the **Build in Progress** link under the status column. The link takes a couple of minutes to be active after you've triggered the build process.
5. Once the build is completed, select **Continue** to specify deployment settings.
Once the container image is built, the next step is to deploy the application as
- Run the following command in Azure CLI to attach the AKS cluster to the ACR.
  ```azurecli
  az aks update -n <cluster-name> -g <cluster-resource-group> --attach-acr <acr-name>
- ```
- - If you don’t have an AKS cluster or would like to create a new AKS cluster to deploy the application to, you can choose to create on from the tool by selecting **Create new AKS cluster**.
+ ```
+ - If you don't have an AKS cluster or would like to create a new AKS cluster to deploy the application to, you can choose to create one from the tool by selecting **Create new AKS cluster**.
 - The AKS cluster created using the tool will be created with a Windows node pool. The cluster will be configured to allow it to pull images from the Azure Container Registry that was created earlier (if the create new registry option was chosen).
 - Select **Continue** after selecting the AKS cluster.
2. **Specify secret store**: If you had opted to parameterize application configurations, then specify the secret store to be used for the application. You can choose Azure Key Vault or App Service application settings for managing your application secrets. [Learn more](../app-service/configure-common.md#configure-connection-strings)
 - If you've selected App Service application settings for managing secrets, then select **Continue**.
- - If you'd like to use an Azure Key Vault for managing your application secrets, then specify the Azure Key Vault that you'd want to use.
- - If you don’t have an Azure Key Vault or would like to create a new Key Vault, you can choose to create on from the tool by selecting **Create new Azure Key Vault**.
+ - If you'd like to use an Azure Key Vault for managing your application secrets, then specify the Azure Key Vault that you'd want to use.
+ - If you don't have an Azure Key Vault or would like to create a new Key Vault, you can choose to create one from the tool by selecting **Create new Azure Key Vault**.
 - The tool will automatically assign the necessary permissions for managing secrets through the Key Vault.
3. **Specify Azure file share**: If you had added more folders and selected the Persistent Volume option, then specify the Azure file share that should be used by the Azure Migrate: App Containerization tool during the deployment process. The tool will create new directories in this Azure file share to copy over the application folders that are configured for Persistent Volume storage. Once the application deployment is complete, the tool will clean up the Azure file share by deleting the directories it had created.
- - If you don't have an Azure file share or would like to create a new Azure file share, you can choose to create on from the tool by selecting **Create new Storage Account and file share**.
+ - If you don't have an Azure file share or would like to create a new Azure file share, you can choose to create one from the tool by selecting **Create new Storage Account and file share**.
4. **Application deployment configuration**: Once you've completed the steps above, you'll need to specify the deployment configuration for the application. Select **Configure** to customize the deployment for the application. In the configure step, you can provide the following customizations:
 - **Prefix string**: Specify a prefix string to use in the name for all resources that are created for the containerized application in the AKS cluster.
Once the container image is built, the next step is to deploy the application as
 - **Replica Sets**: Specify the number of application instances (pods) that should run for the containerized application.
 - **Load balancer type**: Select *External* if the containerized application should be reachable from public networks.
 - **Application Configuration**: For any application configurations that were parameterized, provide the values to use for the current deployment.
- - **Storage**: For any application folders that were configured for Persistent Volume storage, specify whether the volume should be shared across application instances or should be initialized individually with each instance in the container. By default, all application folders on Persistent Volumes are configured as shared.
+ - **Storage**: For any application folders that were configured for Persistent Volume storage, specify whether the volume should be shared across application instances or should be initialized individually with each instance in the container. By default, all application folders on Persistent Volumes are configured as shared.
 - Select **Apply** to save the deployment configuration.
 - Select **Continue** to deploy the application. Once the deployment completes, you can verify it as sketched below.
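A minimal sketch for verifying the deployment from the machine running the tool, assuming hypothetical cluster and resource group names (`myAKSCluster`, `myResourceGroup`):

```azurecli-interactive
# Fetch credentials for the target AKS cluster (hypothetical names).
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Check that the application pods are running and find the external IP of the load balancer service.
kubectl get pods
kubectl get services
```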
migrate Tutorial Assess Aspnet Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-aspnet-aks.md
+
+ Title: Assess ASP.NET web apps for migration to Azure Kubernetes Service
+description: Learn how to assess ASP.NET web apps for migration to Azure Kubernetes Service using Azure Migrate.
+ Last updated: 08/10/2023
+# Assess ASP.NET web apps for migration to Azure Kubernetes Service (preview)
+
+This article shows you how to assess ASP.NET web apps for migration to [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) using Azure Migrate. Creating an assessment for your ASP.NET web app provides key insights such as **app-readiness**, **target right-sizing**, and the **cost** of hosting and running these apps month over month.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+> * Choose a set of discovered ASP.NET web apps to assess for migration to AKS.
+> * Provide assessment configurations such as Azure Reserved Instances and target region.
+> * Get insights about the migration readiness of your assessed apps.
+> * Get insights on the AKS Node SKUs that can optimally host and run these apps.
+> * Get the estimated cost of running these apps on AKS.
+
+> [!NOTE]
+> Tutorials show you the simplest deployment path for a scenario so that you can quickly set up a proof-of-concept. Tutorials use default options where possible and don't show all possible settings and paths.
+
+## Prerequisites
+
+- Deploy and configure the Azure Migrate appliance in your [VMware](./tutorial-discover-vmware.md), [Hyper-V](./tutorial-discover-hyper-v.md) or [physical environment](./tutorial-discover-physical.md).
+- Check the [appliance requirements](./migrate-appliance.md#appliancevmware) and [URL access](./migrate-appliance.md#url-access) to be provided.
+- Follow [these steps](./how-to-discover-sql-existing-project.md) to discover ASP.NET web apps running on your environment.
+
+## Create an assessment
+
+1. On the **Servers, databases and web apps** page, select **Assess** and then select **Web apps on Azure**.
+
+ :::image type="content" source="./media/tutorial-assess-aspnet-aks/hub-assess-webapps.png" alt-text="Screenshot of selecting web app assessments.":::
+
+2. On the **Basics** tab, select the **Scenario** dropdown and select **Web apps to AKS**.
+
+ :::image type="content" source="./media/tutorial-assess-aspnet-aks/create-basics-scenario.png" alt-text="Screenshot of selecting the scenario for web app assessment.":::
+
+3. On the same tab, select **Edit** to modify assessment settings. See the table below to update the various assessment settings.
+
+ :::image type="content" source="./media/tutorial-assess-aspnet-aks/create-basics-settings.png" alt-text="Screenshot of changing the target settings for web app assessment.":::
+
+ | Setting | Possible Values | Comments |
+ | | | |
+ | Target Location | All locations supported by AKS | Used to generate regional cost for AKS. |
+ | Environment Type | Production <br> Dev/Test | Allows you to toggle between Pay-As-You-Go and Pay-As-You-Go Dev/Test [offers](https://azure.microsoft.com/support/legal/offer-details/). |
+ | Offer/Licensing program | Pay-As-You-Go <br> Enterprise Agreement | Allows you to toggle between Pay-As-You-Go and Enterprise Agreement [offers](https://azure.microsoft.com/support/legal/offer-details/). |
+ | Currency | All common currencies such as USD, INR, GBP, Euro | We generate the cost in the currency selected here. |
+ | Discount Percentage | Numeric decimal value | Use this to factor in any custom discount agreements with Microsoft. This is disabled if Savings options are selected. |
+ | EA subscription | Subscription ID | Select the subscription ID for which you have an Enterprise Agreement. |
+ | Savings options | 1 year reserved <br> 3 years reserved <br> 1 year savings plan <br> 3 years savings plan <br> None | Select a savings option if you have opted for [Reserved Instances](../cost-management-billing/reservations/save-compute-costs-reservations.md) or [Savings Plan](https://azure.microsoft.com/pricing/offers/savings-plan-compute/). |
+ | Category | All <br> Compute optimized <br> General purpose <br> GPU <br> High performance compute <br> Isolated <br> Memory optimized <br> Storage optimized | Selecting a particular SKU category will ensure we recommend the best AKS Node SKUs from that category. |
+ | AKS pricing tier | Standard | Pricing tier for AKS |
+
+4. After reviewing the assessment settings, select **Next**.
+
+5. Select the list of servers that host the web apps to be assessed. Provide a name for this group of servers, as well as for the assessment. You can also filter web apps discovered by a specific appliance, in case your project has more than one.
+
+ :::image type="content" source="./media/tutorial-assess-aspnet-aks/create-server-selection.png" alt-text="Screenshot of selecting servers containing the web apps to be assessed.":::
+
+6. Select **Next** to review the high-level assessment details. Select **Create assessment**.
+
+ :::image type="content" source="./media/tutorial-assess-aspnet-aks/create-review.png" alt-text="Screenshot of reviewing the high-level assessment details before creation.":::
+
+## View assessment insights
+
+The assessment can take around 10 minutes to complete.
+
+1. On the **Servers, databases and web apps** page, select the hyperlink next to **Web apps on Azure**.
+
+ :::image type="content" source="./media/tutorial-assess-aspnet-aks/hub-view-assessments.png" alt-text="Screenshot of clicking the hyperlink to see the list of web app assessments.":::
+
+2. On the **Assessments** page, use the search bar to filter for your assessment. It should be in the **Ready** state.
+
+ :::image type="content" source="./media/tutorial-assess-aspnet-aks/assessment-list.png" alt-text="Screenshot of filtering for the created assessment.":::
+
+ | Assessment State | Definition |
+ | | |
+ | Creating | The assessment creation is in progress. It takes around 10 minutes to complete. |
+ | Ready | The assessment has successfully been created. |
+ | Invalid | There was an error in the assessment computation. |
+
+### Assessment overview
+On the **Overview** page, you're provided with the following details:
+
+1. **Assessed entities**: This section provides the count of servers, web servers and web apps that are part of this assessment.
+
+2. **Migration readiness**: The assessed web apps will have one of the following statuses:
+
+ | Status | Definition |
+ | | |
+ | *Ready* | The web app is ready to be migrated |
+ | *Ready with conditions* | The web app needs minor changes to be ready for migration |
+ | *Not ready* | The web app needs major/breaking changes to be ready for migration |
+ | *Unknown* | The web app discovery data was incomplete or corrupted, so readiness couldn't be calculated |
+
+> [!NOTE]
+> Web apps that are either *Ready* or *Ready with conditions* are recommended for migration.
+
+3. **Monthly cost estimate**: This section provides the month over month cost projection of running your migration-ready web apps on AKS.
+
+You can update the **Settings** of the assessment after it's created. This triggers a recalculation.
+
+Selecting the **Export assessment** option exports the entire assessment to an Excel spreadsheet.
+
+### Assessment details
+
+#### Readiness
+
+On the **Readiness** tab, you see the list of web apps assessed. For each web app, you see the readiness status, the cluster and the recommended AKS Node SKU.
+Select the readiness condition of an app to see the migration warnings or issues. For apps that are *Ready with conditions*, you'll only see warnings. For apps that are *Not ready*, you'll see errors and potentially warnings.
+
+For each issue or warning, you're provided with the description, cause, and mitigation steps, along with useful documentation/blogs for reference.
+Selecting the recommended cluster for the app opens the **Cluster details** page. This page surfaces details such as the number of system and user node pools, the SKU for each node pool as well as the web apps recommended for this cluster. Typically, an assessment will only generate a single cluster. The number of clusters increases when the web apps in the assessment start hitting AKS cluster limits.
+#### Cost details
+
+On the **Cost details** tab, you see the breakdown of the monthly cost estimate distributed across AKS node pools. AKS pricing is intrinsically dependent on the node pool costs.
+
+For each node pool, you see the associated node SKU, node count, and the number of web apps recommended to be scheduled, along with the cost. By default, there will be at least two node pools (you can list them with the Azure CLI, as sketched after this list):
+
+1. *System*: Used to host critical system pods such as `CoreDNS`.
+2. *User*: As ASP.NET framework apps need a Windows node to run, the assessment recommends at least one additional Windows-based node pool.
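+
+A minimal sketch for listing a cluster's node pools with the Azure CLI, assuming hypothetical cluster and resource group names:
+
+```azurecli-interactive
+# List the system and user (Windows) node pools of an AKS cluster in table form.
+az aks nodepool list --resource-group myResourceGroup --cluster-name myAKSCluster --output table
+```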
+## Next steps
+
+- [Modernize](./tutorial-modernize-asp-net-aks.md) your ASP.NET web apps at-scale to Azure Kubernetes Service.
+- Optimize [Windows Dockerfiles](/virtualization/windowscontainers/manage-docker/optimize-windows-dockerfile?context=/azure/aks/context/aks-context).
+- [Review and implement best practices](../aks/best-practices.md) to build and manage apps on AKS.
migrate Tutorial Assess Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-assess-webapps.md
# Tutorial: Assess ASP.NET web apps for migration to Azure App Service

As part of your migration journey to Azure, you assess your on-premises workloads to measure cloud readiness, identify risks, and estimate costs and complexity.
-This article shows you how to assess discovered ASP.NET web apps running on IIS web servers in preparation for migration to Azure App Service, using the Azure Migrate: Discovery and assessment tool.
+This article shows you how to assess discovered ASP.NET web apps running on IIS web servers in preparation for migration to Azure App Service Code and Azure App Service Containers, using the Azure Migrate: Discovery and assessment tool. [Learn more](../app-service/overview.md) about Azure App Service.
In this tutorial, you learn how to:
In this tutorial, you learn how to:
> * Review an Azure App Service assessment

> [!NOTE]
-> Tutorials show the quickest path for trying out a scenario, and use default options where possible.
+> Tutorials show the quickest path for trying out a scenario and use default options where possible.
## Prerequisites
In this tutorial, you learn how to:
## Run an assessment
-Run an assessment as follows:
+To run an assessment, follow these steps:
1. On the **Get started** page > **Servers, databases and web apps**, select **Discover, assess and migrate**.
+2. On **Azure Migrate: Discovery and assessment**, select **Assess** and choose the assessment type as **Web apps on Azure**.
- :::image type="content" source="./media/tutorial-assess-webapps/discover-assess-migrate.png" alt-text="Overview page for Azure Migrate":::
+ :::image type="content" source="./media/tutorial-assess-webapps/assess-web-apps.png" alt-text="Screenshot of Overview page for Azure Migrate.":::
-2. On **Azure Migrate: Discovery and assessment**, select **Assess** and choose the assessment type as **Azure App Service**.
+3. In **Create assessment**, the assessment type is pre-selected as **Web apps on Azure** and the discovery source defaults to **Servers discovered from Azure Migrate appliance**. Select the **Scenario** as **Web apps to App Service**.
- :::image type="content" source="./media/tutorial-assess-webapps/assess.png" alt-text="Dropdown to choose assessment type as Azure App Service":::
-
-3. In **Create assessment**, you will be able to see the assessment type pre-selected as **Azure App Service** and the discovery source defaulted to **Servers discovered from Azure Migrate appliance**.
+ :::image type="content" source="./media/tutorial-assess-webapps/create-assess-scenario.png" alt-text="Screenshot of Create assessment page for Azure Migrate.":::
4. Select **Edit** to review the assessment properties.
- :::image type="content" source="./media/tutorial-assess-webapps/assess-webapps.png" alt-text="Edit button from where assessment properties can be customized":::
+ The following are included in Azure App Service assessment properties:
-5. Here's what's included in Azure App Service assessment properties:
+ :::image type="content" source="./media/tutorial-assess-webapps/settings.png" alt-text="Screenshot of assessment settings for Azure Migrate.":::
**Property** | **Details**
 | 
**Target location** | The Azure region to which you want to migrate. Azure App Service configuration and cost recommendations are based on the location that you specify.
- **Isolation required** | Select yes if you want your web apps to run in a private and dedicated environment in an Azure datacenter using Dv2-series VMs with faster processors, SSD storage, and double the memory to core ratio compared to Standard plans.
- - In **Savings options (compute)**, specify the savings option that you want the assessment to consider, helping to optimize your Azure compute cost.
- - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (1 year or 3 year reserved) are a good option for the most consistently running resources.
- - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (1 year or 3 year savings plan) provide additional flexibility and automated cost optimization. Ideally post migration, you could use Azure reservation and savings plan at the same time (reservation will be consumed first), but in the Azure Migrate assessments, you can only see cost estimates of 1 savings option at a time.
- - When you select 'None', the Azure compute cost is based on the Pay as you go rate or based on actual usage.
- - You need to select pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than 'None', the 'Discount (%)' setting is not applicable.
-
- **Option** | **Description**
- -- | --
- **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer.
- **Currency** | The billing currency for your account.
- **Discount (%)** | Any subscription-specific discounts you receive on top of the Azure offer. The default setting is 0%.
- **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Retain the default settings for reserved instances and discount (%) properties.
-
+ **Environment type** | The type of environment in which the web app runs.
+ **Offer** | The [Azure offer](https://azure.microsoft.com/support/legal/offer-details/) in which you're enrolled. The assessment estimates the cost for that offer.
+ **Currency** | The billing currency for your account.
+ **Discount (%)** | Any subscription-specific discounts that you receive on top of the Azure offer. The default setting is 0%.
+ **EA subscription** | Specifies that an Enterprise Agreement (EA) subscription is used for cost estimation. Takes into account the discount applicable to the subscription. <br/><br/> Retain the default settings for reserved instances and discount (%) properties.
+ **Savings options (Compute)** | The Savings option the assessment must consider.
+ **Isolation required** | Select **Yes** if you want your web apps to run in a private and dedicated environment in an Azure datacenter.
+
+ - In **Savings options (Compute)**, specify the savings option that you want the assessment to consider, helping to optimize your Azure Compute cost.
+ - [Azure reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) (one year or three year reserved) are a good option for the most consistently running resources.
+ - [Azure Savings Plan](../cost-management-billing/savings-plan/savings-plan-compute-overview.md) (one year or three year savings plan) provides additional flexibility and automated cost optimization. Ideally, post migration, you could use Azure reservation and savings plan at the same time (reservation is consumed first), but in the Azure Migrate assessments, you can only see cost estimates of one savings option at a time.
+ - When you select *None*, the Azure Compute cost is based on the Pay-as-you-go rate or based on actual usage.
+ - You need to select Pay-as-you-go in offer/licensing program to be able to use Reserved Instances or Azure Savings Plan. When you select any savings option other than *None*, the **Discount (%)** setting isn't applicable.
+
+1. Select **Save** if you made any changes.
1. In **Create assessment**, select **Next**.
-1. In **Select servers to assess** > **Assessment name** > specify a name for the assessment.
-1. In **Select or create a group**, select **Create New** and specify a group name.
-1. Select the appliance, and select the servers that you want to add to the group. Select **Next**.
-1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
-1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**. Refresh the tile data by selecting the **Refresh** option on top of the tile. Wait for the data to refresh.
+1. In **Select servers to assess** > **Assessment name**, specify a name for the assessment.
+1. In **Select or create a group**, select **Create New** and specify a group name. You can also use an existing group.
+1. Select the appliance and select the servers that you want to add to the group. Select **Next**.
- :::image type="content" source="./media/tutorial-assess-webapps/tile-refresh.png" alt-text="Refresh discovery and assessment tool data.":::
+ :::image type="content" source="./media/tutorial-assess-webapps/server-selection.png" alt-text="Screenshot of selected servers.":::
-1. Select the number next to Azure App Service assessment.
+1. In **Review + create assessment**, review the assessment details, and select **Create Assessment** to create the group and run the assessment.
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-navigation.png" alt-text="Navigation to created assessment.":::
+ :::image type="content" source="./media/tutorial-assess-webapps/create-app-review.png" alt-text="Screenshot of create assessment.":::
+1. After the assessment is created, go to **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**. Refresh the tile data by selecting the **Refresh** option on top of the tile. Wait for the data to refresh.
+1. Select the number next to **Web apps on Azure** in the **Assessment** section.
1. Select the assessment name that you wish to view.

## Review an assessment
-**To view an assessment**:
+To view an assessment, follow these steps:
-1. **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to the Azure App Service assessment.
-2. Select the assessment name, which you wish to view.
+1. In **Servers, databases and web apps** > **Azure Migrate: Discovery and assessment**, select the number next to the **Web apps on Azure** assessment.
+2. Select the assessment name that you wish to view.
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-summary.png" alt-text="App Service assessment overview.":::
+
+ :::image type="content" source="./media/tutorial-assess-webapps/overview.png" alt-text="Screenshot of Overview screen.":::
-3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.
+ The **Overview** screen contains three sections: Essentials, Assessed entities, and Migration scenario.
+
+ **Essentials**
+
+ The **Essentials** section displays the group the assessed entity belongs to, its status, the location, discovery source, and currency in US dollars.
+
+ **Assessed entities**
-#### Azure App Service readiness
+ This section displays the number of servers selected for the assessment, the number of Azure App Services in the selected servers, and the number of distinct Spring Boot app instances that were assessed.
-This indicates the distribution of the assessed web apps. You can drill down to understand the details around migration issues/warnings that you can remediate before migration to Azure App Service. [Learn More](concepts-azure-webapps-assessment-calculation.md).
-You can also view the recommended App Service SKU and plan for migrating to Azure App Service.
+ **Migration scenario**
-#### Azure App Service cost details
+ This section provides a pictorial representation of the number of apps that are ready, ready with conditions, and not ready. You can see two graphical representations, one for *All Web applications to App Service Code* and the other for *All Web applications to App Service Containers*. It also lists the number of apps that are ready to migrate and the estimated cost of migrating them.
-An [App Service plan](../app-service/overview-hosting-plans.md) carries a [charge](https://azure.microsoft.com/pricing/details/app-service/windows/) on the compute resources it uses.
+3. Review the assessment summary. You can also edit the assessment properties or recalculate the assessment.
### Review readiness
-1. Select **Azure App Service readiness**.
+Review the Readiness for the web apps by following these steps:
+
+1. In **Assessments**, select the name of the assessment that you want to view.
+1. Select **View more details** to see more details about each app and its instances. Review the Azure App Service Code and Azure App Service Containers readiness columns in the table for the assessed web apps:
+
- :::image type="content" source="./media/tutorial-assess-webapps/assessment-webapps-readiness.png" alt-text="Azure App Service readiness details.":::
+ :::image type="content" source="./media/tutorial-assess-webapps/code-readiness.png" alt-text="Screenshot of Azure App Service Code readiness.":::
-1. Review Azure App Service readiness column in table, for the assessed web apps:
1. If there are no compatibility issues found, the readiness is marked as **Ready** for the target deployment type.
- 1. If there are non-critical compatibility issues, such as degraded or unsupported features that do not block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance.
+ 1. If there are non-critical compatibility issues, such as degraded or unsupported features that don't block the migration to a specific target deployment type, the readiness is marked as **Ready with conditions** (hyperlinked) with **warning** details and recommended remediation guidance.
1. If there are any compatibility issues that may block the migration to a specific target deployment type, the readiness is marked as **Not ready** with **issue** details and recommended remediation guidance.
- 1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment could not compute the readiness for that web app.
-1. Review the recommended SKU for the web apps, which is determined as per the matrix below:
+ 1. If the discovery is still in progress or there are any discovery issues for a web app, the readiness is marked as **Unknown** as the assessment couldn't compute the readiness for that web app.
+ 1. If the assessment isn't up-to-date, the status shows as **Outdated**. Select the corresponding assessment and select **Recalculate assessment**. The assessment is recalculated and the Readiness overview screen is updated with the results of the recalculated assessments.
+1. Select the Readiness status to open the **Migration issues and warnings** pane with details of the cause of the issue and recommended action.
- **Isolation required** | **Reserved instance** | **App Service plan/ SKU**
- | |
- Yes | Yes | I1
- Yes | No | I1
- No | Yes | P1v3
- No | No | P1v2
+ :::image type="content" source="./media/tutorial-assess-webapps/code-check.png" alt-text="Screenshot of recommended actions.":::
- **Azure App Service readiness** | **Determine App Service SKU** | **Determine Cost estimates**
+
+1. Review the recommended SKU for the web apps, which is determined as per the matrix below:
+
+ **Readiness** | **Determine size estimate** | **Determine cost estimates**
 | | 
 Ready | Yes | Yes
 Ready with conditions | Yes | Yes
 Not ready | No | No
 Unknown | No | No
-1. Select the App Service plan link in the Azure App Service readiness table to see the App Service plan details such as compute resources and other web apps that are part of the same plan.
- ### Review cost estimates
-The assessment summary shows the estimated monthly costs for hosting you web apps in App Service. In App Service, you pay charges per App Service plan and not per web app. One or more apps can be configured to run on the same computing resources (or in the same App Service plan). The apps that you add into this App Service plan run on the compute resources defined by your App Service plan.
-To optimize cost, Azure Migrate assessment allocates multiple web apps to each recommended App Service plan. The number of web apps allocated to each plan instance is shown below.
-
-**App Service plan** | **Web apps per App Service plan**
- |
-I1 | 8
-P1v2 | 8
-P1v3 | 16
+The assessment summary shows the estimated monthly costs for hosting your web apps.
+Select the **Cost details** tab to view a monthly cost estimate depending on the SKUs.
## Next steps
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.
+## Update (November 2023)
+- Public Preview: Assess your ASP.NET web apps for migration to Azure Kubernetes Service (AKS). Using this feature, you get insights such as app readiness, cluster rightsizing and cost of running these web apps on AKS. [Learn more](tutorial-assess-aspnet-aks.md).
+- Public Preview: Assess your ASP.NET web apps for migration to Azure App Service Containers. [Learn more](tutorial-assess-webapps.md).
+- Public Preview: Get the total cost of ownership (TCO) comparison for your ASP.NET web apps running on AKS and App Service Containers in Azure Migrate Business Case. [Learn more](how-to-build-a-business-case.md).
+ ## Update (September 2023) - Azure Migrate now supports discovery and assessment of Spring Boot apps using the Azure Migrate: Discovery and assessment tool. [Learn more](how-to-create-azure-spring-apps-assessment.md).
mysql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-backup-restore.md
The Backup and Restore blade in the Azure portal provides a complete list of the
In Azure Database for MySQL, performing a restore creates a new server from the original server's backups. There are two types of restore available:
- Point-in-time restore: is available with either backup redundancy option and creates a new server in the same region as your original server.
-- Geo-restore: is available only if you configured your server for geo-redundant storage and it allows you to restore your server to either a geo-paired region or any other azure supported region where flexible server is available. Please note, feature of geo-restore to other regions is currently supported in public preview.
-
-> [!NOTE]
-> Universal Geo Restore (Geo-restore to other regions which is different from a paired region) in Azure Database for MySQL - Flexible Server is currently in **public preview**. Few regions that are currently not supported for universal geo-restore feature in public preview are "Brazil South", "USGov Virginia" and "West US 3".
+- Geo-restore: is available only if you configured your server for geo-redundant storage, and it allows you to restore your server to either a geo-paired region or any other Azure supported region where flexible server is available. Currently, Geo-restore is not supported in the `Brazil South`, `USGov Virginia`, and `West US 3` regions.
The estimated time for the recovery of the server depends on several factors:
The estimated time of recovery depends on several factors including the database
## Geo-restore
-You can restore a server to it's [geo-paired region](overview.md#azure-regions) where the service is available if you have configured your server for geo-redundant backups. Geo-restore to other regions is supported currently in public preview.
+You can restore a server to its [geo-paired region](overview.md#azure-regions) where the service is available if you have configured your server for geo-redundant backups, or to any other Azure supported region where flexible server is available. The ability to restore to any non-paired Azure supported region (except `Brazil South`, `USGov Virginia`, and `West US 3`) is known as "Universal Geo-restore".
Geo-restore is the default recovery option when your server is unavailable because of an incident in the region where the server is hosted. If a large-scale incident in a region results in unavailability of your database application, you can restore a server from the geo-redundant backups to a server in any other region. Geo-restore utilizes the most recent backup of the server. There is a delay between when a backup is taken and when it is replicated to a different region. This delay can be up to an hour, so, if a disaster occurs, there can be up to one hour of data loss.
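A geo-restore can also be triggered from the Azure CLI; a minimal sketch, assuming hypothetical server names, resource group, and target region:

```azurecli-interactive
# Geo-restore the most recent geo-redundant backup of the source server
# to a new server in a different region (all names here are hypothetical).
az mysql flexible-server geo-restore \
    --resource-group myResourceGroup \
    --name myRestoredServer \
    --source-server mySourceServer \
    --location eastus
```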
-During geo-restore, the server configurations that can be changed include only security configuration (firewall rules and virtual network settings). Changing other server configurations such as compute, storage or pricing tier (Basic, General Purpose, or Business Critical) during geo-restore is not supported.
- Geo-restore can also be performed on a stopped server leveraging Azure CLI. Read [Restore Azure Database for MySQL - Flexible Server with Azure CLI](how-to-restore-server-cli.md) to learn more about geo-restoring a server with Azure CLI. The estimated time of recovery depends on several factors, including the database sizes, the transaction log size, the network bandwidth, and the total number of databases recovering in the same region at the same time.

> [!NOTE]
-> If you are geo-restoring a flexible server configured with zone redundant high availability, the restored server will be configured in the geo-paired region and the same zone as your primary server, and deployed as a single flexible server in a non-HA mode. Refer to [zone redundant high availability](concepts-high-availability.md) for flexible server.
-
+> If you are geo-restoring a flexible server configured with zone redundant high availability, the restored server will be configured in the geo-paired region and the same zone as your primary server and deployed as a single flexible server in a non-HA mode. Refer to [zone redundant high availability](concepts-high-availability.md) for flexible server.
> [!IMPORTANT]
> When primary region is down, one cannot create geo-redundant servers in the respective geo-paired region as storage cannot be provisioned in the primary region. One must wait for the primary region to be up to provision geo-redundant servers in the geo-paired region.
> With the primary region down one can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the Compute + Storage Configure Server settings in the restore portal experience and restore as a locally redundant server to ensure business continuity.
After a restore from either **latest restore point** or **custom restore point**
- **How do I backup my server?** By default, Azure Database for MySQL enables automated backups of your entire server (encompassing all databases created) with a default 7-day retention period. You can also trigger a manual backup using the On-Demand backup feature. The other way to manually take a backup is by using community tools such as mysqldump as documented [here](../concepts-migrate-dump-restore.md#dump-and-restore-using-mysqldump-utility) or mydumper as documented [here](../concepts-migrate-mydumper-myloader.md#create-a-backup-using-mydumper). If you wish to back up Azure Database for MySQL to a Blob storage, refer to our tech community blog [Backup Azure Database for MySQL to a Blob Storage](https://techcommunity.microsoft.com/t5/azure-database-for-mysql/backup-azure-database-for-mysql-to-a-blob-storage/ba-p/803830).
-
- **Can I configure automatic backups to be retained for long term?** No, currently we only support a maximum of 35 days of automated backup retention. You can take manual backups and use those for any long-term retention requirement.
-
- **What are the backup windows for my server? Can I customize it?** The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are taken once daily. Transaction log backups occur every five minutes. Backup windows are inherently managed by Azure and cannot be customized.
-
- **Are my backups encrypted?** All Azure Database for MySQL data, backups, and temporary files created during query execution are encrypted using AES 256-bit encryption. The storage encryption is always on and cannot be disabled.
-
- **Can I restore a single/few database(s)?** Restoring a single/few database(s) or tables is not supported. In case you want to restore specific databases, perform a Point in Time Restore and then extract the table(s) or database(s) needed.
-
- **Is my server available during the backup window?** Yes. Backups are online operations and are snapshot-based. The snapshot operation only takes a few seconds and doesn't interfere with production workloads, ensuring high availability of the server.
-
- **When setting up the maintenance window for the server, do we need to account for the backup window?** No, backups are triggered internally as part of the managed service and have no bearing on the Managed Maintenance Window.
-
- **Where are my automated backups stored and how do I manage their retention?** Azure Database for MySQL automatically creates server backups and stores them in user-configured, locally redundant storage or in geo-redundant storage. These backup files can't be exported. The default backup retention period is seven days. You can optionally configure the database backup from 1 to 35 days.
-
- **How can I validate my backups?** The best way to validate availability of successfully completed backups is to view the full automated backups taken within the retention period in the Backup and Restore blade. If a backup fails, it will not be listed in the available backups list, and our backup service will try every 20 minutes to take a backup until a successful backup is taken. These backup failures are due to heavy transactional production loads on the server.
-
- **Where can I see the backup usage?** In the Azure portal, under the Monitoring tab, Metrics section, you can find the [Backup Storage Used](./concepts-monitoring.md) metric, which can help you monitor the total backup usage (a CLI sketch for querying this metric follows this list).
-
- **What happens to my backups if I delete my server?** If you delete the server, all backups that belong to the server are also deleted and cannot be recovered. To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md).
-
- **How will I be charged and billed for my use of backups?** Azure Database for MySQL - Flexible Server provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month as per the [pricing model](https://azure.microsoft.com/pricing/details/mysql/server/). Backup storage billing is also governed by the backup retention period selected and the backup redundancy option chosen, apart from the transactional activity on the server, which impacts the total backup storage used directly.
-
- **How are backups retained for stopped servers?** No new backups are performed for stopped servers. All older backups (within the retention window) at the time of stopping the server are retained until the server is restarted, post which backup retention for the active server is governed by its backup retention window.
-
- **How will I be billed for backups for a stopped server?** While your server instance is stopped, you are charged for provisioned storage (including Provisioned IOPS) and backup storage (backups stored within your specified retention window). Free backup storage is limited to the size of your provisioned database and only applies to active servers.
+- **How is my backup data protected?**
+Azure Database for MySQL Flexible Server protects your backup data by blocking any operations that could lead to the loss of recovery points for the duration of the configured retention period. Backups taken during the retention period can only be read for the purpose of restoration and are deleted after the retention period. Additionally, all backups in Azure Database for MySQL Flexible Server are encrypted using AES 256-bit encryption for data stored at rest.
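+
+As referenced in the backup usage question above, a minimal sketch for querying the same metric from the CLI, assuming the metric name `backup_storage_used` and a hypothetical server resource ID:
+
+```azurecli-interactive
+# Query backup storage consumption for a flexible server over one-hour intervals.
+# The resource ID and metric name below are assumptions for illustration.
+az monitor metrics list \
+    --resource "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.DBforMySQL/flexibleServers/myServer" \
+    --metric backup_storage_used \
+    --interval PT1H
+```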
### Restore related questions
The estimated time for the recovery of the server depends on several factors:
mysql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md
If there's a database crash or node failure, the Flexible Server VM is restarted
For zone-redundant HA, while there is no major performance impact for read workloads across availability zones, there might be up to a 40 percent drop in write-query latency. The increase in write latency is due to synchronous replication across availability zones. The write latency impact is generally twice as high in zone-redundant HA compared to same-zone HA. For same-zone HA, because the primary and the standby replica are in the same zone, the replication latency and, consequently, the synchronous write latency are lower. In summary, if write latency is more critical for you compared to availability, you may want to choose same-zone HA, but if availability and resiliency of your data are more critical for you at the expense of a write-latency drop, you must choose zone-redundant HA. To measure the accurate impact of the latency drop in an HA setup, we recommend that you perform performance testing for your workload to make an informed decision.</br>

- **How does maintenance of my HA server happen?**</br>
-Planned events like scaling of compute and minor version upgrades happen on the primary and the standby at the same time. You can set the [scheduled maintenance window](./concepts-maintenance.md) for HA servers as you do for flexible servers. The amount of downtime will be the same as the downtime for the Azure Database for MySQL - Flexible Server when HA is disabled. Using the failover mechanism to reduce downtime for HA servers is on our roadmap and will be available soon. </br>
+Planned events like scaling of compute and minor version upgrades happen on the primary and the standby at the same time. You can set the [scheduled maintenance window](./concepts-maintenance.md) for HA servers as you do for flexible servers. The amount of downtime will be the same as the downtime for the Azure Database for MySQL - Flexible Server when HA is disabled. </br>
- **Can I do a point-in-time restore (PITR) of my HA server?**</br> You can do a [PITR](./concepts-backup-restore.md#point-in-time-restore) for an HA-enabled Azure Database for MySQL - Flexible Server to a new Azure Database for MySQL - Flexible Server that has HA disabled. If the source server was created with zone-redundant HA, you can enable zone-redundant HA or same-zone HA on the restored server later. If the source server was created with same-zone HA, you can enable only same-zone HA on the restored server.</br>
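A PITR can likewise be scripted with the Azure CLI; a minimal sketch, assuming hypothetical names and a UTC restore time:

```azurecli-interactive
# Restore the HA server to a new server (created with HA disabled) at a point in time.
az mysql flexible-server restore \
    --resource-group myResourceGroup \
    --name myRestoredServer \
    --source-server myHaServer \
    --restore-time "2023-11-01T02:30:00Z"
```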
mysql Concepts Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-limitations.md
mysql> SET sql_generate_invisible_primary_key=OFF;
### lower_case_table_names
-For MySQL version 5.7, default value is 1 in Azure Database for MySQL - Flexible Server. It is important to note that while it is possible to change the supported value to 2, reverting from 2 back to 1 is not permitted is not allowed. Please contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value.
-For [MySQl version 8.0+](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html) lower_case_table_names can only be configured when initializing the server. [Learn more](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html). Changing the lower_case_table_names setting after the server is initialized is prohibited. For MySQL version 8.0, default value is 1 in Azure Database for MySQL - Flexible Server. Supported value for MySQL version 8.0 are 1 and 2 Azure Database for MySQL - Flexible Server. Please contact our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance in changing the default value during server creation.
+In Azure Database for MySQL - Flexible Server, the default value for `lower_case_table_names` is 1 for MySQL version 5.7. If you need to adjust this setting, we recommend reaching out to our [support team](https://azure.microsoft.com/support/create-ticket/) for guidance. It's important to understand that once the parameter value has been changed to 2, reverting from 2 back to 1 isn't allowed.
+
+For MySQL version 8.0, note that changing the `lower_case_table_names` setting after the server is initialized is prohibited. [Learn more](https://dev.mysql.com/doc/refman/8.0/en/identifier-case-sensitivity.html). In Azure Database for MySQL - Flexible Server version 8.0, the default value for `lower_case_table_names` is 1. If you wish to modify this parameter to 2, we suggest creating a MySQL 5.7 server, contacting our [support team](https://azure.microsoft.com/support/create-ticket/) for assistance with the change, and later, if needed, upgrading the server to version 8.0. You can check the current value with the Azure CLI, as sketched below.
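+
+A minimal sketch, assuming hypothetical server and resource group names:
+
+```azurecli-interactive
+# Show the current lower_case_table_names value on a flexible server.
+az mysql flexible-server parameter show \
+    --resource-group myResourceGroup \
+    --server-name myServer \
+    --name lower_case_table_names
+```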
## Storage engines
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in Azure Database for MySQL -
> This article references the term slave, which Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+## November 2023
+
+- **Universal Geo Restore in Azure Database for MySQL - Flexible Server (General Availability)**
+The Universal Geo Restore feature allows you to restore a source server instance to an alternate region from the list of Azure supported regions where flexible server is [available](./overview.md#azure-regions). If a large-scale incident in a region results in the unavailability of your database application, then you can use this feature as a disaster recovery option to restore the server to an Azure supported target region that is different from the source server's region. [Learn more](concepts-backup-restore.md#restore)
+ ## October 2023 - **Addition of New vCore Options in Azure Database for MySQL - Flexible Server**
If you have questions about or suggestions for working with Azure Database for M
mysql Migrate External Mysql Import Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-external-mysql-import-cli.md
+
+ Title: "Migrate MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server using Azure MySQL Import CLI"
+description: This tutorial describes how to use the Azure MySQL Import CLI to migrate MySQL on-premises or VM workload to Azure Database for MySQL - Flexible Server.
+ Last updated: 07/03/2023
+ - mvc
+ - devx-track-azurecli
+ - mode-api
+ms.devlang: azurecli
+
+# Migrate MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server using Azure MySQL Import CLI
+
+Azure MySQL Import enables you to migrate your MySQL on-premises or Virtual Machine (VM) workload seamlessly to Azure Database for MySQL - Flexible Server. It uses a user-provided physical backup file and restores the source server's physical data files to the target server, offering a simple and fast migration path. After the MySQL Import operation, you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows.
+
+Based on user inputs, it provisions your target Flexible Server and then restores the user-provided physical backup of the source server, stored in the Azure Blob storage account, to the target Flexible Server instance.
+
+This tutorial shows how to use the Azure MySQL Import CLI command to migrate your MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server.
+
+## Launch Azure Cloud Shell
+
+The [Azure Cloud Shell](../../cloud-shell/overview.md) is a free interactive shell that you can use to run the steps in this article. It has common Azure tools preinstalled and configured to use with your account.
+
+As the feature is currently in private preview, this tutorial requires you to install the Azure Edge Build and use the CLI locally; see [Install Azure Edge Build CLI](https://github.com/Azure/azure-cli#edge-builds).
+
+## Setup
+
+You must sign in to your account using the [az login](/cli/azure/reference-index#az-login) command. Note the **id** property, which refers to your Azure account's **Subscription ID**.
+
+```azurecli-interactive
+az login
+```
+
+Select the specific subscription under your account where you want to deploy the target Flexible Server using the [az account set](/cli/azure/account#az-account-set) command. Note the **id** value from the **az login** output to use as the value for the **subscription** argument in the command. To get all your subscriptions, use [az account list](/cli/azure/account#az-account-list).
+
+```azurecli-interactive
+az account set --subscription <subscription id>
+```
+
+## Prerequisites
+
+* Source server should have the following parameters:
+ * Lower_case_table_names = 1
+ * Innodb_file_per_table = ON
+ * System tablespace name should be ibdata1.
+ * System tablespace size should be greater than or equal to 12 MB. (MySQL Default)
+ * Innodb_page_size = 16384 (MySQL Default)
+ * Only INNODB engine is supported.
+* Take a physical backup of your MySQL workload using Percona XtraBackup
+Follow these steps to use Percona XtraBackup to take a full backup:
+ * Install Percona XtraBackup on the on-premises or VM workload, see [Installing Percona XtraBackup 2.4]( https://docs.percona.com/percona-xtrabackup/2.4/installation.html).
+ * For instructions for taking a Full backup with Percona XtraBackup 2.4, see [Full backup]( https://docs.percona.com/percona-xtrabackup/2.4/backup_scenarios/full_backup.html).
+ * [Create an Azure Blob container](../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) and get the Shared Access Signature (SAS) token ([Azure portal](../../ai-services/translator/document-translation/how-to-guides/create-sas-tokens.md?tabs=Containers#create-sas-tokens-in-the-azure-portal) or [Azure CLI](../../storage/blobs/storage-blob-user-delegation-sas-create-cli.md)) for the container. Ensure that you grant Add, Create, and Write permissions in the **Permissions** drop-down list. Copy and paste the Blob SAS token and URL values in a secure location. They're only displayed once and can't be retrieved after the window is closed.
+* Upload the full backup file to your Azure Blob storage. Follow the steps [here](../../storage/common/storage-use-azcopy-blobs-upload.md#upload-a-file).
+* For performing an online migration, capture and store the bin-log position of the backup file taken using Percona XtraBackup by running the `cat xtrabackup_info` command and copying the bin_log pos output. The sketch after this list shows these preparation steps end to end.
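+
+As an illustrative, hedged sketch of these preparation steps end to end (paths, account, container, and expiry values are placeholders; the SAS example assumes the `az storage container generate-sas` command):
+
+```bash
+# Take a full physical backup with Percona XtraBackup 2.4
+xtrabackup --backup --user=root --password='<source-password>' --target-dir=/backup/mysql_backup_percona
+
+# Capture the binary log position recorded in the backup metadata (needed for online migration)
+cat /backup/mysql_backup_percona/xtrabackup_info
+
+# Generate a SAS token with Add, Create, and Write permissions for the container
+az storage container generate-sas --account-name <storage-account> --name <container> --permissions acw --expiry <yyyy-mm-dd> --output tsv
+
+# Upload the backup directory to the Azure Blob container with AzCopy
+azcopy copy "/backup/mysql_backup_percona" "https://<storage-account>.blob.core.windows.net/<container>?<sas-token>" --recursive
+```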
+
+## Limitations
+
+* Source server configuration isn't migrated. You must configure the target Flexible server appropriately.
+* Users and privileges aren't migrated as part of MySQL Import. Take a manual dump of users and privileges before initiating MySQL Import, and migrate the logins after the import operation by restoring them on the target Flexible Server (see the sketch after this list).
+* To increase the speed of the migration operation, High Availability (HA) enabled Flexible Servers are returned as HA-disabled servers after the import. Enable HA for your target Flexible Server post migration.
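+
+As a hedged, illustrative sketch (the host, credentials, and decision not to filter system accounts are assumptions), you could capture the source accounts and their grants like this:
+
+```bash
+# List user accounts on the source server (QUOTE() yields 'user'@'host' tokens)
+mysql -h <source-host> -u root -p'<source-password>' -N -B \
+  -e "SELECT CONCAT(QUOTE(user), '@', QUOTE(host)) FROM mysql.user;" > users.txt
+
+# Script out the GRANT statements for each account for replay on the target
+while read account; do
+  mysql -h <source-host> -u root -p'<source-password>' -N -B -e "SHOW GRANTS FOR ${account};"
+done < users.txt >> grants.sql
+```
+
+Note that you'd also need matching `CREATE USER` statements on the target before applying the grants.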
+
+## Recommendations for an optimal migration experience
+
+* Consider keeping the Azure Blob storage account and the target Flexible Server to be deployed in the same region for better import performance.
+* Recommended SKU configuration for the target Azure Database for MySQL Flexible Server:
+ * To optimize migration time when running the MySQL Import operation, a Burstable SKU isn't recommended for the target. We recommend scaling to General Purpose or Business Critical for the duration of the import operation, after which you can scale back down to a Burstable SKU (see the example after this list).
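+
+For example, a minimal sketch (assuming the example server created later in this tutorial and a hypothetical Burstable SKU) of scaling back down after the import completes:
+
+```azurecli
+# Scale the target server down to the Burstable tier after the import completes
+az mysql flexible-server update --resource-group test-rg --name test-flexible-server --tier Burstable --sku-name Standard_B2s
+```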
+
+## Trigger a MySQL Import operation to migrate to Azure Database for MySQL - Flexible Server
+
+Trigger a MySQL Import operation with the `az mysql flexible-server import create` command. The following command creates a target Flexible Server and performs an instance-level import from the backup file to the target destination using your Azure CLI's local context:
+
+```azurecli
+az mysql flexible-server import create --data-source-type
+ --data-source
+ --data-source-sas-token
+ --resource-group
+ --name
+ --sku-name
+ --tier
+ --version
+ --location
+ [--data-source-backup-dir]
+ [--storage-size]
+ [--mode]
+ [--admin-password]
+ [--admin-user]
+ [--auto-scale-iops {Disabled, Enabled}]
+ [--backup-identity]
+ [--backup-key]
+ [--backup-retention]
+ [--database-name]
+ [--geo-redundant-backup {Disabled, Enabled}]
+ [--high-availability {Disabled, SameZone, ZoneRedundant}]
+ [--identity]
+ [--iops]
+ [--key]
+ [--private-dns-zone]
+ [--public-access]
+ [--standby-zone]
+ [--storage-auto-grow {Disabled, Enabled}]
+ [--subnet]
+ [--subnet-prefixes]
+ [--tags]
+ [--vnet]
+ [--zone]
+```
+
+The following example takes in the data source information for your source MySQL server's backup file and the target Flexible Server information, creates a target Flexible Server named `test-flexible-server` in the `westus` location, and performs an import from the backup file to the target.
+
+```azurecli-interactive
+az mysql flexible-server import create --data-source-type "azure_blob" --data-source "https://onprembackup.blob.core.windows.net/onprembackup" --data-source-backup-dir "mysql_backup_percona" --data-source-sas-token "{sas-token}" --resource-group "test-rg" --name "test-flexible-server" --sku-name Standard_D2ds_v4 --tier GeneralPurpose --version 5.7 --location "westus"
+```
+
+Here are the details for the arguments above:
+
+**Setting** | **Sample value** | **Description**
+---|---|---
+data-source-type | azure_blob | The type of data source that serves as the source destination for triggering MySQL Import. Accepted values: [azure_blob]. Description of accepted values- azure_blob: Azure Blob storage.
+data-source | {resourceID} | The resource ID of the Azure Blob container.
+data-source-backup-dir | mysql_percona_backup | The directory of the Azure Blob storage container in which the backup file was uploaded. This value is required only when the backup file isn't stored in the root folder of Azure Blob container.
+data-source-sas-token | {sas-token} | The Shared Access Signature (SAS) token generated for granting access to import from the Azure Blob storage container.
+resource-group | test-rg | The name of the Azure resource group of the target Azure Database for MySQL Flexible Server.
+mode | Offline | The mode of MySQL import. Accepted values: [Offline]; Default value: Offline.
+location | westus | The Azure location for the target Azure Database for MySQL Flexible Server.
+name | test-flexible-server | Enter a unique name for your target Azure Database for MySQL Flexible Server. The server name can contain only lowercase letters, numbers, and the hyphen (-) character. It must contain from 3 to 63 characters. Note: This server is deployed in the same subscription, resource group, and region as the source.
+admin-user | adminuser | The username for the administrator sign-in for your target Azure Database for MySQL Flexible Server. It can't be **azure_superuser**, **admin**, **administrator**, **root**, **guest**, or **public**.
+admin-password | *password* | The administrator user's password for your target Azure Database for MySQL Flexible Server. It must contain between 8 and 128 characters. Your password must contain characters from three categories: English uppercase letters, English lowercase letters, numbers, and nonalphanumeric characters.
+sku-name|Standard_D2ds_v4|Enter the name of the pricing tier and compute configuration for your target Azure Database for MySQL Flexible Server. Follows the convention {pricing tier}_{compute generation}_{vCores} in shorthand. See the [pricing tiers](../flexible-server/concepts-service-tiers-storage.md#service-tiers-size-and-server-types) for more information.
+tier | Burstable | Compute tier of the target Azure Database for MySQL Flexible Server. Accepted values: Burstable, GeneralPurpose, MemoryOptimized; Default value: Burstable.
+public-access | 0.0.0.0 | Determines the public access for the target Azure Database for MySQL Flexible Server. Enter a single IP address or a range of IP addresses to be included in the allowed list of IPs. IP address ranges must be dash-separated and not contain any spaces. Specifying 0.0.0.0 allows public access from any resources deployed within Azure to access your server. Setting it to "None" sets the server in public access mode but doesn't create a firewall rule.
+vnet | myVnet | Name or ID of a new or existing virtual network. If you want to use a vnet from a different resource group or subscription, provide a resource ID. The name must be between 2 and 64 characters. The name must begin with a letter or number, end with a letter, number, or underscore, and can contain only letters, numbers, underscores, periods, or hyphens.
+subnet | mySubnet | Name or resource ID of a new or existing subnet. If you want to use a subnet from a different resource group or subscription, provide the resource ID instead of the name. Note that the subnet is delegated to flexibleServers. After delegation, this subnet can't be used for any other type of Azure resources.
+private-dns-zone | myserver.private.contoso.com | The name or ID of a new or existing private DNS zone. You can use a private DNS zone from the same resource group, a different resource group, or a different subscription. If you want to use a zone from a different resource group or subscription, provide the resource ID. The CLI creates a new private DNS zone within the same resource group as the virtual network if one isn't provided.
+key | key identifier of testKey | The resource ID of the primary key vault key for data encryption.
+identity | testIdentity | The name or resource ID of the user assigned identity for data encryption.
+storage-size | 32 | The storage capacity of the target Azure Database for MySQL Flexible Server. The minimum is 20 GiB and the maximum is 16 TiB.
+tags | key=value | Space-separated tags in `key[=value]` format.
+version | 5.7 | Server major version of the target Azure Database for MySQL Flexible Server.
+high-availability | ZoneRedundant | Enable (ZoneRedundant or SameZone) or disable the high availability feature for the target Azure Database for MySQL Flexible Server. Accepted values: Disabled, SameZone, ZoneRedundant; Default value: Disabled.
+zone | 1 | Availability zone into which to provision the resource.
+standby-zone | 3 | The availability zone information of the standby server when high Availability is enabled.
+storage-auto-grow | Enabled | Enable or disable auto grow of storage for the target Azure Database for MySQL Flexible Server. The default value is Enabled. Accepted values: Disabled, Enabled; Default value: Enabled.
+iops | 500 | Number of IOPS to be allocated for the target Azure Database for MySQL Flexible Server. You get a certain amount of free IOPS based on compute and storage provisioned. The default value for IOPS is free IOPS. To learn more about IOPS based on compute and storage, refer to IOPS in Azure Database for MySQL Flexible Server.
+
+## Migrate to Flexible Server with minimal downtime
+
+To perform an online migration after completing the initial seeding from the backup file using MySQL Import, you can configure data-in replication between the source and target by following the steps [here](../flexible-server/how-to-data-in-replication.md?tabs=bash%2Ccommand-line). You can use the bin-log position captured while taking the backup file with Percona XtraBackup to set up bin-log position based replication.
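+
+As a hedged sketch (host names, credentials, and the bin-log values are placeholders), position-based replication setup with the stored procedures from that article might look like this:
+
+```bash
+# On the target Flexible Server, point replication at the source
+# using the bin-log file and position captured from xtrabackup_info
+mysql -h test-flexible-server.mysql.database.azure.com -u adminuser -p -e \
+  "CALL mysql.az_replication_change_master('<source-host>', '<replication-user>', '<replication-password>', 3306, '<binlog-file>', <binlog-position>, '<ssl-ca-certificate>');"
+
+# Start replication once initial seeding is complete
+mysql -h test-flexible-server.mysql.database.azure.com -u adminuser -p -e \
+  "CALL mysql.az_replication_start;"
+```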
+
+## How long does MySQL Import take to migrate my MySQL instance?
+
+Benchmarked performance based on storage size.
+
+ | Backup file Storage Size | MySQL Import time |
+ | - |:-:|
+ | 1 GiB | 0 min 23 secs |
+ | 10 GiB | 4 min 24 secs |
+ | 100 GiB | 10 min 29 secs |
+ | 500 GiB | 13 min 15 secs |
+ | 1 TB | 22 min 56 secs |
+ | 10 TB | 2 hrs 5 min 30 secs |
+
+As the storage size increases, the time required for data copying also increases, in a nearly linear relationship. However, copy speed can be significantly affected by network fluctuations, so treat the data provided here as a reference only.
+
+## Next steps
+
+* [Manage an Azure Database for MySQL - Flexible Server using the Azure portal](../flexible-server/how-to-manage-server-portal.md)
mysql Concepts Connect To A Gateway Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connect-to-a-gateway-node.md
Title: Azure Database for MySQL managing updates and upgrades
-description: Learn which versions of the MySQL server are supported in the Azure Database for MySQL service.
+description: Learn which versions of the MySQL server are supported in the Azure Database for MySQL Service.
Last updated 06/20/2022
[!INCLUDE[azure-database-for-mysql-single-server-deprecation](../includes/azure-database-for-mysql-single-server-deprecation.md)]
-In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. Review [Connectivity architecture](./concepts-connectivity-architecture.md#connectivity-architecture) to learn more about gateways in Azure Database for MySQL service architecture.
+In the Single Server deployment option, a gateway is used to redirect the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. Review [Connectivity architecture](./concepts-connectivity-architecture.md#connectivity-architecture) to learn more about gateways in Azure Database for MySQL Service architecture.
As Azure Database for MySQL supports major versions v5.7 and v8.0, the default port 3306 to connect to Azure Database for MySQL runs MySQL client version 5.6 (the least common denominator) to support connections to servers of both supported major versions. However, if your application requires a connection to a specific major version, say v5.7 or v8.0, you can change the port in your server connection string.
-In Azure Database for MySQL service, gateway nodes listens on port 3308 for v5.7 clients and port 3309 for v8.0 clients. In other words, if you would like to connect to v5.7 gateway client, you should use your fully qualified server name and port 3308 to connect to your server from client application. Similarly, if you would like to connect to v8.0 gateway client, you can use your fully qualified server name and port 3309 to connect to your server. Check the following example for further clarity.
+In Azure Database for MySQL Service, gateway nodes listen on port 3308 for v5.7 clients and port 3309 for v8.0 clients. In other words, if you would like to connect to a v5.7 gateway client, use your fully qualified server name and port 3308 to connect to your server from your client application. Similarly, if you would like to connect to a v8.0 gateway client, use your fully qualified server name and port 3309 to connect to your server. Check the following example for further clarity.
:::image type="content" source="./media/concepts-supported-versions/concepts-supported-versions-gateway.png" alt-text="Example connecting via different gateway mysql versions":::
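For instance, a minimal sketch of connecting through each gateway version with the mysql client (the server and admin names are placeholders):

```bash
# Connect via the v5.7 gateway
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p --port=3308

# Connect via the v8.0 gateway
mysql -h mydemoserver.mysql.database.azure.com -u myadmin@mydemoserver -p --port=3309
```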
mysql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md
Previously updated : 11/03/2023 Last updated : 11/06/2023 # Azure Policy built-in definitions for Azure Database for MySQL
network-watcher Migrate To Connection Monitor From Connection Monitor Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic.md
Previously updated : 11/03/2023 Last updated : 11/06/2023 #CustomerIntent: As an Azure administrator, I want to migrate my connection monitors from Connection monitor (classic) to the new Connection monitor so I avoid service disruption.
Last updated 11/03/2023
# Migrate to Connection monitor from Connection monitor (classic)

> [!IMPORTANT]
-> Starting July 1, 2021, you will not be able to add new connection monitors in Connection Monitor (classic) but you can continue to use existing connection monitors created prior to 1 July 2021. To minimize service disruption to your current workloads, migrate from Connection Monitor (classic) to the new Connection Monitor in Azure Network Watcher before February 29, 2024.
+> Starting July 1, 2021, you will not be able to add new connection monitors in Connection Monitor (classic) but you can continue to use existing connection monitors created prior to July 1, 2021. To minimize service disruption to your current workloads, migrate from Connection Monitor (classic) to the new Connection Monitor in Azure Network Watcher before February 29, 2024.
You can migrate existing connection monitors to new, improved Connection monitor with only a few clicks and with zero downtime. To learn more about the benefits of the new Connection monitor, see [Connection monitor overview](connection-monitor-overview.md).
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
* General availability of [Grafana Monitoring Dashboard](https://grafana.com/grafana/dashboards/19556-azure-azure-postgresql-flexible-server-monitoring/) for Azure Database for PostgreSQL – Flexible Server.
* Public preview of Server Logs Download for Azure Database for PostgreSQL – Flexible Server.
+## Release: September 2023
+* General availability of [Storage auto-grow](./concepts-compute-storage.md) for Azure Database for PostgreSQL – Flexible Server.
+* General availability of [Cross Subscription and Cross Resource Group Restore](how-to-restore-to-different-subscription-or-resource-group.md) for Azure Database for PostgreSQL – Flexible Server.
+
## Release: August 2023

* Support for [minor versions](./concepts-supported-versions.md) 15.3, 14.8, 13.11, 12.15, 11.20 <sup>$</sup>
* General availability of [Enhanced Metrics](./concepts-monitoring.md#enhanced-metrics), [Autovacuum Metrics](./concepts-monitoring.md#autovacuum-metrics), [PgBouncer Metrics](./concepts-monitoring.md#pgbouncer-metrics) and [Database availability metric](./concepts-monitoring.md#database-availability-metric) for Azure Database for PostgreSQL – Flexible Server.
postgresql Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md
Previously updated : 11/03/2023 Last updated : 11/06/2023 # Azure Policy built-in definitions for Azure Database for PostgreSQL
private-5g-core Collect Required Information For A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/collect-required-information-for-a-site.md
Title: Collect information for a site
-description: Learn about the information you'll need to create a site in an existing private mobile network.
+description: Learn about the information you need to create a site in an existing private mobile network.
zone_pivot_groups: ase-pro-version
# Collect the required information for a site
-Azure Private 5G Core private mobile networks include one or more sites. Each site represents a physical enterprise location (for example, Contoso Corporation's Chicago factory) containing an Azure Stack Edge device that hosts a packet core instance. This how-to guide takes you through the process of collecting the information you'll need to create a new site.
+Azure Private 5G Core private mobile networks include one or more sites. Each site represents a physical enterprise location (for example, Contoso Corporation's Chicago factory) containing an Azure Stack Edge device that hosts a packet core instance. This how-to guide takes you through the process of collecting the information you need to create a new site.
You can use this information to create a site in an existing private mobile network using the [Azure portal](create-a-site.md). You can also use it as part of an ARM template to [deploy a new private mobile network and site](deploy-private-mobile-network-with-site-arm-template.md), or [add a new site to an existing private mobile network](create-site-arm-template.md).
You can use this information to create a site in an existing private mobile netw
## Choose a service plan
-Choose the service plan that will best fit your requirements and verify pricing and charges. See [Azure Private 5G Core pricing](https://azure.microsoft.com/pricing/details/private-5g-core/).
+Choose the service plan that best fits your requirements and verify pricing and charges. See [Azure Private 5G Core pricing](https://azure.microsoft.com/pricing/details/private-5g-core/).
## Collect mobile network site resource values
Collect all the values in the following table for the packet core instance that
|Value |Field name in Azure portal |
|||
- |The core technology type the packet core instance should support (5G or 4G). |**Technology type**|
+ |The core technology type the packet core instance should support: 5G, 4G, or combined 4G and 5G. |**Technology type**|
| The Azure Stack Edge resource representing the Azure Stack Edge Pro device in the site. You created this resource as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices).</br></br> If you're going to create your site using the Azure portal, collect the name of the Azure Stack Edge resource.</br></br> If you're going to create your site using an ARM template, collect the full resource ID of the Azure Stack Edge resource. You can do this by navigating to the Azure Stack Edge resource, selecting **JSON View** and copying the contents of the **Resource ID** field. | **Azure Stack Edge device** |
|The custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. You commissioned the AKS-HCI cluster as part of the steps in [Commission the AKS cluster](commission-cluster.md).</br></br> If you're going to create your site using the Azure portal, collect the name of the custom location.</br></br> If you're going to create your site using an ARM template, collect the full resource ID of the custom location. You can do this by navigating to the Custom location resource, selecting **JSON View** and copying the contents of the **Resource ID** field.|**Custom location**|
Collect all the values in the following table to define the packet core instance
:::zone pivot="ase-pro-gpu"
|Value |Field name in Azure portal |
|||
- | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-subnets-and-ip-addresses). </br></br> This IP address must match the value you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#order-and-set-up-your-azure-stack-edge-pro-devices). |**N2 address (Signaling)** (for 5G) or **S1-MME address** (for 4G). |
- | The virtual network name on port 5 on your Azure Stack Edge Pro GPU corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. | **ASE N2 virtual subnet** (for 5G) or **ASE S1-MME virtual subnet** (for 4G). |
- | The virtual network name on port 5 on your Azure Stack Edge Pro GPU corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. | **ASE N3 virtual subnet** (for 5G) or **ASE S1-U virtual subnet** (for 4G). |
+ | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface; for combined 4G and 5G, it's the N2 and S1-MME interfaces. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md#allocate-subnets-and-ip-addresses). </br></br> This IP address must match the value you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md#order-and-set-up-your-azure-stack-edge-pro-devices). |**N2 address (Signaling)** (for 5G), **S1-MME address** (for 4G), or **S1-MME/N2 address (Signaling)** (for combined 4G and 5G). |
+ | The virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface; for combined 4G and 5G, it's the N2/S1-MME interface. | **ASE N2 virtual subnet** (for 5G), **ASE S1-MME virtual subnet** (for 4G), or **ASE N2/S1-MME virtual subnet** (for combined 4G and 5G). |
+ | The virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface; for combined 4G and 5G, it's the N3/S1-U interface. | **ASE N3 virtual subnet** (for 5G), **ASE S1-U virtual subnet** (for 4G), or **ASE N3/S1-U virtual subnet** (for combined 4G and 5G). |
:::zone-end
:::zone pivot="ase-pro-2"
|Value |Field name in Azure portal |
|||
- | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-subnets-and-ip-addresses). </br></br> This IP address must match the value you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#order-and-set-up-your-azure-stack-edge-pro-devices). |**N2 address (Signaling)** (for 5G) or **S1-MME address** (for 4G). |
- | The virtual network name on port 3 on your Azure Stack Edge Pro 2 corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. | **ASE N2 virtual subnet** (for 5G) or **ASE S1-MME virtual subnet** (for 4G). |
- | The virtual network name on port 3 on your Azure Stack Edge Pro 2 corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. | **ASE N3 virtual subnet** (for 5G) or **ASE S1-U virtual subnet** (for 4G). |
+ | The IP address for the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface; for combined 4G and 5G, it's the N2 and S1-MME interfaces. You identified this address in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-subnets-and-ip-addresses). </br></br> This IP address must match the value you used when deploying the AKS-HCI cluster on your Azure Stack Edge Pro device. You did this as part of the steps in [Order and set up your Azure Stack Edge Pro device(s)](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#order-and-set-up-your-azure-stack-edge-pro-devices). |**N2 address (Signaling)** (for 5G), **S1-MME address** (for 4G), or **S1-MME/N2 address (Signaling)** (for combined 4G and 5G). |
+ | The virtual network name on port 3 on your Azure Stack Edge Pro 2 corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface; for combined 4G and 5G, it's the N2/S1-MME interface. | **ASE N2 virtual subnet** (for 5G), **ASE S1-MME virtual subnet** (for 4G), or **ASE N2/S1-MME virtual subnet** (for combined 4G and 5G). |
+ | The virtual network name on port 3 on your Azure Stack Edge Pro 2 corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface; for combined 4G and 5G, it's the N3/S1-U interface. | **ASE N3 virtual subnet** (for 5G), **ASE S1-U virtual subnet** (for 4G), or **ASE N3/S1-U virtual subnet** (for combined 4G and 5G). |
:::zone-end
+## Collect UE usage tracking values
+
+If you want to configure UE usage tracking for your site, collect all the values in the following table to define the packet core instance's associated Event Hubs instance.
+
+> [!NOTE]
+> You must already have an [Azure Event Hubs instance](/azure/event-hubs) with an associated user assigned managed identity with the **Resource Policy Contributor** role before you can collect the information in the following table.
+
+> [!NOTE]
+> Azure Private 5G Core does not support Event Hubs with a [log compaction delete cleanup policy](/azure/event-hubs/log-compaction?source=recommendations).
+
+ |Value |Field name in Azure portal |
+ |||
+ |The namespace for the Azure Event Hubs instance that your site will use for UE usage tracking. |**Azure Event Hub Namespace**|
+ |The name of the Azure Event Hubs instance that your site will use for UE usage tracking.|**Event Hub name**|
+ |The user assigned managed identity that has the **Resource Policy Contributor** role for the Event Hubs instance. <br /> **Note:** The managed identity must be assigned to the Packet Core Control Plane for the site and assigned to the Event Hubs instance via the instance's **Identity and Access Management (IAM)** blade. <br /> **Note:** Only assign one managed identity to the site. This managed identity must continue to be used for any UE usage tracking for the site after upgrades and site configuration modifications.<br /><br /> See [Use a user-assigned managed identity to capture events](/azure/event-hubs/event-hubs-capture-managed-identity) for more information on managed identities. |**User Assigned Managed Identity**|
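+
+As an illustrative sketch (the principal ID and resource path are placeholders), you can grant the role with the Azure CLI:
+
+```azurecli
+# Assign the Resource Policy Contributor role to the user-assigned managed
+# identity, scoped to the Event Hubs instance
+az role assignment create \
+    --assignee <managed-identity-principal-id> \
+    --role "Resource Policy Contributor" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventHub/namespaces/<namespace>/eventhubs/<event-hub-name>"
+```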
+
## Collect data network values

You can configure up to ten data networks per site. During site creation, you'll be able to choose whether to attach an existing data network or create a new one.
For each data network that you want to configure, collect all the values in the
|Value |Field name in Azure portal |
|||
| The name of the data network. This could be an existing data network or a new one you'll create during packet core configuration. |**Data network name**|
- | The virtual network name on port 6 (or port 5 if you plan to have more than six data networks) on your Azure Stack Edge Pro GPU device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. | **ASE N6 virtual subnet** (for 5G) or **ASE SGi virtual subnet** (for 4G). |
- | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`192.0.2.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
- | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`203.0.113.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
- | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-subnets-and-ip-addresses). </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network. In this case, UEs in this data network will be unable to resolve domain names. | **DNS Addresses** |
+ | The virtual network name on port 6 (or port 5 if you plan to have more than six data networks) on your Azure Stack Edge Pro GPU device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface; for combined 4G and 5G, it's the N6/SGi interface. | **ASE N6 virtual subnet** (for 5G), **ASE SGi virtual subnet** (for 4G), or **ASE N6/SGi virtual subnet** (for combined 4G and 5G). |
+ | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You don't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`192.0.2.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
+ | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You don't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`203.0.113.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
+ | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-gpu#allocate-subnets-and-ip-addresses). </br></br>This value might be an empty list if you don't want to configure a DNS server for the data network. In this case, UEs in this data network will be unable to resolve domain names. | **DNS Addresses** |
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses. </br></br>When NAPT is disabled, static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network must be configured in the data network router. </br></br>If you want to use [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) in this data network, keep NAPT disabled. |**NAPT**|
:::zone-end
:::zone pivot="ase-pro-2"
|Value |Field name in Azure portal |
|||
| The name of the data network. This could be an existing data network or a new one you'll create during packet core configuration. |**Data network name**|
- | The virtual network name on port 4 (or port 3 if you plan to have more than six data networks) on your Azure Stack Edge Pro 2 device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. | **ASE N6 virtual subnet** (for 5G) or **ASE SGi virtual subnet** (for 4G). |
- | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`192.0.2.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
- | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You won't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`203.0.113.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
- | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-subnets-and-ip-addresses). </br></br>This value may be an empty list if you don't want to configure a DNS server for the data network. In this case, UEs in this data network will be unable to resolve domain names. | **DNS Addresses** |
+ | The virtual network name on port 4 (or port 3 if you plan to have more than six data networks) on your Azure Stack Edge Pro 2 device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface; for combined 4G and 5G, it's the N6/SGi interface. | **ASE N6 virtual subnet** (for 5G), **ASE SGi virtual subnet** (for 4G), or **ASE N6/SGi virtual subnet** (for combined 4G and 5G). |
+ | The network address of the subnet from which dynamic IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You don't need this address if you don't want to support dynamic IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`192.0.2.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Dynamic UE IP pool prefixes**|
+ | The network address of the subnet from which static IP addresses must be allocated to user equipment (UEs), given in CIDR notation. You don't need this address if you don't want to support static IP address allocation for this site. You identified this in [Allocate user equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-user-equipment-ue-ip-address-pools). The following example shows the network address format. </br></br>`203.0.113.0/24` </br></br>Note that the UE subnets aren't related to the access subnet. |**Static UE IP pool prefixes**|
+ | The Domain Name System (DNS) server addresses to be provided to the UEs connected to this data network. You identified this in [Allocate subnets and IP addresses](complete-private-mobile-network-prerequisites.md?pivots=ase-pro-2#allocate-subnets-and-ip-addresses). </br></br>This value might be an empty list if you don't want to configure a DNS server for the data network. In this case, UEs in this data network will be unable to resolve domain names. | **DNS Addresses** |
|Whether Network Address and Port Translation (NAPT) should be enabled for this data network. NAPT allows you to translate a large pool of private IP addresses for UEs to a small number of public IP addresses. The translation is performed at the point where traffic enters the data network, maximizing the utility of a limited supply of public IP addresses. </br></br>When NAPT is disabled, static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network must be configured in the data network router. </br></br>If you want to use [UE-to-UE traffic](private-5g-core-overview.md#ue-to-ue-traffic) in this data network, keep NAPT disabled. |**NAPT**|
:::zone-end
You can use a storage account and user assigned managed identity, with write acc
If you don't want to configure diagnostics package gathering at this stage, you do not need to collect anything. You can configure this after site creation.
-If you want to configure diagnostics package gathering during site creation, see [Collect values for diagnostics package gathering](gather-diagnostics.md#collect-values-for-diagnostics-package-gathering).
+If you want to configure diagnostics package gathering during site creation, see [Collect values for diagnostics package gathering](gather-diagnostics.md#set-up-a-storage-account).
## Choose the authentication method for local monitoring tools

Azure Private 5G Core provides dashboards for monitoring your deployment and a web GUI for collecting detailed signal traces. You can access these tools using [Microsoft Entra ID](../active-directory/authentication/overview-authentication.md) or a local username and password. We recommend setting up Microsoft Entra authentication to improve security in your deployment.
-If you want to access your local monitoring tools using Microsoft Entra ID, after creating a site you'll need to follow the steps in [Enable Microsoft Entra ID for local monitoring tools](enable-azure-active-directory.md).
+If you want to access your local monitoring tools using Microsoft Entra ID, after creating a site follow the steps in [Enable Microsoft Entra ID for local monitoring tools](enable-azure-active-directory.md).
If you want to access your local monitoring tools using local usernames and passwords, you don't need to set any additional configuration. After deploying the site, set up your username and password by following [Access the distributed tracing web GUI](distributed-tracing.md#access-the-distributed-tracing-web-gui) and [Access the packet core dashboards](packet-core-dashboards.md#access-the-packet-core-dashboards).
private-5g-core Complete Private Mobile Network Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/complete-private-mobile-network-prerequisites.md
In this how-to guide, you'll carry out each of the tasks you need to complete be
## Tools and access
-To deploy your private mobile network using Azure Private 5G Core, you will need the following:
+To deploy your private mobile network using Azure Private 5G Core, you need:
- A Windows PC with internet access
- A Windows Administrator account on that PC
To deploy your private mobile network using Azure Private 5G Core, you will need
Contact your trials engineer and ask them to register your Azure subscription for access to Azure Private 5G Core. If you don't already have a trials engineer and are interested in trialing Azure Private 5G Core, contact your Microsoft account team, or express your interest through the [partner registration form](https://forms.office.com/r/4Q1yNRakXe).
-## Choose the core technology type (5G or 4G)
+## Choose the core technology type (5G, 4G, or combined 4G and 5G)
-Choose whether each site in the private mobile network should provide coverage for 5G or 4G user equipment (UEs). A single site can't support 5G and 4G UEs simultaneously. If you're deploying multiple sites, you can choose to have some sites support 5G UEs and others support 4G UEs.
+Choose whether each site in the private mobile network should provide coverage for 5G, 4G, or combined 4G and 5G user equipment (UEs). If you're deploying multiple sites, they can each support different core technology types.
## Allocate subnets and IP addresses

Azure Private 5G Core requires a management network, access network, and up to ten data networks. These networks can all be part of the same, larger network, or they can be separate. The approach you use depends on your traffic separation requirements.
-For each of these networks, allocate a subnet and then identify the listed IP addresses. If you're deploying multiple sites, you'll need to collect this information for each site.
+For each of these networks, allocate a subnet and then identify the listed IP addresses. If you're deploying multiple sites, you need to collect this information for each site.
-Depending on your networking requirements (for example, if a limited set of subnets is available), you may choose to allocate a single subnet for all of the Azure Stack Edge interfaces, marked with an asterisk (*) in the following list.
+Depending on your networking requirements (for example, if a limited set of subnets is available), you might choose to allocate a single subnet for all of the Azure Stack Edge interfaces, marked with an asterisk (*) in the following list.
### Management network
Depending on your networking requirements (for example, if a limited set of subn
- Network address in Classless Inter-Domain Routing (CIDR) notation.
- Default gateway.
- One IP address for the management port
- - You'll choose a port between 2 and 4 to use as the Azure Stack Edge Pro GPU device's management port as part of [setting up your Azure Stack Edge Pro device](#order-and-set-up-your-azure-stack-edge-pro-devices).*
+ - Choose a port between 2 and 4 to use as the Azure Stack Edge Pro GPU device's management port as part of [setting up your Azure Stack Edge Pro device](#order-and-set-up-your-azure-stack-edge-pro-devices).*
- Six sequential IP addresses for the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster nodes.
- One service IP address for accessing local monitoring tools for the packet core instance.
Depending on your networking requirements (for example, if a limited set of subn
- Network address in CIDR notation.
- Default gateway.
-- One IP address for the control plane interface. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface.*
-- One IP address for the user plane interface. For 5G, this interface is the N3 interface, whereas for 4G, it's the S1-U interface.*
+- One IP address for the control plane interface.
+ - For 5G, this is the N2 interface.
+ - For 4G, this is the S1-MME interface.
+ - For combined 4G and 5G, this is the N2/S1-MME interface.
+- One IP address for the user plane interface.
+ - For 5G, this is the N3 interface.
+ - For 4G, this is the S1-U interface.
+ - For combined 4G and 5G, this is the N3/S1-U interface.
- One IP address for port 3 on the Azure Stack Edge Pro 2 device.

:::zone-end
Depending on your networking requirements (for example, if a limited set of subn
- Network address in CIDR notation.
- Default gateway.
-- One IP address for the control plane interface. For 5G, this interface is the N2 interface, whereas for 4G, it's the S1-MME interface.*
-- One IP address for the user plane interface. For 5G, this interface is the N3 interface, whereas for 4G, it's the S1-U interface.*
+- One IP address for the control plane interface.
+ - For 5G, this is the N2 interface.
+ - For 4G, this is the S1-MME interface.
+ - For combined 4G and 5G, this is the N2/S1-MME interface.
+- One IP address for the user plane interface.
+ - For 5G, this is the N3 interface.
+ - For 4G, this is the S1-U interface.
+ - For combined 4G and 5G, this is the N3/S1-U interface.
- One IP address for port 5 on the Azure Stack Edge Pro GPU device.

:::zone-end
Allocate the following IP addresses for each data network in the site:
- Network address in CIDR notation.
- Default gateway.
-- One IP address for the user plane interface. For 5G, this interface is the N6 interface, whereas for 4G, it's the SGi interface.*
+- One IP address for the user plane interface.
+ - For 5G, this is the N6 interface.
+ - For 4G, this is the SGi interface.
+ - For combined 4G and 5G, this is the N6/SGi interface.
The following IP addresses must be used by all the data networks in the site:

:::zone pivot="ase-pro-2"
The following IP addresses must be used by all the data networks in the site:
### VLANs
-You can optionally configure your Azure Stack Edge Pro device with virtual local area network (VLAN) tags. You can use this to enable layer 2 traffic separation on the N2, N3 and N6 interfaces, or their 4G equivalents. For example, you might want to separate N2 and N3 traffic (which share a port on the ASE device) or separate traffic for each connected data network.
+You can optionally configure your Azure Stack Edge Pro device with virtual local area network (VLAN) tags. You can use this configuration to enable layer 2 traffic separation on the N2, N3 and N6 interfaces, or their 4G equivalents. For example, you might want to separate N2 and N3 traffic (which share a port on the ASE device) or separate traffic for each connected data network.
Allocate VLAN IDs for each network as required.
Azure Private 5G Core supports the following IP address allocation methods for U
- Dynamic. Dynamic IP address allocation automatically assigns a new IP address to a UE each time it connects to the private mobile network. -- Static. Static IP address allocation ensures that a UE receives the same IP address every time it connects to the private mobile network. This is useful when you want Internet of Things (IoT) applications to be able to consistently connect to the same device. For example, you may configure a video analysis application with the IP addresses of the cameras providing video streams. If these cameras have static IP addresses, you won't need to reconfigure the video analysis application with new IP addresses each time the cameras restart. You'll allocate static IP addresses to a UE as part of [provisioning its SIM](provision-sims-azure-portal.md).
+- Static. Static IP address allocation ensures that a UE receives the same IP address every time it connects to the private mobile network. Static IP addresses are useful when you want Internet of Things (IoT) applications to be able to consistently connect to the same device. For example, you can configure a video analysis application with the IP addresses of the cameras providing video streams. If these cameras have static IP addresses, you won't need to reconfigure the video analysis application with new IP addresses each time the cameras restart. You'll allocate static IP addresses to a UE as part of [provisioning its SIM](provision-sims-azure-portal.md).
You can choose to support one or both of these methods for each data network in your site. For each data network you're deploying, do the following:

- Decide which IP address allocation methods you want to support.
+- For each method you want to support, identify an IP address pool from which IP addresses can be allocated to UEs. You must provide each IP address pool in CIDR notation.
If you decide to support both methods for a particular data network, ensure that the IP address pools are of the same size and don't overlap.
You must set these up in addition to the [ports required for Azure Stack Edge (A
| Port | ASE interface | Description|
|--|--|--|
| TCP 443 Inbound | Management (LAN) | Access to local monitoring tools (packet core dashboards and distributed tracing). |
-| 5671 In/Outbound | Management (LAN) | Communication to Azure Event Hub, AMQP Protocol |
-| 5672 In/Outbound | Management (LAN) | Communication to Azure Event Hub, AMQP Protocol |
+| 5671 In/Outbound | Management (LAN) | Communication to Azure Event Hubs, AMQP Protocol |
+| 5672 In/Outbound | Management (LAN) | Communication to Azure Event Hubs, AMQP Protocol |
| SCTP 38412 Inbound | Port 3 (Access network) | Control plane access signaling (N2 interface). </br>Only required for 5G deployments. |
| SCTP 36412 Inbound | Port 3 (Access network) | Control plane access signaling (S1-MME interface). </br>Only required for 4G deployments. |
-| UDP 2152 In/Outbound | Port 3 (Access network) | Access network user plane data (N3 interface for 5G, S1-U for 4G). |
-| All IP traffic | Ports 3 and 4 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G). </br> Only required on port 3 if data networks are configured on that port. |
+| UDP 2152 In/Outbound | Port 3 (Access network) | Access network user plane data (N3 interface for 5G, S1-U for 4G, or N3/S1-U for combined 4G and 5G). |
+| All IP traffic | Ports 3 and 4 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G, or N6/SGi for combined 4G and 5G). </br> Only required on port 3 if data networks are configured on that port. |
:::zone-end

:::zone pivot="ase-pro-gpu"
You must set these up in addition to the [ports required for Azure Stack Edge (A
| Port | ASE interface | Description|
|--|--|--|
| TCP 443 Inbound | Management (LAN) | Access to local monitoring tools (packet core dashboards and distributed tracing). |
-| 5671 In/Outbound | Management (LAN) | Communication to Azure Event Hub, AMQP Protocol |
-| 5672 In/Outbound | Management (LAN) | Communication to Azure Event Hub, AMQP Protocol |
+| 5671 In/Outbound | Management (LAN) | Communication to Azure Event Hubs, AMQP Protocol |
+| 5672 In/Outbound | Management (LAN) | Communication to Azure Event Hubs, AMQP Protocol |
| SCTP 38412 Inbound | Port 5 (Access network) | Control plane access signaling (N2 interface). </br>Only required for 5G deployments. |
| SCTP 36412 Inbound | Port 5 (Access network) | Control plane access signaling (S1-MME interface). </br>Only required for 4G deployments. |
-| UDP 2152 In/Outbound | Port 5 (Access network) | Access network user plane data (N3 interface for 5G, S1-U for 4G). |
-| All IP traffic | Ports 5 and 6 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G). </br> Only required on port 5 if data networks are configured on that port. |
+| UDP 2152 In/Outbound | Port 5 (Access network) | Access network user plane data (N3 interface for 5G, S1-U for 4G, or N3/S1-U for combined 4G and 5G). |
+| All IP traffic | Ports 5 and 6 (Data networks) | Data network user plane data (N6 interface for 5G, SGi for 4G, or N6/SGi for combined 4G and 5G). </br> Only required on port 5 if data networks are configured on that port. |
:::zone-end

#### Port requirements for Azure Stack Edge
To use Azure Private 5G Core, you need to register some additional resource prov
   az version
   ```
- If the CLI version is below 2.37.0, you will need to upgrade your Azure CLI to a newer version. See [How to update the Azure CLI](/cli/azure/update-azure-cli).
+ If the CLI version is below 2.37.0, you must upgrade your Azure CLI to a newer version. See [How to update the Azure CLI](/cli/azure/update-azure-cli).
1. Register the following resource providers:

   ```azurecli
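   # The full list of providers is truncated in this digest. As an illustrative
   # example of the pattern, Microsoft.MobileNetwork is the Azure Private 5G Core
   # resource provider; your deployment may require additional providers.
   az provider register --namespace Microsoft.MobileNetwork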
To use Azure Private 5G Core, you need to register some additional resource prov
## Retrieve the Object ID (OID)
-You need to obtain the object ID (OID) of the custom location resource provider in your Azure tenant. You will need to provide this OID when you create the Kubernetes service. You can obtain the OID using the Azure CLI or the Azure Cloud Shell on the portal. You'll need to be an owner of your Azure subscription.
+You need to obtain the object ID (OID) of the custom location resource provider in your Azure tenant. You must provide this OID when you create the Kubernetes service. You can obtain the OID using the Azure CLI or the Azure Cloud Shell on the portal. You must be an owner of your Azure subscription.
1. Sign in to the Azure CLI or Azure Cloud Shell.
1. Retrieve the OID:
You need to obtain the object ID (OID) of the custom location resource provider
   az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv
   ```
-This command queries the custom location and will output an OID string. Save this string for use later when you're commissioning the Azure Stack Edge device.
+This command queries the custom location and will output an OID string. Save this string for use later when you're commissioning the Azure Stack Edge device.
## Order and set up your Azure Stack Edge Pro device(s)
Do the following for each site you want to add to your private mobile network. D
| 2. | Order and prepare your Azure Stack Edge Pro 2 device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-prep.md) |
| 3. | Rack and cable your Azure Stack Edge Pro 2 device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 2 - management</br>- Port 3 - access network (and optionally, data networks)</br>- Port 4 - data networks| [Tutorial: Install Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-install?pivots=single-node.md) |
| 4. | Connect to your Azure Stack Edge Pro 2 device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-connect?pivots=single-node.md) |
-| 5. | Configure the network for your Azure Stack Edge Pro 2 device. </br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data.</br></br> In addition, you can optionally configure your Azure Stack Edge Pro device to run behind a web proxy. </br></br> Verify the outbound connections from Azure Stack Edge Pro device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[(Optionally) Configure web proxy for Azure Stack Edge Pro](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node#configure-web-proxy)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
+| 5. | Configure the network for your Azure Stack Edge Pro 2 device. </br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data. </br></br> If the RAN and Packet Core are on the same subnet, you do not need to configure a gateway for Port 3 or Port 4. </br></br> In addition, you can optionally configure your Azure Stack Edge Pro device to run behind a web proxy. </br></br> Verify the outbound connections from Azure Stack Edge Pro device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[(Optionally) Configure web proxy for Azure Stack Edge Pro](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node#configure-web-proxy)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-set-up-device-update-time.md) |
-| 7. | Configure certificates and configure encryption-at-rest for your Azure Stack Edge Pro 2 device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates?pivots=single-node) |
+| 7. | Configure certificates and configure encryption-at-rest for your Azure Stack Edge Pro 2 device. After changing the certificates, you might have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro 2](/azure/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates?pivots=single-node) |
| 8. | Activate your Azure Stack Edge Pro 2 device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-pro-2-deploy-activate.md) |
| 9. | Enable VM management from the Azure portal. </br></br>Enabling this immediately after activating the Azure Stack Edge Pro 2 device occasionally causes an error. Wait one minute and retry. | Navigate to the ASE resource in the Azure portal, go to **Edge services**, select **Virtual machines** and select **Enable**. |
-| 10. | Run the diagnostics tests for the Azure Stack Edge Pro 2 device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 2 - management</br>- Port 3 - access network (and optionally, data networks)</br>- Port 4 - data networks</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
+| 10. | Run the diagnostics tests for the Azure Stack Edge Pro 2 device in the local web UI, and verify they all pass. </br></br>You might see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 2 - management</br>- Port 3 - access network (and optionally, data networks)</br>- Port 4 - data networks</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
> [!IMPORTANT]
> You must ensure your Azure Stack Edge Pro 2 device is compatible with the Azure Private 5G Core version you plan to install. See [Packet core and Azure Stack Edge (ASE) compatibility](./azure-stack-edge-packet-core-compatibility.md). If you need to upgrade your Azure Stack Edge Pro 2 device, see [Update your Azure Stack Edge Pro 2](../databox-online/azure-stack-edge-gpu-install-update.md?tabs=version-2106-and-later).
Do the following for each site you want to add to your private mobile network. D
| 2. | Order and prepare your Azure Stack Edge Pro GPU device. | [Tutorial: Prepare to deploy Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-prep.md) |
| 3. | Rack and cable your Azure Stack Edge Pro GPU device. </br></br>When carrying out this procedure, you must ensure that the device has its ports connected as follows:</br></br>- Port 5 - access network (and optionally, data networks)</br>- Port 6 - data networks</br></br>Additionally, you must have a port connected to your management network. You can choose any port from 2 to 4. | [Tutorial: Install Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-install?pivots=single-node.md) |
| 4. | Connect to your Azure Stack Edge Pro GPU device using the local web UI. | [Tutorial: Connect to Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-connect?pivots=single-node.md) |
-| 5. | Configure the network for your Azure Stack Edge Pro GPU device.</br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data.</br></br> In addition, you can optionally configure your Azure Stack Edge Pro GPU device to run behind a web proxy. </br></br> Verify the outbound connections from Azure Stack Edge Pro GPU device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[(Optionally) Configure web proxy for Azure Stack Edge Pro](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node#configure-web-proxy)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
+| 5. | Configure the network for your Azure Stack Edge Pro GPU device.</br> </br> **Note:** When an ASE is used in an Azure Private 5G Core service, Port 2 is used for management rather than data. The tutorial linked assumes a generic ASE that uses Port 2 for data. </br></br> If the RAN and Packet Core are on the same subnet, you do not need to configure a gateway for Port 5 or Port 6. </br></br> In addition, you can optionally configure your Azure Stack Edge Pro GPU device to run behind a web proxy. </br></br> Verify the outbound connections from Azure Stack Edge Pro GPU device to the Azure Arc endpoints are opened. </br></br>**Do not** configure virtual switches, virtual networks or compute IPs. | [Tutorial: Configure network for Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node.md)</br></br>[(Optionally) Configure web proxy for Azure Stack Edge Pro](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy?pivots=single-node#configure-web-proxy)</br></br>[Azure Arc Network Requirements](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli%2Cazure-cloud)</br></br>[Azure Arc Agent Network Requirements](/azure/architecture/hybrid/arc-hybrid-kubernetes)|
| 6. | Configure a name, DNS name, and (optionally) time settings. </br></br>**Do not** configure an update. | [Tutorial: Configure the device settings for Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-set-up-device-update-time.md) |
-| 7. | Configure certificates for your Azure Stack Edge Pro GPU device. After changing the certificates, you may have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-certificates?pivots=single-node) |
+| 7. | Configure certificates for your Azure Stack Edge Pro GPU device. After changing the certificates, you might have to reopen the local UI in a new browser window to prevent the old cached certificates from causing problems.| [Tutorial: Configure certificates for your Azure Stack Edge Pro with GPU](/azure/databox-online/azure-stack-edge-gpu-deploy-configure-certificates?pivots=single-node) |
| 8. | Activate your Azure Stack Edge Pro GPU device. </br></br>**Do not** follow the section to *Deploy Workloads*. | [Tutorial: Activate Azure Stack Edge Pro with GPU](../databox-online/azure-stack-edge-gpu-deploy-activate.md) |
| 9. | Enable VM management from the Azure portal. </br></br>Enabling this immediately after activating the Azure Stack Edge Pro device occasionally causes an error. Wait one minute and retry. | Navigate to the ASE resource in the Azure portal, go to **Edge services**, select **Virtual machines** and select **Enable**. |
-| 10. | Run the diagnostics tests for the Azure Stack Edge Pro GPU device in the local web UI, and verify they all pass. </br></br>You may see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 3.</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
+| 10. | Run the diagnostics tests for the Azure Stack Edge Pro GPU device in the local web UI, and verify they all pass. </br></br>You might see a warning about a disconnected, unused port. You should fix the issue if the warning relates to any of these ports:</br></br>- Port 5.</br>- Port 6.</br>- The port you chose to connect to the management network in Step 3.</br></br>For all other ports, you can ignore the warning. </br></br>If there are any errors, resolve them before continuing with the remaining steps. This includes any errors related to invalid gateways on unused ports. In this case, either delete the gateway IP address or set it to a valid gateway for the subnet. | [Run diagnostics, collect logs to troubleshoot Azure Stack Edge device issues](../databox-online/azure-stack-edge-gpu-troubleshoot.md) |
> [!IMPORTANT]
> You must ensure your Azure Stack Edge Pro GPU device is compatible with the Azure Private 5G Core version you plan to install. See [Packet core and Azure Stack Edge (ASE) compatibility](./azure-stack-edge-packet-core-compatibility.md). If you need to upgrade your Azure Stack Edge Pro GPU device, see [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md?tabs=version-2106-and-later).
private-5g-core Create A Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-a-site.md
In this step, you'll create the mobile network site resource representing the ph
> If a warning appears about an incompatibility between the selected packet core version and the current Azure Stack Edge version, you'll need to update ASE first. Select **Upgrade ASE** from the warning prompt and follow the instructions in [Update your Azure Stack Edge Pro GPU](../databox-online/azure-stack-edge-gpu-install-update.md). Once you've finished updating your ASE, go back to the beginning of this step to create the site resource.
- Ensure **AKS-HCI** is selected in the **Platform** field.
+
:::zone pivot="ase-pro-gpu"
7. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section.
> [!NOTE]
- > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro GPU device.
+ > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs), **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs), or **ASE N2/S1-MME virtual subnet** and **ASE N3/S1-U virtual subnet** (if this site will support both 4G and 5G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro GPU device.
+
+8. If you want to enable UE Metric monitoring, use the information collected in [Collect UE Usage Tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values) to fill out the **Azure Event Hub Namespace**, **Event Hub name** and **User Assigned Managed Identity** values.
-8. In the **Attached data networks** section, select **Attach data network**. Choose whether you want to use an existing data network or create a new one, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md?pivots=ase-pro-gpu#collect-data-network-values) to fill out the fields. Note the following:
- - **ASE N6 virtual subnet** (if this site will support 5G UEs) or **ASE SGi virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network name on port 5 or 6 on your Azure Stack Edge Pro device.
+9. In the **Attached data networks** section, select **Attach data network**. Choose whether you want to use an existing data network or create a new one, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md?pivots=ase-pro-gpu#collect-data-network-values) to fill out the fields. Note the following:
+ - **ASE N6 virtual subnet** (if this site will support 5G UEs), **ASE SGi virtual subnet** (if this site will support 4G UEs), or **ASE N6/SGi virtual subnet** (if this site will support combined 4G and 5G UEs) must match the corresponding virtual network name on port 5 or 6 on your Azure Stack Edge Pro device.
- If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox.
- If you decided to keep NAPT disabled, ensure you configure your data network router with static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network (see the sketch after this list).
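For example, if your data network router is Linux-based, the NAPT-disabled case can be covered with a static route like this minimal sketch; the UE IP pool (10.45.0.0/24) and the packet core's user plane data IP address (192.168.1.10) are hypothetical placeholders:

```azurecli
# On the data network router: route traffic destined for the UE IP pool
# via the packet core's user plane data (N6/SGi) IP address.
sudo ip route add 10.45.0.0/24 via 192.168.1.10
```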
In this step, you'll create the mobile network site resource representing the ph
7. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) to fill out the fields in the **Access network** section. > [!NOTE]
- > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network names on port 3 on your Azure Stack Edge Pro 2 device.
+ > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site will support 5G UEs), **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site will support 4G UEs), or **ASE N2/S1-MME virtual subnet** and **ASE N3/S1-U virtual subnet** (if this site will support both 4G and 5G UEs) must match the corresponding virtual network names on port 3 on your Azure Stack Edge Pro 2 device.
+
+8. If you want to enable UE Metric monitoring, select **Enable** from the **UE Metric monitoring** dropdown. Use the information collected in [Collect UE Usage Tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values) to fill out the **Azure Event Hub Namespace**, **Event Hub name** and **User Assigned Managed Identity** values.
-8. In the **Attached data networks** section, select **Attach data network**. Choose whether you want to use an existing data network or create a new one, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md?pivots=ase-pro-2#collect-data-network-values) to fill out the fields. Note the following:
- - **ASE N6 virtual subnet** (if this site will support 5G UEs) or **ASE SGi virtual subnet** (if this site will support 4G UEs) must match the corresponding virtual network name on port 3 or 4 on your Azure Stack Edge Pro device.
+9. In the **Attached data networks** section, select **Attach data network**. Choose whether you want to use an existing data network or create a new one, then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md?pivots=ase-pro-2#collect-data-network-values) to fill out the fields. Note the following:
+ - **ASE N6 virtual subnet** (if this site will support 5G UEs), **ASE SGi virtual subnet** (if this site will support 4G UEs), or **ASE N6/SGi virtual subnet** (if this site will support combined 4G and 5G UEs) must match the corresponding virtual network name on port 3 or 4 on your Azure Stack Edge Pro device.
- If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox.
- If you decided to keep NAPT disabled, ensure you configure your data network router with static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network.
In this step, you'll create the mobile network site resource representing the ph
Once you've finished filling out the fields, select **Attach**.

:::zone-end
-9. Repeat the previous step for each additional data network you want to configure.
-10. If you decided you want to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificate for this site, select **Next : Identity >**.
+10. Repeat the previous step for each additional data network you want to configure.
+11. If you decided you want to configure diagnostics packet collection or use a user-assigned managed identity for HTTPS certificates for this site, select **Next : Identity >**.
If you decided not to configure diagnostics packet collection or use a user-assigned managed identity for HTTPS certificates for this site, you can skip this step.
1. Select **+ Add** to configure a user-assigned managed identity.
1. In the **Select Managed Identity** side panel:
   - Select the **Subscription** from the dropdown.
   - Select the **Managed identity** from the dropdown.
-11. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate at this stage, you can skip this step.
+12. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate at this stage, you can skip this step.
1. Under **Provide custom HTTPS certificate?**, select **Yes**.
1. Use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
-12. In the **Local access** section, set the fields as follows:
+13. In the **Local access** section, set the fields as follows:
:::image type="content" source="media/create-a-site/create-site-local-access-tab.png" alt-text="Screenshot of the Azure portal showing the Local access configuration tab for a site resource.":::

- Under **Authentication type**, select the authentication method you decided to use in [Choose the authentication method for local monitoring tools](collect-required-information-for-a-site.md#choose-the-authentication-method-for-local-monitoring-tools).
- Under **Provide custom HTTPS certificate?**, select **Yes** or **No** based on whether you decided to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values). If you selected **Yes**, use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
-13. Select **Review + create**.
-14. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+14. Select **Review + create**.
+15. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
:::image type="content" source="media/create-a-site/create-site-validation.png" alt-text="Screenshot of the Azure portal showing successful validation of configuration values for a site resource.":::
- If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged with red dots. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
+ If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged with red X icons. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
-15. Once your configuration has been validated, you can select **Create** to create the site. The Azure portal will display the following confirmation screen when the site has been created.
+16. Once your configuration has been validated, you can select **Create** to create the site. The Azure portal will display the following confirmation screen when the site has been created.
:::image type="content" source="media/site-deployment-complete.png" alt-text="Screenshot of the Azure portal showing the confirmation of a successful deployment of a site.":::
-16. Select **Go to resource group**, and confirm that it contains the following new resources:
+17. Select **Go to resource group**, and confirm that it contains the following new resources:
- A **Mobile Network Site** resource representing the site as a whole. - A **Packet Core Control Plane** resource representing the control plane function of the packet core instance in the site.
If you decided not to configure diagnostics packet collection or use a user assi
:::image type="content" source="media/create-a-site/site-related-resources.png" alt-text="Screenshot of the Azure portal showing a resource group containing a site and its related resources." lightbox="media/create-a-site/site-related-resources.png":::
-17. If you want to assign additional packet cores to the site, for each new packet core resource see [Create additional Packet Core instances for a site using the Azure portal](create-additional-packet-core.md).
+18. If you want to assign additional packet cores to the site, for each new packet core resource see [Create additional Packet Core instances for a site using the Azure portal](create-additional-packet-core.md).
## Next steps
private-5g-core Create Additional Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-additional-packet-core.md
In this step, you'll create an additional packet core instance for a site in you
- Ensure **AKS-HCI** is selected in the **Platform** field.

:::zone pivot="ase-pro-gpu"
+9. If you want to enable UE Metric monitoring, use the information collected in [Collect UE Usage Tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values) to fill out the **Azure Event Hub Namespace**, **Event Hub name** and **User Assigned Managed Identity** values.
-9. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the site to fill out the fields in the **Access network** section.
+10. In the **Attached data networks** section, select **Attach data network**. Select the existing data network you used for the site then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following:
+
+11. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the site to fill out the fields in the **Access network** section.
> [!NOTE]
- > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site supports 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site supports 4G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro GPU device.
+ > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site supports 5G UEs), **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site supports 4G UEs), or **ASE N2/S1-MME virtual subnet** and **ASE N3/S1-U virtual subnet** (if this site supports both 4G and 5G UEs) must match the corresponding virtual network names on port 5 on your Azure Stack Edge Pro GPU device.
+
+12. In the **Attached data networks** section, select **Attach data network**. Select the existing data network you used for the site then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following:
+ - **ASE N6 virtual subnet** (if this site supports 5G UEs), **ASE SGi virtual subnet** (if this site supports 4G UEs), or **ASE N6/SGi virtual subnet** (if this site supports both 4G and 5G UEs) must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device.
+ - If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox.
+ - If you decided to keep NAPT disabled, ensure you configure your data network router with static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network.
+
+ Once you've finished filling out the fields, select **Attach**.
:::zone-end

:::zone pivot="ase-pro-2"
-9. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the site to fill out the fields in the **Access network** section.
- > [!NOTE]
- > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site supports 5G UEs) or **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site supports 4G UEs) must match the corresponding virtual network names on port 3 on your Azure Stack Edge Pro 2 device.
+9. If you want to enable UE Metric monitoring, select **Enable** from the **UE Metric monitoring** dropdown. Use the information collected in [Collect UE Usage Tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values) to fill out the **Azure Event Hub Namespace**, **Event Hub name** and **User Assigned Managed Identity** values.
10. In the **Attached data networks** section, select **Attach data network**. Select the existing data network you used for the site then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following:
- - **ASE N6 virtual subnet** (if this site supports 5G UEs) or **ASE SGi virtual subnet** (if this site supports 4G UEs) must match the corresponding virtual network name on port 6 on your Azure Stack Edge Pro device.
+
+11. Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the site to fill out the fields in the **Access network** section.
+ > [!NOTE]
+ > **ASE N2 virtual subnet** and **ASE N3 virtual subnet** (if this site supports 5G UEs), **ASE S1-MME virtual subnet** and **ASE S1-U virtual subnet** (if this site supports 4G UEs), or **ASE N2/S1-MME virtual subnet** and **ASE N3/S1-U virtual subnet** (if this site supports both 4G and 5G UEs) must match the corresponding virtual network names on port 3 on your Azure Stack Edge Pro 2 device.
+
+12. In the **Attached data networks** section, select **Attach data network**. Select the existing data network you used for the site then use the information you collected in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values) to fill out the fields. Note the following:
+ - **ASE N6 virtual subnet** (if this site supports 5G UEs), **ASE SGi virtual subnet** (if this site supports 4G UEs), or **ASE N6/SGi virtual subnet** (if this site supports both 4G and 5G UEs) must match the corresponding virtual network name on port 4 on your Azure Stack Edge Pro 2 device.
- If you decided not to configure a DNS server, clear the **Specify DNS addresses for UEs?** checkbox.
- If you decided to keep NAPT disabled, ensure you configure your data network router with static routes to the UE IP pools via the appropriate user plane data IP address for the corresponding attached data network.

Once you've finished filling out the fields, select **Attach**.
-11. Repeat the previous step for each additional data network configured on the site.
-12. If you decided to configure diagnostics packet collection or use a user assigned managed identity for HTTPS certificate for this site, select **Next : Identity >**.
+13. Repeat the previous step for each additional data network configured on the site.
+14. If you decided to configure diagnostics packet collection or use a user-assigned managed identity for HTTPS certificates for this site, select **Next : Identity >**.
If you decided not to configure diagnostics packet collection or use a user-assigned managed identity for HTTPS certificates for this site, you can skip this step.
1. Select **+ Add** to configure a user-assigned managed identity.
1. In the **Select Managed Identity** side panel:
   - Select the **Subscription** from the dropdown.
   - Select the **Managed identity** from the dropdown.
-13. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate for monitoring this site, you can skip this step.
+15. If you decided you want to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values), select **Next : Local access >**. If you decided not to provide a custom HTTPS certificate for monitoring this site, you can skip this step.
1. Under **Provide custom HTTPS certificate?**, select **Yes**.
1. Use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
-14. In the **Local access** section, set the fields as follows:
+16. In the **Local access** section, set the fields as follows:
- Under **Authentication type**, select the authentication method you decided to use in [Choose the authentication method for local monitoring tools](collect-required-information-for-a-site.md#choose-the-authentication-method-for-local-monitoring-tools).
- Under **Provide custom HTTPS certificate?**, select **Yes** or **No** based on whether you decided to provide a custom HTTPS certificate in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values). If you selected **Yes**, use the information you collected in [Collect local monitoring values](collect-required-information-for-a-site.md#collect-local-monitoring-values) to select a certificate.
-15. Select **Review + create**.
-16. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
+17. Select **Review + create**.
+18. Azure will now validate the configuration values you've entered. You should see a message indicating that your values have passed validation.
If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged with red dots. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
-17. Once your configuration has been validated, you can select **Create** to create the packet core instance. The Azure portal will display a confirmation screen when the packet core instance has been created.
+19. Once your configuration has been validated, you can select **Create** to create the packet core instance. The Azure portal will display a confirmation screen when the packet core instance has been created.
-18. Return to the **Site** overview, and confirm that it contains the new packet core instance.
+20. Return to the **Site** overview, and confirm that it contains the new packet core instance.
## Next steps
private-5g-core Create Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/create-site-arm-template.md
Four Azure resources are defined in the template.
| **Existing Data Network Name** | Enter the name of the data network. This value must match the name you used when creating the data network. |
| **Site Name** | Enter a name for your site.|
| **Azure Stack Edge Device** | Enter the resource ID of the Azure Stack Edge resource in the site. |
- | **Control Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro GPU device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. |
+ | **Control Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro GPU device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface; for combined 4G and 5G, it's the N2/S1-MME interface. |
| **Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
- | **User Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro GPU device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. |
- | **User Plane Data Interface Name** | Enter the virtual network name on port 6 on your Azure Stack Edge Pro GPU device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. |
+ | **User Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro GPU device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface; for combined 4G and 5G, it's the N3/S1-U interface. |
+ | **User Plane Data Interface Name** | Enter the virtual network name on port 6 on your Azure Stack Edge Pro GPU device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface; for combined 4G and 5G, it's the N6/SGi interface. |
|**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to UEs in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
|**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to UEs in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
- | **Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
+ | **Core Network Technology** | Enter *5GC* for 5G, *EPC* for 4G, or *EPC + 5GC* for combined 4G and 5G. |
| **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. |
| **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. |
| **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. |
Four Azure resources are defined in the template.
| **Existing Data Network Name** | Enter the name of the data network. This value must match the name you used when creating the data network. |
| **Site Name** | Enter a name for your site.|
| **Azure Stack Edge Device** | Enter the resource ID of the Azure Stack Edge resource in the site. |
- | **Control Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro 2 device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. |
+ | **Control Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro 2 device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface; for combined 4G and 5G, it's the N2/S1-MME interface. |
| **Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network. |
- | **User Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro 2 device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. |
- | **User Plane Data Interface Name** | Enter the virtual network name on port 4 on your Azure Stack Edge Pro 2 device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. |
+ | **User Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro 2 device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface; for combined 4G and 5G, it's the N3/S1-U interface. |
+ | **User Plane Data Interface Name** | Enter the virtual network name on port 4 on your Azure Stack Edge Pro 2 device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface; for combined 4G and 5G, it's the N6/SGi interface. |
|**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to UEs in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
|**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to UEs in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
- | **Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
+ | **Core Network Technology** | Enter *5GC* for 5G, *EPC* for 4G, or *EPC + 5GC* for combined 4G and 5G. |
| **Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network. |
| **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. |
| **Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site. |
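After filling in the parameters, deployment follows the standard ARM template pattern. A minimal sketch; the resource group, template file, and parameter names here are illustrative and may not match the template's actual parameter names:

```azurecli
# Hypothetical deployment of the site template into an existing resource group.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file create-site.json \
  --parameters siteName=mySite coreNetworkTechnology="EPC + 5GC"
```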
private-5g-core Data Plane Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/data-plane-packet-capture.md
Title: Perform data plane packet capture on a packet core instance
+ Title: Perform packet capture on a packet core instance
-description: In this how-to guide, you'll learn how to perform data plane packet capture on a packet core instance.
--
+description: In this how-to guide, you'll learn how to perform packet capture on the control plane or data plane on a packet core instance.
++ Previously updated : 12/13/2022 Last updated : 10/26/2023
-# Perform data plane packet capture on a packet core instance
+# Perform packet capture on a packet core instance
-Packet capture for data plane packets is performed using the **UPF Trace (UPFT)** tool. UPFT is similar to **tcpdump**, a data-network packet analyzer computer program that runs on a command line interface. You can use this tool to monitor and record packets on any user plane interface on the access network (N3 interface) or data network (N6 interface) on your device.
+Packet capture for control plane or data plane packets is performed using the **UPF Trace** tool. UPF Trace is similar to **tcpdump**, a data-network packet analyzer computer program that runs on a command line interface (CLI). You can use UPF Trace to monitor and record packets on any user plane interface on the access network (N3 interface) or data network (N6 interface) on your device, as well as the control plane (N2 interface). You can access UPF Trace using the Azure portal or the Azure CLI.
-Data plane packet capture works by mirroring packets to a Linux kernel interface, which can then be monitored using tcpdump. In this how-to guide, you'll learn how to perform data plane packet capture on a packet core instance.
+Packet capture works by mirroring packets to a Linux kernel interface, which can then be monitored using tcpdump. In this how-to guide, you'll learn how to perform packet capture on a packet core instance.
> [!IMPORTANT]
> Performing packet capture will reduce the performance of your system and the throughput of your data plane. It is therefore only recommended to use this tool at low scale during initial testing.

## Prerequisites
+You must have an AP5GC site deployed to perform packet capture.
+
+To perform packet capture using the command line, you must:
+- Identify the **Kubernetes - Azure Arc** resource representing the Azure Arc-enabled Kubernetes cluster on which your packet core instance is running.
- Ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Core namespace access](set-up-kubectl-access.md#core-namespace-access).
-## Performing packet capture
+## Performing packet capture using the Azure portal
+
+### Set up a storage account
++
+### Start a packet capture
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. Navigate to the **Packet Core Control Plane** overview page of the site you want to run a packet capture in.
+1. Select **Packet Capture** under the **Help** section on the left side. This will open a **Packet Capture** view.
+1. If this is the first time you've taken a packet capture using the portal, you will see an error message prompting you to configure a storage account. If so:
+ 1. Follow the link in the error message.
+ 1. Enter the **Storage account container URL** that was configured for diagnostics storage and select **Modify**.
+ > [!TIP]
+ > If you don't have the URL for your storage account container:
+ >
+ > 1. Navigate to your **Storage account**.
+ > 1. Select the **...** symbol on the right side of the container that you want to use for packet capture.
+ > 1. Select **Container properties** in the context menu.
+ > 1. Copy the contents of the **URL** field.
+ 1. Return to the **Packet Capture** view.
+1. Select **Start packet capture**.
+1. Fill in the details on the **Start packet capture** pane and select **Create**.
+1. The page will refresh every few seconds until the packet capture has completed. You can also use the **Refresh** button to refresh the page. If you want to stop the packet capture early, select **Stop packet capture**.
+1. Once the packet capture has completed, the AP5GC online service will save the output at the provided storage account URL.
+1. To download the packet capture output, use the **Copy to clipboard** button in the **Storage** or **File name** columns to copy those details and paste them into the **Search** box in the portal, then right-click the file and select **Download**.
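If you prefer the CLI for this lookup, a sketch along these lines retrieves the blob endpoint; the account, resource group, and container names are placeholders:

```azurecli
# Blob endpoint, e.g. https://mystorageaccount.blob.core.windows.net/
az storage account show --name mystorageaccount --resource-group myResourceGroup --query primaryEndpoints.blob --output tsv
# Append the container name (for example "packetcapture") to form the
# storage account container URL used above.
```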
+
+## Performing packet capture using the Azure CLI
1. In a command line with kubectl access to the Azure Arc-enabled Kubernetes cluster, enter the UPF-PP troubleshooter pod:
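   The exact command is elided in this digest; based on the namespace, pod, and container names used in the `kubectl cp` step later in this article, it is likely of this form (an assumption, not the verified command):

   ```azurecli
   # Open a shell in the troubleshooter container of the UPF-PP pod.
   kubectl exec -it -n core core-upf-pp-0 -c troubleshooter -- bash
   ```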
Data plane packet capture works by mirroring packets to a Linux kernel interface
upft list
```
- This should report a single interface on the access network (N3) and an interface for each attached data network (N6). For example:
+ This should report a single interface on the control plane network (N2), a single interface on the access network (N3), and an interface for each attached data network (N6). For example:
```azurecli
- n6trace1 (Data Network: enterprise)
- n6trace2 (Data Network: test)
+ n2trace
n3trace n6trace0 (Data Network: internet)
+ n6trace1 (Data Network: enterprise)
+ n6trace2 (Data Network: test)
```
1. Run `upftdump` with any parameters that you would usually pass to tcpdump. In particular, use `-i` to specify the interface and `-w` to specify where to write to. Close the UPF Trace tool when done by pressing <kbd>Ctrl + C</kbd>. The following examples are common use cases:
Data plane packet capture works by mirroring packets to a Linux kernel interface
- To capture packets on the N3 interface and the N6 interface for a single data network, enter the UPF-PP troubleshooter pod in two separate windows. In one window run `upftdump -i n3trace -w n3.pcap` and in the other window run `upftdump -i <N6 interface> -w n6.pcap` (use the N6 interface for the data network as identified in step 2).

> [!IMPORTANT]
- > Packet capture files may be large, particularly when running packet capture on all interfaces. Specify filters when running packet capture to reduce the file size - see the tcpdump documentation for the available filters.
+ > Packet capture files might be large, particularly when running packet capture on all interfaces. Specify filters when running packet capture to reduce the file size - see the tcpdump documentation for the available filters.
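For example, since `upftdump` accepts standard tcpdump options, one simple way to bound the file size is to cap the packet count (the count here is arbitrary):

```azurecli
# Stop automatically after 1000 packets to keep the capture file small.
upftdump -i n3trace -c 1000 -w n3-sample.pcap
```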
+ 1. Leave the container: ```azurecli
Data plane packet capture works by mirroring packets to a Linux kernel interface
kubectl cp -n core core-upf-pp-0:<path to output file> <location to copy to> -c troubleshooter
```
- The `tcpdump` may have been stopped in the middle of writing a packet, which can cause this step to produce an error stating `unexpected EOF`. However, your file should have copied successfully, but you can check your target output file to confirm.
+The `tcpdump` might have been stopped in the middle of writing a packet, which can cause this step to produce an `unexpected EOF` error. Your file should still have copied successfully; check the target output file to confirm.
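One quick check is to read the first few packets back from the copied file, assuming tcpdump is installed on the machine you copied it to:

```azurecli
# Print the first five packets from the copied capture file.
tcpdump -r n3.pcap -c 5
```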
1. Remove the output files:
private-5g-core Deploy Private Mobile Network With Site Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/deploy-private-mobile-network-with-site-arm-template.md
The following Azure resources are defined in the template.
|**Sim Group Name** | If you want to provision SIMs, enter the name of the SIM group to which the SIMs will be added. Otherwise, leave this field blank. |
|**Sim Resources** | If you want to provision SIMs, paste in the contents of the JSON file containing your SIM information. Otherwise, leave this field unchanged. |
| **Azure Stack Edge Device** | Enter the resource ID of the Azure Stack Edge resource in the site. |
- |**Control Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. |
+ |**Control Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface; for combined 4G and 5G, it's the N2/S1-MME interface. |
|**Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network.</br> Note: Please ensure that the N2 IP address specified here matches the N2 address configured on the ASE Portal. |
- |**User Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. |
- |**User Plane Data Interface Name** | Enter the virtual network name on port 6 on your Azure Stack Edge Pro device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. |
+ |**User Plane Access Interface Name** | Enter the virtual network name on port 5 on your Azure Stack Edge Pro device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface; for combined 4G and 5G, it's the N3/S1-U interface. |
+ |**User Plane Data Interface Name** | Enter the virtual network name on port 6 on your Azure Stack Edge Pro device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface; for combined 4G and 5G, it's the N6/SGi interface. |
|**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
- |**Data Network Name** | Enter the name of the data network. |
- |**Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
+ |**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
+ |**Data Network Name** | Enter the name of the data network. |
+ |**Core Network Technology** | Enter *5GC* for 5G, *EPC* for 4G, or *EPC + 5GC* for combined 4G and 5G. |
|**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.|
| **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. |
|**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.|
The following Azure resources are defined in the template.
|**Sim Group Name** | If you want to provision SIMs, enter the name of the SIM group to which the SIMs will be added. Otherwise, leave this field blank. |
|**Sim Resources** | If you want to provision SIMs, paste in the contents of the JSON file containing your SIM information. Otherwise, leave this field unchanged. |
| **Azure Stack Edge Device** | Enter the resource ID of the Azure Stack Edge resource in the site. |
- |**Control Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface. |
+ |**Control Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro device corresponding to the control plane interface on the access network. For 5G, this interface is the N2 interface; for 4G, it's the S1-MME interface; for combined 4G and 5G, it's the N2/S1-MME interface. |
|**Control Plane Access Ip Address** | Enter the IP address for the control plane interface on the access network.</br> Note: Please ensure that the N2 IP address specified here matches the N2 address configured on the ASE Portal. |
- |**User Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface. |
- |**User Plane Data Interface Name** | Enter the virtual network name on port 4 on your Azure Stack Edge Pro device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface. |
+ |**User Plane Access Interface Name** | Enter the virtual network name on port 3 on your Azure Stack Edge Pro device corresponding to the user plane interface on the access network. For 5G, this interface is the N3 interface; for 4G, it's the S1-U interface; for combined 4G and 5G, it's the N3/S1-U interface. |
+ |**User Plane Data Interface Name** | Enter the virtual network name on port 4 on your Azure Stack Edge Pro device corresponding to the user plane interface on the data network. For 5G, this interface is the N6 interface; for 4G, it's the SGi interface; for combined 4G and 5G, it's the N6/SGi interface. |
|**User Equipment Address Pool Prefix** | Enter the network address of the subnet from which dynamic IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support dynamic IP address allocation. |
|**User Equipment Static Address Pool Prefix** | Enter the network address of the subnet from which static IP addresses must be allocated to User Equipment (UEs) in CIDR notation. You can omit this if you don't want to support static IP address allocation. |
|**Data Network Name** | Enter the name of the data network. |
- |**Core Network Technology** | Enter *5GC* for 5G, or *EPC* for 4G. |
+ |**Core Network Technology** | Enter *5GC* for 5G, *EPC* for 4G, or *EPC + 5GC* for combined 4G and 5G. |
|**Napt Enabled** | Set this field depending on whether Network Address and Port Translation (NAPT) should be enabled for the data network.|
| **Dns Addresses** | Enter the DNS server addresses. You should only omit this if you don't need the UEs to perform DNS resolution, or if all UEs in the network will use their own locally configured DNS servers. |
|**Custom Location** | Enter the resource ID of the custom location that targets the Azure Kubernetes Service on Azure Stack HCI (AKS-HCI) cluster on the Azure Stack Edge Pro device in the site.|
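For reference, here's a minimal sketch of deploying such a template with the Azure CLI. The template file name and parameter names (`controlPlaneAccessInterfaceName`, `coreNetworkTechnology`, and so on) are hypothetical placeholders rather than the template's actual names; check the `parameters` section of your own template for the exact names and expected values.

```azurecli
# Deploy the site template into an existing resource group.
# Replace the file name and the parameter names/values with those from your own template.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file full-5gc-deployment.json \
  --parameters \
    controlPlaneAccessInterfaceName="N2" \
    userPlaneAccessInterfaceName="N3" \
    userPlaneDataInterfaceName="N6" \
    coreNetworkTechnology="5GC" \
    naptEnabled="Enabled"
```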
private-5g-core Gather Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/gather-diagnostics.md
You should always collect diagnostics as soon as possible after encountering an issue.
You must already have an AP5GC site deployed to collect diagnostics.
-## Collect values for diagnostics package gathering
+## Set up a storage account
-1. [Create a storage account](../storage/common/storage-account-create.md) for diagnostics with the following additional configuration:
- 1. In the **Data protection** tab, under **Access control**, select **Enable version-level immutability support**. This will allow you to specify a time-based retention policy for the account in the next step.
- 1. If you would like the content of your storage account to be automatically deleted after a period of time, [configure a default time-based retention policy](../storage/blobs/immutable-policy-configure-version-scope.md#configure-a-default-time-based-retention-policy) for your storage account.
- 1. [Create a container](../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container) for your diagnostics.
- 1. Make a note of the **Container blob** URL. For example:
- `https://storageaccountname.blob.core.windows.net/diagscontainername`
- 1. Navigate to your **Storage account**.
- 1. Select the **...** symbol on the right side of the container blob that you want to use for diagnostics collection.
- 1. Select **Container properties** in the context menu.
- 1. Copy the contents of the **URL** field in the **Container properties** view.
-1. Create a [User-assigned identity](../active-directory/managed-identities-azure-resources/overview.md) and assign it to the storage account created above with the **Storage Blob Data Contributor** role.
- > [!TIP]
- > You may have already created and associated a user-assigned identity when creating the site.
-1. Navigate to the **Packet core control plane** resource for the site.
-1. Select **Identity** under **Settings** in the left side menu.
-1. Select **Add**.
-1. Select the user-assigned managed identity you created and select **Add**.
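As a rough illustration, here's a minimal Azure CLI sketch of a storage account, container, and user-assigned identity setup for diagnostics. All resource names are placeholders, and the version-level immutability and retention settings are assumed to be configured separately (for example, in the portal):

```azurecli
# Create a storage account and a container for diagnostics packages.
az storage account create --name mydiagsstorage --resource-group myResourceGroup --location eastus
az storage container create --name diagscontainername --account-name mydiagsstorage --auth-mode login

# Create a user-assigned identity and grant it access to the storage account.
az identity create --name diags-identity --resource-group myResourceGroup
principalId=$(az identity show --name diags-identity --resource-group myResourceGroup --query principalId --output tsv)
storageId=$(az storage account show --name mydiagsstorage --resource-group myResourceGroup --query id --output tsv)
az role assignment create --assignee "$principalId" --role "Storage Blob Data Contributor" --scope "$storageId"
```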
## Gather diagnostics for a site

1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Navigate to the **Packet Core Control Plane** overview page of the site you want to gather diagnostics for.
-1. Select **Diagnostics Collection** under the **Support + Troubleshooting** section on the left side. This will open a **Diagnostics Collection** view.
-1. Enter the **Container URL** that was configured for diagnostics storage and append the file name that you want to give the diagnostics. For example:
+1. Select **Diagnostics Collection** under the **Help** section on the left side. This will open a **Diagnostics Collection** view.
+1. Enter the **Storage account blob URL** that was configured for diagnostics storage and append the file name that you want to give the diagnostics. For example:
   `https://storageaccountname.blob.core.windows.net/diagscontainername/diagsPackageName.zip`

   > [!TIP]
- > The **Container URL** should have been noted during creation. If it wasn't:
+ > The **Storage account blob URL** should have been noted during creation. If it wasn't:
   >
   > 1. Navigate to your **Storage account**.
   > 1. Select the **...** symbol on the right side of the container blob that you want to use for diagnostics collection.
You must already have an AP5GC site deployed to collect diagnostics.
1. Select **Diagnostics collection**.
1. The AP5GC online service will generate a package at the provided storage account URL. Once the portal reports that this has succeeded, you'll be able to download the diagnostics package ready to share with Azure support.
- 1. To download the diagnostics package, see [Download a block blob](/azure/storage/blobs/storage-quickstart-blobs-portal#download-a-block-blob).
+ 1. To download the diagnostics package, navigate to the storage account URL, right-click the file and select **Download**.
1. To open a support request and share the diagnostics package with Azure support, see [How to open a support request for Azure Private 5G Core](open-support-request.md).

## Troubleshooting
You must already have an AP5GC site deployed to collect diagnostics.
## Next steps
-To continue to monitor your 5G core:
-
+- [Perform packet capture on a packet core instance](data-plane-packet-capture.md)
- [Monitor Azure Private 5G Core with Azure Monitor platform metrics](monitor-private-5g-core-with-platform-metrics.md)
- [Monitor Azure Private 5G Core with packet core dashboards](packet-core-dashboards.md)
private-5g-core Manage Existing Sims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/manage-existing-sims.md
Previously updated : 06/16/2022 Last updated : 10/26/2023
## View existing SIMs
-You can view your existing SIMs in the Azure portal.
+You can view your configured SIMs in the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Search for and select the **Mobile Network** resource representing the private mobile network.
+1. Search for and select the **Mobile Network** resource representing the private mobile network containing your SIMs.
:::image type="content" source="media/mobile-network-search.png" alt-text="Screenshot of the Azure portal. It shows the results of a search for a Mobile Network resource."::: 1. To see a list of all existing SIMs in the private mobile network, select **SIMs** from the **Resource** menu.
- :::image type="content" source="media/manage-existing-sims/sims-list-inline.png" alt-text="Screenshot of the Azure portal. It shows a list of currently provisioned SIMs for a private mobile network." lightbox="media/manage-existing-sims/sims-list-enlarged.png":::
+ :::image type="content" source="media/manage-existing-sims/sims-list-inline.png" alt-text="Screenshot of the Azure portal. It shows a list of currently provisioned SIMs for a private mobile network." lightbox="media/manage-existing-sims/sims-list-inline.png":::
1. To see a list of existing SIMs in a particular SIM group, select **SIM groups** from the resource menu, and then select your chosen SIM group from the list.
- :::image type="content" source="media/sim-group-resource.png" alt-text="Screenshot of the Azure portal. It shows a list of currently provisioned SIMs in a SIM group." lightbox="media/sim-group-resource-enlarged.png":::
+ :::image type="content" source="media/sim-group-resource.png" alt-text="Screenshot of the Azure portal. It shows a list of currently provisioned SIMs in a SIM group." lightbox="media/sim-group-resource.png":::
+
+## View SIM statistics
+
+You can also view status information for connected devices in the Azure portal.
+
+1. Search for and select the **Mobile Network** resource representing the private mobile network containing your SIMs.
+1. In the resource menu, select **SIMs**.
+1. Select **SIM stats** from the ribbon.
+
+ :::image type="content" source="media/manage-existing-sims/sim-stats-button.png" alt-text="Screenshot of the Azure portal showing the SIM stats button in the ribbon." lightbox="media/manage-existing-sims/sim-stats-button.png":::
+
+1. The SIM stats page displays connected, disconnected and idle devices on the mobile network with basic status information for each device.
+
+ :::image type="content" source="media/manage-existing-sims/sim-stats-list.png" alt-text="Screenshot of the Azure portal showing the SIM stats page." lightbox="media/manage-existing-sims/sim-stats-list.png":::
+
+1. Select an IMSI number from the list to view detailed information for that device, including mobile identities, location information, connection information and session information. The information shown varies depending on the device state and whether it is connected to 4G or 5G.
+
+ :::image type="content" source="media/manage-existing-sims/sim-stats-ue-info.png" alt-text="Screenshot of the Azure portal showing the UE information page." lightbox="media/manage-existing-sims/sim-stats-ue-info.png":::
## Assign SIM policies
-SIMs need an assigned SIM policy before they can use your private mobile network. You may want to assign a SIM policy to an existing SIM that doesn't already have one, or you may want to change the assigned SIM policy for an existing SIM. For information on configuring SIM policies, see [Configure a SIM policy](configure-sim-policy-azure-portal.md).
+SIMs need an assigned SIM policy before they can use your private mobile network. You might want to assign a SIM policy to an existing SIM that doesn't already have one, or you might want to change the assigned SIM policy for an existing SIM. For information on configuring SIM policies, see [Configure a SIM policy](configure-sim-policy-azure-portal.md).
To assign a SIM policy to one or more SIMs:
## Assign static IP addresses
-Static IP address allocation ensures that a UE receives the same IP address every time it connects to the private mobile network. This is useful when you want Internet of Things (IoT) applications to be able to consistently connect to the same device. For example, you may configure a video analysis application with the IP addresses of the cameras providing video streams. If these cameras have static IP addresses, you won't need to reconfigure the video analysis application with new IP addresses each time the cameras restart.
+Static IP address allocation ensures that a UE receives the same IP address every time it connects to the private mobile network. This is useful when you want Internet of Things (IoT) applications to be able to consistently connect to the same device. For example, you can configure a video analysis application with the IP addresses of the cameras providing video streams. If these cameras have static IP addresses, you won't need to reconfigure the video analysis application with new IP addresses each time the cameras restart.
If you've configured static IP address allocation for your packet core instance(s), you can assign static IP addresses to the SIMs you've provisioned. If you have multiple data networks in your private mobile network, you can assign a different static IP address for each data network to the same SIM.
private-5g-core Manage Sim Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/manage-sim-groups.md
# Manage SIM groups - Azure portal
-*SIM groups* allow you to sort SIMs into categories for easier management. Each SIM must be a member of a SIM group, but can't be a member of more than one. If you only have a small number of SIMs, you may want to add them all to the same SIM group. Alternatively, you can create multiple SIM groups to sort your SIMs. For example, you could categorize your SIMs by their purpose (such as SIMs used by specific UE types like cameras or cellphones), or by their on-site location. In this how-to guide, you'll learn how to create, delete, and view SIM groups using the Azure portal.
+*SIM groups* allow you to sort SIMs into categories for easier management. Each SIM must be a member of a SIM group, but can't be a member of more than one. If you only have a small number of SIMs, you might want to add them all to the same SIM group. Alternatively, you can create multiple SIM groups to sort your SIMs. For example, you could categorize your SIMs by their purpose (such as SIMs used by specific UE types like cameras or cellphones), or by their on-site location. In this how-to guide, you'll learn how to create, delete, and view SIM groups using the Azure portal.
## Prerequisites
You can view your existing SIM groups in the Azure portal.
## Create a SIM group
-You can create new SIM groups in the Azure portal. As part of creating a SIM group, you'll be given the option of provisioning new SIMs to add to your new SIM group. If you want to provision new SIMs, you'll need to [Collect SIM and SIM Group values](collect-required-information-for-private-mobile-network.md#collect-sim-and-sim-group-values) before you start.
+You can create new SIM groups in the Azure portal. As part of creating a SIM group, you're given the option of provisioning new SIMs to add to your new SIM group. If you want to provision new SIMs, you need to [Collect SIM and SIM Group values](collect-required-information-for-private-mobile-network.md#collect-sim-and-sim-group-values) before you start.
To create a new SIM group:
1. Select **Next: Encryption**.
1. On the **Encryption** configuration tab, select your chosen encryption type next to **Encryption Type**. By default, Microsoft-managed keys (MMK) is selected. Once created, you cannot change the encryption type of a SIM group.
- - If you leave **Microsoft-managed keys (MMK)** selected, you will not need to enter any more configuration information on this tab.
- - If you select **Customer-managed Keys (CMK)**, a new set of fields will appear. You need to provide the Key URI and User-assigned identity created or collected in [Collect SIM and SIM Group values](collect-required-information-for-private-mobile-network.md#collect-sim-and-sim-group-values). These values can be updated as required after SIM group creation.
+ - If you leave **Microsoft-managed keys (MMK)** selected, you do not need to enter any more configuration information on this tab.
+ - If you select **Customer-managed Keys (CMK)**, a new set of fields appears. You need to provide the Key URI and User-assigned identity created or collected in [Collect SIM and SIM Group values](collect-required-information-for-private-mobile-network.md#collect-sim-and-sim-group-values). These values can be updated as required after SIM group creation.
:::image type="content" source="media/manage-sim-groups/create-sim-group-encryption-tab.png" alt-text="Screenshot of the Azure portal showing the Encryption configuration tab."::: 1. Select **Next: SIMs**. 1. On the **SIMs** configuration tab, select your chosen input method by selecting the appropriate option next to **How would you like to input the SIMs information?**. You can then input the information you collected for your SIMs. - If you decided that you don't want to provision any SIMs at this point, select **Add SIMs later**.
- - If you select **Add manually**, a new **Add SIM** button will appear under **Enter SIM profile configurations**. Select it, fill out the fields with the correct settings for the first SIM you want to provision, and select **Add SIM**. Repeat this process for every additional SIM you want to provision.
+ - If you select **Add manually**, a new **Add SIM** button appears under **Enter SIM profile configurations**. Select it, fill out the fields with the correct settings for the first SIM you want to provision, and select **Add SIM**. Repeat this process for every additional SIM you want to provision.
:::image type="content" source="media/add-sim-manually.png" alt-text="Screenshot of the Azure portal showing the Add SIM screen.":::
- - If you select **Upload JSON file**, the **Upload SIM profile configurations** field will appear. Use this field to upload your chosen JSON file.
+ - If you select **Upload JSON file**, the **Upload SIM profile configurations** field appears. Use this field to upload your chosen JSON file.
:::image type="content" source="media/manage-sim-groups/create-sim-group-sims-tab.png" alt-text="Screenshot of the Azure portal showing the SIMs configuration tab.":::
To create a new SIM group:
If the validation fails, you'll see an error message and the **Configuration** tab(s) containing the invalid configuration will be flagged with red dots. Select the flagged tab(s) and use the error messages to correct invalid configuration before returning to the **Review + create** tab.
-1. Once your configuration has been validated, you can select **Create** to create the SIM group. The Azure portal will display the following confirmation screen when the SIM group has been created.
+1. Once your configuration has been validated, you can select **Create** to create the SIM group. The Azure portal displays the following confirmation screen when the SIM group has been created.
:::image type="content" source="media/manage-sim-groups/sim-group-deployment-complete.png" alt-text="Screenshot of the Azure portal. It shows confirmation of the successful creation of a SIM group."::: 1. Click **Go to resource group** and then select your new SIM group from the list of resources. You'll be shown your new SIM group and any SIMs you've provisioned.
- :::image type="content" source="media/sim-group-resource.png" alt-text="Screenshot of the Azure portal showing a SIM group containing SIMs." lightbox="media/sim-group-resource-enlarged.png" :::
-
-1. At this point, your SIMs will not have any assigned SIM policies and so will not be brought into service. If you want to begin using the SIMs, [assign a SIM policy to them](manage-existing-sims.md#assign-sim-policies). If you've configured static IP address allocation for your packet core instance(s), you may also want to [assign static IP addresses](manage-existing-sims.md#assign-static-ip-addresses) to the SIMs you've provisioned.
+1. At this point, your SIMs do not have any assigned SIM policies and so won't be brought into service. If you want to begin using the SIMs, [assign a SIM policy to them](manage-existing-sims.md#assign-sim-policies). If you've configured static IP address allocation for your packet core instance(s), you might also want to [assign static IP addresses](manage-existing-sims.md#assign-static-ip-addresses) to the SIMs you've provisioned.
## Modify a SIM group
private-5g-core Modify Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md
To modify the packet core and/or access network configuration:
- Use the information you collected in [Collect packet core configuration values](collect-required-information-for-a-site.md#collect-packet-core-configuration-values) for the top-level configuration values.
- Use the information you collected in [Collect access network values](collect-required-information-for-a-site.md#collect-access-network-values) for the configuration values under **Access network**.
+ - If you want to enable UE Metric monitoring, use the information collected in [Collect UE Usage Tracking values](collect-required-information-for-a-site.md#collect-ue-usage-tracking-values) to fill out the **Azure Event Hub Namespace**, **Event Hub name** and **User Assigned Managed Identity** values.
+ > [!NOTE]
+ > You must reinstall the **packet core control plane** in order to use UE Metric monitoring if it was not already configured.
1. Choose the next step:
    - If you've finished modifying the packet core instance, go to [Submit and verify changes](#submit-and-verify-changes).
    - If you want to configure a new or existing data network and attach it to the packet core instance, go to [Attach a data network](#attach-a-data-network).
This change will require a manual packet core reinstall to take effect, see [Nex
## Remove data network resource
-If you removed an attached data network from the packet core and it is no longer attached to any packet cores or referenced by any SIM policies, you may remove the data network from the resource group:
+If you removed an attached data network from the packet core and it is no longer attached to any packet cores or referenced by any SIM policies, you can remove the data network from the resource group:
> [!NOTE]
> The data network that you want to delete must have no SIM policies associated with it. If the data network has one or more associated SIM policies, data network removal will be prevented.
private-5g-core Packet Core Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/packet-core-dashboards.md
The packet core dashboards are powered by *Grafana*, an open-source, metric analytics and visualization suite.
## Access the packet core dashboards

> [!TIP]
-> When signing in, if you see a warning in your browser that the connection isn't secure, you may be using a self-signed certificate to attest access to your local monitoring tools. We recommend following [Modify the local access configuration in a site](modify-local-access-configuration.md) to configure a custom HTTPS certificate signed by a globally known and trusted certificate authority.
+> When signing in, if you see a warning in your browser that the connection isn't secure, you might be using a self-signed certificate to attest access to your local monitoring tools. We recommend following [Modify the local access configuration in a site](modify-local-access-configuration.md) to configure a custom HTTPS certificate signed by a globally known and trusted certificate authority.
<a name='azure-active-directory'></a>
We'll go through the common concepts and operations you'll need to understand be
You can access the following packet core dashboards:

> [!TIP]
-> Some packet core dashboards display different panels depending on whether the packet core instance supports 5G or 4G user equipment (UEs).
+> Some packet core dashboards display different panels depending on whether the packet core instance supports 5G, 4G, or combined 4G and 5G user equipment (UEs).
- The **Overview dashboard** displays important *key performance indicators* (KPIs), including the number of connected devices, throughput, and any alerts firing in the system.

  :::image type="content" source="media/packet-core-dashboards/packet-core-overview-dashboard.png" alt-text="Screenshot of the packet core Overview dashboard." lightbox="media/packet-core-dashboards/packet-core-overview-dashboard.png":::
+ If you have configured 4G and 5G on a single packet core, the **Overview dashboard** displays 4G and 5G KPIs individually and combined.
+ Each panel on the overview dashboard links to another dashboard with detailed statistics about the KPI shown. You can access the link by hovering your cursor over the upper-left corner of the panel. You can then select the link in the pop-up.

  :::image type="content" source="media/packet-core-dashboards/packet-core-dashboard-panel-link.png" alt-text="Screenshot of the packet core dashboard. The link to the device and session statistics dashboard is shown.":::
You can access the following packet core dashboards:
- The **Uplink and Downlink Statistics dashboard** provides detailed statistics on the user plane traffic being handled by the packet core instance.
- :::image type="content" source="media/packet-core-dashboards/packet-core-uplink-downlink-stats-dashboard.png" alt-text="Screenshot of the Uplink and Downlink Statistics dashboard. Panels related to throughput, packet rates, and packet size are shown." lightbox="media/packet-core-dashboards/packet-core-device-session-stats-dashboard.png":::
+ :::image type="content" source="media/packet-core-dashboards/packet-core-uplink-downlink-stats-dashboard.png" alt-text="Screenshot of the Uplink and Downlink Statistics dashboard. Panels related to throughput, packet rates, and packet size are shown." lightbox="media/packet-core-dashboards/packet-core-uplink-downlink-stats-dashboard.png":::
- The **Debug** dashboards show detailed breakdowns of the request and response statistics for the packet core instance's interfaces.
The packet core dashboards use the following types of panel. For all panels, you
:::image type="content" source="media/packet-core-dashboards/packet-core-graph-panel.png" alt-text="Screenshot of a graph panel in the packet core dashboards. The panel displays information on total throughput statistics."::: -- **Single stat** panels (called "Singlestat" panels in the Grafana documentation) display a single statistic. The statistic may be presented as a simple count or as a gauge. These panels indicate whether a single statistic has exceeded a threshold by their color.
+- **Single stat** panels (called "Singlestat" panels in the Grafana documentation) display a single statistic. The statistic might be presented as a simple count or as a gauge. These panels indicate whether a single statistic has exceeded a threshold by their color.
    - The value displayed on a gauge single stat panel is shown in green at normal operational levels, amber when approaching a threshold, and red when the threshold has been breached.
    - The entirety of a count single stat panel will turn red if a threshold is breached.
private-5g-core Ping Traceroute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/ping-traceroute.md
To access the local UI, see [Tutorial: Connect to Azure Stack Edge Pro with GPU]
   upft list
   ```
- This should report a single interface on the access network (N3) and an interface for each attached data network (N6). For example:
+ This should report a single interface on the control plane network (N2), a single interface on the access network (N3) and an interface for each attached data network (N6). For example:
```azurecli
- n6trace1 (Data Network: enterprise)
- n6trace2 (Data Network: test)
+ n2trace
n3trace n6trace0 (Data Network: internet)
+ n6trace1 (Data Network: enterprise)
+ n6trace2 (Data Network: test)
   ```

1. Run the ping command, specifying the network and IP address to test. You can specify `access` for the access network or the network name for a data network.
private-5g-core Private Mobile Network Design Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/private-mobile-network-design-requirements.md
zone_pivot_groups: ase-pro-version
# Private mobile network design requirements
-This article helps you design and prepare for implementing a private 4G or 5G network based on Azure Private 5G Core (AP5GC). It aims to provide an understanding of how these networks are constructed and the decisions that you need to make as you plan your network.
+This article helps you design and prepare for implementing a private 4G or 5G network based on Azure Private 5G Core (AP5GC). It aims to provide an understanding of how these networks are constructed and the decisions that you need to make as you plan your network.
## Azure Private MEC and Azure Private 5G Core
-[Azure private multi-access edge compute (MEC)](../private-multi-access-edge-compute-mec/overview.md) is a solution that combines Microsoft compute, networking, and application services into a deployment at enterprise premises (the edge). These edge deployments are managed centrally from the cloud. Azure Private 5G Core is an Azure service within Azure Private Multi-access Edge Compute (MEC) that provides 4G and 5G core network functions at the enterprise edge. At the enterprise edge site, devices attach across a cellular radio access network (RAN) and are connected via the Azure Private 5G Core service to upstream networks, applications, and resources. Optionally, devices may use the local compute capability provided by Azure Private MEC to process data streams at very low latency, all under the control of the enterprise.
+[Azure private multi-access edge compute (MEC)](../private-multi-access-edge-compute-mec/overview.md) is a solution that combines Microsoft compute, networking, and application services into a deployment at enterprise premises (the edge). These edge deployments are managed centrally from the cloud. Azure Private 5G Core is an Azure service within Azure Private Multi-access Edge Compute (MEC) that provides 4G and 5G core network functions at the enterprise edge. At the enterprise edge site, devices attach across a cellular radio access network (RAN) and are connected via the Azure Private 5G Core service to upstream networks, applications, and resources. Optionally, devices might use the local compute capability provided by Azure Private MEC to process data streams at very low latency, all under the control of the enterprise.
:::image type="content" source="media/private-5g-elements.png" alt-text="Diagram displaying the components of a private network solution. UEs, RANs and sites are at the edge, while Azure region management is in the cloud.":::
There are multiple ways to set up your network for use with AP5GC. The exact set
### Subnets and IP addresses
-You may have existing IP networks at the enterprise site that the private cellular network will have to integrate with. This might mean, for example:
+You might have existing IP networks at the enterprise site that the private cellular network will have to integrate with. This might mean, for example:
- Selecting IP subnets and IP addresses for AP5GC that match existing subnets without clashing addresses. - Segregating the new network via IP routers or using the private RFC 1918 address space for subnets.
You need to document the IPv4 subnets that will be used for the deployment and a
### Network access
-Your design must reflect the enterprise's rules on what networks and assets should be reachable by the RAN and UEs on the private 5G network. For example, they might be permitted to access local Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), the internet, or Azure, but not a factory operations local area network (LAN). You may need to arrange for remote access to the network so that you can troubleshoot issues without requiring a site visit. You also need to consider how the enterprise site will be connected to upstream networks such as Azure for management and/or for access to other resources and applications outside of the enterprise site.
+Your design must reflect the enterprise's rules on what networks and assets should be reachable by the RAN and UEs on the private 5G network. For example, they might be permitted to access local Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), the internet, or Azure, but not a factory operations local area network (LAN). You might need to arrange for remote access to the network so that you can troubleshoot issues without requiring a site visit. You also need to consider how the enterprise site will be connected to upstream networks such as Azure for management and/or for access to other resources and applications outside of the enterprise site.
-You need to agree with the enterprise team which IP subnets and addresses will be allowed to communicate with each other. Then, create a routing plan and/or access control list (ACL) configuration that implements this agreement on the local IP infrastructure. You may also use virtual local area networks (VLANs) to partition elements at layer 2, configuring your switch fabric to assign connected ports to specific VLANs (for example, to put the Azure Stack Edge port used for RAN access into the same VLAN as the RAN units connected to the Ethernet switch). You should also agree with the enterprise to set up an access mechanism, such as a virtual private network (VPN), that allows your support personnel to remotely connect to the management interface of each element in the solution. You also need an IP link between Azure Private 5G Core and Azure for management and telemetry.
+You need to agree with the enterprise team which IP subnets and addresses will be allowed to communicate with each other. Then, create a routing plan and/or access control list (ACL) configuration that implements this agreement on the local IP infrastructure. You might also use virtual local area networks (VLANs) to partition elements at layer 2, configuring your switch fabric to assign connected ports to specific VLANs (for example, to put the Azure Stack Edge port used for RAN access into the same VLAN as the RAN units connected to the Ethernet switch). You should also agree with the enterprise to set up an access mechanism, such as a virtual private network (VPN), that allows your support personnel to remotely connect to the management interface of each element in the solution. You also need an IP link between Azure Private 5G Core and Azure for management and telemetry.
### RAN compliance
The RAN that you use to broadcast the signal across the enterprise site must com
- You have received permission for the RAN to broadcast using spectrum in a certain location, for example, by grant from a telecom operator, regulatory authority or via a technological solution such as a Spectrum Access System (SAS). - The RAN units in a site have access to high-precision timing sources, such as Precision Time Protocol (PTP) and GPS location services.
-You should ask your RAN partner for the countries/regions and frequency bands for which the RAN is approved. You may find that you need to use multiple RAN partners to cover the countries and regions in which you provide your solution. Although the RAN, UE and packet core all communicate using standard protocols, we recommend that you perform interoperability testing for the specific 4G Long-Term Evolution (LTE) or 5G standalone (SA) protocol between Azure Private 5G Core, UEs and the RAN prior to any deployment at an enterprise customer.
+You should ask your RAN partner for the countries/regions and frequency bands for which the RAN is approved. You might find that you need to use multiple RAN partners to cover the countries and regions in which you provide your solution. Although the RAN, UE and packet core all communicate using standard protocols, we recommend that you perform interoperability testing for the specific 4G Long-Term Evolution (LTE) or 5G standalone (SA) protocol between Azure Private 5G Core, UEs and the RAN prior to any deployment at an enterprise customer.
-Your RAN will transmit a Public Land Mobile Network Identity (PLMN ID) to all UEs on the frequency band it is configured to use. You should define the PLMN ID and confirm your access to spectrum. In some countries/regions, spectrum must be obtained from the national/regional regulator or incumbent telecommunications operator. For example, if you're using the band 48 Citizens Broadband Radio Service (CBRS) spectrum, you may need to work with your RAN partner to deploy a Spectrum Access System (SAS) domain proxy on the enterprise site so that the RAN can continuously check that it is authorized to broadcast.
+Your RAN will transmit a Public Land Mobile Network Identity (PLMN ID) to all UEs on the frequency band it is configured to use. You should define the PLMN ID and confirm your access to spectrum. In some countries/regions, spectrum must be obtained from the national/regional regulator or incumbent telecommunications operator. For example, if you're using the band 48 Citizens Broadband Radio Service (CBRS) spectrum, you might need to work with your RAN partner to deploy a Spectrum Access System (SAS) domain proxy on the enterprise site so that the RAN can continuously check that it is authorized to broadcast.
#### Maximum Transmission Units (MTUs)

The Maximum Transmission Unit (MTU) is a property of an IP link, and it is configured on the interfaces at each end of the link. Packets that exceed an interface's configured MTU are split into smaller packets via IPv4 fragmentation prior to sending and are then reassembled at their destination. However, if an interface's configured MTU is higher than the link's supported MTU, the packet will fail to be transmitted correctly.
-To avoid transmission issues caused by IPv4 fragmentation, a 4G or 5G packet core instructs UEs what MTU they should use. However, UEs do not always respect the MTU signaled by the packet core.
+To avoid transmission issues caused by IPv4 fragmentation, a 4G or 5G packet core instructs UEs what MTU they should use. However, UEs do not always respect the MTU signaled by the packet core.
IP packets from UEs are tunneled through from the RAN, which adds overhead from encapsulation. The MTU value for the UE should therefore be smaller than the MTU value used between the RAN and the packet core to avoid transmission issues.
-RANs typically come pre-configured with an MTU of 1500. The packet core's default UE MTU is 1440 bytes to allow for encapsulation overhead. These values maximize RAN interoperability, but risk that certain UEs will not observe the default MTU and will generate larger packets that require IPv4 fragmentation and that may be dropped by the network. If you are affected by this issue, it is strongly recommended to configure the RAN to use an MTU of 1560 or higher, which allows a sufficient overhead for the encapsulation and avoids fragmentation with a UE using a standard MTU of 1500.
+RANs typically come pre-configured with an MTU of 1500. The packet core's default UE MTU is 1440 bytes to allow for encapsulation overhead. These values maximize RAN interoperability, but risk that certain UEs will not observe the default MTU and will generate larger packets that require IPv4 fragmentation and that might be dropped by the network. If you are affected by this issue, it is strongly recommended to configure the RAN to use an MTU of 1560 or higher, which allows a sufficient overhead for the encapsulation and avoids fragmentation with a UE using a standard MTU of 1500.
You can also change the UE MTU signaled by the packet core. We recommend setting the MTU to a value within the range supported by your UEs and 60 bytes below the MTU signaled by the RAN. Note that:
private-5g-core Ue Usage Event Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/ue-usage-event-hub.md
Title: Monitor UE usage with Event Hubs (preview)
+ Title: Monitor UE usage with Event Hubs
description: Information on using Azure Event Hubs to monitor UE usage in your private mobile network.
Last updated 05/24/2023
-# Monitor UE usage with Event Hubs (preview)
+# Monitor UE usage with Event Hubs
Azure Private 5G Core can be configured to integrate with [Event Hubs](/azure/event-hubs), allowing you to monitor UE usage. Event Hubs is a modern big data streaming platform and event ingestion service that can seamlessly integrate with AP5GC. The service can process millions of events per second with low latency. The data sent to an Event Hubs instance can be transformed and stored by using any real-time analytics providers or batching or storage adapters. You can monitor UE usage based on the monitoring data generated by Azure Event Hubs, and analyze or alert on this data with [Azure Monitor](/azure/azure-monitor/overview).
-> [!IMPORTANT]
-> Azure Private 5G Core integration with Event Hubs is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+## Prerequisites
+
+- You must already have an Event Hubs instance with a shared access policy. The shared access policy must have send and receive access configured.
+ > [!NOTE]
+ > Only the first shared access policy for the event hub will be used by this feature. Any additional shared access policies will be ignored.
+- You must have a user-assigned managed identity that has the Resource Policy Contributor or Owner role for the Event Hubs instance and is assigned to the Packet Core Control Plane for the site.
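As an illustration of these prerequisites, the following Azure CLI sketch creates an Event Hubs namespace and event hub, a shared access policy with send and receive rights, and the role assignment for the managed identity. All resource names are placeholders, and the identity `my-identity` is assumed to already exist:

```azurecli
# Create an Event Hubs namespace and an event hub for UE usage data.
az eventhubs namespace create --name myAp5gcNamespace --resource-group myResourceGroup --location eastus
az eventhubs eventhub create --name ue-usage --namespace-name myAp5gcNamespace --resource-group myResourceGroup

# Create a shared access policy with send and receive (Listen) access.
az eventhubs eventhub authorization-rule create \
  --name ue-usage-policy \
  --eventhub-name ue-usage \
  --namespace-name myAp5gcNamespace \
  --resource-group myResourceGroup \
  --rights Send Listen

# Grant an existing user-assigned identity the Resource Policy Contributor role on the event hub.
eventHubId=$(az eventhubs eventhub show --name ue-usage --namespace-name myAp5gcNamespace --resource-group myResourceGroup --query id --output tsv)
principalId=$(az identity show --name my-identity --resource-group myResourceGroup --query principalId --output tsv)
az role assignment create --assignee "$principalId" --role "Resource Policy Contributor" --scope "$eventHubId"
```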
## Configure UE usage monitoring
-UE usage monitoring can be configured during site creation or at a later stage. If you want to configure UE usage monitoring for a site, please contact your support representative.
+UE usage monitoring can be configured during [site creation](create-a-site.md) or at a later stage by [modifying your site](modify-packet-core.md).
Once Event Hubs is receiving data from your AP5GC deployment you can write an application, using SDKs [such as .NET](/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send?tabs=passwordless%2Croles-azure-portal), to consume event data and produce useful metric data.
When configured, AP5GC will send data usage reports per QoS flow level for all PDU sessions to your Event Hubs instance.
## Azure Stream Analytics
-Azure Stream Analytics allow you to process and analyze streaming data from Event Hubs. See [Process data from your event hub using Azure Stream Analytics](/azure/event-hubs/process-data-azure-stream-analytics) for more information.
+Azure Stream Analytics allows you to process and analyze streaming data from Event Hubs. See [Process data from your event hub using Azure Stream Analytics](/azure/event-hubs/process-data-azure-stream-analytics) for more information.
## UE usage schema
role-based-access-control Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/best-practices.md
Previously updated : 06/28/2022 Last updated : 11/06/2023 #Customer intent: As a dev, devops, or IT admin, I want to learn how to best use Azure RBAC.
For information about how to assign roles, see [Assign Azure roles using the Azu
You should have a maximum of 3 subscription owners to reduce the potential for breach by a compromised owner. This recommendation can be monitored in Microsoft Defender for Cloud. For other identity and access recommendations in Defender for Cloud, see [Security recommendations - a reference guide](../security-center/recommendations-reference.md).
+## Limit privileged administrator role assignments
+
+Some roles are identified as [privileged administrator roles](./role-assignments-steps.md#privileged-administrator-roles). Consider taking the following actions to improve your security posture:
+
+- Remove unnecessary privileged role assignments.
+- Avoid assigning a privileged administrator role when a [job function role](./role-assignments-steps.md#job-function-roles) can be used instead.
+- If you must assign a privileged administrator role, use a narrow scope, such as resource group or resource, instead of a broader scope, such as management group or subscription.
+- If you are assigning a role with permission to create role assignments, consider adding a condition to constrain the role assignment. For more information, see [Delegate the Azure role assignment task to others with conditions (preview)](delegate-role-assignments-portal.md).
+
+For more information, see [List or manage privileged administrator role assignments](./role-assignments-list-portal.md#list-or-manage-privileged-administrator-role-assignments).
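To illustrate the narrow-scope and condition guidance above, here's a hedged Azure CLI sketch. The assignee, scope, and role definition GUID are placeholders, the role name reflects its preview-era naming, and the condition string follows the format described in the linked delegation article:

```azurecli
# Prefer a job function role at a narrow scope over a privileged role at a broad scope.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup"

# If you must delegate role assignment, add a condition constraining which role
# definition IDs the delegate can assign (the GUID below is a placeholder).
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Role Based Access Control Administrator (Preview)" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myResourceGroup" \
  --condition "((!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})) OR (@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals {00000000-0000-0000-0000-000000000000}))" \
  --condition-version "2.0"
```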
+
<a name='use-azure-ad-privileged-identity-management'></a>

## Use Microsoft Entra Privileged Identity Management
For more information, see [Assign a role using the unique role ID and Azure Powe
## Avoid using a wildcard when creating custom roles
-When creating custom roles, you can use the wildcard (`*`) character to define permissions. It's recommended that you specify `Actions` and `DataActions` explicitly instead of using the wildcard (`*`) character. The additional access and permissions granted through future `Actions` or `DataActions` may be unwanted behavior using the wildcard. For more information, see [Azure custom roles](custom-roles.md#wildcard-permissions).
+When creating custom roles, you can use the wildcard (`*`) character to define permissions. It's recommended that you specify `Actions` and `DataActions` explicitly instead of using the wildcard (`*`) character. The additional access and permissions granted through future `Actions` or `DataActions` might be unwanted behavior using the wildcard. For more information, see [Azure custom roles](custom-roles.md#wildcard-permissions).
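As an example of the explicit style, the following sketch defines a custom role that lists `Actions` and `DataActions` individually rather than using `*`. The role name, the chosen actions, and the subscription ID are illustrative placeholders:

```azurecli
# Define a custom role with explicit actions rather than a wildcard such as "Microsoft.Storage/*".
cat > storage-auditor-role.json <<'EOF'
{
  "Name": "Storage Blob Auditor (example)",
  "Description": "Example custom role that lists actions explicitly instead of using a wildcard.",
  "Actions": [
    "Microsoft.Storage/storageAccounts/read",
    "Microsoft.Storage/storageAccounts/blobServices/containers/read"
  ],
  "DataActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"
  ],
  "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"]
}
EOF
az role definition create --role-definition @storage-auditor-role.json
```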
## Next steps
role-based-access-control Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md
Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
role-based-access-control Role Assignments List Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-list-portal.md
Previously updated : 09/13/2022 Last updated : 11/06/2023
Users that have been assigned the [Owner](built-in-roles.md#owner) role for a su
![Screenshot of subscription Access control and Role assignments tab.](./media/role-assignments-list-portal/sub-access-control-role-assignments-owners.png)
+## List or manage privileged administrator role assignments
+
+On the **Role assignments** tab, you can list and see the count of privileged administrator role assignments at the current scope. For more information, see [Privileged administrator roles](role-assignments-steps.md#privileged-administrator-roles).
+
+1. In the Azure portal, click **All services** and then select the scope. For example, you can select **Management groups**, **Subscriptions**, **Resource groups**, or a resource.
+
+1. Click the specific resource.
+
+1. Click **Access control (IAM)**.
+
+1. Click the **Role assignments** tab and then click the **Privileged** tab to list the privileged administrator role assignments at this scope.
+
+ :::image type="content" source="./media/role-assignments-list-portal/access-control-role-assignments-privileged.png" alt-text="Screenshot of Access control page, Role assignments tab, and Privileged tab showing privileged role assignments." lightbox="./media/role-assignments-list-portal/access-control-role-assignments-privileged.png":::
+
+1. To see the count of privileged administrator role assignments at this scope, see the **Privileged** card.
+
+1. To manage privileged administrator role assignments, see the **Privileged** card and click **View assignments**.
+
+ On the **Manage privileged role assignments** page, you can add a condition to constrain the privileged role assignment or remove the role assignment. For more information, see [Delegate the Azure role assignment task to others with conditions (preview)](delegate-role-assignments-portal.md).
+
+ :::image type="content" source="./media/role-assignments-list-portal/access-control-role-assignments-privileged-manage.png" alt-text="Screenshot of Manage privileged role assignments page showing how to add conditions or remove role assignments." lightbox="./media/role-assignments-list-portal/access-control-role-assignments-privileged-manage.png":::
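The **Privileged** tab is a portal feature. As a rough Azure CLI approximation, you can list role assignments at a scope and filter on well-known privileged role names; note this checks role names rather than the underlying actions, and the scope is a placeholder:

```azurecli
# List role assignments at a scope, filtered to common privileged administrator roles.
az role assignment list \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000" \
  --query "[?roleDefinitionName=='Owner' || roleDefinitionName=='Contributor' || roleDefinitionName=='User Access Administrator'].{principal:principalName, role:roleDefinitionName, scope:scope}" \
  --output table
```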
+
## List role assignments at a scope

1. In the Azure portal, click **All services** and then select the scope. For example, you can select **Management groups**, **Subscriptions**, **Resource groups**, or a resource.
role-based-access-control Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-portal.md
Previously updated : 09/20/2023 Last updated : 11/06/2023
If you need to assign administrator roles in Microsoft Entra ID, see [Assign Mic
1. If you want to assign a privileged administrator role, select the **Privileged administrator roles** tab to select the role.
- Privileged administrator roles are roles that grant privileged administrator access, such as the ability to manage Azure resources or assign roles to other users. You should avoid assigning a privileged administrator role when a job function role can be assigned instead. If you must assign a privileged administrator role, use a narrow scope, such as resource group or resource. For more information, see [Privileged administrator roles](./role-assignments-steps.md#privileged-administrator-roles).
-
- ![Screenshot of Add role assignment page with Privileged administrator roles tab selected.](./media/shared/privileged-administrator-roles.png)
+ For best practices when using privileged administrator role assignments, see [Best practices for Azure RBAC](best-practices.md#limit-privileged-administrator-role-assignments).
+
+ ![Screenshot of Add role assignment page with Privileged administrator roles tab selected.](./media/shared/privileged-administrator-roles.png)
1. In the **Details** column, click **View** to get more details about a role.
role-based-access-control Role Assignments Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-steps.md
Previously updated : 08/09/2023 Last updated : 11/06/2023
Privileged administrator roles are roles that grant privileged administrator acc
| --- | --- |
| [Owner](built-in-roles.md#owner) | <ul><li>Grants full access to manage all resources</li><li>Assign roles in Azure RBAC</li></ul> |
| [Contributor](built-in-roles.md#contributor) | <ul><li>Grants full access to manage all resources</li><li>Can't assign roles in Azure RBAC</li><li>Can't manage assignments in Azure Blueprints or share image galleries</li></ul> |
+| [Role Based Access Administrator (Preview)](built-in-roles.md#role-based-access-control-administrator-preview) | <ul><li>Manage user access to Azure resources</li><li>Assign roles in Azure RBAC</li><li>Assign themselves or others the Owner role</li><li>Can't manage access using other ways, such as Azure Policy</li></ul> |
| [User Access Administrator](built-in-roles.md#user-access-administrator) | <ul><li>Manage user access to Azure resources</li><li>Assign roles in Azure RBAC</li><li>Assign themselves or others the Owner role</li></ul> |
-It's a best practice to grant users the least privilege to get their work done. You should avoid assigning a privileged administrator role when a job function role can be assigned instead. If you must assign a privileged administrator role, use a narrow scope, such as resource group or resource, instead of a broader scope, such as management group or subscription.
+For best practices when using privileged administrator role assignments, see [Best practices for Azure RBAC](best-practices.md#limit-privileged-administrator-role-assignments). For more information, see [Privileged administrator role definition](./role-definitions.md#privileged-administrator-role-definition).
## Step 3: Identify the needed scope
role-based-access-control Role Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-definitions.md
Previously updated : 04/05/2023 Last updated : 11/06/2023
Although it's possible to create a custom role with a resource instance in `Assi
For more information about `AssignableScopes` for custom roles, see [Azure custom roles](custom-roles.md).
+## Privileged administrator role definition
+
+Privileged administrator roles are roles that grant privileged administrator access, such as the ability to manage Azure resources or assign roles to other users. If a built-in or custom role includes any of the following actions, it is considered privileged. For more information, see [List or manage privileged administrator role assignments](./role-assignments-list-portal.md#list-or-manage-privileged-administrator-role-assignments).
+
+> [!div class="mx-tableFixed"]
+> | Action string | Description |
+> | | |
+> | `*` | Create and manage resources of all types. |
+> | `*/delete` | Delete resources of all types. |
+> | `*/write` | Write resources of all types. |
+> | `Microsoft.Authorization/denyAssignments/delete` | Delete a deny assignment at the specified scope. |
+> | `Microsoft.Authorization/denyAssignments/write` | Create a deny assignment at the specified scope. |
+> | `Microsoft.Authorization/roleAssignments/delete` | Delete a role assignment at the specified scope. |
+> | `Microsoft.Authorization/roleAssignments/write` | Create a role assignment at the specified scope. |
+> | `Microsoft.Authorization/roleDefinitions/delete` | Delete the specified custom role definition. |
+> | `Microsoft.Authorization/roleDefinitions/write` | Create or update a custom role definition with specified permissions and assignable scopes. |
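To check whether a particular role includes any of these actions, you can inspect its definition, for example with the Azure CLI (shown here for the built-in Contributor role; substitute any built-in or custom role name):

```azurecli
# Inspect a role definition's actions; if any match the strings above (for example, "*"
# or "Microsoft.Authorization/roleAssignments/write"), the role is considered privileged.
az role definition list --name "Contributor" --query "[0].permissions[0].actions"
```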
+
## Next steps

* [Understand role assignments](role-assignments.md)
search Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md
Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
Automation rules provide a way to automate the handling of Microsoft security al
Microsoft security alerts include the following:

-- Microsoft Defender for Cloud Apps (formerly Microsoft Cloud App Security)
- Microsoft Entra ID Protection
-- Microsoft Defender for Cloud (formerly Azure Defender or Azure Security Center)
-- Defender for IoT (formerly Azure Security Center for IoT)
+- Microsoft Defender for Cloud
+- Microsoft Defender for Cloud Apps
- Microsoft Defender for Office 365
- Microsoft Defender for Endpoint
- Microsoft Defender for Identity
+- Defender for IoT
### Multiple sequenced playbooks/actions in a single rule
sentinel Connect Services Api Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-services-api-based.md
This article presents information that is common to the group of API-based data connectors.
|Data connector |Licensing, costs, and other prerequisites |
|---|---|
|Microsoft Entra ID Protection | - [Microsoft Entra ID P2 subscription](https://azure.microsoft.com/pricing/details/active-directory/)<br> - Other charges may apply |
- |Dynamics 365 | - [Microsoft Dynamics 365 production license](/office365/servicedescriptions/microsoft-dynamics-365-online-service-description). Not available for sandbox environments.<br>- At least one user assigned a Microsoft/Office 365 [E1 or greater](/power-platform/admin/enable-use-comprehensive-auditing#requirements) license.<br>- Other charges may apply |
+ |Dynamics 365 | - [Microsoft Dynamics 365 production license](/office365/servicedescriptions/microsoft-dynamics-365-online-service-description). Not available for sandbox environments.<br>- At least one user assigned a Microsoft/Office 365 [E1 or greater](/power-platform/admin/enable-use-comprehensive-auditing#requirements) license. <br>- Audit logging enabled in Microsoft Purview. See [Turn auditing on or off](/purview/audit-log-enable-disable). <br>- Audit logging enabled in your Microsoft Dataverse environment. See [Microsoft Dataverse and model-driven apps activity logging](/power-platform/admin/enable-use-comprehensive-auditing). <br>- Other charges may apply. |
|Microsoft Defender for Cloud Apps|For Cloud Discovery logs, [enable Microsoft Sentinel as your SIEM in Microsoft Defender for Cloud Apps](/cloud-app-security/siem-sentinel)|
|Microsoft Defender for Endpoint|Valid license for [Microsoft Defender for Endpoint deployment](/microsoft-365/security/defender-endpoint/production-deployment)|
|Microsoft Defender for Office 365|Valid license for [Office 365 ATP Plan 2](/microsoft-365/security/office-365-security/office-365-atp#office-365-atp-plan-1-and-plan-2)|
service-bus-messaging Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md
Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
service-fabric Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md
Previously updated : 11/03/2023 Last updated : 11/06/2023 # Azure Policy built-in definitions for Azure Service Fabric
service-fabric Service Fabric Cluster Fabric Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-fabric-settings.md
The following is a list of Fabric settings that you can customize, organized by
| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
| | | | |
+|AllowDisableEnableService|Bool, default is FALSE |Dynamic|Flag to indicate if it's allowed to execute Disable/Enable feature |
|AllowNodeStateRemovedForSeedNode|Bool, default is FALSE |Dynamic|Flag to indicate if it's allowed to remove node state for a seed node |
|BuildReplicaTimeLimit|TimeSpan, default is Common::TimeSpan::FromSeconds(3600)|Dynamic|Specify timespan in seconds. The time limit for building a stateful replica; after which a warning health report will be initiated |
|ClusterPauseThreshold|int, default is 1|Dynamic|If the number of nodes in system go below this value then placement; load balancing; and failover is stopped. |
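For an Azure-hosted cluster, settings like these can also be changed with the Azure CLI. The sketch below is illustrative only; the section name (`FailoverManager`) and the value shown are assumptions to adapt to your own cluster.

```azurecli-interactive
## Sketch: update a single fabric setting on an Azure-hosted cluster.
## The section, parameter, and value here are examples, not recommendations.
az sf cluster setting set \
    --resource-group <resource-group-name> \
    --cluster-name <cluster-name> \
    --section FailoverManager \
    --parameter ClusterPauseThreshold \
    --value 2
```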
The following is a list of Fabric settings that you can customize, organized by
|AffinityConstraintPriority | Int, default is 0 | Dynamic|Determines the priority of affinity constraint: 0: Hard; 1: Soft; negative: Ignore. |
|ApplicationCapacityConstraintPriority | Int, default is 0 | Dynamic|Determines the priority of capacity constraint: 0: Hard; 1: Soft; negative: Ignore. |
|AutoDetectAvailableResources|bool, default is TRUE|Static|This config triggers auto-detection of available resources on the node (CPU and memory). When set to true, real capacities are read and corrected if the user specified bad node capacities or didn't define them at all. When set to false, a warning is traced that the user specified bad node capacities, but they are not corrected; this means the user wants the capacities specified as greater than the node really has, or, if capacities are undefined, unlimited capacity is assumed. |
+|AuxiliaryInBuildThrottlingWeight | double, default is 1 | Static|Auxiliary replica's weight against the current InBuildThrottling max limit. |
|BalancingDelayAfterNewNode | Time in seconds, default is 120 |Dynamic|Specify timespan in seconds. Don't start balancing activities within this period after adding a new node. |
|BalancingDelayAfterNodeDown | Time in seconds, default is 120 |Dynamic|Specify timespan in seconds. Don't start balancing activities within this period after a node down event. |
|BlockNodeInUpgradeConstraintPriority | Int, default is -1 |Dynamic|Determines the priority of capacity constraint: 0: Hard; 1: Soft; negative: Ignore |
The following is a list of Fabric settings that you can customize, organized by
|DetailedPartitionListLimit | Int, default is 15 |Dynamic| Defines the number of partitions per diagnostic entry for a constraint to include before truncation in Diagnostics. |
|DetailedVerboseHealthReportLimit | Int, default is 200 | Dynamic|Defines the number of times an unplaced replica has to be persistently unplaced before detailed health reports are emitted. |
|EnforceUserServiceMetricCapacities|bool, default is FALSE | Static |Enables fabric services protection. All user services are under one job object/cgroup and limited to a specified amount of resources. This needs to be static (requires restart of FabricHost) as creation/removal of the user job object and setting of limits is done during open of Fabric Host. |
+|EnableServiceSensitivity | bool, default is False | Dynamic|Feature switch to enable/disable the replica sensitivity feature. |
|FaultDomainConstraintPriority | Int, default is 0 |Dynamic| Determines the priority of fault domain constraint: 0: Hard; 1: Soft; negative: Ignore. |
|GlobalMovementThrottleCountingInterval | Time in seconds, default is 600 |Static| Specify timespan in seconds. Indicate the length of the past interval for which to track per domain replica movements (used along with GlobalMovementThrottleThreshold). Can be set to 0 to ignore global throttling altogether. |
|GlobalMovementThrottleThreshold | Uint, default is 1000 |Dynamic| Maximum number of movements allowed in the Balancing Phase in the past interval indicated by GlobalMovementThrottleCountingInterval. |
The following is a list of Fabric settings that you can customize, organized by
|DisableFirewallRuleForPublicProfile| bool, default is TRUE | Static|Indicates if firewall rule shouldn't be enabled for public profile |
| EnforceLinuxMinTlsVersion | bool, default is FALSE | Static | If set to true; only TLS version 1.2+ is supported. If false; support earlier TLS versions. Applies to Linux only |
| EnforcePrevalidationOnSecurityChanges | bool, default is FALSE| Dynamic | Flag controlling the behavior of cluster upgrade upon detecting changes of its security settings. If set to 'true', the cluster upgrade will attempt to ensure that at least one of the certificates matching any of the presentation rules can pass a corresponding validation rule. The pre-validation is executed before the new settings are applied to any node, but runs only on the node hosting the primary replica of the Cluster Manager service at the time of initiating the upgrade. The default is currently set to 'false'; starting with release 7.1, the setting will be set to 'true' for new Azure Service Fabric clusters.|
+| EnforceStrictRoleMapping | bool, default is FALSE | Dynamic | The permissions mapping in the SF runtime for the ElevatedAdmin role includes all current operations, and any newly introduced functionality remains accessible to ElevatedAdmin; that is, the EA role gets a "*" permission in the code - a blanket authorization to invoke all SF APIs. The intent is that a 'deny' rule (Security/ClientAccess MyOperation="None") won't apply to the ElevatedAdmin role by default. However; if EnforceStrictRoleMapping is set to true; existing code or cluster manifest overrides which specify "operation": "Admin" (in the Security/ClientAccess section) will make "operation" in effect inaccessible to the ElevatedAdmin role. |
|FabricHostSpn| string, default is "" |Static| Service principal name of FabricHost; when fabric runs as a single domain user (gMSA/domain user account) and FabricHost runs under machine account. It's the SPN of IPC listener for FabricHost; which by default should be left empty since FabricHost runs under machine account |
|IgnoreCrlOfflineError|bool, default is FALSE|Dynamic|Whether to ignore CRL offline error when server-side verifies incoming client certificates |
|IgnoreSvrCrlOfflineError|bool, default is TRUE|Dynamic|Whether to ignore CRL offline error when client side verifies incoming server certificates; default to true. Attacks with revoked server certificates require compromising DNS; harder than with revoked client certificates. |
The following is a list of Fabric settings that you can customize, organized by
| | | | |
|PropertyGroup|X509NameMap, default is None|Dynamic|This is a list of "Name" and "Value" pairs. Each "Name" is the subject common name or DnsName of X509 certificates authorized for admin client operations. For a given "Name", "Value" is a comma-separated list of certificate thumbprints for issuer pinning; if not empty, the direct issuer of admin client certificates must be in the list. |
+## Security/ElevatedAdminClientX509Names
+
+| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
+| | | | |
+|PropertyGroup|X509NameMap, default is None|Dynamic|Certificate common names of fabric clients in the elevated admin role; used to authorize privileged fabric operations. It is a comma-separated list. |
+
## Security/ClientAccess

| **Parameter** | **Allowed Values** | **Upgrade Policy** | **Guidance or Short Description** |
The following is a list of Fabric settings that you can customize, organized by
|DeleteName |string, default is "Admin" |Dynamic|Security configuration for Naming URI deletion. |
|DeleteNetwork|string, default is "Admin" |Dynamic|Deletes a container network |
|DeleteService |string, default is "Admin" |Dynamic|Security configuration for service deletion. |
-|DeleteVolume|string, default is "Admin"|Dynamic|Deletes a volume.|
+|DeleteVolume|string, default is "Admin"|Dynamic|Deletes a volume.|
+|DisableService|wstring, default is L"Admin"|Dynamic|Security configuration for disabling a service.|
|EnumerateProperties |string, default is "Admin\|\|User" | Dynamic|Security configuration for Naming property enumeration. |
|EnumerateSubnames |string, default is "Admin\|\|User" |Dynamic| Security configuration for Naming URI enumeration. |
+|EnableService|wstring, default is L"Admin"|Dynamic|Security configuration for enabling a service.|
|FileContent |string, default is "Admin" |Dynamic| Security configuration for image store client file transfer (external to cluster). |
|FileDownload |string, default is "Admin" |Dynamic| Security configuration for image store client file download initiation (external to cluster). |
|FinishInfrastructureTask |string, default is "Admin" |Dynamic| Security configuration for finishing infrastructure tasks. |
service-fabric Service Fabric Cluster Standalone Deployment Preparation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-standalone-deployment-preparation.md
Title: Standalone Cluster Deployment Preparation
+ Title: Standalone Cluster Deployment Preparation
description: Documentation related to preparing the environment and creating the cluster configuration, to be considered prior to deploying a cluster intended for handling a production workload.
Here are recommended specs for machines in a Service Fabric cluster:
* Connectivity to a secure network or networks for all machines
* Windows Server OS installed (valid versions: 2012 R2, 2016, 1709, or 1803). Service Fabric version 6.4.654.9590 and later also supports Server 2019 and 1809.
* [.NET Framework 4.5.1 or higher](https://www.microsoft.com/download/details.aspx?id=40773), full install
-* [Windows PowerShell 3.0](/powershell/scripting/windows-powershell/install/installing-windows-powershell)
+* [Windows PowerShell 3.0](/previous-versions/powershell/scripting/windows-powershell/install/installing-windows-powershell)
* The [RemoteRegistry service](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754820(v=ws.11)) should be running on all the machines
* **Service Fabric installation drive must be NTFS File System**
* **Windows services *Performance Logs & Alerts* and *Windows Event Log* must [be enabled](/previous-versions/windows/it-pro/windows-server-2008-r2-and-2008/cc755249(v=ws.11))**.
spring-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md
Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
storage-mover Agent Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-deploy.md
Previously updated : 07/25/2023 Last updated : 10/30/2023 <!--
REVIEW Engineering: not reviewed
EDIT PASS: started Initial doc score: 83
-Current doc score: 96 (2038 words and 10 false-positive issues)
+Current doc score: 96 (2093 words and 10 false-positive issues)
!######################################################## -->
The Azure Storage Mover service utilizes agents to perform the migration jobs yo
Because an agent is essentially a migration appliance, you interact with it through an agent-local administrative shell. The shell limits the operations you can perform on this machine, though network configuration and troubleshooting tasks are accessible.
-Use of the agent in migrations is managed through Azure. Both Azure PowerShell and CLI are supported, and graphical interaction is available within the Azure portal. The agent is made available as a disk image compatible with new Windows Hyper-V virtual machines (VM).
+Use of the agent in migrations is managed through Azure. Both Azure PowerShell and CLI are supported, and graphical interaction is available within the Azure portal. The agent is made available as a disk image compatible with either new Windows Hyper-V or VMware virtual machines (VMs).
This article guides you through the steps necessary to successfully deploy a Storage Mover agent VM.

## Prerequisites

-- A capable Windows Hyper-V host on which to run the agent VM.<br/> See the [Recommended compute and memory resources](#recommended-compute-and-memory-resources) section in this article for details about resource requirements for the agent VM.
+- A capable Windows Hyper-V or VMware host on which to run the agent VM.<br/> See the [Recommended compute and memory resources](#recommended-compute-and-memory-resources) section in this article for details about resource requirements for the agent VM.
> [!NOTE]
-> At present, Windows Hyper-V is the only supported virtualization environment for your agent VM. Other virtualization environments have not been tested and are not supported.
+> At present, Windows Hyper-V and VMware are the only supported virtualization environments for your agent VM. Other virtualization environments have not been tested and are not supported.
## Determine required resources for the VM
-Like every VM, the agent requires available compute, memory, network, and storage space resources on the host. Although overall data size may affect the time required to complete a migration, it's generally the number of files and folders that drives resource requirements.
+Like every VM, the agent requires available compute, memory, network, and storage space resources on the host. Although overall data size might affect the time required to complete a migration, it's generally the number of files and folders that drives resource requirements.
### Network resources

The agent requires unrestricted internet connectivity.
-Although no single network configuration option works for every environment, the simplest configuration involves the deployment of an external virtual switch. The external switch type is connected to a physical adapter and allows your host operating system (OS) to share its connection with all your virtual machines (VMs). This switch allows communication between your physical network, the management operating system, and the virtual adapters on your virtual machines. This approach may be acceptable for a test environment, but is likely not sufficient for a production server.
+Although no single network configuration option works for every environment, the simplest configuration involves the deployment of an external virtual switch. The external switch type is connected to a physical adapter and allows your host operating system (OS) to share its connection with all your virtual machines (VMs). This switch allows communication between your physical network, the management operating system, and the virtual adapters on your virtual machines. This approach might be acceptable for a test environment, but is likely not sufficient for a production server.
After the switch is created, ensure that both the management and agent VMs are on the same switch. On the WAN link firewall, outbound TCP port 443 must be open. Keep in mind that connectivity interruptions are to be expected when changing network configurations.
-You can get help with [creating a virtual switch for Hyper-V virtual machines](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines) in the [Windows Server](/windows-server/) documentation.
+You can get help with [creating a virtual switch for Hyper-V virtual machines](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines) in the [Windows Server](/windows-server/) documentation. Consult the VMware support website for detailed guidance on creating a virtual switch for VMware-hosted VMs.
### Recommended compute and memory resources
The [Performance targets](performance-targets.md) article contains test results
### Local storage capacity
-At a minimum, the agent image needs 20 GiB of local storage. The amount required may increase if a large number of small files are cached during a migration.
+At a minimum, the agent image needs 20 GiB of local storage. The amount required might increase if a large number of small files are cached during a migration.
## Download the agent VM image
-The image is hosted on Microsoft Download Center as a zip file. Download the file at [https://aka.ms/StorageMover/agent](https://aka.ms/StorageMover/agent) and extract the agent virtual hard disk (VHD) image to your virtualization host.
+Images for agent VMs are hosted on Microsoft Download Center as a zip file. Download the file at [https://aka.ms/StorageMover/agent](https://aka.ms/StorageMover/agent) and extract the agent virtual hard disk (VHD) image to your virtualization host.
## Create the agent VM
+The following steps describe the process for creating a VM using Microsoft Hyper-V. Consult the VMware support website for detailed guidance on creating a VMware-based VM.
1. Create a new VM to host the agent. Open **Hyper-V Manager**. In the **Actions** pane, select **New** and **Virtual Machine...** to launch the **New Virtual Machine Wizard**.

   :::image type="content" source="media/agent-deploy/agent-vm-create-sml.png" alt-text="Image showing how to launch the New Virtual Machine Wizard from within the Hyper-V Manager." lightbox="media/agent-deploy/agent-vm-create-lrg.png":::
The image is hosted on Microsoft Download Center as a zip file. Download the fil
## Change the default password
-The agent is delivered with a default user account and password. Immediately after deploying and starting the agent VM, connect to it and change the default password!
+The agent is delivered with a default user account and password. Connect to the newly created agent and change the default password immediately after the agent is deployed and started.
[!INCLUDE [agent-shell-connect](includes/agent-shell-connect.md)]
Take time to consider the amount of bandwidth a new machine uses before you depl
> [!IMPORTANT]
> The current Azure Storage Mover agent does not support bandwidth throttling schedules.
-If bandwidth throttling is important to you, create a local virtual network (VNet) with network quality of service (QoS) settings and an internet connection. This approach allows you to expose the agent through the VNet, and to locally configure an unauthenticated network proxy server on the agent if needed.
+If bandwidth throttling is important to you, create a local virtual network with an internet connection and configure quality of service (QoS) settings. This approach allows you to expose the agent through the virtual network and to locally configure an unauthenticated network proxy server on the agent if needed.
## Decommissioning an agent
Several things take place during the unregistration process:
- The agent is removed from the storage mover resource. You can no longer see the agent in the *Registered agents* tab in the portal or select it for new migration jobs.
- The agent is also removed from the Azure ARC service. This removal deletes the hybrid compute resource of type *Server - Azure Arc* that represented the agent with the Azure ARC service in the same resource group as your storage mover resource.
-- Unregistration removes the managed identity of the agent from Microsoft Entra ID. The associated service principal is automatically removed, invalidating any permissions this agent may have had on other Azure resources. If you check the role-based access control (RBAC) role assignments, for instance of a target storage container the agent previously had permissions to, you no longer find the service principal of the agent, because it was deleted. The assignment itself is still visible as "Unknown service principal" but this assignment no longer connects to an identity and can never be reconnected. It's simply a sign that a role assignment used to be here, of a service principal that no longer exists.
+- Unregistration removes the managed identity of the agent from Microsoft Entra ID. The associated service principal is automatically removed, invalidating any permissions this agent might have on other Azure resources. If you check the role-based access control (RBAC) role assignments, for instance of a target storage container the agent previously had permissions to, you no longer find the service principal of the agent, because it was deleted. The assignment itself is still visible as "Unknown service principal" but this assignment no longer connects to an identity and can never be reconnected. It's simply a sign that a role assignment used to be here, of a service principal that no longer exists.
This behavior is standard, and not specific to Azure Storage Mover. You can observe the same behavior if you remove a different service principal from Microsoft Entra ID and then check a former role assignment.
You can stop the agent VM on your virtualization host after the unregistration i
## Next steps
-After you've deployed your agent VM, started it, and changed the default password of the local account:
+After you deploy your agent, start it, and change the default password of the local account:
> [!div class="nextstepaction"]
> [Register the agent with your storage mover Azure resource](agent-register.md)
storage-mover Agent Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/agent-register.md
Registration is always initiated from the agent. In the interest of security, on
## Step 1: Connect to the agent VM
-The agent VM is an appliance. It offers an administrative shell that limits the operations you can perform on this machine. When you connect to the agent, the shell loads and provides you with options that allow you to interact with it directly. However, the agent VM is a Linux based appliance, and copy and paste functionality often doesn't work within the default Hyper-V window.
+The agent VM is an appliance. It offers an administrative shell that limits the operations you can perform on this machine. When you connect to the agent, the shell loads and provides you with options that allow you to interact with it directly. However, the agent VM is a Linux-based appliance, and copy and paste functionality often doesn't work within the default host window.
-Rather than use the Hyper-V window, use an SSH connection instead. This approach provides the following advantages:
+Rather than use the host window, consider using an SSH connection instead. This approach provides the following advantages:
-- You can connect to the agent VM's shell from any management machine and don't need to be logged into the Hyper-V host.
+- You can connect to the agent VM's shell from any management machine and don't need to be logged into the host.
- Copy / paste is fully supported.

[!INCLUDE [agent-shell-connect](includes/agent-shell-connect.md)]
storage-mover Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/service-overview.md
Previously updated : 08/04/2023 Last updated : 10/30/2023
CONTENT: final
REVIEW Stephen/Fabian: COMPLETE EDIT PASS: not started
-Document score: 100 (505 words and 0 issues)
+Document score: 100 (520 words and 0 issues)
!######################################################## -->
Azure Storage Mover is a new, fully managed migration service that enables you t
[!INCLUDE [protocol-endpoint-agent](includes/protocol-endpoint-agent.md)]
-> [!IMPORTANT]
-> Storage accounts with the [hierarchical namespace service (HNS)](../storage/blobs/data-lake-storage-namespace.md) feature enabled are not supported at this time.
+An Azure blob container without the hierarchical namespace service feature doesn't have a traditional file system. A standard blob container uses "virtual" folders to mimic this functionality. When this approach is used, files in folders on the source get their path prepended to their name and placed in a flat list in the target blob container.
-An Azure blob container without the hierarchical namespace service feature doesn't have a traditional file system. A standard blob container supports "virtual" folders. Files in folders on the source get their path prepended to their name and placed in a flat list in the target blob container.
-
-When migrating data from a source endpoint using the SMB protocol, Storage Mover supports the same level of file fidelity as the underlying Azure file share. Folder structure and metadata values such as file and folder timestamps, ACLs, and file attributes are maintained. When migrating data from an NFS source, the Storage Mover service represents empty folders as an empty blob in the target. The metadata of the source folder is persisted in the custom metadata field of this blob, just as they are with files.
+When the SMB protocol is used during a data migration, Storage Mover supports the same level of file fidelity as the underlying Azure file share. Folder structure and metadata values such as file and folder timestamps, ACLs, and file attributes are maintained. When the NFS protocol is used, the Storage Mover service represents empty folders as an empty blob in the target. The metadata of the source folder is persisted in the custom metadata field of this blob, just as it is with files.
:::image type="content" source="media/overview/source-to-target.png" alt-text="A screenshot illustrating a source NFS share migrated through an Azure Storage Mover agent VM to an Azure Storage blob container." lightbox="media/overview/source-to-target-lrg.png" :::
storage-mover Storage Mover Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/storage-mover-create.md
If there has never been a storage mover deployed in this subscription and you ar
To deploy a storage mover into a resource group, you must be a member of the *Contributor* or *Owner* [RBAC (Role Based Access Control)](../role-based-access-control/overview.md) role for the selected resource group. The section *[Permissions](deployment-planning.md#permissions)* in the planning guide has a table outlining the permissions you need for various migration scenarios.
+Creating a storage mover requires you to decide on a subscription, a resource group, a region, and a name. The *[Planning for an Azure Storage Mover deployment](deployment-planning.md)* article shares best practices. Refer to the [resource naming convention](../azure-resource-manager/management/resource-name-rules.md#microsoftstoragesync) to choose a supported name.
+
## Deploy a storage mover resource

### [Azure portal](#tab/portal)
To deploy a storage mover into a resource group, you must be a member of the *Co
1. Search for *Azure Storage Mover*. When you identify the correct search result, select the **Create** button. A wizard to create a storage mover resource opens.
- 1. Creating a storage mover requires you to decide on a subscription, a resource group, a region, and a name. The *[Planning for an Azure Storage Mover deployment](deployment-planning.md)* article shares best practices. Refer to the [resource naming convention](../azure-resource-manager/management/resource-name-rules.md#microsoftstoragesync) to choose a supported name.
+### [Azure CLI](#tab/CLI)
-### [PowerShell](#tab/powershell)
+### Prepare your Azure CLI environment
-Creating a storage mover requires you to decide on a subscription, a resource group, a region, and a name. The *[Planning for an Azure Storage Mover deployment](deployment-planning.md)* article shares best practices. Refer to the [resource naming convention](../azure-resource-manager/management/resource-name-rules.md#microsoftstoragesync) to choose a supported name.
+
+To create a storage mover resource, use the [az storage-mover create](/cli/azure/storage-mover#az-storage-mover-create) command. You'll need to supply values for the required `--name`, `--resource-group`, and `--location` parameters. The `--description` and `--tags` parameters are optional.
+
+```azurecli-interactive
+
+## Log in to your Azure CLI account. A browser window will appear so that you can confirm your login.
+az login
+
+## The Azure Storage Mover extension for CLI isn't installed by default. Allow it to be installed without a prompt.
+az config set extension.use_dynamic_install=yes_without_prompt
+
+## Set variables. Replace the example values with your own.
+storageMoverName="myStorageMover"    ## The name of the storage mover resource.
+resourceGroupName="myResourceGroup"  ## The name of the resource group.
+location="eastus"                    ## The geo-location where the resource lives. When not specified, the location of the resource group is used.
+description="A description for the storage mover."  ## Optional.
+
+## Create a storage mover resource. Add --tags to apply resource tags as space-separated key=value pairs.
+az storage-mover create --name $storageMoverName \
+    --resource-group $resourceGroupName \
+    --location $location \
+    --description "$description"
+
+```
+### [Azure PowerShell](#tab/powershell)
+
+### Prepare your Azure PowerShell environment
+
The `New-AzStorageMover` cmdlet is used to create a new storage mover resource in a resource group. If you haven't yet installed the `Az.StorageMover` module:
storage Storage Blob Change Feed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-change-feed.md
This section describes known issues and conditions in the current release of the
- You currently cannot see the **$blobchangefeed** container when you call the ListContainers API. You can view the contents by calling the ListBlobs API on the $blobchangefeed container directly.
- Storage account failover of geo-redundant storage accounts with the change feed enabled may result in inconsistencies between the change feed logs and the blob data and/or metadata. For more information about such inconsistencies, see [Change feed and blob data inconsistencies](../common/storage-disaster-recovery-guidance.md#change-feed-and-blob-data-inconsistencies).
- You might see 404 (Not Found) and 412 (Precondition Failed) errors reported on the **$blobchangefeed** and **$blobchangefeedsys** containers. You can safely ignore these errors.
+- BlobDeleted events are not generated when blob versions or snapshots are deleted. A BlobDeleted event is added only when a base (root) blob is deleted.
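+
+As a hedged example, you can list the contents of the **$blobchangefeed** container directly with the Azure CLI; the account name below is a placeholder, and your preferred auth mode may differ.
+
+```azurecli-interactive
+## The $blobchangefeed container doesn't appear in container listings, but its blobs can be listed directly.
+az storage blob list --account-name <storage-account> --container-name '$blobchangefeed' --auth-mode login --output table
+```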
## Frequently asked questions (FAQ)
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
Title: Quickstart for using Azure Container Storage Preview with Azure Kubernetes Service (AKS)
-description: Learn how to install Azure Container Storage Preview on an Azure Kubernetes Service cluster using an installation script.
+description: Create a Linux-based Azure Kubernetes Service (AKS) cluster, install Azure Container Storage, and create a storage pool.
Previously updated : 09/20/2023 Last updated : 11/03/2023
+
# Quickstart: Use Azure Container Storage Preview with Azure Kubernetes Service
-[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This Quickstart shows you how to install Azure Container Storage Preview on an [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) cluster using a provided installation script.
+
+[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This Quickstart shows you how to create a Linux-based [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) cluster, install Azure Container Storage, and create a storage pool using Azure CLI.
## Prerequisites

[!INCLUDE [container-storage-prerequisites](../../../includes/container-storage-prerequisites.md)]

-- You'll need an AKS cluster with an appropriate [virtual machine type](install-container-storage-aks.md#vm-types). If you don't already have an AKS cluster, follow [these instructions](install-container-storage-aks.md#getting-started) to create one.
+> [!IMPORTANT]
+> This Quickstart will work for most use cases. An exception is if you plan to use Azure Elastic SAN Preview as backing storage for your storage pool and you don't have owner-level access to the Azure subscription. If both these statements apply to you, use the [manual installation steps](install-container-storage-aks.md) instead. Alternatively, you can complete this Quickstart with the understanding that a storage pool won't be automatically created, and then [create an Elastic SAN storage pool manually](use-container-storage-with-elastic-san.md).
+
+## Getting started
+
+- Take note of your Azure subscription ID. We recommend using a subscription on which you have an [Owner](../../role-based-access-control/built-in-roles.md#owner) role.
+
+- [Launch Azure Cloud Shell](https://shell.azure.com), or if you're using a local installation, sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
+
+- If you're using Azure Cloud Shell, you might be prompted to mount storage. Select the Azure subscription where you want to create the storage account and select **Create**.
+
+## Install the required extensions
+
+Upgrade to the latest version of the `aks-preview` Azure CLI extension by running the following command.
+
+```azurecli-interactive
+az extension add --upgrade --name aks-preview
+```
+
+Add or upgrade to the latest version of the `k8s-extension` Azure CLI extension by running the following command.
+
+```azurecli-interactive
+az extension add --upgrade --name k8s-extension
+```
+
+## Set subscription context
+
+Set your Azure subscription context using the `az account set` command. You can view the subscription IDs for all the subscriptions you have access to by running the `az account list --output table` command. Remember to replace `<subscription-id>` with your subscription ID.
+
+```azurecli-interactive
+az account set --subscription <subscription-id>
+```
+
+## Register resource providers
+
+The `Microsoft.ContainerService` and `Microsoft.KubernetesConfiguration` resource providers must be registered on your Azure subscription. To register these providers, run the following commands:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService --wait
+az provider register --namespace Microsoft.KubernetesConfiguration --wait
+```
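+
+Optionally, you can confirm that registration finished; this check is a suggestion rather than part of the original steps:
+
+```azurecli-interactive
+az provider show --namespace Microsoft.KubernetesConfiguration --query registrationState --output tsv
+```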
+
+## Create a resource group
+
+An Azure resource group is a logical group that holds your Azure resources that you want to manage as a group. If you already have a resource group you want to use, you can skip this section.
+
+When you create a resource group, you're prompted to specify a location. This location is:
+
+* The storage location of your resource group metadata.
+* Where your resources will run in Azure if you don't specify another region during resource creation.
+
+Create a resource group using the `az group create` command. Replace `<resource-group-name>` with the name of the resource group you want to create, and replace `<location>` with an Azure region such as *eastus*, *westus2*, *westus3*, or *westeurope*. See this [list of Azure regions](container-storage-introduction.md#regional-availability) where Azure Container Storage is available.
+
+```azurecli-interactive
+az group create --name <resource-group-name> --location <location>
+```
+
+If the resource group was created successfully, you'll see output similar to this:
+
+```output
+{
+ "id": "/subscriptions/<guid>/resourceGroups/myContainerStorageRG",
+ "location": "eastus",
+ "managedBy": null,
+ "name": "myContainerStorageRG",
+ "properties": {
+ "provisioningState": "Succeeded"
+ },
+ "tags": null
+}
+```
+
+## Choose a data storage option for your storage pool
+
+Before deploying Azure Container Storage, you'll need to decide which back-end storage option you want to use to create your storage pool and persistent volumes. Three options are currently available:
+
+- **Azure Elastic SAN Preview**: Azure Elastic SAN preview is a good fit for general purpose databases, streaming and messaging services, CI/CD environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time.
+
+- **Azure Disks**: Azure Disks are a good fit for databases such as MySQL, MongoDB, and PostgreSQL. Storage is provisioned per target container storage pool size and maximum volume size.
+
+- **Ephemeral Disk**: This option uses local NVMe drives on the AKS cluster nodes and is extremely latency sensitive (low sub-ms latency), so it's best for applications with no data durability requirement or with built-in data replication support such as Cassandra. AKS discovers the available ephemeral storage on AKS nodes and acquires the drives for volume deployment.
+
+You'll specify the storage pool type when you install Azure Container Storage.
+
+## Choose a VM type for your cluster
+
+If you intend to use Azure Elastic SAN Preview or Azure Disks as backing storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes. If you intend to use Ephemeral Disk, choose a [storage optimized VM type](../../virtual-machines/sizes-storage.md) with NVMe drives such as **standard_l8s_v3**. In order to use Ephemeral Disk, the VMs must have NVMe drives. You'll specify the VM type when you create the cluster in the next section.
> [!IMPORTANT]
-> If you created your AKS cluster using the Azure portal, it will likely have two node pools: a user node pool and a system/agent node pool. Before you can install Azure Container Storage, you must label the user node pool. In this article, this is done automatically by passing the user node pool name to the script as a parameter. However, if your cluster consists of only a system node pool, which is often the case with test/dev clusters, you'll need to first [add a new user node pool](../../aks/create-node-pools.md#add-a-node-pool) before running the script. This is because when you create an AKS cluster using the Azure portal, a taint `CriticalAddOnsOnly` gets added to the agent/system nodepool, which blocks installation of Azure Container Storage on the system node pool. This taint isn't added when an AKS cluster is created using Azure CLI.
+> You must choose a VM type that supports [Azure premium storage](../../virtual-machines/premium-storage-performance.md). Each VM should have a minimum of four virtual CPUs (vCPUs). Azure Container Storage will consume one core for I/O processing on every VM the extension is deployed to.
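+
+As an optional, illustrative check, you can list the VM sizes available in your region with the Azure CLI; the size filter and region below are placeholders to adapt.
+
+```azurecli-interactive
+## Sketch: list VM sizes in a region whose names start with "Standard_D".
+az vm list-skus --location <location> --size Standard_D --resource-type virtualMachines --output table
+```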
-## Install Azure Container Storage
+## Create a new AKS cluster and install Azure Container Storage
+If you already have an AKS cluster deployed, skip this section and go to [Install Azure Container Storage on an existing AKS cluster](#install-azure-container-storage-on-an-existing-aks-cluster).
+
+Run the following command to create a new AKS cluster, install Azure Container Storage, and create a storage pool. Replace `<cluster-name>` and `<resource-group-name>` with your own values, and specify which VM type you want to use. You'll need a node pool of at least three Linux VMs. Replace `<storage-pool-type>` with `azureDisk`, `ephemeraldisk`, or `elasticSan`.
+
+Optional storage pool parameters:
+
+| **Parameter** | **Default** |
+|-|-|
+| --storage-pool-name | mypool-<random 7 char lowercase> |
+| --storage-pool-size | 512Gi (1Ti for Elastic SAN) |
+| --storage-pool-sku | Premium_LRS |
+| --storage-pool-option | NVMe |
+
+```azurecli-interactive
+az aks create -n <cluster-name> -g <resource-group-name> --node-vm-size Standard_D4s_v3 --node-count 3 --enable-azure-container-storage <storage-pool-type>
+```
+
+The deployment will take 10-15 minutes to complete.
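+
+Optionally, you can confirm that the cluster deployed successfully by querying its provisioning state; this check is a suggestion rather than part of the original steps:
+
+```azurecli-interactive
+az aks show --name <cluster-name> --resource-group <resource-group-name> --query provisioningState --output tsv
+```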
+
+## Install Azure Container Storage on an existing AKS cluster
+
+If you already have an AKS cluster that meets the [VM requirements](#choose-a-vm-type-for-your-cluster), run the following command to install Azure Container Storage on the cluster and create a storage pool. Replace `<cluster-name>` and `<resource-group-name>` with your own values. Replace `<storage-pool-type>` with `azureDisk`, `ephemeraldisk`, or `elasticSan`.
+
+Running this command will enable Azure Container Storage on a node pool named `nodepool1`, which is the default node pool name. If you want to install it on other node pools, see [Install Azure Container Storage on specific node pools](#install-azure-container-storage-on-specific-node-pools).
+
+> [!IMPORTANT]
+> **If you created your AKS cluster using the Azure portal:** The cluster will likely have a user node pool and a system/agent node pool. However, if your cluster consists of only a system node pool, which is the case with test/dev clusters created with the Azure portal, you'll need to first [add a new user node pool](../../aks/create-node-pools.md#add-a-node-pool) and then label it. This is because when you create an AKS cluster using the Azure portal, a taint `CriticalAddOnsOnly` is added to the system/agent nodepool, which blocks installation of Azure Container Storage on the system node pool. This taint isn't added when an AKS cluster is created using Azure CLI.
-## Choose a data storage option
+```azurecli-interactive
+az aks update -n <cluster-name> -g <resource-group-name> --enable-azure-container-storage <storage-pool-type>
+```
-Next you'll need to choose a back-end storage option to create your storage pool. Choose one of the following three options and follow the link to create a storage pool and persistent volume claim.
+The deployment will take 10-15 minutes to complete.
-- **Azure Elastic SAN Preview**: Azure Elastic SAN preview is a good fit for general purpose databases, streaming and messaging services, CD/CI environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time. [Create a storage pool using Azure Elastic SAN Preview](use-container-storage-with-elastic-san.md#create-a-storage-pool).
+### Install Azure Container Storage on specific node pools
-- **Azure Disks**: Azure Disks are a good fit for databases such as MySQL, MongoDB, and PostgreSQL. Storage is provisioned per target container storage pool size and maximum volume size. [Create a storage pool using Azure Disks](use-container-storage-with-managed-disks.md#create-a-storage-pool).
+If you want to install Azure Container Storage on specific node pools, follow these instructions. The node pools must contain at least three Linux VMs each.
-- **Ephemeral Disk**: This option uses local NVMe drives on the AKS nodes and is extremely latency sensitive (low sub-ms latency), so it's best for applications with no data durability requirement or with built-in data replication support such as Cassandra. AKS discovers the available ephemeral storage on AKS nodes and acquires the drives for volume deployment. [Create a storage pool using Ephemeral Disk](use-container-storage-with-local-disk.md#create-a-storage-pool).
+1. Run the following command to view the list of available node pools. Replace `<resource-group-name>` and `<cluster-name>` with your own values.
+
+ ```azurecli-interactive
+ az aks nodepool list --resource-group <resource-group-name> --cluster-name <cluster-name>
+ ```
+
+2. Run the following command to install Azure Container Storage on specific node pools. Replace `<cluster-name>` and `<resource-group-name>` with your own values. Replace `<storage-pool-type>` with `azureDisk`, `ephemeraldisk`, or `elasticSan`.
+
+ ```azurecli-interactive
+    az aks update -n <cluster-name> -g <resource-group-name> --enable-azure-container-storage <storage-pool-type> --azure-container-storage-nodepools <comma separated values of nodepool names>
+ ```
storage Container Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-introduction.md
description: An overview of Azure Container Storage Preview, a service built nat
Previously updated : 09/18/2023 Last updated : 11/06/2023
Azure Container Storage offers persistent volume support with ReadWriteOnce acce
## What's new in Azure Container Storage
-Based on feedback from customers, we've included the following capabilities in the Azure Container Storage Preview update:
+Based on feedback from customers, we've included the following capabilities in the Azure Container Storage Preview:
+- Improve stateful application availability by using [multi-zone storage pools and ZRS disks](enable-multi-zone-redundancy.md)
+- Enable server-side encryption with [customer-managed keys](use-container-storage-with-managed-disks.md#enable-server-side-encryption-with-customer-managed-keys) (Azure Disks only)
- Scale up by [resizing volumes](resize-volume.md) backed by Azure Disks and NVMe storage pools without downtime
- [Clone persistent volumes](clone-volume.md) within a storage pool
storage Enable Multi Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/enable-multi-zone-redundancy.md
+
+ Title: Enable multi-zone storage redundancy in Azure Container Storage Preview to improve stateful application availability
+description: Enable storage redundancy across multiple availability zones in Azure Container Storage Preview to improve stateful application availability. Use multi-zone storage pools and zone-redundant storage (ZRS) disks.
+++ Last updated : 11/03/2023+++
+# Enable multi-zone storage redundancy in Azure Container Storage Preview
+
+You can improve stateful application availability by using multi-zone storage pools and zone-redundant storage (ZRS) disks when using [Azure Container Storage](container-storage-introduction.md) in a multi-zone Azure Kubernetes Service (AKS) cluster. To create an AKS cluster that uses availability zones, see [Use availability zones in Azure Kubernetes Service](../../aks/availability-zones.md).
+
+## Prerequisites
+
+- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges.
+- You'll need an AKS cluster with a node pool of at least three virtual machines (VMs) for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs).
+- This article assumes you've already [installed Azure Container Storage](container-storage-aks-quickstart.md) on your AKS cluster.
+- You'll need the Kubernetes command-line client, `kubectl`. It's already installed if you're using Azure Cloud Shell, or you can install it locally by running the `az aks install-cli` command.
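+
+If `kubectl` isn't configured yet, a minimal sketch (with placeholder names) for installing it and fetching cluster credentials is:
+
+```azurecli-interactive
+## Install kubectl locally, then merge credentials for your AKS cluster into your kubeconfig.
+az aks install-cli
+az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>
+```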
+
+## Create a multi-zone storage pool
+
+In your storage pool definition, you can specify the zones across which you want your storage capacity to be distributed. The total storage pool capacity is distributed evenly across the number of zones specified. For example, if two zones are specified, each zone gets half of the storage pool capacity; if three zones are specified, each zone gets one-third of the total capacity. Corresponding storage will be provisioned in each of the zones. This is useful when running workloads that offer application-level replication such as Cassandra.
+
+If there are no nodes available in a specified zone, the capacity will be provisioned once a node becomes available in that zone. Persistent volumes (PVs) can only be created from the storage pool capacity of a single zone.
+
+Valid values for `zones` are:
+
+- [""]
+- ["1"]
+- ["2"]
+- ["3"]
+- ["1", "2"]
+- ["1", "3"]
+- ["2", "3"]
+- ["1", "2", "3"]
+
+Follow these steps to create a multi-zone storage pool that uses Azure Disks. For `zones`, choose a valid value.
+
+1. Use your favorite text editor to create a YAML manifest file such as `code acstor-multizone-storagepool.yaml`.
+
+1. Paste in the following code and save the file. The storage pool **name** value can be whatever you want. For **storage**, specify the amount of storage capacity for the pool in Gi or Ti.
+
+ ```yml
+ apiVersion: containerstorage.azure.com/v1beta1
+ kind: StoragePool
+ metadata:
+ name: azuredisk
+ namespace: acstor
+ spec:
+ zones: ["1", "2", "3"]
+ poolType:
+ azureDisk: {}
+ resources:
+ requests:
+ storage: 1Ti
+ ```
+
+1. Apply the YAML manifest file to create the multi-zone storage pool.
+
+ ```azurecli-interactive
+ kubectl apply -f acstor-multizone-storagepool.yaml
+ ```
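+
+To verify that the storage pool was created, you can optionally list the storage pools in the `acstor` namespace:
+
+```azurecli-interactive
+kubectl get sp -n acstor
+```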
+
+## Use zone-redundant storage (ZRS) disks
+
+If your workload requires storage redundancy, you can leverage disks that use [zone-redundant storage](../../virtual-machines/disks-deploy-zrs.md), which copies your data synchronously across three Azure availability zones in the primary region.
+
+You can specify the disk `skuName` as either `StandardSSD_ZRS` or `Premium_ZRS` in your storage pool definition, as in the following example.
+
+ ```yml
+ apiVersion: containerstorage.azure.com/v1beta1
+ kind: StoragePool
+ metadata:
+ name: azuredisk
+ namespace: acstor
+ spec:
+ poolType:
+ azureDisk:
+ skuName: Premium_ZRS
+ resources:
+ requests:
+ storage: 1Ti
+ ```
+
+## See also
+
+- [What is Azure Container Storage?](container-storage-introduction.md)
storage Install Container Storage Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md
description: Learn how to install Azure Container Storage Preview for use with A
Previously updated : 09/26/2023 Last updated : 10/27/2023
# Install Azure Container Storage Preview for use with Azure Kubernetes Service
-[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to create an [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) cluster and install Azure Container Storage Preview on the cluster.
+
+[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to create an [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) cluster, label the node pool, and install Azure Container Storage Preview on the cluster. Alternatively, you can install Azure Container Storage Preview [using a QuickStart](container-storage-aks-quickstart.md) instead of following the manual steps in this article.
## Prerequisites

[!INCLUDE [container-storage-prerequisites](../../../includes/container-storage-prerequisites.md)]

> [!NOTE]
-> If you already have an AKS cluster deployed, you can proceed to [Connect to the cluster](#connect-to-the-cluster). Alternatively, you can install Azure Container Storage Preview [using an automated installation script](container-storage-aks-quickstart.md) instead of following the manual steps outlined in this article.
+> If you already have an AKS cluster deployed, proceed to [Connect to the cluster](#connect-to-the-cluster).
## Getting started
az group create --name <resource-group-name> --location <location>
If the resource group was created successfully, you'll see output similar to this:
-```json
+```output
{
  "id": "/subscriptions/<guid>/resourceGroups/myContainerStorageRG",
  "location": "eastus",
The deployment will take a few minutes to complete.
## Connect to the cluster
-To connect to the cluster, use the Kubernetes command-line client, `kubectl`.
+To connect to the cluster, use the Kubernetes command-line client, `kubectl`. It's already installed if you're using Azure Cloud Shell, or you can install it locally by running the `az aks install-cli` command.
1. Configure `kubectl` to connect to your cluster using the `az aks get-credentials` command. The following command:
storage Use Container Storage With Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-elastic-san.md
description: Configure Azure Container Storage Preview for use with Azure Elasti
Previously updated : 09/07/2023 Last updated : 11/06/2023
## Create a storage pool
-First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file. Follow these steps to create a storage pool with Azure Elastic SAN Preview.
+First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file.
+
+If you enabled Azure Container Storage using `az aks create` or `az aks update` commands, you might already have a storage pool. Use `kubectl get sp -n acstor` to get the list of storage pools. If you have a storage pool already available that you want to use, you can skip this section and proceed to [Display the available storage classes](#display-the-available-storage-classes).
+
+Follow these steps to create a storage pool with Azure Elastic SAN Preview.
1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`.
storage Use Container Storage With Local Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-disk.md
description: Configure Azure Container Storage Preview for use with Ephemeral Di
Previously updated : 09/07/2023 Last updated : 11/06/2023
## Create a storage pool
-First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file. Follow these steps to create a storage pool using local disk.
+First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file.
+
+If you enabled Azure Container Storage using `az aks create` or `az aks update` commands, you might already have a storage pool. Use `kubectl get sp -n acstor` to get the list of storage pools. If you have a storage pool already available that you want to use, you can skip this section and proceed to [Display the available storage classes](#display-the-available-storage-classes).
+
+Follow these steps to create a storage pool using local disk.
1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`.
storage Use Container Storage With Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-managed-disks.md
description: Configure Azure Container Storage Preview for use with Azure manage
Previously updated : 09/15/2023 Last updated : 11/06/2023
## Create a storage pool
-First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file. Follow these steps to create a storage pool for Azure Disks.
+First, create a storage pool, which is a logical grouping of storage for your Kubernetes cluster, by defining it in a YAML manifest file.
+
+If you enabled Azure Container Storage using `az aks create` or `az aks update` commands, you might already have a storage pool. Use `kubectl get sp -n acstor` to get the list of storage pools. If you have a storage pool already available that you want to use, you can skip this section and proceed to [Display the available storage classes](#display-the-available-storage-classes).
+
+> [!IMPORTANT]
+> If you want to use your own keys to encrypt your volumes instead of using Microsoft-managed keys, don't create your storage pool using the steps in this section. Instead, go to [Enable server-side encryption with customer-managed keys](#enable-server-side-encryption-with-customer-managed-keys) and follow the steps there.
+
+Follow these steps to create a storage pool for Azure Disks.
1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`.
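
    For reference, a manifest that uses the default Microsoft-managed keys mirrors the customer-managed key example later in this section, minus the `encryption` block:

    ```yml
    apiVersion: containerstorage.azure.com/v1beta1
    kind: StoragePool
    metadata:
      name: azuredisk
      namespace: acstor
    spec:
      poolType:
        azureDisk:
          skuName: Premium_LRS   # performance and redundancy level
      resources:
        requests:
          storage: 1Ti           # total capacity of the pool
    ```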
First, create a storage pool, which is a logical grouping of storage for your Ku
When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention `acstor-<storage-pool-name>`.
+## Enable server-side encryption with customer-managed keys
+
+If you already created a storage pool or you prefer to use the default Microsoft-managed encryption keys, skip this section and proceed to [Display the available storage classes](#display-the-available-storage-classes).
+
+All data in an Azure storage account is encrypted at rest. By default, data is encrypted with Microsoft-managed keys. For more control over encryption keys, you can supply customer-managed keys (CMK) to encrypt the persistent volumes that you'll create from an Azure Disk storage pool.
+
+To use your own key, you must have an [Azure Key Vault](../../key-vault/general/overview.md) with a key. The Key Vault should have purge protection enabled, and it must use the Azure RBAC permission model. Learn more about [customer-managed keys on Linux](../../virtual-machines/disk-encryption.md#customer-managed-keys).
+
+When creating your storage pool, you must define the CMK parameters. The required CMK encryption parameters are:
+
+- **keyVersion** specifies the version of the key to use
+- **keyName** is the name of your key
+- **keyVaultUri** is the uniform resource identifier of the Azure Key Vault, for example `https://user.vault.azure.net`
+- **Identity** specifies a managed identity with access to the vault, for example `/subscriptions/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/resourcegroups/MC_user-acstor-westus2-rg_user-acstor-westus2_westus2/providers/Microsoft.ManagedIdentity/userAssignedIdentities/user-acstor-westus2-agentpool`
+
+Follow these steps to create a storage pool using your own encryption key. All persistent volumes created from this storage pool will be encrypted using the same key.
+
+1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool-cmk.yaml`.
+
+1. Paste in the following code, supply the required parameters, and save the file. The storage pool **name** value can be whatever you want. For **skuName**, specify the level of performance and redundancy. Acceptable values are Premium_LRS, Standard_LRS, StandardSSD_LRS, UltraSSD_LRS, Premium_ZRS, PremiumV2_LRS, and StandardSSD_ZRS. For **storage**, specify the amount of storage capacity for the pool in Gi or Ti. Be sure to supply the CMK encryption parameters.
+
+ ```yml
+ apiVersion: containerstorage.azure.com/v1beta1
+ kind: StoragePool
+ metadata:
+ name: azuredisk
+ namespace: acstor
+ spec:
+ poolType:
+ azureDisk:
+ skuName: Premium_LRS
+ encryption: {
+ keyVersion: "<key-version>",
+ keyName: "<key-name>",
+ keyVaultUri: "<key-vault-uri>",
+ identity: "<identity>"
+ }
+ resources:
+ requests:
+ storage: 1Ti
+ ```
+
+1. Apply the YAML manifest file to create the storage pool.
+
+ ```azurecli-interactive
+ kubectl apply -f acstor-storagepool-cmk.yaml
+ ```
+
+ When storage pool creation is complete, you'll see a message like:
+
+ ```output
+ storagepool.containerstorage.azure.com/azuredisk created
+ ```
+
+ You can also run this command to check the status of the storage pool. Replace `<storage-pool-name>` with your storage pool **name** value. For this example, the value would be **azuredisk**.
+
+ ```azurecli-interactive
+ kubectl describe sp <storage-pool-name> -n acstor
+ ```
+
+When the storage pool is created, Azure Container Storage will create a storage class on your behalf, using the naming convention `acstor-<storage-pool-name>`.
+## Display the available storage classes
+
+When the storage pool is ready to use, you must select a storage class to define how storage is dynamically created when creating persistent volume claims and deploying persistent volumes.
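+
+For example, you can list the generated storage classes and confirm that one named `acstor-<storage-pool-name>` appears:
+
+```azurecli-interactive
+kubectl get sc
+```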
storage Elastic San Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-configure-customer-managed-keys.md
+
+ Title: Use customer-managed keys with an Azure Elastic SAN Preview
+
+description: Learn how to configure Azure Elastic SAN encryption with customer-managed keys for an Elastic SAN volume group by using the Azure PowerShell module.
+ Last updated : 11/06/2023
+# Configure customer-managed keys for an Azure Elastic SAN using Azure Key Vault
+
+All data written to an Elastic SAN volume is automatically encrypted-at-rest with a data encryption key (DEK). Azure uses *[envelope encryption](../../security/fundamentals/encryption-atrest.md#envelope-encryption-with-a-key-hierarchy)* to encrypt the DEK using a Key Encryption Key (KEK). By default, the KEK is platform-managed (managed by Microsoft), but you can create and manage your own.
+
+This article shows how to configure encryption of an Elastic SAN volume group with customer-managed keys stored in an Azure Key Vault.
+
+## Limitations
++
+## Prerequisites
+
+To perform the operations described in this article, you must prepare your Azure account and the management tools you plan to use. Preparation includes installing the necessary modules, signing in to your account, and setting variables for PowerShell. The same set of variables is used throughout this article, so setting them now allows you to reuse them in all of the samples.
+
+To perform the operations described in this article using PowerShell:
+
+1. Install [the latest version of Azure PowerShell](/powershell/azure/install-azure-powershell) if you haven't already.
+
+1. Sign in to Azure.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+#### Create variables to be used in the PowerShell samples in this article
+
+Copy the sample code and replace all placeholder text with your own values. Use the same variables in all of the examples in this article:
++
+```azurepowershell
+# Define some variables
+# The name of the resource group where the resources will be deployed.
+$RgName = "ResourceGroupName"
+# The name of the Elastic SAN that contains the volume group to be configured.
+$EsanName = "ElasticSanName"
+# The name of the Elastic SAN volume group to be configured.
+$EsanVgName = "ElasticSanVolumeGroupName"
+# The region where the new resources will be created.
+$Location = "Location"
+# The name of the Azure Key Vault that will contain the KEK.
+$KvName = "KeyVaultName"
+# The name of the Azure Key Vault key that is the KEK.
+$KeyName = "KeyName"
+# The name of the user-assigned managed identity, if applicable (see the "Choose a managed identity to authorize access to the key vault" section of this article).
+$ManagedUserName = "ManagedUserName"
+```
+
+## Configure the key vault
+
+You can use a new or existing key vault to store customer-managed keys. The encrypted resource and the key vault can be in different regions or subscriptions in the same Microsoft Entra ID tenant. To learn more about Azure Key Vault, see [Azure Key Vault Overview](../../key-vault/general/overview.md) and [What is Azure Key Vault?](../../key-vault/general/basic-concepts.md).
+
+Using customer-managed keys with encryption requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and can't be disabled. You can enable purge protection either when you create the key vault or after it's created. Azure Elastic SAN encryption supports RSA keys of sizes 2048, 3072 and 4096.
+
+Azure Key Vault supports authorization with Azure RBAC via an Azure RBAC permission model. Microsoft recommends using the Azure RBAC permission model over key vault access policies. For more information, see [Grant permission to applications to access an Azure key vault using Azure RBAC](../../key-vault/general/rbac-guide.md).
+
+There are two steps involved in preparing a key vault as a store for your volume group KEKs:
+
+> [!div class="checklist"]
+> * Create a new key vault with soft delete and purge protection enabled, or enable purge protection for an existing one.
+> * Assign the role of Key Vault Crypto Officer to your account to be able to create a key in the vault.
+
+The following example:
+
+> [!div class="checklist"]
+> * Creates a new key vault with soft delete and purge protection enabled.
+> * Gets the UPN of your user account.
+> * Assigns the Key Vault Crypto Officer role for the new key vault to your account.
+
+Use the same [variables you defined previously](#create-variables-to-be-used-in-the-powershell-samples-in-this-article) in this article.
+
+```azurepowershell
+# Setup the parameters to create the key vault.
+$NewKvArguments = @{
+ Name = $KvName
+ ResourceGroupName = $RgName
+ Location = $Location
+ EnablePurgeProtection = $true
+ EnableRbacAuthorization = $true
+}
+
+# Create the key vault.
+$KeyVault = New-AzKeyVault @NewKvArguments
+
+# Get the UPN of the currently logged-in user.
+$MyAccountUpn = (Get-AzADUser -SignedIn).UserPrincipalName
+
+# Setup the parameters to create the role assignment.
+$CryptoOfficerRoleArguments = @{
+ SignInName = $MyAccountUpn
+ RoleDefinitionName = "Key Vault Crypto Officer"
+ Scope = $KeyVault.ResourceId
+}
+
+# Assign the Crypto Officer role to your account for the key vault.
+New-AzRoleAssignment @CryptoOfficerRoleArguments
+```
+
+To learn how to enable purge protection on an existing key vault with PowerShell, see [Azure Key Vault recovery overview](../../key-vault/general/key-vault-recovery.md?tabs=azure-powershell).
+
+For more information on how to assign an RBAC role with PowerShell, see [Assign Azure roles using Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md).
+
+## Add a key
+
+Next, add a key to the key vault. Before you add the key, make sure that you have assigned to yourself the **Key Vault Crypto Officer** role.
+
+Azure Storage and Elastic SAN encryption support RSA keys of sizes 2048, 3072 and 4096. For more information about supported key types, see [About keys](../../key-vault/keys/about-keys.md).
+
+Use these sample commands to add a key to the key vault with PowerShell. Use the same [variables you defined previously](#create-variables-to-be-used-in-the-powershell-samples-in-this-article) in this article.
+
+```azurepowershell
+# Get the key vault where the key is to be added.
+$KeyVault = Get-AzKeyVault -ResourceGroupName $RgName -VaultName $KvName
+
+# Setup the parameters to add the key to the vault.
+$NewKeyArguments = @{
+ Name = $KeyName
+ VaultName = $KeyVault.VaultName
+ Destination = "Software"
+}
+
+# Add the key to the vault.
+$Key = Add-AzKeyVaultKey @NewKeyArguments
+```
+
+## Choose a key rotation strategy
+
+Following cryptographic best practices means rotating the key that protects your Elastic SAN volume group on a regular schedule, typically at least every two years. Azure Elastic SAN never modifies the key in the key vault, but you can configure a key rotation policy to rotate the key according to your compliance requirements. For more information, see [Configure cryptographic key auto-rotation in Azure Key Vault](../../key-vault/keys/how-to-configure-key-rotation.md).
+
+After the key is rotated in the key vault, the encryption configuration for your Elastic SAN volume group must be updated to use the new key version. Customer-managed keys support both automatic and manual updating of the KEK version. Decide which approach you want to use before you configure customer-managed keys for a new or existing volume group.
+
+For more information on key rotation, see [Update the key version](elastic-san-encryption-manage-customer-keys.md#update-the-key-version).
+
+> [!IMPORTANT]
+> When you modify the key or the key version, the protection of the root data encryption key changes, but the data in your Azure Elastic SAN volume group remains encrypted at all times. There is no additional action required on your part to ensure that your data is protected. Rotating the key version doesn't impact performance, and there is no downtime associated with it.
+
+### Automatic key version rotation
+
+Azure Elastic SAN can automatically update the customer-managed key that is used for encryption to use the latest key version from the key vault. Elastic SAN checks the key vault daily for a new version of the key. When a new version becomes available, it automatically begins using the latest version of the key for encryption. When you rotate a key, be sure to wait 24 hours before disabling the older version.
+
+> [!IMPORTANT]
+>
+> If the Elastic SAN volume group was previously configured for manual updating of the key version and you want to change it to update automatically, you might need to explicitly change the key version to an empty string. For more information, see [Automatically update the key version](elastic-san-encryption-manage-customer-keys.md#automatically-update-the-key-version).
+
+### Manual key version rotation
+
+If you prefer to update the key version manually, specify the URI for a specific version at the time that you configure encryption with customer-managed keys. In this case, Elastic SAN won't automatically update the key version when a new version is created in the key vault. For Elastic SAN to use a new key version, you must update it manually.
+
+To locate the URI for a specific version of a key in the Azure portal:
+
+1. Navigate to your key vault.
+1. Under **Objects** select **Keys**.
+1. Select the desired key to view its versions.
+1. Select a key version to view the settings for that version.
+1. Copy the value of the **Key Identifier** field, which provides the URI.
+1. Save the copied text to use later when configuring encryption for your volume group.
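+
+As an alternative to the portal, this short PowerShell sketch (using the variables defined earlier in this article) lists each version of the key together with its identifier URI:
+
+```azurepowershell
+# List all versions of the key with their identifier URIs.
+Get-AzKeyVaultKey -VaultName $KvName -Name $KeyName -IncludeVersions | Select-Object Version, Id
+```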
++
+## Choose a managed identity to authorize access to the key vault
+
+When you enable customer-managed encryption keys for an Elastic SAN volume group, you must specify a managed identity that is used to authorize access to the key vault that contains the key. The managed identity must have the following permissions:
+- *get*
+- *wrapkey*
+- *unwrapkey*
+
+The managed identity that is authorized access to the key vault can be either a user-assigned or system-assigned managed identity. To learn more about system-assigned versus user-assigned managed identities, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types).
+
+When a volume group is created, a system-assigned identity is automatically created for it. If you want to use a user-assigned identity, create it before you configure customer-managed encryption keys for your volume group. To learn how to create and manage a user-assigned managed identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+
+### Use a user-assigned managed identity to authorize access
+
+When you enable customer-managed keys for a new volume group, you must specify a user-assigned managed identity. An existing volume group supports using either a user-assigned managed identity or a system-assigned managed identity to configure customer-managed keys.
+
+When you configure customer-managed keys with a user-assigned managed identity, the user-assigned managed identity is used to authorize access to the key vault that contains the key. You must create the user-assigned identity before you configure customer-managed keys.
+
+A user-assigned managed identity is a standalone Azure resource. To learn more about user-assigned managed identities, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). To learn how to create and manage a user-assigned managed identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
+
+The user-assigned managed identity must have permissions to access the key in the key vault. Assign the **Key Vault Crypto Service Encryption User** role to the user-assigned managed identity with key vault scope to grant these permissions.
+
+The following example shows how to:
+
+> [!div class="checklist"]
+> * Create a new user-assigned managed identity.
+> * Wait for the creation of the user-assigned identity to complete.
+> * Get the `PrincipalId` from the new identity.
+> * Assign the required RBAC role to the new identity, scoped to the key vault.
+
+Use the same [variables you defined previously](#create-variables-to-be-used-in-the-powershell-samples-in-this-article) in this article.
+
+```azurepowershell
+# Create a new user-assigned managed identity.
+$UserIdentity = New-AzUserAssignedIdentity -ResourceGroupName $RgName -Name $ManagedUserName -Location $Location
+```
+
+> [!TIP]
+> Wait about 1 minute for the creation of the user-assigned identity to finish before proceeding.
+
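+If you'd rather poll than wait a fixed interval, a sketch like the following works, using the same module that provides `New-AzUserAssignedIdentity`:
+
+```azurepowershell
+# Poll every 10 seconds until the new identity is resolvable, then continue.
+while (-not (Get-AzUserAssignedIdentity -ResourceGroupName $RgName -Name $ManagedUserName -ErrorAction SilentlyContinue)) {
+    Start-Sleep -Seconds 10
+}
+```
+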
+```azurepowershell
+# Get the `PrincipalId` for the new identity.
+$PrincipalId = $UserIdentity.PrincipalId
+
+# Setup the parameters to assign the Crypto Service Encryption User role.
+$CryptoUserRoleArguments = @{
+ ObjectId = $PrincipalId
+ RoleDefinitionName = "Key Vault Crypto Service Encryption User"
+ Scope = $KeyVault.ResourceId
+}
+
+# Assign the Crypto Service Encryption User role to the managed identity so it can access the key in the vault.
+New-AzRoleAssignment @CryptoUserRoleArguments
+```
++
+### Use a system-assigned managed identity to authorize access
+
+A system-assigned managed identity is associated with an instance of an Azure service, such as an Azure Elastic SAN volume group.
+
+The system-assigned managed identity must have permissions to access the key in the key vault. Assign the **Key Vault Crypto Service Encryption User** role to the system-assigned managed identity with key vault scope to grant these permissions.
+
+When a volume group is created, a system-assigned identity is automatically created for it if the `-IdentityType "SystemAssigned"` parameter is specified with the `New-AzElasticSanVolumeGroup` command. The system-assigned identity isn't available until after the volume group has been created. You must also assign the **Key Vault Crypto Service Encryption User** role to the identity before it can access the encryption key in the key vault. As a result, you can't configure customer-managed keys to use a system-assigned identity during creation of a volume group. Only existing Elastic SAN volume groups can be configured to use a system-assigned identity to authorize access to the key vault. New volume groups must use a user-assigned identity if customer-managed keys are to be configured during volume group creation.
+
+Use this sample code to assign the required RBAC role to the system-assigned managed identity, scoped to the key vault. Use the same [variables you defined previously](#create-variables-to-be-used-in-the-powershell-samples-in-this-article) in this article.
+
+```azurepowershell
+# Get the Elastic SAN volume group.
+$ElasticSanVolumeGroup = Get-AzElasticSanVolumeGroup -Name $EsanVgName -ElasticSanName $EsanName -ResourceGroupName $RgName
+
+# Generate a system-assigned identity if one does not already exist.
+If ($null -eq $ElasticSanVolumeGroup.IdentityPrincipalId) {
+    # Capture the updated volume group so the new identity's principal ID is available below.
+    $ElasticSanVolumeGroup = Update-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSanName $EsanName -Name $EsanVgName -IdentityType "SystemAssigned"
+}
+
+# Get the `PrincipalId` (system-assigned identity) of the volume group.
+$PrincipalId = $ElasticSanVolumeGroup.IdentityPrincipalId
+
+# Setup the parameters to assign the Crypto Service Encryption User role.
+$CryptoUserRoleArguments = @{
+ ObjectId = $PrincipalId
+ RoleDefinitionName = "Key Vault Crypto Service Encryption User"
+ Scope = $KeyVault.ResourceId
+}
+
+# Assign the Crypto Service Encryption User role.
+New-AzRoleAssignment @CryptoUserRoleArguments
+```
+
+## Configure customer-managed keys for a volume group
+
+Select the tab that corresponds to whether you want to configure the settings during creation of a new volume group or update the settings for an existing one. Each set of tabs includes instructions for how to configure customer-managed encryption keys for automatic and manual updating of the key version.
+
+### New volume group
+
+Use this sample to configure customer-managed keys with **automatic** updating of the key version during creation of a new volume group using PowerShell:
+
+```azurepowershell
+# Setup the parameters to create the volume group.
+$NewVgArguments = @{
+ Name = $EsanVgName
+ ElasticSanName = $EsanName
+ ResourceGroupName = $RgName
+ ProtocolType = "Iscsi"
+ Encryption = "EncryptionAtRestWithCustomerManagedKey"
+ KeyName = $KeyName
+ KeyVaultUri = $KeyVault.VaultUri
+ IdentityType = "UserAssigned"
+ IdentityUserAssignedIdentity = @{$UserIdentity.Id=$UserIdentity}
+ EncryptionIdentityEncryptionUserAssignedIdentity = $UserIdentity.Id
+}
+
+# Create the volume group.
+New-AzElasticSanVolumeGroup @NewVgArguments
+```
+
+To configure customer-managed keys with **manual** updating of the key version during creation of a new volume group using PowerShell, add the `KeyVersion` parameter as shown in this sample:
+
+```azurepowershell
+# Setup the parameters to create the volume group.
+$NewVgArguments = @{
+ Name = $EsanVgName
+ ElasticSanName = $EsanName
+ ResourceGroupName = $RgName
+ ProtocolType = "Iscsi"
+ Encryption = "EncryptionAtRestWithCustomerManagedKey"
+ KeyName = $KeyName
+ KeyVaultUri = $KeyVault.VaultUri
+ KeyVersion = $Key.Version
+ IdentityType = "UserAssigned"
+ IdentityUserAssignedIdentity = @{$UserIdentity.Id=$UserIdentity}
+ EncryptionIdentityEncryptionUserAssignedIdentity = $UserIdentity.Id
+}
+
+# Create the volume group.
+New-AzElasticSanVolumeGroup @NewVgArguments
+```
+
+### Existing volume group
+
+This set of samples shows how to configure an existing volume group to use customer-managed keys with a system-assigned identity. The steps are:
+
+> [!div class="checklist"]
+> * Generate a system-assigned identity for the volume group.
+> * Get the principal ID of the new system-assigned identity.
+> * Assign the Key Vault Crypto Service Encryption User role to the new identity for the key vault.
+> * Update the volume group to use customer-managed keys.
+
+Use this sample to configure an existing volume group to use customer-managed keys with a system-assigned identity and **automatic** updating of the key version using PowerShell:
+
+```azurepowershell
+# Get the Elastic SAN volume group.
+$ElasticSanVolumeGroup = Get-AzElasticSanVolumeGroup -Name $EsanVgName -ElasticSanName $EsanName -ResourceGroupName $RgName
+
+# Generate a system-assigned identity if one does not already exist.
+If ($null -eq $ElasticSanVolumeGroup.IdentityPrincipalId) {
+    # Capture the updated volume group so the new identity's principal ID is available below.
+    $ElasticSanVolumeGroup = Update-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSanName $EsanName -Name $EsanVgName -IdentityType "SystemAssigned"
+}
+
+# Get the `PrincipalId` (system-assigned identity) of the volume group.
+$PrincipalId = $ElasticSanVolumeGroup.IdentityPrincipalId
+
+# Setup the parameters to assign the Crypto Service Encryption User role.
+$CryptoUserRoleArguments = @{
+ ObjectId = $PrincipalId
+ RoleDefinitionName = "Key Vault Crypto Service Encryption User"
+ Scope = $KeyVault.ResourceId
+}
+
+# Assign the Crypto Service Encryption User role.
+New-AzRoleAssignment @CryptoUserRoleArguments
+
+# Setup the parameters to update the volume group.
+$UpdateVgArguments = @{
+ Name = $EsanVgName
+ ElasticSanName = $EsanName
+ ResourceGroupName = $RgName
+ ProtocolType = "Iscsi"
+ Encryption = "EncryptionAtRestWithCustomerManagedKey"
+ KeyName = $KeyName
+ KeyVaultUri = $KeyVault.VaultUri
+}
+
+# Update the volume group.
+Update-AzElasticSanVolumeGroup @UpdateVgArguments
+```
+
+To configure an existing volume group to use customer-managed keys with a system-assigned identity and **manual** updating of the key version using PowerShell, add the `KeyVersion` parameter as shown in this sample:
+
+```azurepowershell
+# Get the Elastic SAN volume group.
+$ElasticSanVolumeGroup = Get-AzElasticSanVolumeGroup -Name $EsanVgName -ElasticSanName $EsanName -ResourceGroupName $RgName
+
+# Generate a system-assigned identity if one does not already exist.
+If ($null -eq $ElasticSanVolumeGroup.IdentityPrincipalId) {
+    # Capture the updated volume group so the new identity's principal ID is available below.
+    $ElasticSanVolumeGroup = Update-AzElasticSanVolumeGroup -ResourceGroupName $RgName -ElasticSanName $EsanName -Name $EsanVgName -IdentityType "SystemAssigned"
+}
+
+# Get the `PrincipalId` (system-assigned identity) of the volume group.
+$PrincipalId = $ElasticSanVolumeGroup.IdentityPrincipalId
+
+# Setup the parameters to assign the Crypto Service Encryption User role.
+$CryptoUserRoleArguments = @{
+ ObjectId = $PrincipalId
+ RoleDefinitionName = "Key Vault Crypto Service Encryption User"
+ Scope = $KeyVault.ResourceId
+}
+
+# Assign the Crypto Service Encryption User role.
+New-AzRoleAssignment @CryptoUserRoleArguments
+
+# Setup the parameters to update the volume group.
+$UpdateVgArguments = @{
+ Name = $EsanVgName
+ ElasticSanName = $EsanName
+ ResourceGroupName = $RgName
+ ProtocolType = "Iscsi"
+ Encryption = "EncryptionAtRestWithCustomerManagedKey"
+ KeyName = $KeyName
+ KeyVaultUri = $KeyVault.VaultUri
+ KeyVersion = $Key.Version
+}
+
+# Update the volume group.
+Update-AzElasticSanVolumeGroup @UpdateVgArguments
+```
++
+## Next steps
+
+- [Manage customer keys for Azure Elastic SAN data encryption](elastic-san-encryption-manage-customer-keys.md)
storage Elastic San Connect Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-aks.md
The iSCSI CSI driver for Kubernetes is [licensed under the Apache 2.0 license](h
- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell) - Meet the [compatibility requirements](https://github.com/kubernetes-csi/csi-driver-iscsi/blob/master/README.md#container-images--kubernetes-compatibility) for the iSCSI CSI driver - [Deploy an Elastic SAN Preview](elastic-san-create.md)-- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure a virtual network endpoint](elastic-san-networking.md)
- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules) ## Limitations
kubectl -n kube-system get pod -o wide -l app=csi-iscsi-node
You need the volume's StorageTargetIQN, StorageTargetPortalHostName, and StorageTargetPortalPort.
-You may get them with the following Azure PowerShell command:
+You can get them with the following Azure PowerShell command:
```azurepowershell Get-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $searchedVolumeGroup -Name $searchedVolume ```
-You may also get them with the following Azure CLI command:
+You can also get them with the following Azure CLI command:
```azurecli az elastic-san volume show --elastic-san-name --name --resource-group --volume-group-name
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
In this article, you'll add the Storage service endpoint to an Azure virtual net
- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell) - [Deploy an Elastic SAN Preview](elastic-san-create.md)-- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure a virtual network endpoint](elastic-san-networking.md)
- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules) ## Limitations
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
In this article, you add the Storage service endpoint to an Azure virtual networ
- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell) - [Deploy an Elastic SAN Preview](elastic-san-create.md)-- [Configure a virtual network endpoint](elastic-san-networking.md#configure-a-virtual-network-endpoint)
+- [Configure a virtual network endpoint](elastic-san-networking.md)
- [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules) ## Limitations
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
This article explains how to deploy and configure an elastic storage area networ
- If you're using Azure PowerShell, install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell). - If you're using Azure CLI, install the [latest version](/cli/azure/install-azure-cli).
- - Once you've installed the latest version, run `az extension add -n elastic-san` to install the extension for Elastic SAN.
+- Once you've installed the latest version, run `az extension add -n elastic-san` to install the extension for Elastic SAN.
There are no additional registration steps required. ## Limitations
There are no additional registration steps required.
# [PowerShell](#tab/azure-powershell)
-Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. Replace all placeholder text with your own values and use the same variables in of all the examples in this article:
+Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. Replace all placeholder text with your own values and use the same variables in all of the examples in this article:
| Placeholder | Description | |-|-|
New-AzElasticSAN -ResourceGroupName $RgName -Name $EsanName -Location $Location
# [Azure CLI](#tab/azure-cli)
-Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. Replace all placeholder text with your own values and use the same variables in of all the examples in this article:
+Use one of these sets of sample code to create an Elastic SAN that uses locally redundant storage or zone-redundant storage. Replace all placeholder text with your own values and use the same variables in all of the examples in this article:
| Placeholder | Description | |-|-|
az elastic-san volume create --elastic-san-name $EsanName -g $RgName -v $EsanVgN
## Next steps
-Now that you've deployed an Elastic SAN, Connect to Elastic SAN (preview) volumes from either [Windows](elastic-san-connect-windows.md) or [Linux](elastic-san-connect-linux.md) clients.
+Now that you've deployed an Elastic SAN, connect to Elastic SAN (preview) volumes from either [Windows](elastic-san-connect-windows.md) or [Linux](elastic-san-connect-linux.md) clients.
storage Elastic San Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-delete.md
iscsiadm --mode node --target **yourStorageTargetIQN** --portal **yourStorageTar
## Delete a SAN
-When your SAN has no active connections to any clients, you may delete it using the Azure portal or Azure PowerShell module. If you delete a SAN or a volume group, the corresponding child resources will be deleted along with it. The delete commands for each of the resource levels are below.
+You can delete your SAN by using the Azure portal, Azure PowerShell, or Azure CLI. If you delete a SAN or a volume group, the corresponding child resources will be deleted along with it. The delete commands for each of the resource levels are below.
-To delete volumes, run the following commands.
+The following commands delete your volumes. These commands use the `-ForceDelete false` and `-DeleteSnapshot false` parameters for PowerShell, and the `--x-ms-force-delete false` and `--x-ms-delete-snapshots false` parameters for the Azure CLI. If you set `-ForceDelete` or `--x-ms-force-delete` to `true`, volume deletion succeeds even if you have active iSCSI connections. If you set `-DeleteSnapshot` or `--x-ms-delete-snapshots` to `true`, all snapshots associated with the volume are deleted along with the volume itself.
# [PowerShell](#tab/azure-powershell) ```azurepowershell
-Remove-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volumeGroupName -Name $volumeName
+Remove-AzElasticSanVolume -ResourceGroupName $resourceGroupName -ElasticSanName $sanName -VolumeGroupName $volumeGroupName -Name $volumeName -ForceDelete false -DeleteSnapshot false
``` # [Azure CLI](#tab/azure-cli) ```azurecli
-az elastic-san volume delete -e $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName
+az elastic-san volume delete -e $sanName -g $resourceGroupName -v $volumeGroupName -n $volumeName --x-ms-force-delete false --x-ms-delete-snapshots false
```
storage Elastic San Encryption Manage Customer Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-encryption-manage-customer-keys.md
+
+ Title: Learn how to manage keys for Elastic SAN Preview
+
+description: Learn how to manage keys for Elastic SAN Preview
+ Last updated : 11/06/2023
+# Learn how to manage keys for Elastic SAN Preview
+
+All data written to an Elastic SAN volume is automatically encrypted-at-rest with a data encryption key (DEK). Azure DEKs are always *platform-managed* (managed by Microsoft). Azure uses [envelope encryption](../../security/fundamentals/encryption-atrest.md#envelope-encryption-with-a-key-hierarchy), also referred to as wrapping, which involves using a Key Encryption Key (KEK) to encrypt the DEK. By default, the KEK is platform-managed, but you can create and manage your own KEK. [Customer-managed keys](elastic-san-encryption-overview.md#customer-managed-keys) offer greater flexibility to manage access controls and can help you meet your organizational security and compliance requirements.
+
+You control all aspects of your key encryption keys, including:
+
+- Which key is used
+- Where your keys are stored
+- How the keys are rotated
+- The ability to switch between customer-managed and platform-managed keys
+
+This article tells you how to manage your customer-managed KEKs.
+
+> [!NOTE]
+> Envelope encryption allows you to change your key configuration without impacting your Elastic SAN volumes. When you make a change, the Elastic SAN service re-encrypts the data encryption keys with the new keys. The protection of the data encryption key changes, but the data in your Elastic SAN volumes remains encrypted at all times. There is no additional action required on your part to ensure that your data is protected. Changing the key configuration doesn't impact performance, and there is no downtime associated with such a change.
+
+## Limitations
++
+## Change the key
+
+You can change the key that you're using for Azure Elastic SAN encryption at any time.
+
+To change the key with PowerShell, call [Update-AzElasticSanVolumeGroup](/powershell/module/az.elasticsan/update-azelasticsanvolumegroup) and provide the new key name and version. If the new key is in a different key vault, then you must also update the key vault URI.
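+
+For example, a sketch along these lines updates the key configuration; the placeholder values are assumptions, and the parameters match the samples in [Configure customer-managed keys](elastic-san-configure-customer-managed-keys.md):
+
+```azurepowershell
+# Point the volume group at a new key, and a new vault if applicable.
+Update-AzElasticSanVolumeGroup -ResourceGroupName "<resource-group>" `
+    -ElasticSanName "<elastic-san-name>" `
+    -Name "<volume-group-name>" `
+    -Encryption EncryptionAtRestWithCustomerManagedKey `
+    -KeyName "<new-key-name>" `
+    -KeyVersion "<new-key-version>" `
+    -KeyVaultUri "<new-key-vault-uri>"
+```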
++
+If the new key is in a different key vault, you must [grant the managed identity access to the key in the new vault](elastic-san-configure-customer-managed-keys.md#choose-a-managed-identity-to-authorize-access-to-the-key-vault). If you opt for manual updating of the key version, you'll also need to [update the key vault URI](elastic-san-configure-customer-managed-keys.md#manual-key-version-rotation).
+
+## Update the key version
+
+Following cryptographic best practices means rotating the key that protects your Elastic SAN volume group on a regular schedule, typically at least every two years. Azure Elastic SAN never modifies the key in the key vault, but you can configure a key rotation policy to rotate the key according to your compliance requirements. For more information, see [Configure cryptographic key auto-rotation in Azure Key Vault](../../key-vault/keys/how-to-configure-key-rotation.md).
+
+After the key is rotated in the key vault, the customer-managed KEK configuration for your Elastic SAN volume group must be updated to use the new key version. Customer-managed keys support both automatic and manual updating of the KEK version. You can decide which approach you want to use when you initially configure customer-managed keys, or when you update your configuration.
+
+When you modify the key or the key version, the protection of the root encryption key changes, but the data in your Azure Elastic SAN volume group remains encrypted at all times. There's no extra action required on your part to ensure that your data is protected. Rotating the key version doesn't impact performance, and there's no downtime associated with rotating the key version.
+
+> [!IMPORTANT]
+> To rotate a key, create a new version of the key in the key vault, according to your compliance requirements. Azure Elastic SAN does not handle key rotation, so you will need to manage rotation of the key in the key vault.
+>
+> When you rotate the key used for customer-managed keys, that action is not currently logged to the Azure Monitor logs for Azure Elastic SAN.
+
+### Automatically update the key version
+
+To automatically update a customer-managed key when a new version is available, omit the key version when you enable encryption with customer-managed keys for the Elastic SAN volume group. If the key version is omitted, then Azure Elastic SAN checks the key vault daily for a new version of a customer-managed key. If a new key version is available, then Azure Elastic SAN automatically uses the latest version of the key.
+
+Azure Elastic SAN checks the key vault for a new key version only once daily. When you rotate a key, be sure to wait 24 hours before disabling the older version.
+
+If the Elastic SAN volume group was previously configured for manual updating of the key version and you want to change it to update automatically, you might need to explicitly change the key version to an empty string. For details on how to do this, see [Manual key version rotation](elastic-san-configure-customer-managed-keys.md#manual-key-version-rotation).
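+
+A minimal sketch of that change, assuming the volume group already uses customer-managed keys (placeholder values are assumptions):
+
+```azurepowershell
+# Clear the pinned key version so Elastic SAN tracks the latest key version automatically.
+Update-AzElasticSanVolumeGroup -ResourceGroupName "<resource-group>" -ElasticSanName "<elastic-san-name>" -Name "<volume-group-name>" -KeyVersion ""
+```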
+
+### Manually update the key version
+
+To use a specific version of a key for Azure Elastic SAN encryption, specify that key version when you enable encryption with customer-managed keys for the Elastic SAN volume group. If you specify the key version, then Azure Elastic SAN uses that version for encryption until you manually update the key version.
+
+When the key version is explicitly specified, then you must manually update the Elastic SAN volume group to use the new key version URI when a new version is created. To learn how to update the Elastic SAN volume group to use a new version of the key, see [Configure encryption with customer-managed keys stored in Azure Key Vault](elastic-san-configure-customer-managed-keys.md).
+
+## Revoke access to a volume group that uses customer-managed keys
+
+To temporarily revoke access to an Elastic SAN volume group that is using customer-managed keys, disable the key currently being used in the key vault. There's no performance impact or downtime associated with disabling and reenabling the key.
+
+After the key has been disabled, clients can't call operations that read from or write to volumes in the volume group or their metadata.
++
+> [!CAUTION]
+> When you disable the key in the key vault, the data in your Azure Elastic SAN volume group remains encrypted, but it becomes inaccessible until you reenable the key.
+
+To revoke a customer-managed key with PowerShell, call the [Update-AzKeyVaultKey](/powershell/module/az.keyvault/update-azkeyvaultkey) command, as shown in the following example. Remember to replace the placeholder values in brackets with your own values to define the variables, or use the variables defined in the previous examples.
+
+```azurepowershell
+$KvName = "<key-vault-name>"
+$KeyName = "<key-name>"
+$enabled = $false
+# $false to disable the key / $true to enable it
+
+# Check the current state of the key (before and after enabling/disabling it)
+Get-AzKeyVaultKey -Name $KeyName -VaultName $KvName
+
+# Disable (or enable) the key
+Update-AzKeyVaultKey -VaultName $KvName -Name $KeyName -Enable $enabled
+```
+
+## Switch back to platform-managed keys
+
+You can switch from customer-managed keys back to platform-managed keys at any time, using the Azure PowerShell module.
+
+To switch from customer-managed keys back to platform-managed keys with PowerShell, call [Update-AzElasticSanVolumeGroup](/powershell/module/az.elasticsan/update-azelasticsanvolumegroup) with the `-Encryption` option, as shown in the following example. Remember to replace the placeholder values with your own values and to use the variables defined in the previous examples.
+
+```azurepowershell
+Update-AzElasticSanVolumeGroup -ResourceGroupName "ResourceGroupName" -ElasticSanName "ElasticSanName" -Name "ElasticSanVolumeGroupName" -Encryption EncryptionAtRestWithPlatformKey
+```
++
+## See also
+
+- [Configure customer-managed keys for an Elastic SAN volume group](elastic-san-configure-customer-managed-keys.md)
storage Elastic San Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-encryption-overview.md
+
+ Title: Encryption options for Azure Elastic SAN Preview
+
+description: Azure Elastic SAN protects your data by encrypting it at rest. You can use platform-managed keys for the encryption of your Elastic SAN volumes or use customer-managed keys to manage encryption with your own keys.
+ Last updated : 11/06/2023
+# Encrypt an Azure Elastic SAN Preview
+
+Azure Elastic SAN uses server-side encryption (SSE) to automatically encrypt data stored in an Elastic SAN. SSE protects your data and helps you meet your organizational security and compliance requirements.
+
+Data in Azure Elastic SAN volumes is encrypted and decrypted transparently using 256-bit [AES encryption](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard), one of the strongest block ciphers available, and is FIPS 140-2 compliant. For more information about the cryptographic modules underlying Azure data encryption, see [Cryptography API: Next Generation](/windows/desktop/seccng/cng-portal).
+
+SSE is enabled by default and can't be disabled. It doesn't impact the performance of your Elastic SAN, and there's no extra cost associated with it.
+
+## About encryption key management
+
+There are two kinds of encryption keys available: platform-managed keys and customer-managed keys. Data written to an Elastic SAN volume is encrypted with platform-managed (Microsoft-managed) keys by default. If you prefer, you can use [Customer-managed keys](#customer-managed-keys) instead, if you have specific organizational security and compliance requirements.
+
+When you configure a volume group, you can choose to use either platform-managed or customer-managed keys. All volumes in a volume group inherit the volume group's configuration. You can switch between customer-managed and platform-managed keys at any time. If you switch between these key types, the Elastic SAN service re-encrypts the data encryption key with the new KEK. The protection of the data encryption key changes, but the data in your Elastic SAN volumes always remains encrypted. There's no extra action required on your part to ensure that your data is protected.
+
+## Customer-managed keys
+
+If you use customer-managed keys, you must use an [Azure Key Vault](../../key-vault/general/overview.md) to store them.
+
+You can either create and import your own RSA keys and store them in your Azure Key Vault, or you can generate new RSA keys using Azure Key Vault. You can use the Azure Key Vault APIs or management interfaces to generate your keys. The Elastic SAN and the key vault can be in different regions and subscriptions, but they must be in the same Microsoft Entra ID tenant.
+
+The following diagram shows how Azure Elastic SAN uses Microsoft Entra ID and a key vault to make requests using the customer-managed key:
++
+The following list explains the numbered steps in the diagram:
+
+1. An Azure Key Vault admin grants permissions to a managed identity to access the key vault that contains the encryption keys. The managed identity can be either a user-assigned identity that you create and manage, or a system-assigned identity that is associated with the volume group.
+1. An Azure [Elastic SAN Volume Group Owner](../../role-based-access-control/built-in-roles.md#elastic-san-volume-group-owner) configures encryption with a customer-managed key for the volume group.
+1. Azure Elastic SAN uses the managed identity granted permissions in step 1 to authenticate access to the key vault via Microsoft Entra ID.
+1. Azure Elastic SAN wraps the data encryption key with the customer-managed key from the key vault.
+1. For read/write operations, Azure Elastic SAN sends requests to Azure Key Vault to unwrap the account encryption key to perform encryption and decryption operations.
+
+## Next steps
+
+- [Configure customer-managed keys for an Elastic SAN volume group](elastic-san-configure-customer-managed-keys.md)
+- [Manage customer keys for Azure Elastic SAN data encryption](elastic-san-encryption-manage-customer-keys.md)
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
Elastic SAN simplifies deploying and managing storage at scale through grouping
### Performance
-With an Elastic SAN, it's possible to scale your performance up to millions of IOPS, with double-digit GB/s throughput, and have single-digit millisecond latency. The performance of a SAN is shared across all of its volumes. As long as the SAN's caps aren't exceeded and the volumes are large enough, each volume can scale up to 64,000 IOPs. Elastic SAN volumes connect to your clients using the [iSCSI](https://en.wikipedia.org/wiki/ISCSI) protocol, which allows them to bypass the IOPS limit of an Azure VM and offers high throughput limits.
+With an Elastic SAN, it's possible to scale your performance up to millions of IOPS, with double-digit GB/s throughput, and have single-digit millisecond latency. The performance of a SAN is shared across all of its volumes. As long as the SAN's caps aren't exceeded and the volumes are large enough, each volume can scale up to 80,000 IOPS. Elastic SAN volumes connect to your clients using the [iSCSI](https://en.wikipedia.org/wiki/ISCSI) protocol, which allows them to bypass the IOPS limit of an Azure VM and offers high throughput limits.
### Cost optimization and consolidation
storage Elastic San Networking Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md
description: An overview of Azure Elastic SAN Preview networking options, includ
Previously updated : 10/19/2023 Last updated : 11/06/2023
Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require. This article describes the options for allowing users and applications access to Elastic SAN volumes from an [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
-You can configure Elastic SAN volume groups to only allow access over specific endpoints on specific virtual network subnets. The allowed subnets may belong to a virtual network in the same subscription, or those in a different subscription, including subscriptions belonging to a different Microsoft Entra tenant. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
+You can configure Elastic SAN volume groups to only allow access over specific endpoints on specific virtual network subnets. The allowed subnets can belong to a virtual network in the same subscription, or those in a different subscription, including subscriptions belonging to a different Microsoft Entra tenant. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
-Depending on your configuration, applications on peered virtual networks or on-premises networks can also access volumes in the group. On-premises networks must be connected to the virtual network by a VPN or ExpressRoute. For more details about virtual network configurations, see [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
+Depending on your configuration, applications on peered virtual networks or on-premises networks can also access volumes in the group. On-premises networks must be connected to the virtual network by a VPN or ExpressRoute. For more information about virtual network configurations, see [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
There are two types of virtual network endpoints you can configure to allow access to an Elastic SAN volume group:
To decide which option is best for you, see [Compare Private Endpoints and Servi
After configuring endpoints, you can configure network rules to further control access to your Elastic SAN volume group. Once the endpoints and network rules have been configured, clients can connect to volumes in the group to process their workloads.
+## Public network access
+
+You can enable or disable public Internet access to your Elastic SAN endpoints at the SAN level. Enabling public network access for an Elastic SAN allows you to configure public access to individual volume groups in that SAN over storage service endpoints. By default, public access to individual volume groups is denied even if you allow it at the SAN level. If you disable public access at the SAN level, access to the volume groups within that SAN is only available over private endpoints.
+ ## Storage service endpoints
-[Azure Virtual Network (VNet) service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) provide secure and direct connectivity to Azure services using an optimized route over the Azure backbone network. Service endpoints allow you to secure your critical Azure service resources so only specific virtual networks can access them.
+[Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) provide secure and direct connectivity to Azure services using an optimized route over the Azure backbone network. Service endpoints allow you to secure your critical Azure service resources so only specific virtual networks can access them.
[Cross-region service endpoints for Azure Storage](../common/storage-network-security.md#azure-storage-cross-region-service-endpoints) work between virtual networks and storage service instances in any region. With cross-region service endpoints, subnets no longer use a public IP address to communicate with any storage account, including those in another region. Instead, all the traffic from a subnet to a storage account uses a private IP address as a source IP.
After configuring endpoints, you can configure network rules to further control
## Private endpoints
-> [!IMPORTANT]
-> For Elastic SANs using [locally-redundant storage (LRS)](elastic-san-planning.md#redundancy) as their redundancy option, private endpoints are supported in all regions that Elastic SAN is available. Private endpoints aren't currently supported for elastic SANs using [zone-redundant storage (ZRS)](elastic-san-planning.md#redundancy) as their redundancy option.
- Azure [Private Link](../../private-link/private-link-overview.md) enables you to access an Elastic SAN volume group securely over a [private endpoint](../../private-link/private-endpoint-overview.md) from a virtual network subnet. Traffic between your virtual network and the service traverses the Microsoft backbone network, eliminating the risk of exposing your service to the public internet. An Elastic SAN private endpoint uses a set of IP addresses from the subnet address space for each volume group. The maximum number used per endpoint is 20. Private endpoints have several advantages over service endpoints. For a complete comparison of private endpoints to service endpoints, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
+### Restrictions
+
+Private endpoints aren't currently supported for Elastic SANs using [zone-redundant storage (ZRS)](elastic-san-planning.md#redundancy).
+
+### How it works
+ Traffic between the virtual network and the Elastic SAN is routed over an optimal path on the Azure backbone network. Unlike service endpoints, you don't need to configure network rules to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints. For details on how to configure private endpoints, see [Enable private endpoint](elastic-san-networking.md#configure-a-private-endpoint).
For details on how to configure private endpoints, see [Enable private endpoint]
To further secure access to your Elastic SAN volumes, you can create virtual network rules for volume groups configured with service endpoints to allow access from specific subnets. You don't need network rules to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints.
-Each volume group supports up to 200 virtual network rules. If you delete a subnet that has been included in a network rule, it will be removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
+Each volume group supports up to 200 virtual network rules. If you delete a subnet that has been included in a network rule, it's removed from the network rules for the volume group. If you create a new subnet with the same name, it won't have access to the volume group. To allow access, you must explicitly authorize the new subnet in the network rules for the volume group.
Clients granted access via these network rules must also be granted the appropriate permissions to the Elastic SAN volume group.
To learn how to define network rules, see [Managing virtual network rules](elast
## Client connections
-After you have enabled the desired endpoints and granted access in your network rules, you can connect to the appropriate Elastic SAN volumes using the iSCSI protocol. For more details on how to configure client connections, see [Configure access to Elastic SAN volumes from clients](elastic-san-networking.md#configure-client-connections)
+After you have enabled the desired endpoints and granted access in your network rules, you can connect to the appropriate Elastic SAN volumes using the iSCSI protocol. For more information on how to configure client connections, see [Configure access to Elastic SAN volumes from clients](elastic-san-networking.md#configure-client-connections).
> [!NOTE] > If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection will retry for 90 seconds until terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart. ## Next steps
-[Configure Elastic SAN networking Preview](elastic-san-networking.md)
+[Configure Elastic SAN networking Preview](elastic-san-networking.md)
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
Title: How to configure Azure Elastic SAN Preview networking
-description: How to configure networking for Azure Elastic SAN Preview, a service that enables you to create and use network file shares in the cloud using either SMB or NFS protocols.
+ Title: Configure networking for Azure Elastic SAN Preview
+description: Learn how to configure access to an Azure Elastic SAN Preview.
Previously updated : 09/07/2023 Last updated : 11/06/2023
-# Configure networking for an Elastic SAN Preview
+# Configure network access for Azure Elastic SAN Preview
-Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require.
+You can control access to your Azure Elastic storage area network (SAN) Preview volumes. Controlling access allows you to secure your data and meet the needs of your applications and enterprise environments.
This article describes how to configure your Elastic SAN to allow access from your Azure virtual network infrastructure. To configure network access to your Elastic SAN:

> [!div class="checklist"]
+> - [Configure public network access](#configure-public-network-access)
> - [Configure a virtual network endpoint](#configure-a-virtual-network-endpoint).
> - [Configure client connections](#configure-client-connections).
+## Prerequisites
+
+- If you're using Azure PowerShell, install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell).
+- If you're using Azure CLI, install the [latest version](/cli/azure/install-azure-cli).
+- Once you've installed the latest version, run `az extension add -n elastic-san` to install the extension for Elastic SAN.
+There are no extra registration steps required.
+
+## Limitations
+## Configure public network access
+
+You enable public Internet access to your Elastic SAN endpoints at the SAN level. Enabling public network access for an Elastic SAN allows you to configure public access to individual volume groups over storage service endpoints. By default, public access to individual volume groups is denied even if you allow it at the SAN level. You must explicitly configure your volume groups to permit access from specific IP address ranges and virtual network subnets.
+
+You can enable public network access when you create an Elastic SAN, or enable it for an existing SAN using the Azure portal, PowerShell, or the Azure CLI.
+
+# [Portal](#tab/azure-portal)
+
+To enable public network access when you create a new Elastic SAN, proceed through the deployment and, on the **Networking** tab, select **Enable from virtual networks**.
+To enable it for an existing Elastic SAN, navigate to **Networking** under **Settings** for the Elastic SAN, then select **Enable public access from selected virtual networks**.
+# [PowerShell](#tab/azure-powershell)
+
+Use this sample code to create an Elastic SAN with public network access enabled using PowerShell. Replace the variable values before running the sample.
+
+```powershell
+# Set the variable values.
+# The name of the resource group where the Elastic SAN is deployed.
+$RgName = "<ResourceGroupName>"
+# The name of the Elastic SAN.
+$EsanName = "<ElasticSanName>"
+# The region where the new Elastic SAN will be created.
+$Location = "<Location>"
+# The SKU of the new Elastic SAN - `Premium_LRS` or `Premium_ZRS`.
+$SkuName = "<SkuName>"
+# The base size of the new Elastic SAN.
+$BaseSize = "<BaseSize>"
+# The extended size of the new Elastic SAN.
+$ExtendedSize = "<ExtendedSize>"
+# Set up the parameters to create an Elastic SAN with public network access enabled.
+$NewEsanArguments = @{
+    Name                    = $EsanName
+    ResourceGroupName       = $RgName
+    BaseSizeTiB             = $BaseSize
+    ExtendedCapacitySizeTiB = $ExtendedSize
+    Location                = $Location
+    SkuName                 = $SkuName
+    PublicNetworkAccess     = "Enabled"
+}
+# Create the Elastic SAN.
+New-AzElasticSan @NewEsanArguments
+```
+
+Use this sample code to update an Elastic SAN to enable public network access using PowerShell. Replace the values of `RgName` and `EsanName` with your own, then run the sample:
+
+```powershell
+# Set the variable values.
+$RgName = "<ResourceGroupName>"
+$EsanName = "<ElasticSanName>"
+# Update the Elastic SAN.
+Update-AzElasticSan -Name $EsanName -ResourceGroupName $RgName -PublicNetworkAccess Enabled
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Use this sample code to create an Elastic SAN with public network access enabled using the Azure CLI. Replace the variable values before running the sample.
+
+```azurecli
+# Set the variable values.
+# The name of the resource group where the Elastic SAN is deployed.
+RgName="<ResourceGroupName>"
+# The name of the Elastic SAN.
+EsanName="<ElasticSanName>"
+# The region where the new Elastic SAN will be created.
+Location="<Location>"
+# The SKU of the new Elastic SAN - `Premium_LRS` or `Premium_ZRS`.
+SkuName="<SkuName>"
+# The base size of the new Elastic SAN.
+BaseSize="<BaseSize>"
+# The extended size of the new Elastic SAN.
+ExtendedSize="<ExtendedSize>"
+
+# Create the Elastic SAN.
+az elastic-san create \
+ --elastic-san-name $EsanName \
+ --resource-group $RgName \
+ --location $Location \
+ --base-size-tib $BaseSize \
+ --extended-capacity-size-tib $ExtendedSize \
+ --sku $SkuName \
+ --public-network-access enabled
+```
+
+Use this sample code to update an Elastic SAN to enable public network access using the Azure CLI. Replace the values of `RgName` and `EsanName` with your own values:
+
+```azurecli
+# Set the variable values.
+RgName="<ResourceGroupName>"
+EsanName="<ElasticSanName>"
+# Update the Elastic SAN.
+az elastic-san update \
+ --elastic-san-name $EsanName \
+ --resource-group $RgName \
+ --public-network-access enabled
+```
## Configure a virtual network endpoint
-You can configure your Elastic SAN volume groups to allow access only from endpoints on specific virtual network subnets. The allowed subnets may belong to virtual networks in the same subscription, or those in a different subscription, including a subscription belonging to a different Microsoft Entra tenant.
+You can configure your Elastic SAN volume groups to allow access only from endpoints on specific virtual network subnets. The allowed subnets can belong to virtual networks in the same subscription, or those in a different subscription, including a subscription belonging to a different Microsoft Entra tenant.
You can allow access to your Elastic SAN volume group from two types of Azure virtual network endpoints:
A private endpoint uses one or more private IP addresses from your virtual network subnet.
Virtual network service endpoints are public and accessible via the internet. You can [Configure virtual network rules](#configure-virtual-network-rules) to control access to your volume group when using storage service endpoints.
-Network rules only apply to the public endpoints of a volume group, not private endpoints. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint. You can use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to control traffic over private endpoints if you want to refine access rules. If you want to use private endpoints exclusively, do not enable service endpoints for the volume group.
+Network rules only apply to the public endpoints of a volume group, not private endpoints. The process of approving the creation of a private endpoint grants implicit access to traffic from the subnet that hosts the private endpoint. You can use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to control traffic over private endpoints if you want to refine access rules. If you want to use private endpoints exclusively, don't enable service endpoints for the volume group.
To decide which type of endpoint works best for you, see [Compare Private Endpoints and Service Endpoints](../../virtual-network/vnet-integration-for-azure-services.md#compare-private-endpoints-and-service-endpoints).
You can also use [Network Policies](../../private-link/disable-private-endpoint-network-policy.md) to control traffic over private endpoints.
To create a private endpoint for an Elastic SAN volume group, you must have the [Elastic SAN Volume Group Owner](../../role-based-access-control/built-in-roles.md#elastic-san-volume-group-owner) role. To approve a new private endpoint connection, you must have permission to the [Azure resource provider operation](../../role-based-access-control/resource-provider-operations.md#microsoftelasticsan) `Microsoft.ElasticSan/elasticSans/PrivateEndpointConnectionsApproval/action`. Permission for this operation is included in the [Elastic SAN Network Admin](../../role-based-access-control/built-in-roles.md#elastic-san-owner) role, but it can also be granted via a custom Azure role.
-If you create the endpoint from a user account that has all of the necessary roles and permissions required for creation and approval, the process can be completed in one step. If not, it will require two separate steps by two different users.
+If you create the endpoint from a user account that has all of the necessary roles and permissions required for creation and approval, the process can be completed in one step. If not, it requires two separate steps by two different users.
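For example, here's a minimal sketch of granting both roles with Azure PowerShell's `New-AzRoleAssignment` cmdlet; the sign-in names and the volume group resource ID are placeholders, not values from this article:

```powershell
# Hypothetical scope: the volume group's full resource ID.
$Scope = "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.ElasticSan/elasticSans/<ElasticSanName>/volumegroups/<ElasticSanVolumeGroupName>"
# Grant the creating user the role required to create the private endpoint.
New-AzRoleAssignment -SignInName "<CreatorUpn>" -RoleDefinitionName "Elastic SAN Volume Group Owner" -Scope $Scope
# Grant the approver the role that includes the PrivateEndpointConnectionsApproval action.
New-AzRoleAssignment -SignInName "<ApproverUpn>" -RoleDefinitionName "Elastic SAN Network Admin" -Scope $Scope
```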
-The Elastic SAN and the virtual network may be in different resource groups, regions and subscriptions, including subscriptions that belong to different Microsoft Entra tenants. In these examples, we are creating the private endpoint in the same resource group as the virtual network.
+The Elastic SAN and the virtual network can be in different resource groups, regions, and subscriptions, including subscriptions that belong to different Microsoft Entra tenants. In these examples, we're creating the private endpoint in the same resource group as the virtual network.
# [Portal](#tab/azure-portal)
Currently, you can only configure a private endpoint using PowerShell or the Azure CLI.
Deploying a private endpoint for an Elastic SAN Volume group using PowerShell involves these steps:
-1. Get the subnet from which applications will connect.
+1. Get the subnet that applications will connect from.
1. Get the Elastic SAN Volume Group.
1. Create a private link service connection using the volume group as input.
1. Create the private endpoint using the subnet and the private link service connection as input.
-1. **(Optional** *if you are using the two-step process (creation, then approval))*: The Elastic SAN Network Admin approves the connection.
-
-Use this sample code to create a private endpoint for your Elastic SAN volume group with PowerShell. Replace all placeholder text with your own values:
-
-| Placeholder | Description |
-|-|-|
-| `<ResourceGroupName>` | The name of the resource group where the resources are deployed. |
-| `<SubnetName>` | The name of the subnet from which access to the volume group will be configured. |
-| `<VnetName>` | The name of the virtual network that includes the subnet. |
-| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to which a connection is to be created. |
-| `<ElasticSanName>` | The name of the Elastic SAN that the volume group belongs to. |
-| `<PrivateLinkSvcConnectionName>` | The name of the new private link service connection to the volume group. |
-| `<PrivateEndpointName>` | The name of the new private endpoint. |
-| `<Location>` | The region where the new private endpoint will be created. |
-| `<ApprovalDesc>` | The description provided for the approval of the private endpoint connection. |
+1. *(Optional, if you're using the two-step process of creation, then approval)*: The Elastic SAN Network Admin approves the connection.
+
+Use this sample code to create a private endpoint for your Elastic SAN volume group with PowerShell. Replace the values of `RgName`, `VnetName`, `SubnetName`, `EsanName`, `EsanVgName`, `PLSvcConnectionName`, `EndpointName`, and `Location` with your own values:
```powershell
# Set the resource group name.
$RgName = "<ResourceGroupName>"
-# Get the virtual network and subnet, which is input to creating the private endpoint.
+# Set the virtual network and subnet, which are used when creating the private endpoint.
$VnetName = "<VnetName>"
$SubnetName = "<SubnetName>"
$Vnet = Get-AzVirtualNetwork -Name $VnetName -ResourceGroupName $RgName
$Subnet = $Vnet | Select-Object -ExpandProperty Subnets | Where-Object {$_.Name -eq $SubnetName}
-# Get the Elastic SAN, which is input to creating the private endpoint service connection.
+# Set the Elastic SAN, which is used when creating the private endpoint service connection.
$EsanName = "<ElasticSanName>"
$EsanVgName = "<ElasticSanVolumeGroupName>"
$PeArguments = @{
New-AzPrivateEndpoint @PeArguments # -ByManualRequest # (Uncomment the `-ByManualRequest` parameter if you're using the two-step process).
```
-Use this sample code to approve the private link service connection if you are using the two-step process. Use the same variables from the previous code sample:
+Use this sample code to approve the private link service connection if you're using the two-step process. Use the same variables from the previous code sample:
```powershell
# Get the private endpoint and associated connection.
Deploying a private endpoint for an Elastic SAN Volume group using the Azure CLI involves these steps:
1. Volume group name
1. Resource group name
1. Subnet name
- 1. Vnet name
-1. **(Optional** *if you are using the two-step process (creation, then approval))*: The Elastic SAN Network Admin approves the connection.
-
-Use this sample code to create a private endpoint for your Elastic SAN volume group with the Azure CLI. Uncomment the `--manual-request` parameter if you are using the two-step process. Replace all placeholder text with your own values:
-
-| Placeholder | Description |
-|-|-|
-| `<ResourceGroupName>` | The name of the resource group where the resources are deployed. |
-| `<SubnetName>` | The name of the subnet from which access to the volume group will be configured. |
-| `<VnetName>` | The name of the virtual network that includes the subnet. |
-| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to which a connection is to be created. |
-| `<ElasticSanName>` | The name of the Elastic SAN that the volume group belongs to. |
-| `<PrivateLinkSvcConnectionName>` | The name of the new private link service connection to the volume group. |
-| `<PrivateEndpointName>` | The name of the new private endpoint. |
-| `<Location>` | The region where the new private endpoint will be created. |
-| `<ApprovalDesc>` | The description provided for the approval of the private endpoint connection. |
+ 1. Virtual network name
+1. *(Optional, if you're using the two-step process of creation, then approval)*: The Elastic SAN Network Admin approves the connection.
+
+Use this sample code to create a private endpoint for your Elastic SAN volume group with the Azure CLI. Uncomment the `--manual-request` parameter if you're using the two-step process. Replace all example variable values with your own:
```azurecli
# Define some variables.
+# The name of the resource group where the resources are deployed.
RgName="<ResourceGroupName>"
+# The name of the virtual network that includes the subnet.
VnetName="<VnetName>"
+# The name of the subnet from which access to the volume group will be configured.
SubnetName="<SubnetName>"
+# The name of the Elastic SAN that the volume group belongs to.
EsanName="<ElasticSanName>"
+# The name of the Elastic SAN Volume Group to which a connection is to be created.
EsanVgName="<ElasticSanVolumeGroupName>"
+# The name of the new private endpoint.
EndpointName="<PrivateEndpointName>"
+# The name of the new private link service connection to the volume group.
PLSvcConnectionName="<PrivateLinkSvcConnectionName>"
+# The region where the new private endpoint will be created.
Location="<Location>"
+# The description provided for the approval of the private endpoint connection.
ApprovalDesc="<ApprovalDesc>"

# Get the ID of the Elastic SAN.
az network private-endpoint-connection show \
--name $PLConnectionName
```
-Use this sample code to approve the private link service connection if you are using the two-step process. Use the same variables from the previous code sample:
+Use this sample code to approve the private link service connection if you're using the two-step process. Use the same variables from the previous code sample:
```azurecli
az network private-endpoint-connection approve \
To configure an Azure Storage service endpoint from the virtual network where access is required:
Virtual network service endpoints are public and accessible via the internet. You can [Configure virtual network rules](#configure-virtual-network-rules) to control access to your volume group when using storage service endpoints.

> [!NOTE]
-> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Microsoft Entra tenant are currently only supported through PowerShell, CLI and REST APIs. These rules cannot be configured through the Azure portal, though they may be viewed in the portal.
+> Configuration of rules that grant access to subnets in virtual networks that are a part of a different Microsoft Entra tenant is currently only supported through PowerShell, CLI, and REST APIs. These rules can't be configured through the Azure portal, though they can be viewed in the portal.
# [Portal](#tab/azure-portal)
az network vnet subnet update --resource-group $RgName --vnet-name $VnetName --n
#### Configure virtual network rules
-All incoming requests for data over a service endpoint are blocked by default. Only applications that request data from allowed sources that you configure in your network rules will be able to access your data.
+All incoming requests for data over a service endpoint are blocked by default. Only applications that request data from the allowed sources you configure in your network rules can access your data.
You can manage virtual network rules for volume groups through the Azure portal, PowerShell, or CLI.
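As a rough sketch of the PowerShell flow, assuming the Az.ElasticSan module's `New-AzElasticSanVirtualNetworkRuleObject` and `Update-AzElasticSanVolumeGroup` cmdlets (check the module reference for exact parameter names); all values are placeholders:

```powershell
# Build an Allow rule for the subnet, identified by its full resource ID (placeholder path).
$SubnetId = "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Network/virtualNetworks/<VnetName>/subnets/<SubnetName>"
$Rule = New-AzElasticSanVirtualNetworkRuleObject -VirtualNetworkResourceId $SubnetId -Action Allow
# Apply the rule to the volume group's network ACLs.
Update-AzElasticSanVolumeGroup -ResourceGroupName "<ResourceGroupName>" -ElasticSanName "<ElasticSanName>" `
    -Name "<ElasticSanVolumeGroupName>" -NetworkAclsVirtualNetworkRule $Rule
```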
## Configure client connections
-After you have enabled the desired endpoints and granted access in your network rules, you are ready to configure your clients to connect to the appropriate Elastic SAN volumes.
+After you have enabled the desired endpoints and granted access in your network rules, you're ready to configure your clients to connect to the appropriate Elastic SAN volumes.
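As an illustration only, here's a minimal sketch of connecting from a Windows client with the built-in iSCSI cmdlets; the portal address and target IQN are placeholders you'd read from the volume's properties, and the connect scripts linked below remain the recommended path:

```powershell
# Make sure the Microsoft iSCSI Initiator service is running.
Start-Service -Name MSiSCSI
# Register the volume's iSCSI portal (placeholder address), then connect to the target persistently.
New-IscsiTargetPortal -TargetPortalAddress "<VolumePortalAddress>"
Connect-IscsiTarget -NodeAddress "<VolumeTargetIqn>" -IsPersistent $true
```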
> [!NOTE]
> If a connection between a virtual machine (VM) and an Elastic SAN volume is lost, the connection will retry for 90 seconds until terminating. Losing a connection to an Elastic SAN volume won't cause the VM to restart.
- [Connect Azure Elastic SAN Preview volumes to an Azure Kubernetes Service cluster](elastic-san-connect-aks.md)
- [Connect to Elastic SAN Preview volumes - Linux](elastic-san-connect-linux.md)
+- [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md)
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
Before deploying an Elastic SAN Preview, consider the following:
- What level of performance do you need?
- What type of redundancy do you require?
-Answering those three questions can help you to successfully provision a SAN that meets your needs.
+Answering those three questions can help you to successfully deploy a SAN that meets your needs.
## Storage and performance
There are two layers when it comes to performance and storage, the total storage
### Elastic SAN
-There are two ways to provision storage for an Elastic SAN: You can either provision base capacity or additional capacity. Each TiB of base capacity also increases your SAN's IOPS and throughput (MB/s) but costs more than each TiB of additional capacity. Increasing additional capacity doesn't increase your SAN's IOPS or throughput (MB/s).
+There are two ways to allocate storage for an Elastic SAN: You can either allocate base capacity or additional capacity. Each TiB of base capacity also increases your SAN's IOPS and throughput (MB/s) but costs more than each TiB of additional capacity. Increasing additional capacity doesn't increase your SAN's IOPS or throughput (MB/s).
-When provisioning storage for an Elastic SAN, consider how much storage you require and how much performance you require. Using a combination of base capacity and additional capacity to meet these requirements allows you to optimize your costs. For example, if you needed 100 TiB of storage but only needed 250,000 IOPS and 4,000 MB/s, you could provision 50 TiB in your base capacity and 50 TiB in your additional capacity.
+When allocating storage for an Elastic SAN, consider how much storage you require and how much performance you require. Using a combination of base capacity and additional capacity to meet these requirements allows you to optimize your costs. For example, if you needed 100 TiB of storage but only needed 250,000 IOPS and 4,000 MB/s, you could allocate 50 TiB in your base capacity and 50 TiB in your additional capacity.
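For instance, here's a minimal sketch of that 50 TiB / 50 TiB split, reusing the `New-AzElasticSan` parameters shown in the networking article's samples; the names and SKU are placeholders:

```powershell
# 50 TiB base capacity (drives IOPS and throughput) plus 50 TiB additional capacity (cheaper, capacity only).
New-AzElasticSan -ResourceGroupName "<ResourceGroupName>" -Name "<ElasticSanName>" -Location "<Location>" `
    -SkuName "<SkuName>" -BaseSizeTiB 50 -ExtendedCapacitySizeTiB 50
```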
### Volumes
-You create volumes from the storage that you provisioned to your Elastic SAN. When you create a volume, think of it like partitioning a section of the storage of your Elastic SAN. The maximum performance of an individual volume is determined by the amount of storage allocated to it. Individual volumes can have fairly high IOPS and throughput, but the total IOPS and throughput of all your volumes can't exceed the total IOPS and throughput your SAN has.
+You create volumes from the storage that you allocated to your Elastic SAN. When you create a volume, think of it like partitioning a section of the storage of your Elastic SAN. The maximum performance of an individual volume is determined by the amount of storage allocated to it. Individual volumes can have fairly high IOPS and throughput, but the total IOPS and throughput of all your volumes can't exceed the total IOPS and throughput your SAN has.
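As a sketch, creating a 1 TiB volume might look like the following, assuming the Az.ElasticSan module's `New-AzElasticSanVolume` cmdlet (check the module reference for the exact size parameter name); all names are placeholders:

```powershell
# Carve a 1 TiB volume out of the SAN's allocated storage; the volume's size determines its maximum performance.
New-AzElasticSanVolume -ResourceGroupName "<ResourceGroupName>" -ElasticSanName "<ElasticSanName>" `
    -VolumeGroupName "<ElasticSanVolumeGroupName>" -Name "<VolumeName>" -SizeGiB 1024
```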
Consider the same example of a 100 TiB SAN that has 250,000 IOPS and 4,000 MB/s, and say this SAN has 100 volumes of 1 TiB each. You could potentially have three of these volumes operating at their maximum performance (64,000 IOPS, 1,024 MB/s) since this would be below the SAN's limits. But if four or five volumes all needed to operate at maximum at the same time, they wouldn't be able to. Instead, the performance of the SAN would be split evenly among them.

## Networking
-In the Elastic SAN Preview, you can configure access to volume groups over both public [Azure Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) and [private endpoints](../../private-link/private-endpoint-overview.md) from selected virtual network subnets. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
+In the Elastic SAN Preview, you can enable or disable public network access at the Elastic SAN level. You can also configure access to volume groups in the SAN over both public [Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) and [private endpoints](../../private-link/private-endpoint-overview.md) from selected virtual network subnets. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group. If you disable public access at the SAN level, access to the volume groups within that SAN is only available over private endpoints, regardless of individual configurations for the volume group.
-To allow network access, you must [enable a service endpoint for Azure Storage](elastic-san-networking.md#configure-an-azure-storage-service-endpoint) or a [private endpoint](elastic-san-networking.md#configure-a-private-endpoint) in your virtual network, then [setup a network rule](elastic-san-networking.md#configure-virtual-network-rules) on the volume group for any service endpoints. You don't need a network rule to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints. You can then mount volumes from [AKS](elastic-san-connect-aks.md), [Linux](elastic-san-connect-linux.md), or [Windows](elastic-san-connect-windows.md) clients in the subnet with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol.
+To allow network access for an individual volume group, you must [enable a service endpoint for Azure Storage](elastic-san-networking.md#configure-an-azure-storage-service-endpoint) or a [private endpoint](elastic-san-networking.md#configure-a-private-endpoint) in your virtual network, then [set up a network rule](elastic-san-networking.md#configure-virtual-network-rules) on the volume group for any service endpoints. You don't need a network rule to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints. You can then mount volumes from [AKS](elastic-san-connect-aks.md), [Linux](elastic-san-connect-linux.md), or [Windows](elastic-san-connect-windows.md) clients in the subnet with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol.
## Redundancy

To protect the data in your Elastic SAN against data loss or corruption, all SANs store multiple copies of each file as they're written. Depending on the requirements of your workload, you can select additional degrees of redundancy. The following data redundancy options are currently supported:

-- **Locally-redundant storage (LRS)**: With LRS, every SAN is stored three times within an Azure storage cluster. This protects against loss of data due to hardware faults, such as a bad disk drive. However, if a disaster such as fire or flooding occurs within the data center, all replicas of an Elastic SAN using LRS may be lost or unrecoverable.
+- **Locally-redundant storage (LRS)**: With LRS, every SAN is stored three times within an Azure storage cluster. This protects against loss of data due to hardware faults, such as a bad disk drive. However, if a disaster such as fire or flooding occurs within the data center, all replicas of an Elastic SAN using LRS could be lost or unrecoverable.
- **Zone-redundant storage (ZRS)**: With ZRS, three copies of each SAN are stored in three distinct and physically isolated storage clusters in different Azure *availability zones*. Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more data centers equipped with independent power, cooling, and networking. A write request to storage that is using ZRS happens synchronously. The write operation only returns successfully after the data is written to all replicas across the three availability zones.

## Encryption
storage Elastic San Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-scale-targets.md
The IOPS of an Elastic SAN increases by 5,000 per base TiB. So if you had an Elastic SAN that has 6 TiB of base capacity, that SAN could provide up to 30,000 IOPS.
### Throughput
-The throughput of an Elastic SAN increases by 80 MB/s per base TiB. So if you had an Elastic SAN that has 6 TiB of base capacity, that SAN could still provide up to 480 MB/s. That same SAN would provide 480-MB/s throughput whether it had 50 TiB of additional capacity or 500 TiB of additional capacity, since the SAN's performance is only determined by the base capacity. The throughput of an Elastic SAN is distributed among all its volumes.
+The throughput of an Elastic SAN increases by 200 MB/s per base TiB. So if you had an Elastic SAN that has 6 TiB of base capacity, that SAN could still provide up to 1200 MB/s. That same SAN would provide 1200 MB/s throughput whether it had 50 TiB of additional capacity or 500 TiB of additional capacity, since the SAN's performance is only determined by the base capacity. The throughput of an Elastic SAN is distributed among all its volumes.
### Elastic SAN scale targets
ZRS is only available in France Central, North Europe, West Europe and West US 2
## Volume group
-An Elastic SAN can have a maximum of 20 volume groups, and a volume group can contain up to 1,000 volumes.
+An Elastic SAN can have a maximum of 200 volume groups, and a volume group can contain up to 1,000 volumes.
## Volume
-The performance of an individual volume is determined by its capacity. The maximum IOPS of a volume increase by 750 per GiB, up to a maximum of 64,000 IOPS. The maximum throughput increases by 60 MB/s per GiB, up to a maximum of 1,024 MB/s. A volume needs at least 86 GiB to be capable of using 64,000 IOPS. A volume needs at least 18 GiB in order to be capable of using the maximum 1,024 MB/s. The combined IOPS and throughput of all your volumes can't exceed the IOPS and throughput of your SAN.
+The performance of an individual volume is determined by its capacity. The maximum IOPS of a volume increases by 750 per GiB, up to a maximum of 80,000 IOPS. The maximum throughput increases by 60 MB/s per GiB, up to a maximum of 1,280 MB/s. A volume needs at least 106 GiB to be capable of using the maximum 80,000 IOPS. A volume needs at least 21 GiB in order to be capable of using the maximum 1,280 MB/s. The combined IOPS and throughput of all your volumes can't exceed the IOPS and throughput of your SAN.
### Volume scale targets

|Supported capacities |Maximum potential IOPS |Maximum potential throughput (MB/s) |
|---|---|---|
-|1 GiB - 64 TiB |750 - 64,000 (increases by 750 per GiB) |60 - 1,024 (increases by 60 per GiB) |
+|1 GiB - 64 TiB |750 - 80,000 (increases by 750 per GiB) |60 - 1,280 (increases by 60 per GiB) |
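As a quick sanity check of these scaling rules, the following sketch computes a volume's capped limits from its size, using the 750 IOPS per GiB and 60 MB/s per GiB rates and the 80,000 IOPS / 1,280 MB/s caps from the table above:

```powershell
function Get-ElasticSanVolumeLimits {
    param([int]$SizeGiB)
    # IOPS grow by 750 per GiB up to 80,000; throughput grows by 60 MB/s per GiB up to 1,280 MB/s.
    [pscustomobject]@{
        SizeGiB           = $SizeGiB
        MaxIops           = [math]::Min(750 * $SizeGiB, 80000)
        MaxThroughputMBps = [math]::Min(60 * $SizeGiB, 1280)
    }
}

Get-ElasticSanVolumeLimits -SizeGiB 8    # 6,000 IOPS, 480 MB/s
Get-ElasticSanVolumeLimits -SizeGiB 2048 # capped at 80,000 IOPS and 1,280 MB/s
```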
## Next steps
stream-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md
Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
synapse-analytics Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/known-issues.md
# Azure Synapse Analytics known issues
-This page lists the known issues in [Azure Synapse Analytics](overview-what-is.md), as well as their resolution date or possible workaround.
+This page lists the known issues in [Azure Synapse Analytics](overview-what-is.md), and their resolution date or possible workaround.
Before submitting a Support request, please review this list to see if the issue that you're experiencing is already known and being addressed. To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Overview](index.yml), and [What's new in Azure Synapse Analytics?](whats-new.md).
To learn more about Azure Synapse Analytics, see the [Azure Synapse Analytics Ov
|Azure Synapse dedicated SQL pool|[Queries failing with Data Exfiltration Error](#queries-failing-with-data-exfiltration-error)|Has Workaround|
|Azure Synapse Workspace|[Blob storage linked service with User Assigned Managed Identity (UAMI) is not getting listed](#blob-storage-linked-service-with-user-assigned-managed-identity-uami-is-not-getting-listed)|Has Workaround|
|Azure Synapse Workspace|[Failed to delete Synapse workspace & Unable to delete virtual network](#failed-to-delete-synapse-workspace--unable-to-delete-virtual-network)|Has Workaround|
+|Azure Synapse Apache Spark pool|[Certain spark job or task fails too early with Error Code 503 due to storage account throttling](#certain-spark-job-or-task-fails-too-early-with-error-code-503-due-to-storage-account-throttling)|Has Workaround|
## Azure Synapse Analytics serverless SQL pool active known issues summary
Sometimes you may not be able to execute the ALTER DATABASE SCOPED CREDENTIAL query.
### Queries failing with Data Exfiltration Error
-Synapse workspaces created from an existing dedicated SQL Pool report query failures related to [Data Exfiltration Protection](security/workspace-data-exfiltration-protection.md) with generic error message while Data Exfiltration Protection is turned off in Synapse Analytics:
+Synapse workspaces created from an existing dedicated SQL Pool report query failures related to [Data Exfiltration Protection](security/workspace-data-exfiltration-protection.md) with a generic error message, even though Data Exfiltration Protection is turned off in Synapse Analytics:
`Data exfiltration to '{****}' is blocked. Add destination to allowed list for data exfiltration and try again.`
When using an ARM template, Bicep template, or direct REST API PUT operation to
**Workaround**: The problem can be mitigated by using a REST API PATCH operation or the Azure Portal UI to reverse and retry the desired configuration changes. The engineering team is aware of this behavior and working on a fix.
+## Azure Synapse Analytics Apache Spark pool active known issues summary
+
+The following are known issues with Synapse Spark.
+
+### Certain spark job or task fails too early with Error Code 503 due to storage account throttling
+
+Starting at 00:00 UTC on October 3, 2023, a few Azure Synapse Analytics Apache Spark pools might experience Spark job/task failures due to the storage API limit threshold being exceeded.
+
+**Workaround**: The engineering team is aware of this behavior and working on a fix. We recommend setting the following Spark config at the [pool level](spark/apache-spark-azure-create-spark-configuration.md#create-an-apache-spark-configuration):
+
+`spark.hadoop.fs.azure.io.retry.max.retries 19`
## Recently Closed Known issues

|Synapse Component|Issue|Status|Date Resolved
### Queries using Microsoft Entra authentication fails after 1 hour
-SQL connections using Microsoft Entra authentication that remain active for more than 1 hour will start to fail. This includes querying storage using Microsoft Entra pass-through authentication and statements that interact with Microsoft Entra ID, like CREATE EXTERNAL PROVIDER. This affects every tool that keeps connections active, like query editor in SSMS and ADS. Tools that open new connection to execute queries aren't affected, like Synapse Studio.
+SQL connections using Microsoft Entra authentication that remain active for more than 1 hour start to fail. This includes querying storage using Microsoft Entra pass-through authentication and statements that interact with Microsoft Entra ID, like CREATE EXTERNAL PROVIDER. This affects every tool that keeps connections active, like the query editor in SSMS and ADS. Tools that open a new connection to execute queries aren't affected, like Synapse Studio.
**Status**: Resolved
synapse-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md
Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
virtual-machine-scale-sets Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md
Previously updated : 11/03/2023 Last updated : 11/06/2023

# Azure Policy built-in definitions for Azure Virtual Machine Scale Sets
virtual-machine-scale-sets Tutorial Use Custom Image Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-custom-image-powershell.md
$gallery = New-AzGallery `
## Create an image definition

Image definitions create a logical grouping for images. They are used to manage information about the image versions that are created within them. Image definition names can be made up of uppercase or lowercase letters, digits, dots, dashes and periods. For more information about the values you can specify for an image definition, see [Image definitions](../virtual-machines/shared-image-galleries.md#image-definitions).
-Create the image definition using [New-AzGalleryImageDefinition](/powershell/module/az.compute/new-azgalleryimageversion). In this example, the gallery image is named *myGalleryImage* and is created for a specialized image.
+Create the image definition using [New-AzGalleryImageDefinition](/powershell/module/az.compute/new-azgalleryimagedefinition). In this example, the gallery image is named *myGalleryImage* and is created for a specialized image.
```azurepowershell-interactive
$galleryImage = New-AzGalleryImageDefinition `
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
GET '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/provider
}
```
+Use [Set Orchestration Service State](/rest/api/compute/virtual-machine-scale-sets/set-orchestration-service-state) to suspend or resume the *serviceState* for automatic repairs.
+
+```http
+POST '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}/setOrchestrationServiceState?api-version=2023-07-01'
+
+{
+ "serviceName": "AutomaticRepairs",
+ "action": "Suspend"
+}
+```
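If you prefer PowerShell, here's a sketch of the equivalent call, assuming the Az.Compute module's `Set-AzVmssOrchestrationServiceState` cmdlet; the resource names are placeholders:

```powershell
# Suspend automatic repairs on the scale set; use -Action Resume to re-enable them.
Set-AzVmssOrchestrationServiceState -ResourceGroupName "<ResourceGroupName>" `
    -VMScaleSetName "<VmScaleSetName>" -ServiceName "AutomaticRepairs" -Action "Suspend"
```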
### [Azure CLI](#tab/cli-4)

Use the [get-instance-view](/cli/azure/vmss#az-vmss-get-instance-view) command to view the *serviceState* for automatic instance repairs.
virtual-machines Basv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/basv2.md
Basv2-series virtual machines offer a balance of compute, memory, and network re
| Standard_B8as_v2 | 8 | 32 | 40% | 240 | 192 | 4608 | 12,800/290 | 20,000/960 | 16 | 6.25 | 2 |
| Standard_B16als_v2 | 16 | 32 | 30% | 480 | 288 | 6912 | 25,600/600 | 40,000/960 | 32 | 6.25 | 4 |
| Standard_B16as_v2 | 16 | 64 | 40% | 480 | 384 | 9216 | 25,600/600 | 40,000/960 | 32 | 6.25 | 4 |
-| Standard_B32als_v2 | 32 | 64 | 60% | 960 | 576 | 13824 | 25,600/600 | 80,000/960 | 32 | 6.25 | 4 |
+| Standard_B32als_v2 | 32 | 64 | 30% | 960 | 576 | 13824 | 25,600/600 | 80,000/960 | 32 | 6.25 | 4 |
| Standard_B32as_v2 | 32 | 128 | 40% | 960 | 768 | 18432 | 25,600/600 | 80,000/960 | 32 | 6.25 | 4 |
virtual-machines Boot Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/boot-integrity-monitoring-overview.md
Title: Boot integrity monitoring overview
description: How to use the guest attestation extension to secure boot your VM. How to handle traffic blocking. + Previously updated : 04/25/2023 Last updated : 11/06/2023
You can deploy the guest attestation extension for trusted launch VMs using a qu
If Secure Boot and vTPM are ON, boot integrity will be ON.
-1. Create a virtual machine with Trusted Launch that has Secure Boot + vTPM capabilities through initial deployment of trusted launch virtual machine. Configuration of virtual machines are customizable by virtual machine owner (az vm create).
+1. Create a virtual machine with Trusted Launch that has Secure Boot + vTPM capabilities through the initial deployment of a trusted launch virtual machine. To deploy the guest attestation extension, use (`--enable_integrity_monitoring`). Configuration of virtual machines is customizable by the virtual machine owner (`az vm create`).
+1. For existing VMs, you can enable boot integrity monitoring settings by updating the VM, making sure integrity monitoring is turned on (`--enable_integrity_monitoring`).
-1. For existing VMs, you can enable boot integrity monitoring settings by updating to make sure both Secure Boot and vTPM are on (az vm update).
-For more information on creation or updating a virtual machine to include the boot integrity monitoring through the guest attestation extension, see [Deploy a VM with trusted launch enabled (CLI)](trusted-launch-portal.md#deploy-a-trusted-launch-vm).
+> [!NOTE]
+> The Guest Attestation Extension needs to be configured explicitly.
### [PowerShell](#tab/powershell)
virtual-machines Bsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/bsv2-series.md
Bsv2-series virtual machines offer a balance of compute, memory, and network res
| Size | vCPU | RAM | Base CPU Performance of VM (%) | Initial Credits (#) | Credits banked/hour | Max Banked Credits (#) | Max uncached disk throughput: IOPS/MBps | Max burst uncached disk throughput: IOPS/MBps | Max Data Disks | Max Network Bandwidth (Gbps) | Max NICs |
|-|-|-|-|-|-|-|-|-|-|-|-|
| Standard_B2ts_v2 | 2 | 1 | 20% | 60 | 24 | 576 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 |
-| Standard_B2ls_v2 | 2 | 4 | 30% | 60 | 36 | 864 | 3750/85 | 10,000/960 | 4 | 6.50 | 2 |
-| Standard_B2s_v2 | 2 | 8 | 40% | 60 | 48 | 1152 | 3750/85 | 10,000/960 | 4 | 6.50 | 2 |
-| Standard_B4ls_v2 | 4 | 8 | 30% | 120 | 72 | 1728 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 |
-| Standard_B4s_v2 | 4 | 16 | 40% | 120 | 96 | 2304 | 6,400/145 | 20,000/960 | 8 | 6.250 | 2 |
-| Standard_B8ls_v2 | 8 | 16 | 30% | 240 | 144 | 3456 | 12,800/290 | 20,000/960 | 16 | 3.250 | 2 |
-| Standard_B8s_v2 | 8 | 32 | 40% | 240 | 192 | 4608 | 12,800/290 | 20,000/960 | 16 | 6.250 | 2 |
-| Standard_B16ls_v2 | 16 | 32 | 30% | 480 | 288 | 6912 | 25,600/600 | 40,000/960 | 32 | 6.250 | 4 |
-| Standard_B16s_v2 | 16 | 64 | 40% | 480 | 384 | 9216 | 25,600/600 | 40,000/960 | 32 | 6.250 | 4 |
-| Standard_B32ls_v2 | 32 | 64 | 30% | 960 | 576 | 13824 | 51,200/600 | 80,000/960 | 32 | 6.250 | 4 |
-| Standard_B32s_v2 | 32 | 128 | 40% | 960 | 768 | 18432 | 51,200/600 | 80,000/960 | 32 | 6.250 | 4 |
+| Standard_B2ls_v2 | 2 | 4 | 30% | 60 | 36 | 864 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 |
+| Standard_B2s_v2 | 2 | 8 | 40% | 60 | 48 | 1152 | 3750/85 | 10,000/960 | 4 | 6.25 | 2 |
+| Standard_B4ls_v2 | 4 | 8 | 30% | 120 | 72 | 1728 | 6,400/145 | 20,000/960 | 8 | 6.25 | 2 |
+| Standard_B4s_v2 | 4 | 16 | 40% | 120 | 96 | 2304 | 6,400/145 | 20,000/960 | 8 | 6.25 | 2 |
+| Standard_B8ls_v2 | 8 | 16 | 30% | 240 | 144 | 3456 | 12,800/290 | 20,000/960 | 16 | 6.25 | 2 |
+| Standard_B8s_v2 | 8 | 32 | 40% | 240 | 192 | 4608 | 12,800/290 | 20,000/960 | 16 | 6.25 | 2 |
+| Standard_B16ls_v2 | 16 | 32 | 30% | 480 | 288 | 6912 | 25,600/600 | 40,000/960 | 32 | 6.25 | 4 |
+| Standard_B16s_v2 | 16 | 64 | 40% | 480 | 384 | 9216 | 25,600/600 | 40,000/960 | 32 | 6.25 | 4 |
+| Standard_B32ls_v2 | 32 | 64 | 30% | 960 | 576 | 13824 | 51,200/600 | 80,000/960 | 32 | 6.25 | 4 |
+| Standard_B32s_v2 | 32 | 128 | 40% | 960 | 768 | 18432 | 51,200/600 | 80,000/960 | 32 | 6.25 | 4 |
<sup>*</sup> These IOPS values can be guaranteed by using [Gen2 VMs](generation-2.md)<br>
virtual-machines Image Builder Api Update Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-builder-api-update-release-notes.md
Previously updated : 10/05/2023 Last updated : 11/01/2023
This article contains all major API changes and feature updates for the Azure VM
## Updates
+### November 2023
+Azure Image Builder is enabling Isolated Image Builds using Azure Container Instances in a phased manner. The rollout is expected to be completed by early 2024. Your existing image templates will continue to work and there is no change in the way you create or build new image templates.
+
+You might observe a different set of transient Azure resources appear temporarily in the staging resource group, but that doesn't impact your actual builds or the way you interact with Azure Image Builder. For more information, see [Isolated Image Builds](./security-isolated-image-builds-image-builder.md).
### April 2023

New portal functionality has been added for Azure Image Builder. Search "Image Templates" in the Azure portal, then click "Create". You can also [get started here](https://ms.portal.azure.com/#create/Microsoft.ImageTemplate) with building and validating custom images inside the portal.
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
description: This article helps you troubleshoot common problems and errors you
Previously updated : 09/18/2023 Last updated : 11/01/2023
Use this article to troubleshoot and resolve common issues that you might encoun
When you're creating a build, do the following:

-- The VM Image Builder service communicates to the build VM by using WinRM or Secure Shell (SSH). Do *not* disable these settings as part of the build.
-- VM Image Builder creates resources as part of the build. Be sure to verify that Azure Policy doesn't prevent VM Image Builder from creating or using necessary resources.
+- The VM Image Builder service communicates to the build VM by using WinRM or Secure Shell (SSH). Don't disable these settings as part of the build.
+- VM Image Builder creates resources in the staging resource group as part of the builds. Be sure to verify that Azure Policy doesn't prevent VM Image Builder from creating or using necessary resources.
- Create an IT_ resource group.
- Create a storage account without a firewall.
+ - Deploy [Azure Container Instances](../../container-instances/container-instances-overview.md).
+ - Deploy [Azure Virtual Network resources](../../virtual-network/virtual-networks-overview.md) (and subnets therein).
+ - Deploy [Azure Private Endpoint](../../private-link/private-endpoint-overview.md) resources.
+ - Deploy [Azure Files](../../storage/files/storage-files-introduction.md).
- Verify that Azure Policy doesn't install unintended features on the build VM, such as Azure Extensions.
- Ensure that VM Image Builder has the correct permissions to read/write images and to connect to the storage account. For more information, review the permissions documentation for the [Azure CLI](./image-builder-permissions-cli.md) or [Azure PowerShell](./image-builder-permissions-powershell.md).
-- VM Image Builder will fail the build if the scripts or inline commands fail with errors (non-zero exit codes). Ensure that you've tested the custom scripts and verified that they run without error (exit code 0) or require user input. For more information, see [Create an Azure Virtual Desktop image by using VM Image Builder and PowerShell](../windows/image-builder-virtual-desktop.md#tips-for-building-windows-images).
+- VM Image Builder fails the build if the scripts or inline commands fail with errors (nonzero exit codes). Ensure that you've tested the custom scripts and verified that they run without error (exit code 0) or require user input. For more information, see [Create an Azure Virtual Desktop image by using VM Image Builder and PowerShell](../windows/image-builder-virtual-desktop.md#tips-for-building-windows-images).
+- Ensure your subscription has sufficient [quota](../../container-instances/container-instances-resource-and-quota-limits.md) of Azure Container Instances.
+ - Each image build might deploy up to one temporary Azure Container Instance resource (of four standard cores) in the staging resource group. These resources are required for [Isolated image builds](../security-isolated-image-builds-image-builder.md).
+ VM Image Builder failures can happen in two areas:
Microsoft.VirtualMachineImages/imageTemplates 'helloImageTemplateforSIG01' faile
#### Cause
-In most cases, the resource deployment failure error occurs because of missing permissions. This error may also be caused by a conflict with the staging resource group.
+In most cases, the resource deployment failure error occurs because of missing permissions. This error might also be caused by a conflict with the staging resource group.
#### Solution
The cause might be a timing issue because of the D1_V2 VM size. If customization
To avoid the timing issue, you can increase the VM size or you can add a 60-second PowerShell sleep customization.
+### Azure Container Instances quota exceeded
+
+#### Error
+"Azure Container Instances quota exceeded"
+
+#### Cause
+Your subscription doesn't have enough Azure Container Instances (ACI) quota for Azure Image Builder to successfully build an image.
+
+#### Solution
+You can do the following to make ACI quota available for Azure Image Builder:
+- Look up other usage of Azure Container Instances in your subscription and remove any unneeded instances to make quota available for Azure Image Builder (see the sketch after this list).
+- Azure Image Builder deploys ACI only temporarily while a build is taking place. These instances are deleted once the build completes. If too many concurrent image builds are taking place in your subscription, then you can consider delaying some of the image builds. This reduces concurrent usage of ACI in your subscription. If your image templates are set up for automatic image builds using triggers, then such failed builds will automatically be retried by Azure Image Builder.
+- If the current ACI limits for your subscription are too low to support your image building scenarios, then you can request an increase in your [ACI quota](../../container-instances/container-instances-resource-and-quota-limits.md#next-steps).
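For example, a minimal sketch of listing existing container groups with the Az.ContainerInstance module, to see where ACI quota is being consumed; any cleanup decisions are yours:

```powershell
# List all container groups in the current subscription, with their location and state.
Get-AzContainerGroup | Select-Object Name, Location, ProvisioningState
```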
+
+> [!NOTE]
+> ACI resources are required for [Isolated Image Builds](../security-isolated-image-builds-image-builder.md).
+
+### Too many Azure Container Instances deployed within a period of time
+
+#### Error
+"Too many Azure Container Instances deployed within a period of time"
+
+#### Cause
+Your subscription doesn't have enough Azure Container Instances (ACI) quota for Azure Image Builder to successfully build images concurrently.
+
+#### Solution
+You can do the following:
+- Retry your failed builds in a less concurrent manner.
+- If the current ACI limits for your subscription are too low to support your image building scenarios, then you can request an increase in your [ACI quota](../../container-instances/container-instances-resource-and-quota-limits.md#next-steps).
+
+### Isolated Image Build failure
+
+#### Error
+Azure Image Builder builds are failing due to Isolated Image Build.
+
+#### Cause
+Azure Image Builder builds can fail for reasons listed elsewhere in this document. However, there's a small chance that a build fails due to Isolated Image Builds depending on your scenario, subscription quotas, or some unforeseen service error. For more information, see [Isolated Image Builds](../security-isolated-image-builds-image-builder.md).
+
+#### Solution
+If you determine that a build is failing due to Isolated Image Builds, you can do the following:
+- Ensure there's no [Azure Policy](../../governance/policy/overview.md) blocking the deployment of resources mentioned in the Prerequisites section, specifically Azure Container Instances, Azure Virtual Networks, and Azure Private Endpoints.
+- Ensure your subscription has sufficient quota of Azure Container Instances to support all your concurrent image builds. For more information, see Azure Container Instances [quota exceeded](./image-builder-troubleshoot.md#azure-container-instances-quota-exceeded).
+
+Azure Image Builder is currently in the process of deploying Isolated Image Builds. Specific image templates are not tied to Isolated Image Builds and the same image template might or might not utilize Isolated Image Builds during different builds. You can do the following to temporarily run your build without Isolated Image Builds.
+- Retry your build. Since Image Templates are not tied to the Isolated Image Builds feature, retrying a build has a high probability of rerunning without Isolated Image Builds.
+
+ If none of these solutions mitigate failing image builds, then you can contact Azure support to temporarily opt your subscription out of Isolated Image Builds. For more information, see [Create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+
+> [!NOTE]
+> Isolated Image Builds will eventually be enabled in all regions and templates. So, the above mitigations should be considered temporary and the underlying cause of build failures must be addressed.
+ ### The build is canceled after the context cancelation context is canceled #### Error
Making these observations is especially important in build failures, where these
#### Error
-When images are stuck in template deletion, the customization log may show the below error:
+When images are stuck in template deletion, the customization log might show the below error:
```output
error deleting resource id /subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.Network/networkInterfaces/<networkInterfacName>: resources.Client#DeleteByID: Failure sending request: StatusCode=400 --
virtual-machines Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023
virtual-machines Security Isolated Image Builds Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-isolated-image-builds-image-builder.md
+
+ Title: Isolated Image Builds for Azure VM Image Builder
+description: Isolated Image Builds is achieved by transitioning the core process of VM image customization/validation from shared infrastructure to dedicated Azure Container Instances resources in your subscription, providing compute and network isolation.
Last updated : 11/01/2023
+# What is Isolated Image Builds for Azure Image Builder?
+
+Isolated Image Builds is a feature of Azure Image Builder (AIB). It transitions the core process of VM image customization/validation from shared infrastructure to dedicated Azure Container Instances (ACI) resources in your subscription, providing compute and network isolation.
+
+## Advantages of Isolated Image Builds
+
+Isolated Image Builds enable defense-in-depth by limiting network access of your build VM to just your subscription. Isolated Image Builds also provide you with more transparency by allowing you to inspect the processing done by Image Builder to customize/validate your VM image. Further, Isolated Image Builds ease viewing of live build logs. Specifically:
+
+1. **Compute Isolation:** Isolated Image Builds perform a major portion of the image-building processing in Azure Container Instances resources in your subscription instead of on AIB's shared platform resources. ACI provides hypervisor isolation for each container group to ensure containers run in isolation without sharing a kernel.
+2. **Network Isolation:** Isolated Image Builds remove all direct network WinRM/SSH communication between your build VM and the Image Builder service.
+    - If you're provisioning an Image Builder template without your own Virtual Network, then a Public IP Address resource will no longer be provisioned in your staging resource group at image build time.
+    - If you're provisioning an Image Builder template with an existing Virtual Network in your subscription, then a Private Link based communication channel will no longer be set up between your Build VM and AIB's backend platform resources. Instead, the communication channel is set up between the Azure Container Instance and the Build VM resources, both of which reside in the staging resource group in your subscription.
+3. **Transparency:** AIB is built on HashiCorp [Packer](https://www.packer.io/). Isolated Image Builds execute Packer in the ACI in your subscription, which allows you to inspect the ACI resource and its containers. Similarly, having the entire network communication pipeline in your subscription allows you to inspect all the network resources, their settings, and their allowances.
+4. **Better viewing of live logs:** AIB writes customization logs to a storage account in the staging resource group in your subscription. Isolated Image Builds provide another way to follow the same logs directly in the Azure portal, by navigating to Image Builder's container in the ACI resource.
+
+## Backward compatibility
+
+This is a platform-level change and doesn't affect AIB's interfaces. So, your existing Image Template and Trigger resources continue to function and there's no change in the way you'll deploy new resources of these types. Similarly, customization logs continue to be available in the storage account.
+
+You might observe a few new resources temporarily appear in the staging resource group (for example, Azure Container Instance and Private Endpoint) while some other resources no longer appear (for example, Public IP Address). Just as earlier, these temporary resources exist only for the duration of the build and are deleted by Image Builder thereafter.
+
+Your image builds will automatically be migrated to Isolated Image Builds and you need to take no action to opt in.
+
+> [!NOTE]
+> Image Builder is in the process of rolling this change out to all locations and customers. Some of these details might change as the process is fine-tuned based on service telemetry and feedback. Please refer to the [troubleshooting guide](./linux/image-builder-troubleshoot.md#troubleshoot-build-failures) for more information.
+
+## Next steps
+
+- [Azure VM Image Builder overview](./image-builder-overview.md)
+- [Getting started with Azure Container Instances](../container-instances/container-instances-overview.md)
+- [Securing your Azure resources](../security/fundamentals/overview.md)
+- [Troubleshooting guide for Azure VM Image Builder](./linux/image-builder-troubleshoot.md#troubleshoot-build-failures)
virtual-machines Trusted Launch Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-portal.md
Title: Deploy a trusted launch VM
description: Deploy a VM that uses trusted launch. -+ Previously updated : 04/26/2023 Last updated : 11/06/2023
# Replace the resource group and VM name placeholders with your own values.
az vm update \
   --resource-group myResourceGroup \
   --name myVM \
   --enable-vtpm true
```
+For more information about installing boot integrity monitoring through the Guest Attestation extension, see [Boot integrity](./boot-integrity-monitoring-overview.md).
+ ### [PowerShell](#tab/powershell) In order to provision a VM with Trusted Launch, it first needs to be enabled with the `TrustedLaunch` security type using the `Set-AzVmSecurityProfile` cmdlet. Then you can use the `Set-AzVmUefi` cmdlet to set the vTPM and SecureBoot configuration. Use the snippet below as a quick start; remember to replace the values in this example with your own.
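A minimal sketch of that flow, showing only the Trusted Launch-specific steps; the VM name, size, and resource group are hypothetical placeholders, and image, network, and credential configuration are omitted:

```azurepowershell
# Hypothetical names; replace with your own.
$vm = New-AzVMConfig -VMName "myTrustedLaunchVM" -VMSize "Standard_D4s_v5"

# Mark the VM configuration for Trusted Launch.
$vm = Set-AzVMSecurityProfile -VM $vm -SecurityType "TrustedLaunch"

# Enable vTPM and Secure Boot in the UEFI settings.
$vm = Set-AzVMUefi -VM $vm -EnableVtpm $true -EnableSecureBoot $true

# ...continue configuring the VM (image, NIC, credentials), then create it:
# New-AzVM -ResourceGroupName "myResourceGroup" -Location "eastus" -VM $vm
```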
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Previously updated : 10/17/2023- Last updated : 11/06/2023+
Azure offers trusted launch as a seamless way to improve the security of [genera
| Type | Supported size families | Currently not supported size families | Not supported size families |
|: |: |: |: |
| [General Purpose](sizes-general.md) |[B-series](sizes-b-series-burstable.md), [DCsv2-series](dcv2-series.md), [DCsv3-series](dcv3-series.md#dcsv3-series), [DCdsv3-series](dcv3-series.md#dcdsv3-series), [Dv4-series](dv4-dsv4-series.md#dv4-series), [Dsv4-series](dv4-dsv4-series.md#dsv4-series), [Dsv3-series](dv3-dsv3-series.md#dsv3-series), [Dsv2-series](dv2-dsv2-series.md#dsv2-series), [Dav4-series](dav4-dasv4-series.md#dav4-series), [Dasv4-series](dav4-dasv4-series.md#dasv4-series), [Ddv4-series](ddv4-ddsv4-series.md#ddv4-series), [Ddsv4-series](ddv4-ddsv4-series.md#ddsv4-series), [Dv5-series](dv5-dsv5-series.md#dv5-series), [Dsv5-series](dv5-dsv5-series.md#dsv5-series), [Ddv5-series](ddv5-ddsv5-series.md#ddv5-series), [Ddsv5-series](ddv5-ddsv5-series.md#ddsv5-series), [Dasv5-series](dasv5-dadsv5-series.md#dasv5-series), [Dadsv5-series](dasv5-dadsv5-series.md#dadsv5-series), [Dlsv5-series](dlsv5-dldsv5-series.md#dlsv5-series), [Dldsv5-series](dlsv5-dldsv5-series.md#dldsv5-series) | [Dpsv5-series](dpsv5-dpdsv5-series.md#dpsv5-series), [Dpdsv5-series](dpsv5-dpdsv5-series.md#dpdsv5-series), [Dplsv5-series](dplsv5-dpldsv5-series.md#dplsv5-series), [Dpldsv5-series](dplsv5-dpldsv5-series.md#dpldsv5-series) | [Av2-series](av2-series.md), [Dv2-series](dv2-dsv2-series.md#dv2-series), [Dv3-series](dv3-dsv3-series.md#dv3-series)
-| [Compute optimized](sizes-compute.md) |[FX-series](fx-series.md), [Fsv2-series](fsv2-series.md) | All sizes supported. | No Gen1-Only Size Family.
+| [Compute optimized](sizes-compute.md) |[FX-series](fx-series.md), [Fsv2-series](fsv2-series.md) | All sizes supported. |
| [Memory optimized](sizes-memory.md) |[Dsv2-series](dv2-dsv2-series.md#dsv2-series), [Esv3-series](ev3-esv3-series.md#esv3-series), [Ev4-series](ev4-esv4-series.md#ev4-series), [Esv4-series](ev4-esv4-series.md#esv4-series), [Edv4-series](edv4-edsv4-series.md#edv4-series), [Edsv4-series](edv4-edsv4-series.md#edsv4-series), [Eav4-series](eav4-easv4-series.md#eav4-series), [Easv4-series](eav4-easv4-series.md#easv4-series), [Easv5-series](easv5-eadsv5-series.md#easv5-series), [Eadsv5-series](easv5-eadsv5-series.md#eadsv5-series), [Ebsv5-series](ebdsv5-ebsv5-series.md#ebsv5-series),[Ebdsv5-series](ebdsv5-ebsv5-series.md#ebdsv5-series) ,[Edv5-series](edv5-edsv5-series.md#edv5-series), [Edsv5-series](edv5-edsv5-series.md#edsv5-series) | [Epsv5-series](epsv5-epdsv5-series.md#epsv5-series), [Epdsv5-series](epsv5-epdsv5-series.md#epdsv5-series), [M-series](m-series.md), [Msv2-series](msv2-mdsv2-series.md#msv2-medium-memory-diskless), [Mdsv2 Medium Memory series](msv2-mdsv2-series.md#mdsv2-medium-memory-with-disk), [Mv2-series](mv2-series.md) |[Ev3-series](ev3-esv3-series.md#ev3-series)
-| [Storage optimized](sizes-storage.md) | [Lsv2-series](lsv2-series.md), [Lsv3-series](lsv3-series.md), [Lasv3-series](lasv3-series.md) | All sizes supported. | No Gen1-Only Size Family.
-| [GPU](sizes-gpu.md) |[NCv2-series](ncv2-series.md), [NCv3-series](ncv3-series.md), [NCasT4_v3-series](nct4-v3-series.md#ncast4_v3-series), [NVv3-series](nvv3-series.md), [NVv4-series](nvv4-series.md), [NDv2-series](ndv2-series.md), [NC_A100_v4-series](nc-a100-v4-series.md#nc-a100-v4-series), [NVadsA10 v5-series](nva10v5-series.md#nvadsa10-v5-series) | [NDasrA100_v4-series](nda100-v4-series.md), [NDm_A100_v4-series](ndm-a100-v4-series.md), [ND-series](nd-series.md) | [NC-series](nc-series.md), [NV-series](nv-series.md), [NP-series](np-series.md)
-| [High Performance Compute](sizes-hpc.md) |[HB-series](hb-series.md), [HBv2-series](hbv2-series.md), [HBv3-series](hbv3-series.md), [HBv4-series](hbv4-series.md), [HC-series](hc-series.md), [HX-series](hx-series.md) | All sizes supported. | No Gen1-Only Size Family.
+| [Storage optimized](sizes-storage.md) | [Lsv2-series](lsv2-series.md), [Lsv3-series](lsv3-series.md), [Lasv3-series](lasv3-series.md) | All sizes supported. |
+| [GPU](sizes-gpu.md) |[NCv2-series](ncv2-series.md), [NCv3-series](ncv3-series.md), [NCasT4_v3-series](nct4-v3-series.md#ncast4_v3-series), [NVv3-series](nvv3-series.md), [NVv4-series](nvv4-series.md), [NDv2-series](ndv2-series.md), [NC_A100_v4-series](nc-a100-v4-series.md#nc-a100-v4-series), [NVadsA10 v5-series](nva10v5-series.md#nvadsa10-v5-series) | [NDasrA100_v4-series](nda100-v4-series.md), [NDm_A100_v4-series](ndm-a100-v4-series.md) | [NC-series](nc-series.md), [NV-series](nv-series.md), [NP-series](np-series.md)
+| [High Performance Compute](sizes-hpc.md) |[HB-series](hb-series.md), [HBv2-series](hbv2-series.md), [HBv3-series](hbv3-series.md), [HBv4-series](hbv4-series.md), [HC-series](hc-series.md), [HX-series](hx-series.md) | All sizes supported. |
> [!NOTE]
> - Installation of the **CUDA & GRID drivers on Secure Boot enabled Windows VMs** does not require any extra steps.
Azure offers trusted launch as a seamless way to improve the security of [genera
| OS | Version |
|: |: |
-| Alma Linux | 8.4, 8.5, 8.6, 8.7, 8.8, 9.0, 9.1, 9.2 |
+| Alma Linux | 8.7, 8.8, 9.0 |
| Azure Linux | 1.0, 2.0 |
| Debian |11, 12 |
-| Oracle Linux |8.3, 8.4, 8.5, 8.6, 9.0, 9.1 LVM |
-| RedHat Enterprise Linux |8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 9.0, 9.1 LVM |
+| Oracle Linux |8.3, 8.4, 8.5, 8.6, 8.7, 8.8 LVM, 9.0, 9.1 LVM |
+| RedHat Enterprise Linux | 8.4, 8.5, 8.6, 8.7, 8.8, 9.0, 9.1 LVM, 9.2 |
| SUSE Enterprise Linux |15SP3, 15SP4, 15SP5 |
-| Ubuntu Server |18.04 LTS, 20.04 LTS, 22.04 LTS |
+| Ubuntu Server |18.04 LTS, 20.04 LTS, 22.04 LTS, 23.04, 23.10 |
| Windows 10 |Pro, Enterprise, Enterprise Multi-Session &#42; |
| Windows 11 |Pro, Enterprise, Enterprise Multi-Session &#42; |
| Windows Server |2016, 2019, 2022 &#42; |
Trusted launch doesn't increase existing VM pricing.
> The following Virtual Machine features are currently not supported with Trusted Launch.
- [Azure Site Recovery](../site-recovery/site-recovery-overview.md)
-- [Azure Automanage](../automanage/overview-about.md)
- [Ultra disk](disks-enable-ultra-ssd.md)
-- [Shared disk](disks-shared.md)
- [Managed Image](capture-image-resource.md) (Customers are encouraged to use [Azure Compute Gallery](trusted-launch-portal.md#trusted-launch-vm-supported-images))
- Nested Virtualization (most v5 VM size families supported)
virtual-machines Virtual Machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md
Previously updated : 11/01/2023 Last updated : 11/06/2023
You can protect your data and guard against extended downtime by creating virtua
An individual VM restore point is a resource that stores the VM configuration and point-in-time application-consistent snapshots of all the managed disks attached to the VM. You can use VM restore points to easily capture multi-disk consistent backups. A VM restore point contains a disk restore point for each of the attached disks, and each disk restore point consists of a snapshot of an individual managed disk.
-VM restore points supports both application consistency and crash consistency (in preview).
+VM restore points support both application consistency and crash consistency (in preview). Fill out this [form](https://forms.office.com/r/LjLBt6tJRL) if you wish to try crash-consistent restore points in preview.
+ Application consistency is supported for VMs running Windows operating systems, and file system consistency is supported for VMs running Linux operating systems. Application-consistent restore points use VSS writers (or pre/post scripts for Linux) to ensure the consistency of the application data before a restore point is created. To get an application-consistent restore point, the application running in the VM needs to provide a VSS writer (for Windows), or pre and post scripts (for Linux), to achieve application consistency.
+
+A multi-disk crash-consistent VM restore point stores the VM configuration and point-in-time write-order-consistent snapshots for all managed disks attached to a virtual machine. This is the same as the state of the data in the VM after a power outage or a crash. The optional "consistencyMode" parameter has to be set to "crashConsistent" in the creation request. This feature is currently in preview.
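As a rough illustration, creating a crash-consistent restore point through the REST API means setting that property in the `PUT` body. The resource names and api-version below are placeholders, and only the consistency-related property is shown:

```http
PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/restorePointCollections/{collectionName}/restorePoints/{restorePointName}?api-version=2023-07-01

{
  "properties": {
    "consistencyMode": "crashConsistent"
  }
}
```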
Most common restore points failures are attributed to the communication with the
- [Create a VM restore point](create-restore-points.md). - [Learn more](backup-recovery.md) about Backup and restore options for virtual machines in Azure.
+- [Learn more](virtual-machines-restore-points-vm-snapshot-extension.md) about the extensions used with application consistency mode.
+- [Learn more](virtual-machines-restore-points-copy.md) about copying VM restore points across regions.
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
You can also use the `packageFileName` (and the corresponding `configFileName`)
MyAppe.exe /S
```
> [!TIP]
-> If your blob was originally named "myApp.exe" instead of "MyBlob", then the above script would have worked without setting the `packageFileName` property.
+> If your blob was originally named "myApp.exe" instead of "myapp", then the above script would have worked without setting the `packageFileName` property.
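For illustration, here's a hedged JSON fragment of a gallery application version's publishing profile; the storage account, blob, and file names are hypothetical. The `settings.packageFileName` property renames the downloaded blob so the install command can reference it:

```json
{
  "properties": {
    "publishingProfile": {
      "source": {
        "mediaLink": "https://mystorageaccount.blob.core.windows.net/apps/myapp"
      },
      "manageActions": {
        "install": "MyApp.exe /S",
        "remove": "del MyApp.exe"
      },
      "settings": {
        "packageFileName": "MyApp.exe"
      }
    }
  }
}
```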
## Command interpreter
virtual-network-manager Concept Network Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-network-groups.md
All group membership is recorded in Azure Resource Graph and available for your
## Network groups and Azure Policy
-When you create a network group, an Azure Policy is created so that Azure Virtual Network Manager gets notified about changes made to virtual network membership. The policies defined are available for you to see, but they aren't editable by users today. Creating, changing, and deleting Azure Policy definitions and assignments for network groups is only possible through the Azure Network Manager today.
-
-To create an Azure Policy initiative definition and assignment for Azure Virtual Network Manager resources, create and deploy a network group with the necessary configurations. To update an existing Azure Policy initiative definition or corresponding assignment, you need to change and deploy changes to the network group within the Azure Virtual Network Manager resource. To delete an Azure Policy initiative definition and assignment, you need to undeploy and delete the Azure Virtual Network Manager resources associated with your policy. This may include removing a configuration, deleting a configuration, and deleting a network group. For more information on deletion, review the Azure Virtual Network Manager [checklist for removing components](concept-remove-components-checklist.md).
+When you create a network group, an Azure Policy is created so that Azure Virtual Network Manager gets notified about changes made to virtual network membership.
To create, edit, or delete Azure Virtual Network Manager dynamic group policies, you need:
- Read and write role-based access control permissions to the underlying policy.
- Role-based access control permissions to join the network group (Classic Admin authorization isn't supported).
-For more information on required permissions for Azure Virtual Network Manager dynamic group policies, review [Required permissions](concept-azure-policy-integration.md#required-permissions).
+For more information on required permissions for Azure Virtual Network Manager dynamic group policies, review [required permissions](concept-azure-policy-integration.md#required-permissions).
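For illustration only, a sketch of the general shape of such a dynamic-membership policy, assuming a tag-based condition and a hypothetical network group resource ID; the actual definitions are created and managed by Azure Virtual Network Manager:

```json
{
  "mode": "Microsoft.Network.Data",
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.Network/virtualNetworks" },
        { "field": "tags['environment']", "equals": "production" }
      ]
    },
    "then": {
      "effect": "addToNetworkGroup",
      "details": {
        "networkGroupId": "/subscriptions/{subscriptionId}/resourceGroups/{rg}/providers/Microsoft.Network/networkManagers/{manager}/networkGroups/{group}"
      }
    }
  }
}
```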
## Next steps
- Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal
- Learn how to create a [Hub and spoke topology](how-to-create-hub-and-spoke.md) with Azure Virtual Network Manager
- Learn how to block network traffic with a [Security admin configuration](how-to-block-network-traffic-portal.md)
-- Review [Azure Policy basics](../governance/policy/overview.md)
+- Review [Azure Policy basics](../governance/policy/overview.md)
virtual-network Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md
Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/03/2023 Last updated : 11/06/2023