Updates from: 01/17/2024 02:12:51
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Shelf Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/shelf-analyze.md
To analyze a shelf image, do the following steps:
1. Copy the following `curl` command into a text editor.

```bash
- curl.exe -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/ms-pretrained-product-detection/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{
+ curl -X PUT -H "Ocp-Apim-Subscription-Key: <subscriptionKey>" -H "Content-Type: application/json" "https://<endpoint>/computervision/productrecognition/ms-pretrained-product-detection/runs/<your_run_name>?api-version=2023-04-01-preview" -d "{
'url':'<your_url_string>' }"
```
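For readers who prefer calling the API from code, here's a minimal Python sketch of the same request using the `requests` library; the endpoint, key, run name, and image URL are placeholders you substitute, and the payload mirrors the `curl` body above.

```python
import requests

endpoint = "https://<endpoint>"            # your Vision resource endpoint (placeholder)
subscription_key = "<subscriptionKey>"     # placeholder
run_name = "<your_run_name>"               # placeholder

url = (
    f"{endpoint}/computervision/productrecognition/ms-pretrained-product-detection"
    f"/runs/{run_name}?api-version=2023-04-01-preview"
)
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/json",
}
body = {"url": "<your_url_string>"}        # publicly accessible shelf image URL (placeholder)

# The curl command above issues a PUT to create the product recognition run.
response = requests.put(url, headers=headers, json=body)
print(response.status_code, response.text)
```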
ai-services Video Retrieval https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/how-to/video-retrieval.md
Azure AI Spatial Analysis Video Retrieval APIs are part of Azure AI Vision and e
## Prerequisites

- Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
-- Once you have your Azure subscription, [create a Vision resource using the portal](/azure/cognitive-services/cognitive-services-apis-create-account). For this preview, you must create your resource in the East US region.
+- Once you have your Azure subscription, [create a Vision resource using the portal](/azure/cognitive-services/cognitive-services-apis-create-account). For this preview, you must create your resource in one of the following regions: Australia East, Switzerland North, Sweden Central, or East US.
- An Azure Storage resource - [Create one](/azure/storage/common/storage-account-create?tabs=azure-portal)

## Input requirements
ai-services Model Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/model-lifecycle.md
Previously updated : 12/19/2023 Last updated : 01/16/2024
Preview models used for preview features do not maintain a minimum retirement pe
By default, API and SDK requests will use the latest Generally Available model. You can use an optional parameter to select the version of the model to be used (not recommended).

> [!NOTE]
-> * If you are using a model version that is not listed in the table, then it was subjected to the expiration policy.
-> * Abstractive document and conversation summarization do not provide model versions other than the latest available.
+> If you are using a model version that is not listed in the table, then it was subjected to the expiration policy.
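To illustrate the optional model version parameter mentioned above, here's a hedged Python sketch against the Language REST API. The endpoint, key, and `api-version` value are placeholders/assumptions for illustration, not authoritative values.

```python
import requests

endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"  # placeholder
key = "<your-key>"  # placeholder

body = {
    "kind": "KeyPhraseExtraction",
    # Optional: pin a specific model version; omit it to use the latest GA model (recommended).
    "parameters": {"modelVersion": "latest"},
    "analysisInput": {
        "documents": [{"id": "1", "language": "en", "text": "Model versions can be pinned if needed."}]
    },
}

response = requests.post(
    f"{endpoint}/language/:analyze-text?api-version=2023-04-01",  # assumed GA api-version
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
print(response.json())
```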
Use the table below to find which model versions are supported by each feature:
| Question answering | `latest*` | |
| Text Analytics for health | `latest*` | `2022-08-15-preview`, `2023-01-01-preview**` |
| Key phrase extraction | `latest*` | |
-| Document summarization - extractive only (preview) | |`2022-08-31-preview**` |
+| Summarization | `latest*` | |
\* Latest Generally Available (GA) model version
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
These models can only be used with Embedding API requests.
| `text-embedding-ada-002` (version 2) | Australia East <br> Canada East <br> East US <br> East US2 <br> France Central <br> Japan East <br> North Central US <br> Norway East <br> South Central US <br> Sweden Central <br> Switzerland North <br> UK South <br> West Europe <br> West US | 8,191 | Sep 2021 | 1,536 |
| `text-embedding-ada-002` (version 1) | East US <br> South Central US <br> West Europe | 2,046 | Sep 2021 | 1,536 |
+> [!NOTE]
+> When sending an array of inputs for embedding, the max number of input items in the array per call to the embedding endpoint is 2048.
+ ### DALL-E models (Preview)

| Model ID | Feature Availability | Max Request (characters) |
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md
description: Learn how to generate embeddings with Azure OpenAI
Previously updated : 11/06/2023 Last updated : 01/16/2024 recommendations: false
return $response.data.embedding
### Verify inputs don't exceed the maximum length
-The maximum length of input text for our latest embedding models is 8192 tokens. You should verify that your inputs don't exceed this limit before making a request.
+- The maximum length of input text for our latest embedding models is 8192 tokens. You should verify that your inputs don't exceed this limit before making a request.
+- If sending an array of inputs in a single embedding request, the max array size is 2048.
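As a rough illustration of the two limits above, the following sketch counts tokens with `tiktoken` (assuming the `cl100k_base` encoding used by `text-embedding-ada-002`) and checks the array size before sending a request; the limit values simply restate the bullets above.

```python
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # encoding used by text-embedding-ada-002

def within_token_limit(text: str, limit: int = 8192) -> bool:
    """Return True if the text fits within the embedding model's input token limit."""
    return len(encoding.encode(text)) <= limit

texts = ["first document", "second document"]  # your inputs

# A single embeddings request accepts at most 2048 items in the input array.
assert len(texts) <= 2048, "Split the inputs into multiple requests."
oversized = [t for t in texts if not within_token_limit(t)]
print(f"{len(oversized)} input(s) exceed the token limit")
```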
+ ## Limitations & risks
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
Previously updated : 11/22/2023 Last updated : 01/06/2023
We recommend using environment variables. If you haven't done this before our [P
<td> ```python
+import os
from openai import OpenAI

client = OpenAI(
- api_key=os.environ["OPENAI_API_KEY"]
+ api_key=os.getenv("OPENAI_API_KEY")
)
from openai import AzureOpenAI
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    api_version="2023-12-01-preview",
- azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
) ```
client = AzureOpenAI(
<td> ```python
+import os
from openai import OpenAI

client = OpenAI(
- api_key=os.environ["OPENAI_API_KEY"]
+ api_key=os.getenv("OPENAI_API_KEY")
)
client = OpenAI(
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI
-token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")
+token_provider = get_bearer_token_provider(
+ DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
+)
api_version = "2023-12-01-preview"
endpoint = "https://my-resource.openai.azure.com"
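For context, here's a minimal, self-contained sketch of how such a token provider is typically passed to the client with the `openai` Python SDK v1; the endpoint and deployment name are placeholders.

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    api_version="2023-12-01-preview",
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder endpoint
    azure_ad_token_provider=token_provider,  # Microsoft Entra ID auth instead of an API key
)

chat_completion = client.chat.completions.create(
    model="gpt-35-turbo",  # must match your deployment name
    messages=[{"role": "user", "content": "Hello"}],
)
```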
OpenAI uses the `model` keyword argument to specify what model to use. Azure Ope
```python
completion = client.completions.create(
    model="gpt-3.5-turbo-instruct",
- prompt="<prompt>")
+ prompt="<prompt>"
) chat_completion = client.chat.completions.create(
chat_completion = client.chat.completions.create(
) embedding = client.embeddings.create(
- input="<input>",
- model="text-embedding-ada-002"
+ model="text-embedding-ada-002",
+ input="<input>"
) ```
embedding = client.embeddings.create(
```python
completion = client.completions.create(
    model="gpt-35-turbo-instruct", # This must match the custom deployment name you chose for your model.
- prompt=<"prompt">
+ prompt="<prompt>"
) chat_completion = client.chat.completions.create( model="gpt-35-turbo", # model = "deployment_name".
- messages=<"messages">
+ messages="<messages>"
) embedding = client.embeddings.create(
- input = "<input>",
- model= "text-embedding-ada-002" # model = "deployment_name".
+ model="text-embedding-ada-002", # model = "deployment_name".
+ input="<input>"
) ```
embedding = client.embeddings.create(
## Azure OpenAI embeddings multiple input support
-OpenAI currently allows a larger number of array inputs with text-embedding-ada-002. Azure OpenAI currently supports input arrays up to 16 for text-embedding-ada-002 Version 2. Both require the max input token limit per API request to remain under 8191 for this model.
+OpenAI and Azure OpenAI currently support input arrays up to 2048 input items for text-embedding-ada-002. Both require the max input token limit per API request to remain under 8191 for this model.
<table> <tr>
OpenAI currently allows a larger number of array inputs with text-embedding-ada-
inputs = ["A", "B", "C"]

embedding = client.embeddings.create(
- input=inputs,
- model="text-embedding-ada-002"
+ input=inputs,
+ model="text-embedding-ada-002"
)
embedding = client.embeddings.create(
<td> ```python
-inputs = ["A", "B", "C"] #max array size=16
+inputs = ["A", "B", "C"] #max array size=2048
embedding = client.embeddings.create(
- input=inputs,
- model="text-embedding-ada-002" # This must match the custom deployment name you chose for your model.
- #engine="text-embedding-ada-002"
+ input=inputs,
+ model="text-embedding-ada-002" # This must match the custom deployment name you chose for your model.
+ # engine="text-embedding-ada-002"
) ```
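Because a single request accepts at most 2048 input items, a small helper like the following sketch can split a larger list into batches; the client and model/deployment name are assumed to be the ones created in the examples above.

```python
def embed_in_batches(client, inputs, model="text-embedding-ada-002", batch_size=2048):
    """Call the embeddings endpoint in chunks of at most batch_size inputs each."""
    vectors = []
    for start in range(0, len(inputs), batch_size):
        batch = inputs[start:start + batch_size]
        response = client.embeddings.create(input=batch, model=model)
        vectors.extend(item.embedding for item in response.data)
    return vectors

# Example usage with either the OpenAI or AzureOpenAI client created earlier:
# vectors = embed_in_batches(client, ["A", "B", "C"])
```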
ai-services Use Blocklists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-blocklists.md
The configurable content filters are sufficient for most content moderation need
- An Azure subscription. <a href="https://azure.microsoft.com/free/ai-services" target="_blank">Create one for free</a>.
- Once you have your Azure subscription, create an Azure OpenAI resource in the Azure portal to get your token, key and endpoint. Enter a unique name for your resource, select the subscription you entered on the application form, select a resource group, supported region, and supported pricing tier. Then select **Create**.
- - The resource takes a few minutes to deploy. After it finishes, sSelect **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs.
+ - The resource takes a few minutes to deploy. After it finishes, select **go to resource**. In the left pane, under **Resource Management**, select **Subscription Key and Endpoint**. The endpoint and either of the keys are used to call APIs.
- [Azure CLI](/cli/azure/install-azure-cli) installed
- [cURL](https://curl.haxx.se/) installed
You can also create custom blocklists in the Azure OpenAI Studio as part of your
- Read more about [content filtering categories and severity levels](/azure/ai-services/openai/concepts/content-filter?tabs=python) with Azure OpenAI Service.
-- Learn more about red teaming from our: [Introduction to red teaming large language models (LLMs)](/azure/ai-services/openai/concepts/red-teaming) article.
+- Learn more about red teaming in our [Introduction to red teaming large language models (LLMs)](/azure/ai-services/openai/concepts/red-teaming) article.
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
- ignite-2023 - references_regions Previously updated : 01/12/2024 Last updated : 01/16/2024
The following sections provide you with a quick guide to the default quotas and
| Max training job time (job will fail if exceeded) | 720 hours |
| Max training job size (tokens in training file) x (# of epochs) | 2 Billion |
| Max size of all files per upload (Azure OpenAI on your data) | 16 MB |
+| Max number of inputs in array with `/embeddings` | 2048 |
+| Max number of `/chat/completions` messages | 2048 |
+| Max number of `/chat/completions` functions | 128 |
+| Max number of `/chat/completions` tools | 128 |
| Maximum number of Provisioned throughput units per deployment | 100,000 |

## Regional quota limits

The default quota for models varies by model and region. Default quota limits are subject to change.
ai-services Batch Transcription Audio Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/batch-transcription-audio-data.md
The batch transcription API supports a number of different formats and codecs, s
- OPUS/OGG
- FLAC
- WMA
+- AAC
- ALAW in WAV container
- MULAW in WAV container
- AMR
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
This article is intended to help you quickly get to deployment. Before going to
* Install a Java SE implementation, version 17 or later (for example, [Eclipse Open J9](https://www.eclipse.org/openj9/)).
* Install [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
* Install [Docker](https://docs.docker.com/get-docker/) for your OS.
-* Make sure you've been assigned either the `Owner` role or the `Contributor` and `User Access Administrator` roles in the subscription. You can verify it by following steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group).
+* Make sure you're assigned either the `Owner` role or the `Contributor` and `User Access Administrator` roles in the subscription. You can verify it by following steps in [List role assignments for a user or group](../role-based-access-control/role-assignments-list-portal.md#list-role-assignments-for-a-user-or-group).
## Create a Liberty on AKS deployment using the portal
-The following steps guide you to create a Liberty runtime on AKS. After completing these steps, you'll have an Azure Container Registry and an Azure Kubernetes Service cluster for the sample application.
+The following steps guide you to create a Liberty runtime on AKS. After completing these steps, you have an Azure Container Registry and an Azure Kubernetes Service cluster for deploying your containerized application.
1. Visit the [Azure portal](https://portal.azure.com/). In the search box at the top of the page, type *IBM WebSphere Liberty and Open Liberty on Azure Kubernetes Service*. When the suggestions start appearing, select the one and only match that appears in the **Marketplace** section. If you prefer, you can go directly to the offer with this shortcut link: [https://aka.ms/liberty-aks](https://aka.ms/liberty-aks).
1. Select **Create**.
-1. In the **Basics** pane, create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`.
-1. Select *East US* as **Region**.
-1. Select **Next: Configure cluster**.
-1. This section allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. Leave all other values at the defaults and select **Next: Networking**.
-1. Next to **Connect to Azure Application Gateway?** select **Yes**. This pane lets you customize the following deployment options.
+1. In the **Basics** pane, create a new resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, `ejb0913-java-liberty-project-rg`. Select *East US* as **Region**. Select **Next** to go to the **AKS** pane.
+1. This pane allows you to select an existing AKS cluster and Azure Container Registry (ACR), instead of causing the deployment to create a new one, if desired. This capability enables you to use the sidecar pattern, as shown in the [Azure architecture center](/azure/architecture/patterns/sidecar). You can also adjust the settings for the size and number of the virtual machines in the AKS node pool. Leave all other values at the defaults and select **Next** to go to the **Load balancing** pane.
+1. Next to **Connect to Azure Application Gateway?** select **Yes**. This section lets you customize the following deployment options.
1. You can customize the virtual network and subnet into which the deployment will place the resources. Leave these values at their defaults.
1. You can provide the TLS/SSL certificate presented by the Azure Application Gateway. Leave the values at the default to cause the offer to generate a self-signed certificate. Don't go to production using a self-signed certificate. For more information about self-signed certificates, see [Create a self-signed public certificate to authenticate your application](../active-directory/develop/howto-create-self-signed-certificate.md).
1. You can enable cookie based affinity, also known as sticky sessions. We want sticky sessions enabled for this article, so ensure this option is selected.

   ![Screenshot of the enable cookie-based affinity checkbox.](./media/howto-deploy-java-liberty-app/enable-cookie-based-affinity.png)
+1. Select **Next** to go to the **Operator and application** pane. This quickstart uses all defaults in this pane. However, it lets you customize the following deployment options.
+ 1. You can deploy WebSphere Liberty Operator by selecting **Yes** for option **IBM supported?**. Leaving the default **No** deploys Open Liberty Operator.
+ 1. You can deploy an application for your selected Operator by selecting **Yes** for option **Deploy an application?**. Leaving the default **No** doesn't deploy any application.
1. Select **Review + create** to validate your selected options.
1. When you see the message **Validation Passed**, select **Create**. The deployment may take up to 20 minutes.
If you navigated away from the **Deployment is in progress** page, the following
These values will be used later in this article. Note that several other useful commands are listed in the outputs.
+> [!NOTE]
+> You may notice a similar output named **appDeploymentYaml**. The difference between output *appDeploymentTemplateYaml* and *appDeploymentYaml* is:
+> * *appDeploymentTemplateYaml* is populated if and only if the deployment **does not include** an application.
+> * *appDeploymentYaml* is populated if and only if the deployment **does include** an application.
+ ## Create an Azure SQL Database

The following steps guide you through creating an Azure SQL Database single database for use with your app.
The following steps guide you through creating an Azure SQL Database single data
> At the **Networking** step, set **Connectivity method** to **Public endpoint**, **Allow Azure services and resources to access this server** to **Yes**, and **Add current client IP address** to **Yes**.
>
> ![Screenshot of configuring SQL database networking.](./media/howto-deploy-java-liberty-app/create-sql-database-networking.png)
- >
- > Also at the **Networking** step, under **Encrypted connections**, set the **Minimum TLS version** to **TLS 1.0**.
- >
- > ![Screenshot of configuring SQL database networking TLS 1.0.](./media/howto-deploy-java-liberty-app/sql-database-minimum-TLS-version.png)
Now that the database and AKS cluster have been created, we can proceed to preparing AKS to host your Open Liberty application.
There are a few samples in the repository. We'll use *java-app/*. Here's the fil
```azurecli-interactive
git clone https://github.com/Azure-Samples/open-liberty-on-aks.git
cd open-liberty-on-aks
-git checkout 20230830
+git checkout 20240109
```

If you see a message about being in "detached HEAD" state, this message is safe to ignore. It just means you have checked out a tag.
java-app
│ ├─ aks/
│ │ ├─ db-secret.yaml
│ │ ├─ openlibertyapplication-agic.yaml
+│ │ ├─ openlibertyapplication.yaml
+│ │ ├─ webspherelibertyapplication-agic.yaml
+│ │ ├─ webspherelibertyapplication.yaml
│ ├─ docker/
│ │ ├─ Dockerfile
│ │ ├─ Dockerfile-wlp
java-app
The directories *java*, *resources*, and *webapp* contain the source code of the sample application. The code declares and uses a data source named `jdbc/JavaEECafeDB`.
-In the *aks* directory, we placed three deployment files. *db-secret.xml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication-agic.yaml* is used to deploy the application image. In the *docker* directory, there are two files to create the application image with either Open Liberty or WebSphere Liberty.
+In the *aks* directory, there are five deployment files. *db-secret.yaml* is used to create [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) with DB connection credentials. The file *openlibertyapplication-agic.yaml* is used in this quickstart to deploy the Open Liberty Application with AGIC. If desired, you can deploy the application without AGIC using the file *openlibertyapplication.yaml*. Use the file *webspherelibertyapplication-agic.yaml* or *webspherelibertyapplication.yaml* to deploy the WebSphere Liberty Application with or without AGIC if you deployed WebSphere Liberty Operator in section [Create a Liberty on AKS deployment using the portal](#create-a-liberty-on-aks-deployment-using-the-portal).
+
+In the *docker* directory, there are two files to create the application image with either Open Liberty or WebSphere Liberty. These files are *Dockerfile* and *Dockerfile-wlp*, respectively. You use the file *Dockerfile* to build the application image with Open Liberty in this quickstart. Similarly, use the file *Dockerfile-wlp* to build the application image with WebSphere Liberty if you deployed WebSphere Liberty Operator in section [Create a Liberty on AKS deployment using the portal](#create-a-liberty-on-aks-deployment-using-the-portal).
In directory *liberty/config*, the *server.xml* file is used to configure the DB connection for the Open Liberty and WebSphere Liberty cluster.
You can now run the `docker build` command to build the image.
```bash cd <path-to-your-repo>/java-app/target
-# If you're running with Open Liberty
docker build -t javaee-cafe:v1 --pull --file=Dockerfile .
-# If you're running with WebSphere Liberty
-docker build -t javaee-cafe:v1 --pull --file=Dockerfile-wlp .
```

### (Optional) Test the Docker image locally
The following steps deploy and test the application.
kubectl apply -f db-secret.yaml ```
- You'll see the output `secret/db-secret-postgres created`.
+ You'll see the output `secret/db-secret-sql created`.
1. Apply the deployment file.
az group delete --name <db-resource-group> --yes --no-wait
## Next steps
+You can learn more from the following references:
+ * [Azure Kubernetes Service](https://azure.microsoft.com/free/services/kubernetes-service/)
* [Open Liberty](https://openliberty.io/)
* [Open Liberty Operator](https://github.com/OpenLiberty/open-liberty-operator)
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Images specified in the exclusion list aren't removed from the cluster. Image Cl
## FAQ
-### How to check eraser version is using?
+### How can I check which version Image Cleaner is using?
```
-kubectl get configmap -n kube-system eraser-manager-config | grep tag -C 3
+kubectl describe configmap -n kube-system eraser-manager-config | grep tag -C 3
```

### Does Image Cleaner support other vulnerability scanners besides trivy-scanner?
aks Quotas Skus Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quotas-skus-regions.md
Title: Limits for resources, SKUs, and regions in Azure Kubernetes Service (AKS)
description: Learn about the default quotas, restricted node VM SKU sizes, and region availability of the Azure Kubernetes Service (AKS). Previously updated : 12/05/2023 Last updated : 01/12/2024 # Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)
aks Resize Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/resize-node-pool.md
By default, your cluster has AKS_managed pod disruption budgets (such as `coredn
To delete the existing node pool, use the Azure portal or the [az aks nodepool delete][az-aks-nodepool-delete] command:
-> [!IMPORTANT]
-> When you delete a node pool, AKS doesn't perform cordon and drain. To minimize the disruption of rescheduling pods currently running on the node pool you are going to delete, perform a cordon and drain on all nodes in the node pool before deleting.
- ```azurecli-interactive az aks nodepool delete \ --resource-group myResourceGroup \
aks Windows Vs Linux Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/windows-vs-linux-containers.md
Title: Windows container considerations in Azure Kubernetes Service
description: See the Windows container considerations with Azure Kubernetes Service (AKS). Previously updated : 12/13/2023 Last updated : 01/12/2024 - # Windows container considerations with Azure Kubernetes Service
This article covers important considerations to keep in mind when using Windows
| Feature | Windows considerations |
|--|:--|
-| [Cluster creation][cluster-configuration] | • The first system node pool *must* be Linux.<br/> • AKS Windows clusters have a maximum limit of 10 node pools.<br/> • AKS Windows clusters have a maximum limit of 100 nodes in each node pool.<br/> • The Windows Server node pool name has a limit of six characters. |
+| [Cluster creation][cluster-configuration] | • The first system node pool *must* be Linux.<br/> • The maximum number of nodes per cluster is 5000.<br/> • The Windows Server node pool name has a limit of six characters. |
| [Privileged containers][privileged-containers] | Not supported. The equivalent is **HostProcess Containers (HPC) containers**. |
| [HPC containers][hpc-containers] | • HostProcess containers are the Windows alternative to Linux privileged containers. For more information, see [Create a Windows HostProcess pod](https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/). |
| [Azure Network Policy Manager (Azure)][azure-network-policy] | Azure Network Policy Manager doesn't support:<br/> • Named ports<br/> • SCTP protocol<br/> • Negative match labels or namespace selectors (all labels except "debug=true")<br/> • "except" CIDR blocks (a CIDR with exceptions)<br/> • Windows Server 2019<br/> |
api-center Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/key-concepts.md
Each API version should ideally be defined by at least one definition, such as a
An environment represents a location where an API runtime could be deployed, for example, an Azure API Management service, an Apigee API Management service, or a compute service such as a Kubernetes cluster, a Web App, or an Azure Function. Each environment has a type (such as production or staging) and may include information about developer portal or management interfaces.
+> [!NOTE]
+> Use API Center to track any of your API runtime environments, whether or not they're hosted on Azure infrastructure. These environments aren't the same as Azure Deployment Environments.
+ ## Deployment

A deployment is a location (an address) where users can access an API. An API can have multiple deployments, such as different staging environments or regions. For example, an API could have one deployment in an internal staging environment and a second in a production environment. Each deployment is associated with a specific API definition.
api-center Manage Apis Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/manage-apis-azure-cli.md
+
+ Title: Manage API inventory in Azure API Center - Azure CLI
+description: Use the Azure CLI to create and update APIs, API versions, and API definitions in your Azure API center.
+++ Last updated : 01/12/2024++
+# Customer intent: As an API program manager, I want to automate processes to register and update APIs in my Azure API center.
++
+# Use the Azure CLI to manage your API inventory
+
+This article shows how to use [`az apic api`](/cli/azure/apic/api) commands in the Azure CLI to add and configure APIs in your [API center](overview.md) inventory. Use commands in the Azure CLI to script operations to manage your API inventory and other aspects of your API center.
++
+## Prerequisites
+
+* An API center in your Azure subscription. If you haven't created one already, see [Quickstart: Create your API center](set-up-api-center.md).
+
+* For Azure CLI:
+ [!INCLUDE [include](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
+
+ > [!NOTE]
+ > `az apic` commands require the `apic-extension` Azure CLI extension. If you haven't used `az apic` commands, the extension is installed dynamically when you run your first `az apic` command. Learn more about [Azure CLI extensions](/cli/azure/azure-cli-extensions-overview).
+
+## Register API, API version, and definition
+
+The following steps show how to create an API and associate a single API version and API definition. For background about the data model in API Center, see [Key concepts](key-concepts.md).
+
+### Create an API
+
+Use the [az apic api create](/cli/azure/apic/api#az_apic_api_create) command to create an API in your API center.
+
+The following example creates an API named *Petstore API* in the *myResourceGroup* resource group and *myAPICenter* API center. The API is a REST API.
+
+```azurecli-interactive
+az apic api create --resource-group myResourceGroup \
+ --service myAPICenter --name petstore-api \
+ --title "Petstore API" --kind "rest"
+```
+
+By default, the command sets the API's **Lifecycle stage** to *design*.
+
+> [!NOTE]
+> After creating an API, you can update the API's properties by using the [az apic api update](/cli/azure/apic/api#az_apic_api_update) command.
++
+### Create an API version
+
+Use the [az apic api version create](/cli/azure/apic/api/version#az_apic_api_version_create) command to create a version for your API.
+
+The following example creates an API version named *v1-0-0* for the *petstore-api* API that you created in the previous section.
+
+```azurecli-interactive
+az apic api version create --resource-group myResourceGroup \
+ --service myAPICenter --api-name petstore-api \
+ --version v1-0-0 --title "v1-0-0"
+```
+
+### Create API definition and add specification file
+
+Use the [az apic api definition](/cli/azure/apic/api/definition) commands to add a definition and an accompanying specification file for an API version.
+
+#### Create a definition
+
+The following example uses the [az apic api definition create](/cli/azure/apic/api/definition#az_apic_api_definition_create) command to create a definition named *openapi* for the *petstore-api* API version that you created in the previous section.
+
+```azurecli-interactive
+az apic api definition create --resource-group myResourceGroup \
+ --service myAPICenter --api-name petstore-api \
+ --version v1-0-0 --name "openapi" --title "OpenAPI"
+```
+
+#### Import a specification file
+
+Import a specification file to the definition using the [az apic api definition import-specification](/cli/azure/apic/api/definition#az_apic_api_definition_import_specification) command.
+
+The following example imports an OpenAPI specification file from a publicly accessible URL to the *openapi* definition that you created in the previous step. The `name` and `version` properties of the specification resource are passed as JSON.
++
+```azurecli-interactive
+az apic api definition import-specification \
+ --resource-group myResourceGroup --service myAPICenter \
+ --api-name petstore-api --version-name v1-0-0 \
+ --definition-name openapi --format "link" \
+ --value 'https://petstore3.swagger.io/api/v3/openapi.json' \
+ --specification '{"name":"openapi","version":"3.0.2"}'
+```
+
+> [!TIP]
+> You can import the specification file inline by setting the `--format` parameter to `inline` and passing the file contents using the `--value` parameter.
+
+### Export a specification file
+
+To export an API specification from your API center to a local file, use the [az apic api definition export-specification](/cli/azure/apic/api/definition#az_apic_api_definition_export_specification) command.
+
+The following example exports the specification file from the *openapi* definition that you created in the previous section to a local file named *specificationFile.json*.
+
+```azurecli-interactive
+az apic api definition export-specification \
+ --resource-group myResourceGroup --service myAPICenter \
+ --api-name petstore-api --version-name v1-0-0 \
+ --definition-name openapi --file-name "/Path/to/specificationFile.json"
+```
+
+## Register API from a specification file - single step
+
+You can register an API from a local specification file in a single step by using the [az apic api register](/cli/azure/apic/api#az-apic-api-register) command. With this option, a default API version and definition are created automatically for the API.
+
+The following example registers an API in the *myAPICenter* API center from a local OpenAPI definition file named *specificationFile.json*.
++
+```azurecli-interactive
+az apic api register --resource-group myResourceGroup \
+ --service myAPICenter --api-location "/Path/to/specificationFile.json"
+```
+
+* The command sets the API properties such as name and type from values in the definition file.
+* By default, the command sets the API's **Lifecycle stage** to *design*.
+* It creates a default API version named *1-0-0* and a default definition named according to the specification format (for example, *openapi*).
+
+After registering an API, you can update the API's properties by using the [az apic api update](/cli/azure/apic/api#az_apic_api_update), [az apic api version update](/cli/azure/apic/api/version#az_apic_api_version_update), and [az apic api definition update](/cli/azure/apic/api/definition#az_apic_api_definition_update) commands.
+
+## Delete API resources
+
+Use the [az apic api delete](/cli/azure/apic/api#az_apic_api_delete) command to delete an API and all of its version and definition resources. For example:
+
+```azurecli-interactive
+az apic api delete \
+    --resource-group myResourceGroup --service myAPICenter \
+ --name petstore-api
+```
+
+To delete individual API versions and definitions, use [az apic api version delete](/cli/azure/apic/api/version#az-apic-api-version-delete) and [az apic api definition delete](/cli/azure/apic/api/definition#az-apic-api-definition-delete), respectively.
+
+## Related content
+
+See the [Azure CLI reference for API Center](/cli/azure/apic) for a complete command list, including commands to manage [environments](/cli/azure/apic/environment), [deployments](/cli/azure/apic/api/deployment), [metadata schemas](/cli/azure/apic/metadata-schema), and [API Center services](/cli/azure/apic/service).
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
This guidance helps you provide the required information to define how to authen
## Sovereign clouds
-Here is an overview of settings that need to be configured to be able to work with sovereign clouds
+Here is an overview of settings that need to be configured to be able to work with sovereign clouds:
| Name | Public | Azure China | US Government |
|--|--|--|--|
| config.service.auth.tokenAudience | `https://azure-api.net/configuration` (Default) | `https://azure-api.cn/configuration` | `https://azure-api.us/configuration` |
+| logs.applicationinsights.endpoint | `https://dc.services.visualstudio.com/v2/track` (Default) | `https://dc.applicationinsights.azure.cn/v2/track` | `https://dc.applicationinsights.us/v2/track` |
## How to configure settings
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
description: Learn how to migrate your App Service Environment to App Service En
Previously updated : 12/14/2023 Last updated : 1/16/2024 zone_pivot_groups: app-service-cli-portal
After you add your custom domain suffix details, the "Migrate" button will be en
Once you complete all of the above steps, you can start migration. Make sure you understand the [implications of migration](migrate.md#migrate-to-app-service-environment-v3) including what happens during this time. This step takes three to six hours for v2 to v3 migrations and up to six hours for v1 to v3 migrations depending on environment size. Scaling and modifications to your existing App Service Environment are blocked during this step.
+> [!NOTE]
+> In rare cases, you might see a notification in the portal that says "Migration to App Service Environment v3 failed" after you start migration. There's a known bug that might trigger this notification even if the migration is progressing. Check the activity log for the App Service Environment to determine the validity of this error message.
+>
+> :::image type="content" source="./media/migration/migration-error.png" alt-text="Screenshot that shows the potential error notification after starting migration.":::
+>
+ Detailed migration statuses are only available when using the Azure CLI at this time. For more information, see the CLI guidance under the Azure CLI section for Migrate to App Service Environment v3. When migration is complete, you have an App Service Environment v3, and all of your apps are running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
azure-app-configuration Enable Dynamic Configuration Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-app.md
Then, open the *pom.xml* file in a text editor and add a `<dependency>` for `spr
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>5.4.0</version>
+ <version>5.8.0</version>
</dependency> ```
Then, open the *pom.xml* file in a text editor and add a `<dependency>` for `spr
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>4.10.0</version>
+ <version>4.14.0</version>
</dependency> ```
azure-app-configuration Enable Dynamic Configuration Java Spring Push Refresh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-java-spring-push-refresh.md
In this tutorial, you learn how to:
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>5.5.0</version>
+ <version>5.8.0</version>
<type>pom</type> <scope>import</scope> </dependency>
In this tutorial, you learn how to:
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
- <version>4.10.0</version>
</dependency> <!-- Adds the Ability to Push Refresh -->
In this tutorial, you learn how to:
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>4.11.0</version>
+ <version>4.14.0</version>
<type>pom</type> <scope>import</scope> </dependency>
In this tutorial, you learn how to:
mvn com.microsoft.azure:azure-webapp-maven-plugin:2.5.0:config ```
-1. Open bootstrap.properties and configure Azure App Configuration Push Refresh and Azure Service Bus
+1. Open bootstrap.properties and configure Azure App Configuration Push Refresh.
```properties # Azure App Configuration Properties
Event Grid Web Hooks require validation on creation. You can validate by followi
:::image type="content" source="./media/event-subscription-view-webhook.png" alt-text="Web Hook shows up in a table on the bottom of the page." :::

> [!NOTE]
-> When subscribing for configuration changes, one or more filters can be used to reduce the number of events sent to your application. These can be configured either as [Event Grid subscription filters](../event-grid/event-filtering.md) or [Service Bus subscription filters](../service-bus-messaging/topic-filters.md). For example, a subscription filter can be used to only subscribe to events for changes in a key that starts with a specific string.
+> When subscribing for configuration changes, one or more filters can be used to reduce the number of events sent to your application. These can be configured as [Event Grid subscription filters](../event-grid/event-filtering.md). For example, a subscription filter can be used to only subscribe to events for changes in a key that starts with a specific string.
+
+> [!NOTE]
+> If you have multiple instances of your application running, you can use the `appconfiguration-refresh-bus` endpoint, which requires setting up Azure Service Bus. Azure Service Bus is used to send a message to all instances of your application telling them to refresh their configuration, which ensures every instance is updated with the latest configuration. This endpoint isn't available unless you have `spring-cloud-bus` as a configured dependency. See the [Azure Service Bus Spring Cloud Bus documentation](/azure/developer/java/spring-framework/using-service-bus-in-spring-applications) for more information. Only the Service Bus connection needs to be set up; the Azure App Configuration library handles sending and receiving the messages.
## Verify and test application
azure-app-configuration Howto Convert To The New Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-convert-to-the-new-spring-boot.md
All of the group and artifact IDs in the Azure libraries for Spring Boot have be
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>5.5.0</version>
+ <version>5.8.0</version>
<type>pom</type> <scope>import</scope> </dependency>
All of the group and artifact IDs in the Azure libraries for Spring Boot have be
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>4.11.0</version>
+ <version>4.14.0</version>
<type>pom</type> <scope>import</scope> </dependency>
spring.cloud.azure.appconfiguration.stores[0].monitoring.feature-flag-refresh-in
The property `spring.cloud.azure.appconfiguration.stores[0].feature-flags.label` has been removed. Instead, you can use `spring.cloud.azure.appconfiguration.stores[0].feature-flags.selects[0].label-filter` to specify a label filter.
+## Using Client Customizers
+
+`ConfigurationClientCustomizer` and `SecretClientCustomizer` are used to customize the `ConfigurationClient` and `SecretClient` instances. You can use them to modify the clients before they're used to connect to App Configuration. This allows for using any credential type supported by the [Azure Identity library](https://github.com/Azure/azure-sdk-for-jav#credential-classes). You can also modify the clients to set a custom `HttpClient` or `HttpPipeline`.
+
+```java
+import com.azure.core.credential.TokenCredential;
+import com.azure.data.appconfiguration.ConfigurationClientBuilder;
+import com.azure.identity.AzureCliCredential;
+import com.azure.identity.AzureCliCredentialBuilder;
+import com.azure.identity.ChainedTokenCredential;
+import com.azure.identity.ChainedTokenCredentialBuilder;
+import com.azure.identity.EnvironmentCredentialBuilder;
+import com.azure.identity.ManagedIdentityCredential;
+import com.azure.identity.ManagedIdentityCredentialBuilder;
+import com.azure.spring.cloud.appconfiguration.config.ConfigurationClientCustomizer;
+
+public class ConfigurationClientCustomizerImpl implements ConfigurationClientCustomizer {
+
+ @Override
+ public void customize(ConfigurationClientBuilder builder, String endpoint) {
+ AzureCliCredential cliCredential = new AzureCliCredentialBuilder().build();
+ String managedIdentityClientId = System.getenv("MANAGED_IDENTITY_CLIENT_ID");
+ ManagedIdentityCredential managedIdentityCredential = new ManagedIdentityCredentialBuilder()
+ .clientId(managedIdentityClientId).build();
+ ChainedTokenCredential credential = new ChainedTokenCredentialBuilder().addLast(cliCredential)
+ .addLast(managedIdentityCredential).build();
+ builder.credential(credential);
+ }
+}
+```
+ ## Possible conflicts with Spring Cloud Azure global properties

[Spring Cloud Azure common configuration properties](/azure/developer/java/spring-framework/configuration) enable you to customize your connections to Azure services. The new App Configuration library will pick up any global or App Configuration setting that's configured with Spring Cloud Azure common configuration properties. Your connection to App Configuration will change if the configurations are set for another Spring Cloud Azure library.
azure-app-configuration Quickstart Feature Flag Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-feature-flag-spring-boot.md
To create a new Spring Boot project:
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>5.5.0</version>
+ <version>5.8.0</version>
<type>pom</type> <scope>import</scope> </dependency>
To create a new Spring Boot project:
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>4.11.0</version>
+ <version>4.14.0</version>
<type>pom</type> <scope>import</scope> </dependency>
azure-app-configuration Quickstart Java Spring App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-java-spring-app.md
To install the Spring Cloud Azure Config starter module, add the following depen
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>5.5.0</version>
+ <version>5.8.0</version>
<type>pom</type> <scope>import</scope> </dependency>
To install the Spring Cloud Azure Config starter module, add the following depen
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>4.11.0</version>
+ <version>4.14.0</version>
<type>pom</type> <scope>import</scope> </dependency>
azure-app-configuration Use Feature Flags Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-feature-flags-spring-boot.md
The easiest way to connect your Spring Boot application to App Configuration is
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>5.5.0</version>
+ <version>5.8.0</version>
<type>pom</type> <scope>import</scope> </dependency>
The easiest way to connect your Spring Boot application to App Configuration is
<dependency> <groupId>com.azure.spring</groupId> <artifactId>spring-cloud-azure-dependencies</artifactId>
- <version>4.11.0</version>
+ <version>4.14.0</version>
<type>pom</type> <scope>import</scope> </dependency>
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
### Dell
-|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
+|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version|
|--|--|--|--|--|
-| [Unity XT](https://www.dell.com/en-us/dt/storage/unity.htm) |1.24.3|1.15.0_2023-01-10|16.0.816.19223 |Not validated|
-| [PowerStore T](https://www.dell.com/en-us/dt/storage/powerstore-storage-appliance.htm) |1.24.3|1.15.0_2023-01-10|16.0.816.19223 |Not validated|
-| [PowerFlex](https://www.dell.com/en-us/dt/storage/powerflex.htm) |1.25.0 | 1.21.0_2023-07-11 | 16.0.5100.7242 | 14.5 (Ubuntu 20.04) |
-| [PowerStore X](https://www.dell.com/en-us/dt/storage/powerstore-storage-appliance/powerstore-x-series.htm)|1.20.6|1.0.0_2021-07-30|15.0.2148.140 | 12.3 (Ubuntu 12.3-1) |
+|[PowerStore](https://www.dell.com/en-us/shop/powerstore/sf/power-store)|1.25.15|1.25.0_2023-11-14|16.0.5100.7246|Not validated|
+|[Unity XT](https://www.dell.com/en-us/dt/storage/unity.htm) |1.24.3|1.15.0_2023-01-10|16.0.816.19223 |Not validated|
+|[PowerFlex](https://www.dell.com/en-us/dt/storage/powerflex.htm) |1.25.0 |1.21.0_2023-07-11 |16.0.5100.7242 |14.5 (Ubuntu 20.04) |
### Hitachi

|Solution and version |Kubernetes version |Azure Arc-enabled data services version |SQL engine version |PostgreSQL server version|
azure-functions Functions Bindings Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-dapr.md
Learn how to use the Dapr Extension for Azure Functions via the provided samples
::: zone-end
+## Troubleshooting
+
+This section describes how to troubleshoot issues that can occur when using the Dapr extension for Azure Functions.
+
+### Ensure Dapr is enabled in your environment
+
+If you're using Dapr bindings and triggers in Azure Functions, and Dapr isn't enabled in your environment, you might receive the error message: `Dapr sidecar isn't present. Please see (https://aka.ms/azure-functions-dapr-sidecar-missing) for more information.` To enable Dapr in your environment:
+
+- If your Azure Function is deployed in Azure Container Apps, refer to [Dapr enablement instructions for the Dapr extension for Azure Functions](../azure-functions/functions-bindings-dapr.md#dapr-enablement).
+
+- If your Azure Function is deployed in Kubernetes, verify that your [deployment's YAML configuration](https://github.com/azure/azure-functions-dapr-extension/blob/master/deploy/kubernetes/kubernetes-deployment.md#sample-kubernetes-deployment) has the following annotations:
+
+ ```YAML
+ annotations:
+ ...
+ dapr.io/enabled: "true"
+ dapr.io/app-id: "functionapp"
+ # You should only set app-port if you are using a Dapr trigger in your code.
+ dapr.io/app-port: "<DAPR_APP_PORT>"
+ ...
+ ```
+
+- If you're running your Azure Function locally, run the following command to ensure you're [running the function app with Dapr](https://github.com/azure/azure-functions-dapr-extension/tree/master/samples/python-v2-azurefunction#step-2run-function-app-with-dapr):
+
+ ```bash
+ dapr run --app-id functionapp --app-port <DAPR_APP_PORT> --components-path <COMPONENTS_PATH> -- func host start
+ ```
+
+### Verify app-port value in Dapr configuration
+
+The Dapr extension for Azure Functions starts an HTTP server on port `3001` by default. You can configure this port using the [`DAPR_APP_PORT` environment variable](https://docs.dapr.io/reference/environment/).
+
+If you provide an incorrect app port value when running an Azure Functions app, you might receive the error message: `The Dapr sidecar is configured to listen on port {portInt}, but the app server is running on port {appPort}. This may cause unexpected behavior. For more information, visit [this link](https://aka.ms/azfunc-dapr-app-config-error).` To resolve this error message:
+
+1. In your container app's Dapr settings:
+
+ - If you're using a Dapr trigger in your code, verify that the app port is set to `3001` or to the value of the `DAPR_APP_PORT` environment variable.
+
+ - If you're _not_ using a Dapr trigger in your code, verify that the app port is _not_ set. It should be empty.
+
+1. Verify that you provide the correct app port value in the Dapr configuration.
+
+ - If you're using Azure Container Apps, specify the app port in Bicep:
+
+    ```bicep
+ DaprConfig: {
+ ...
+ appPort: <DAPR_APP_PORT>
+ ...
+ }
+ ```
+
+ - If you're using a Kubernetes environment, set the `dapr.io/app-port` annotation:
+
+ ```
+ annotations:
+ ...
+ dapr.io/app-port: "<DAPR_APP_PORT>"
+ ...
+ ```
+
+ - If you're developing locally, verify you set `--app-port` when running the function app with Dapr:
+
+ ```
+ dapr run --app-id functionapp --app-port <DAPR_APP_PORT> --components-path <COMPONENTS_PATH> -- func host start
+ ```
+ ## Next steps [Learn more about Dapr.](https://docs.dapr.io/)
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
You can define a data collection rule to send data from multiple machines to mul
### [API](#tab/api)
-1. Create a DCR file by using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
+1. Create a DCR file by using the JSON format shown in [Sample DCR](../essentials/data-collection-rule-samples.md#azure-monitor-agentevents-and-performance-data).
1. Create the rule by using the [REST API](/rest/api/monitor/datacollectionrules/create#examples).
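For illustration, a minimal Python sketch of that REST call using the Azure Identity library; the subscription, resource group, rule name, and the `2022-06-01` api-version are assumptions/placeholders, and the DCR JSON is the file created in step 1.

```python
import json
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
resource_group = "my-resource-group"    # placeholder
dcr_name = "my-dcr"                     # placeholder

# Acquire an ARM token for the REST call.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

with open("dcr.json") as f:  # the DCR file from step 1
    dcr_body = json.load(f)

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Insights"
    f"/dataCollectionRules/{dcr_name}?api-version=2022-06-01"  # assumed api-version
)
response = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json=dcr_body)
print(response.status_code)
```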
azure-monitor Data Collection Rule Sample Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-sample-agent.md
- Title: Sample data collection rule - agent
-description: Sample data collection rule for Azure Monitor agent
- Previously updated : 07/19/2023-----
-# Sample data collection rule - agent
-The sample [data collection rule](../essentials/data-collection-rule-overview.md) below is for virtual machines with Azure Monitor agent and has the following details:
--- Performance data
- - Collects specific Processor, Memory, Logical Disk, and Physical Disk counters every 15 seconds and uploads every minute.
- - Collects specific Process counters every 30 seconds and uploads every 5 minutes.
-- Windows events
- - Collects Windows security events and uploads every minute.
- - Collects Windows application and system events and uploads every 5 minutes.
-- Syslog
- - Collects Debug, Critical, and Emergency events from cron facility.
- - Collects Alert, Critical, and Emergency events from syslog facility.
-- Destinations
- - Sends all data to a Log Analytics workspace named centralWorkspace.
-
-> [!NOTE]
-> For an explanation of XPaths that are used to specify event collection in data collection rules, see [Limit data collection with custom XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries).
-
-## Sample DCR
-
-```json
-{
- "location": "eastus",
- "properties": {
- "dataSources": {
- "performanceCounters": [
- {
- "name": "cloudTeamCoreCounters",
- "streams": [
- "Microsoft-Perf"
- ],
- "scheduledTransferPeriod": "PT1M",
- "samplingFrequencyInSeconds": 15,
- "counterSpecifiers": [
- "\\Processor(_Total)\\% Processor Time",
- "\\Memory\\Committed Bytes",
- "\\LogicalDisk(_Total)\\Free Megabytes",
- "\\PhysicalDisk(_Total)\\Avg. Disk Queue Length"
- ]
- },
- {
- "name": "appTeamExtraCounters",
- "streams": [
- "Microsoft-Perf"
- ],
- "scheduledTransferPeriod": "PT5M",
- "samplingFrequencyInSeconds": 30,
- "counterSpecifiers": [
- "\\Process(_Total)\\Thread Count"
- ]
- }
- ],
- "windowsEventLogs": [
- {
- "name": "cloudSecurityTeamEvents",
- "streams": [
- "Microsoft-Event"
- ],
- "scheduledTransferPeriod": "PT1M",
- "xPathQueries": [
- "Security!*"
- ]
- },
- {
- "name": "appTeam1AppEvents",
- "streams": [
- "Microsoft-Event"
- ],
- "scheduledTransferPeriod": "PT5M",
- "xPathQueries": [
- "System!*[System[(Level = 1 or Level = 2 or Level = 3)]]",
- "Application!*[System[(Level = 1 or Level = 2 or Level = 3)]]"
- ]
- }
- ],
- "syslog": [
- {
- "name": "cronSyslog",
- "streams": [
- "Microsoft-Syslog"
- ],
- "facilityNames": [
- "cron"
- ],
- "logLevels": [
- "Debug",
- "Critical",
- "Emergency"
- ]
- },
- {
- "name": "syslogBase",
- "streams": [
- "Microsoft-Syslog"
- ],
- "facilityNames": [
- "syslog"
- ],
- "logLevels": [
- "Alert",
- "Critical",
- "Emergency"
- ]
- }
- ]
- },
- "destinations": {
- "logAnalytics": [
- {
- "workspaceResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace",
- "name": "centralWorkspace"
- }
- ]
- },
- "dataFlows": [
- {
- "streams": [
- "Microsoft-Perf",
- "Microsoft-Syslog",
- "Microsoft-Event"
- ],
- "destinations": [
- "centralWorkspace"
- ]
- }
- ]
- }
- }
-```
--
-## Next steps
--- [Create a data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine using the Azure Monitor agent.
azure-monitor Alerts Manage Alert Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md
Previously updated : 10/25/2023 Last updated : 01/14/2024 # Manage your alert rules
Manage your alert rules in the Azure portal, or using the CLI or PowerShell.
## Enable recommended alert rules in the Azure portal
-If you don't have alert rules defined for the selected resource, either individually or as part of a resource group or subscription, you can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or enable recommended out-of-the-box alert rules in the Azure portal.
+You can [create a new alert rule](alerts-log.md#create-a-new-log-alert-rule-in-the-azure-portal), or enable recommended out-of-the-box alert rules in the Azure portal.
The system compiles a list of recommended alert rules based on: - The resource provider's knowledge of important signals and thresholds for monitoring the resource.-- Telemetry that tells us what customers commonly alert on for this resource.
+- Data that tells us what customers commonly alert on for this resource.
> [!NOTE] > The alert rule recommendations feature is enabled for:
The system compiles a list of recommended alert rules based on:
To enable recommended alert rules: 1. In the left pane, select **Alerts**.
-1. Select **View + enable**. The **Set up recommended alert rules** pane opens with a list of recommended alert rules based on your type of resource.
-1. In the **Alert me if** section, all recommended alerts are enabled by default. The rules are populated with the default values for the rule condition, such as the percentage of CPU usage that you want to trigger an alert. You can change the default values if you would like, or turn off an alert.
+1. Select **View + set up**. The **Set up recommended alert rules** pane opens with a list of recommended alert rules based on your type of resource.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/set-up-recommended-alerts.png" alt-text="Screenshot of recommended alert rules pane.":::
+
+1. In the **Select alert rules** section, all recommended alerts are populated with the default values for the rule condition, such as the percentage of CPU usage that you want to trigger an alert. You can change the default values if you would like, or turn off an alert.
+1. Expand each of the alert rules to see its details. By default, the severity for each is **Informational**. You can change to another severity if you'd like.
+
+ :::image type="content" source="media/alerts-managing-alert-instances/configure-alert-severity.png" alt-text="Screenshot of recommended alert rule severity configuration." lightbox="media/alerts-managing-alert-instances/configure-alert-severity.png":::
+ 1. In the **Notify me by** section, select the way you want to be notified if an alert is fired. 1. Select **Use an existing action group**, and enter the details of the existing action group if you want to use an action group that already exists. 1. Select **Save**.
- :::image type="content" source="media/alerts-managing-alert-instances/set-up-recommended-alerts.png" alt-text="Screenshot of recommended alert rules pane.":::
- ## Manage metric alert rules with the Azure CLI This section describes how to manage metric alert rules using the cross-platform [Azure CLI](/cli/azure/get-started-with-azure-cli). The following examples use [Azure Cloud Shell](../../cloud-shell/overview.md).
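For instance, a minimal hedged sketch of listing and creating metric alert rules with the Azure CLI; the resource group, VM, and action group names below are placeholders rather than the article's own examples:

```bash
# List metric alert rules in a resource group (names below are placeholders).
az monitor metrics alert list --resource-group myResourceGroup --output table

# Create a metric alert that fires when average CPU usage exceeds 90%.
az monitor metrics alert create \
  --name cpu-over-90 \
  --resource-group myResourceGroup \
  --scopes /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM \
  --condition "avg Percentage CPU > 90" \
  --action myActionGroup \
  --description "Alert when average CPU usage exceeds 90 percent"
```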
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
description: Learn about Azure Monitor alerts, alert rules, action processing ru
Previously updated : 09/12/2023 Last updated : 01/14/2024
This table describes when a stateful alert is considered resolved:
## Recommended alert rules
-If you don't have alert rules defined for the selected resource, you can [enable recommended out-of-the-box alert rules in the Azure portal](alerts-manage-alert-rules.md#enable-recommended-alert-rules-in-the-azure-portal).
+You can [enable recommended out-of-the-box alert rules in the Azure portal](alerts-manage-alert-rules.md#enable-recommended-alert-rules-in-the-azure-portal).
The system compiles a list of recommended alert rules based on:
Use [log alert rules](alerts-create-log-alert-rule.md) to monitor all resources
You can also create resource-centric alerts instead of workspace-centric alerts by using **Split by dimensions**. When you split on the resourceId column, you will get one alert per resource that meets the condition.
-Log alert rules that use splitting by dimensions are charged based on the number of time series created by the dimensions resulting from your query. If the data is already collected to an Log Analytics workspace, there is no additional cost.
+Log alert rules that use splitting by dimensions are charged based on the number of time series created by the dimensions resulting from your query. If the data is already collected to a Log Analytics workspace, there is no additional cost.
If you use metric data at scale in the Log Analytics workspace, pricing will change based on the data ingestion. ### Using Azure policies for alerting at scale
-You can use [Azure policies](/azure/governance/policy/overview) to set up alerts at-scale. This has the advantage of easily implementing alerts at-scale. You can see how this is implementated with [Azure Monitor baseline alerts](https://aka.ms/amba).
+You can use [Azure policies](/azure/governance/policy/overview) to set up alerts at-scale. This has the advantage of easily implementing alerts at-scale. You can see how this is implemented with [Azure Monitor baseline alerts](https://aka.ms/amba).
Keep in mind that if you use policies to create alert rules, you may have the increased overhead of maintaining a large alert rule set.
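As a hedged sketch of how such an assignment can be made with the Azure CLI, where the policy definition ID and subscription ID are placeholders rather than a specific built-in policy:

```bash
# Assign a policy that deploys alert rules across a subscription.
# The policy definition ID and subscription ID below are placeholders.
az policy assignment create \
  --name deploy-baseline-alerts \
  --policy "/providers/Microsoft.Authorization/policyDefinitions/<policy-definition-id>" \
  --scope "/subscriptions/<subscription-id>"
```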
azure-monitor Tutorial Monitor Vm Alert Recommended https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/tutorial-monitor-vm-alert-recommended.md
description: Enable set of recommended metric alert rules for an Azure virtual m
Previously updated : 10/20/2023 Last updated : 01/14/2024
To complete the steps in this article you need the following:
- An Azure virtual machine to monitor.
-> [!IMPORTANT]
-> If the VM has any other alert rules associate with it, then recommended alerts will not be available. You can access recommended alerts by removing any alert rules targeted to the VM.
-- ## Create recommended alert rules
-From the menu for the VM, select **Alerts** in the **Monitoring** section. Select **View + enable**.
--
-A list of recommended alert rules is displayed. You can select which ones to create and change their recommended threshold if you want. Ensure that **Email** is enabled and provide an email address to be notified when any of the alerts fire. An [action group](../alerts/action-groups.md) will be created with this address. If you already have an action group that you want to use, you can specify it instead.
+1. From the menu for the VM, select **Alerts** in the **Monitoring** section. Select **View + set up**.
+ :::image type="content" source="media/tutorial-monitor-vm/enable-recommended-alerts.png" alt-text="Screenshot of option to enable recommended alerts for a virtual machine." lightbox="media/tutorial-monitor-vm/enable-recommended-alerts.png":::
+ A list of recommended alert rules is displayed. You can select which rules to create. You can also change the recommended threshold. Ensure that **Email** is enabled and provide an email address to be notified when any of the alerts fire. An [action group](../alerts/action-groups.md) will be created with this address. If you already have an action group that you want to use, you can specify it instead.
-Expand each of the alert rules to inspect its details. By default, the severity for each is **Informational**. You might want to change to another severity such as **Error**.
+ :::image type="content" source="media/tutorial-monitor-vm/set-up-recommended-alerts.png" alt-text="Screenshot of recommended alert rule configuration." lightbox="media/tutorial-monitor-vm/set-up-recommended-alerts.png":::
+1. Expand each of the alert rules to see its details. By default, the severity for each is **Informational**. You might want to change to another severity such as **Error**.
+ :::image type="content" source="media/tutorial-monitor-vm/configure-alert-severity.png" alt-text="Screenshot of recommended alert rule severity configuration." lightbox="media/tutorial-monitor-vm/configure-alert-severity.png":::
-Select **Save** to create the alert rules.
+1. Select **Save** to create the alert rules.
## View created alert rules
azure-netapp-files Azure Netapp Files Performance Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-performance-considerations.md
> This article addresses performance considerations for *regular volumes* only. > For *large volumes*, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md#requirements-and-considerations).
-The combination of the quota assigned to the volume and the selected service level determins the [throughput limit](azure-netapp-files-service-levels.md) for a volume with automatic QoS . For volumes with manual QoS, the throughput limit can be defined individually. When you make performance plans about Azure NetApp Files, you need to understand several considerations.
+The combination of the quota assigned to the volume and the selected service level determines the [throughput limit](azure-netapp-files-service-levels.md) for a volume with automatic QoS. For volumes with manual QoS, the throughput limit can be defined individually. When you make performance plans for Azure NetApp Files, you need to understand several considerations.
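As a rough, hedged illustration of how the automatic QoS limit is derived, assuming the published Premium service level rate of 64 MiB/s per 1 TiB of volume quota:

```bash
# Hypothetical estimate of the automatic QoS throughput limit for a regular volume.
# Assumes the Premium service level rate of 64 MiB/s per 1 TiB of volume quota.
quota_tib=2
premium_mib_s_per_tib=64
echo "$(( quota_tib * premium_mib_s_per_tib )) MiB/s"   # prints: 128 MiB/s
```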
## Quota and throughput
azure-netapp-files Backup Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-introduction.md
Azure NetApp Files backup is supported for the following regions:
* Brazil Southeast * Canada Central * Canada East
+* Central India
* Central US * East Asia * East US
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
# Configure customer-managed keys for Azure NetApp Files volume encryption
-Customer-managed keys in Azure NetApp Files volume encryption enable you to use your own keys rather than a Microsoft-managed key when creating a new volume. With customer-managed keys, you can fully manage the relationship between a key's life cycle, key usage permissions, and auditing operations on keys.
+Customer-managed keys for Azure NetApp Files volume encryption enable you to use your own keys rather than a Microsoft-managed key when creating a new volume. With customer-managed keys, you can fully manage the relationship between a key's life cycle, key usage permissions, and auditing operations on keys.
The following diagram demonstrates how customer-managed keys work with Azure NetApp Files:
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
### [Azure CLI](#tab/azure-cli)
-The process to configure a NetApp account with customer-managed keys in the Azure CLI depends on whether you are using a [system-assigned identity](#use-a-system-assigned-identity) or an [user-assigned identity](#use-a-new-user-assigned-identity).
+How you configure a NetApp account with customer-managed keys by using the Azure CLI depends on whether you are using a [system-assigned identity](#use-a-system-assigned-identity) or a [user-assigned identity](#use-a-new-user-assigned-identity).
#### Use a system-assigned identity
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Standard storage with cool access is supported for the following regions:
* Switzerland West * UAE North * US Gov Arizona
+* US Gov Virginia
* West US ## Effects of cool access on data
azure-netapp-files Large Volumes Requirements Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/large-volumes-requirements-considerations.md
Support for Azure NetApp Files large volumes is available in the following regio
* Central US * East US * East US 2
+* France Central
* Germany West Central * Japan East * North Europe
azure-portal Azure Portal Dashboards Create Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards-create-programmatically.md
Declare required template metadata and the parameters at the top of the JSON tem
} }, "variables": {},-
- ... rest of template omitted ...
+ "resources": [
+ ... rest of template omitted ...
+ ]
+}
``` Once you've configured your template, deploy it using any of the following methods:
azure-resource-manager Bicep Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-array.md
Title: Bicep functions - arrays description: Describes the functions to use in a Bicep file for working with arrays.- - Previously updated : 12/09/2022 Last updated : 01/11/2024 # Array functions for Bicep
For arrays, the function iterates through each element in the first parameter an
For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
+The union function merges not only the top-level elements but also recursively merges any nested objects within them. Nested array values are not merged. See the second example in the following section.
+ ### Example The following example shows how to use union with arrays and objects:
The output from the preceding example with the default values is:
| objectOutput | Object | {"one": "a", "two": "b", "three": "c2", "four": "d", "five": "e"} | | arrayOutput | Array | ["one", "two", "three", "four"] |
+The following example shows the deep merge capability:
+
+```bicep
+var firstObject = {
+ property: {
+ one: 'a'
+ two: 'b'
+ three: 'c1'
+ }
+ nestedArray: [
+ 1
+ 2
+ ]
+}
+var secondObject = {
+ property: {
+ three: 'c2'
+ four: 'd'
+ five: 'e'
+ }
+ nestedArray: [
+ 3
+ 4
+ ]
+}
+var firstArray = [
+ [
+ 'one'
+ 'two'
+ ]
+ [
+ 'three'
+ ]
+]
+var secondArray = [
+ [
+ 'three'
+ ]
+ [
+ 'four'
+ 'two'
+ ]
+]
+
+output objectOutput object = union(firstObject, secondObject)
+output arrayOutput array = union(firstArray, secondArray)
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| objectOutput | Object |{"property":{"one":"a","two":"b","three":"c2","four":"d","five":"e"},"nestedArray":[3,4]}|
+| arrayOutput | Array |[["one","two"],["three"],["four","two"]]|
+
+If nested arrays were merged, then the value of **objectOutput.nestedArray** would be [1, 2, 3, 4], and the value of **arrayOutput** would be [["one", "two", "three"], ["three", "four", "two"]].
+ ## Next steps * To get an array of string values delimited by a value, see [split](./bicep-functions-string.md#split).
azure-resource-manager Bicep Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-object.md
Title: Bicep functions - objects description: Describes the functions to use in a Bicep file for working with objects.-- Previously updated : 03/19/2023 Last updated : 01/11/2024 # Object functions for Bicep
For arrays, the function iterates through each element in the first parameter an
For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
+The union function merges not only the top-level elements but also recursively merges any nested objects within them. Nested array values are not merged. See the second example in the following section.
+ ### Example The following example shows how to use union with arrays and objects:
The output from the preceding example with the default values is:
| objectOutput | Object | {"one": "a", "two": "b", "three": "c2", "four": "d", "five": "e"} | | arrayOutput | Array | ["one", "two", "three", "four"] |
+The following example shows the deep merge capability:
+
+```bicep
+var firstObject = {
+ property: {
+ one: 'a'
+ two: 'b'
+ three: 'c1'
+ }
+ nestedArray: [
+ 1
+ 2
+ ]
+}
+var secondObject = {
+ property: {
+ three: 'c2'
+ four: 'd'
+ five: 'e'
+ }
+ nestedArray: [
+ 3
+ 4
+ ]
+}
+var firstArray = [
+ [
+ 'one'
+ 'two'
+ ]
+ [
+ 'three'
+ ]
+]
+var secondArray = [
+ [
+ 'three'
+ ]
+ [
+ 'four'
+ 'two'
+ ]
+]
+
+output objectOutput object = union(firstObject, secondObject)
+output arrayOutput array = union(firstArray, secondArray)
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| objectOutput | Object |{"property":{"one":"a","two":"b","three":"c2","four":"d","five":"e"},"nestedArray":[3,4]}|
+| arrayOutput | Array |[["one","two"],["three"],["four","two"]]|
+
+If nested arrays were merged, then the value of **objectOutput.nestedArray** would be [1, 2, 3, 4], and the value of **arrayOutput** would be [["one", "two", "three"], ["three", "four", "two"]].
+ ## Next steps * For a description of the sections in a Bicep file, see [Understand the structure and syntax of Bicep files](./file.md).
azure-resource-manager Template Functions Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-array.md
Title: Template functions - arrays
description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with arrays. Previously updated : 05/22/2023 Last updated : 01/11/2024 # Array functions for ARM templates
For arrays, the function iterates through each element in the first parameter an
For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
+The union function merges not only the top-level elements but also recursively merges any nested objects within them. Nested array values are not merged. See the second example in the following section.
+ ### Example The following example shows how to use union with arrays and objects.
The output from the preceding example with the default values is:
| objectOutput | Object | {"one": "a", "two": "b", "three": "c2", "four": "d", "five": "e"} | | arrayOutput | Array | ["one", "two", "three", "four"] |
+The following example shows the deep merge capability:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "variables": {
+ "firstObject": {
+ "property": {
+ "one": "a",
+ "two": "b",
+ "three": "c1"
+ },
+ "nestedArray": [
+ 1,
+ 2
+ ]
+ },
+ "secondObject": {
+ "property": {
+ "three": "c2",
+ "four": "d",
+ "five": "e"
+ },
+ "nestedArray": [
+ 3,
+ 4
+ ]
+ },
+ "firstArray": [
+ [
+ "one",
+ "two"
+ ],
+ [
+ "three"
+ ]
+ ],
+ "secondArray": [
+ [
+ "three"
+ ],
+ [
+ "four",
+ "two"
+ ]
+ ]
+ },
+ "resources": [],
+ "outputs": {
+ "objectOutput": {
+ "type": "Object",
+ "value": "[union(variables('firstObject'), variables('secondObject'))]"
+ },
+ "arrayOutput": {
+ "type": "Array",
+ "value": "[union(variables('firstArray'), variables('secondArray'))]"
+ }
+ }
+}
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| objectOutput | Object |{"property":{"one":"a","two":"b","three":"c2","four":"d","five":"e"},"nestedArray":[3,4]}|
+| arrayOutput | Array |[["one","two"],["three"],["four","two"]]|
+
+If nested arrays were merged, then the value of **objectOutput.nestedArray** would be [1, 2, 3, 4], and the value of **arrayOutput** would be [["one", "two", "three"], ["three", "four", "two"]].
+ ## Next steps * For a description of the sections in an ARM template, see [Understand the structure and syntax of ARM templates](./syntax.md).
azure-resource-manager Template Functions Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-object.md
Title: Template functions - objects
description: Describes the functions to use in an Azure Resource Manager template (ARM template) for working with objects. Previously updated : 08/22/2023 Last updated : 01/11/2024 # Object functions for ARM templates
For arrays, the function iterates through each element in the first parameter an
For objects, property names and values from the first parameter are added to the result. For later parameters, any new names are added to the result. If a later parameter has a property with the same name, that value overwrites the existing value. The order of the properties isn't guaranteed.
+The union function merges not only the top-level elements but also recursively merges any nested objects within them. Nested array values are not merged. See the second example in the following section.
+ ### Example The following example shows how to use union with arrays and objects:
The output from the preceding example with the default values is:
| objectOutput | Object | {"one": "a", "two": "b", "three": "c2", "four": "d", "five": "e"} | | arrayOutput | Array | ["one", "two", "three", "four"] |
+The following example shows the deep merge capability:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "variables": {
+ "firstObject": {
+ "property": {
+ "one": "a",
+ "two": "b",
+ "three": "c1"
+ },
+ "nestedArray": [
+ 1,
+ 2
+ ]
+ },
+ "secondObject": {
+ "property": {
+ "three": "c2",
+ "four": "d",
+ "five": "e"
+ },
+ "nestedArray": [
+ 3,
+ 4
+ ]
+ },
+ "firstArray": [
+ [
+ "one",
+ "two"
+ ],
+ [
+ "three"
+ ]
+ ],
+ "secondArray": [
+ [
+ "three"
+ ],
+ [
+ "four",
+ "two"
+ ]
+ ]
+ },
+ "resources": [],
+ "outputs": {
+ "objectOutput": {
+ "type": "Object",
+ "value": "[union(variables('firstObject'), variables('secondObject'))]"
+ },
+ "arrayOutput": {
+ "type": "Array",
+ "value": "[union(variables('firstArray'), variables('secondArray'))]"
+ }
+ }
+}
+```
+
+The output from the preceding example is:
+
+| Name | Type | Value |
+| - | - | -- |
+| objectOutput | Object |{"property":{"one":"a","two":"b","three":"c2","four":"d","five":"e"},"nestedArray":[3,4]}|
+| arrayOutput | Array |[["one","two"],["three"],["four","two"]]|
+
+If nested arrays were merged, then the value of **objectOutput.nestedArray** would be [1, 2, 3, 4], and the value of **arrayOutput** would be [["one", "two", "three"], ["three", "four", "two"]].
+ ## Next steps * For a description of the sections in an ARM template, see [Understand the structure and syntax of ARM templates](./syntax.md).
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters
description: Understand the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters. Previously updated : 1/8/2024 Last updated : 1/16/2024
The Multi-AZ capability for Azure VMware Solution Stretched Clusters is also tag
| East US | AZ01 | AV36P | No | | East US | AZ02 | AV36P | No | | East US | AZ03 | AV36, AV36P, AV64 | No |
-| East US 2 | AZ01 | AV36 | No |
+| East US 2 | AZ01 | AV36, AV64 | No |
| East US 2 | AZ02 | AV36P, AV52, AV64 | No | | France Central | AZ01 | AV36 | No | | Germany West Central | AZ02 | AV36 | Yes |
azure-web-pubsub Howto Enable Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-enable-geo-replication.md
To ensure effective failover management, it is recommended to set each replica's
For more performance evaluation, refer to [Performance](concept-performance.md).
-## Breaking issues
-* **Using replica and event handler together**
-
- If you use the Web PubSub event handler with Web PubSub C# server SDK or an Azure Function that utilizes the Web PubSub extension, you might encounter issues with the abuse protection once replicas are enabled. To address this, you can either **disable the abuse protection** or **upgrade to the latest SDK/extension versions**.
-
- For a detailed explanation and potential solutions, please refer to this [issue](https://github.com/Azure/azure-webpubsub/issues/598).
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
Title: Container workloads on Azure Batch description: Learn how to run and scale apps from container images on Azure Batch. Create a pool of compute nodes that support running container tasks. Previously updated : 12/06/2023 Last updated : 01/10/2024 ms.devlang: csharp, python
Keep in mind the following limitations:
- For Windows container workloads, you should choose a multicore VM size for your pool. > [!IMPORTANT]
-> Note that docker, by default, will create a network bridge with a subnet specification of `172.17.0.0/16`. If you are specifying a
+> Docker, by default, will create a network bridge with a subnet specification of `172.17.0.0/16`. If you are specifying a
> [virtual network](batch-virtual-network.md) for your pool, please ensure that there are no conflicting IP ranges. ## Supported VM images
Use one of the following supported Windows or Linux images to create a pool of V
### Windows support
-Batch supports Windows server images that have container support designations. Typically, these image SKU names are suffixed with `win_2016_mcr_20_10` or `win_2022_mcr_20_10` under the Mirantis publisher and are offered as `windows_2016_with_mirantis_container_runtime` or `windows_2022_with_mirantis_container_runtime`. Additionally, [the API to list all supported images in Batch](batch-linux-nodes.md#list-of-virtual-machine-images) denotes a `DockerCompatible` capability if the image supports Docker containers.
+Batch supports Windows server images that have container support designations.
+[The API to list all supported images in Batch](batch-linux-nodes.md#list-of-virtual-machine-images) denotes
+a `DockerCompatible` capability if the image supports Docker containers. Batch allows, but doesn't directly
+support, images published by Mirantis with the capability noted as `DockerCompatible`. These images can only be
+deployed in Batch accounts that use the user subscription pool allocation mode.
You can also create custom images from VMs running Docker on Windows. > [!NOTE]
-> The image SKUs `-with-containers` or `-with-containers-smalldisk` are retired. Please see the [announcement](https://techcommunity.microsoft.com/t5/containers/updates-to-the-windows-container-runtime-support/ba-p/2788799) for details and alternative container runtime options for Kubernetes environment.
+> The image SKUs `-with-containers` or `-with-containers-smalldisk` are retired. Please see the [announcement](https://techcommunity.microsoft.com/t5/containers/updates-to-the-windows-container-runtime-support/ba-p/2788799) for details and alternative container runtime options.
### Linux support
For Linux container workloads, Batch currently supports the following Linux imag
- Offer: `centos-container` - Offer: `ubuntu-server-container`
+- Publisher: `microsoft-dsvm`
+ - Offer: `ubuntu-hpc`
+ #### VM sizes with RDMA - Publisher: `microsoft-azure-batch`
For Linux container workloads, Batch currently supports the following Linux imag
- Publisher: `microsoft-dsvm` - Offer: `ubuntu-hpc`
+> [!IMPORTANT]
+> It is recommended to use the `microsoft-dsvm` `ubuntu-hpc` VM image if possible.
+ #### Notes The docker data root of the above images lies in different places:
- - For the batch image `microsoft-azure-batch` (Offer: `centos-container-rdma`, etc.), the docker data root is mapped to _/mnt/batch/docker_, which is usually located on the temporary disk.
- - For the HPC image, or `microsoft-dsvm` (Offer: `ubuntu-hpc`, etc.), the docker data root is unchanged from the Docker default which is _/var/lib/docker_ on Linux and _C:\ProgramData\Docker_ on Windows. These folders are usually located on the OS disk.
+ - For the Azure Batch published `microsoft-azure-batch` images (Offer: `centos-container-rdma`, etc.), the docker data root is mapped to _/mnt/batch/docker_, which is located on the temporary disk.
+ - For the HPC image, or `microsoft-dsvm` (Offer: `ubuntu-hpc`, etc.), the docker data root is unchanged from the Docker default which is _/var/lib/docker_ on Linux and _C:\ProgramData\Docker_ on Windows. These folders are located on the OS disk.
For non-Batch published images, the OS disk has the potential risk of being filled up quickly as container images are downloaded.
To configure a container-enabled pool without prefetched container images, defin
```python image_ref_to_use = batch.models.ImageReference(
- publisher='microsoft-azure-batch',
- offer='ubuntu-server-container',
- sku='20-04-lts',
+ publisher='microsoft-dsvm',
+ offer='ubuntu-hpc',
+ sku='2204',
version='latest') """
new_pool = batch.models.PoolAddParameter(
virtual_machine_configuration=batch.models.VirtualMachineConfiguration( image_reference=image_ref_to_use, container_configuration=container_conf,
- node_agent_sku_id='batch.node.ubuntu 20.04'),
- vm_size='STANDARD_D1_V2',
+ node_agent_sku_id='batch.node.ubuntu 22.04'),
+ vm_size='STANDARD_D2S_V3',
target_dedicated_nodes=1) ... ``` ```csharp ImageReference imageReference = new ImageReference(
- publisher: "microsoft-azure-batch",
- offer: "ubuntu-server-container",
- sku: "20-04-lts",
+ publisher: "microsoft-dsvm",
+ offer: "ubuntu-hpc",
+ sku: "2204",
version: "latest"); // Specify container configuration. This is required even though there are no prefetched images.
ContainerConfiguration containerConfig = new ContainerConfiguration();
// VM configuration VirtualMachineConfiguration virtualMachineConfiguration = new VirtualMachineConfiguration( imageReference: imageReference,
- nodeAgentSkuId: "batch.node.ubuntu 20.04");
+ nodeAgentSkuId: "batch.node.ubuntu 22.04");
virtualMachineConfiguration.ContainerConfiguration = containerConfig; // Create pool CloudPool pool = batchClient.PoolOperations.CreatePool( poolId: poolId, targetDedicatedComputeNodes: 1,
- virtualMachineSize: "STANDARD_D1_V2",
+ virtualMachineSize: "STANDARD_D2S_V3",
virtualMachineConfiguration: virtualMachineConfiguration); ```
The following basic Python example shows how to prefetch a standard Ubuntu conta
```python image_ref_to_use = batch.models.ImageReference(
- publisher='microsoft-azure-batch',
- offer='ubuntu-server-container',
- sku='20-04-lts',
+ publisher='microsoft-dsvm',
+ offer='ubuntu-hpc',
+ sku='2204',
version='latest') """
new_pool = batch.models.PoolAddParameter(
virtual_machine_configuration=batch.models.VirtualMachineConfiguration( image_reference=image_ref_to_use, container_configuration=container_conf,
- node_agent_sku_id='batch.node.ubuntu 20.04'),
- vm_size='STANDARD_D1_V2',
+ node_agent_sku_id='batch.node.ubuntu 22.04'),
+ vm_size='STANDARD_D2S_V3',
target_dedicated_nodes=1) ... ```
The following C# example assumes that you want to prefetch a TensorFlow image fr
```csharp ImageReference imageReference = new ImageReference(
- publisher: "microsoft-azure-batch",
- offer: "ubuntu-server-container",
- sku: "20-04-lts",
+ publisher: "microsoft-dsvm",
+ offer: "ubuntu-hpc",
+ sku: "2204",
version: "latest"); ContainerRegistry containerRegistry = new ContainerRegistry(
containerConfig.ContainerRegistries = new List<ContainerRegistry> { containerReg
// VM configuration VirtualMachineConfiguration virtualMachineConfiguration = new VirtualMachineConfiguration( imageReference: imageReference,
- nodeAgentSkuId: "batch.node.ubuntu 20.04");
+ nodeAgentSkuId: "batch.node.ubuntu 22.04");
virtualMachineConfiguration.ContainerConfiguration = containerConfig; // Set a native host command line start task
StartTask startTaskContainer = new StartTask( commandLine: "<native-host-command
// Create pool CloudPool pool = batchClient.PoolOperations.CreatePool( poolId: poolId,
- virtualMachineSize: "Standard_NC6",
+ virtualMachineSize: "Standard_NC6S_V3",
virtualMachineConfiguration: virtualMachineConfiguration); // Start the task in the pool
You can also prefetch container images by authenticating to a private container
```python image_ref_to_use = batch.models.ImageReference(
- publisher='microsoft-azure-batch',
- offer='ubuntu-server-container',
- sku='20-04-lts',
- version='latest')
+ publisher='microsoft-dsvm',
+ offer='ubuntu-hpc',
+ sku='2204',
+ version='latest')
# Specify a container registry container_registry = batch.models.ContainerRegistry(
new_pool = batch.models.PoolAddParameter(
virtual_machine_configuration=batch.models.VirtualMachineConfiguration( image_reference=image_ref_to_use, container_configuration=container_conf,
- node_agent_sku_id='batch.node.ubuntu 20.04'),
- vm_size='STANDARD_D1_V2',
+ node_agent_sku_id='batch.node.ubuntu 22.04'),
+ vm_size='STANDARD_D2S_V3',
target_dedicated_nodes=1) ```
containerConfig.ContainerRegistries = new List<ContainerRegistry> { containerReg
// VM configuration VirtualMachineConfiguration virtualMachineConfiguration = new VirtualMachineConfiguration( imageReference: imageReference,
- nodeAgentSkuId: "batch.node.ubuntu 20.04");
+ nodeAgentSkuId: "batch.node.ubuntu 22.04");
virtualMachineConfiguration.ContainerConfiguration = containerConfig; // Create pool CloudPool pool = batchClient.PoolOperations.CreatePool( poolId: poolId,
- targetDedicatedComputeNodes: 4,
- virtualMachineSize: "Standard_NC6",
+ targetDedicatedComputeNodes: 2,
+ virtualMachineSize: "Standard_NC6S_V3",
virtualMachineConfiguration: virtualMachineConfiguration); ... ``` ### Managed identity support for ACR
-When you access containers stored in [Azure Container Registry](https://azure.microsoft.com/services/container-registry), either a username/password or a managed identity can be used to authenticate with the service. To use a managed identity, first ensure that the identity has been [assigned to the pool](managed-identity-pools.md) and that the identity has the `AcrPull` role assigned for the container registry you wish to access. Then, simply tell Batch which identity to use when authenticating with ACR.
+When you access containers stored in [Azure Container Registry](https://azure.microsoft.com/services/container-registry),
+either a username/password or a managed identity can be used to authenticate with the service. To use a managed identity,
+first ensure that the identity has been [assigned to the pool](managed-identity-pools.md) and that the identity has the
+`AcrPull` role assigned for the container registry you wish to access. Then, specify which identity Batch should use
+when authenticating with ACR.
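For example, the role assignment might be granted with the Azure CLI; a hedged sketch where the registry, resource group, and identity names are placeholders:

```bash
# Grant the pool's user-assigned managed identity pull access to the registry.
# The registry, resource group, and identity names below are placeholders.
acrId=$(az acr show --name myregistry --query id --output tsv)
principalId=$(az identity show --resource-group myResourceGroup --name myPoolIdentity --query principalId --output tsv)
az role assignment create --assignee "$principalId" --role AcrPull --scope "$acrId"
```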
```csharp ContainerRegistry containerRegistry = new ContainerRegistry(
containerConfig.ContainerRegistries = new List<ContainerRegistry> { containerReg
// VM configuration VirtualMachineConfiguration virtualMachineConfiguration = new VirtualMachineConfiguration( imageReference: imageReference,
- nodeAgentSkuId: "batch.node.ubuntu 20.04");
+ nodeAgentSkuId: "batch.node.ubuntu 22.04");
virtualMachineConfiguration.ContainerConfiguration = containerConfig; // Create pool CloudPool pool = batchClient.PoolOperations.CreatePool( poolId: poolId,
- targetDedicatedComputeNodes: 4,
- virtualMachineSize: "Standard_NC6",
+ targetDedicatedComputeNodes: 2,
+ virtualMachineSize: "Standard_NC6S_V3",
virtualMachineConfiguration: virtualMachineConfiguration); ... ```
CloudPool pool = batchClient.PoolOperations.CreatePool(
To run a container task on a container-enabled pool, specify container-specific settings. Settings include the image to use, registry, and container run options. -- Use the `ContainerSettings` property of the task classes to configure container-specific settings. These settings are defined by the [TaskContainerSettings](/dotnet/api/microsoft.azure.batch.taskcontainersettings) class. Note that the `--rm` container option doesn't require an additional `--runtime` option since it's taken care of by Batch.
+- Use the `ContainerSettings` property of the task classes to configure container-specific settings. These settings are defined by the [TaskContainerSettings](/dotnet/api/microsoft.azure.batch.taskcontainersettings) class. The `--rm` container option doesn't require an additional `--runtime` option since it's taken care of by Batch.
- If you run tasks on container images, the [cloud task](/dotnet/api/microsoft.azure.batch.cloudtask) and [job manager task](/dotnet/api/microsoft.azure.batch.cloudjob.jobmanagertask) require container settings. However, the [start task](/dotnet/api/microsoft.azure.batch.starttask), [job preparation task](/dotnet/api/microsoft.azure.batch.cloudjob.jobpreparationtask), and [job release task](/dotnet/api/microsoft.azure.batch.cloudjob.jobreleasetask) don't require container settings (that is, they can run within a container context or directly on the node). -- For Windows, tasks must be run with [ElevationLevel](/rest/api/batchservice/task/add#elevationlevel) set to `admin`.- - For Linux, Batch maps the user/group permission to the container. If access to any folder within the container requires Administrator permission, you may need to run the task as pool scope with admin elevation level. This ensures that Batch runs the task as root in the container context. Otherwise, a non-admin user might not have access to those folders. - For container pools with GPU-enabled hardware, Batch automatically enables GPU for container tasks, so you shouldn't include the `ΓÇôgpus` argument.
Optional [ContainerRunOptions](/dotnet/api/microsoft.azure.batch.taskcontainerse
### Container task working directory
-A Batch container task executes in a working directory in the container that's very similar to the directory that Batch sets up for a regular (non-container) task. Note that this working directory is different from the [WORKDIR](https://docs.docker.com/engine/reference/builder/#workdir) if configured in the image, or the default container working directory (`C:\` on a Windows container, or `/` on a Linux container).
+A Batch container task executes in a working directory in the container that's similar to the directory that Batch sets up for a regular (non-container) task. This working directory is different from the [WORKDIR](https://docs.docker.com/engine/reference/builder/#workdir) if configured in the image, or the default container working directory (`C:\` on a Windows container, or `/` on a Linux container).
For a Batch container task:
For a Batch container task:
- All task environment variables are mapped into the container. - The task working directory `AZ_BATCH_TASK_WORKING_DIR` on the node is set the same as for a regular task and mapped into the container.
+> [!IMPORTANT]
+> For Windows container pools on VM families with ephemeral disks, the entire ephemeral disk is mapped to container space
+> due to Windows container limitations.
+ These mappings allow you to work with container tasks in much the same way as non-container tasks. For example, install applications using application packages, access resource files from Azure Storage, use task environment settings, and persist task output files after the container stops. ### Troubleshoot container tasks
communication-services Record Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/record-calls.md
zone_pivot_groups: acs-plat-web-ios-android-windows
::: zone pivot="platform-web" [!INCLUDE [Record Calls Client-side JavaScript](./includes/record-calls/record-calls-web.md)] ::: zone-end ::: zone pivot="platform-android"
container-registry Container Registry Java Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-java-quickstart.md
Finally, you'll update your project configuration and use the command prompt to
</properties> ```
-1. Update the `<plugins>` collection in the *pom.xml* file so that the `<plugin>` element contains and an entry for the `jib-maven-plugin`, as shown in the following example. Note that we are using a base image from the Microsoft Container Registry (MCR): `mcr.microsoft.com/java/jdk:8-zulu-alpine`, which contains an officially supported JDK for Azure. For other MCR base images with officially supported JDKs, see [Install the Microsoft Build of OpenJDK.](/java/openjdk/install)
+1. Update the `<plugins>` collection in the *pom.xml* file so that the `<plugin>` element contains an entry for the `jib-maven-plugin`, as shown in the following example. Note that we are using a base image from the Microsoft Container Registry (MCR): `mcr.microsoft.com/openjdk/jdk:11-ubuntu`, which contains an officially supported JDK for Azure. For other MCR base images with officially supported JDKs, see [Install the Microsoft Build of OpenJDK](/java/openjdk/install).
```xml <plugin>
Finally, you'll update your project configuration and use the command prompt to
<version>${jib-maven-plugin.version}</version> <configuration> <from>
- <image>mcr.microsoft.com/java/jdk:8-zulu-alpine</image>
+ <image>mcr.microsoft.com/openjdk/jdk:11-ubuntu</image>
</from> <to> <image>${docker.image.prefix}/${project.artifactId}</image>
container-registry Container Registry Tutorial Sign Trusted Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-sign-trusted-ca.md
Here are the requirements for certificates issued by a CA:
- The `exportable` property must be set to `false`. - Select a supported key type and size from the [Notary Project specification](https://github.com/notaryproject/specifications/blob/v1.0.0/specs/signature-specification.md#algorithm-selection).
+> [!IMPORTANT]
+> To ensure successful integration with [Image Integrity](/azure/aks/image-integrity), the content type of the certificate should be set to PEM.
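For example, a hedged sketch of requesting such a certificate with the Azure CLI, where the vault and certificate names are placeholders; edit the policy file to reference your configured CA issuer and set the PEM content type before running the create command:

```bash
# Start from the default certificate policy and adjust it for a CA-issued PEM certificate.
# Vault and certificate names below are placeholders.
az keyvault certificate get-default-policy > policy.json
# In policy.json, set issuerParameters.name to your configured CA issuer and
# secretProperties.contentType to "application/x-pem-file" before creating the certificate.
az keyvault certificate create --vault-name myakv --name my-signing-cert --policy @policy.json
```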
+ > [!NOTE] > This guide uses version 1.0.1 of the AKV plugin. Prior versions of the plugin had a limitation that required a specific certificate order in a certificate chain. Version 1.0.1 of the plugin does not have this limitation so it is recommended that you use version 1.0.1 or later.
To import the certificate:
See [Use Image Integrity to validate signed images before deploying them to your Azure Kubernetes Service (AKS) clusters (Preview)](/azure/aks/image-integrity?tabs=azure-cli) and [Ratify on Azure](https://ratify.dev/docs/1.0/quickstarts/ratify-on-azure/) to get started into verifying and auditing signed images before deploying them on AKS.
-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
cost-management-billing Discount Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/discount-application.md
Previously updated : 11/17/2023 Last updated : 01/12/2024 # How saving plan discount is applied Azure savings plans save you money when you have consistent usage of Azure compute resources. An Azure savings plan can help you save money by allowing you to commit to a fixed hourly spend on compute services for one-year or three-year terms. The savings can significantly reduce your resource costs by up to 65% from pay-as-you-go prices. Discount rates per meter vary by commitment term (1-year or 3-year), not commitment amount.
-Each hour with savings plan, your eligible compute usage is discounted until you reach your commitment amount ΓÇô subsequent usage after you reach your commitment amount is priced at pay-as-you-go rates. To be eligible for a savings plan benefit, the usage must be generated by a resource within the savings plan's scope. Each hour's benefit is _use-it-or-lose-it_, and can't be rolled over to another hour.
+Each hour with a savings plan, your eligible compute usage is discounted until you reach your commitment amount; usage beyond your commitment amount is priced at pay-as-you-go rates. A resource within the savings plan's scope must generate the usage to be eligible for a savings plan benefit. Each hour's benefit is _use-it-or-lose-it_, and can't be rolled over to another hour.
The benefit is first applied to the product that has the greatest savings plan discount when compared to the equivalent pay-as-you-go rate (see your price list for savings plan pricing). The application prioritization is done to ensure that you receive the maximum benefit from your savings plan investment. We multiply the savings plan rate by that product's usage and deduct the result from the savings plan commitment. The process repeats until the commitment is exhausted (or until there's no more usage to consume the benefit). A savings plan discount only applies to resources associated with Enterprise Agreement, Microsoft Partner Agreement, and Microsoft Customer Agreements. Resources that run in a subscription with other offer types don't receive the discount.
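As a simplified, hedged illustration of that hourly application, where every rate and the commitment amount are hypothetical numbers chosen only for the arithmetic:

```bash
# Hypothetical hourly benefit application; every number below is made up for illustration.
# Hourly commitment: $2.00. Eligible usage this hour: 4 VM hours at a $1.00 pay-as-you-go
# rate, where the savings plan rate for that VM is $0.65 per hour.
awk 'BEGIN {
  commitment = 2.00; sp_rate = 0.65; payg_rate = 1.00; usage_hours = 4
  needed   = sp_rate * usage_hours                        # $2.60 at savings plan rates
  covered  = (needed < commitment) ? needed : commitment  # $2.00 fits within the commitment
  leftover = (needed - covered) / sp_rate                 # ~0.92 VM hours left uncovered
  printf "covered by the plan: $%.2f, billed pay-as-you-go: $%.2f\n", covered, leftover * payg_rate
}'
```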
+Here's a video that explains how an Azure savings plan is applied to the compute environment.
+
+>[!VIDEO https://www.youtube.com/embed/AZOyh1rl3kU]
+ ## Savings plans and VM reservations
-If you have both dynamic and stable workloads, you likely will have both Azure savings plans and VM reservations. Since reservation benefits are more restrictive than savings plans, and usually have greater discounts, Azure applies reservation benefits first.
+If you have both dynamic and stable workloads, you likely have both Azure savings plans and VM reservations. Since reservation benefits are more restrictive than savings plans, and usually have greater discounts, Azure applies reservation benefits first.
For example, VM *X* has the highest savings plan discount of all savings plan-eligible resources you used in a particular hour. If you have an available VM reservation that's compatible with *X*, the reservation is consumed instead of the savings plan. The approach reduces the possibility of waste and it ensures that you're always getting the best benefit. ## Savings plan and Azure consumption discounts
-In most situations, an Azure savings plan provides the best combination of flexibility and pricing. If you're operating under an Azure consumption discount (ACD), in rare occasions, you may have some pay-as-you-go rates that are lower than the savings plan rate. In these cases, Azure uses the lower of the two rates.
+In most situations, an Azure savings plan provides the best combination of flexibility and pricing. If you're operating under an Azure consumption discount (ACD), in rare occasions, you might have some pay-as-you-go rates that are lower than the savings plan rate. In these cases, Azure uses the lower of the two rates.
For example, VM *X* has the highest savings plan discount of all savings plan-eligible resources you used in a particular hour. If you have an ACD rate that is lower than the savings plan rate, the ACD rate is applied to your hourly usage. The result is decremented from your hourly commitment. The approach ensures you always get the best available rate.
For example, VM *X* has the highest savings plan discount of all savings plan-el
With an Azure savings plan, you get significant and flexible discounts off your pay-as-you-go rates in exchange for a one or three-year spend commitment. When you use an Azure resource, usage details are periodically reported to the Azure billing system. The billing system is tasked with quickly applying your savings plan in the most beneficial manner possible. The plan benefits are applied to usage that has the largest discount percentage first. For the application to be most effective, the billing system needs visibility to your usage in a timely manner.
-The Azure savings plan benefit application operates under a best fit benefit model. When your benefit application is evaluated for a given hour, the billing system incorporates usage arriving up to 48 hours after the given hour. During the sliding 48-hour window, you may see changes to charges, including the possibility of savings plan utilization that's greater than 100%. The situation happens because the system is constantly working to provide the best possible benefit application. Keep the 48-hour window in mind when you inspect your usage.
+The Azure savings plan benefit application operates under a best fit benefit model. When your benefit application is evaluated for a given hour, the billing system incorporates usage arriving up to 48 hours after the given hour. During the sliding 48-hour window, you might see changes to charges, including the possibility of savings plan utilization that's greater than 100%. The situation happens because the system is constantly working to provide the best possible benefit application. Keep the 48-hour window in mind when you inspect your usage.
## Utilize multiple savings plans
data-factory Delete Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/delete-activity.md
Dataset for data destination used by copy activity.
You can also get the template to move files from [here](solution-template-move-files.md).
-## Known limitation
+## Known limitations
- Delete activity doesn't support deleting list of folders described by wildcard.
defender-for-cloud Agentless Malware Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-malware-scanning.md
+
+ Title: Protect servers with agentless malware scanning
+description: Learn how agentless malware scanning in Defender for Cloud can protect your virtual machines from malware.
++++ Last updated : 01/16/2024+++
+# Protect servers with agentless malware scanning
+
+Microsoft Defender for Cloud's Defender for Servers plan 2 supports an agentless malware scanning capability that scans for and detects malware and viruses. The scanner is available for Azure virtual machines (VMs), AWS EC2 instances, and GCP VM instances.
+
+Agentless malware scanning provides:
+
+- Up-to-date and comprehensive malware detection capabilities that utilize the [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows?view=o365-worldwide) engine and the [cloud protection](/microsoft-365/security/defender-endpoint/cloud-protection-microsoft-defender-antivirus?view=o365-worldwide) signature feed, which is backed by Microsoft's threat intelligence feeds.
+
+- Quick and full scans that use heuristic and signature-based threat detection.
+
+- Security alerts that are generated when malware is detected. These alerts provide extra details and context for investigations, and are sent to both the Defender for Cloud Alerts page and Defender XDR.
+
+> [!IMPORTANT]
+> Agentless malware scanning is only available through Defender for Servers plan 2 with agentless scanning enabled.
+
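For example, a minimal, hedged sketch of enabling the required plan at subscription scope with the Azure CLI; turning on the agentless scanning setting itself is handled in the plan's settings and isn't shown here:

```bash
# Enable Defender for Servers Plan 2 on the current subscription.
az security pricing create --name VirtualMachines --tier Standard --subplan P2

# Confirm the resulting plan configuration.
az security pricing show --name VirtualMachines
```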
+## Agentless malware detection
+
+Agentless malware scanning offers the following benefits to both protected and unprotected machines:
+
+- **Improved coverage** - If a machine doesn't have an antivirus solution enabled, the agentless detector scans that machine to detect malicious activity.
+
+- **Detect potential threats** - The agentless scanner scans all files and folders including any files or folders that are excluded from the agent-based antivirus scans, without having an effect on the performance of the machine.
+
+You can learn more about [agentless machine scanning](concept-agentless-data-collection.md) and how to [enable agentless scanning for VMs](enable-agentless-scanning-vms.md).
+
+> [!IMPORTANT]
+> Security alerts appear in the portal only when threats are detected in your environment. If you don't have any alerts, it might be because there are no threats in your environment. You can [test to see if the agentless malware scanning capability has been properly onboarded and is reporting to Defender for Cloud](enable-agentless-scanning-vms.md#test-the-agentless-malware-scanners-deployment).
+
+On the Security alerts page, you can [manage and respond to security alerts](managing-and-responding-alerts.md). You can also [review the agentless malware scanner's results](managing-and-responding-alerts.md#review-the-agentless-scans-results). Security alerts can also be [exported to Sentinel](export-to-siem.md).
++
+## Next steps
+
+Learn more about how to [Enable agentless scanning for VMs](enable-agentless-scanning-vms.md).
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Title: Learn about agentless scanning for VMs
+ Title: Agentless machine scanning
description: Learn how Defender for Cloud can gather information about your multicloud compute resources without installing an agent on your machines. Previously updated : 08/15/2023- Last updated : 12/27/2023+
-# Learn about agentless scanning
+# Agentless machine scanning
-Microsoft Defender for Cloud maximizes coverage on OS posture issues and extends beyond the reach of agent-based assessments. With agentless scanning for VMs, you can get frictionless, wide, and instant visibility on actionable posture issues without installed agents, network connectivity requirements, or machine performance impact.
+Microsoft Defender for Cloud improves compute posture for Azure, AWS and GCP environments with machine scanning. For requirements and support, see the [compute support matrix in Defender for Cloud](support-matrix-defender-for-servers.md).
-Agentless scanning for VMs provides vulnerability assessment and software inventory, both powered by Microsoft Defender Vulnerability Management, in Azure and Amazon AWS environments. Agentless scanning is available in both [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) and [Defender for Servers P2](defender-for-servers-introduction.md).
+Agentless scanning for virtual machines (VM) provides:
+
+- Broad, frictionless visibility into your software inventory using Microsoft Defender Vulnerability Management.
+- Deep analysis of operating system configuration and other machine meta data.
+- [Vulnerability assessment](enable-agentless-scanning-vms.md) using Defender Vulnerability Management.
+- [Secret scanning](secret-scanning.md) to locate plain text secrets in your compute environment.
+- Threat detection with [agentless malware scanning](agentless-malware-scanning.md), using [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows?view=o365-worldwide).
+
+Agentless scanning helps you identify actionable posture issues without the need for installed agents, network connectivity, or any effect on machine performance. Agentless scanning is available through both the [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) plan and the [Defender for Servers P2](plan-defender-for-servers-select-plan.md#plan-features) plan.
## Availability
Agentless scanning for VMs provides vulnerability assessment and software invent
||| |Release state:| GA | |Pricing:|Requires either [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)|
-| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management)<br />:::image type="icon" source="./media/icons/yes-icon.png":::Secrets scanning |
+| Supported use cases:| :::image type="icon" source="./medi) **Only available with Defender for Servers plan 2**|
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects | | Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux | | Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs)<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Compute instances<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Instance groups (managed and unmanaged) | | Encryption: | **Azure**<br>:::image type="icon" source="./medi) with platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô other scenarios using platform-managed keys (PMK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Encrypted ΓÇô customer-managed keys (CMK)<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Unencrypted<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - PMK<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Encrypted - CMK<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Google-managed encryption key<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Customer-managed encryption key (CMEK)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Customer-supplied encryption key (CSEK) |
-## How agentless scanning for VMs works
+## How agentless scanning works
-While agent-based methods use OS APIs in runtime to continuously collect security related data, agentless scanning for VMs uses cloud APIs to collect data. Defender for Cloud takes snapshots of VM disks and does an out-of-band, deep analysis of the OS configuration and file system stored in the snapshot. The copied snapshot doesn't leave the original compute region of the VM, and the VM is never impacted by the scan.
+Agentless scanning for VMs uses cloud APIs to collect data, whereas agent-based methods use operating system APIs at runtime to continuously collect security-related data. Defender for Cloud takes snapshots of VM disks and performs an out-of-band, deep analysis of the operating system configuration and file system stored in the snapshot. The copied snapshot remains in the same region as the VM, and the VM isn't affected by the scan.
-After the necessary metadata is acquired from the disk, Defender for Cloud immediately deletes the copied snapshot of the disk and sends the metadata to Microsoft engines to analyze configuration gaps and potential threats. For example, in vulnerability assessment, the analysis is done by Defender Vulnerability Management. The results are displayed in Defender for Cloud, seamlessly consolidating agent-based and agentless results.
+After the necessary metadata is acquired from the copied disk, Defender for Cloud immediately deletes the copied snapshot and sends the metadata to Microsoft engines to detect configuration gaps and potential threats. For example, in vulnerability assessment, the analysis is done by Defender Vulnerability Management. The results are displayed in Defender for Cloud, which consolidates both the agent-based and agentless results on the Security alerts page.
The scanning environment where disks are analyzed is regional, volatile, isolated, and highly secure. Disk snapshots and data unrelated to the scan aren't stored longer than is necessary to collect the metadata, typically a few minutes.
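To check from the command line whether a subscription has one of the plans that provide agentless scanning, a minimal Azure CLI sketch such as the following can help. The pricing names (`VirtualMachines`, `CloudPosture`) are the standard Defender plan names; the exact output fields, such as `subPlan`, can vary by CLI and API version.

```bash
# Defender for Servers: "Standard" means a paid plan is enabled; subPlan distinguishes Plan 1 from Plan 2
az security pricing show --name VirtualMachines --query "{plan:name, tier:pricingTier, subPlan:subPlan}" -o table

# Defender CSPM (the other plan that includes agentless scanning)
az security pricing show --name CloudPosture --query "{plan:name, tier:pricingTier}" -o table
```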
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
You can choose which ticketing system to integrate. For preview, only ServiceNow
- Review the [Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) to learn about Defender CSPM pricing. -- DevOps security features under the Defender CSPM plan will remain free until March 1, 2024. Defender CSPM DevOps security features include code-to-cloud contextualization powering security explorer and attack paths and pull request annotations for Infrastructure-as-Code security findings.
+- Defender CSPM for GCP is free until January 31, 2024.
+
+- From March 1, 2024, advanced DevOps security posture capabilities will only be available through the paid Defender CSPM plan. Free foundational security posture management in Defender for Cloud will continue providing a number of Azure DevOps recommendations. Learn more about [DevOps security features](devops-support.md#azure-devops).
- For subscriptions that use both Defender CSPM and Defender for Containers plans, free vulnerability assessment is calculated based on free image scans provided via the Defender for Containers plan, as summarized [in the Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
defender-for-cloud Defender For Containers Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-architecture.md
Previously updated : 09/06/2023 Last updated : 01/10/2024 # Defender for Containers architecture
To learn more about implementation details such as supported operating systems,
## [**Azure (AKS)**](#tab/defender-for-container-arch-aks)
-### Architecture diagram of Defender for Cloud and AKS clusters<a name="jit-asc"></a>
+### Architecture diagram of Defender for Cloud and AKS clusters
When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, the collection of audit log data is agentless and collected automatically through Azure infrastructure with no additional cost or configuration considerations. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers:
When Defender for Cloud protects a cluster hosted in Azure Kubernetes Service, t
| microsoft-defender-collector-misc-* | kube-system | [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) | A set of containers that focus on collecting inventory and security events from the Kubernetes environment that aren't bounded to a specific node. | N/A | memory: 64Mi <br> <br>cpu: 60m | No | | microsoft-defender-publisher-ds-* | kube-system | [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) | Publish the collected data to Microsoft Defender for Containers backend service where the data will be processed for and analyzed. | N/A | memory: 200Mi <br> <br> cpu: 60m | Https 443 <br> <br> Learn more about the [outbound access prerequisites](../aks/outbound-rules-control-egress.md#microsoft-defender-for-containers) |
-\* Resource limits aren't configurable; Learn more about [Kubernetes resources limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes)
+\* Resource limits aren't configurable; Learn more about [Kubernetes resources limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#resource-units-in-kubernetes).
+
+### How does agentless discovery for Kubernetes in Azure work?
+
+The discovery process is based on snapshots taken at intervals:
++
+When you enable the agentless discovery for Kubernetes extension, the following process occurs:
+
+- **Create**:
+ - If the extension is enabled from Defender CSPM, Defender for Cloud creates an identity in customer environments called `CloudPosture/securityOperator/DefenderCSPMSecurityOperator`.
+ - If the extension is enabled from Defender for Containers, Defender for Cloud creates an identity in customer environments called `CloudPosture/securityOperator/DefenderForContainersSecurityOperator`.
+- **Assign**: Defender for Cloud assigns a built-in role called **Kubernetes Agentless Operator** to that identity on subscription scope. The role contains the following permissions:
+
+ - AKS read (Microsoft.ContainerService/managedClusters/read)
+ - AKS Trusted Access with the following permissions:
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/write
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/read
+ - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/delete
+
+ Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).
+
+- **Discover**: Using the system assigned identity, Defender for Cloud performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.
+- **Bind**: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation between the created identity and the Kubernetes role *Microsoft.Security/pricings/microsoft-defender-operator*. The role is visible via API and gives Defender for Cloud data plane read permission inside the cluster.
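To spot-check the artifacts this process creates, you can use a minimal Azure CLI sketch like the following. It assumes placeholder resource names, that the `az aks trustedaccess` commands are available in your CLI version, and that the role assignment keeps the **Kubernetes Agentless Operator** name described above.

```bash
# Look for the Kubernetes Agentless Operator role assignment created at subscription scope
az role assignment list \
  --scope "/subscriptions/<subscription-id>" \
  --query "[?roleDefinitionName=='Kubernetes Agentless Operator'].{principal:principalName, role:roleDefinitionName}" \
  -o table

# List AKS Trusted Access role bindings on a cluster to see the Defender for Cloud binding
az aks trustedaccess rolebinding list \
  --resource-group <resource-group> \
  --cluster-name <cluster-name> \
  -o table
```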
## [**On-premises / IaaS (Arc)**](#tab/defender-for-container-arch-arc)
When Defender for Cloud protects a cluster hosted in Elastic Kubernetes Service,
:::image type="content" source="./media/defender-for-containers/architecture-eks-cluster.png" alt-text="Diagram of high-level architecture of the interaction between Microsoft Defender for Containers, Amazon Web Services' EKS clusters, Azure Arc-enabled Kubernetes, and Azure Policy." lightbox="./media/defender-for-containers/architecture-eks-cluster.png":::
+### How does agentless discovery for Kubernetes in AWS work?
+
+The discovery process is based on snapshots taken at intervals:
+
+When you enable the agentless discovery for Kubernetes extension, the following process occurs:
+
+- **Create**:
+ - The Defender for Cloud role *MDCContainersAgentlessDiscoveryK8sRole* must be added to the *aws-auth ConfigMap* of the EKS clusters. The name can be customized.
+
+- **Assign**: Defender for Cloud assigns the *MDCContainersAgentlessDiscoveryK8sRole* role the following permissions:
+
+ - `eks:UpdateClusterConfig`
+ - `eks:DescribeCluster`
+
+- **Discover**: Using the system assigned identity, Defender for Cloud performs a discovery of the EKS clusters in your environment using API calls to the API server of EKS.
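To confirm the setup on a connected EKS cluster, a short sketch along these lines can help. It assumes `kubectl` is pointed at the cluster, the AWS CLI is configured for the account, and the role kept its default *MDCContainersAgentlessDiscoveryK8sRole* name.

```bash
# Check that the Defender for Cloud role is mapped in the aws-auth ConfigMap
kubectl -n kube-system get configmap aws-auth -o yaml | grep -A 3 "MDCContainersAgentlessDiscoveryK8sRole"

# Confirm the IAM role exists and list its inline policies
# (expected to cover eks:DescribeCluster and eks:UpdateClusterConfig)
aws iam get-role --role-name MDCContainersAgentlessDiscoveryK8sRole
aws iam list-role-policies --role-name MDCContainersAgentlessDiscoveryK8sRole
```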
+ ## [**GCP (GKE)**](#tab/defender-for-container-gke)
-### Architecture diagram of Defender for Cloud and GKE clusters<a name="jit-asc"></a>
+### Architecture diagram of Defender for Cloud and GKE clusters
When Defender for Cloud protects a cluster hosted in Google Kubernetes Engine, the collection of audit log data is agentless. These are the required components in order to receive the full protection offered by Microsoft Defender for Containers:
When Defender for Cloud protects a cluster hosted in Google Kubernetes Engine, t
-## How does agentless discovery for Kubernetes work?
-
-The discovery process is based on snapshots taken at intervals:
--
-When you enable the agentless discovery for Kubernetes extension, the following process occurs:
--- **Create**:
- - If the extension is enabled from Defender CSPM, Defender for Cloud creates an identity in customer environments called `CloudPosture/securityOperator/DefenderCSPMSecurityOperator`.
- - If the extension is enabled from Defender for Containers, Defender for Cloud creates an identity in customer environments called `CloudPosture/securityOperator/DefenderForContainersSecurityOperator`.
-- **Assign**: Defender for Cloud assigns a built-in role called **Kubernetes Agentless Operator** to that identity on subscription scope. The role contains the following permissions:-
- - AKS read (Microsoft.ContainerService/managedClusters/read)
- - AKS Trusted Access with the following permissions:
- - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/write
- - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/read
- - Microsoft.ContainerService/managedClusters/trustedAccessRoleBindings/delete
-
- Learn more about [AKS Trusted Access](/azure/aks/trusted-access-feature).
--- **Discover**: Using the system assigned identity, Defender for Cloud performs a discovery of the AKS clusters in your environment using API calls to the API server of AKS.-- **Bind**: Upon discovery of an AKS cluster, Defender for Cloud performs an AKS bind operation between the created identity and the Kubernetes role ΓÇ£Microsoft.Security/pricings/microsoft-defender-operatorΓÇ¥. The role is visible via API and gives Defender for Cloud data plane read permission inside the cluster.- ## Next steps In this overview, you learned about the architecture of container security in Microsoft Defender for Cloud. To enable the plan, see:
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
You can learn more by watching this video from the Defender for Cloud in the Fie
- **Agentless discovery for Kubernetes** - provides zero footprint, API-based discovery of your Kubernetes clusters, their configurations and deployments. -- **[Agentless vulnerability assessment](agentless-vulnerability-assessment-azure.md)** - provides vulnerability assessment for all container images, including recommendations for registry and runtime, near real-time scans of new images, daily refresh of results, exploitability insights, and more. Vulnerability information is added to the security graph for contextual risk assessment and calculation of attack paths, and hunting capabilities.
+- **[Agentless vulnerability assessment](agentless-vulnerability-assessment-azure.md)** - provides vulnerability assessment for all container images, including recommendations for registry and runtime, quick scans of new images, daily refresh of results, exploitability insights, and more. Vulnerability information is added to the security graph for contextual risk assessment and calculation of attack paths, and hunting capabilities.
- **Comprehensive inventory capabilities** - enables you to explore resources, pods, services, repositories, images and configurations through [security explorer](how-to-manage-cloud-security-explorer.md#build-a-query-with-the-cloud-security-explorer) to easily monitor and manage your assets.
You can learn more about [Kubernetes data plane hardening](kubernetes-workload-p
## Vulnerability assessment
-Defender for Containers scans the container images in Azure Container Registry (ACR) and Amazon AWS Elastic Container Registry (ECR) to provide agentless vulnerability assessment for your container images, including registry and runtime recommendations, remediation guidance, near real-time scan of new images, real-world exploit insights, exploitability insights, and more.
+Defender for Containers scans the container images in Azure Container Registry (ACR) and Amazon AWS Elastic Container Registry (ECR) to provide agentless vulnerability assessment for your container images, including registry and runtime recommendations, remediation guidance, quick scans of new images, real-world exploit insights, exploitability insights, and more.
Vulnerability information powered by Microsoft Defender Vulnerability Management is added to the [cloud security graph](concept-attack-path.md#what-is-cloud-security-graph) for contextual risk, calculation of attack paths, and hunting capabilities.
defender-for-cloud Enable Agentless Scanning Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-agentless-scanning-vms.md
Previously updated : 08/15/2023 Last updated : 01/16/2024 # Enable agentless scanning for VMs Agentless scanning provides visibility into installed software and software vulnerabilities on your workloads to extend vulnerability assessment coverage to server workloads without a vulnerability assessment agent installed.
-Learn more about [agentless scanning](concept-agentless-data-collection.md).
- Agentless vulnerability assessment uses the Microsoft Defender Vulnerability Management engine to assess vulnerabilities in the software installed on your VMs, without requiring Defender for Endpoint to be installed. Vulnerability assessment shows software inventory and vulnerability results in the same format as the agent-based assessments. ## Compatibility with agent-based vulnerability assessment solutions
When you enable agentless vulnerability assessment:
- If you select **Vulnerability assessment with Qualys or BYOL integrations** - Defender for Cloud shows the agent-based results by default. Results from the agentless scan are shown for machines that don't have an agent installed or from machines that aren't reporting findings correctly.
- If you want to change the default behavior so that Defender for Cloud always displays results from MDVM (regardless of a third-party agent solution), select the [Microsoft Defender Vulnerability Management](auto-deploy-vulnerability-assessment.md#automatically-enable-a-vulnerability-assessment-solution) setting in the vulnerability assessment solution.
+ To change the default behavior so that Defender for Cloud always displays results from MDVM (regardless of whether a third-party agent solution is in place), select the [Microsoft Defender Vulnerability Management](auto-deploy-vulnerability-assessment.md#automatically-enable-a-vulnerability-assessment-solution) setting in the vulnerability assessment solution.
## Enabling agentless scanning for machines
When you enable [Defender Cloud Security Posture Management (CSPM)](concept-clou
If you have Defender for Servers P2 already enabled and agentless scanning is turned off, you need to turn on agentless scanning manually.
+You can enable agentless scanning on:
+- [Azure](#agentless-vulnerability-assessment-on-azure)
+- [AWS](#agentless-vulnerability-assessment-on-aws)
+- [GCP](#enable-agentless-scanning-in-gcp)
+
+> [!NOTE]
+> Agentless malware scanning is only available if you have [enabled Defender for Servers plan 2](tutorial-enable-servers-plan.md#select-a-defender-for-servers-plan).
+ ### Agentless vulnerability assessment on Azure **To enable agentless vulnerability assessment on Azure**:
If you have Defender for Servers P2 already enabled and agentless scanning is tu
After you enable agentless scanning, software inventory and vulnerability information are updated automatically in Defender for Cloud.
-## Enable agentless scanning in GCP
+### Enable agentless scanning in GCP
-1. From Defender for Cloud's menu, select **Environment settings**.
+1. In Defender for Cloud, select **Environment settings**.
1. Select the relevant project or organization. 1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, select **Settings**. :::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-plan.png" alt-text="Screenshot that shows where to select the plan for GCP projects." lightbox="media/enable-agentless-scanning-vms/gcp-select-plan.png":::
-1. In the settings pane, turn on ΓÇ»**Agentless scanning**.
+1. Toggle Agentless scanning to **On**.
:::image type="content" source="media/enable-agentless-scanning-vms/gcp-select-agentless.png" alt-text="Screenshot that shows where to select agentless scanning." lightbox="media/enable-agentless-scanning-vms/gcp-select-agentless.png":::
After you enable agentless scanning, software inventory and vulnerability inform
1. Select **Next: Review and generate**. 1. Select **Update**.
+## Test the agentless malware scanner's deployment
+
+Security alerts appear in the portal only when threats are detected in your environment. If you don't have any alerts, it might be because there are no threats in your environment. You can test that the device is properly onboarded and reporting to Defender for Cloud by creating a test file.
+
+### Create a test file for Linux
+
+1. Open a terminal window on the VM.
+
+1. Execute the following command:
+
+ ```bash
+ # test string
+ TEST_STRING='$$89-barbados-dublin-damascus-notice-pulled-natural-31$$'
+
+ # File to be created
+ FILE_PATH="/tmp/virus_test_file.txt"
+
+ # Write the test string to the file
+ echo -n $TEST_STRING > $FILE_PATH
+
+ # Check if the file was created and contains the correct string
+ if [ -f "$FILE_PATH" ]; then
+ if grep -Fq "$TEST_STRING" "$FILE_PATH"; then
+ echo "Virus test file created and validated successfully."
+ else
+ echo "Virus test file does not contain the correct string."
+ fi
+ else
+ echo "Failed to create virus test file."
+ fi
+ ```
+
+The alert `MDC_Test_File malware was detected (Agentless)` will appear within 24 hours in the Defender for Cloud Alerts page and in the Defender XDR portal.
++
+### Create a test file for Windows
+
+#### Create a test file with a text document
+
+1. Create a text file on your VM.
+
+1. Paste the text `$$89-barbados-dublin-damascus-notice-pulled-natural-31$$` into the text file.
+
+ > [!IMPORTANT]
+ > Ensure that there are no extra spaces or lines in the text file.
+
+1. Save the file.
+
+1. Open the file to validate that it contains the content from step 2.
+
+The alert `MDC_Test_File malware was detected (Agentless)` will appear within 24 hours in the Defender for Cloud Alerts page and in the Defender XDR portal.
++
+#### Create a test file with PowerShell
+
+1. Open PowerShell on your VM.
+
+1. Execute the following script.
+
+ ```powershell
+ # virus test string
+ $TEST_STRING = '$$89-barbados-dublin-damascus-notice-pulled-natural-31$$'
+
+ # File to be created
+ $FILE_PATH = "C:\temp\virus_test_file.txt"
+
+ # Write the test string to the file without a trailing newline
+ [IO.File]::WriteAllText($FILE_PATH, $TEST_STRING)
+
+ # Check if the file was created and contains the correct string
+ if (Test-Path -Path $FILE_PATH) {
+     $content = [IO.File]::ReadAllText($FILE_PATH)
+     if ($content -eq $TEST_STRING) {
+         Write-Host "Test file created and validated successfully."
+     }
+     else {
+         Write-Host "Test file does not contain the correct string."
+     }
+ }
+ else {
+     Write-Host "Failed to create test file."
+ }
+ ```
+
+The alert `MDC_Test_File malware was detected (Agentless)` will appear within 24 hours in the Defender for Cloud Alerts page and in the Defender XDR portal.
++ ## Exclude machines from scanning Agentless scanning applies to all of the eligible machines in the subscription. To prevent specific machines from being scanned, you can exclude machines from agentless scanning based on your pre-existing environment tags. When Defender for Cloud performs the continuous discovery for machines, excluded machines are skipped.
-To configure machines for exclusion:
+**To configure machines for exclusion**:
-1. From Defender for Cloud's menu, open **Environment settings**.
+1. In Defender for Cloud, select **Environment settings**.
1. Select the relevant subscription or multicloud connector. 1. For either the Defender Cloud Security Posture Management (CSPM) or Defender for Servers P2 plan, select **Settings**. 1. For agentless scanning, select **Edit configuration**.
To configure machines for exclusion:
:::image type="content" source="media/enable-vulnerability-assessment-agentless/agentless-scanning-exclude-tags.png" alt-text="Screenshot of the tag and value fields for excluding machines from agentless scanning.":::
-1. Select **Save** to apply the changes.
+1. Select **Save**.
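If you prefer to apply the exclusion tag from the command line, a minimal sketch such as the following can be used. The tag name and value are placeholders and must match the tag you configured in the agentless scanning settings.

```bash
# Merge the exclusion tag onto a VM so agentless scanning skips it at the next discovery
az tag update \
  --resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>" \
  --operation Merge \
  --tags <exclusion-tag-name>=<exclusion-tag-value>
```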
## Next steps
-In this article, you learned about how to scan your machines for software vulnerabilities without installing an agent.
- Learn more about:
+- [Agentless scanning](concept-agentless-data-collection.md).
- [Vulnerability assessment with Microsoft Defender for Endpoint](deploy-vulnerability-assessment-defender-vulnerability-management.md) - [Vulnerability assessment with Qualys](deploy-vulnerability-assessment-vm.md) - [Vulnerability assessment with BYOL solutions](deploy-vulnerability-assessment-byol-vm.md)
defender-for-cloud Managing And Responding Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/managing-and-responding-alerts.md
description: This document helps you to use Microsoft Defender for Cloud capabil
Previously updated : 12/24/2023 Last updated : 01/16/2024 # Manage and respond to security alerts
To learn about the different types of alerts, see [Security alerts - a reference
For an overview of how Defender for Cloud generates alerts, see [How Microsoft Defender for Cloud detects and responds to threats](alerts-overview.md).
+## Review the agentless scan's results
+
+Results from both the agent-based and agentless scanners appear on the Security alerts page.
++
+> [!NOTE]
+> Remediating one of these alerts will not remediate the other alert until the next scan is completed.
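Alerts can also be retrieved outside the portal. A minimal sketch, assuming the `az security alert` commands in a current Azure CLI (output field names can vary by version):

```bash
# List security alerts for the subscription; agent-based and agentless detections both appear here
az security alert list -o table

# Filter the JSON output for a specific alert, for example the agentless test-file detection
az security alert list --query "[?contains(to_string(@), 'MDC_Test_File')]"
```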
+ ## See also In this document, you learned how to view security alerts. See the following pages for related material:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 01/15/2024 Last updated : 01/16/2024 # What's new in Microsoft Defender for Cloud?
If you're looking for items older than six months, you can find them in the [Arc
## January 2024
-| Date | Update |
-|--|--|
+|Date | Update |
+|-|-|
+| January 16 | [Public preview of agentless malware scanning for servers](#public-preview-of-agentless-malware-scanning-for-servers)|
| January 15 | [General availability of Defender for Cloud's integration with Microsoft Defender XDR](#general-availability-of-defender-for-clouds-integration-with-microsoft-defender-xdr) | | January 12 | [DevOps security Pull Request annotations are now enabled by default for Azure DevOps connectors](#devops-security-pull-request-annotations-are-now-enabled-by-default-for-azure-devops-connectors) | | January 4 | [Recommendations released for preview: Nine new Azure security recommendations](#recommendations-released-for-preview-nine-new-azure-security-recommendations) |
+### Public preview of agentless malware scanning for servers
+
+January 16, 2024
+
+We're announcing the release of Defender for Cloud's agentless malware detection for Azure virtual machines (VM), AWS EC2 instances and GCP VM instances, as a new feature included in [Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features).
+
+Agentless malware detection for VMs is now included in our agentless scanning platform. Agentless malware scanning utilizes the [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/microsoft-defender-antivirus-windows?view=o365-worldwide) anti-malware engine to scan and detect malicious files. Detected threats trigger security alerts directly in Defender for Cloud and Defender XDR, where they can be investigated and remediated. The agentless malware scanner complements the agent-based coverage with a second layer of threat detection, offers frictionless onboarding, and has no effect on your machine's performance.
+
+Learn more about [agentless malware scanning](agentless-malware-scanning.md) for servers and [agentless scanning for VMs](concept-agentless-data-collection.md).
+ ### General availability of Defender for Cloud's integration with Microsoft Defender XDR January 15, 2024
See the [list of security recommendations](recommendations-reference.md).
December 24, 2023
-It is now possible to manage Defender for Servers on specific resources within your subscription, giving you full control over your protection strategy. With this capability, you can configure specific resources with custom configurations that differ from the settings configured at the subscription level.
+It's now possible to manage Defender for Servers on specific resources within your subscription, giving you full control over your protection strategy. With this capability, you can configure specific resources with custom configurations that differ from the settings configured at the subscription level.
Learn more about [enabling Defender for Servers at the resource level](tutorial-enable-servers-plan.md#enable-the-plan-at-the-resource-level).
Microsoft Defender for Cloud now supports the latest [CIS Azure Security Foundat
|Date |Update | |-|-|
-| September 27 | [Data security dashboard available in public preview](#data-security-dashboard-available-in-public-preview)
+| September 27 | [Data security dashboard available in public preview](#data-security-dashboard-available-in-public-preview) |
| September 21 | [Preview release: New autoprovisioning process for SQL Server on machines plan](#preview-release-new-autoprovisioning-process-for-sql-server-on-machines-plan) | | September 20 | [GitHub Advanced Security for Azure DevOps alerts in Defender for Cloud](#github-advanced-security-for-azure-devops-alerts-in-defender-for-cloud) | | September 11 | [Exempt functionality now available for Defender for APIs recommendations](#exempt-functionality-now-available-for-defender-for-apis-recommendations) |
defender-for-cloud Support Matrix Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-servers.md
description: Review support requirements for the Defender for Servers plan in Mi
Previously updated : 12/21/2023 Last updated : 01/07/2024 # Defender for Servers support
Validate the following endpoints are configured for outbound access so that Azur
This table summarizes Azure cloud support for Defender for Servers features.
-**Feature/Plan** | **Azure** | **Azure Government** | **Microsoft Azure operated by 21Vianet**<br/>**21Vianet**
+| **Feature/Plan** | **Azure** | **Azure Government** | **Microsoft Azure operated by 21Vianet**<br/>**21Vianet** |
| | |
-[Microsoft Defender for Endpoint integration](./integration-defender-for-endpoint.md) | GA | GA | NA
-[Compliance standards](./regulatory-compliance-dashboard.md)<br/>Compliance standards might differ depending on the cloud type.| GA | GA | GA
-[Microsoft Cloud Security Benchmark recommendations for OS hardening](apply-security-baseline.md) | GA | GA | GA
-[VM vulnerability scanning-agentless](concept-agentless-data-collection.md) | GA | NA | NA
-[VM vulnerability scanning - Microsoft Defender for Endpoint sensor](deploy-vulnerability-assessment-defender-vulnerability-management.md) | GA | NA | NA
-[VM vulnerability scanning - Qualys](deploy-vulnerability-assessment-vm.md) | GA | NA | NA
-[Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA
-[File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA
-[Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA
-[Adaptive network hardening](./adaptive-network-hardening.md) | GA | NA | NA
-[Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA
-[Agentless secrets scanning](secret-scanning.md) | GA | NA | NA
+| [Microsoft Defender for Endpoint integration](./integration-defender-for-endpoint.md) | GA | GA | NA |
+| [Compliance standards](./regulatory-compliance-dashboard.md)<br/>Compliance standards might differ depending on the cloud type.| GA | GA | GA |
+| [Microsoft Cloud Security Benchmark recommendations for OS hardening](apply-security-baseline.md) | GA | GA | GA |
+| [VM vulnerability scanning-agentless](concept-agentless-data-collection.md) | GA | NA | NA |
+| [VM vulnerability scanning - Microsoft Defender for Endpoint sensor](deploy-vulnerability-assessment-defender-vulnerability-management.md) | GA | NA | NA |
+| [VM vulnerability scanning - Qualys](deploy-vulnerability-assessment-vm.md) | GA | NA | NA |
+| [Just-in-time VM access](./just-in-time-access-usage.md) | GA | GA | GA |
+| [File integrity monitoring](./file-integrity-monitoring-overview.md) | GA | GA | GA |
+| [Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA |
+| [Adaptive network hardening](./adaptive-network-hardening.md) | GA | NA | NA |
+| [Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA |
+| [Agentless secret scanning](secret-scanning.md) | GA | NA | NA |
+| [Agentless malware scanning](agentless-malware-scanning.md) | Preview | NA | NA |
## Windows machine support The following table shows feature support for Windows machines in Azure, Azure Arc, and other clouds.
-| **Feature** | **Azure VMs*<br/> **[VM Scale Sets (Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
-| | :--: | :-: | :-: |
-| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | Γ£ö</br>(on supported versions) | Γ£ö | Yes |
-| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | Γ£ö | Γ£ö | Yes |
-| [Fileless security alerts](alerts-reference.md#alerts-windows) | Γ£ö | Γ£ö | Yes |
-| [Network-based security alerts](other-threat-protections.md#network-layer) | Γ£ö | - | Yes |
-| [Just-in-time VM access](just-in-time-access-usage.md) | Γ£ö | - | Yes |
-| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | Γ£ö | Γ£ö | Yes |
-| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | Γ£ö | Γ£ö | Yes |
-| [Adaptive application controls](adaptive-application-controls.md) | Γ£ö | Γ£ö | Yes |
-| [Network map](protect-network-resources.md#network-map) | Γ£ö | - | Yes |
-| [Adaptive network hardening](adaptive-network-hardening.md) | Γ£ö | - | Yes |
-| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | Γ£ö | Γ£ö | Yes |
-| [Docker host hardening](./harden-docker-hosts.md) | - | - | Yes |
-| Missing OS patches assessment | Γ£ö | Γ£ö | Azure: No<br><br>Azure Arc-enabled: Yes |
-| Security misconfigurations assessment | Γ£ö | Γ£ö | Azure: No<br><br>Azure Arc-enabled: Yes |
-| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | Γ£ö | Γ£ö | Azure: No<br><br>Azure Arc-enabled: Yes |
-| Disk encryption assessment | Γ£ö</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | - | No |
-| Third-party vulnerability assessment (BYOL) | Γ£ö | - | No |
-| [Network security assessment](protect-network-resources.md) | Γ£ö | - | No |
+| **Feature** | **Azure VMs**<br/> **[VM Scale Sets (Flexible orchestration)](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+|--|:-:|:-:|:-:|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔</br>(on supported versions) | ✔ | Yes |
+| [Virtual machine behavioral analytics (and security alerts)](alerts-reference.md) | ✔ | ✔ | Yes |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | ✔ | ✔ | Yes |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | - | Yes |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | Yes |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | ✔ | Yes |
+| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | Yes |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | ✔ | Yes |
+| [Network map](protect-network-resources.md#network-map) | ✔ | - | Yes |
+| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | Yes |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | Yes |
+| [Docker host hardening](./harden-docker-hosts.md) | - | - | Yes |
+| Missing OS patches assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Security misconfigurations assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Disk encryption assessment | ✔</br>([supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | - | No |
+| Third-party vulnerability assessment (BYOL) | ✔ | - | No |
+| [Network security assessment](protect-network-resources.md) | ✔ | - | No |
## Linux machine support The following table shows feature support for Linux machines in Azure, Azure Arc, and other clouds.
-| **Feature** | **Azure VMs**<br/> **[VM Scale Sets (Flexible orchestration](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
-| | :--: | :-: | :-: |
-| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | Γ£ö | Γ£ö | Yes |
-| [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | Γ£ö</br>(on supported versions) | Γ£ö | Yes |
-| [Fileless security alerts](alerts-reference.md#alerts-windows) | - | - | Yes |
-| [Network-based security alerts](other-threat-protections.md#network-layer) | Γ£ö | - | Yes |
-| [Just-in-time VM access](just-in-time-access-usage.md) | Γ£ö | - | Yes |
-| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | Γ£ö | Γ£ö | Yes |
-| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | Γ£ö | Γ£ö | Yes |
-| [Adaptive application controls](adaptive-application-controls.md) | Γ£ö | Γ£ö | Yes |
-| [Network map](protect-network-resources.md#network-map) | Γ£ö | - | Yes |
-| [Adaptive network hardening](adaptive-network-hardening.md) | Γ£ö | - | Yes |
-| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | Γ£ö | Γ£ö | Yes |
-| [Docker host hardening](./harden-docker-hosts.md) | Γ£ö | Γ£ö | Yes |
-| Missing OS patches assessment | Γ£ö | Γ£ö | Azure: No<br><br>Azure Arc-enabled: Yes |
-| Security misconfigurations assessment | Γ£ö | Γ£ö | Azure: No<br><br>Azure Arc-enabled: Yes |
-| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | - | - | No |
-| Disk encryption assessment | Γ£ö</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | - | No |
-| Third-party vulnerability assessment (BYOL) | Γ£ö | - | No |
-| [Network security assessment](protect-network-resources.md) | Γ£ö | - | No |
+| **Feature** | **Azure VMs**<br/> **[VM Scale Sets (Flexible orchestration)](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration)** | **Azure Arc-enabled machines** | **Defender for Servers required** |
+|--|:-:|:-:|:-:|
+| [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) | ✔ | ✔ | Yes |
+| [Virtual machine behavioral analytics (and security alerts)](./azure-defender.md) | ✔</br>(on supported versions) | ✔ | Yes |
+| [Fileless security alerts](alerts-reference.md#alerts-windows) | - | - | Yes |
+| [Network-based security alerts](other-threat-protections.md#network-layer) | ✔ | - | Yes |
+| [Just-in-time VM access](just-in-time-access-usage.md) | ✔ | - | Yes |
+| [Integrated Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md#overview-of-the-integrated-vulnerability-scanner) | ✔ | ✔ | Yes |
+| [File Integrity Monitoring](file-integrity-monitoring-overview.md) | ✔ | ✔ | Yes |
+| [Adaptive application controls](adaptive-application-controls.md) | ✔ | ✔ | Yes |
+| [Network map](protect-network-resources.md#network-map) | ✔ | - | Yes |
+| [Adaptive network hardening](adaptive-network-hardening.md) | ✔ | - | Yes |
+| [Regulatory compliance dashboard & reports](regulatory-compliance-dashboard.md) | ✔ | ✔ | Yes |
+| [Docker host hardening](./harden-docker-hosts.md) | ✔ | ✔ | Yes |
+| Missing OS patches assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| Security misconfigurations assessment | ✔ | ✔ | Azure: No<br><br>Azure Arc-enabled: Yes |
+| [Endpoint protection assessment](supported-machines-endpoint-solutions-clouds-servers.md#supported-endpoint-protection-solutions) | - | - | No |
+| Disk encryption assessment | ✔</br>(for [supported scenarios](../virtual-machines/windows/disk-encryption-windows.md)) | - | No |
+| Third-party vulnerability assessment (BYOL) | ✔ | - | No |
+| [Network security assessment](protect-network-resources.md) | ✔ | - | No |
## Multicloud machines
The following table shows feature support for AWS and GCP machines.
| Third-party vulnerability assessment | - | - | | [Network security assessment](protect-network-resources.md) | - | - | | [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | ✔ | - |
-| [Agentless secrets scanning](secret-scanning.md) | Γ£ö | Γ£ö |
+| [Agentless secret scanning](secret-scanning.md) | ✔ | ✔ |
+| [Agentless malware scanning](agentless-malware-scanning.md) | ✔ | ✔ |
## Endpoint protection support The following table provides a matrix of supported endpoint protection solutions. The table indicates whether you can use Defender for Cloud to install each solution for you.
-| Solution | Supported platforms | Defender for Cloud installation |
-||||
-| Microsoft Defender Antivirus | Windows Server 2016 or later | No (built into OS) |
-| System Center Endpoint Protection (Microsoft Antimalware) | Windows Server 2012 R2 | Via extension |
-| Trend Micro ΓÇô Deep Security | Windows Server (all) | No |
-| Symantec v12.1.1100+ | Windows Server (all) | No |
-| McAfee v10+ | Windows Server (all) | No |
-| McAfee v10+ | Linux (GA) | No |
-| Microsoft Defender for Endpoint for Linux<sup>[1](#footnote1)</sup> | Linux (GA) | Via extension |
-| Microsoft Defender for Endpoint Unified Solution<sup>[2](#footnote2)</sup> | Windows Server 2012 R2 and Windows 2016 | Via extension |
-| Sophos V9+ | Linux (GA) | No |
+| Solution | Supported platforms | Defender for Cloud installation |
+|--|--|--|
+| Microsoft Defender Antivirus | Windows Server 2016 or later | No (built into OS) |
+| System Center Endpoint Protection (Microsoft Antimalware) | Windows Server 2012 R2 | Via extension |
+| Trend Micro – Deep Security | Windows Server (all) | No |
+| Symantec v12.1.1100+ | Windows Server (all) | No |
+| McAfee v10+ | Windows Server (all) | No |
+| McAfee v10+ | Linux (GA) | No |
+| Microsoft Defender for Endpoint for Linux<sup>[1](#footnote1)</sup> | Linux (GA) | Via extension |
+| Microsoft Defender for Endpoint Unified Solution<sup>[2](#footnote2)</sup> | Windows Server 2012 R2 and Windows 2016 | Via extension |
+| Sophos V9+ | Linux (GA) | No |
<sup><a name="footnote1"></a>1</sup> It's not enough to have Microsoft Defender for Endpoint on the Linux machine: the machine will only appear as healthy if the always-on scanning feature (also known as real-time protection (RTP)) is active. By default, the RTP feature is **disabled** to avoid clashes with other AV software.
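To check a Linux machine and, if appropriate, turn real-time protection on, you can use the `mdatp` command-line tool. A short sketch, assuming current `mdatp` syntax:

```bash
# Check whether real-time protection is currently enabled
mdatp health --field real_time_protection_enabled

# Enable always-on (real-time) protection so the machine can report as healthy
sudo mdatp config real-time-protection --value enabled
```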
defender-for-cloud Tutorial Enable Container Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-container-aws.md
Title: Protect your Amazon Web Service (AWS) accounts containers with Defender for Containers description: Learn how to enable the Defender for Containers plan on your Amazon Web Service (AWS) accounts for Microsoft Defender for Cloud. Previously updated : 06/29/2023 Last updated : 01/10/2024 # Protect your Amazon Web Service (AWS) containers with Defender for Containers
To protect your EKS clusters, you need to enable the Containers plan on the rele
> [!NOTE] > If you disable this configuration, then the `Threat detection (control plane)` feature will be disabled. Learn more about [features availability](supported-machines-endpoint-solutions-clouds-containers.md).
- - [Agentless discovery for Kubernetes](defender-for-containers-architecture.md#how-does-agentless-discovery-for-kubernetes-work) provides API-based discovery of your Kubernetes clusters. To enable the **Agentless discovery for Kubernetes** feature, toggle the setting to **On**.
+ - [Agentless discovery for Kubernetes](defender-for-containers-architecture.md#how-does-agentless-discovery-for-kubernetes-in-aws-work) provides API-based discovery of your Kubernetes clusters. To enable the **Agentless discovery for Kubernetes** feature, toggle the setting to **On**.
- The [Agentless Container Vulnerability Assessment](agentless-vulnerability-assessment-aws.md) provides vulnerability management for images stored in ECR and running images on your EKS clusters. To enable the **Agentless Container Vulnerability Assessment** feature, toggle the setting to **On**. 1. Select **Next: Review and generate**.
defender-for-cloud Tutorial Enable Servers Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-servers-plan.md
Title: Protect your servers with Defender for Servers description: Learn how to enable the Defender for Servers on your Azure subscription for Microsoft Defender for Cloud. Previously updated : 12/21/2023 Last updated : 01/16/2024 # Protect your servers with Defender for Servers
When you enable the Defender for Servers plan, you're then given the option to s
### Configure monitoring coverage
-There are three components that can be enabled and configured to provide extra protections to your environments in the Defender for Servers plans.
+Several components can be enabled and configured to provide extra protection for your environments in the Defender for Servers plans.
| Component | Description | Learn more | |:--:|:--:|:--:|
-| [Log Analytics agent/Azure Monitor agent](plan-defender-for-servers-agents.md) | Collects security-related configurations and event logs from the machine and stores the data in your Log Analytics workspace for analysis. | [Learn more](../azure-monitor/agents/log-analytics-agent.md) about the Log Analytics agent. |
-| Vulnerability assessment for machines | Enables vulnerability assessment on your Azure and hybrid machines. | [Learn more](monitoring-components.md) about how Defender for Cloud collects data. |
+| [Log Analytics agent](plan-defender-for-servers-agents.md) | Collects security-related configurations and event logs from the machine and stores the data in your Log Analytics workspace for analysis. | [Learn more](../azure-monitor/agents/log-analytics-agent.md) about the Log Analytics agent. |
+| [Vulnerability assessment for machines](deploy-vulnerability-assessment-defender-vulnerability-management.md) | Enables vulnerability assessment on your Azure and hybrid machines. | [Learn more](monitoring-components.md) about how Defender for Cloud collects data. |
+| [Endpoint protection](integration-defender-for-endpoint.md) | Enables protection powered by Microsoft Defender for Endpoint, including automatic agent deployment to your servers, and security data integration with Defender for Cloud | [Learn more](integration-defender-for-endpoint.md) about endpoint protection |
| [Agentless scanning for machines](concept-agentless-data-collection.md) | Scans your machines for installed software and vulnerabilities without relying on agents or impacting machine performance. | [Learn more](concept-agentless-data-collection.md) about agentless scanning for machines. | Toggle the corresponding switch to **On**, to enable any of these options.
-### Configure Log Analytics agent/Azure Monitor agent
+### Configure Log Analytics agent
-After enabling the Log Analytics agent/Azure Monitor agent, you'll be presented with the option to select either the Log Analytics agent or the Azure Monitor agent and which workspace should be utilized.
+After you enable the Log Analytics agent, you can select which workspace to use.
-**To configure the Log Analytics agent/Azure Monitor agent**:
+**To configure the Log Analytics agent**:
1. Select **Edit configuration**. :::image type="content" source="media/tutorial-enable-servers-plan/edit-configuration-log.png" alt-text="Screenshot that shows you where on the screen you need to select edit configuration, to edit the log analytics agent/azure monitor agent." lightbox="media/tutorial-enable-servers-plan/edit-configuration-log.png":::
-1. In the Auto provisioning configuration window, select one of the following two agent types:
-
- - **Log Analytic Agent (Default)** - Collects security-related configurations and event logs from the machine and stores the data in your Log Analytics workspace for analysis.
-
- - **Azure Monitor Agent (Preview)** - Collects security-related configurations and event logs from the machine and stores the data in your Log Analytics workspace for analysis.
+1. Select either the **Default workspace(s)** or a **Custom workspace**, depending on your needs.
:::image type="content" source="media/tutorial-enable-servers-plan/auto-provisioning-screen.png" alt-text="Screenshot of the auto provisioning configuration screen with the available options to select." lightbox="media/tutorial-enable-servers-plan/auto-provisioning-screen.png":::
-1. Select either a **Default workspace(s)** or a **Custom workspace** depending on your need.
- 1. Select **Apply**. ### Configure vulnerability assessment for machines
Vulnerability assessment for machines allows you to select between two vulnerabi
1. Select **Apply**.
+## Configure endpoint protection
+
+With Microsoft Defender for Servers, you can enable the protections provided by [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint?view=o365-worldwide) for your server resources. Defender for Endpoint includes automatic agent deployment to your servers, and security data integration with Defender for Cloud.
+
+**To configure endpoint protection**:
+
+1. Toggle the switch to **On**.
+ ### Configure agentless scanning for machines Defender for Cloud can scan your Azure machines for installed software and vulnerabilities without requiring you to install agents, without needing network connectivity, and without affecting your machine's performance.
Defender for Cloud has the ability to scan your Azure machines for installed sof
1. Select **Apply**.
+Learn more about agentless scanning and how to [enable agentless scanning](enable-agentless-scanning-vms.md) on other cloud environments.
+ ## Enable the plan at the resource level
-While our recommendation is to enable Defender for Servers on the entire Azure subscription, to protect all existing and future resources in it, there are some cases where more flexibility is required for excluding specific resources or to manage security configurations at a lower hierarchy level than subscription. Resource level enablement is available for **Azure machines** and on-premises with **Azure Arc** as part of Defender for Servers plans:
+While we recommend enabling Defender for Servers on the entire Azure subscription to protect all existing and future resources in it, in some cases you need more flexibility to exclude specific resources or to manage security configurations at a lower hierarchy level than the subscription. Resource level enablement is available for **Azure machines** and on-premises with **Azure Arc** as part of Defender for Servers plans:
- **Defender for Servers Plan 1**: you can enable / disable the plan at the resource level. - **Defender for Servers Plan 2**: you can only disable the plan at the resource level. For example, itΓÇÖs possible to enable the plan at the subscription level and disable specific resources, however itΓÇÖs not possible to enable the plan only for specific resources.
Supported resource types include:
- Azure VMs - On-premises with Azure Arc-- VMSS Flex
+- Azure Virtual Machine Scale Sets Flex
### Enablement via REST API
defender-for-cloud Understand Malware Scan Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/understand-malware-scan-results.md
Malware scanning might fail to scan a blob. When this happens, the scan result i
| SAM259210: "Scan aborted - the requested blob is protected by password." | The blob is password-protected and can't be scanned. For more information, see the [malware scanning limitations](defender-for-storage-malware-scan.md#limitations) documentation. | N/A | Yes | | SAM259211: "Scan aborted - maximum archive nesting depth exceeded." | The maximum archive nesting depth was exceeded. | Archive nesting is a known method for evading malware detection. Handle this blob with care. | Yes | | SAM259212: "Scan aborted - the requested blob data is corrupt." | The blob is corrupted, and Malware Scanning was unable to scan it. | N/A | Yes |
| SAM259213: "Scan was throttled by the service." | The scan request has temporarily exceeded the service's rate limit. This is a measure we take to manage server load and ensure optimal performance for all users. For more information, see the [malware scanning limitations](defender-for-storage-malware-scan.md#limitations) documentation. | To avoid this issue in the future, ensure your scan requests stay within the service's rate limit. If your needs exceed the current rate limit, consider distributing your scan requests more evenly over time. | No |
## Next steps
dev-box How To Configure Dev Box Hibernation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-dev-box-hibernation.md
az devcenter admin devbox-definition update
--dev-box-definition-name <devBoxDefinitionName> --dev-center-name <devCenterName> --resource-group <resourceGroupName> --hibernateSupport Enabled ```
+### Troubleshooting
+
+If you enable hibernation on a Dev Box definition, but the definition reports that hibernation couldn't be enabled:
+- We recommend using the Visual Studio for Dev Box marketplace images, either directly, or as base images for generating your custom image.
+- The Windows + OS optimizations image contains optimized power settings, and they can't be used with hibernation.
+- If you're using a custom Azure Compute Gallery image, enable hibernation on your Azure Compute Gallery image before enabling hibernation on your Dev Box definition.
+- If hibernation can't be enabled on the definition even after you enable it on your gallery image, your custom image likely has a Windows configuration that prevents hibernation.
+
+For more information, see [Settings not compatible with hibernation](how-to-configure-dev-box-hibernation.md#settings-not-compatible-with-hibernation).
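As a quick check after enabling hibernation, a sketch like the following can confirm what the definition reports. It reuses the parameter names from the update command earlier in this article and a `hibernateSupport` property name, both of which may differ slightly by CLI version:

```bash
# Show the dev box definition and check whether hibernation is reported as enabled
az devcenter admin devbox-definition show \
  --dev-box-definition-name <devBoxDefinitionName> \
  --dev-center-name <devCenterName> \
  --resource-group <resourceGroupName> \
  --query "{definition:name, hibernateSupport:hibernateSupport}" -o table
```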
+ ## Disable hibernation on a dev box definition If you have issues provisioning new VMs after you enable hibernation on a pool, you can disable hibernation on the dev box definition. You can also disable hibernation when you want to revert the setting to only shutdown dev boxes.
dev-box How To Determine Your Quota Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-determine-your-quota-usage.md
Title: How to determine your resource usage and quota
+ Title: Determine your resource usage and quota
description: Learn how to determine where the Dev Box resources for your subscription are used and if you have any spare capacity against your quota.
Last updated 01/09/2024
# Determine resource usage and quota for Microsoft Dev Box
-To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a _quota_.
+To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota. There are different types of quota related to Dev Box that you might see in the developer portal and the Azure portal, such as the Dev Box vCPU quota for dev box creation and resource limits for dev centers, network connections, and dev box definitions.
Keeping track of how your quota of virtual machine cores is being used across your subscriptions can be difficult. You might want to know what your current usage is, how much is remaining, and in what regions you have capacity. To help you understand where and how you're using your quota, Azure provides the **Usage + Quotas** page in the Azure portal.
-## Determine your Dev Box usage and quota by subscription
+For example, if dev box users encounter a vCPU quota error, such as *QuotaExceeded*, during dev box creation, you might need to increase this quota. A good place to start is to determine the current quota available.
++
+## Determine your Dev Box usage and quota by subscription in Azure portal
1. Sign in to the [Azure portal](https://portal.azure.com), and go to the subscription you want to examine.
Each subscription has its own **Usage + quotas** page that covers all the variou
- Check the default quota for each resource type by subscription type with [Microsoft Dev Box limits](../azure-resource-manager/management/azure-subscription-service-limits.md#microsoft-dev-box-limits) - Learn how to [request a quota limit increase](./how-to-request-quota-increase.md)
+- You can also check your Dev Box quota using either:
+ - REST API: [Usages - List By Location - REST API (Azure Dev Center)](/rest/api/devcenter/administrator/usages/list-by-location?tabs=HTTP)
+ - CLI: [az devcenter admin usage](/cli/azure/devcenter/admin/usage)
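For example, a minimal Azure CLI sketch that lists usage with the `az devcenter admin usage` command linked above; it assumes the *devcenter* CLI extension is installed and uses a placeholder region:

```bash
# List current Dev Box usage and limits (for example, vCPU family quotas) for a region
az devcenter admin usage list --location <location> -o table
```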
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
To create a dev box in the Microsoft Dev Box developer portal:
1. Use the dev box tile in the developer portal to track the progress of creation.
-
-
+ > [!Note]
+ > If you encounter a vCPU quota error with a *QuotaExceeded* message, ask your administrator to [request an increased quota limit](/azure/dev-box/how-to-request-quota-increase). If your admin can't increase the quota limit at this time, try selecting another pool with a region close to your location.
+
:::image type="content" source="./media/quickstart-create-dev-box/dev-box-tile-creating.png" alt-text="Screenshot of the developer portal that shows the dev box card with a status of Creating.":::
education-hub Navigate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/navigate-costs.md
Additionally, you can 'View cost details', which will send you into Microsof
<iframe width="560" height="315" src="https://www.youtube.com/embed/UrkHiUx19Po?si=EREdwKeBAGnlOeSS" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
-Read more about this tutorial [Create and Manage Budgets](https://learn.microsoft.com/azure/cost-management-billing/costs/tutorial-acm-create-budgets)
+Read more about this tutorial [Create and Manage Budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md)
## Next steps
energy-data-services How To Manage Acls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-acls.md
In this article, you learn how to add or remove ACLs from the data record in your Azure Data Manager for Energy instance.
+## Create a data group as an ACL
+Run the following curl command in Azure Cloud Shell to create a new data group, for example, data.sampledb.viewer, in a specific data partition of the Azure Data Manager for Energy instance.
+
+**Request format**
+
+```bash
+ curl --location --request POST "https://<URI>/api/entitlements/v2/groups/" \
+ --header 'data-partition-id: <data-partition>' \
+ --header 'Authorization: Bearer <access_token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "description": "<data-group-description>",
+ "name": "data.sampledb.viewer"
+ }'
+```
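To confirm the group was created, you can list the entitlement groups that your identity belongs to. This is a hedged sketch based on the same Entitlements v2 API surface used above; adjust the URI, data partition, and token to your instance:

```bash
curl --location --request GET "https://<URI>/api/entitlements/v2/groups" \
    --header 'data-partition-id: <data-partition>' \
    --header 'Authorization: Bearer <access_token>'
```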
+
+The users.data.root entitlement group is the default member of all data groups when groups are created. If you try to remove users.data.root from any data group, you get an error because this membership is enforced by OSDU.
+
+For example, if a data record has two ACLs, ACL_1 and ACL_2, and a given user is a member of ACL_1 and users.data.root, removing that user from ACL_1 doesn't revoke access; the user still has access to the data record through the users.data.root group.
+ ## Create a record with ACLs **Request format**
If you delete the last owner ACL from the data record, you get an error.
} ``` + ## Next steps After you add ACLs to the data records, you can:
governance Migrate From Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/migrate-from-azure-automation.md
configuration stored in Azure Automation by making a REST request to the service
[04]: /powershell/gallery/how-to/working-with-local-psrepositories [05]: ./how-to-create-package.md [06]: ./how-to-create-package.md#author-a-configuration
-[07]: https://learn.microsoft.com/powershell/scripting/whats-new/differences-from-windows-powershell?view=powershell-7.4
+[07]: /powershell/scripting/whats-new/differences-from-windows-powershell?view=powershell-7.4
[08]: https://github.com/Azure/azure-policy/blob/bbfc60104c2c5b7fa6dd5b784b5d4713ddd55218/samples/GuestConfiguration/package-samples/resource-modules/WindowsDscConfiguration/DscResources/WindowsDscConfiguration/WindowsDscConfiguration.psm1#L97 [09]: ./dsc-in-machine-configuration.md#special-requirements-for-get [10]: ../../azure-resource-manager/management/overview.md#terminology
governance Paginate Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/paginate-powershell.md
We'll then configure the query to return five records (VMs) at a time.
```powershell # Login first with Connect-AzAccount if not using Cloud Shell
- # Run Azure Resource Graph query Search-AzGraph -Query "Resources | join kind=leftouter
- (ResourceContainers | where type=='microsoft.resources/subscriptions' | project subscriptionName
- = name, subscriptionId) on subscriptionId | where type =~ 'Microsoft.Compute/virtualMachines' |
- project VMResourceId = id, subscriptionName, resourceGroup, name"
+ # Run Azure Resource Graph query
+ Search-AzGraph -Query "Resources | join kind=leftouter (ResourceContainers | where
+ type=='microsoft.resources/subscriptions' | project subscriptionName = name, subscriptionId) on
+ subscriptionId | where type =~ 'Microsoft.Compute/virtualMachines' | project VMResourceId = id,
+ subscriptionName, resourceGroup, name"
``` 1. Update the query to implement the `skipToken` parameter and return 5 VMs in each batch:
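As an aside, the same batching pattern is available from the Azure CLI if you prefer it over PowerShell. This is a hedged sketch that assumes the `resource-graph` extension and uses its `--first` and `--skip-token` parameters:

```bash
# Add or update the Resource Graph CLI extension (assumed prerequisite)
az extension add --name resource-graph --upgrade

# Return the first batch of 5 virtual machines
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | project id, name, resourceGroup" --first 5

# Use the skip token from the previous response to fetch the next batch
az graph query -q "Resources | where type =~ 'Microsoft.Compute/virtualMachines' | project id, name, resourceGroup" --first 5 --skip-token "<token-from-previous-response>"
```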
healthcare-apis Fhir Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-faq.md
-
+
Title: FAQ about FHIR service in Azure Health Data Services description: Get answers to frequently asked questions about FHIR service, such as the storage location of data behind FHIR APIs and version support.
Azure API for FHIR was our initial generally available product and is being reti
- [Incremental Import](configure-import-data.md) - [Autoscaling](fhir-service-autoscale.md) enabled by default
+By default, each FHIR service instance in Azure Health Data Services is limited to a storage capacity of 4TB.
+To provision a FHIR instance with storage capacity beyond 4TB, create a support request with the issue type 'Service and Subscription limit (quotas)'.
+> [!NOTE]
+> Due to an issue in the billing metrics for storage, customers who opt for more than 4TB of storage capacity won't be billed for storage until the issue is addressed.
### What's the difference between the FHIR service in Azure Health Data Services and the open-source FHIR server?
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-features-supported.md
FHIR service uses [Microsoft Entra ID](https://azure.microsoft.com/services/acti
## Service limits * **Bundle size** - Each bundle is limited to 500 items.
+* **Subscription Limit** - By default, each subscription is limited to a maximum of 10 FHIR services. The limit can be used in one or many workspaces.
+* **Storage size** - By default, each FHIR instance is limited to a storage capacity of 4TB. To provision a FHIR instance with storage capacity beyond 4TB, create a support request with the issue type 'Service and Subscription limit (quotas)'.
-* **Subscription Limit** - By default, each subscription is limited to a maximum of 10 FHIR services. The limit can be used in one or many workspaces.
## Next steps
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
Content-Type:application/fhir+json
| -- | -- | -- | -- | | type | Resource type of input file | 1..1 | A valid [FHIR resource type](https://www.hl7.org/fhir/resourcelist.html) that matches the input file. | |URL | Azure storage url of input file | 1..1 | URL value of the input file that can't be modified. |
-| etag | Etag of the input file on Azure storage used to verify the file content hasn't changed. | 0..1 | Etag value of the input file that can't be modified. |
+| etag | Etag of the input file on Azure storage; used to verify the file content has not changed after $import registration. | 0..1 | Etag value of the input file. |
**Sample body for import:**
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Azure Health Data Services is a set of managed API services based on open standa
This article provides details about the features and enhancements made to Azure Health Data Services, including the different services (FHIR service, DICOM service, and MedTech service) that seamlessly work with one another.
+## January 2024
+
+### FHIR Service
+**Storage size support in FHIR service beyond 4TB**
+
+By default, each FHIR instance is limited to a storage capacity of 4TB. To provision a FHIR instance with storage capacity beyond 4TB, create a support request with the issue type 'Service and Subscription limit (quotas)'.
+> [!NOTE]
+> Due to an issue in the billing metrics for storage, customers who opt for more than 4TB of storage capacity won't be billed for storage until the issue is addressed.
+ ## December 2023 ### Azure Health Data Services
iot-hub-device-update Device Update Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-error-codes.md
# Device Update for IoT Hub error codes
-This document provides a table of error codes for various Device Update components. It's meant to be used as a reference for users who want to parse their own error codes to diagnose and troubleshoot issues.
+This document provides a table of error codes for various Device Update components.
There are two primary client-side components that may throw error codes: the Device Update agent, and the Delivery Optimization agent. Error codes also come from the Device Update content service.
There are two primary client-side components that may throw error codes: the Dev
### ResultCode and ExtendedResultCode
-The Device Update for IoT Hub Core PnP interface reports `ResultCode` and `ExtendedResultCode`, which can be used to diagnose failures. For more information about the Device Update Core PnP interface, see [Device Update and Plug and Play](device-update-plug-and-play.md).
+The Device Update for IoT Hub Core PnP interface reports `ResultCode` and `ExtendedResultCode`, which can be used to diagnose failures. For more information about the Device Update Core PnP interface, see [Device Update and Plug and Play](device-update-plug-and-play.md). For more information about the default meanings of the Device Update agent `ResultCode` and `ExtendedResultCode` values, see the [Device Update GitHub repository](https://aka.ms/du-resultcodes).
`ResultCode` is a general status code and `ExtendedResultCode` is an integer with encoded error information.
-You'll most likely see the `ExtendedResultCode` as a signed integer in the PnP interface. To decode the `ExtendedResultCode`, convert the signed integer to
+The `ExtendedResultCode` appears as a signed integer in the PnP interface. To decode the `ExtendedResultCode`, convert the signed integer to
unsigned hex. Only the first 4 bytes of the `ExtendedResultCode` are used and are of the form `F` `FFFFFFF` where the first nibble is the **Facility Code** and the rest of the bits are the **Error Code**.
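For illustration, here's a minimal shell sketch of that conversion. The signed value used is hypothetical, not a real code from the tables that follow:

```bash
# Hypothetical signed ExtendedResultCode reported through the PnP interface
ERC=-2145386495

# Convert the signed 32-bit integer to unsigned hex, keeping only the low 4 bytes
HEX=$(printf '%08X' $(( ERC & 0xFFFFFFFF )))
echo "ExtendedResultCode: 0x$HEX"   # 0x80200001

# The first nibble is the facility code; the remaining seven nibbles are the error code
echo "Facility code: 0x${HEX:0:1}"  # 0x8
echo "Error code: 0x${HEX:1}"       # 0x0200001
```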
The DO error code can be obtained by examining the exceptions thrown in response
| 0x80D02002L | DO_E_DOWNLOAD_NO_PROGRESS | Download Job | Download of a file saw no progress within the defined period | | 0x80D02011L | DO_E_UNKNOWN_PROPERTY_ID | Download Job | SetProperty() or GetProperty() called with an unknown property ID | | 0x80D02012L | DO_E_READ_ONLY_PROPERTY | Download Job | Unable to call SetProperty() on a read-only property |
-| 0x80D02013L | DO_E_INVALID_STATE | Download Job | The requested action is not allowed in the current job state. The job might have been canceled or completed transferring. It is in a read-only state now. |
+| 0x80D02013L | DO_E_INVALID_STATE | Download Job | The requested action isn't allowed in the current job state. The job might have been canceled or completed transferring. It is in a read-only state now. |
| 0x80D02018L | DO_E_FILE_DOWNLOADSINK_UNSPECIFIED | Download Job | Unable to start a download because no download sink (either local file or stream interface) was specified | | 0x80D02200L | DO_E_DOWNLOAD_NO_URI | IDODownload Interface| The download was started without providing a URI | | 0x80D03805L | DO_E_BLOCKED_BY_NO_NETWORK | Transient conditions | Download paused due to loss of network connectivity |
The following table lists error codes pertaining to the content service componen
| Error importing update due to exceeded limit. | Cannot import additional update provider with the specified compatibility.<br><br>_or_<br><br>Cannot import additional update name with the specified compatibility.<br><br>_or_<br><br>Cannot import additional update version with the specified compatibility. | When defining [compatibility properties](import-schema.md#compatibility-object) in an import manifest, keep in mind that Device Update for IoT Hub supports a single provider and name combination for a given set of compatibility properties. If you try to use the same compatibility properties with more than one provider/name combination, you'll see these errors. To resolve this issue, make sure that all updates for a given device (as defined by compatibility properties) use the same provider and name. | | CannotProcessUpdateFile | Error processing source file. | | | ContentFileCannotDownload | Cannot download source file. | Check to make sure the URL for the update file(s) is still valid. |
-| SourceFileMalwareDetected | A known malware signature was detected in a file being imported. | Content imported into Device Update for IoT Hub is scanned for malware by several different mechanisms. If a known malware signature is identified, the import fails and a unique error message is returned. The error message contains the description of the malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware. <br><br>Once you have removed the malware from any files being imported, you can start the import process again. |
-| SourceFilePendingMalwareAnalysis | A signature was detected in a file being imported that may indicate malware is present. | Content imported into Device Update for IoT Hub is scanned for malware by several different mechanisms. The import fails if a scan signature has characteristics of malware, even if there is not an exact match to known malware. When this occurs, a unique error message is returned. The error message contains the description of the suspected malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware.<br><br>Once you've removed the malware from any files being imported, you can start the import process again. If you're certain your files are free of malware and continue to see this error, use the [Contact Microsoft Support](troubleshoot-device-update.md#contact) process. |
+| SourceFileMalwareDetected | A known malware signature was detected in a file being imported. | Device Update for IoT Hub scans imported content for malware using several different mechanisms. If a known malware signature is identified, the import fails and a unique error message is returned. The error message contains the description of the malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware. <br><br>Once you have removed the malware from any files being imported, you can start the import process again. |
+| SourceFilePendingMalwareAnalysis | A signature was detected in a file being imported that may indicate malware is present. | Device Update for IoT Hub scans imported content for malware using several different mechanisms. The import fails if a scan signature has characteristics of malware, even if there isn't an exact match to known malware. When this occurs, a unique error message is returned. The error message contains the description of the suspected malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware.<br><br>Once you've removed the malware from any files being imported, you can start the import process again. If you're certain your files are free of malware and continue to see this error, use the [Contact Microsoft Support](troubleshoot-device-update.md#contact) process. |
## Next steps
iot-hub Iot Hub Tls Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-tls-support.md
- Last updated 06/29/2021
+ Last updated 01/05/2024
TLS 1.0 and 1.1 are considered legacy and are planned for deprecation. For more
## IoT Hub's server TLS certificate
-During a TLS handshake, IoT Hub presents RSA-keyed server certificates to connecting clients. Its' root is the Baltimore Cybertrust Root CA. Because the Baltimore root is at end-of-life, we'll be migrating to a new root called DigiCert Global G2. This change will impact all devices currently connecting to IoT Hub. To prepare for this migration and for all other details, see [IoT TLS certificate update](https://aka.ms/iot-ca-updates).
+During a TLS handshake, IoT Hub presents RSA-keyed server certificates to connecting clients. In the past, these certificates were all rooted in the Baltimore CyberTrust Root CA. Because the Baltimore root is at end-of-life, we are in the process of migrating to a new root called DigiCert Global G2. This migration impacts all devices currently connecting to IoT Hub. For more information, see [IoT TLS certificate update](https://aka.ms/iot-ca-updates).
+
+Although root CA migrations are rare, for resilience in the modern security landscape you should prepare your IoT scenario for the unlikely event that a root CA is compromised or an emergency root CA migration is necessary. We strongly recommend that all devices trust the following three root CAs:
+
+* Baltimore CyberTrust root CA
+* DigiCert Global G2 root CA
+* Microsoft RSA root CA 2017
+
+For links to download these certificates, see [Azure Certificate Authority details](../security/fundamentals/azure-CA-details.md).
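For example, to see which chain your IoT hub currently presents during the handshake, you can inspect it with `openssl`; this is a hedged sketch where `<your-hub>` is a placeholder for your IoT hub name:

```bash
# Print the subject (s:) and issuer (i:) of each certificate the hub presents over MQTT/TLS (port 8883)
openssl s_client -connect <your-hub>.azure-devices.net:8883 -showcerts </dev/null 2>/dev/null \
  | grep -E ' (s|i):'
```

The issuer of the last certificate in the chain should indicate whether the hub is still chained to the Baltimore root or has already migrated to DigiCert Global G2.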
### Elliptic Curve Cryptography (ECC) server TLS certificate (preview)
-IoT Hub ECC server TLS certificate is available for public preview. While offering similar security to RSA certificates, ECC certificate validation (with ECC-only cipher suites) uses up to 40% less compute, memory, and bandwidth. These savings are important for IoT devices because of their smaller profiles and memory, and to support use cases in network bandwidth limited environments. The ECC server certificate's root is DigiCert Global Root G3.
+IoT Hub ECC server TLS certificate is available for public preview. While offering similar security to RSA certificates, ECC certificate validation (with ECC-only cipher suites) uses up to 40% less compute, memory, and bandwidth. These savings are important for IoT devices because of their smaller profiles and memory, and to support use cases in network bandwidth limited environments.
+
+We strongly recommend that all devices using ECC trust the following two root CAs:
+
+* DigiCert Global G3 root CA
+* Microsoft RSA root CA 2017
+
+For links to download these certificates, see [Azure Certificate Authority details](../security/fundamentals/azure-CA-details.md).
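As a quick sanity check that an ECDSA-only client can complete the handshake against a preview hub, you can again use `openssl`. This is a hedged sketch; the cipher names are the OpenSSL equivalents of two of the suites listed in the steps that follow:

```bash
# Attempt a TLS 1.2 handshake that offers only ECDSA cipher suites
openssl s_client -connect <your-hub>.azure-devices.net:8883 -tls1_2 \
  -cipher 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384' </dev/null
```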
To preview IoT Hub's ECC server certificate: 1. [Create a new IoT hub with preview mode on](iot-hub-preview-mode.md). 1. [Configure your client](#tls-configuration-for-sdk-and-iot-edge) to include *only* ECDSA cipher suites and *exclude* any RSA ones. These are the supported cipher suites for the ECC certificate public preview:
- - `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`
- - `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`
- - `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256`
- - `TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384`
+ * `TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256`
+ * `TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384`
+ * `TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256`
+ * `TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384`
1. Connect your client to the preview IoT hub. ## TLS 1.2 enforcement available in select regions
iot-hub Migrate Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-tls-certificate.md
Previously updated : 11/03/2023 Last updated : 01/16/2024 # Migrate IoT Hub resources to a new TLS certificate root
You should start planning now for the effects of migrating your IoT hubs to the
## Timeline
-The IoT Hub team began migrating IoT hubs in February, 2023 and the migration is complete except for hubs that have already been approved for a later migration. If your IoT hub is found to be using the Baltimore certificate without an agreement in place with the product team, your hub will be migrated without any further notice.
+The IoT Hub migration is complete except for hubs that have already been approved for an extension. If your IoT hub is found to be using the Baltimore certificate without an agreement in place with the product team, your hub will be migrated without any further notice.
-After all IoT hubs have migrated, DPS will perform its migration between January 15 and February 15, 2024.
+After all IoT hubs have migrated, DPS will perform its migration between January 15 and September 30, 2024.
For each IoT hub with an extension agreement in place, you can expect the following:
For each IoT hub with an extension agreement in place, you can expect the follow
### Request an extension
-As of August, 2023 the extension request process is closed for IoT Hub and IoT Central. If your IoT hub is found to be using the Baltimore certificate without an extension agreement in place with the product team, your hub will be migrated without any further notice.
+As of August 2023, the extension request process is closed for IoT Hub and IoT Central. If your IoT hub is found to be using the Baltimore certificate without an extension agreement in place with the product team, your hub will be migrated without any further notice.
## Required steps
To prepare for the migration, take the following steps:
It's important to have all three certificates on your devices until the IoT Hub and DPS migrations are complete. Keeping the Baltimore CyberTrust Root ensures that your devices will stay connected until the migration, and adding the DigiCert Global Root G2 ensures that your devices will seamlessly switch over and reconnect after the migration. The Microsoft RSA Root Certificate Authority 2017 helps prevent future disruptions in case the DigiCert Global Root G2 is retired unexpectedly.
+ For more information about IoT Hub's recommended certificate practices, see [TLS support](./iot-hub-tls-support.md).
+ 2. Make sure that you aren't pinning any intermediate or leaf certificates, and are using the public roots to perform TLS server validation. IoT Hub and DPS occasionally roll over their intermediate certificate authority (CA). In these instances, your devices will lose connectivity if they explicitly look for an intermediate CA or leaf certificate. However, devices that perform validation using the public roots will continue to connect regardless of any changes to the intermediate CA.
To know whether an IoT hub has been migrated or not, check the active certificat
1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
-1. Select **Certificates** in the **Security settings** section of the navigation menu.
+1. Select **Export template** in the **Automation** section of the navigation menu.
-1. If the **Certificate root** is listed as Baltimore CyberTrust, then the hub has not been migrated yet. If it is listed as DigiCert Global G2, then the migration is complete.
+1. Wait for the template to generate, then navigate to the **resources.properties.features** property in the JSON template. If **RootCertificateV2** is listed as a feature, then your hub has been migrated to DigiCert Global G2.
# [Azure CLI](#tab/cli)
Yes, IoT Central uses both IoT Hub and DPS in the backend. The TLS migration wil
You can migrate your application from the Baltimore CyberTrust Root to the DigiCert Global G2 Root on your own schedule. We recommend the following process:
-1. **Keep the Baltimore CyberTrust Root on your device until the transition period is completed on 15 February 2024** (necessary to prevent connection interruption).
+1. **Keep the Baltimore CyberTrust Root on your device until the transition period is completed on September 30, 2024** (necessary to prevent connection interruption).
2. **In addition** to the Baltimore Root, ensure the DigiCert Global G2 Root is added to your trusted root store. 3. Make sure you aren’t pinning any intermediate or leaf certificates and are using the public roots to perform TLS server validation. 4. In your IoT Central application you can find the Root Certification settings under **Settings** > **Application** > **Baltimore Cybertrust Migration**. 
Also, as part of the migration, your IoT hub might get a new IP address. If your
### When can I remove the Baltimore Cybertrust Root from my devices?
-You can remove the Baltimore root certificate once all stages of the migration are complete. If you only use IoT Hub, then you can remove the old root certificate after the IoT Hub migration is scheduled to complete on October 15, 2023. If you use Device Provisioning Service or IoT Central, then you need to keep both root certificates on your device until the DPS migration is scheduled to complete on February 15, 2024.
+You can remove the Baltimore root certificate once all stages of the migration are complete. If you only use IoT Hub, then you can remove the old root certificate after the IoT Hub migration is scheduled to complete on October 15, 2023. If you use Device Provisioning Service or IoT Central, then you need to keep both root certificates on your device until the DPS migration is scheduled to complete on September 30, 2024.
## Troubleshoot
load-balancer Configure Vm Scale Set Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-portal.md
Sign in to the [Azure portal](https://portal.azure.com).
In this section, you'll create a Virtual Machine Scale Set in the Azure portal with an existing Azure load balancer. > [!NOTE]
-> The following steps assume a virtual network named **myVNet** and a Azure load balancer named **myLoadBalancer** has been previously deployed.
+> The following steps assume a virtual network named **myVNet** and an Azure load balancer named **myLoadBalancer** have been previously deployed.
1. On the top left-hand side of the screen, select **Create a resource** and search for **Virtual Machine Scale Set** in the marketplace search.
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
Previously updated : 03/07/2023 Last updated : 01/16/2024 monikerRange: 'azureml-api-2 || azureml-api-1'
machine-learning Concept Secure Code Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-code-best-practice.md
Previously updated : 11/04/2022 Last updated : 01/16/2024 # Secure code best practices with Azure Machine Learning
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
Previously updated : 10/03/2022 Last updated : 01/16/2024 monikerRange: 'azureml-api-2 || azureml-api-1'
machine-learning Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-soft-delete.md
Previously updated : 11/07/2022 Last updated : 01/16/2024 monikerRange: 'azureml-api-2 || azureml-api-1' #Customer intent: As an IT pro, understand how to enable data protection capabilities, to protect against accidental deletion.
machine-learning Concept Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-vulnerability-management.md
description: Learn how Azure Machine Learning manages vulnerabilities in images
Previously updated : 10/20/2022 Last updated : 01/16/2024
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
Azure Private Link enables you to connect to your workspace using a private endp
* You must have an existing virtual network to create the private endpoint in.
- > [!IMPORTANT]
- > We do not recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, if you plan to connect your on premises network to the VNet, and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
+ > [!WARNING]
+ > Do not use the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network, and will result in errors if used for your VNet. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, this range conflicts if you plan to connect your on-premises network to the VNet and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
* [Disable network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md) before adding the private endpoint.
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
Once your deployment completes, your deployment is ready to serve request. One o
:::code language="json" source="~/azureml-examples-main/cli/endpoints/online/ncd/sample-request-sklearn.json"::: > [!NOTE]
-> Notice how the key `input_data` has been used in this example instead of `inputs` as used in MLflow serving. This is because Azure Machine Learning requires a different input format to be able to automatically generate the swagger contracts for the endpoints. See [Differences between models deployed in Azure Machine Learning and MLflow built-in server](how-to-deploy-mlflow-models.md#differences-between-models-deployed-in-azure-machine-learning-and-mlflow-built-in-server) for details about expected input format.
+> Notice how the key `input_data` has been used in this example instead of `inputs` as used in MLflow serving. This is because Azure Machine Learning requires a different input format to be able to automatically generate the swagger contracts for the endpoints. See [Differences between models deployed in Azure Machine Learning and MLflow built-in server](how-to-deploy-mlflow-models.md#models-deployed-in-azure-machine-learning-vs-models-deployed-in-the-mlflow-built-in-server) for details about expected input format.
To submit a request to the endpoint, you can do as follows:
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
description: Learn to deploy your MLflow model to the deployment targets supported by Azure Machine Learning. -+ Previously updated : 06/06/2022-
+reviewer: msakande
Last updated : 01/16/2024+ ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
-In this article, learn how to deploy your [MLflow](https://www.mlflow.org) model to Azure Machine Learning for both real-time and batch inference. Learn also about the different tools you can use to perform management of the deployment.
+In this article, learn about deployment of [MLflow](https://www.mlflow.org) models to Azure Machine Learning for both real-time and batch inference. Learn also about the different tools you can use to manage the deployment.
-## Deploying MLflow models vs custom models
+## Deployment of MLflow models vs. custom models
-When deploying MLflow models to Azure Machine Learning, you don't have to provide a scoring script or an environment for deployment as they're automatically generated for you. We typically refer to this functionality as no-code deployment.
+Unlike custom model deployment in Azure Machine Learning, when you deploy MLflow models to Azure Machine Learning, you don't have to provide a scoring script or an environment for deployment. Instead, Azure Machine Learning automatically generates the scoring script and environment for you. This functionality is called _no-code deployment_.
-For no-code-deployment, Azure Machine Learning:
+For no-code deployment, Azure Machine Learning:
-* Ensures all the package dependencies indicated in the MLflow model are satisfied.
-* Provides a MLflow base image/curated environment that contains the following items:
+* Ensures that all the package dependencies indicated in the MLflow model are satisfied.
+* Provides an MLflow base image or curated environment that contains the following items:
* Packages required for Azure Machine Learning to perform inference, including [`mlflow-skinny`](https://github.com/mlflow/mlflow/blob/master/README_SKINNY.rst). * A scoring script to perform inference.
For no-code-deployment, Azure Machine Learning:
### Python packages and dependencies
-Azure Machine Learning automatically generates environments to run inference of MLflow models. Those environments are built by reading the conda dependencies specified in the MLflow model. Azure Machine Learning also adds any required package to run the inferencing server, which will vary depending on the type of deployment you're doing.
+Azure Machine Learning automatically generates environments to run inference on MLflow models. To build the environments, Azure Machine Learning reads the conda dependencies that are specified in the MLflow model and adds any packages that are required to run the inferencing server. These extra packages vary, depending on your deployment type.
+
+The following _conda.yaml_ file shows an example of conda dependencies specified in an MLflow model.
__conda.yaml__ > [!WARNING]
-> MLflow performs automatic package detection when logging models, and pins their versions in the conda dependencies of the model. However, such action is performed at the best of its knowledge and there might be cases when the detection doesn't reflect your intentions or requirements. On those cases consider [logging models with a custom conda dependencies definition](how-to-log-mlflow-models.md?#logging-models-with-a-custom-signature-environment-or-samples).
+> MLflow automatically detects packages when logging a model and pins the package versions in the model's conda dependencies. However, this automatic package detection might not always reflect your intentions or requirements. In such cases, consider [logging models with a custom conda dependencies definition](how-to-log-mlflow-models.md?#logging-models-with-a-custom-signature-environment-or-samples).
-### Implications of models with signatures
+### Implications of using models with signatures
-MLflow models can include a signature that indicates the expected inputs and their types. For those models containing a signature, Azure Machine Learning enforces compliance with it, both in terms of the number of inputs and their types. This means that your data input should comply with the types indicated in the model signature. If the data can't be parsed as expected, the invocation will fail. This applies for both online and batch endpoints.
+MLflow models can include a signature that indicates the expected inputs and their types. When such models are deployed to online or batch endpoints, Azure Machine Learning enforces that the number and types of the data inputs comply with the signature. If the input data can't be parsed as expected, the model invocation will fail.
-__MLmodel__
+You can inspect an MLflow model's signature by opening the MLmodel file associated with the model. For more information on how signatures work in MLflow, see [Signatures in MLflow](concept-mlflow-models.md#model-signature).
+The following file shows the MLmodel file associated with an MLflow model.
+
+__MLmodel__
-You can inspect your model's signature by opening the MLmodel file associated with your MLflow model. For more information on how signatures work in MLflow, see [Signatures in MLflow](concept-mlflow-models.md#model-signature).
> [!TIP]
-> Signatures in MLflow models are optional but they are highly encouraged as they provide a convenient way to early detect data compatibility issues. For more information about how to log models with signatures read [Logging models with a custom signature, environment or samples](how-to-log-mlflow-models.md#logging-models-with-a-custom-signature-environment-or-samples).
+> Signatures in MLflow models are optional but highly recommended, as they provide a convenient way to detect data compatibility issues early. For more information about how to log models with signatures, see [Logging models with a custom signature, environment or samples](how-to-log-mlflow-models.md#logging-models-with-a-custom-signature-environment-or-samples).
-## Differences between models deployed in Azure Machine Learning and MLflow built-in server
+## Models deployed in Azure Machine Learning vs. models deployed in the MLflow built-in server
-MLflow includes built-in deployment tools that model developers can use to test models locally. For instance, you can run a local instance of a model registered in MLflow server registry with `mlflow models serve -m my_model` or you can use the MLflow CLI `mlflow models predict`. Azure Machine Learning online and batch endpoints run different inferencing technologies, which might have different features. Read this section to understand their differences.
+MLflow includes built-in deployment tools that model developers can use to test models locally. For instance, you can run a local instance of a model that is registered in the MLflow server registry, using `mlflow models serve -m my_model` or using the MLflow CLI `mlflow models predict`.
-### Batch vs online endpoints
+### Inferencing with batch vs. online endpoints
-Azure Machine Learning supports deploying models to both online and batch endpoints. Online Endpoints compare to [MLflow built-in server](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools) and they provide a scalable, synchronous, and lightweight way to run models for inference. Batch Endpoints, on the other hand, provide a way to run asynchronous inference over long running inferencing processes that can scale to large amounts of data. This capability isn't present by the moment in MLflow server although similar capability can be achieved [using Spark jobs](how-to-deploy-mlflow-model-spark-jobs.md).
+Azure Machine Learning supports deploying models to both online and batch endpoints. These endpoints run different inferencing technologies that can have different features.
-The rest of this section mostly applies to online endpoints but you can learn more of batch endpoint and MLflow models at [Use MLflow models in batch deployments](how-to-mlflow-batch.md).
+Online endpoints are similar to the [MLflow built-in server](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools) in that they provide a scalable, synchronous, and lightweight way to run models for inference.
+
+On the other hand, batch endpoints are capable of running asynchronous inference over long-running inferencing processes that can scale to large amounts of data. The MLflow server currently lacks this capability, although a similar capability can be achieved by [using Spark jobs](how-to-deploy-mlflow-model-spark-jobs.md). To learn more about batch endpoints and MLflow models, see [Use MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+The sections that follow focus more on MLflow models deployed to Azure Machine Learning online endpoints.
### Input formats
The rest of this section mostly applies to online endpoints but you can learn mo
| Tensor input format as JSON-serialized lists (tensors) and dictionary of lists (named tensors) | **&check;** | **&check;** | | Tensor input formatted as in TF Serving's API | **&check;** | |
-> [!NOTE]
-> - <sup>1</sup> We suggest you to explore batch inference for processing files. See [Deploy MLflow models to Batch Endpoints](how-to-mlflow-batch.md).
+<sup>1</sup> Consider using batch inferencing to process files. For more information, see [Deploy MLflow models to batch endpoints](how-to-mlflow-batch.md).
### Input structure
-Regardless of the input type used, Azure Machine Learning requires inputs to be provided in a JSON payload, within a dictionary key `input_data`. The following section shows different payload examples and the differences between MLflow built-in server and Azure Machine Learning inferencing server.
-
-> [!WARNING]
-> Note that such key is not required when serving models using the command `mlflow models serve` and hence payloads can't be used interchangeably.
+Regardless of the input type used, Azure Machine Learning requires you to provide inputs in a JSON payload, within the dictionary key `input_data`. Because this key isn't required when using the command `mlflow models serve` to serve models, payloads can't be used interchangeably for Azure Machine Learning online endpoints and the MLflow built-in server.
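For instance, here's a hedged sketch of how the two invocations differ for the same data, assuming a model served locally on port 5000 with `mlflow models serve` and an Azure Machine Learning online endpoint named `my-endpoint` (both names are illustrative, and the payloads use the MLflow 2.0+ structure):

```bash
# MLflow built-in server: the payload keys (for example, dataframe_split) go at the top level
curl http://localhost:5000/invocations \
  -H 'Content-Type: application/json' \
  -d '{"dataframe_split": {"columns": ["age", "fare"], "data": [[22, 7.25]]}}'

# Azure Machine Learning online endpoint: the same payload must be wrapped in an input_data key
# (assumes a default resource group and workspace are configured for the CLI)
echo '{"input_data": {"columns": ["age", "fare"], "data": [[22, 7.25]]}}' > request.json
az ml online-endpoint invoke --name my-endpoint --request-file request.json
```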
> [!IMPORTANT]
-> **MLflow 2.0 advisory**: Notice that the payload's structure has changed in MLflow 2.0.
+> **MLflow 2.0 advisory**: Notice that the payload's structure changed in MLflow 2.0.
+
+This section shows different payload examples and the differences for a model that is deployed in the MLflow built-in server versus the Azure Machine Learning inferencing server.
+
-#### Payload example for a JSON-serialized pandas DataFrames in the split orientation
+#### Payload example for a JSON-serialized pandas DataFrame in the split orientation
# [Azure Machine Learning](#tab/azureml)
Regardless of the input type used, Azure Machine Learning requires inputs to be
} ```
-The previous payload corresponds to MLflow server 2.0+.
+This payload corresponds to MLflow server 2.0+.
The previous payload corresponds to MLflow server 2.0+.
-For more information about MLflow built-in deployment tools, see [MLflow documentation section](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools).
+For more information about MLflow built-in deployment tools, see [Built-in deployment tools](https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools) in the MLflow documentation.
-## How to customize inference when deploying MLflow models
+## Customize inference when deploying MLflow models
-You might be used to authoring scoring scripts to customize how inference is executed for your custom models. However, when deploying MLflow models to Azure Machine Learning, the decision about how inference should be executed is done by the model builder (the person who built the model), rather than by the DevOps engineer (the person who is trying to deploy it). Each model framework might automatically apply specific inference routines.
+You might be used to authoring scoring scripts to customize how inferencing is executed for your custom models. However, when deploying MLflow models to Azure Machine Learning, the decision about how inferencing should be executed is done by the model builder (the person who built the model), rather than by the DevOps engineer (the person who is trying to deploy it). Each model framework might automatically apply specific inference routines.
-If you need to change the behavior at any point about how inference of an MLflow model is executed, you can either [change how your model is being logged in the training routine](#change-how-your-model-is-logged-during-training) or [customize inference with a scoring script at deployment time](#customize-inference-with-a-scoring-script).
+At any point, if you need to change how inference of an MLflow model is executed, you can do one of two things:
+- Change how your model is being logged in the training routine.
+- Customize inference with a scoring script at deployment time.
-### Change how your model is logged during training
-When you log a model using either `mlflow.autolog` or using `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what gets returned by the model. MLflow doesn't enforce any specific behavior in how the `predict()` function generates results. However, there are scenarios where you probably want to do some preprocessing or post-processing before and after your model is executed. On another scenarios, you might want to change what's returned like probabilities vs classes.
+#### Change how your model is logged during training
-A solution to this scenario is to implement machine learning pipelines that moves from inputs to outputs directly. For instance, [`sklearn.pipeline.Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) or [`pyspark.ml.Pipeline`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.Pipeline.html) are popular (and sometimes encourageable for performance considerations) ways to do so. Another alternative is to [customize how your model does inference using a custom model flavor](how-to-log-mlflow-models.md?#logging-custom-models).
+When you log a model, using either `mlflow.autolog` or `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what results the model returns. MLflow doesn't enforce any specific behavior for how the `predict()` function generates results.
-### Customize inference with a scoring script
+In some cases, however, you might want to do some preprocessing or post-processing before and after your model is executed. At other times, you might want to change what is returned (for example, probabilities versus classes). One solution is to implement machine learning pipelines that move from inputs to outputs directly. For example, [`sklearn.pipeline.Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) or [`pyspark.ml.Pipeline`](https://spark.apache.org/docs/latest/api/python/reference/api/pyspark.ml.Pipeline.html) are popular ways to implement pipelines, and are sometimes recommended for performance considerations. Another alternative is to [customize how your model does inferencing, by using a custom model flavor](how-to-log-mlflow-models.md?#logging-custom-models).
-Although MLflow models don't require a scoring script, you can still provide one if needed. You can use it to customize how inference is executed for MLflow models. To learn how to do it, refer to [Customizing MLflow model deployments (Online Endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) and [Customizing MLflow model deployments (Batch Endpoints)](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
+#### Customize inference with a scoring script
+
+Although MLflow models don't require a scoring script, you can still provide one, if needed. You can use the scoring script to customize how inference is executed for MLflow models. For more information on how to customize inference, see [Customizing MLflow model deployments (online endpoints)](how-to-deploy-mlflow-models-online-endpoints.md#customizing-mlflow-model-deployments) and [Customizing MLflow model deployments (batch endpoints)](how-to-mlflow-batch.md#customizing-mlflow-models-deployments-with-a-scoring-script).
> [!IMPORTANT]
-> When you opt-in to specify a scoring script for an MLflow model deployment, you also need to provide an environment for it.
+> If you choose to specify a scoring script for an MLflow model deployment, you also need to provide an environment for the deployment.
## Deployment tools
-Azure Machine Learning offers many ways to deploy MLflow models to online and batch endpoints. You can deploy models using the following tools:
+Azure Machine Learning offers many ways to deploy MLflow models to online and batch endpoints. You can deploy models using the following tools:
> [!div class="checklist"] > - MLflow SDK
-> - Azure Machine Learning CLI and Azure Machine Learning SDK for Python
+> - Azure Machine Learning CLI
+> - Azure Machine Learning SDK for Python
> - Azure Machine Learning studio
-Each workflow has different capabilities, particularly around which type of compute they can target. The following table shows them.
+Each workflow has different capabilities, particularly around which type of compute they can target. The following table shows the different capabilities.
| Scenario | MLflow SDK | Azure Machine Learning CLI/SDK | Azure Machine Learning studio | | :- | :-: | :-: | :-: |
Each workflow has different capabilities, particularly around which type of comp
| Deploy to web services (ACI/AKS) | Legacy support<sup>2</sup> | Not supported<sup>2</sup> | Not supported<sup>2</sup> | | Deploy to web services (ACI/AKS - with a scoring script) | Not supported<sup>3</sup> | Legacy support<sup>2</sup> | Legacy support<sup>2</sup> |
-> [!NOTE]
-> - <sup>1</sup> Deployment to online endpoints that are in workspaces with private link enabled requires you to [package models before deployment (preview)](how-to-package-models.md).
-> - <sup>2</sup> We recommend switching to [managed online endpoints](concept-endpoints.md) instead.
-> - <sup>3</sup> MLflow (OSS) doesn't have the concept of a scoring script and doesn't support batch execution currently.
+<sup>1</sup> Deployment to online endpoints that are in workspaces with private link enabled requires you to [package models before deployment (preview)](how-to-package-models.md).
+
+<sup>2</sup> We recommend switching to [managed online endpoints](concept-endpoints.md) instead.
+
+<sup>3</sup> MLflow (OSS) doesn't have the concept of a scoring script and doesn't support batch execution currently.
### Which deployment tool to use?
-If you're familiar with MLflow or your platform supports MLflow natively (like Azure Databricks), and you wish to continue using the same set of methods, use the MLflow SDK.
+- Use the MLflow SDK if _both_ of these conditions apply:
-However, if you're more familiar with the [Azure Machine Learning CLI v2](concept-v2.md), you want to automate deployments using automation pipelines, or you want to keep deployment configuration in a git repository; we recommend that you use the [Azure Machine Learning CLI v2](concept-v2.md).
+ - You're familiar with MLflow, or you're using a platform that supports MLflow natively (like Azure Databricks).
+ - You wish to continue using the same set of methods from MLflow.
-If you want to quickly deploy and test models trained with MLflow, you can use the [Azure Machine Learning studio](https://ml.azure.com) UI deployment.
+- Use the [Azure Machine Learning CLI v2](concept-v2.md) if _any_ of these conditions apply:
+ - You're more familiar with the [Azure Machine Learning CLI v2](concept-v2.md).
+ - You want to automate deployments, using automation pipelines.
+ - You want to keep deployment configuration in a git repository.
+- Use the [Azure Machine Learning studio](https://ml.azure.com) UI deployment if you want to quickly deploy and test models trained with MLflow.
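For reference, a minimal sketch of the CLI v2 route, assuming you've authored `endpoint.yml` and `deployment.yml` files (names are illustrative) that reference the registered MLflow model and specify the endpoint name:

```bash
# Create the endpoint, then create a deployment and route all traffic to it
# (assumes a default resource group and workspace are configured for the CLI)
az ml online-endpoint create -f endpoint.yml
az ml online-deployment create -f deployment.yml --all-traffic
```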
-## Next steps
-To learn more, review these articles:
+## Related content
- [Deploy MLflow models to online endpoints](how-to-deploy-mlflow-models-online-endpoints.md) - [Progressive rollout of MLflow models](how-to-deploy-mlflow-models-online-progressive.md)
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
serverless_compute:
Update workspace: ```azurecli
-az ml workspace update -n <workspace-name> -g <resource-group-name> -file serverlesscomputevnetsettings.yml
+az ml workspace update -n <workspace-name> -g <resource-group-name> --file serverlesscomputevnetsettings.yml
``` ```yaml
serverless_compute:
Update workspace: ```azurecli
-az ml workspace update -n <workspace-name> -g <resource-group-name> -file serverlesscomputevnetsettings.yml
+az ml workspace update -n <workspace-name> -g <resource-group-name> --file serverlesscomputevnetsettings.yml
``` ```yaml
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
In this article you learn how to enable the following workspaces resources in a
+ An existing virtual network and subnet to use with your compute resources.
- > [!IMPORTANT]
- > We do not recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, if you plan to connect your on premises network to the VNet, and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
+ > [!WARNING]
+ > Do not use the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network, and will result in errors if used for your VNet. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, this range conflicts if you plan to connect your on-premises network to the VNet and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
[!INCLUDE [network-rbac](includes/network-rbac.md)]
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
In the [customer-managed keys concepts article](concept-customer-managed-keys.md
* You can't delete Microsoft-managed resources used for customer-managed keys without also deleting your workspace. * The key vault that contains your customer-managed key must be in the same Azure subscription as the Azure Machine Learning workspace. * OS disk of machine learning compute can't be encrypted with customer-managed key, but can be encrypted with Microsoft-managed key if the workspace is created with `hbi_workspace` parameter set to `TRUE`. For more details, see [Data encryption](concept-data-encryption.md#machine-learning-compute).
-* Workspace with customer-managed key doesn't currently support v2 batch endpoint.
> [!IMPORTANT] > When using a customer-managed key, the costs for your subscription will be higher because of the additional resources in your subscription. To estimate the cost, use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
machine-learning How To Troubleshoot Protobuf Descriptor Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-protobuf-descriptor-error.md
Previously updated : 11/04/2022 Last updated : 01/16/2024 monikerRange: 'azureml-api-1 || azureml-api-2'
machine-learning How To Workspace Diagnostic Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-workspace-diagnostic-api.md
Previously updated : 09/14/2022 Last updated : 01/16/2024 monikerRange: 'azureml-api-2 || azureml-api-1'
machine-learning Reference Migrate Sdk V1 Mlflow Tracking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-migrate-sdk-v1-mlflow-tracking.md
Previously updated : 05/04/2022 Last updated : 01/16/2024
machine-learning Tutorial Create Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace-vnet.md
To create a virtual network, use the following steps:
1. Look at the default __IPv4 address space__ value. In the screenshot, the value is __172.16.0.0/16__. __The value may be different for you__. While you can use a different value, the rest of the steps in this tutorial are based on the __172.16.0.0/16 value__.
- > [!IMPORTANT]
- > We do not recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, if you plan to connect your on premises network to the VNet, and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
+ > [!WARNING]
+ > Do not use the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network, and will result in errors if used for your VNet. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, this range conflicts if you plan to connect your on-premises network to the VNet and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
1. Select the __Default__ subnet and then select __Remove subnet__.
machine-learning Tutorial Enable Recurrent Materialization Run Batch Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md
Before you proceed with this tutorial, be sure to complete the first and second
1. Install the Azure Machine Learning extension.
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)]
2. Authenticate.
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)]
3. Set the default subscription.
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)]
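The three CLI cells referenced above correspond to standard Azure CLI commands. As a rough sketch (the subscription ID is a placeholder):

```bash
# Install the Azure Machine Learning CLI extension.
az extension add --name ml

# Authenticate; in a remote or notebook session, 'az login --use-device-code' may be easier.
az login

# Set the default subscription used by subsequent commands.
az account set --subscription "<your-subscription-id>"
```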
machine-learning Tutorial Get Started With Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md
Not applicable.
1. Install the Azure Machine Learning CLI extension.
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)]
1. Authenticate.
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=auth-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=auth-cli)]
1. Set the default subscription.
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=set-default-subs-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=set-default-subs-cli)]
This tutorial doesn't need explicit installation of these resources, because the
### [SDK and CLI track](#tab/SDK-and-CLI-track)
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs-cli)]
> [!NOTE] > - The default blob store for the feature store is an ADLS Gen2 container.
This tutorial doesn't need explicit installation of these resources, because the
For more information more about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md).
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=assign-aad-ds-role-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=assign-aad-ds-role-cli)]
## Prototype and develop a feature set
As a best practice, entities help enforce use of the same join key definition ac
Create an `account` entity that has the join key `accountID` of type `string`.
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)]
Use this code to register a feature set asset with the feature store. You can th
### [SDK and CLI track](#tab/SDK-and-CLI-track)
-[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)]
+[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)]
The Storage Blob Data Reader role must be assigned to your user account on the o
Execute this code cell for role assignment. The permissions might need some time to propagate.
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=grant-rbac-to-user-identity-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=grant-rbac-to-user-identity-cli)]
The Storage Blob Data Reader role must be assigned to your user account on the o
> The sample data used in this notebook is small. Therefore, this parameter is set to 1 in the > featureset_asset_offline_enabled.yaml file.
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=enable-offline-mat-txns-fset-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=enable-offline-mat-txns-fset-cli)]
The Storage Blob Data Reader role must be assigned to your user account on the o
This code cell materializes data by current status *None* or *Incomplete* for the defined feature window.
- [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=backfill-txns-fset-cli)]
+ [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=backfill-txns-fset-cli)]
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-workspace-vnet.md
In this article you learn how to enable the following workspaces resources in a
+ An existing virtual network and subnet to use with your compute resources.
- > [!IMPORTANT]
- > We do not recommend using the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, if you plan to connect your on premises network to the VNet, and your on-premises network also uses the 172.16.0.0/16 range. Ultimately, it is up to __you__ to plan your network infrastructure.
+ > [!WARNING]
+ > Do not use the 172.17.0.0/16 IP address range for your VNet. This is the default subnet range used by the Docker bridge network, and using it for your VNet will result in errors. Other ranges may also conflict depending on what you want to connect to the virtual network. For example, if you plan to connect your on-premises network to the VNet and that network also uses the 172.16.0.0/16 range, the address spaces will overlap. Ultimately, it is up to __you__ to plan your network infrastructure.
[!INCLUDE [network-rbac](../includes/network-rbac.md)]
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
After the appliance is connected, it gathers configuration and performance data
Support | Details |
-**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. It is recommended that you ensure that an appliance is scoped to discover less than 600 servers running SQL to avoid scaling issues.
+**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments and on IaaS servers of other public clouds such as AWS and GCP. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. It's recommended that you scope an appliance to discover fewer than 600 servers running SQL to avoid scaling issues.
**Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
The following are sample scripts for creating a login and provisioning it with t
```sql -- Create a login to run the assessment use master;
- DECLARE @SID NVARCHAR(MAX) = N'';
- CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS;
- SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT'
- IF (ISNULL(@SID,'') != '')
- PRINT N'Created login [MYDOMAIN\MYACCOUNT] with SID = ' + @SID
- ELSE
- PRINT N'Login creation failed'
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS;
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [MYDOMAIN\MYACCOUNT] with SID = ' + @SID
+ ELSE
+ PRINT N'Login creation failed'
GO
-
- -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
- use master;
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [MYDOMAIN\MYACCOUNT] FOR LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+
+ -- Create user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
+ USE master;
+ EXECUTE sp_MSforeachdb '
+ USE [?];
+ IF (''?'' NOT IN (''tempdb'',''model''))
+ BEGIN
+ DECLARE @is_secondary_replica BIT = 0;
+ IF CAST(PARSENAME(CAST(SERVERPROPERTY(''ProductVersion'') AS VARCHAR), 4) AS INT) >= 11
+ BEGIN
+ DECLARE @innersql NVARCHAR(MAX);
+ SET @innersql = N''
+ SELECT @is_secondary_replica = IIF(
+ EXISTS (
+ SELECT 1
+ FROM sys.availability_replicas a
+ INNER JOIN sys.dm_hadr_database_replica_states b
+ ON a.replica_id = b.replica_id
+ WHERE b.is_local = 1
+ AND b.is_primary_replica = 0
+ AND a.secondary_role_allow_connections = 2
+ AND b.database_id = DB_ID()
+ ), 1, 0
+ );
+ '';
+ EXEC sp_executesql @innersql, N''@is_secondary_replica BIT OUTPUT'', @is_secondary_replica OUTPUT;
+ END
+ IF (@is_secondary_replica = 0)
+ BEGIN
+ CREATE USER [MYDOMAIN\MYACCOUNT] FOR LOGIN [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT];
+ END
+ END'
GO
-
+ -- Provide server level read-only permissions
use master;
- BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW SERVER STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW ANY DEFINITION TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- GO
-
- -- Required from SQL 2014 onwards for database connectivity.
- use master;
- BEGIN TRY GRANT CONNECT ANY DATABASE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT];
+ GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [MYDOMAIN\MYACCOUNT];
+ GRANT EXECUTE ON OBJECT::sys.xp_instance_regread TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW SERVER STATE TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW ANY DEFINITION TO [MYDOMAIN\MYACCOUNT];
GO
-
+ -- Provide msdb specific permissions
use msdb;
- BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[syscategories] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [MYDOMAIN\MYACCOUNT];
GO
-
+
-- Clean up --use master;
- -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
- -- BEGIN TRY DROP LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ -- EXECUTE sp_MSforeachdb 'USE [?]; DROP USER [MYDOMAIN\MYACCOUNT]'
+ -- DROP LOGIN [MYDOMAIN\MYACCOUNT];
--GO
- ```
+ ```
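As an optional sanity check after running the Windows authentication script above, the following hedged sketch uses `sqlcmd` with a trusted connection to list the server-level permissions effective for the provisioned account. It assumes `sqlcmd` is available, that you run it under the `MYDOMAIN\MYACCOUNT` identity, and that `<your_sql_server>` is replaced with the discovered instance name; expect to see entries such as VIEW SERVER STATE and VIEW ANY DEFINITION.

```bash
# Hypothetical verification: connect with Windows authentication (-E) and list
# the server-level permissions granted to the current login.
sqlcmd -S "<your_sql_server>" -E -d master \
  -Q "SELECT permission_name FROM fn_my_permissions(NULL, 'SERVER');"
```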
#### SQL Server Authentication ```sql
- -- Create a login to run the assessment
+ -- Create a login to run the assessment
use master;
- -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID with SQL login.
- -- After the account is created in one of the member instances, copy the SID output from the script and include
- -- this value when executing against the remaining replicas.
- -- When the SID needs to be specified, add the value to the @SID variable definition below.
- DECLARE @SID NVARCHAR(MAX) = N'';
- IF (@SID = N'')
- BEGIN
- CREATE LOGIN [evaluator]
- WITH PASSWORD = '<provide a strong password>'
- END
- ELSE
- BEGIN
- DECLARE @SQLString NVARCHAR(500) = 'CREATE LOGIN [evaluator]
- WITH PASSWORD = ''<provide a strong password>''
- , SID = '+@SID
+ -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID for the SQL login.
+ -- After the account is created in one of the members, copy the SID output from the script and include this value
+ -- when executing against the remaining replicas.
+ -- When the SID needs to be specified, add the value to the @SID variable definition below.
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ IF (@SID = N'')
+ BEGIN
+ CREATE LOGIN [evaluator]
+ WITH PASSWORD = '<provide a strong password>'
+ END
+ ELSE
+ BEGIN
+ DECLARE @SQLString NVARCHAR(500) = 'CREATE LOGIN [evaluator]
+ WITH PASSWORD = ''<provide a strong password>''
+ , SID = ' + @SID
EXEC SP_EXECUTESQL @SQLString
- END
- SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'evaluator'
- IF (ISNULL(@SID,'') != '')
- PRINT N'Created login [evaluator] with SID = '''+ @SID +'''. If this instance hosts any Always On Availability Group replica, use this SID value when executing the script against the instances hosting the other replicas'
- ELSE
- PRINT N'Login creation failed'
- GO
-
- -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
- use master;
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [evaluator] FOR LOGIN [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ END
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR(100), sid, 2) FROM sys.syslogins where name = 'evaluator'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [evaluator] with SID = '''+ @SID +'''. If this instance hosts any Always On Availability Group replica, use this SID value when executing the script against the instances hosting the other replicas'
+ ELSE
+ PRINT N'Login creation failed'
GO
-
- -- Provide server level read-only permissions
- use master;
- BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW SERVER STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW ANY DEFINITION TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+
+ -- Create user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
+ USE master;
+ EXECUTE sp_MSforeachdb '
+ USE [?];
+ IF (''?'' NOT IN (''tempdb'',''model''))
+ BEGIN
+ DECLARE @is_secondary_replica BIT = 0;
+ IF CAST(PARSENAME(CAST(SERVERPROPERTY(''ProductVersion'') AS VARCHAR), 4) AS INT) >= 11
+ BEGIN
+ DECLARE @innersql NVARCHAR(MAX);
+ SET @innersql = N''
+ SELECT @is_secondary_replica = IIF(
+ EXISTS (
+ SELECT 1
+ FROM sys.availability_replicas a
+ INNER JOIN sys.dm_hadr_database_replica_states b
+ ON a.replica_id = b.replica_id
+ WHERE b.is_local = 1
+ AND b.is_primary_replica = 0
+ AND a.secondary_role_allow_connections = 2
+ AND b.database_id = DB_ID()
+ ), 1, 0
+ );
+ '';
+ EXEC sp_executesql @innersql, N''@is_secondary_replica BIT OUTPUT'', @is_secondary_replica OUTPUT;
+ END
+
+ IF (@is_secondary_replica = 0)
+ BEGIN
+ CREATE USER [evaluator] FOR LOGIN [evaluator];
+ GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator];
+ GRANT VIEW DATABASE STATE TO [evaluator];
+ END
+ END'
GO
-
- -- Required from SQL 2014 onwards for database connectivity.
- use master;
- BEGIN TRY GRANT CONNECT ANY DATABASE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+
+ -- Provide server level read-only permissions
+ USE master;
+ GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator];
+ GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [evaluator];
+ GRANT EXECUTE ON OBJECT::sys.xp_instance_regread TO [evaluator];
+ GRANT VIEW DATABASE STATE TO [evaluator];
+ GRANT VIEW SERVER STATE TO [evaluator];
+ GRANT VIEW ANY DEFINITION TO [evaluator];
GO
-
+
-- Provide msdb specific permissions
- use msdb;
- BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ USE msdb;
+ GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[syscategories] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [evaluator];
GO
-
+
-- Clean up --use master; -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
Support | Details
**Operating systems** | All Windows and Linux versions with [Hyper-V integration services](/virtualization/hyper-v-on-windows/about/supported-guest-os) enabled. **Server requirements** | Windows servers must have PowerShell remoting enabled and PowerShell version 2.0 or later installed. <br/><br/> Linux servers must have SSH connectivity enabled and ensure that the following commands can be executed on the Linux servers: touch, chmod, cat, ps, grep, echo, sha256sum, awk, netstat, ls, sudo, dpkg, rpm, sed, getcap, which, date. **Windows server access** | A user account (local or domain) with administrator permissions on servers.
-**Linux server access** | Sudo user account with permissions to execute ls and netstat commands. If you're providing a sudo user account, ensure that you have enabled **NOPASSWD** for the account to run the required commands without prompting for a password every time sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+**Linux server access** | Sudo user account with permissions to execute ls and netstat commands. If you're providing a sudo user account, ensure that you enable **NOPASSWD** for the account to run the required commands without prompting for a password every time the sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on the /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
**Port access** | For Windows server, need access on port 5985 (HTTP) and for Linux servers, need access on port 22(TCP). **Discovery method** | Agentless dependency analysis is performed by directly connecting to the servers using the server credentials added on the appliance. <br/><br/> The appliance gathers the dependency information from Windows servers using PowerShell remoting and from Linux servers using SSH connection. <br/><br/> No agent is installed on the servers to pull dependency data.
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
For Linux servers, based on the features you want to perform, you can create a u
### Option 1 - You need a sudo user account on the servers that you want to discover. Use this account to pull configuration and performance metadata, perform software inventory (discovery of installed applications) and enable agentless dependency analysis using SSH connectivity. - You need to enable sudo access on /usr/bin/bash to execute the commands listed [here](discovered-metadata.md#linux-server-metadata). In addition to these commands, the user account also needs to have permissions to execute ls and netstat commands to perform agentless dependency analysis.-- Make sure that you have enabled **NOPASSWD** for the account to run the required commands without prompting for a password every time sudo command is invoked.
+- Make sure that you enable **NOPASSWD** for the account to run the required commands without prompting for a password every time the sudo command is invoked.
- Azure Migrate supports the following Linux OS distributions for discovery using an account with sudo access: Operating system | Versions
After the appliance is connected, it gathers configuration and performance data
Support | Details |
-**Supported servers** | supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. It is recommended that you ensure that an appliance is scoped to discover less than 600 servers running SQL to avoid scaling issues.
+**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments and on IaaS servers of other public clouds such as AWS and GCP. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. It's recommended that you scope an appliance to discover fewer than 600 servers running SQL to avoid scaling issues.
**Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
The following are sample scripts for creating a login and provisioning it with t
```sql -- Create a login to run the assessment use master;
- DECLARE @SID NVARCHAR(MAX) = N'';
- CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS;
- SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT'
- IF (ISNULL(@SID,'') != '')
- PRINT N'Created login [MYDOMAIN\MYACCOUNT] with SID = ' + @SID
- ELSE
- PRINT N'Login creation failed'
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS;
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [MYDOMAIN\MYACCOUNT] with SID = ' + @SID
+ ELSE
+ PRINT N'Login creation failed'
GO
-
- -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
- use master;
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [MYDOMAIN\MYACCOUNT] FOR LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+
+ -- Create user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
+ USE master;
+ EXECUTE sp_MSforeachdb '
+ USE [?];
+ IF (''?'' NOT IN (''tempdb'',''model''))
+ BEGIN
+ DECLARE @is_secondary_replica BIT = 0;
+ IF CAST(PARSENAME(CAST(SERVERPROPERTY(''ProductVersion'') AS VARCHAR), 4) AS INT) >= 11
+ BEGIN
+ DECLARE @innersql NVARCHAR(MAX);
+ SET @innersql = N''
+ SELECT @is_secondary_replica = IIF(
+ EXISTS (
+ SELECT 1
+ FROM sys.availability_replicas a
+ INNER JOIN sys.dm_hadr_database_replica_states b
+ ON a.replica_id = b.replica_id
+ WHERE b.is_local = 1
+ AND b.is_primary_replica = 0
+ AND a.secondary_role_allow_connections = 2
+ AND b.database_id = DB_ID()
+ ), 1, 0
+ );
+ '';
+ EXEC sp_executesql @innersql, N''@is_secondary_replica BIT OUTPUT'', @is_secondary_replica OUTPUT;
+ END
+ IF (@is_secondary_replica = 0)
+ BEGIN
+ CREATE USER [MYDOMAIN\MYACCOUNT] FOR LOGIN [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT];
+ END
+ END'
GO
-
+ -- Provide server level read-only permissions
use master;
- BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW SERVER STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW ANY DEFINITION TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- GO
-
- -- Required from SQL 2014 onwards for database connectivity.
- use master;
- BEGIN TRY GRANT CONNECT ANY DATABASE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT];
+ GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [MYDOMAIN\MYACCOUNT];
+ GRANT EXECUTE ON OBJECT::sys.xp_instance_regread TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW SERVER STATE TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW ANY DEFINITION TO [MYDOMAIN\MYACCOUNT];
GO
-
+ -- Provide msdb specific permissions
use msdb;
- BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[syscategories] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [MYDOMAIN\MYACCOUNT];
GO
-
+
-- Clean up --use master;
- -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
- -- BEGIN TRY DROP LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ -- EXECUTE sp_MSforeachdb 'USE [?]; DROP USER [MYDOMAIN\MYACCOUNT]'
+ -- DROP LOGIN [MYDOMAIN\MYACCOUNT];
--GO
- ```
+ ```
#### SQL Server Authentication ```sql
- -- Create a login to run the assessment
+ -- Create a login to run the assessment
use master;
- -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID for the SQL login.
- -- After the account is created in one of the members, copy the SID output from the script and include this value
- -- when executing against the remaining replicas.
- -- When the SID needs to be specified, add the value to the @SID variable definition below.
- DECLARE @SID NVARCHAR(MAX) = N'';
- IF (@SID = N'')
- BEGIN
- CREATE LOGIN [evaluator]
- WITH PASSWORD = '<provide a strong password>'
- END
- ELSE
- BEGIN
- DECLARE @SQLString NVARCHAR(500) = 'CREATE LOGIN [evaluator]
- WITH PASSWORD = ''<provide a strong password>''
- , SID = '+@SID
+ -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID for the SQL login.
+ -- After the account is created in one of the members, copy the SID output from the script and include this value
+ -- when executing against the remaining replicas.
+ -- When the SID needs to be specified, add the value to the @SID variable definition below.
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ IF (@SID = N'')
+ BEGIN
+ CREATE LOGIN [evaluator]
+ WITH PASSWORD = '<provide a strong password>'
+ END
+ ELSE
+ BEGIN
+ DECLARE @SQLString NVARCHAR(500) = 'CREATE LOGIN [evaluator]
+ WITH PASSWORD = ''<provide a strong password>''
+ , SID = ' + @SID
EXEC SP_EXECUTESQL @SQLString
- END
- SELECT @SID = N'0x'+CONVERT(NVARCHAR(35), sid, 2) FROM sys.syslogins where name = 'evaluator'
- IF (ISNULL(@SID,'') != '')
- PRINT N'Created login [evaluator] with SID = '''+ @SID +'''. If this instance hosts any Always On Availability Group replica, use this SID value when executing the script against the instances hosting the other replicas'
- ELSE
- PRINT N'Login creation failed'
- GO
-
- -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
- use master;
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [evaluator] FOR LOGIN [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ END
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR(100), sid, 2) FROM sys.syslogins where name = 'evaluator'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [evaluator] with SID = '''+ @SID +'''. If this instance hosts any Always On Availability Group replica, use this SID value when executing the script against the instances hosting the other replicas'
+ ELSE
+ PRINT N'Login creation failed'
GO
-
- -- Provide server level read-only permissions
- use master;
- BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW SERVER STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW ANY DEFINITION TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+
+ -- Create user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
+ USE master;
+ EXECUTE sp_MSforeachdb '
+ USE [?];
+ IF (''?'' NOT IN (''tempdb'',''model''))
+ BEGIN
+ DECLARE @is_secondary_replica BIT = 0;
+ IF CAST(PARSENAME(CAST(SERVERPROPERTY(''ProductVersion'') AS VARCHAR), 4) AS INT) >= 11
+ BEGIN
+ DECLARE @innersql NVARCHAR(MAX);
+ SET @innersql = N''
+ SELECT @is_secondary_replica = IIF(
+ EXISTS (
+ SELECT 1
+ FROM sys.availability_replicas a
+ INNER JOIN sys.dm_hadr_database_replica_states b
+ ON a.replica_id = b.replica_id
+ WHERE b.is_local = 1
+ AND b.is_primary_replica = 0
+ AND a.secondary_role_allow_connections = 2
+ AND b.database_id = DB_ID()
+ ), 1, 0
+ );
+ '';
+ EXEC sp_executesql @innersql, N''@is_secondary_replica BIT OUTPUT'', @is_secondary_replica OUTPUT;
+ END
+
+ IF (@is_secondary_replica = 0)
+ BEGIN
+ CREATE USER [evaluator] FOR LOGIN [evaluator];
+ GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator];
+ GRANT VIEW DATABASE STATE TO [evaluator];
+ END
+ END'
GO
-
- -- Required from SQL 2014 onwards for database connectivity.
- use master;
- BEGIN TRY GRANT CONNECT ANY DATABASE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+
+ -- Provide server level read-only permissions
+ USE master;
+ GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator];
+ GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [evaluator];
+ GRANT EXECUTE ON OBJECT::sys.xp_instance_regread TO [evaluator];
+ GRANT VIEW DATABASE STATE TO [evaluator];
+ GRANT VIEW SERVER STATE TO [evaluator];
+ GRANT VIEW ANY DEFINITION TO [evaluator];
GO
-
+
-- Provide msdb specific permissions
- use msdb;
- BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ USE msdb;
+ GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[syscategories] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [evaluator];
GO
-
+
-- Clean up --use master; -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
VMware | Details
## Azure Migrate appliance requirements
-Azure Migrate uses the [Azure Migrate appliance](migrate-appliance.md) for discovery and assessment. You can deploy the appliance as a server in your VMware environment using a VMware Open Virtualization Appliance (OVA) template that's imported into vCenter Server or by using a [PowerShell script](deploy-appliance-script.md). Learn more about [appliance requirements for VMware](migrate-appliance.md#appliancevmware).
+Azure Migrate uses the [Azure Migrate appliance](migrate-appliance.md) for discovery and assessment. You can deploy the appliance as a server in your VMware environment using a VMware Open Virtualization Appliance (OVA) template imported into vCenter Server or by using a [PowerShell script](deploy-appliance-script.md). Learn more about [appliance requirements for VMware](migrate-appliance.md#appliancevmware).
Here are more requirements for the appliance:
Support | Details
**Supported servers** | You can perform software inventory on up to 10,000 servers running across vCenter Server(s) added to each Azure Migrate appliance. **Operating systems** | Servers running all Windows and Linux versions are supported. **Server requirements** | For software inventory, VMware Tools must be installed and running on your servers. The VMware Tools version must be version 10.2.1 or later.<br /><br /> Windows servers must have PowerShell version 2.0 or later installed.<br/><br/>WMI must be enabled and available on Windows servers to gather the details of the roles and features installed on the servers.
-**vCenter Server account** | To interact with the servers for software inventory, the vCenter Server read-only account that's used for assessment must have privileges for guest operations on VMware VMs.
+**vCenter Server account** | To interact with the servers for software inventory, the vCenter Server read-only account used for assessment must have privileges for guest operations on VMware VMs.
**Server access** | You can add multiple domain and non-domain (Windows/Linux) credentials in the appliance configuration manager for software inventory.<br /><br /> You must have a guest user account for Windows servers and a standard user account (non-`sudo` access) for all Linux servers. **Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running servers on which you want to perform software inventory. The server running vCenter Server returns an ESXi host connection to download the file that contains the details of the software inventory. <br /><br /> If using domain credentials, the Azure Migrate appliance must be able to connect to the following TCP and UDP ports: <br /> <br />TCP 135 – RPC Endpoint<br />TCP 389 – LDAP<br />TCP 636 – LDAP SSL<br />TCP 445 – SMB<br />TCP/UDP 88 – Kerberos authentication<br />TCP/UDP 464 – Kerberos change operations **Discovery** | Software inventory is performed from vCenter Server by using VMware Tools installed on the servers.<br/><br/> The appliance gathers the information about the software inventory from the server running vCenter Server through vSphere APIs.<br/><br/> Software inventory is agentless. No agent is installed on the server, and the appliance doesn't connect directly to the servers.
After the appliance is connected, it gathers configuration and performance data
Support | Details |
-**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments as well as IaaS Servers of other public clouds such as AWS, GCP, etc. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. It is recommended that you ensure that an appliance is scoped to discover less than 600 servers running SQL to avoid scaling issues.
+**Supported servers** | Supported only for servers running SQL Server in your VMware, Microsoft Hyper-V, and Physical/Bare metal environments and on IaaS servers of other public clouds such as AWS and GCP. <br /><br /> You can discover up to 750 SQL Server instances or 15,000 SQL databases, whichever is less, from a single appliance. It's recommended that you scope an appliance to discover fewer than 600 servers running SQL to avoid scaling issues.
**Windows servers** | Windows Server 2008 and later are supported. **Linux servers** | Currently not supported. **Authentication mechanism** | Both Windows and SQL Server authentication are supported. You can provide credentials of both authentication types in the appliance configuration manager.
The following are sample scripts for creating a login and provisioning it with t
```sql -- Create a login to run the assessment use master;
- DECLARE @SID NVARCHAR(MAX) = N'';
- CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS;
- SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT'
- IF (ISNULL(@SID,'') != '')
- PRINT N'Created login [MYDOMAIN\MYACCOUNT] with SID = ' + @SID
- ELSE
- PRINT N'Login creation failed'
+ DECLARE @SID NVARCHAR(MAX) = N'';
+ CREATE LOGIN [MYDOMAIN\MYACCOUNT] FROM WINDOWS;
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR, sid, 2) FROM sys.syslogins where name = 'MYDOMAIN\MYACCOUNT'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [MYDOMAIN\MYACCOUNT] with SID = ' + @SID
+ ELSE
+ PRINT N'Login creation failed'
GO
-
- -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
- use master;
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [MYDOMAIN\MYACCOUNT] FOR LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+
+ -- Create user in every database other than tempdb, model, and secondary AG databases (with connection_type = ALL) and provide minimal read-only permissions.
+ USE master;
+ EXECUTE sp_MSforeachdb '
+ USE [?];
+ IF (''?'' NOT IN (''tempdb'',''model''))
+ BEGIN
+ DECLARE @is_secondary_replica BIT = 0;
+ IF CAST(PARSENAME(CAST(SERVERPROPERTY(''ProductVersion'') AS VARCHAR), 4) AS INT) >= 11
+ BEGIN
+ DECLARE @innersql NVARCHAR(MAX);
+ SET @innersql = N''
+ SELECT @is_secondary_replica = IIF(
+ EXISTS (
+ SELECT 1
+ FROM sys.availability_replicas a
+ INNER JOIN sys.dm_hadr_database_replica_states b
+ ON a.replica_id = b.replica_id
+ WHERE b.is_local = 1
+ AND b.is_primary_replica = 0
+ AND a.secondary_role_allow_connections = 2
+ AND b.database_id = DB_ID()
+ ), 1, 0
+ );
+ '';
+ EXEC sp_executesql @innersql, N''@is_secondary_replica BIT OUTPUT'', @is_secondary_replica OUTPUT;
+ END
+ IF (@is_secondary_replica = 0)
+ BEGIN
+ CREATE USER [MYDOMAIN\MYACCOUNT] FOR LOGIN [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT];
+ END
+ END'
GO
-
+ -- Provide server level read-only permissions
use master;
- BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW SERVER STATE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW ANY DEFINITION TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- GO
-
- -- Required from SQL 2014 onwards for database connectivity.
- use master;
- BEGIN TRY GRANT CONNECT ANY DATABASE TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GRANT SELECT ON sys.sql_expression_dependencies TO [MYDOMAIN\MYACCOUNT];
+ GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [MYDOMAIN\MYACCOUNT];
+ GRANT EXECUTE ON OBJECT::sys.xp_instance_regread TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW DATABASE STATE TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW SERVER STATE TO [MYDOMAIN\MYACCOUNT];
+ GRANT VIEW ANY DEFINITION TO [MYDOMAIN\MYACCOUNT];
GO
-
+ -- Provide msdb specific permissions
use msdb;
- BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[syscategories] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [MYDOMAIN\MYACCOUNT];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [MYDOMAIN\MYACCOUNT];
GO
-
+
-- Clean up --use master;
- -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
- -- BEGIN TRY DROP LOGIN [MYDOMAIN\MYACCOUNT] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ -- EXECUTE sp_MSforeachdb 'USE [?]; DROP USER [MYDOMAIN\MYACCOUNT]'
+ -- DROP LOGIN [MYDOMAIN\MYACCOUNT];
--GO
- ```
+ ```
#### SQL Server Authentication ```sql
- -- Create a login to run the assessment
+ -- Create a login to run the assessment
use master;
- -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID for the SQL login.
- -- After the account is created in one of the members, copy the SID output from the script and include this value
- -- when executing against the remaining replicas.
- -- When the SID needs to be specified, add the value to the @SID variable definition below.
+ -- NOTE: SQL instances that host replicas of Always On Availability Groups must use the same SID for the SQL login.
+ -- After the account is created in one of the members, copy the SID output from the script and include this value
+ -- when executing against the remaining replicas.
+ -- When the SID needs to be specified, add the value to the @SID variable definition below.
DECLARE @SID NVARCHAR(MAX) = N'';
- IF (@SID = N'')
- BEGIN
- CREATE LOGIN [evaluator]
- WITH PASSWORD = '<provide a strong password>'
- END
- ELSE
- BEGIN
- DECLARE @SQLString NVARCHAR(500) = 'CREATE LOGIN [evaluator]
- WITH PASSWORD = ''<provide a strong password>''
- , SID = ' + @SID
+ IF (@SID = N'')
+ BEGIN
+ CREATE LOGIN [evaluator]
+ WITH PASSWORD = '<provide a strong password>'
+ END
+ ELSE
+ BEGIN
+ DECLARE @SQLString NVARCHAR(500) = 'CREATE LOGIN [evaluator]
+ WITH PASSWORD = ''<provide a strong password>''
+ , SID = ' + @SID
EXEC SP_EXECUTESQL @SQLString
- END
- SELECT @SID = N'0x'+CONVERT(NVARCHAR(100), sid, 2) FROM sys.syslogins where name = 'evaluator'
- IF (ISNULL(@SID,'') != '')
- PRINT N'Created login [evaluator] with SID = '''+ @SID +'''. If this instance hosts any Always On Availability Group replica, use this SID value when executing the script against the instances hosting the other replicas'
- ELSE
- PRINT N'Login creation failed'
- GO
-
- -- Create user in every database other than tempdb and model and provide minimal read-only permissions.
- use master;
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY CREATE USER [evaluator] FOR LOGIN [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
- EXECUTE sp_MSforeachdb 'USE [?]; IF (''?'' NOT IN (''tempdb'',''model'')) BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator]END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH'
+ END
+ SELECT @SID = N'0x'+CONVERT(NVARCHAR(100), sid, 2) FROM sys.syslogins where name = 'evaluator'
+ IF (ISNULL(@SID,'') != '')
+ PRINT N'Created login [evaluator] with SID = '''+ @SID +'''. If this instance hosts any Always On Availability Group replica, use this SID value when executing the script against the instances hosting the other replicas'
+ ELSE
+ PRINT N'Login creation failed'
GO
-
- -- Provide server level read-only permissions
- use master;
- BEGIN TRY GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW DATABASE STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW SERVER STATE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT VIEW ANY DEFINITION TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+
+ -- Create user in every database other than tempdb, model and secondary AG databases(with connection_type = ALL) and provide minimal read-only permissions.
+ USE master;
+ EXECUTE sp_MSforeachdb '
+ USE [?];
+ IF (''?'' NOT IN (''tempdb'',''model''))
+ BEGIN
+ DECLARE @is_secondary_replica BIT = 0;
+ IF CAST(PARSENAME(CAST(SERVERPROPERTY(''ProductVersion'') AS VARCHAR), 4) AS INT) >= 11
+ BEGIN
+ DECLARE @innersql NVARCHAR(MAX);
+ SET @innersql = N''
+ SELECT @is_secondary_replica = IIF(
+ EXISTS (
+ SELECT 1
+ FROM sys.availability_replicas a
+ INNER JOIN sys.dm_hadr_database_replica_states b
+ ON a.replica_id = b.replica_id
+ WHERE b.is_local = 1
+ AND b.is_primary_replica = 0
+ AND a.secondary_role_allow_connections = 2
+ AND b.database_id = DB_ID()
+ ), 1, 0
+ );
+ '';
+ EXEC sp_executesql @innersql, N''@is_secondary_replica BIT OUTPUT'', @is_secondary_replica OUTPUT;
+ END
+
+ IF (@is_secondary_replica = 0)
+ BEGIN
+ CREATE USER [evaluator] FOR LOGIN [evaluator];
+ GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator];
+ GRANT VIEW DATABASE STATE TO [evaluator];
+ END
+ END'
GO
-
- -- Required from SQL 2014 onwards for database connectivity.
- use master;
- BEGIN TRY GRANT CONNECT ANY DATABASE TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+
+ -- Provide server level read-only permissions
+ USE master;
+ GRANT SELECT ON sys.sql_expression_dependencies TO [evaluator];
+ GRANT EXECUTE ON OBJECT::sys.xp_regenumkeys TO [evaluator];
+ GRANT EXECUTE ON OBJECT::sys.xp_instance_regread TO [evaluator];
+ GRANT VIEW DATABASE STATE TO [evaluator];
+ GRANT VIEW SERVER STATE TO [evaluator];
+ GRANT VIEW ANY DEFINITION TO [evaluator];
GO
-
+
-- Provide msdb specific permissions
- use msdb;
- BEGIN TRY GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscategories] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
- BEGIN TRY GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;
+ USE msdb;
+ GRANT EXECUTE ON [msdb].[dbo].[agent_datetime] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysjobsteps] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[syssubsystems] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysjobhistory] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[syscategories] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysjobs] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmaintplan_plans] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[syscollector_collection_sets] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profile] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_profileaccount] TO [evaluator];
+ GRANT SELECT ON [msdb].[dbo].[sysmail_account] TO [evaluator];
GO
-
+
-- Clean up --use master; -- EXECUTE sp_MSforeachdb 'USE [?]; BEGIN TRY DROP USER [evaluator] END TRY BEGIN CATCH PRINT ERROR_MESSAGE() END CATCH;'
Support | Details
**Server requirements** | VMware Tools (10.2.1 and later) must be installed and running on servers you want to analyze.<br /><br /> Servers must have PowerShell version 2.0 or later installed.<br /><br /> WMI should be enabled and available on Windows servers. **vCenter Server account** | The read-only account used by Azure Migrate for assessment must have privileges for guest operations on VMware VMs. **Windows server access** | A user account (local or domain) with administrator permissions on servers.
-**Linux server access** | Sudo user account with permissions to execute ls and netstat commands. If you're providing a sudo user account, ensure that you have enabled **NOPASSWD** for the account to run the required commands without prompting for a password every time a sudo command is invoked. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
+**Linux server access** | Sudo user account with permissions to execute `ls` and `netstat` commands. If you're providing a sudo user account, ensure that you enable **NOPASSWD** for the account to run the required commands without prompting for a password every time a sudo command is invoked. A sample sudoers entry is shown after this table. <br /><br /> Alternatively, you can create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on /bin/netstat and /bin/ls files, set using the following commands:<br /><code>sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/ls<br /> sudo setcap CAP_DAC_READ_SEARCH,CAP_SYS_PTRACE=ep /bin/netstat</code>
**Port access** | The Azure Migrate appliance must be able to connect to TCP port 443 on ESXi hosts running the servers that have dependencies you want to discover. The server running vCenter Server returns an ESXi host connection to download the file containing the dependency data. **Discovery method** | Dependency information between servers is gathered by using VMware Tools installed on the server running vCenter Server.<br /><br /> The appliance gathers the information from the server by using vSphere APIs.<br /><br /> No agent is installed on the server, and the appliance doesnΓÇÖt connect directly to servers.
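A minimal sketch of the sudoers configuration mentioned in the Linux server access row above, assuming a hypothetical discovery account named `azmigrate` and the `/bin/ls` and `/bin/netstat` paths shown in the table:

```bash
# Allow the discovery account to run the required commands without a password prompt.
# Adjust the account name and command paths for your distribution.
echo 'azmigrate ALL=(ALL) NOPASSWD: /bin/ls, /bin/netstat' | sudo tee /etc/sudoers.d/azmigrate
sudo chmod 440 /etc/sudoers.d/azmigrate
```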
Requirement | Details
| **Before deployment** | You should have a project in place, with the Azure Migrate: Discovery and assessment tool added to the project.<br /><br />Deploy dependency visualization after setting up an Azure Migrate appliance to discover your on-premises servers.<br /><br />Learn how to [create a project for the first time](create-manage-projects.md).<br /> Learn how to [add a discovery and assessment tool to an existing project](how-to-assess.md).<br /> Learn how to set up the Azure Migrate appliance for assessment of [Hyper-V](how-to-set-up-appliance-hyper-v.md), [VMware](how-to-set-up-appliance-vmware.md), or physical servers. **Supported servers** | Supported for all servers in your on-premises environment.
-**Log Analytics** | Azure Migrate uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br /><br /> You associate a new or existing Log Analytics workspace with a project. You can't modify the workspace for a project after the workspace is added. <br /><br /> The workspace must be in the same subscription as the project.<br /><br /> The workspace must be located in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br /><br /> The workspace must be in a [region in which Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> In Log Analytics, the workspace that's associated with Azure Migrate is tagged with the project key and project name.
+**Log Analytics** | Azure Migrate uses the [Service Map](/previous-versions/azure/azure-monitor/vm/service-map) solution in [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md) for dependency visualization.<br /><br /> You associate a new or existing Log Analytics workspace with a project. You can't modify the workspace for a project after the workspace is added. <br /><br /> The workspace must be in the same subscription as the project.<br /><br /> The workspace must be located in the East US, Southeast Asia, or West Europe regions. Workspaces in other regions can't be associated with a project.<br /><br /> The workspace must be in a [region in which Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> In Log Analytics, the workspace associated with Azure Migrate is tagged with the project key and project name.
**Required agents** | On each server that you want to analyze, install the following agents:<br />- [Microsoft Monitoring Agent (MMA)](../azure-monitor/agents/agent-windows.md)<br />- [Dependency agent](../azure-monitor/vm/vminsights-dependency-agent-maintenance.md)<br /><br /> If on-premises servers aren't connected to the internet, download and install the Log Analytics gateway on them.<br /><br /> Learn more about installing the [Dependency agent](how-to-create-group-machine-dependencies.md#install-the-dependency-agent) and the [MMA](how-to-create-group-machine-dependencies.md#install-the-mma). **Log Analytics workspace** | The workspace must be in the same subscription as the project.<br /><br /> Azure Migrate supports workspaces that are located in the East US, Southeast Asia, and West Europe regions.<br /><br /> The workspace must be in a region in which [Service Map is supported](https://azure.microsoft.com/global-infrastructure/services/?products=monitor&regions=all). You can monitor Azure VMs in any region. The VMs themselves aren't limited to the regions supported by the Log Analytics workspace.<br /><br /> The workspace for a project can't be modified after the workspace is added. **Cost** | The Service Map solution doesn't incur any charges for the first 180 days (from the day you associate the Log Analytics workspace with the project).<br /><br /> After 180 days, standard Log Analytics charges apply.<br /><br /> Using any solution other than Service Map in the associated Log Analytics workspace incurs [standard charges](https://azure.microsoft.com/pricing/details/log-analytics/) for Log Analytics.<br /><br /> When the project is deleted, the workspace isn't automatically deleted. After deleting the project, Service Map usage isn't free, and each node will be charged per the paid tier of Log Analytics workspace.<br /><br />If you have projects that you created before Azure Migrate general availability (February 28, 2018), you might have incurred additional Service Map charges. To ensure that you're charged only after 180 days, we recommend that you create a new project. Workspaces that were created before GA are still chargeable.
nat-gateway Manage Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/manage-nat-gateway.md
This article explains how to manage the following aspects of NAT gateway:
## Prerequisites
+# [**Azure portal**](#tab/manage-nat-portal)
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An existing Azure Virtual Network with a subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+
+ - The example virtual network that is used in this article is named *myVNet*.
+
+ - The example subnet is named *mySubnet*.
+
+ - The example NAT gateway is named *myNATgateway*.
+
+# [**Azure PowerShell**](#tab/manage-nat-powershell)
+ - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+ - An existing Azure Virtual Network with a subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
To use Azure PowerShell for this article, you need:
- Sign in to Azure PowerShell and select the subscription that you want to use. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+# [**Azure CLI**](#tab/manage-nat-cli)
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An existing Azure Virtual Network with a subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+
+ - The example virtual network that is used in this article is named *myVNet*.
+
+ - The example subnet is named *mySubnet*.
+
+ - The example NAT gateway is named *myNATgateway*.
+ To use Azure CLI for this article, you need:
+
+ - Azure CLI version 2.31.0 or later. Azure Cloud Shell uses the latest version.
+
+ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
+# [**Bicep**](#tab/manage-nat-bicep)
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- An existing Azure Virtual Network with a subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+
+ - The example virtual network that is used in this article is named *myVNet*.
+
+ - The example subnet is named *mySubnet*.
+
+ - The example NAT gateway is named *myNATgateway*.
+++ ## Create a NAT gateway and associate it with an existing subnet
-You can create a NAT gateway resource and add it to an existing subnet by using the Azure portal, Azure PowerShell, or the Azure CLI.
+You can create a NAT gateway resource and add it to an existing subnet by using the Azure portal, Azure PowerShell, Azure CLI, or Bicep.
# [**Azure portal**](#tab/manage-nat-portal)
You can create a NAT gateway resource and add it to an existing subnet by using
1. Select **Create**.
-# [**PowerShell**](#tab/manage-nat-powershell)
+# [**Azure PowerShell**](#tab/manage-nat-powershell)
### Public IP address
az network vnet subnet update \
--nat-gateway myNATgateway ```
+# [**Bicep**](#tab/manage-nat-bicep)
+
+```bicep
+
+@description('Name of the NAT gateway')
+param natgatewayname string = 'nat-gateway'
+
+@description('Name of the NAT gateway public IP')
+param publicipname string = 'public-ip-nat'
+
+@description('Location for all resources')
+param location string = resourceGroup().location
+
+var existingVNetName = 'vnet-1'
+var existingSubnetName = 'subnet-1'
+
+resource vnet 'Microsoft.Network/virtualNetworks@2023-05-01' existing = {
+ name: existingVNetName
+}
+output vnetid string = vnet.id
+
+resource publicip 'Microsoft.Network/publicIPAddresses@2023-06-01' = {
+ name: publicipname
+ location: location
+ sku: {
+ name: 'Standard'
+ }
+ properties: {
+ publicIPAddressVersion: 'IPv4'
+ publicIPAllocationMethod: 'Static'
+ idleTimeoutInMinutes: 4
+ }
+}
+
+resource natgateway 'Microsoft.Network/natGateways@2023-06-01' = {
+ name: natgatewayname
+ location: location
+ sku: {
+ name: 'Standard'
+ }
+ properties: {
+ idleTimeoutInMinutes: 4
+ publicIpAddresses: [
+ {
+ id: publicip.id
+ }
+ ]
+ }
+}
+output natgatewayid string = natgateway.id
+
+resource updatedsubnet01 'Microsoft.Network/virtualNetworks/subnets@2023-06-01' = {
+ parent: vnet
+ name: existingSubnetName
+ properties: {
+ addressPrefix: vnet.properties.subnets[0].properties.addressPrefix
+ natGateway: {
+ id: natgateway.id
+ }
+ }
+}
+
+```
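To try the template, you could deploy it into the resource group that contains the existing virtual network; the Bicep file name and resource group name below are placeholders.

```azurecli-interactive
az deployment group create \
    --resource-group myResourceGroup \
    --template-file nat-gateway.bicep
```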
+ ## Remove a NAT gateway from an existing subnet and delete the resource
You can now associate the NAT gateway with a different subnet or virtual network
1. Select **Yes**.
-# [**PowerShell**](#tab/manage-nat-powershell)
+# [**Azure PowerShell**](#tab/manage-nat-powershell)
Removing the NAT gateway from a subnet by using Azure PowerShell isn't currently supported.
az network nat gateway delete \
--resource-group myResourceGroup ```
+# [**Bicep**](#tab/manage-nat-bicep)
+
+Use the Azure portal, Azure PowerShell, or Azure CLI to remove a NAT gateway from a subnet and delete the resource.
+ > [!NOTE]
az network nat gateway update \
--public-ip-addresses myPublicIP-NAT ```
+# [**Bicep**](#tab/manage-nat-bicep)
+
+Use the Azure portal, Azure PowerShell, or Azure CLI to add or remove a public IP address from a NAT gateway.
+ ## Add or remove a public IP prefix
Complete the following steps to add or remove a public IP prefix from a NAT gate
1. Select **Save**.
-# [**PowerShell**](#tab/manage-nat-powershell)
+# [**Azure PowerShell**](#tab/manage-nat-powershell)
### Add public IP prefix
az network nat gateway update \
--public-ip-prefixes myPublicIPprefix-NAT ```
+# [**Bicep**](#tab/manage-nat-bicep)
+
+Use the Azure portal, Azure PowerShell, or Azure CLI to add or remove a public IP prefix from a NAT gateway.
+ ## Next steps
network-watcher Vnet Flow Logs Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-overview.md
Previously updated : 11/30/2023 Last updated : 01/16/2024 #CustomerIntent: As an Azure administrator, I want to learn about VNet flow logs so that I can log my network traffic to analyze and optimize the network performance.
If you want to retain data forever and don't want to apply any retention policy,
## Pricing
-VNet flow logs are not currently billed. In future, VNet flow logs will be charged per gigabyte of "Network Logs Collected" and come with a free tier of 5 GB/month per subscription. If traffic analytics is enabled with VNet flow logs, then existing traffic analytics pricing is applicable. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
+Currently, VNet flow logs aren't billed. In the future, VNet flow logs will be billed per gigabyte of *Network Logs Collected* and will come with a free tier of 5 GB/month per subscription. If VNet flow logs are configured with traffic analytics enabled, existing traffic analytics pricing applies. For more information, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
## Availability
VNet flow logs are available in the following regions during the preview:
- West US - West US 2
-To sign up to obtain access to the public preview, see [VNet flow logs - public preview sign up](https://aka.ms/VNetflowlogspreviewsignup).
+To sign up to get access to the public preview, see [VNet flow logs - public preview sign up](https://aka.ms/VNetflowlogspreviewsignup).
## Related content
operator-insights Concept Data Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-quality-monitoring.md
Every Data Product working on Azure Operator Insights platform has built-in supp
Azure Operator Insights platform monitors data quality when data is ingested into Data Product input storage (first AOI Data Product Storage block in the following image) and after data is processed and made available to customers (AOI Data Product Compute in following image).
+ Diagram of the Azure Operator Insights architecture. It shows ingestion by ingestion agents from on-premises data sources, processing in a Data Product, and analysis and use in Logic Apps and Power BI.
## Quality dimensions
operator-insights How To Install Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-mcc-edr-agent.md
Last updated 10/31/2023
# Create and configure MCC EDR Ingestion Agents for Azure Operator Insights
-The MCC EDR agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. The agent receives EDRs from an Affirmed MCC, and forwards them to Azure Operator Insights. 
+The MCC EDR agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. The agent receives EDRs from an Affirmed MCC, and forwards them to Azure Operator Insights Data Products.
## Prerequisites - You must have an Affirmed Networks MCC deployment that generates EDRs.-- You must have an Azure Operator Insights MCC Data product deployment.
+- You must deploy an Azure Operator Insights MCC Data Product.
- You must provide VMs with the following specifications to run the agent: | Resource | Requirements |
operator-insights How To Install Sftp Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-sftp-agent.md
Last updated 12/06/2023
# Create and configure SFTP Ingestion Agents for Azure Operator Insights
-An SFTP Ingestion Agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. The agent pulls files from an SFTP server, and forwards them to Azure Operator Insights.
+An SFTP Ingestion Agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. The agent pulls files from an SFTP server, and forwards them to Azure Operator Insights Data Products.
For more background, see [SFTP Ingestion Agent overview](sftp-agent-overview.md). ## Prerequisites -- You must have an SFTP server containing the files to be uploaded to Azure Operator Insights. This SFTP server must be accessible from the VM where you install the agent.-- You must have an Azure Operator Insights Data Product deployment.
+- You must deploy an Azure Operator Insights Data Product.
+- You must have an SFTP server containing the files to be uploaded to the Azure Operator Insights Data Product. This SFTP server must be accessible from the VM where you install the agent.
- You must choose the number of agents and VMs on which to install the agents, using the guidance in the following section. ### Choosing agents and VMs
operator-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/overview.md
Previously updated : 10/26/2023 Last updated : 01/10/2024 # What is Azure Operator Insights?
High scale ingestion to handle large amounts of network data from operator data
- Pipelines managed for all operators, leading to economies of scale dropping the price. - Operator privacy module. - Operator compliance including handling retention policies. -- Common data model with open standards such as parquet and delta lake for easy integration with other Microsoft and third-party services.
+- Common data model with open standards such as parquet and delta lake for easy integration with other Microsoft and non-Microsoft services.
- High speed analytics to enable fast data exploration and correlation between different data sets produced by disaggregated 5G multi-vendor networks.
-The result is that the operator has a lower total cost of ownership but higher insights of their network over equivalent on-premises or cloud chemistry set platforms.
+The result is that the operator has a lower total cost of ownership but deeper insights into their network than equivalent on-premises or cloud chemistry set platforms.
+
+## How does Azure Operator Insights work?
+
+Azure Operator Insights requires two separate types of resources.
+
+- _Ingestion agents_ in your network collect data from your network and upload it to Data Products in Azure.
+- _Data Product_ resources in Azure process the data provided by ingestion agents, enrich it, and make it available to you.
+ - You can use prebuilt dashboards provided by the Data Product or build your own in Azure Data Explorer. Azure Data Explorer also allows you to query your data directly, analyze it in Power BI or use it with Logic Apps. For more information, see [Data visualization in Data Products](concept-data-visualization.md).
+ - Data Products provide [metrics for monitoring the quality of your data](concept-data-quality-monitoring.md).
+ - Data Products are designed for specific types of source data and provide specialized processing for that source data. For more information, see [Data types](concept-data-types.md).
+
+ Diagram of the Azure Operator Insights architecture. It shows ingestion by ingestion agents from on-premises data sources, processing in a Data Product, and analysis and use in Logic Apps and Power BI.
+
+We provide the following Data Products.
+
+|Data Product |Purpose |Supporting ingestion agent|
+||||
+|[Quality of Experience - Affirmed MCC Data Product](concept-mcc-data-product.md) | Analysis and insight from EDRs provided by Affirmed Networks Mobile Content Cloud (MCC) network elements| [MCC EDR ingestion agent](how-to-install-mcc-edr-agent.md)|
+| [Monitoring - Affirmed MCC Data Product](concept-monitoring-mcc-data-product.md) | Analysis and insight from performance management data (performance statistics) from Affirmed Networks MCC network elements| [SFTP ingestion agent](sftp-agent-overview.md) |
+
+If you prefer, you can provide your own ingestion agent to upload data to your chosen Data Product.
## How do I get access to Azure Operator Insights?
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
VACUUM ANALYZE VERBOSE;
> [!NOTE] > > The VERBOSE flag is optional, but using it shows you the progress. -
-> [!NOTE]
-> If you have pg_qs enabled and collecting data on an instance of PostgreSQL running a major version <= 14, and perform an [in-place major version upgrade](./concepts-major-version-upgrade.md) to any version >= 15, know that the values returned in the query_type column of query_store.qs_view for any newly created time windows can be considered correct. However, for all the time windows which were created when the version of the engine was <= 14, where it reports `merge` it corresponds to `utility`, and when it reports `nothing` it corresponds to `utility`. The reason for that inconsistency has to do with the way [MERGE statement was implemented in PostgreSQL](https://github.com/postgres/postgres/commit/7103ebb7aae8ab8076b7e85f335ceb8fe799097c), which, instead of appending a new item to the existing ones in the CmdType enum, interleaved an item for MERGE between DELETE and UTILITY.
## Next steps
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store.md
This view returns all the data that has already been persisted in the supporting
| query_type | text | | Type of operation represented by the query. Possible values are `unknown`, `select`, `update`, `insert`, `delete`, `merge`, `utility`, `nothing`, `undefined`. |
-> [!NOTE]
-> If you have pg_qs enabled and collecting data on an instance of PostgreSQL running a major version <= 14, and perform an [in-place major version upgrade](./concepts-major-version-upgrade.md) to any version >= 15, know that the values returned in the query_type column of query_store.qs_view for any newly created time windows can be considered correct. However, for all the time windows which were created when the version of the engine was <= 14, where it reports `merge` it corresponds to `utility`, and when it reports `nothing` it corresponds to `utility`. The reason for that inconsistency has to do with the way [MERGE statement was implemented in PostgreSQL](https://github.com/postgres/postgres/commit/7103ebb7aae8ab8076b7e85f335ceb8fe799097c), which, instead of appending a new item to the existing ones in the CmdType enum, interleaved an item for MERGE between DELETE and UTILITY.
-- #### query_store.query_texts_view This view returns query text data in Query Store. There's one row for each distinct query_sql_text.
postgresql How To Read Replicas Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-portal.md
Before setting up a read replica for Azure Database for PostgreSQL, ensure the p
:::image type="content" source="./media/how-to-read-replicas-portal/primary-compute.png" alt-text="Screenshot of server settings." lightbox="./media/how-to-read-replicas-portal/primary-compute.png":::
+#### [CLI](#tab/cli)
+
+> [!NOTE]
+> The commands provided in this guide are applicable for Azure CLI version 2.56.0 or higher. Ensure that you have the required version or a later one installed to execute these commands successfully. You can check your current Azure CLI version by running `az --version` in your command line interface. To update Azure CLI to the latest version, follow the instructions provided in the [Azure CLI documentation](/cli/azure/update-azure-cli).
++
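For example, you can confirm the installed version and upgrade in place if needed (upgrading isn't required in Azure Cloud Shell):

```azurecli-interactive
# Check the installed Azure CLI version.
az --version

# Upgrade the Azure CLI to the latest version.
az upgrade
```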
+To view the configuration and current status of an Azure PostgreSQL Flexible Server, use the `az postgres flexible-server show` command. This command provides detailed information about the specified server.
+
+```azurecli-interactive
+az postgres flexible-server show \
+ --resource-group <resource-group> \
+ --name <server-name>
+```
+
+Replace `<resource-group>` and `<server-name>` with your specific resource group and the name of the server you wish to view.
+
+Review and note the following settings:
+
+ - Compute Tier, Processor, Size (for example, `Standard_D8ads_v5`).
+ - Storage
+ - Type
+ - Storage size (for example, `128` GB)
+ - autoGrow
+ - Network
+ - High Availability
+ - Enabled / Disabled
+ - Availability zone settings
+ - Backup settings
+ - Retention period
+ - Redundancy Options
+
+**Sample response**
+
+```json
+{
+ "administratorLogin": "myadmin",
+ "administratorLoginPassword": null,
+ "authConfig": {
+ "activeDirectoryAuth": "Disabled",
+ "passwordAuth": "Enabled",
+ "tenantId": null
+ },
+ "availabilityZone": "2",
+ "backup": {
+ "backupRetentionDays": 7,
+ "earliestRestoreDate": "2024-01-06T11:43:44.485537+00:00",
+ "geoRedundantBackup": "Disabled"
+ },
+ "createMode": null,
+ "dataEncryption": {
+ "geoBackupEncryptionKeyStatus": null,
+ "geoBackupKeyUri": null,
+ "geoBackupUserAssignedIdentityId": null,
+ "primaryEncryptionKeyStatus": null,
+ "primaryKeyUri": null,
+ "primaryUserAssignedIdentityId": null,
+ "type": "SystemManaged"
+ },
+ "fullyQualifiedDomainName": "{serverName}.postgres.database.azure.com",
+ "highAvailability": {
+ "mode": "Disabled",
+ "standbyAvailabilityZone": null,
+ "state": "NotEnabled"
+ },
+ "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforPostgreSQL/flexibleServers/{serverName}",
+ "identity": null,
+ "location": "East US",
+ "maintenanceWindow": {
+ "customWindow": "Disabled",
+ "dayOfWeek": 0,
+ "startHour": 0,
+ "startMinute": 0
+ },
+ "minorVersion": "0",
+ "name": "{serverName}",
+ "network": {
+ "delegatedSubnetResourceId": null,
+ "privateDnsZoneArmResourceId": null,
+ "publicNetworkAccess": "Enabled"
+ },
+ "pointInTimeUtc": null,
+ "privateEndpointConnections": null,
+ "replica": {
+ "capacity": 5,
+ "promoteMode": null,
+ "promoteOption": null,
+ "replicationState": null,
+ "role": "Primary"
+ },
+ "replicaCapacity": 5,
+ "replicationRole": "Primary",
+ "resourceGroup": "{resourceGroupName}",
+ "sku": {
+ "name": "Standard_D8ads_v5",
+ "tier": "GeneralPurpose"
+ },
+ "sourceServerResourceId": null,
+ "state": "Ready",
+ "storage": {
+ "autoGrow": "Disabled",
+ "iops": 500,
+ "storageSizeGb": 128,
+ "throughput": null,
+ "tier": "P10",
+ "type": ""
+ },
+ "systemData": {
+ "createdAt": "2023-11-08T11:27:48.972812+00:00",
+ "createdBy": null,
+ "createdByType": null,
+ "lastModifiedAt": null,
+ "lastModifiedBy": null,
+ "lastModifiedByType": null
+ },
+ "tags": {},
+ "type": "Microsoft.DBforPostgreSQL/flexibleServers",
+ "version": "16"
+}
+
+```
+ #### [REST API](#tab/restapi) To obtain information about the configuration of a server in Azure Database for PostgreSQL - Flexible Server, especially to view settings for recently introduced features like storage auto-grow or private link, you should use the latest API version `2023-06-01-preview`. The `GET` request for this would be formatted as follows:
To create a read replica, follow these steps:
:::image type="content" source="./media/how-to-read-replicas-portal/list-replica.png" alt-text="Screenshot of viewing the new replica in the replication window." lightbox="./media/how-to-read-replicas-portal/list-replica.png":::
+#### [CLI](#tab/cli)
+
+You can create a read replica for your Azure PostgreSQL Flexible Server by using the [`az postgres flexible-server replica create`](/cli/azure/postgres/flexible-server/replica#az-postgres-flexible-server-replica-create) command.
+
+```azurecli-interactive
+az postgres flexible-server replica create \
+ --replica-name <replica-name> \
+ --resource-group <resource-group> \
+ --source-server <source-server-name> \
+ --location <location>
+```
+
+Replace `<replica-name>`, `<resource-group>`, `<source-server-name>` and `<location>` with your specific values.
++ #### [REST API](#tab/restapi) Initiate an `HTTP PUT` request by using the [create API](/rest/api/postgresql/flexibleserver/servers/create):
Here, you need to replace `{subscriptionId}`, `{resourceGroupName}`, and `{repli
:::image type="content" source="./media/how-to-read-replicas-portal/replica-promote-attempt.png" alt-text="Screenshot of promotion error when missing virtual endpoint.":::
+#### [CLI](#tab/cli)
+You can create a virtual endpoint by using the [`az postgres flexible-server virtual-endpoint create`](/cli/azure/postgres/flexible-server/virtual-endpoint#az-postgres-flexible-server-virtual-endpoint-create) command.
+
+```azurecli-interactive
+ az postgres flexible-server virtual-endpoint create \
+ --resource-group <resource-group> \
+ --server-name <primary-name> \
+ --name <virtual-endpoint-name> \
+ --endpoint-type ReadWrite \
+ --members <replica-name>
+```
+
+Replace `<resource-group>`, `<primary-name>`, `<virtual-endpoint-name>`, and `<replica-name>` with your specific values.
++ #### [REST API](#tab/restapi) To create a virtual endpoint in a preview environment using Azure's REST API, you would use an `HTTP PUT` request. The request would look like this:
To list virtual endpoints in the preview version of Azure Database for PostgreSQ
:::image type="content" source="./media/how-to-read-replicas-portal/virtual-endpoints-show.png" alt-text="Screenshot of virtual endpoints list." lightbox="./media/how-to-read-replicas-portal/virtual-endpoints-show.png":::
+#### [CLI](#tab/cli)
+
+You can view the details of the virtual endpoint using either the [`list`](/cli/azure/postgres/flexible-server/virtual-endpoint#az-postgres-flexible-server-virtual-endpoint-list) or [`show`](/cli/azure/postgres/flexible-server/virtual-endpoint#az-postgres-flexible-server-virtual-endpoint-show) command. Given that only one virtual endpoint is allowed per primary-replica pair, both commands will yield the same result.
+
+Here's an example of how to use the `list` command:
+
+```azurecli-interactive
+az postgres flexible-server virtual-endpoint list \
+ --resource-group <resource-group> \
+ --server-name <server-name>
+```
+
+Replace `<server-name>` with the name of your primary server and `<resource-group>` with the name of your resource group.
+
+Here's how you can use the `show` command:
+
+```azurecli-interactive
+az postgres flexible-server virtual-endpoint show \
+ --name <virtual-endpoint-name>
+ --resource-group <resource-group> \
+ --server-name <server-name>
+```
+In this command, replace `<virtual-endpoint-name>`, `<server-name>`, and `<resource-group>` with the respective names. `<server-name>` is the name of your primary server.
+ #### [REST API](#tab/restapi) ```http request
To promote replica from the Azure portal, follow these steps:
6. Select **Promote** to begin the process. Once it's completed, the roles reverse: the replica becomes the primary, and the primary will assume the role of the replica.
+#### [CLI](#tab/cli)
+
+When promoting a replica to a primary server in Azure PostgreSQL Flexible Server, use the `az postgres flexible-server replica promote` command. This process elevates the replica server to the primary role and demotes the current primary to the replica role. Specify `--promote-mode switchover` and `--promote-option planned` in the command.
+
+```azurecli-interactive
+az postgres flexible-server replica promote \
+ --resource-group <resource-group> \
+ --name <replica-server-name> \
+ --promote-mode switchover \
+ --promote-option planned
+```
+
+Replace `<resource-group>` and `<replica-server-name>` with your specific resource group and replica server name. This command ensures a smooth transition of the replica to a primary role in a planned manner.
+ #### [REST API](#tab/restapi) When promoting a replica to a primary server, use an `HTTP PATCH` request with a specific `JSON` body to set the promotion options. This process is crucial when you need to elevate a replica server to act as the primary server.
Repeat the same operations to promote the original server to the primary.
6. Select **Promote**, the process begins. Once it's completed, the roles reverse: the replica becomes the primary, and the primary will assume the role of the replica.
+#### [CLI](#tab/cli)
+
+This time, change the `<replica-server-name>` in the `az postgres flexible-server replica promote` command to refer to your old primary server, which is currently acting as a replica, and run the command again.
+
+```azurecli-interactive
+az postgres flexible-server replica promote \
+ --resource-group <resource-group> \
+ --name <replica-server-name> \
+ --promote-mode switchover \
+ --promote-option planned
+```
+
+Replace `<resource-group>` and `<replica-server-name>` with your specific resource group and current replica server name.
+ #### [REST API](#tab/restapi) This time, change the `{replicaserverName}` in the API request to refer to your old primary server, which is currently acting as a replica, and execute the request again.
Create a secondary read replica in a separate region to modify the reader virtua
:::image type="content" source="./media/how-to-read-replicas-portal/primary-updating.png" alt-text="Screenshot of primary entering into updating status." lightbox="./media/how-to-read-replicas-portal/primary-updating.png":::
+#### [CLI](#tab/cli)
+
+You can create a secondary read replica by using the [`az postgres flexible-server replica create`](/cli/azure/postgres/flexible-server/replica#az-postgres-flexible-server-replica-create) command.
+
+```azurecli-interactive
+az postgres flexible-server replica create \
+ --replica-name <replica-name> \
+ --resource-group <resource-group> \
+ --source-server <source-server-name> \
+ --location <location>
+```
+
+Choose a distinct name for `<replica-name>` to differentiate it from the primary server and any other replicas.
+Replace `<resource-group>`, `<source-server-name>` and `<location>` with your specific values.
+ #### [REST API](#tab/restapi) You can create a secondary read replica by using the [create API](/rest/api/postgresql/flexibleserver/servers/create):
The location is set to `westus3`, but you can adjust this based on your geograph
5. Select **Save**. The reader endpoint will now be pointed at the secondary replica, and the promote operation will now be tied to this replica.
+#### [CLI](#tab/cli)
+
+You can now modify your reader endpoint to point to the newly created secondary replica by using the `az postgres flexible-server virtual-endpoint update` command. Remember to replace `<replica-name>` with the name of the newly created read replica.
+
+```azurecli-interactive
+az postgres flexible-server virtual-endpoint update \
+ --resource-group <resource-group> \
+ --server-name <server-name> \
+ --name <virtual-endpoint-name> \
+ --endpoint-type ReadWrite \
+ --members <replica-name>
+```
+
+Replace `<resource-group>`, `<server-name>`, `<virtual-endpoint-name>` and `<replica-name>` with your specific values.
+ #### [REST API](#tab/restapi) You can now modify your reader endpoint to point to the newly created secondary replica by using a `PATCH` request. Remember to replace `{replicaserverName}` with the name of the newly created read replica.
Rather than switchover to a replica, it's also possible to break the replication
6. Select **Promote**, the process begins. Once completed, the server will no longer be a replica of the primary.
+#### [CLI](#tab/cli)
+
+When promoting a replica in Azure PostgreSQL Flexible Server, the default behavior is to promote it to an independent server. This is achieved using the [`az postgres flexible-server replica promote`](/cli/azure/postgres/flexible-server/replica#az-postgres-flexible-server-replica-promote) command without specifying the `--promote-mode` option, as `standalone` mode is assumed by default.
+
+```azurecli-interactive
+az postgres flexible-server replica promote \
+ --resource-group <resource-group> \
+ --name <replica-server-name>
+```
+
+In this command, replace `<resource-group>` and `<replica-server-name>` with your specific resource group name and the name of the first replica server that you created, which is no longer part of the virtual endpoint.
+++ #### [REST API](#tab/restapi) You can promote a replica to a standalone server using a `PATCH` request. To do this, send a `PATCH` request to the specified Azure Management REST API URL with the first `JSON` body, where `PromoteMode` is set to `standalone` and `PromoteOption` to `planned`. The second `JSON` body format, setting `ReplicationRole` to `None`, is deprecated but still mentioned here for backward compatibility.
PATCH https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups
4. A delete confirmation dialog will appear. It will warn you: "This action will delete the virtual endpoint `virtualendpointName`. Any clients connected using these domains may lose access." Acknowledge the implications and confirm by clicking on **Delete**.
+#### [CLI](#tab/cli)
+
+To remove a virtual endpoint from an Azure PostgreSQL Flexible Server, you can use the [`az postgres flexible-server virtual-endpoint delete`](/cli/azure/postgres/flexible-server/virtual-endpoint#az-postgres-flexible-server-virtual-endpoint-delete) command. This action permanently deletes the specified virtual endpoint.
+
+```azurecli-interactive
+az postgres flexible-server virtual-endpoint delete \
+ --resource-group <resource-group> \
+ --server-name <server-name> \
+ --name <virtual-endpoint-name>
+```
+
+In this command, replace `<resource-group>`, `<server-name>`, and `<virtual-endpoint-name>` with your specific resource group, server name, and the name of the virtual endpoint you wish to delete.
++ #### [REST API](#tab/restapi) To delete a virtual endpoint in a preview environment using Azure's REST API, you would issue an `HTTP DELETE` request. The request URL would be structured as follows:
To delete a virtual endpoint in a preview environment using Azure's REST API, yo
DELETE https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBForPostgreSql/flexibleServers/{serverName}/virtualendpoints/{virtualendpointName}?api-version=2023-06-01-preview ``` - ## Delete a replica
You can also delete the read replica from the **Replication** window by followin
5. Acknowledge **Delete** operation.
+#### [CLI](#tab/cli)
+To delete a primary or replica server, use the [`az postgres flexible-server delete`](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-delete) command. If the server has read replicas, delete the read replicas before deleting the primary server.
+
+```azurecli-interactive
+az postgres flexible-server delete \
+ --resource-group <resource-group> \
+ --name <server-name>
+```
+
+Replace `<resource-group>` and `<server-name>` with your resource group name and the replica server name you wish to delete.
#### [REST API](#tab/restapi) To delete a primary or replica server, use the [delete API](/rest/api/postgresql/flexibleserver/servers/delete). If the server has read replicas, delete the read replicas before deleting the primary server.
To delete a server from the Azure portal, follow these steps:
:::image type="content" source="./media/how-to-read-replicas-portal/delete-primary-confirm.png" alt-text="Screenshot of confirming to delete the primary server.":::
+#### [CLI](#tab/cli)
+To delete a primary or replica server, use the [`az postgres flexible-server delete`](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-delete) command. If the server has read replicas, delete the read replicas before deleting the primary server.
+
+```azurecli-interactive
+az postgres flexible-server delete \
+ --resource-group <resource-group> \
+ --name <server-name>
+```
+
+Replace `<resource-group>` and `<server-name>` with your resource group name and the primary server name you wish to delete.
#### [REST API](#tab/restapi) To delete a primary or replica server, use the [delete API](/rest/api/postgresql/flexibleserver/servers/delete). If the server has read replicas, delete the read replicas before deleting the primary server.
postgresql How To Server Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-cli.md
Title: List and download server logs with Azure CLI
-description: This article describes how to list and download Azure Database for PostgreSQL - Flexible Server logs by using the Azure CLI.
+ Title: Download server logs for Azure Database for PostgreSQL - Flexible Server with Azure CLI
+description: This article describes how to download server logs using Azure CLI.
Previously updated : 1/10/2024 Last updated : 1/16/2024 # List and download Azure Database for PostgreSQL - Flexible Server logs by using the Azure CLI
az account set --subscription <subscription id>
## List server logs using Azure CLI
-Once you're configured the prerequisites and connected to your required subscription.
-You can list the server logs from your Azure Database for PostgreSQL flexible server instance by using the following command.
+Once you've configured the prerequisites and connected to your required subscription, you can list the server logs from your Azure Database for PostgreSQL flexible server instance by using the following command.
+> [!Note]
+> You can configure your server logs in the same way as above using the [Server Parameters](./howto-configure-server-parameters-using-portal.md), setting the appropriate values for these parameters: _logfiles.download_enable_ to ON to enable this feature, and _logfiles.retention_days_ to define retention in days. Initially, server logs occupy data disk space for about an hour before moving to backup storage for the set retention period.
```azurecli az postgres flexible-server server-logs list --resource-group <myresourcegroup> --server-name <serverlogdemo> --out <table>
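If you prefer to set the log parameters mentioned in the note from the CLI rather than the portal, here is a sketch using `az postgres flexible-server parameter set`; the resource group and server names are the same placeholders used above.

```azurecli-interactive
# Enable server log capture.
az postgres flexible-server parameter set \
    --resource-group <myresourcegroup> \
    --server-name <serverlogdemo> \
    --name logfiles.download_enable \
    --value ON

# Retain captured log files for three days.
az postgres flexible-server parameter set \
    --resource-group <myresourcegroup> \
    --server-name <serverlogdemo> \
    --name logfiles.retention_days \
    --value 3
```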
postgresql How To Server Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-portal.md
Title: 'How to enable and download server logs for Azure Database for PostgreSQL - Flexible Server'
+ Title: 'Download server logs for Azure Database for PostgreSQL - Flexible Server'
description: This article describes how to download server logs using Azure portal. Previously updated : 1/10/2024 Last updated : 1/16/2024 # Enable, list and download server logs for Azure Database for PostgreSQL - Flexible Server
To enable the server logs feature, perform the following steps.
4. To configure the retention period (in days), choose the slider. The minimum retention is 1 day and the maximum is 7 days. > [!Note]
-> You can configure your server logs in the same way as above using the [Server Parameters](./howto-configure-server-parameters-using-portal.md), setting the appropriate values for these parameters: _logfiles.download_enable_ to ON to enable this feature, and _logfiles.retention_days_ to define retention in days.
+> You can configure your server logs in the same way as above using the [Server Parameters](./howto-configure-server-parameters-using-portal.md), setting the appropriate values for these parameters: _logfiles.download_enable_ to ON to enable this feature, and _logfiles.retention_days_ to define retention in days. Initially, server logs occupy data disk space for about an hour before moving to backup storage for the set retention period.
## Download Server logs
To download server logs, perform the following steps.
:::image type="content" source="./media/how-to-server-logs-portal/5-how-to-server-log.png" alt-text="Screenshot showing server Logs - Disable.":::
-3. Select Save.
+3. Select Save.
## Next steps - To enable and disable Server logs from CLI, you can refer to the [article.](./how-to-server-logs-cli.md)
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | France South | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Israel Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Italy North | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Japan East | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Japan West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: | | Jio India West | :heavy_check_mark: (v3 only) | :x: | :heavy_check_mark: | :x: |
One advantage of running your workload in Azure is global reach. The flexible se
| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Switzerland West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
+| UAE Central* | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
| UAE North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: | | US Gov Arizona | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: | | US Gov Texas | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :x: |
$ New Zone-redundant high availability deployments are temporarily blocked in th
$$ New server deployments are temporarily blocked in these regions. Already provisioned servers are fully supported. ** Zone-redundant high availability can now be deployed when you provision new servers in these regions. Any existing servers deployed in AZ with *no preference* (which you can check on the Azure portal) before the region started to support AZ, even when you enable zone-redundant HA, the standby is provisioned in the same AZ (same-zone HA) as the primary server. To enable zone-redundant high availability, [follow the steps](how-to-manage-high-availability-portal.md#enabling-zone-redundant-ha-after-the-region-supports-az).
+(*) Certain regions are access-restricted to support specific customer scenarios, such as in-country/region disaster recovery. These regions are available only on request; to get access, create a new support request.
<!-- We continue to add more regions for flexible servers. --> > [!NOTE]
private-link How To Approve Private Link Cross Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/how-to-approve-private-link-cross-subscription.md
+
+ Title: Approve private link connections across subscriptions
+
+description: Get started learning how to approve and manage private link connections across subscriptions with Azure Private Link.
++++ Last updated : 01/11/2024
+#customer intent: As a Network Administrator, I want to approve private link connections across Azure subscriptions.
+++
+# Approve private link connections across subscriptions
+
+Azure Private Link enables you to connect privately to Azure resources. Private Link connections are scoped to a specific subscription. This article shows you how to approve a private endpoint connection across subscriptions.
+
+## Prerequisites
+
+- Two active Azure subscriptions.
+
+ - One subscription hosts the Azure resource and the other subscription contains the consumer private endpoint and virtual network.
+
+- An administrator account for each subscription or an account with permissions in each subscription to create and manage resources.
+
+Resources used in this article:
+
+| Resource | Subscription | Resource group | Location |
+| | | | |
+| **storage1** *(This name is unique, replace with the name you create)* | subscription-1 | test-rg | East US 2 |
+| **vnet-1** | subscription-2 | test-rg | East US 2 |
+| **private-endpoint** | subscription-2 | test-rg | East US 2 |
+
+## Sign in to subscription-1
+
+Sign in to **subscription-1** in the [Azure portal](https://portal.azure.com).
+
+## Register the resource providers for subscription-1
+
+For the private endpoint connection to complete successfully, the `Microsoft.Network` and `Microsoft.Storage` resource providers must be registered in **subscription-1**. Use the following steps to register the resource providers. If the `Microsoft.Network` and `Microsoft.Storage` resource providers are already registered, skip this step.
+
+> [!IMPORTANT]
+> If you're using a different resource type, you must register the resource provider for that resource type if it's not already registered.
+
+1. In the search box at the top of the portal, enter **Subscription**. Select **Subscriptions** in the search results.
+
+1. Select **subscription-1**.
+
+1. In **Settings**, select **Resource providers**.
+
+1. In the **Resource providers** filter box, enter **Microsoft.Storage**. Select **Microsoft.Storage**.
+
+1. Select **Register**.
+
+1. Repeat the previous steps to register the **Microsoft.Network** resource provider.
+
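The same registrations can be performed from the CLI; a quick sketch:

```azurecli-interactive
# Register the resource providers required for the private endpoint connection.
az provider register --namespace Microsoft.Storage
az provider register --namespace Microsoft.Network

# Optionally, confirm the registration state.
az provider show --namespace Microsoft.Storage --query registrationState
```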
+## Create a resource group
+
+1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results.
+
+1. Select **+ Create**.
+
+1. In the **Basics** tab of **Create a resource group**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select **subscription-1**. |
+ | Resource group | Enter **test-rg**. |
+ | Region | Select **East US 2**. |
+
+1. Select **Review + Create**.
+
+1. Select **Create**.
++
+## Obtain storage account resource ID
+
+You need the storage account resource ID to create the private endpoint connection in **subscription-2**. Use the following steps to obtain the storage account resource ID.
+
+1. In the search box at the top of the portal, enter **Storage account**. Select **Storage accounts** in the search results.
+
+1. Select **storage1** or the name of your existing storage account.
+
+1. In **Settings**, select **Endpoints**.
+
+1. Copy the entry in **Storage account resource ID**.
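+
+Alternatively, you can retrieve the resource ID with the Azure CLI. A minimal sketch, assuming the example names used in this article:
+
+```bash
+# Print the resource ID of the storage account in subscription-1
+az storage account show \
+    --name storage1 \
+    --resource-group test-rg \
+    --query id \
+    --output tsv
+```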
+
+## Sign in to subscription-2
+
+Sign in to **subscription-2** in the [Azure portal](https://portal.azure.com).
+
+## Register the resource providers for subscription-2
+
+For the private endpoint connection to complete successfully, the `Microsoft.Storage` and `Microsoft.Network` resource providers must be registered in **subscription-2**. Use the following steps to register the resource providers. If the `Microsoft.Storage` and `Microsoft.Network` resource providers are already registered, skip this step.
+
+> [!IMPORTANT]
+> If you're using a different resource type, you must register the resource provider for that resource type if it's not already registered.
+
+1. In the search box at the top of the portal, enter **Subscription**. Select **Subscriptions** in the search results.
+
+1. Select **subscription-2**.
+
+1. In **Settings**, select **Resource providers**.
+
+1. In the **Resource providers** filter box, enter **Microsoft.Storage**. Select **Microsoft.Storage**.
+
+1. Select **Register**.
+
+1. Repeat the previous steps to register the **Microsoft.Network** resource provider.
++
+## Create private endpoint
+
+1. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints**.
+
+1. Select **+ Create** in **Private endpoints**.
+
+1. In the **Basics** tab of **Create a private endpoint**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select **subscription-2**. |
 | Resource group | Select **test-rg**. |
+ | **Instance details** | |
+ | Name | Enter **private-endpoint**. |
+ | Network Interface Name | Leave the default of **private-endpoint-nic**. |
+ | Region | Select **East US 2**. |
+
+1. Select **Next: Resource**.
+
+1. Select **Connect to an Azure resource by resource ID or alias**.
+
+1. In **Resource ID or alias**, paste the storage account resource ID that you copied earlier.
+
+1. In **Target sub-resource**, enter **blob**.
+
+1. Select **Next: Virtual Network**.
+
+1. In **Virtual Network**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Networking** | |
+ | Virtual network | Select **vnet-1 (test-rg)**. |
+ | Subnet | Select **subnet-1**. |
+
+1. Select **Next: DNS**.
+
+1. Select **Next: Tags**.
+
+1. Select **Review + Create**.
+
+1. Select **Create**.
+
+## Approve private endpoint connection
+
+The private endpoint connection is in a **Pending** state until approved. Use the following steps to approve the private endpoint connection in **subscription-1**.
+
+1. In the search box at the top of the portal, enter **Private endpoint**. Select **Private endpoints**.
+
+1. Select **Pending connections**.
+
+1. Select the box next to your storage account in **subscription-1**.
+
+1. Select **Approve**.
+
+1. Select **Yes** in **Approve connection**.
+
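+The approval can also be done with the Azure CLI. A minimal sketch, assuming you're signed in to **subscription-1** and that `<connection-resource-id>` is the ID of the pending private endpoint connection (you can find connections with `az network private-endpoint-connection list`):
+
+```bash
+# Approve the pending private endpoint connection on the storage account
+az network private-endpoint-connection approve \
+    --id "<connection-resource-id>" \
+    --description "Approved cross-subscription connection"
+```
+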
+## Next steps
+
+In this article, you learned how to approve a private endpoint connection across subscriptions. To learn more about Azure Private Link, continue to the following articles:
+
+- [Azure Private Link overview](private-link-overview.md)
+
+- [Azure Private endpoint overview](private-endpoint-overview.md)
role-based-access-control Troubleshoot Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshoot-limits.md
Previously updated : 12/01/2023 Last updated : 01/12/2024
To reduce the number of role assignments in the subscription, add principals (us
This query checks active role assignments and doesn't consider eligible role assignments in [Microsoft Entra Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
+ If you are using [role assignment conditions](conditions-overview.md) or [delegating role assignment management with conditions](delegate-role-assignments-overview.md), you should use the Conditions query. Otherwise, use the Default query.
+
+ # [Default](#tab/default)
+ [!INCLUDE [resource-graph-query-authorization-same-role-scope](../governance/includes/resource-graph/query/authorization-same-role-scope.md)]
+ # [Conditions](#tab/conditions)
+
+ [!INCLUDE [resource-graph-query-authorization-same-role-scope-condition](../governance/includes/resource-graph/query/authorization-same-role-scope-condition.md)]
+
+
+ The following shows an example of the results. The **count_** column is the number of principals assigned the same role at the same scope. The count is sorted in descending order. :::image type="content" source="media/troubleshoot-limits/authorization-same-role-scope.png" alt-text="Screenshot of Azure Resource Graph Explorer that shows role assignments with the same role and at the same scope, but for different principals." lightbox="media/troubleshoot-limits/authorization-same-role-scope.png":::
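+
+ The included queries aren't reproduced here, but the following is an illustrative sketch (not the exact included query) of how this kind of check can be run from the CLI with Azure Resource Graph, assuming the `resource-graph` extension is installed:
+
+ ```bash
+ # Count role assignments that share the same role and scope but different principals
+ az graph query -q "
+ authorizationresources
+ | where type =~ 'microsoft.authorization/roleassignments'
+ | extend roleDefinitionId = tostring(properties.roleDefinitionId), scope = tostring(properties.scope)
+ | summarize count_ = count() by roleDefinitionId, scope
+ | order by count_ desc
+ "
+ ```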
To reduce the number of role assignments in the subscription, remove redundant r
This query checks active role assignments and doesn't consider eligible role assignments in [Microsoft Entra Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
+ If you are using [role assignment conditions](conditions-overview.md) or [delegating role assignment management with conditions](delegate-role-assignments-overview.md), you should use the Conditions query. Otherwise, use the Default query.
+
+ # [Default](#tab/default)
+ [!INCLUDE [resource-graph-query-authorization-same-role-principal](../governance/includes/resource-graph/query/authorization-same-role-principal.md)]
+ # [Conditions](#tab/conditions)
+
+ [!INCLUDE [resource-graph-query-authorization-same-role-principal-condition](../governance/includes/resource-graph/query/authorization-same-role-principal-condition.md)]
+
+
+ The following shows an example of the results. The **count_** column is the number of different scopes for role assignments with the same role and same principal. The count is sorted in descending order. :::image type="content" source="media/troubleshoot-limits/authorization-same-role-principal.png" alt-text="Screenshot of Azure Resource Graph Explorer that shows role assignments for the same role and same principal, but at different scopes." lightbox="media/troubleshoot-limits/authorization-same-role-principal.png":::
To reduce the number of role assignments in the subscription, replace multiple b
This query checks active role assignments and doesn't consider eligible role assignments in [Microsoft Entra Privileged Identity Management](../active-directory/privileged-identity-management/pim-resource-roles-assign-roles.md).
+ If you are using [role assignment conditions](conditions-overview.md) or [delegating role assignment management with conditions](delegate-role-assignments-overview.md), you should use the Conditions query. Otherwise, use the Default query.
+
+ # [Default](#tab/default)
+ [!INCLUDE [resource-graph-query-authorization-same-principal-scope](../governance/includes/resource-graph/query/authorization-same-principal-scope.md)]
+ # [Conditions](#tab/conditions)
+
+ [!INCLUDE [resource-graph-query-authorization-same-principal-scope-condition](../governance/includes/resource-graph/query/authorization-same-principal-scope-condition.md)]
+
+
+ The following shows an example of the results. The **count_** column is the number of different built-in role assignments with the same principal and same scope. The count is sorted in descending order. :::image type="content" source="media/troubleshoot-limits/authorization-same-principal-scope.png" alt-text="Screenshot of Azure Resource Graph Explorer that shows role assignments with the same principal and same scope." lightbox="media/troubleshoot-limits/authorization-same-principal-scope.png":::
sap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/tutorial.md
When you choose a name for your service principal, make sure that the name is un
```cloudshell-interactive export ARM_SUBSCRIPTION_ID="<subscriptionId>"
- export control_plane_env_code="MGMT"
+ export control_plane_env_code="LAB"
az ad sp create-for-rbac --role="Contributor" \ --scopes="/subscriptions/${ARM_SUBSCRIPTION_ID}" \
As a part of the SAP automation framework control plane, you can optionally crea
If you would like to use the web app, you must first create an app registration for authentication purposes. Open the Azure Cloud Shell and execute the following commands:
-Replace MGMT with your environment as necessary.
+Replace LAB with your environment as necessary.
```bash
-export env_code="MGMT"
+export env_code="LAB"
echo '[{"resourceAppId":"00000003-0000-0000-c000-000000000000","resourceAccess":[{"id":"e1fe6dd8-ba31-4d61-89e7-88639da4683d","type":"Scope"}]}]' >> manifest.json
code .
| Canada Central | CACE | | Central US | CEUS | | East US | EAUS |
- | North Europe | NOEU |
+ | West Europe | WEEU |
| South Africa North | SANO | | Southeast Asia | SOEA | | UK South | UKSO |
code .
```terraform # The environment value is a mandatory field, it is used for partitioning the environments, for example, PROD and NP.
- environment = "MGMT"
+ environment = "LAB"
# The location/region value is a mandatory field, it is used to control where the resources are deployed location = "westeurope"
code .
```terraform # The environment value is a mandatory field, it is used for partitioning the environments, for example, PROD and NP.
- environment = "MGMT"
+ environment = "LAB"
# The location/region value is a mandatory field, it is used to control where the resources are deployed location = "westeurope"
Use the [deploy_controlplane.sh](bash/deploy-controlplane.md) script to deploy t
The deployment goes through cycles of deploying the infrastructure, refreshing the state, and uploading the Terraform state files to the library storage account. All of these steps are packaged into a single deployment script. The script needs the location of the configuration file for the deployer and library, and some other parameters.
-For example, choose **North Europe** as the deployment location, with the four-character name `NOEU`, as previously described. The sample deployer configuration file `MGMT-NOEU-DEP00-INFRASTRUCTURE.tfvars` is in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/DEPLOYER/MGMT-NOEU-DEP00-INFRASTRUCTURE` folder.
+For example, choose **West Europe** as the deployment location, with the four-character name `WEEU`, as previously described. The sample deployer configuration file `LAB-WEEU-DEP05-INFRASTRUCTURE.tfvars` is in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/DEPLOYER/LAB-WEEU-DEP05-INFRASTRUCTURE` folder.
-The sample SAP library configuration file `MGMT-NOEU-SAP_LIBRARY.tfvars` is in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/LIBRARY/MGMT-NOEU-SAP_LIBRARY` folder.
+The sample SAP library configuration file `LAB-WEEU-SAP_LIBRARY.tfvars` is in the `~/Azure_SAP_Automated_Deployment/WORKSPACES/LIBRARY/LAB-WEEU-SAP_LIBRARY` folder.
Set the environment variables for the service principal:
export TF_use_webapp=true
```bash
-export env_code="MGMT"
-export vnet_code="DEP00"
+export env_code="LAB"
+export vnet_code="DEP05"
export region_code="<region_code>" export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation"
You need to note some values for upcoming steps. Look for this text block in the
######################################################################################### # # # Please save these values: #
-# - Key Vault: MGMTNOEUDEP00user39B #
+# - Key Vault: LABWEEUDEP05user39B #
# - Deployer IP: x.x.x.x # # - Storage Account: mgmtnoeutfstate53e # # - Web Application Name: mgmt-noeu-sapdeployment39B #
You need to note some values for upcoming steps. Look for this text block in the
2. Go to the [Azure portal](https://portal.azure.com).
- Select **Resource groups**. Look for new resource groups for the deployer infrastructure and library. For example, you might see `MGMT-[region]-DEP00-INFRASTRUCTURE` and `MGMT-[region]-SAP_LIBRARY`.
+ Select **Resource groups**. Look for new resource groups for the deployer infrastructure and library. For example, you might see `LAB-[region]-DEP05-INFRASTRUCTURE` and `LAB-[region]-SAP_LIBRARY`.
The contents of the deployer and SAP library resource group are shown here.
To connect to your deployer VM:
1. Select or search for **Key vaults**.
-1. On the **Key vault** page, find the deployer key vault. The name starts with `MGMT[REGION]DEP00user`. Filter by **Resource group** or **Location**, if necessary.
+1. On the **Key vault** page, find the deployer key vault. The name starts with `LAB[REGION]DEP05user`. Filter by **Resource group** or **Location**, if necessary.
1. On the **Settings** section in the left pane, select **Secrets**.
-1. Find and select the secret that contains **sshkey**. It might look like `MGMT-[REGION]-DEP00-sshkey`.
+1. Find and select the secret that contains **sshkey**. It might look like `LAB-[REGION]-DEP05-sshkey`.
1. On the secret's page, select the current version. Then, copy the **Secret value**.
export ARM_TENANT_ID="<tenantId>"
```bash
-export env_code="MGMT"
-export vnet_code="DEP00"
+export env_code="LAB"
+export vnet_code="DEP05"
export region_code="<region_code>" storage_accountname="mgmtneweeutfstate###"
-vault_name="MGMTNOEUDEP00user###"
+vault_name="LABWEEUDEP05user###"
export DEPLOYMENT_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/sap-automation" export CONFIG_REPO_PATH="${HOME}/Azure_SAP_Automated_Deployment/WORKSPACES"
${SAP_AUTOMATION_REPO_PATH}/deploy/scripts/deploy_controlplane.sh \
You can deploy the web application using the following script: ```bash
-export env_code="MGMT"
-export vnet_code="DEP00"
+export env_code="LAB"
+export vnet_code="DEP05"
export region_code="<region_code>" export webapp_name="<webAppName>" export app_id="<appRegistrationId>"
az webapp restart --resource-group ${env_code}-${region_code}-${vnet_code}-INFRA
1. Collect the following information in a text editor. This information was collected at the end of the "Deploy the control plane" phase. 1. The name of the Terraform state file storage account in the library resource group:
- - Following from the preceding example, the resource group is `MGMT-NOEU-SAP_LIBRARY`.
+ - Following from the preceding example, the resource group is `LAB-WEEU-SAP_LIBRARY`.
- The name of the storage account contains `mgmtnoeutfstate`. 1. The name of the key vault in the deployer resource group:
- - Following from the preceding example, the resource group is `MGMT-NOEU-DEP00-INFRASTRUCTURE`.
- - The name of the key vault contains `MGMTNOEUDEP00user`.
+ - Following from the preceding example, the resource group is `LAB-WEEU-DEP05-INFRASTRUCTURE`.
+ - The name of the key vault contains `LABWEEUDEP05user`.
1. The public IP address of the deployer VM. Go to your deployer's resource group, open the deployer VM, and copy the public IP address.
az webapp restart --resource-group ${env_code}-${region_code}-${vnet_code}-INFRA
1. The name of the deployer state file is found under the library resource group: - Select **Library resource group** > **State storage account** > **Containers** > `tfstate`. Copy the name of the deployer state file.
- - Following from the preceding example, the name of the blob is `MGMT-NOEU-DEP00-INFRASTRUCTURE.terraform.tfstate`.
+ - Following from the preceding example, the name of the blob is `LAB-WEEU-DEP05-INFRASTRUCTURE.terraform.tfstate`.
1. If necessary, register the Service Principal.
- The first time an environment is instantiated, a Service Principal must be registered. In this tutorial, the control plane is in the `MGMT` environment and the workload zone is in `DEV`. Therefore, a Service Principal must be registered for the `DEV` environment.
+ The first time an environment is instantiated, a Service Principal must be registered. In this tutorial, the control plane is in the `LAB` environment and the workload zone is in `DEV`. Therefore, a Service Principal must be registered for the `DEV` environment.
```bash export ARM_SUBSCRIPTION_ID="<subscriptionId>"
Use the [install_workloadzone](bash/install-workloadzone.md) script to deploy th
From the example region `westeurope`, the folder looks like: ```bash
- cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-NOEU-SAP01-INFRASTRUCTURE
+ cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/DEV-WEEU-SAP01-INFRASTRUCTURE
``` 1. Optionally, open the workload zone configuration file and, if needed, change the network logical name to match the network name.
Use the [install_workloadzone](bash/install-workloadzone.md) script to deploy th
```bash export tfstate_storage_account="<storageaccountName>"
-export deployer_env_code="MGMT"
+export deployer_env_code="LAB"
export sap_env_code="DEV" export region_code="<region_code>" export key_vault="<vaultName>"
-export deployer_vnet_code="DEP01"
-export vnet_code="SAP02"
+export deployer_vnet_code="DEP05"
+export vnet_code="SAP04"
export ARM_SUBSCRIPTION_ID="<subscriptionId>" export ARM_CLIENT_ID="<appId>"
${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \
The deployment command for the `westeurope` example looks like: ```bash
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-NOEU-SAP01-X00
+cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00
${DEPLOYMENT_REPO_PATH}/deploy/scripts/installer.sh \
- --parameterfile DEV-NOEU-SAP01-X00.tfvars \
+ --parameterfile DEV-WEEU-SAP01-X00.tfvars \
--type sap_system \ --auto-approve ```
materials:
- name: "Kernel Part I ; OS: Linux on x86_64 64bit ; DB: Database independent" ```
-For this example configuration, the resource group is `MGMT-NOEU-DEP00-INFRASTRUCTURE`. The deployer key vault name contains `MGMTNOEUDEP00user` in the name. You use this information to configure your deployer's key vault secrets.
+For this example configuration, the resource group is `LAB-WEEU-DEP05-INFRASTRUCTURE`. The deployer key vault name contains `LABWEEUDEP05user` in the name. You use this information to configure your deployer's key vault secrets.
1. Connect to your deployer VM for the following steps. A copy of the repo is now there.
The SAP application installation happens through Ansible playbooks.
Go to the system deployment folder. ```bash
-cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-NOEU-SAP01-X00/
+cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/DEV-WEEU-SAP01-X00/
``` Make sure you have the following files in the current folders: `sap-parameters.yaml` and `X00_host.yaml`.
Before you begin, sign in to your Azure account. Then, check that you're in the
### Remove the SAP infrastructure
-Go to the `DEV-NOEU-SAP01-X00` subfolder inside the `SYSTEM` folder. Then, run this command:
+Go to the `DEV-WEEU-SAP01-X00` subfolder inside the `SYSTEM` folder. Then, run this command:
```bash export sap_env_code="DEV"
-export region_code="NOEU"
-export sap_vnet_code="SAP02"
+export region_code="WEEU"
+export sap_vnet_code="SAP04"
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/SYSTEM/${sap_env_code}-${region_code}-${sap_vnet_code}-X00
Go to the `DEV-XXXX-SAP01-INFRASTRUCTURE` subfolder inside the `LANDSCAPE` folde
```bash export sap_env_code="DEV"
-export region_code="NOEU"
+export region_code="WEEU"
export sap_vnet_code="SAP01" cd ~/Azure_SAP_Automated_Deployment/WORKSPACES/LANDSCAPE/${sap_env_code}-${region_code}-${sap_vnet_code}-INFRASTRUCTURE
export ARM_SUBSCRIPTION_ID="<subscriptionId>"
Run the following command: ```bash
-export region_code="NOEU"
-export env_code="MGMT"
-export vnet_code="DEP00"
+export region_code="WEEU"
+export env_code="LAB"
+export vnet_code="DEP05"
cd ~/Azure_SAP_Automated_Deployment/WORKSPACES ${DEPLOYMENT_REPO_PATH}/deploy/scripts/remove_controlplane.sh \
sentinel Connect Cef Ama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md
Title: Stream CEF logs to Microsoft Sentinel with the AMA connector
-description: Stream and filter CEF-based logs from on-premises appliances to your Microsoft Sentinel workspace.
-
+ Title: Ingest CEF logs to Microsoft Sentinel with the Azure Monitor Agent
+description: Ingest and filter CEF-based logs from security devices and appliances to your Microsoft Sentinel workspace using the data connector based on the Azure Monitor Agent (AMA).
++ Previously updated : 09/19/2022-
-#Customer intent: As a security operator, I want to stream and filter CEF-based logs from on-premises appliances to my Microsoft Sentinel workspace, so I can improve load time and easily analyze the data.
Last updated : 12/20/2023
+#Customer intent: As a security operator, I want to ingest and filter CEF-based logs from security devices and appliances to my Microsoft Sentinel workspace, so that security analysts can monitor activity on these systems and detect security threats.
-# Stream CEF logs with the AMA connector
+# Ingest CEF logs with the Azure Monitor Agent
-This article describes how to use the **Common Event Format (CEF) via AMA** connector to quickly filter and upload logs in the Common Event Format (CEF) from multiple on-premises appliances over Syslog.
+This article describes how to use the **Common Event Format (CEF) via AMA (Preview)** connector to quickly filter and ingest logs in the Common Event Format (CEF) from multiple security devices and appliances over Syslog.
-The connector uses the Azure Monitor Agent (AMA), which uses Data Collection Rules (DCRs). With DCRs, you can filter the logs before they're ingested, for quicker upload, efficient analysis, and querying.
+The connector uses the Azure Monitor Agent (AMA), which takes instructions from Data Collection Rules (DCRs). DCRs specify the systems to monitor, and they define filters to apply to the logs before they're ingested, for better performance and more efficient querying and analysis.
-Learn how to [collect Syslog with the AMA](../azure-monitor/agents/data-collection-syslog.md), including how to configure Syslog and create a DCR.
+You can also collect (non-CEF) Syslog logs with the Azure Monitor Agent. Learn how to [configure Syslog and create a DCR](../azure-monitor/agents/data-collection-syslog.md).
> [!IMPORTANT] > > The CEF via AMA connector is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-The AMA is installed on a Linux machine that acts as a log forwarder, and the AMA collects the logs in the CEF format.
+The AMA is installed on a Linux machine that acts as a log forwarder, and the AMA collects logs sent by your security devices and appliances in the CEF format.
- [Set up the connector](#set-up-the-common-event-format-cef-via-ama-connector)-- [Learn more about the connector](#how-collection-works-with-the-common-event-format-cef-via-ama-connector)
+- [Learn more about the connector](#how-microsoft-sentinel-collects-cef-logs-with-the-azure-monitor-agent)
> [!IMPORTANT] >
-> On **February 28th 2023**, we introduced changes to the CommonSecurityLog table schema. Following this change, you might need to review and update custom queries. For more details, see the [recommended actions section](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232) in this blog post. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) has been updated by Microsoft Sentinel.
+> On **February 28th 2023**, we introduced changes to the CommonSecurityLog table schema. Following this change, you might need to review and update custom queries. For more details, see the **"Recommended actions"** section in [this blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232). Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) has been updated by Microsoft Sentinel.
## Overview
-### What is CEF collection?
+### What is Common Event Format (CEF)?
-Many network, security appliances, and devices send their logs in the CEF format over Syslog. This format includes more structured information than Syslog, with information presented in a parsed key-value arrangement.
+Many network and security devices and appliances send their logs in the Common Event Format (CEF) based on Syslog. This format includes more structured information than Syslog, with information presented in a parsed key-value arrangement. Logs in this format are still transmitted over the syslog port (514) by default, and they are received by the Syslog daemon.
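+
+For example, a CEF message carries a pipe-delimited header followed by key-value extension fields. The following line is a generic illustration (the vendor, product, and field values are made up):
+
+```text
+CEF:0|Contoso|FirewallX|1.0.0|100|Traffic denied|5|src=10.0.0.1 dst=203.0.113.7 spt=49152 dpt=443 proto=TCP
+```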
If your appliance or system sends logs over Syslog using CEF, the integration with Microsoft Sentinel allows you to easily run analytics and queries across the data.
-CEF normalizes the data, making it more immediately useful for analysis with Microsoft Sentinel. Microsoft Sentinel also allows you to ingest unparsed Syslog events, and to analyze them with query time parsing.
+CEF normalizes the data, making it more immediately useful for analysis with Microsoft Sentinel. Microsoft Sentinel also allows you to ingest unparsed Syslog events, and to analyze them with query-time parsing.
-### How collection works with the Common Event Format (CEF) via AMA connector
+### How Microsoft Sentinel collects CEF logs with the Azure Monitor Agent
+This diagram illustrates the architecture of CEF log collection in Microsoft Sentinel, using the **Common Event Format (CEF) via AMA (Preview)** connector.
-1. Your organization sets up a log forwarder (Linux VM), if one doesn't already exist. The forwarder can be on-premises or cloud-based.
-1. Your organization uploads CEF logs from your source devices to the forwarder.
-1. The AMA connector installed on the log forwarder collects and parses the logs.
-1. The connector streams the events to the Microsoft Sentinel workspace to be further analyzed.
-When you install a log forwarder, the originating device must be configured to send Syslog events to the Syslog daemon on this forwarder instead of the local daemon. The Syslog daemon on the forwarder sends events to the Azure Monitor Agent over UDP. If this Linux forwarder is expected to collect a high volume of Syslog events, its Syslog daemon sends events to the agent over TCP instead. In either case, the agent then sends the events from there to your Log Analytics workspace in Microsoft Sentinel.
+The data ingestion process using the Azure Monitor Agent uses the following components and data flows:
+- **CEF log sources:** These are the various security devices and appliances in your environment that produce logs in CEF format. These devices are configured to send their log messages over TCP port 514 to the **Syslog daemon on the log forwarder** instead of to their local Syslog daemon.
+
+- **Log forwarder:** This is a dedicated Linux VM that your organization sets up to collect the log messages from your CEF log sources. The VM can be on-premises, in Azure, or in another cloud. This log forwarder itself has two components:
+ - The **Syslog daemon** (either `rsyslog` or `syslog-ng`) collects the log messages on TCP port 514. The daemon then sends these logs\* to the **Azure Monitor Agent**.
+ - The **Azure Monitor Agent** that you install on the log forwarder by [setting up the data connector according to the instructions below](#set-up-the-common-event-format-cef-via-ama-connector). The agent parses the logs and then sends them to your **Microsoft Sentinel (Log Analytics) workspace**.
+
+- Your **Microsoft Sentinel (Log Analytics) workspace:** CEF logs sent here end up in the *CommonSecurityLog* table, where you can query the logs and perform analytics on them to detect and respond to security threats.
+
+ > [!NOTE]
+ >
+ > \* The Syslog daemon sends logs to the Azure Monitor Agent in two different ways, depending on the AMA version:
+ > - AMA versions **1.28.11** and above receive logs on **TCP port 28330**.
+ > - Earlier versions of AMA receive logs via Unix domain socket.
## Set up the Common Event Format (CEF) via AMA connector
+The setup process for the CEF via AMA connector has two parts:
+
+1. **Install the Azure Monitor Agent and create a Data Collection Rule (DCR)**.
+ - [Using the Azure portal](?tabs=portal#install-the-ama-and-create-a-data-collection-rule-dcr)
+ - [Using the Azure Monitor Logs Ingestion API](?tabs=api#install-the-ama-and-create-a-data-collection-rule-dcr)
+
+1. [**Run the "installation" script**](#run-the-installation-script) on the log forwarder to configure the Syslog daemon.
+ ### Prerequisites
-Before you begin, verify that you have:
+- You must have the Microsoft Sentinel **Common Event Format** solution enabled.
+- Your Azure account must have the following roles/permissions:
+
+ |Built-in role |Scope |Reason |
+ ||||
+ |- [Virtual Machine Contributor](../role-based-access-control/built-in-roles.md)</br>- [Azure Connected Machine<br>&nbsp;&nbsp;&nbsp;Resource Administrator](../role-based-access-control/built-in-roles.md) | - Virtual machines</br>- VM scale sets</br>- Azure Arc-enabled servers | To deploy the agent |
+ |Any role that includes the action<br>*Microsoft.Resources/deployments/\** | - Subscription </br>- Resource group</br>- Existing data collection rule | To deploy Azure Resource Manager templates |
+ |[Monitoring Contributor ](../role-based-access-control/built-in-roles.md) |- Subscription </br>- Resource group </br>- Existing data collection rule | To create or edit data collection rules |
+
+- You must have a designated Linux VM (your **Log forwarder**) to collect logs.
+ - [Create a Linux VM in the Azure portal](../virtual-machines/linux/quick-create-portal.md).
+ - [Supported Linux operating systems for Azure Monitor Agent](../azure-monitor/agents/agents-overview.md#linux).
+
+- If your log forwarder *isn't* an Azure virtual machine, it must have the Azure Arc [Connected Machine agent](../azure-arc/servers/overview.md) installed on it.
+
+- The Linux log forwarder VM must have Python 2.7 or 3 installed. Use the ``python --version`` or ``python3 --version`` command to check.
+
+- The log forwarder must have either the `syslog-ng` or `rsyslog` daemon enabled (a quick check is shown after this list).
-- The Microsoft Sentinel solution enabled. -- A defined Microsoft Sentinel workspace.-- A Linux machine to collect logs.
- - The Linux machine must have Python 2.7 or 3 installed on the Linux machine. Use the ``python --version`` or ``python3 --version`` command to check.
- - For space requirements for your log forwarder, see the [Azure Monitor Agent Performance Benchmark](../azure-monitor/agents/azure-monitor-agent-performance.md). You can also review this blog post, which includes [designs for scalable ingestion](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/designs-for-accomplishing-microsoft-sentinel-scalable-ingestion/ba-p/3741516).
-- Either the `syslog-ng` or `rsyslog` daemon enabled.-- To collect events from any system that isn't an Azure virtual machine, ensure that [Azure Arc](../azure-monitor/agents/azure-monitor-agent-manage.md) is installed.
+- For space requirements for your log forwarder, refer to the [Azure Monitor Agent Performance Benchmark](../azure-monitor/agents/azure-monitor-agent-performance.md). You can also review [this blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/designs-for-accomplishing-microsoft-sentinel-scalable-ingestion/ba-p/3741516), which includes designs for scalable ingestion.
-## Avoid data ingestion duplication
+- Your log sources (your security devices and appliances) must be configured to send their log messages to the log forwarder's Syslog daemon instead of to their local Syslog daemon.
+
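+A quick way to verify that one of the supported Syslog daemons is installed and running on the forwarder, assuming a systemd-based Linux distribution:
+
+```bash
+# Check whether rsyslog or syslog-ng is installed and running
+systemctl status rsyslog --no-pager || systemctl status syslog-ng --no-pager
+```
+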
+#### Avoid data ingestion duplication
Using the same facility for both Syslog and CEF messages may result in data ingestion duplication between the CommonSecurityLog and Syslog tables.
To avoid this scenario, use one of these methods:
where ProcessName !contains \"CEF\" ```
-### Configure a log forwarder
+#### Log forwarder security considerations
-To ingest Syslog and CEF logs into Microsoft Sentinel, you need to designate and configure a Linux machine that collects the logs from your devices and forwards them to your Microsoft Sentinel workspace. This machine can be a physical or virtual machine in your on-premises environment, an Azure VM, or a VM in another cloud. If this machine is not an Azure VM, it must have Azure Arc installed (see the [prerequisites](#prerequisites)).
+Make sure to configure the machine's security according to your organization's security policy. For example, you can configure your network to align with your corporate network security policy and change the ports and protocols in the daemon to align with your requirements. To improve your machine security configuration, [secure your VM in Azure](../virtual-machines/security-policy.md), or review these [best practices for network security](../security/fundamentals/network-best-practices.md).
-This machine has two components that take part in this process:
+If your devices are sending Syslog and CEF logs over TLS (because, for example, your log forwarder is in the cloud), you need to configure the Syslog daemon (`rsyslog` or `syslog-ng`) to communicate in TLS:
-- A Syslog daemon, either `rsyslog` or `syslog-ng`, which collects the logs.-- The AMA, which forwards the logs to Microsoft Sentinel.
+- [Encrypt Syslog traffic with TLS ΓÇô rsyslog](https://www.rsyslog.com/doc/v8-stable/tutorials/tls_cert_summary.html)
+- [Encrypt log messages with TLS ΓÇô syslog-ng](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.22/administration-guide/60#TOPIC-1209298)
-When you set up the connector and the DCR, you [run a script](#run-the-installation-script) on the Linux machine, which configures the built-in Linux Syslog daemon (`rsyslog.d`/`syslog-ng`) to listen for Syslog messages from your security solutions on TCP/UDP port 514.
+### Install the AMA and create a Data Collection Rule (DCR)
-The DCR installs the AMA to collect and parse the logs.
+You can perform this step in one of two ways:
+- Deploy and configure the **CEF via AMA** data connector in the [Microsoft Sentinel portal](?tabs=portal#install-the-ama-and-create-a-data-collection-rule-dcr). With this setup, you can create, manage, and delete DCRs per workspace. The AMA will be installed automatically on the VMs you select in the connector configuration.
+ **&mdash;OR&mdash;**
+- Send HTTP requests to the [Logs Ingestion API](?tabs=api#install-the-ama-and-create-a-data-collection-rule-dcr). With this setup, you can create, manage, and delete DCRs. This option is more flexible than the portal. For example, with the API, you can filter by specific log levels, whereas with the UI, you can only select a minimum log level. The downside is that you have to manually install the Azure Monitor Agent on the log forwarder before creating a DCR.
-#### Log forwarder - security considerations
+Select the appropriate tab below to see the instructions for each way.
-Make sure to configure the machine's security according to your organization's security policy. For example, you can configure your network to align with your corporate network security policy and change the ports and protocols in the daemon to align with your requirements. To improve your machine security configuration, [secure your VM in Azure](../virtual-machines/security-policy.md), or review these [best practices for network security](../security/fundamentals/network-best-practices.md).
+# [Microsoft Sentinel portal](#tab/portal)
-If your devices are sending Syslog and CEF logs over TLS (because, for example, your log forwarder is in the cloud), you need to configure the Syslog daemon (`rsyslog` or `syslog-ng`) to communicate in TLS:
+#### Open the connector page and start the DCR wizard
-- [Encrypt Syslog traffic with TLS ΓÇô rsyslog](https://www.rsyslog.com/doc/v8-stable/tutorials/tls_cert_summary.html)-- [Encrypt log messages with TLS ΓÇô syslog-ng](https://support.oneidentity.com/technical-documents/syslog-ng-open-source-edition/3.22/administration-guide/60#TOPIC-1209298)
+1. Open the [Azure portal](https://portal.azure.com/) and navigate to the **Microsoft Sentinel** service.
-### Set up the connector
+1. Select **Data connectors** from the navigation menu.
-You can set up the connector in two ways:
-- [Microsoft Sentinel portal](#set-up-the-connector-in-the-microsoft-sentinel-portal-ui). With this setup, you can create, manage, and delete DCRs per workspace. -- [API](#set-up-the-connector-with-the-api). With this setup, you can create, manage, and delete DCRs. This option is more flexible than the UI. For example, with the API, you can filter by specific log levels, where with the UI, you can only select a minimum log level.
+1. Type *CEF* in the **Search** box. From the results, select the **Common Event Format (CEF) via AMA (Preview)** connector.
-#### Set up the connector in the Microsoft Sentinel portal (UI)
+1. Select **Open connector page** on the details pane.
-1. [Open the connector page and create the DCR](#open-the-connector-page-and-create-the-dcr)
-1. [Define resources (VMs)](#define-resources-vms)
-1. [Select the data source type and create the DCR](#select-the-data-source-type-and-create-the-dcr)
-1. [Run the installation script](#run-the-installation-script)
+1. In the **Configuration** area, select **+Create data collection rule**.
-##### Open the connector page and create the DCR
+ :::image type="content" source="media/connect-cef-ama/cef-connector-page-create-dcr.png" alt-text="Screenshot showing the CEF via AMA connector page." lightbox="media/connect-cef-ama/cef-connector-page-create-dcr.png":::
-1. Open the [Azure portal](https://portal.azure.com/) and navigate to the **Microsoft Sentinel** service.
-1. Select **Data connectors**, and in the search bar, type *CEF*.
-1. Select the **Common Event Format (CEF) via AMA (Preview)** connector.
-1. Below the connector description, select **Open connector page**.
-1. In the **Configuration** area, select **Create data collection rule**.
-1. Under **Basics**:
- - Type a DCR name
- - Select your subscription
- - Select the resource group where your collector is defined
+1. In the **Basic** tab:
+ - Type a DCR name.
+ - Select your subscription.
+ - Select the resource group where your collector is defined.
+
+ :::image type="content" source="media/connect-cef-ama/dcr-basics-tab.png" alt-text="Screenshot showing the DCR details in the Basic tab." lightbox="media/connect-cef-ama/dcr-basics-tab.png":::
+
+1. Select **Next: Resources >**.
- :::image type="content" source="media/connect-cef-ama/dcr-basics-tab.png" alt-text="Screenshot showing the DCR details in the Basics tab." lightbox="media/connect-cef-ama/dcr-basics-tab.png":::
+#### Define resources (VMs)
-##### Define resources (VMs)
+In the **Resources** tab, select the machines on which you want to install the AMA&mdash;in this case, your log forwarder machine. (If your log forwarder doesn't appear in the list, it might not have the Azure Connected Machine agent installed.)
-Select the machines on which you want to install the AMA. These machines are VMs or on-premises Linux machines with Arc installed.
+1. Use the available filters or search box to find your log forwarder VM. You can expand a subscription in the list to see its resource groups, and a resource group to see its VMs.
-1. Select the **Resources** tab and select **Add Resource(s)**.
-1. Select the VMs on which you want to install the connector to collect logs.
+1. Select the log forwarder VM that you want to install the AMA on. (The check box will appear next to the VM name when you hover over it.)
:::image type="content" source="media/connect-cef-ama/dcr-select-resources.png" alt-text="Screenshot showing how to select resources when setting up the DCR." lightbox="media/connect-cef-ama/dcr-select-resources.png":::
-1. Review your changes and select **Collect**.
+1. Review your changes and select **Next: Collect >**.
-##### Select the data source type and create the DCR
+#### Select facilities and severities and create the DCR
> [!NOTE] > Using the same facility for both Syslog and CEF messages may result in data ingestion duplication. Learn how to [avoid data ingestion duplication](#avoid-data-ingestion-duplication).
-1. Select the **Collect** tab and select **Linux syslog** as the data source type.
-1. Configure the minimum log level for each facility. When you select a log level, Microsoft Sentinel collects logs for the selected level and other levels with higher severity. For example, if you select **LOG_ERR**, Microsoft Sentinel collects logs for the **LOG_ERR**, **LOG_CRIT**, **LOG_ALERT**, and **LOG_EMERG** levels.
+1. In the **Collect** tab, select the minimum log level for each facility. When you select a log level, Microsoft Sentinel collects logs for the selected level and other levels with higher severity. For example, if you select **LOG_ERR**, Microsoft Sentinel collects logs for the **LOG_ERR**, **LOG_CRIT**, **LOG_ALERT**, and **LOG_EMERG** levels.
:::image type="content" source="media/connect-cef-ama/dcr-log-levels.png" alt-text="Screenshot showing how to select log levels when setting up the DCR.":::
-1. Review your changes and select **Next: Review and create**.
+1. Review your selections and select **Next: Review + create**.
+ 1. In the **Review and create** tab, select **Create**.
-##### Run the installation script
+ :::image type="content" source="media/connect-cef-ama/dcr-review-create.png" alt-text="Screenshot showing how to review the configuration of the DCR and create it.":::
-1. Log in to the Linux forwarder machine, where you want the AMA to be installed.
-1. Run this command to launch the installation script:
-
- ```python
- sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py
- ```
- The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon. The script opens port 514 to listen to incoming messages in both UDP and TCP protocols. To change this setting, refer to the
- Syslog daemon configuration file according to the daemon type running on the machine:
- - Rsyslog: `/etc/rsyslog.conf`
- - Syslog-ng: `/etc/syslog-ng/syslog-ng.conf`
+- The connector will install the Azure Monitor Agent on the machines you selected when creating your DCR.
- > [!NOTE]
- > To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed AMA.
- > Read more about [RSyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [Syslog-ng](
-https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/34#TOPIC-1431029).
+- You will see notifications from the Azure portal when the DCR is created and the agent is installed.
-### Set up the connector with the API
+- Select **Refresh** on the connector page to see the DCR displayed in the list.
-You can create DCRs using the [API](/rest/api/monitor/data-collection-rules). Learn more about [DCRs](../azure-monitor/essentials/data-collection-rule-overview.md).
+# [Logs Ingestion API](#tab/api)
-Run this command to launch the installation script:
-
-```python
-sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py
-```
-The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon.
+#### Install the Azure Monitor Agent
-#### Request URL and headerΓÇ»
+Follow these instructions, from the Azure Monitor documentation, to install the Azure Monitor Agent on your log forwarder. Remember to use the instructions for Linux, not those for Windows.
+- [Install the AMA using PowerShell](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=azure-powershell)
+- [Install the AMA using the Azure CLI](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=azure-cli)
+- [Install the AMA using an Azure Resource Manager template](../azure-monitor/agents/azure-monitor-agent-manage.md?tabs=azure-resource-manager)
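+
+For example, on an Azure Linux VM, installing the agent with the Azure CLI looks roughly like the following (the resource group and VM names are placeholders):
+
+```bash
+# Install the Azure Monitor Agent VM extension on the Linux log forwarder
+az vm extension set \
+    --resource-group <resourceGroupName> \
+    --vm-name <logForwarderVmName> \
+    --name AzureMonitorLinuxAgent \
+    --publisher Microsoft.Azure.Monitor \
+    --enable-auto-upgrade true
+```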
-```rest
-GET
-https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{dataCollectionRuleName}?api-version=2019-11-01-preview
-```
-
-#### Request body
+You can create Data Collection Rules (DCRs) using the [Azure Monitor Logs Ingestion API](/rest/api/monitor/data-collection-rules). Learn more about [DCRs](../azure-monitor/essentials/data-collection-rule-overview.md).
-Edit the template:
-- Verify that the `streams` field is set to `Microsoft-CommonSecurityLog`.-- Add the filter and facility log levels in the `facilityNames` and `logLevels` parameters.
+#### Create the Data Collection Rule
-```rest
+Prepare a DCR file in JSON format. When you send the [API request to create the DCR](#create-the-request-url-and-header), include the contents of this file as the request body.
+
+The following is an example of a DCR creation request:
+
+```json
{
- "properties": {
- "immutableId": "dcr-bcc4039c90f0489b80927bbdf1f26008",
- "dataSources": {
- "syslog": [
- {
- "streams": [
- "Microsoft-CommonSecurityLog"
- ],
-
- "facilityNames": [
- "*"
- ],
- "logLevels": [ "*"
- ],
- "name": "sysLogsDataSource-1688419672"
- }
- ]
- },
- "destinations": {
- "logAnalytics": [
- {
- "workspaceResourceId": "/subscriptions/{Your-Subscription-
-Id}/resourceGroups/{resourceGroupName}/providers/Microsoft.OperationalInsights/workspaces/{SentinelWorkspaceName}", "workspaceId": "123x56xx-9123-xx4x-567x-89123xxx45",
-"name": "la-140366483"
- }
- ]
+ "location": "centralus",
+ "kind": "Linux",
+ "properties": {
+ "dataSources": {
+ "syslog": [
+ {
+ "name": "localsSyslog",
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "auth",
+ "local0",
+ "local1",
+ "local2",
+ "local3",
+ "syslog"
+ ],
+ "logLevels": [
+ "Critical",
+ "Alert",
+ "Emergency"
+ ]
},
- "dataFlows": [
- {
- "streams": [
- "Microsoft-CommonSecurityLog"
- ],
- "destinations": [
- "la-140366483"
- ]
- }
+ {
+ "name": "authprivSyslog",
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "authpriv"
+ ],
+ "logLevels": [
+ "Error",
+ "Alert",
+ "Critical",
+ "Emergency"
+ ]
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/resourceGroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/Contoso",
+ "workspaceId": "11111111-2222-3333-4444-555555555555",
+ "name": "DataCollectionEvent"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-CommonSecurityLog"
],
- "provisioningState": "Succeeded"
+ "destinations": [
+ "DataCollectionEvent"
+ ]
+ }
+ ]
+ }
+}
+```
+
+- Verify that the `streams` field is set to `Microsoft-CommonSecurityLog`.
+- Add the filter and facility log levels in the `facilityNames` and `logLevels` parameters. See [examples below](#examples-of-facilities-and-log-levels-sections).
+
+##### Create the request URL and header
+
+1. Copy the request URL and header below by selecting the *copy* icon in the upper right corner of the frame.
+
+ ```http
+ PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{dataCollectionRuleName}?api-version=2022-06-01
+ ```
+
+ - Substitute the appropriate values for the `{subscriptionId}` and `{resourceGroupName}` placeholders.
+ - Enter a name of your choice for the DCR in place of the `{dataCollectionRuleName}` placeholder.
+
+ Example:
+ ```http
+ PUT https://management.azure.com/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/resourceGroups/ContosoRG/providers/Microsoft.Insights/dataCollectionRules/Contoso-DCR-01?api-version=2022-06-01
+ ```
+
+1. Paste the request URL and header in a REST API client of your choosing.
+
+##### Create the request body and send the request
+
+Copy and paste the DCR JSON file that you created (based on the example above) into the request body, and send it.
+
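+If you don't have a REST client handy, one way to send the same request is with `az rest`. A minimal sketch, assuming the DCR JSON above is saved locally as `dcr.json` (a file name chosen for this example):
+
+```bash
+# Create the DCR by sending the JSON body to the Azure Resource Manager endpoint
+az rest --method put \
+    --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{dataCollectionRuleName}?api-version=2022-06-01" \
+    --body @dcr.json
+```
+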
+Here's the response you should receive according to the sample request above:
+
+```json
+ {
+ "properties": {
+ "immutableId": "dcr-0123456789abcdef0123456789abcdef",
+ "dataSources": {
+ "syslog": [
+ {
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "auth",
+ "local0",
+ "local1",
+ "local2",
+ "local3",
+ "syslog"
+ ],
+ "logLevels": [
+ "Critical",
+ "Alert",
+ "Emergency"
+ ],
+ "name": "localsSyslog"
+ },
+ {
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "authpriv"
+ ],
+ "logLevels": [
+ "Error",
+ "Alert",
+ "Critical",
+ "Emergency"
+ ],
+ "name": "authprivSyslog"
+ }
+ ]
+ },
+ "destinations": {
+ "logAnalytics": [
+ {
+ "workspaceResourceId": "/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/resourceGroups/ContosoRG/providers/Microsoft.OperationalInsights/workspaces/Contoso",
+ "workspaceId": "11111111-2222-3333-4444-555555555555",
+ "name": "DataCollectionEvent"
+ }
+ ]
+ },
+ "dataFlows": [
+ {
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "destinations": [
+ "DataCollectionEvent"
+ ]
+ }
+ ],
+ "provisioningState": "Succeeded"
},
- "location": "westeurope",
- "tags": {},
+ "location": "centralus",
"kind": "Linux",
- "id": "/subscriptions/{Your-Subscription- Id}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{DCRName}",
- "name": "{DCRName}",
- "type": "Microsoft.Insights/dataCollectionRules",
- "etag": "\"2401b6f3-0000-0d00-0000-618bbf430000\""
+ "id": "/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/resourceGroups/ContosoRG/providers/Microsoft.Insights/dataCollectionRules/Contoso-DCR-01",
+ "name": "Contoso-DCR-01",
+ "type": "Microsoft.Insights/dataCollectionRules",
+ "etag": "\"00000000-0000-0000-0000-000000000000\"",
+ "systemData": {
+ }
+ }
+```
+
+#### Associate the DCR with the log forwarder
+
+Now you need to create a DCR Association (DCRA) that ties the DCR to the VM resource that hosts your log forwarder.
+
+This procedure follows the same steps as creating the DCR.
+
+##### Request URL and header
+
+```http
+PUT
+https://management.azure.com/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/resourceGroups/ContosoRG/providers/Microsoft.Compute/virtualMachines/LogForwarder-VM-1/providers/Microsoft.Insights/dataCollectionRuleAssociations/contoso-dcr-assoc?api-version=2022-06-01
+```
+
+##### Request body
+
+```json
+{
+ "properties": {
+ "dataCollectionRuleId": "/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/resourceGroups/ContosoRG/providers/Microsoft.Insights/dataCollectionRules/Contoso-DCR-01"
+ }
} ```
-After you finish editing the template, use `POST` or `PUT` to deploy it:
-```rest
-PUT
-https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Insights/dataCollectionRules/{dataCollectionRuleName}?api-version=2019-11-01-preview
+Here's a sample response:
+
+```json
+{
+ "properties": {
+ "dataCollectionRuleId": "/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/resourceGroups/ContosoRG/providers/Microsoft.Insights/dataCollectionRules/Contoso-DCR-01"
+ },
+ "id": "/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee/resourceGroups/ContosoRG/providers/Microsoft.Compute/virtualMachines/LogForwarder-VM-1/providers/Microsoft.Insights/dataCollectionRuleAssociations/contoso-dcr-assoc",
+ "name": "contoso-dcr-assoc",
+ "type": "Microsoft.Insights/dataCollectionRuleAssociations",
+ "etag": "\"00000000-0000-0000-0000-000000000000\"",
+ "systemData": {
+ }
+ }
```+ #### Examples of facilities and log levels sections Review these examples of the facilities and log levels settings. The `name` field includes the filter name. This example collects events from the `cron`, `daemon`, `local0`, `local3` and `uucp` facilities, with the `Warning`, `Error`, `Critical`, `Alert`, and `Emergency` log levels:
-```rest
- "syslog": [
- {
- "streams": [
- "Microsoft-CommonSecurityLog"
+```json
+ "dataSources": {
+ "syslog": [
+ {
+ "name": "SyslogStream0",
+ "streams": [
+ "Microsoft-CommonSecurityLog"
], "facilityNames": [
- "cron",
- "daemon",
- "local0",
- "local3",
- "uucp"
+ "cron",
+ "daemon",
+ "local0",
+ "local3",
+ "uucp"
],
-
"logLevels": [
- "Warning",
- "Error",
- "Critical",
- "Alert",
- "Emergency"
- ],
- "name": "sysLogsDataSource-1688419672"
- }
-]
+ "Warning",
+ "Error",
+ "Critical",
+ "Alert",
+ "Emergency"
+ ]
+ }
+ ]
+ }
``` This example collects events for:
This example collects events for:
- The `kern`, `local0`, `local5`, and `news` facilities with the `Critical`, `Alert`, and `Emergency` log levels - The `mail` and `uucp` facilities with the `Emergency` log level
-```rest
- "syslog": [
- {
- "streams": [
- "Microsoft-CommonSecurityLog"
- ],
- "facilityNames": [
- "authpriv",
- "mark"
- ],
- "logLevels": [
- "Info",
- "Notice",
- "Warning",
- "Error",
- "Critical",
- "Alert",
- "Emergency"
- ],
- "name": "sysLogsDataSource--1469397783"
+```json
+ "dataSources": {
+ "syslog": [
+ {
+ "name": "SyslogStream1",
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "authpriv",
+ "mark"
+ ],
+ "logLevels": [
+ "Info",
+ "Notice",
+ "Warning",
+ "Error",
+ "Critical",
+ "Alert",
+ "Emergency"
+ ]
}, {
- "streams": [ "Microsoft-CommonSecurityLog"
- ],
- "facilityNames": [
- "daemon"
- ],
- "logLevels": [
- "Warning",
- "Error",
- "Critical",
- "Alert",
- "Emergency"
- ],
-
- "name": "sysLogsDataSource--1343576735"
+ "name": "SyslogStream2",
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "daemon"
+ ],
+ "logLevels": [
+ "Warning",
+ "Error",
+ "Critical",
+ "Alert",
+ "Emergency"
+ ]
}, {
- "streams": [
- "Microsoft-CommonSecurityLog"
- ],
- "facilityNames": [
- "kern",
- "local0",
- "local5",
- "news"
- ],
- "logLevels": [
- "Critical",
- "Alert",
- "Emergency"
- ],
- "name": "sysLogsDataSource--1469572587"
+ "name": "SyslogStream3",
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "kern",
+ "local0",
+ "local5",
+ "news"
+ ],
+ "logLevels": [
+ "Critical",
+ "Alert",
+ "Emergency"
+ ]
}, {
- "streams": [
- "Microsoft-CommonSecurityLog"
- ],
- "facilityNames": [
- "mail",
- "uucp"
- ],
- "logLevels": [
- "Emergency"
- ],
- "name": "sysLogsDataSource-1689584311"
+ "name": "SyslogStream4",
+ "streams": [
+ "Microsoft-CommonSecurityLog"
+ ],
+ "facilityNames": [
+ "mail",
+ "uucp"
+ ],
+ "logLevels": [
+ "Emergency"
+ ]
}
- ]
-}
+ ]
+ }
+ ```
-### Test the connector
+++
+### Run the "installation" script
+
+The "installation" script doesn't actually install anything, but it configures the Syslog daemon on your log forwarder properly to collect the logs.
+
+1. From the connector page, copy the command line that appears under **Run the following command to install and apply the CEF collector:** by selecting the *copy* icon next to it.
+
+ :::image type="content" source="media/connect-cef-ama/run-install-script.png" alt-text="Screenshot of command line on connector page.":::
+
+ You can also copy it from here:
+ ```python
+ sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py
+ ```
+
+1. Log in to the log forwarder machine where you just installed the AMA.
+
+1. Paste the command you copied in the last step to launch the installation script.
+ The script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon. The script opens port 514 to listen to incoming messages in both UDP and TCP protocols. To change this setting, refer to the Syslog daemon configuration file according to the daemon type running on the machine:
+ - Rsyslog: `/etc/rsyslog.conf`
+ - Syslog-ng: `/etc/syslog-ng/syslog-ng.conf`
+
+ > [!NOTE]
+ > To avoid [Full Disk scenarios](../azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) where the agent can't function, we recommend that you set the `syslog-ng` or `rsyslog` configuration not to store unneeded logs. A Full Disk scenario disrupts the function of the installed AMA.
+ > Read more about [RSyslog](https://www.rsyslog.com/doc/master/configuration/actions.html) or [Syslog-ng](https://www.syslog-ng.com/technical-documents/doc/syslog-ng-open-source-edition/3.26/administration-guide/34#TOPIC-1431029).
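A quick, non-authoritative way to inspect how the daemon is currently configured to listen (the sketch referenced in the previous step; paths assume default package locations and may differ on your distribution):
```bash
# Sketch: show which input modules and ports the Syslog daemon is configured with.
grep -E 'imudp|imtcp|514' /etc/rsyslog.conf /etc/rsyslog.d/*.conf 2>/dev/null   # rsyslog
grep -E 'udp|tcp|514' /etc/syslog-ng/syslog-ng.conf 2>/dev/null                 # syslog-ng
```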
+
+## Test the connector
1. To validate that the syslog daemon is running on the UDP port and that the AMA is listening, run this command:
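The documented command isn't reproduced in this excerpt; as an illustrative sketch only (assuming `ss` and `pgrep` are available on the forwarder), checks along these lines confirm a listener on port 514 and a running agent process:
```bash
# Sketch only: confirm something is listening on UDP/TCP 514 and that the Azure Monitor Agent process is running.
# The agent process name is assumed to contain "azuremonitoragent".
sudo ss -lnpu 'sport = :514'
sudo ss -lnpt 'sport = :514'
pgrep -af azuremonitoragent
```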
## Next steps
-In this article, you learned how to set up the CEF via AMA connector to upload data from appliances that support CEF over Syslog. To learn more about Microsoft Sentinel, see the following articles:
+In this article, you learned how to set up data ingestion from security devices and appliances that support CEF over Syslog, using the **Common Event Format (CEF) via AMA (Preview)** connector.
+
+- Explore in greater depth how to [collect CEF or Syslog logs with the Azure Monitor Agent](../azure-monitor/agents/data-collection-syslog.md), including how to configure Syslog and create a DCR.
+- See other articles about ingesting CEF and Syslog logs:
+ - [Stream logs in both the CEF and Syslog format](connect-cef-syslog.md)
+ - [Options for streaming logs in the CEF and Syslog format to Microsoft Sentinel](connect-cef-syslog-options.md)
++
+To learn more about Microsoft Sentinel, see the following articles:
- Learn how to [get visibility into your data, and potential threats](get-visibility.md). - Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md). - [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-syslog.md
Learn how to [collect Syslog with the Azure Monitor Agent](../azure-monitor/agen
> > On **February 28th 2023**, we introduced changes to the CommonSecurityLog table schema. Following this change, you might need to review and update custom queries. For more details, see the [recommended actions section](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/upcoming-changes-to-the-commonsecuritylog-table/ba-p/3643232) in this blog post. Out-of-the-box content (detections, hunting queries, workbooks, parsers, etc.) has been updated by Microsoft Sentinel.
-Read more about [CEF](connect-cef-ama.md#what-is-cef-collection) and [Syslog](connect-syslog.md#architecture) collection in Microsoft Sentinel.
+Read more about [CEF](connect-cef-ama.md#what-is-common-event-format-cef) and [Syslog](connect-syslog.md#architecture) collection in Microsoft Sentinel.
## Prerequisites
Before you begin, verify that you have:
- For space requirements for your log forwarder, see the [Azure Monitor Agent Performance Benchmark](../azure-monitor/agents/azure-monitor-agent-performance.md). You can also review this blog post, which includes [designs for scalable ingestion](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/designs-for-accomplishing-microsoft-sentinel-scalable-ingestion/ba-p/3741516). - Either the `syslog-ng` or `rsyslog` daemon enabled. - To collect events from any system that isn't an Azure virtual machine, ensure that [Azure Arc](../azure-monitor/agents/azure-monitor-agent-manage.md) is installed.-- To ingest Syslog and CEF logs into Microsoft Sentinel, you can designate and configure a Linux machine that collects the logs from your devices and forwards them to your Microsoft Sentinel workspace. [Configure a log forwarder](connect-cef-ama.md#configure-a-log-forwarder).
+- To ingest Syslog and CEF logs into Microsoft Sentinel, you can designate and configure a Linux machine that collects the logs from your devices and forwards them to your Microsoft Sentinel workspace. [Configure a log forwarder](connect-cef-ama.md#how-microsoft-sentinel-collects-cef-logs-with-the-azure-monitor-agent).
## Avoid data ingestion duplication
To avoid this scenario, use one of these methods:
## Create a DCR for your CEF logs - Create the DCR via the UI:
- 1. [Open the connector page and create the DCR](connect-cef-ama.md#open-the-connector-page-and-create-the-dcr).
+ 1. [Open the connector page and start the DCR wizard](connect-cef-ama.md#open-the-connector-page-and-start-the-dcr-wizard).
1. [Define resources (VMs)](connect-cef-ama.md#define-resources-vms).
- 1. [Select the data source type and create the DCR](connect-cef-ama.md#select-the-data-source-type-and-create-the-dcr).
+ 1. [Select facilities and severities and create the DCR](connect-cef-ama.md#select-facilities-and-severities-and-create-the-dcr).
> [!IMPORTANT] > Make sure to **[avoid data ingestion duplication](#avoid-data-ingestion-duplication)** (review the options in this section).
- 1. [Run the installation script](connect-cef-ama.md).
+ 1. [Run the installation script](connect-cef-ama.md#run-the-installation-script).
- Create the DCR via the API:
- 1. [Create the request URL and header](connect-cef-ama.md#request-url-and-header).
- 1. [Create the request body](connect-cef-ama.md#request-body).
+ 1. [Create the request URL and header](connect-cef-ama.md#create-the-request-url-and-header).
+ 1. [Create the request body](connect-cef-ama.md#create-the-request-body-and-send-the-request).
See [examples of facilities and log levels sections](connect-cef-ama.md#examples-of-facilities-and-log-levels-sections).
Create the DCR for your Syslog-based logs using the Azure Monitor [guidelines](.
```bash
sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py && sudo python3 Forwarder_AMA_installer.py
```
- The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon. The script opens port 514 to listen to incoming messages in both UDP and TCP protocols. To change this setting, refer to the
- Syslog daemon configuration file according to the daemon type running on the machine:
+ The installation script configures the `rsyslog` or `syslog-ng` daemon to use the required protocol and restarts the daemon. The script opens port 514 to listen to incoming messages in both UDP and TCP protocols. To change this setting, refer to the Syslog daemon configuration file according to the daemon type running on the machine:
- Rsyslog: `/etc/rsyslog.conf` - Syslog-ng: `/etc/syslog-ng/syslog-ng.conf`
site-recovery Deploy Vmware Azure Replication Appliance Modernized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/deploy-vmware-azure-replication-appliance-modernized.md
If there are any organizational restrictions, you can manually set up the Site R
- CheckCommandPromptPolicy - Prevents access to the command prompt. - Key: HKLM\SOFTWARE\Policies\Microsoft\Windows\System
- - DisableCMD value shouldn't be equal 0.
+   - DisableCMD value should be equal to 0.
- CheckTrustLogicAttachmentsPolicy - Trust logic for file attachments.
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-portal.md
To update this setting for an existing storage account, follow these steps:
:::image type="content" source="media/authorize-data-operations-portal/default-auth-account-update-portal.png" alt-text="Screenshot showing how to configure default Microsoft Entra authorization in Azure portal for existing account":::
+The **defaultToOAuthAuthentication** property of a storage account is not set by default and does not return a value until you explicitly set it.
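As a sketch, you could read the property with Azure CLI, using placeholder account and resource group names (and, for the update, assuming your CLI version exposes the `--default-to-oauth-authentication` parameter; verify with `az storage account update --help`):
```bash
# Sketch: read the property (returns nothing until it has been explicitly set).
az storage account show --name <storage-account> --resource-group <resource-group> \
  --query defaultToOAuthAuthentication

# Assumption: newer Azure CLI versions expose this parameter on `az storage account update`.
az storage account update --name <storage-account> --resource-group <resource-group> \
  --default-to-oauth-authentication true
```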
+ ## Next steps - [Authorize access to data in Azure Storage](../common/authorize-data-access.md)
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
An object replication policy can prevent an inventory job from writing inventory
### Inventory and Immutable Storage
-In instances where immutable storage is enabled, it's essential to be aware of a specific limitation pertaining to Inventory reports. Due to the inherent characteristics of immutable storage, notably its write-once, read-many (WORM) nature, the execution of Inventory reports is constrained. The results cannot be written when immutable storage is active. This stands as a known limitation, and we recommend planning your reporting activities accordingly.
+You can't configure an inventory policy in the account if support for version-level immutability is enabled on that account, or if support for version-level immutability is enabled on the destination container that is defined in the inventory policy.
## Next steps
storage Immutable Storage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-storage-overview.md
When you enable blob inventory, Azure Storage generates an inventory report on a
For more information about blob inventory, see [Azure Storage blob inventory](blob-inventory.md).
+> [!NOTE]
+> You can't configure an inventory policy in an account if support for version-level immutability is enabled on that account, or if support for version-level immutability is enabled on the destination container that is defined in the inventory policy.
+ ## Pricing There is no additional capacity charge for using immutable storage. Immutable data is priced in the same way as mutable data. For pricing details on Azure Blob Storage, see the [Azure Storage pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).
storage Upgrade To Data Lake Storage Gen2 How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/upgrade-to-data-lake-storage-gen2-how-to.md
The migration process creates a directory for each path segment of a blob. Data
The upgrade might fail if an application writes to the storage account during the upgrade. To prevent such write activity: 1. Quiesce any applications or services that might perform write operations.
-1. Release or break existing leases on containers and blobs in the storage account.
-1. Acquire new leases on all containers and blobs in the account. The new leases should be infinite or long enough to prevent write access for the duration of the upgrade.
+
+2. Release or break existing leases on containers and blobs in the storage account.
After the upgrade has completed, break the leases you created to resume allowing write access to the containers and blobs.
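As a rough sketch of the lease operations described above, using Azure CLI with placeholder names (authorization details such as an account key or RBAC role are assumed and omitted here):
```bash
# Sketch: break any existing lease on a container before the upgrade; repeat for blobs and
# again after the upgrade for any leases you created.
az storage container lease break \
  --container-name <container> \
  --account-name <storage-account> \
  --account-key "<account-key>"
```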
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
description: Learn how to enable Active Directory Domain Services authentication
Previously updated : 10/19/2023 Last updated : 01/12/2024 recommendations: false
This article describes the process for enabling Active Directory Domain Services
To enable AD DS authentication over SMB for Azure file shares, you need to register your Azure storage account with your on-premises AD DS and then set the required domain properties on the storage account. To register your storage account with AD DS, you create a computer account (or service logon account) representing it in your AD DS. Think of this process as if it were like creating an account representing an on-premises Windows file server in your AD DS. When the feature is enabled on the storage account, it applies to all new and existing file shares in the account. ## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
To enable AD DS authentication over SMB for Azure file shares, you need to regis
## Option one (recommended): Use AzFilesHybrid PowerShell module
-The AzFilesHybrid PowerShell module provides cmdlets for deploying and configuring Azure Files. It includes cmdlets for domain joining storage accounts to your on-premises Active Directory and configuring your DNS servers. The cmdlets make the necessary modifications and enable the feature for you. Because some parts of the cmdlets interact with your on-premises AD DS, we explain what the cmdlets do, so you can determine if the changes align with your compliance and security policies, and ensure you have the proper permissions to execute the cmdlets. Although we recommend using AzFilesHybrid module, if you're unable to do so, we provide [manual steps](#option-two-manually-perform-the-enablement-actions).
+The AzFilesHybrid PowerShell module provides cmdlets for deploying and configuring Azure Files. It includes cmdlets for domain joining storage accounts to your on-premises Active Directory and configuring your DNS servers. The cmdlets make the necessary modifications and enable the feature for you. Because some parts of the cmdlets interact with your on-premises AD DS, we explain what the cmdlets do, so you can determine if the changes align with your compliance and security policies, and ensure you have the proper permissions to execute the cmdlets. Although we recommend using the AzFilesHybrid module, if you're unable to do so, we provide [manual steps](#option-two-manually-perform-the-enablement-actions).
+
+> [!IMPORTANT]
+> AES-256 Kerberos encryption is now the only encryption method supported by the AzFilesHybrid module. If you prefer to use RC4 encryption, see [Option two: Manually perform the enablement actions](#option-two-manually-perform-the-enablement-actions). If you previously enabled the feature with an old AzFilesHybrid version (below v0.2.2) that used RC4 as the default encryption method and want to update to support AES-256, see [troubleshoot Azure Files SMB authentication](/troubleshoot/azure/azure-storage/files-troubleshoot-smb-authentication?toc=/azure/storage/files/toc.json#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption).
### Prerequisites
The AzFilesHybrid PowerShell module provides cmdlets for deploying and configuri
### Download AzFilesHybrid module
-[Download and unzip the latest version of the AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases). Note that AES-256 Kerberos encryption is supported on v0.2.2 or above, and is the default encryption method beginning in v0.2.5. If you've enabled the feature with an AzFilesHybrid version below v0.2.2 and want to update to support AES-256 Kerberos encryption, see [troubleshoot Azure Files SMB authentication](/troubleshoot/azure/azure-storage/files-troubleshoot-smb-authentication?toc=/azure/storage/files/toc.json#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption).
+[Download and unzip the latest version of the AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases).
### Run Join-AzStorageAccount
$DomainAccountType = "<ComputerAccount|ServiceLogonAccount>" # Default is set as
# If you don't provide the OU name as an input parameter, the AD identity that represents the # storage account is created under the root directory. $OuDistinguishedName = "<ou-distinguishedname-here>"
-# Specify the encryption algorithm used for Kerberos authentication. Using AES256 is recommended.
-$EncryptionType = "<AES256|RC4|AES256,RC4>"
+# Encryption method is AES-256 Kerberos.
# Select the target subscription for the current session Select-AzSubscription -SubscriptionId $SubscriptionId
Join-AzStorageAccount `
-StorageAccountName $StorageAccountName ` -SamAccountName $SamAccountName ` -DomainAccountType $DomainAccountType `
- -OrganizationalUnitDistinguishedName $OuDistinguishedName `
- -EncryptionType $EncryptionType
+ -OrganizationalUnitDistinguishedName $OuDistinguishedName
# You can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration # with the logged on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+ version. For more details on
$storageAccount.AzureFilesIdentityBasedAuth.ActiveDirectoryProperties
If successful, the output should look like this:
-```PowerShell
+```output
DomainName:<yourDomainHere> NetBiosDomainName:<yourNetBiosDomainNameHere> ForestName:<yourForestNameHere>
storage Storage Files Identity Auth Domain Services Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-domain-services-enable.md
Most users should assign share-level permissions to specific Microsoft Entra use
There are five Azure built-in roles for Azure Files, some of which allow granting share-level permissions to users and groups: -- **Storage File Data Share Reader** allows read access in Azure file shares over SMB.-- **Storage File Data Privileged Reader** allows read access in Azure file shares over SMB by overriding existing Windows ACLs.-- **Storage File Data Share Contributor** allows read, write, and delete access in Azure file shares over SMB.-- **Storage File Data Share Elevated Contributor** allows read, write, delete, and modify Windows ACLs in Azure file shares over SMB. - **Storage File Data Privileged Contributor** allows read, write, delete, and modify Windows ACLs in Azure file shares over SMB by overriding existing Windows ACLs.
+- **Storage File Data Privileged Reader** allows read access in Azure file shares over SMB by overriding existing Windows ACLs.
+- **Storage File Data SMB Share Contributor** allows read, write, and delete access in Azure file shares over SMB.
+- **Storage File Data SMB Share Elevated Contributor** allows read, write, delete, and modify Windows ACLs in Azure file shares over SMB.
+- **Storage File Data SMB Share Reader** allows read access in Azure file shares over SMB.
> [!IMPORTANT] > Full administrative control of a file share, including the ability to take ownership of a file, requires using the storage account key. Administrative control isn't supported with Microsoft Entra credentials.
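As a non-authoritative sketch, a share-level role such as those listed above can be assigned with Azure CLI; all bracketed identifiers and the file share scope format shown here are placeholders/assumptions to verify for your environment:
```bash
# Sketch: assign a built-in Azure Files role at the scope of a single file share.
az role assignment create \
  --role "Storage File Data SMB Share Contributor" \
  --assignee "<user-or-group-object-id>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>"
```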
storage Storage How To Use Files Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-windows.md
description: Learn to use Azure file shares with Windows and Windows Server. Use
Previously updated : 01/05/2024 Last updated : 01/16/2024 ai-usage: ai-assisted
In order to use an Azure file share via the public endpoint outside of the Azure
<sup>1</sup>Regular Microsoft support for Windows 7 and Windows Server 2008 R2 has ended. It's possible to purchase additional support for security updates only through the [Extended Security Update (ESU) program](https://support.microsoft.com/help/4497181/lifecycle-faq-extended-security-updates). We strongly recommend migrating off of these operating systems.
-> [!Note]
-> We always recommend taking the most recent KB for your version of Windows.
+> [!NOTE]
+> We recommend taking the most recent KB for your version of Windows.
## Prerequisites
A common pattern for lifting and shifting line-of-business (LOB) applications th
### Mount the Azure file share
-The Azure portal provides a script that you can use to mount your file share directly to a host using the storage account key. We recommend using this provided script.
+The Azure portal provides a PowerShell script that you can use to mount your file share directly to a host using the storage account key. Unless you're mounting the file share using identity-based authentication, we recommend using this provided script.
To get this script:
You have now mounted your Azure file share.
### Mount the Azure file share with File Explorer
-> [!Note]
+> [!NOTE]
> The following instructions are shown on Windows 10 and may differ slightly on older releases. 1. Open File Explorer from the Start Menu, or press the Win+E shortcut.
You have now mounted your Azure file share.
### Access an Azure file share via its UNC path
-You don't need to mount the Azure file share to a particular drive letter to use it. You can directly access your Azure file share using the [UNC path](/windows/win32/fileio/naming-a-file) by entering the following into File Explorer. Be sure to replace *storageaccountname* with your storage account name and *myfileshare* with your file share name:
+You don't need to mount the Azure file share to a drive letter to use it. You can directly access your Azure file share using the [UNC path](/windows/win32/fileio/naming-a-file) by entering the following into File Explorer. Be sure to replace *storageaccountname* with your storage account name and *myfileshare* with your file share name:
`\\storageaccountname.file.core.windows.net\myfileshare`
-You'll be asked to sign in with your network credentials. Sign in with the Azure subscription under which you've created the storage account and file share. If you do not get prompted for credentials you can add the credentials using the following command:
+You'll be asked to sign in with your network credentials. Sign in with the Azure subscription under which you've created the storage account and file share. If you don't get prompted for credentials, you can add the credentials using the following command:
`cmdkey /add:StorageAccountName.file.core.windows.net /user:localhost\StorageAccountName /pass:StorageAccountKey`
-For Azure Government Cloud, simply change the servername to:
+For Azure Government Cloud, change the servername to:
`\\storageaccountname.file.core.usgovcloudapi.net\myfileshare`
Select **Previous Versions** to see the list of share snapshots for this directo
![Previous Versions tab](./media/storage-how-to-use-files-windows/snapshot-windows-list.png)
-You can select **Open** to open a particular snapshot.
+You can select **Open** to open a particular snapshot.
![Opened snapshot](./media/storage-how-to-use-files-windows/snapshot-browse-windows.png)
You can select **Open** to open a particular snapshot.
Select **Restore** to copy the contents of the entire directory recursively at the share snapshot creation time to the original location.
- ![Restore button in warning message](./media/storage-how-to-use-files-windows/snapshot-windows-restore.png)
+ ![Restore button in warning message](./media/storage-how-to-use-files-windows/snapshot-windows-restore.png)
## Enable SMB Multichannel
-Support for SMB Multichannel in Azure Files requires ensuring Windows has all the relevant patches applied to be up-to-date. Several older Windows versions, including Windows Server 2016, Windows 10 version 1607, and Windows 10 version 1507, require additional registry keys to be set for all relevant SMB Multichannel fixes to be applied on fully patched installations. If you're running a version of Windows that is newer than these three versions, no additional action is required.
+Support for SMB Multichannel in Azure Files requires ensuring Windows has all the relevant patches applied. Several older Windows versions, including Windows Server 2016, Windows 10 version 1607, and Windows 10 version 1507, require additional registry keys to be set for all relevant SMB Multichannel fixes to be applied on fully patched installations. If you're running a version of Windows that's newer than these three versions, no additional action is required.
### Windows Server 2016 and Windows 10 version 1607
Set-ItemProperty `
## Next steps See these links for more information about Azure Files:+ - [Planning for an Azure Files deployment](storage-files-planning.md) - [FAQ](storage-files-faq.md) - [Troubleshoot Azure Files](/troubleshoot/azure/azure-storage/files-troubleshoot?toc=/azure/storage/files/toc.json)
synapse-analytics Overview Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/overview-cognitive-services.md
- Last updated 02/14/2023 # Azure AI services in Azure Synapse Analytics
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/1-design-performance-migration.md
Title: "Design and performance for Netezza migrations"
description: Learn how Netezza and Azure Synapse SQL databases differ in their approach to high query performance on exceptionally large data volumes. -
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/2-etl-load-migration-considerations.md
Title: "Data migration, ETL, and load for Netezza migrations"
description: Learn how to plan your data migration from Netezza to Azure Synapse Analytics to minimize the risk and impact on users. -
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/3-security-access-operations.md
Title: "Security, access, and operations for Netezza migrations"
description: Learn about authentication, users, roles, permissions, monitoring, and auditing, and workload management in Azure Synapse Analytics and Netezza. -
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/4-visualization-reporting.md
Title: "Visualization and reporting for Netezza migrations"
description: Learn about Microsoft and third-party BI tools for reports and visualizations in Azure Synapse Analytics compared to Netezza. -
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/5-minimize-sql-issues.md
Title: "Minimize SQL issues for Netezza migrations"
description: Learn how to minimize the risk of SQL issues when migrating from Netezza to Azure Synapse Analytics. -
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/6-microsoft-third-party-migration-tools.md
Title: "Tools for Netezza data warehouse migration to Azure Synapse Analytics"
description: Learn about Microsoft and third-party data and database migration tools that can help you migrate from Netezza to Azure Synapse Analytics. -
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/7-beyond-data-warehouse-migration.md
Title: "Beyond Netezza migration, implement a modern data warehouse in Microsoft
description: Learn how a Netezza migration to Azure Synapse Analytics lets you integrate your data warehouse with the Microsoft Azure analytical ecosystem. -
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/1-design-performance-migration.md
Title: "Design and performance for Oracle migrations"
description: Learn how Oracle and Azure Synapse SQL databases differ in their approach to high query performance on exceptionally large data volumes. -
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/2-etl-load-migration-considerations.md
Title: "Data migration, ETL, and load for Oracle migrations"
description: Learn how to plan your data migration from Oracle to Azure Synapse Analytics to minimize the risk and impact on users. -
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/3-security-access-operations.md
Title: "Security, access, and operations for Oracle migrations"
description: Learn about authentication, users, roles, permissions, monitoring, and auditing, and workload management in Azure Synapse Analytics and Oracle. -
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/4-visualization-reporting.md
Title: "Visualization and reporting for Oracle migrations"
description: Learn about Microsoft and third-party BI tools for reports and visualizations in Azure Synapse Analytics compared to Oracle. -
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/5-minimize-sql-issues.md
Title: "Minimize SQL issues for Oracle migrations"
description: Learn how to minimize the risk of SQL issues when migrating from Oracle to Azure Synapse Analytics. -
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/6-microsoft-third-party-migration-tools.md
Title: "Tools for Oracle data warehouse migration to Azure Synapse Analytics"
description: Learn about Microsoft and third-party data and database migration tools that can help you migrate from Oracle to Azure Synapse Analytics. -
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/7-beyond-data-warehouse-migration.md
Title: "Beyond Oracle migration, implement a modern data warehouse in Microsoft
description: Learn how an Oracle migration to Azure Synapse Analytics lets you integrate your data warehouse with the Microsoft Azure analytical ecosystem. -
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/1-design-performance-migration.md
Title: "Design and performance for Teradata migrations"
description: Learn how Teradata and Azure Synapse SQL databases differ in their approach to high query performance on exceptionally large data volumes. -
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/2-etl-load-migration-considerations.md
Title: "Data migration, ETL, and load for Teradata migrations"
description: Learn how to plan your data migration from Teradata to Azure Synapse Analytics to minimize the risk and impact on users. -
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/3-security-access-operations.md
Title: "Security, access, and operations for Teradata migrations"
description: Learn about authentication, users, roles, permissions, monitoring, and auditing, and workload management in Azure Synapse Analytics and Teradata. -
synapse-analytics 4 Visualization Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/4-visualization-reporting.md
Title: "Visualization and reporting for Teradata migrations"
description: Learn about Microsoft and third-party BI tools for reports and visualizations in Azure Synapse Analytics compared to Teradata. -
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/5-minimize-sql-issues.md
Title: "Minimize SQL issues for Teradata migrations"
description: Learn how to minimize the risk of SQL issues when migrating from Teradata to Azure Synapse Analytics. -
synapse-analytics 6 Microsoft Third Party Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/6-microsoft-third-party-migration-tools.md
Title: "Tools for Teradata data warehouse migration to Azure Synapse Analytics"
description: Learn about Microsoft and third-party data and database migration tools that can help you migrate from Teradata to Azure Synapse Analytics. -
synapse-analytics 7 Beyond Data Warehouse Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/7-beyond-data-warehouse-migration.md
Title: "Beyond Teradata migration, implement a modern data warehouse in Microsof
description: Learn how a Teradata migration to Azure Synapse Analytics lets you integrate your data warehouse with the Microsoft Azure analytical ecosystem. -
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
mssparkutils.session.stop()
> We don't recommend calling language built-in APIs like `sys.exit` in Scala or `sys.exit()` in Python in your code, because such APIs just > kill the interpreter process, leaving the Spark session alive and resources not released.
+## Package Dependencies
+
+If you want to develop notebooks or jobs locally and need to reference the relevant packages for compilation/IDE hints, you can use the following packages.
+
+[PyPI package](https://pypi.org/project/dummy-notebookutils/)
+
+[CRAN package](https://cran.r-project.org/web/packages/notebookutils/index.html)
+
+[Maven dependencies](https://mvnrepository.com/artifact/com.microsoft.azure.synapse/synapseutils)
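For example, for local Python development the PyPI stub package linked above could be added to your environment (a sketch; the package provides IDE/compilation hints only, not runtime functionality):
```bash
# Sketch: install the stub package for IDE hints when developing Synapse notebooks locally.
pip install dummy-notebookutils
```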
+ ## Next steps - [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/master/Notebooks)
synapse-analytics Resource Consumption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resource-consumption-models.md
Last updated 04/15/2020 - # Synapse SQL resource consumption
synapse-analytics Synapse Notebook Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-notebook-activity.md
Last updated 05/19/2021 -
update-manager Pre Post Events Common Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-events-common-scenarios.md
This article presents the frequently asked questions in the lifecycle of pre and
1. On the selected maintenance configuration page, under **Settings**, select **Events**. 1. In the **Essentials** section, view metrics to see the metrics for all the events that are part of the event subscription. In the grid, the count of the Published Events metric should match with the count of Matched Events metric. Both of these two values should also correspond with the Delivered Events count. 1. To view the metrics specific to a pre or a post event, select the name of the event from the grid. Here, the count of Matched Events metric should match with the Delivered Events count.
-1. To view the time at which the event was triggered, hover over the line graph. [Learn more](https://learn.microsoft.com/azure/azure-monitor/reference/supported-metrics/microsoft-eventgrid-systemtopics-metrics).
+1. To view the time at which the event was triggered, hover over the line graph. [Learn more](/azure/azure-monitor/reference/supported-metrics/microsoft-eventgrid-systemtopics-metrics).
## How to check an unsuccessful delivery of a pre and post events to an endpoint from Event Grid?
If the user modifies the schedule run time after the pre-event has been triggere
## Next steps - For an overview on [pre and post scenarios](pre-post-scripts-overview.md)-- Manage the [pre and post maintenance configuration events](manage-pre-post-events.md)
+- Manage the [pre and post maintenance configuration events](manage-pre-post-events.md)
virtual-desktop Autoscale New Existing Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-new-existing-host-pool.md
- Title: Azure Virtual Desktop scaling plans for host pools in Azure Virtual Desktop
-description: How to assign scaling plans to new or existing host pools in your deployment.
-- Previously updated : 11/01/2023---
-# Assign scaling plans to host pools in Azure Virtual Desktop
-
-You can assign a scaling plan to any existing host pools in your deployment. When you assign a scaling plan to your host pool, the plan will apply to all session hosts within that host pool. The scaling plan also automatically applies to any new session hosts you create in the assigned host pool.
-
-If you disable a scaling plan, all assigned resources will remain in the state they were in at the time you disabled it.
-
-## Assign a scaling plan to a single existing host pool
-
-To assign a scaling plan to an existing host pool:
-
-1. Open the [Azure portal](https://portal.azure.com).
-
-1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
-
-1. Select **Host pools**, and select the host pool you want to assign the scaling plan to.
-
-1. Under the **Settings** heading, select **Scaling plan**, and then select **+ Assign**. Select the scaling plan you want to assign and select **Assign**. The scaling plan must be in the same Azure region as the host pool and the scaling plan's host pool type must match the type of host pool that you're trying to assign it to.
-
-> [!TIP]
-> If you've enabled the scaling plan during deployment, then you'll also have the option to disable the plan for the selected host pool in the **Scaling plan** menu by unselecting the **Enable autoscale** checkbox, as shown in the following screenshot.
->
-> [!div class="mx-imgBorder"]
-> ![A screenshot of the scaling plan window. The "enable autoscale" check box is selected and highlighted with a red border.](media/enable-autoscale.png)
-
-## Assign a scaling plan to multiple existing host pools
-
-To assign a scaling plan to multiple existing host pools at the same time:
-
-1. Open the [Azure portal](https://portal.azure.com).
-
-1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
-
-1. Select **Scaling plans**, and select the scaling plan you want to assign to host pools.
-
-1. Under the **Manage** heading, select **Host pool assignments**, and then select **+ Assign**. Select the host pools you want to assign the scaling plan to and select **Assign**. The host pools must be in the same Azure region as the scaling plan and the scaling plan's host pool type must match the type of host pools you're trying to assign it to.
-
-## Next steps
--- Review how to create a scaling plan at [Autoscale for Azure Virtual Desktop session hosts](autoscale-new-existing-host-pool.md).-- Learn how to troubleshoot your scaling plan at [Enable diagnostics for your scaling plan](autoscale-diagnostics.md).-- Learn more about terms used in this article at our [autoscale glossary](autoscale-glossary.md).-- For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md).-- View our [autoscale FAQ](autoscale-faq.yml) to answer commonly asked questions.
virtual-desktop Autoscale Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-scaling-plan.md
Title: Create an autoscale scaling plan for Azure Virtual Desktop
-description: How to create an autoscale scaling plan to optimize deployment costs.
+ Title: Create and assign an autoscale scaling plan for Azure Virtual Desktop
+description: How to create and assign an autoscale scaling plan to optimize deployment costs.
Previously updated : 07/18/2023 Last updated : 01/16/2024
Now that you've assigned the *Desktop Virtualization Power On Off Contributor* r
>- Though an exclusion tag will exclude the tagged VM from power management scaling operations, tagged VMs will still be considered as part of the calculation of the minimum percentage of hosts. >- Make sure not to include any sensitive information in the exclusion tags such as user principal names or other personally identifiable information.
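As a rough sketch, an exclusion tag can be applied to a session host VM with Azure CLI; the tag name below is a placeholder for whatever name you configure in the scaling plan, and the resource names are assumptions:
```bash
# Sketch: tag a session host VM so autoscale skips it during power management operations.
az vm update \
  --resource-group <resource-group> \
  --name <session-host-vm> \
  --set "tags.<your-exclusion-tag-name>=true"
```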
-1. Select **Next**, which should take you to the **Schedules** tab.
+1. Select **Next**, which should take you to the **Schedules** tab. Schedules let you define when autoscale turns VMs on and off throughout the day. The schedule parameters are different based on the **Host pool type** you chose for the scaling plan.
-## Configure a schedule
+ #### [Pooled host pools](#tab/pooled-autoscale)
-Schedules let you define when autoscale turns VMs on and off throughout the day. The schedule parameters are different based on the **Host pool type** you chose for the scaling plan.
-
-#### [Pooled host pools](#tab/pooled-autoscale)
-
-In each phase of the schedule, autoscale only turns off VMs when in doing so the used host pool capacity won't exceed the capacity threshold. The default values you'll see when you try to create a schedule are the suggested values for weekdays, but you can change them as needed.
-
-To create or change a schedule:
-
-1. In the **Schedules** tab, select **Add schedule**.
-
-1. Enter a name for your schedule into the **Schedule name** field.
-
-1. In the **Repeat on** field, select which days your schedule will repeat on.
-
-1. In the **Ramp up** tab, fill out the following fields:
-
- - For **Start time**, select a time from the drop-down menu to start preparing VMs for peak business hours.
-
- - For **Load balancing algorithm**, we recommend selecting **breadth-first algorithm**. Breadth-first load balancing will distribute users across existing VMs to keep access times fast.
+   In each phase of the schedule, autoscale only turns off VMs when doing so won't cause the used host pool capacity to exceed the capacity threshold. The default values you'll see when you try to create a schedule are the suggested values for weekdays, but you can change them as needed.
+
+ To create or change a schedule:
+
+ 1. In the **Schedules** tab, select **Add schedule**.
+
+ 1. Enter a name for your schedule into the **Schedule name** field.
+
+ 1. In the **Repeat on** field, select which days your schedule will repeat on.
+
+ 1. In the **Ramp up** tab, fill out the following fields:
+
+ - For **Start time**, select a time from the drop-down menu to start preparing VMs for peak business hours.
+
+ - For **Load balancing algorithm**, we recommend selecting **breadth-first algorithm**. Breadth-first load balancing will distribute users across existing VMs to keep access times fast.
+
+ >[!NOTE]
+ >The load balancing preference you select here will override the one you selected for your original host pool settings.
+
+ - For **Minimum percentage of hosts**, enter the percentage of session hosts you want to always remain on in this phase. If the percentage you enter isn't a whole number, it's rounded up to the nearest whole number. For example, in a host pool of seven session hosts, if you set the minimum percentage of hosts during ramp-up hours to **10%**, one VM will always stay on during ramp-up hours, and it won't be turned off by autoscale.
- >[!NOTE]
- >The load balancing preference you select here will override the one you selected for your original host pool settings.
-
- - For **Minimum percentage of hosts**, enter the percentage of session hosts you want to always remain on in this phase. If the percentage you enter isn't a whole number, it's rounded up to the nearest whole number. For example, in a host pool of seven session hosts, if you set the minimum percentage of hosts during ramp-up hours to **10%**, one VM will always stay on during ramp-up hours, and it won't be turned off by autoscale.
+ - For **Capacity threshold**, enter the percentage of available host pool capacity that will trigger a scaling action to take place. For example, if two session hosts in the host pool with a max session limit of 20 are turned on, the available host pool capacity is 40. If you set the capacity threshold to **75%** and the session hosts have more than 30 user sessions, autoscale will turn on a third session host. This will then change the available host pool capacity from 40 to 60.
- - For **Capacity threshold**, enter the percentage of available host pool capacity that will trigger a scaling action to take place. For example, if two session hosts in the host pool with a max session limit of 20 are turned on, the available host pool capacity is 40. If you set the capacity threshold to **75%** and the session hosts have more than 30 user sessions, autoscale will turn on a third session host. This will then change the available host pool capacity from 40 to 60.
-
-1. In the **Peak hours** tab, fill out the following fields:
-
- - For **Start time**, enter a start time for when your usage rate is highest during the day. Make sure the time is in the same time zone you specified for your scaling plan. This time is also the end time for the ramp-up phase.
+ 1. In the **Peak hours** tab, fill out the following fields:
+
+ - For **Start time**, enter a start time for when your usage rate is highest during the day. Make sure the time is in the same time zone you specified for your scaling plan. This time is also the end time for the ramp-up phase.
+
+ - For **Load balancing**, you can select either breadth-first or depth-first load balancing. Breadth-first load balancing distributes new user sessions across all available session hosts in the host pool. Depth-first load balancing distributes new sessions to any available session host with the highest number of connections that hasn't reached its session limit yet. For more information about load-balancing types, see [Configure the Azure Virtual Desktop load-balancing method](configure-host-pool-load-balancing.md).
+
+ > [!NOTE]
+ > You can't change the capacity threshold here. Instead, the setting you entered in **Ramp-up** will carry over to this setting.
+
+ - For **Ramp-down**, you'll enter values into similar fields to **Ramp-up**, but this time it will be for when your host pool usage drops off. This will include the following fields:
+
+ - Start time
+ - Load-balancing algorithm
+ - Minimum percentage of hosts (%)
+ - Capacity threshold (%)
+ - Force logoff users
+
+ > [!IMPORTANT]
+ > - If you've enabled autoscale to force users to sign out during ramp-down, the feature will choose the session host with the lowest number of user sessions to shut down. Autoscale will put the session host in drain mode, send all active user sessions a notification telling them they'll be signed out, and then sign out all users after the specified wait time is over. After autoscale signs out all user sessions, it then deallocates the VM. If you haven't enabled forced sign out during ramp-down, session hosts with no active or disconnected sessions will be deallocated.
+ > - During ramp-down, autoscale will only shut down VMs if all existing user sessions in the host pool can be consolidated to fewer VMs without exceeding the capacity threshold.
+
+ - Likewise, **Off-peak hours** works the same way as **Peak hours**:
+
+ - Start time, which is also the end of the ramp-down period.
+ - Load-balancing algorithm. We recommend choosing **depth-first** to gradually reduce the number of session hosts based on sessions on each VM.
+ - Just like peak hours, you can't configure the capacity threshold here. Instead, the value you entered in **Ramp-down** will carry over.
+
+ #### [Personal host pools](#tab/personal-autoscale)
+
+ In each phase of the schedule, define whether VMs should be deallocated based on the user session state.
+
+ To create or change a schedule:
+
+ 1. In the **Schedules** tab, select **Add schedule**.
+
+ 1. Enter a name for your schedule into the **Schedule name** field.
+
+ 1. In the **Repeat on** field, select which days your schedule will repeat on.
+
+ 1. In the **Ramp up** tab, fill out the following fields:
+
+ - For **Start time**, select the time you want the ramp-up phase to start from the drop-down menu.
+
+ - For **Start VM on Connect**, select whether you want Start VM on Connect to be enabled during ramp up.
+
+ - For **VMs to start**, select whether you want only personal desktops that have a user assigned to them at the start time to be started, you want all personal desktops in the host pool (regardless of user assignment) to be started, or you want no personal desktops in the pool to be started.
+
+ > [!NOTE]
+ > We highly recommend that you enable Start VM on Connect if you choose not to start your VMs during the ramp-up phase.
+
+ - For **When disconnected for**, specify the number of minutes a user session has to be disconnected before performing a specific action. This number can be anywhere between 0 and 360.
+
+ - For **Perform**, specify what action the service should take after a user session has been disconnected for the specified time. The options are to either deallocate (shut down) the VMs, hibernate the personal desktop, or do nothing.
+
+ - For **When logged off for**, specify the number of minutes a user session has to be logged off before performing a specific action. This number can be anywhere between 0 and 360.
+
+ - For **Perform**, specify what action the service should take after a user session has been logged off for the specified time. The options are to either deallocate (shut down) the VMs, hibernate the personal desktop, or do nothing.
+
+ 1. In the **Peak hours**, **Ramp-down**, and **Off-peak hours** tabs, fill out the following fields:
+
+ - For **Start time**, enter a start time for each phase. This time is also the end time for the previous phase.
+
+ - For **Start VM on Connect**, select whether you want to enable Start VM on Connect to be enabled during that phase.
+
+ - For **When disconnected for**, specify the number of minutes a user session has to be disconnected before performing a specific action. This number can be anywhere between 0 and 360.
+
+ - For **Perform**, specify what action should be performed after a user session has been disconnected for the specified time. The options are to either deallocate (shut down) the VMs, hibernate the personal desktop, or do nothing.
+
+ - For **When logged off for**, specify the number of minutes a user session has to be logged off before performing a specific action. This number can be anywhere between 0 and 360.
+
+ - For **Perform**, specify what action should be performed after a user session has been logged off for the specified time. The options are to either deallocate (shut down) the VMs, hibernate the personal desktop, or do nothing.
+
- - For **Load balancing**, you can select either breadth-first or depth-first load balancing. Breadth-first load balancing distributes new user sessions across all available session hosts in the host pool. Depth-first load balancing distributes new sessions to any available session host with the highest number of connections that hasn't reached its session limit yet. For more information about load-balancing types, see [Configure the Azure Virtual Desktop load-balancing method](configure-host-pool-load-balancing.md).
+1. Select **Next** to take you to the **Host pool assignments** tab. Select the check box next to each host pool you want to include. If you don't want to enable autoscale, unselect all check boxes. You can always return to this setting later and change it. You can only assign the scaling plan to host pools that match the host pool type specified in the plan.
> [!NOTE]
- > You can't change the capacity threshold here. Instead, the setting you entered in **Ramp-up** will carry over to this setting.
-
- - For **Ramp-down**, you'll enter values into similar fields to **Ramp-up**, but this time it will be for when your host pool usage drops off. This will include the following fields:
-
- - Start time
- - Load-balancing algorithm
- - Minimum percentage of hosts (%)
- - Capacity threshold (%)
- - Force logoff users
-
- > [!IMPORTANT]
- > - If you've enabled autoscale to force users to sign out during ramp-down, the feature will choose the session host with the lowest number of user sessions to shut down. Autoscale will put the session host in drain mode, send all active user sessions a notification telling them they'll be signed out, and then sign out all users after the specified wait time is over. After autoscale signs out all user sessions, it then deallocates the VM. If you haven't enabled forced sign out during ramp-down, session hosts with no active or disconnected sessions will be deallocated.
- > - During ramp-down, autoscale will only shut down VMs if all existing user sessions in the host pool can be consolidated to fewer VMs without exceeding the capacity threshold.
-
- - Likewise, **Off-peak hours** works the same way as **Peak hours**:
-
- - Start time, which is also the end of the ramp-down period.
- - Load-balancing algorithm. We recommend choosing **depth-first** to gradually reduce the number of session hosts based on sessions on each VM.
- - Just like peak hours, you can't configure the capacity threshold here. Instead, the value you entered in **Ramp-down** will carry over.
-
-#### [Personal host pools](#tab/personal-autoscale)
-
-In each phase of the schedule, define whether VMs should be deallocated based on the user session state.
+ > - When you create or update a scaling plan that's already assigned to host pools, its changes will be applied immediately.
-To create or change a schedule:
+1. After that, you'll need to enter **tags**. Tags are name and value pairs that categorize resources for consolidated billing. You can apply the same tag to multiple resources and resource groups. To learn more about tagging resources, see [Use tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md).
-1. In the **Schedules** tab, select **Add schedule**.
+ > [!NOTE]
+ > If you change resource settings on other tabs after creating tags, your tags will be automatically updated.
-1. Enter a name for your schedule into the **Schedule name** field.
+1. Once you're done, go to the **Review + create** tab and select **Create** to deploy your scaling plan.
-1. In the **Repeat on** field, select which days your schedule will repeat on.
-
-1. In the **Ramp up** tab, fill out the following fields:
-
- - For **Start time**, select the time you want the ramp-up phase to start from the drop-down menu.
-
- - For **Start VM on Connect**, select whether you want Start VM on Connect to be enabled during ramp up.
-
- - For **VMs to start**, select whether you want only personal desktops that have a user assigned to them at the start time to be started, you want all personal desktops in the host pool (regardless of user assignment) to be started, or you want no personal desktops in the pool to be started.
+## Edit an existing scaling plan
- > [!NOTE]
- > We highly recommend that you enable Start VM on Connect if you choose not to start your VMs during the ramp-up phase.
+To edit an existing scaling plan:
- - For **When disconnected for**, specify the number of minutes a user session has to be disconnected before performing a specific action. This number can be anywhere between 0 and 360.
+1. Sign in to the [Azure portal](https://portal.azure.com).
- - For **Perform**, specify what action the service should take after a user session has been disconnected for the specified time. The options are to either deallocate (shut down) the VMs, hibernate the personal desktop, or do nothing.
-
- - For **When logged off for**, specify the number of minutes a user session has to be logged off before performing a specific action. This number can be anywhere between 0 and 360.
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
- - For **Perform**, specify what action the service should take after a user session has been logged off for the specified time. The options are to either deallocate (shut down) the VMs, hibernate the personal desktop, or do nothing.
+1. Select **Scaling plans**, then select the name of the scaling plan you want to edit. The overview blade of the scaling plan should open.
-1. In the **Peak hours**, **Ramp-down**, and **Off-peak hours** tabs, fill out the following fields:
+1. To change the scaling plan host pool assignments, under the **Manage** heading select **Host pool assignments**.
- - For **Start time**, enter a start time for each phase. This time is also the end time for the previous phase.
-
- - For **Start VM on Connect**, select whether you want to enable Start VM on Connect to be enabled during that phase.
+1. To edit schedules, under the **Manage** heading, select **Schedules**.
- - For **When disconnected for**, specify the number of minutes a user session has to be disconnected before performing a specific action. This number can be anywhere between 0 and 360.
+1. To edit the plan's friendly name, description, time zone, or exclusion tags, go to the **Properties** tab.
- - For **Perform**, specify what action should be performed after a user session has been disconnected for the specified time. The options are to either deallocate (shut down) the VMs, hibernate the personal desktop, or do nothing.
-
- - For **When logged off for**, specify the number of minutes a user session has to be logged off before performing a specific action. This number can be anywhere between 0 and 360.
+## Assign scaling plans to existing host pools
- - For **Perform**, specify what action should be performed after a user session has been logged off for the specified time. The options are to either deallocate (shut down) the VMs, hibernate the personal desktop, or do nothing.
-
+You can assign a scaling plan to any existing host pools in your deployment. When you assign a scaling plan to your host pool, the plan will apply to all session hosts within that host pool. The scaling plan also automatically applies to any new session hosts you create in the assigned host pool.
-## Assign host pools
+If you disable a scaling plan, all assigned resources will remain in the state they were in at the time you disabled it.
-Now that you've set up your scaling plan, it's time to assign the plan to your host pools. Select the check box next to each host pool you want to include. If you don't want to enable autoscale, unselect all check boxes. You can always return to this setting later and change it. You can only assign the scaling plan to host pools that match the host pool type specified in the plan.
+### Assign a scaling plan to a single existing host pool
+To assign a scaling plan to an existing host pool:
-> [!NOTE]
-> - When you create or update a scaling plan that's already assigned to host pools, its changes will be applied immediately.
+1. Open the [Azure portal](https://portal.azure.com).
-## Add tags
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
-After that, you'll need to enter tags. Tags are name and value pairs that categorize resources for consolidated billing. You can apply the same tag to multiple resources and resource groups. To learn more about tagging resources, see [Use tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md).
+1. Select **Host pools**, and select the host pool you want to assign the scaling plan to.
-> [!NOTE]
-> If you change resource settings on other tabs after creating tags, your tags will be automatically updated.
+1. Under the **Settings** heading, select **Scaling plan**, and then select **+ Assign**. Select the scaling plan you want to assign and select **Assign**. The scaling plan must be in the same Azure region as the host pool and the scaling plan's host pool type must match the type of host pool that you're trying to assign it to.
-Once you're done, go to the **Review + create** tab and select **Create** to deploy your host pool.
+> [!TIP]
+> If you've enabled the scaling plan during deployment, then you'll also have the option to disable the plan for the selected host pool in the **Scaling plan** menu by unselecting the **Enable autoscale** checkbox, as shown in the following screenshot.
+>
+> [!div class="mx-imgBorder"]
+> ![A screenshot of the scaling plan window. The "enable autoscale" check box is selected and highlighted with a red border.](media/enable-autoscale.png)
-## Edit an existing scaling plan
+### Assign a scaling plan to multiple existing host pools
-To edit an existing scaling plan:
+To assign a scaling plan to multiple existing host pools at the same time:
-1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Open the [Azure portal](https://portal.azure.com).
1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
-1. Select **Scaling plans**, then select the name of the scaling plan you want to edit. The overview blade of the scaling plan should open.
-
-1. To change the scaling plan host pool assignments, under the **Manage** heading select **Host pool assignments**.
+1. Select **Scaling plans**, and select the scaling plan you want to assign to host pools.
-1. To edit schedules, under the **Manage** heading, select **Schedules**.
-
-1. To edit the plan's friendly name, description, time zone, or exclusion tags, go to the **Properties** tab.
+1. Under the **Manage** heading, select **Host pool assignments**, and then select **+ Assign**. Select the host pools you want to assign the scaling plan to and select **Assign**. The host pools must be in the same Azure region as the scaling plan and the scaling plan's host pool type must match the type of host pools you're trying to assign it to.
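If you prefer to script the assignment rather than use the portal, you can update the scaling plan's host pool references directly against the resource provider. The following is a minimal sketch using `az rest`; the resource IDs are placeholders, the `api-version` value must be replaced with a current Microsoft.DesktopVirtualization API version, and the `hostPoolReferences` payload shape is an assumption you should verify against the scaling plan REST API reference.

```bash
# Sketch only: assign a scaling plan to a host pool by patching hostPoolReferences.
# Replace the placeholder IDs and <api-version>, and verify the payload shape
# against the current Microsoft.DesktopVirtualization REST API reference.
SCALING_PLAN_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DesktopVirtualization/scalingPlans/<plan-name>"
HOST_POOL_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DesktopVirtualization/hostPools/<host-pool-name>"

az rest --method patch \
  --url "https://management.azure.com${SCALING_PLAN_ID}?api-version=<api-version>" \
  --body "{
    \"properties\": {
      \"hostPoolReferences\": [
        { \"hostPoolArmPath\": \"${HOST_POOL_ID}\", \"scalingPlanEnabled\": true }
      ]
    }
  }"
```

The same constraints as the portal flow apply: the scaling plan and host pool must be in the same Azure region and have matching host pool types.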
## Next steps

Now that you've created your scaling plan, here are some things you can do:

-- [Assign your scaling plan to new and existing host pools](autoscale-new-existing-host-pool.md)
- [Enable diagnostics for your scaling plan](autoscale-diagnostics.md)

If you'd like to learn more about terms used in this article, check out our [autoscale glossary](autoscale-glossary.md). For examples of how autoscale works, see [Autoscale example scenarios](autoscale-scenarios.md). You can also look at our [Autoscale FAQ](autoscale-faq.yml) if you have other questions.
virtual-machines Nd H100 V5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nd-h100-v5-series.md
Ubuntu 20.04: 5.4.0-1046-azure
| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU Memory GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max network bandwidth | Max NICs |
|---|---|---|---|---|---|---|---|---|---|
-| Standard_ND96isr_v5 | 96 | 1900 | 1000 | 8 H100 80 GB GPUs(NVLink) | 80 | 32 | 40800/612 | 80,000 Mbps | 8 |
+| Standard_ND96isr_H100_v5 | 96 | 1900 | 1000 | 8 H100 80 GB GPUs(NVLink) | 80 | 32 | 40800/612 | 80,000 Mbps | 8 |
[!INCLUDE [virtual-machines-common-sizes-table-defs](../../includes/virtual-machines-common-sizes-table-defs.md)]
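To check whether this size is offered in a particular region before you deploy, you can query the compute SKUs with the Azure CLI. This is a general-purpose sketch, not part of the original article; the region value is a placeholder.

```bash
# Sketch: check availability (and any restrictions) for the ND H100 v5 size in a region.
az vm list-skus \
  --location eastus \
  --size Standard_ND96isr_H100_v5 \
  --all \
  --output table
```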
virtual-machines Virtual Machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md
Currently, you can create restore points in only one VM at a time. You can't cre
## Throttling limits for Restore points
-**Scope** | **Operation** | **Limit**
+**Scope** | **Operation** | **Limit per hour**
--- | --- | ---
VM | RestorePoints.RestorePointOperation.PUT (Create new **Application Consistent**) | 3
VM | RestorePoints.RestorePointOperation.PUT (Create new **Crash Consistent**) | 3
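For context, restore points are created inside a restore point collection, and the create operations above are limited to 3 per VM per hour. The sketch below shows a typical CLI flow; the command and parameter names are assumed from recent Azure CLI versions (verify with `az restore-point --help`), and the resource names are placeholders.

```bash
# Sketch: create a restore point collection for a VM, then a restore point.
# Names are placeholders; the per-VM create limit is 3 operations per hour.
VM_ID=$(az vm show --resource-group myResourceGroup --name myVM --query id --output tsv)

az restore-point collection create \
  --resource-group myResourceGroup \
  --collection-name myRpCollection \
  --source-id "$VM_ID"

az restore-point create \
  --resource-group myResourceGroup \
  --collection-name myRpCollection \
  --name myRestorePoint
```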
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/overview.md
**Applies to:** :heavy_check_mark: Linux VMs
-Red Hat workloads are supported through a variety of offerings on Azure. Red Hat Enterprise Linux (RHEL) images are at the core of RHEL workloads, as is the Red Hat Update Infrastructure (RHUI).
+Red Hat workloads are supported through a variety of offerings on Azure. Red Hat Enterprise Linux (RHEL) images are at the core of RHEL workloads, as is the Red Hat Update Infrastructure (RHUI). Red Hat JBoss EAP is also supported on Azure; see [Red Hat JBoss EAP](#red-hat-jboss-eap).
## Red Hat Enterprise Linux images
Azure provides Red Hat Update Infrastructure only for pay-as-you-go RHEL VMs. RH
RHEL images connected to RHUI update by default to the latest minor version of RHEL when a `yum update` is run. This behavior means that a RHEL 7.4 VM might get upgraded to RHEL 7.7 if a `yum update` operation is run on it. This behavior is by design for RHUI. To mitigate this upgrade behavior, switch from regular RHEL repositories to [Extended Update Support repositories](./redhat-rhui.md#rhel-eus-and-version-locking-rhel-vms).
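As an illustration of the mitigation, the sketch below pins a pay-as-you-go RHEL VM to a specific minor release once it has been switched to the EUS repositories. The release value and variable path are examples, not part of the original article; follow the linked EUS article for the authoritative steps for your RHEL version.

```bash
# Sketch: after switching to EUS repositories, lock the VM to a minor release
# so a `yum update` no longer moves it to the latest minor version.
# "8.8" is an example value; on newer releases the variable directory may be
# /etc/dnf/vars instead of /etc/yum/vars.
sudo sh -c 'echo 8.8 > /etc/yum/vars/releasever'

# Clear cached repository metadata so the version lock takes effect.
sudo yum clean all
```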
-## Red Hat Middleware
+## Red Hat JBoss EAP
-Microsoft and Azure have partnered to develop a variety of solutions for running Red Hat Middleware on Azure. Learn more about JBoss EAP on Azure Virtual Machines and Azure App service at [Red Hat JBoss EAP on Azure](/azure/developer/java/ee/jboss-on-azure).
+Microsoft and Red Hat have partnered to develop a variety of solutions for running Red Hat Middleware on Azure. Learn more about JBoss EAP on Azure Virtual Machines, Azure App Service, and Azure Red Hat OpenShift at [Red Hat JBoss EAP on Azure](/azure/developer/java/ee/jboss-on-azure?toc=/azure/virtual-machines/workloads/redhat/toc.json&bc=/azure/virtual-machines/workloads/redhat/breadcrumb/toc.json).
## Next steps
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
There are multiple ways to turn off default outbound access. The following secti
:::image type="content" source="./media/default-outbound-access/private-subnet-portal.png" alt-text="Screenshot of Azure portal showing Private subnet option.":::
-* Using PowerShell, when creating a subnet with [New-AzVirtualNetworkSubnetConfig](https://learn.microsoft.com/powershell/module/az.network/new-azvirtualnetworksubnetconfig?view=azps-11.1.0), use the `DefaultOutboundAccess` option and choose "$false"
+* Using PowerShell, when creating a subnet with [New-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/new-azvirtualnetworksubnetconfig), use the `DefaultOutboundAccess` option and choose "$false"
* Using CLI, when creating a subnet with [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create), use the `--default-outbound` option and choose "false"
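For example, the following CLI sketch creates a subnet with default outbound access disabled; the resource names and address range are placeholders.

```bash
# Sketch: create a subnet with default outbound access turned off (private subnet).
# Resource group, virtual network, and address prefix values are placeholders.
az network vnet subnet create \
  --resource-group myResourceGroup \
  --vnet-name myVNet \
  --name myPrivateSubnet \
  --address-prefixes 10.0.1.0/24 \
  --default-outbound false
```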
virtual-wan How To Palo Alto Cloud Ngfw https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-palo-alto-cloud-ngfw.md
To create a new virtual WAN, use the steps in the following article:
## Known limitations
-* Palo Alto Networks Cloud NGFW is only available in the following Azure regions: Central US, East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe, Australia East, Australia Southeast, UK South, UK West, Canada Central and East Asia. Other Azure regions are on the roadmap.
+* Palo Alto Networks Cloud NGFW is only available in the following Azure regions: Central US, East US, East US 2, West US, West US 2, West US 3, North Central US, Brazil South, North Europe, West Europe, UK South, UK West, Australia East, Australia Southeast, Canada Central, Japan East, Southeast Asia, and East Asia. Other Azure regions are on the roadmap.
* Palo Alto Networks Cloud NGFW can't be deployed with Network Virtual Appliances in the Virtual WAN hub.
-* For routing between Virtual WAN and Palo Alto Networks Cloud NGFW to work properly, your entire network (on-premises and Virtual Networks) must be within RFC-1918 (subnets within 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12). For example, you may not use a subnet such as 40.0.0.0/24 within your Virtual Network or on-premises. Traffic to 40.0.0.0/24 may not be routed properly.
* All other limitations in the [Routing Intent and Routing policies documentation limitations section](how-to-routing-policies.md) apply to Palo Alto Networks Cloud NGFW deployments in Virtual WAN.

## Register resource provider
The following section describes common issues seen when using Palo Alto Networks
### Troubleshooting Cloud NGFW creation
-* Ensure your Virtual Hubs are deployed in one of the following regions: Central US, East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe, Australia East, Australia Southeast, UK South, UK West, Canada Central and East Asia. Other regions are in the roadmap.
+* Ensure your Virtual Hubs are deployed in one of the following regions: Central US, East US, East US 2, West US, West US 2, West US 3, North Central US, Brazil South, North Europe, West Europe, UK South, UK West, Australia East, Australia Southeast, Canada Central, Japan East, Southeast Asia, and East Asia. Other regions are on the roadmap.
* Ensure the Routing status of the Virtual Hub is "Provisioned." Attempts to create Cloud NGFW prior to routing being provisioned will fail.
* Ensure registration to the **PaloAltoNetworks.Cloudngfw** resource provider is successful.
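If you need to perform or verify the registration from the command line, a minimal sketch with standard `az provider` commands is shown below; the provider namespace comes from the bullet above.

```bash
# Register the Palo Alto Networks Cloud NGFW resource provider and confirm its state.
az provider register --namespace PaloAltoNetworks.Cloudngfw
az provider show --namespace PaloAltoNetworks.Cloudngfw --query registrationState --output tsv
```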
The following section describes common issues seen when using Palo Alto Networks
### Troubleshooting Routing intent and policies

* Ensure Cloud NGFW deployment is completed successfully before attempting to configure Routing Intent.
-* Ensure all your on-premises and Azure Virtual Networks are in RFC1918 (subnets within 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12).
+* Ensure all your on-premises and Azure Virtual Networks are in RFC1918 (subnets within 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12). If there are networks that are not in RFC1918, make sure those prefixes are listed in the Private Traffic prefixes text box.
* For more information about troubleshooting routing intent, see the [Routing Intent documentation](how-to-routing-policies.md). It describes prerequisites, common errors associated with configuring routing intent, and troubleshooting tips.

### Troubleshooting Palo Alto Networks Cloud NGFW configuration
virtual-wan Route Maps How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/route-maps-how-to.md
description: Learn how to configure Route-maps for Virtual WAN virtual hubs.
Previously updated : 05/31/2023 Last updated : 01/16/2024
This article helps you create or edit a route map in an Azure Virtual WAN hub us
Verify that you've met the following criteria before beginning your configuration:
-You have virtual WAN with a connection (S2S, P2S, or ExpressRoute) already configured. For steps to create a VWAN with a S2S connection, see [Tutorial - Create a S2S connection with Virtual WAN](virtual-wan-site-to-site-portal.md). For steps to create a virtual WAN with a P2S User VPN connection, see [Tutorial - Create a User VPN P2S connection with Virtual WAN](virtual-wan-point-to-site-portal.md).
+* You have a virtual WAN with a connection (S2S, P2S, or ExpressRoute) already configured.
+ * For steps to create a VWAN with a S2S connection, see [Tutorial - Create a S2S connection with Virtual WAN](virtual-wan-site-to-site-portal.md).
+ * For steps to create a virtual WAN with a P2S User VPN connection, see [Tutorial - Create a User VPN P2S connection with Virtual WAN](virtual-wan-point-to-site-portal.md).
## Create a route map

The following steps walk you through how to configure a route map.

1. In the Azure portal, go to your Virtual WAN resource. Select **Hubs** to view the list of hubs.
The following steps walk you through how to configure a route map.
## Apply a route map to connections
-Once the route map is saved, you may apply the route map to the desired connections in the virtual hub.
+Once the route map is saved, you can apply the route map to the desired connections in the virtual hub.
1. On the **Route-maps** page, select **Apply Route-maps to connections**.