Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
azure-arc | Tutorial Gitops Flux2 Ci Cd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-gitops-flux2-ci-cd.md | description: "This tutorial walks through setting up a CI/CD solution using GitO Previously updated : 05/24/2022 Last updated : 03/03/2023 # Tutorial: Implement CI/CD with GitOps (Flux v2) If you don't have an Azure subscription, create a [free account](https://azure.m az extension add --name connectedk8s az extension add --name k8s-configuration ```+ * To update these extensions to the latest version, run the following commands: ```azurecli If you don't have an Azure subscription, create a [free account](https://azure.m ``` ### Connect Azure Container Registry to Kubernetes-Enable your Kubernetes cluster to pull images from your Azure Container Registry. If it's private, authentication will be required. ++Enable your Kubernetes cluster to pull images from your Azure Container Registry. If it's private, authentication is required. #### Connect Azure Container Registry to existing AKS clusters az aks update -n arc-cicd-cluster -g myResourceGroup --attach-acr arc-demo-acr To connect non-AKS and local clusters to your Azure Container Registry, create an image pull secret. Kubernetes uses image pull secrets to store information needed to authenticate your registry. Create an image pull secret with the following `kubectl` command. Repeat for both the `dev` and `stage` namespaces.+ ```console kubectl create secret docker-registry <secret-name> \ --namespace <namespace> \ kubectl create secret docker-registry <secret-name> \ --docker-password=<service-principal-password> ``` -To avoid having to set an imagePullSecret for every Pod, consider adding the imagePullSecret to the Service account in the `dev` and `stage` namespaces. See the [Kubernetes tutorial](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account) for more information. +To avoid having to set an imagePullSecret for every Pod, consider adding the imagePullSecret to the Service account in the `dev` and `stage` namespaces. For more information, see the [Kubernetes tutorial](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account). Depending on the CI/CD orchestrator you prefer, you can proceed with instructions either for Azure DevOps or for GitHub. Depending on the CI/CD orchestrator you prefer, you can proceed with instruction This tutorial assumes familiarity with Azure DevOps, Azure Repos and Pipelines, and Azure CLI. -Make sure to complete the following: +Make sure to complete the following steps first: * Sign into [Azure DevOps Services](https://dev.azure.com/). * Verify you have "Build Admin" and "Project Admin" permissions for [Azure Repos](/azure/devops/repos/get-started/what-is-repos) and [Azure Pipelines](/azure/devops/pipelines/get-started/pipelines-get-started). Make sure to complete the following: Import an [application repository](./conceptual-gitops-ci-cd.md#application-repo) and a [GitOps repository](./conceptual-gitops-ci-cd.md#gitops-repo) into Azure Repos. For this tutorial, use the following example repositories: * **arc-cicd-demo-src** application repository- * URL: https://github.com/Azure/arc-cicd-demo-src - * Contains the example Azure Vote App that you will deploy using GitOps. 
- * Import the repository with name `arc-cicd-demo-src` + * URL: https://github.com/Azure/arc-cicd-demo-src + * Contains the example Azure Vote App that you'll deploy using GitOps. + * Import the repository with name `arc-cicd-demo-src` * **arc-cicd-demo-gitops** GitOps repository- * URL: https://github.com/Azure/arc-cicd-demo-gitops - * Works as a base for your cluster resources that house the Azure Vote App. - * Import the repository with name `arc-cicd-demo-gitops` + * URL: https://github.com/Azure/arc-cicd-demo-gitops + * Works as a base for your cluster resources that house the Azure Vote App. + * Import the repository with name `arc-cicd-demo-gitops` Learn more about [importing Git repositories](/azure/devops/repos/git/import-git-repository). To continuously deploy your app, connect the application repository to your clus The initial GitOps repository contains only a [manifest](https://github.com/Azure/arc-cicd-demo-gitops/blob/master/arc-cicd-cluster/manifests/namespaces.yml) that creates the **dev** and **stage** namespaces corresponding to the deployment environments. The GitOps connection that you create will automatically:+ * Sync the manifests in the manifest directory. * Update the cluster state. -The CI/CD workflow will populate the manifest directory with extra manifests to deploy the app. +The CI/CD workflow populates the manifest directory with extra manifests to deploy the app. 1. [Create a new GitOps connection](./tutorial-use-gitops-flux2.md) to your newly imported **arc-cicd-demo-gitops** repository in Azure Repos. The CI/CD workflow will populate the manifest directory with extra manifests to --https-user <Azure Repos username> \ --https-key <Azure Repos PAT token> \ --scope cluster \- --cluster-type managedClusters \ + --cluster-type connectedClusters \ --branch master \ --kustomization name=cluster-config prune=true path=arc-cicd-cluster/manifests ``` + > [!TIP] + > For an AKS cluster (rather than an Arc-enabled cluster), use `--cluster-type managedClusters`. + 1. Check the state of the deployment in Azure portal. * If successful, you'll see both `dev` and `stage` namespaces created in your cluster.- * You can also check on Azure Portal page of your K8s cluster on `GitOps` tab a configuration `cluster-config` is created. -+ * You can also confirm that on the Azure portal page of your cluster, a configuration `cluster-config` is created on the `GitOps` tab. ### Import the CI/CD pipelines -Now that you've synced a GitOps connection, you'll need to import the CI/CD pipelines that create the manifests. +Now that you've synced a GitOps connection, you need to import the CI/CD pipelines that create the manifests. -The application repository contains a `.pipeline` folder with the pipelines you'll use for PRs, CI, and CD. Import and rename the three pipelines provided in the sample repository: +The application repository contains a `.pipeline` folder with pipelines used for PRs, CI, and CD. Import and rename the three pipelines provided in the sample repository: | Pipeline file name | Description | | - | - | The application repository contains a `.pipeline` folder with the pipelines you' | [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) | The application CD pipeline, named **arc-cicd-demo-src CD** | ### Connect Azure Container Registry to Azure DevOps-During the CI process, you'll deploy your application containers to a registry.
Start by creating an Azure service connection: ++During the CI process, you deploy your application containers to a registry. Start by creating an Azure service connection: 1. In Azure DevOps, open the **Service connections** page from the project settings page. In TFS, open the **Services** page from the **settings** icon in the top menu bar. 2. Choose **+ New service connection** and select the type of service connection you need. 3. Fill in the parameters for the service connection. For this tutorial:- * Name the service connection **arc-demo-acr**. + * Name the service connection **arc-demo-acr**. * Select **myResourceGroup** as the resource group.-4. Select the **Grant access permission to all pipelines**. - * This option authorizes YAML pipeline files for service connections. +4. Select the **Grant access permission to all pipelines**. + * This option authorizes YAML pipeline files for service connections. 5. Choose **OK** to create the connection. -### Configure PR Service Connection +### Configure PR service connection -CD pipeline manipulates PRs in the GitOps repository. It needs a Service Connection for that: +CD pipeline manipulates PRs in the GitOps repository. It needs a service connection in order to do this. To configure this connection: 1. In Azure DevOps, open the **Service connections** page from the project settings page. In TFS, open the **Services** page from the **settings** icon in the top menu bar. 2. Choose **+ New service connection** and select `Generic` type. 3. Fill in the parameters for the service connection. For this tutorial: * Server URL `https://dev.azure.com/<Your organization>/<Your project>/_apis/git/repositories/arc-cicd-demo-gitops`- * Leave Username and Password blank - * Name the service connection **azdo-pr-connection**. -4. Select the **Grant access permission to all pipelines**. - * This option authorizes YAML pipeline files for service connections. + * Leave Username and Password blank. + * Name the service connection **azdo-pr-connection**. +4. Select the **Grant access permission to all pipelines**. + * This option authorizes YAML pipeline files for service connections. 5. Choose **OK** to create the connection. ### Install GitOps Connector 1. Add GitOps Connector repository to Helm repositories:-```console - helm repo add gitops-connector https://azure.github.io/gitops-connector/ -``` -2. Install the connector to the cluster: -```console - helm upgrade -i gitops-connector gitops-connector/gitops-connector \ - --namespace flux-system \ - --set gitRepositoryType=AZDO \ - --set ciCdOrchestratorType=AZDO \ - --set gitOpsOperatorType=FLUX \ - --set azdoGitOpsRepoName=arc-cicd-demo-gitops \ - --set azdoOrgUrl=https://dev.azure.com/<Your organization>/<Your project> \ - --set gitOpsAppURL=https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \ - --set orchestratorPAT=<Azure Repos PAT token> -``` -> [!NOTE] -> `Azure Repos PAT token` should have `Build: Read & execute` and `Code: Full` permissions. -3. 
Configure Flux to send notifications to GitOps connector: -```console -cat <<EOF | kubectl apply -f - -apiVersion: notification.toolkit.fluxcd.io/v1beta1 -kind: Alert -metadata: - name: gitops-connector - namespace: flux-system -spec: - eventSeverity: info - eventSources: - - kind: GitRepository - name: cluster-config - - kind: Kustomization - name: cluster-config-cluster-config - providerRef: - name: gitops-connector --apiVersion: notification.toolkit.fluxcd.io/v1beta1 -kind: Provider -metadata: - name: gitops-connector - namespace: flux-system -spec: - type: generic - address: http://gitops-connector:8080/gitopsphase -EOF -``` + ```console + helm repo add gitops-connector https://azure.github.io/gitops-connector/ + ``` ++1. Install the connector to the cluster: ++ ```console + helm upgrade -i gitops-connector gitops-connector/gitops-connector \ + --namespace flux-system \ + --set gitRepositoryType=AZDO \ + --set ciCdOrchestratorType=AZDO \ + --set gitOpsOperatorType=FLUX \ + --set azdoGitOpsRepoName=arc-cicd-demo-gitops \ + --set azdoOrgUrl=https://dev.azure.com/<Your organization>/<Your project> \ + --set gitOpsAppURL=https://dev.azure.com/<Your organization>/<Your project>/_git/arc-cicd-demo-gitops \ + --set orchestratorPAT=<Azure Repos PAT token> + ``` ++ > [!NOTE] + > `Azure Repos PAT token` should have `Build: Read & execute` and `Code: Full` permissions. ++1. Configure Flux to send notifications to GitOps connector: ++ ```console + cat <<EOF | kubectl apply -f - + apiVersion: notification.toolkit.fluxcd.io/v1beta1 + kind: Alert + metadata: + name: gitops-connector + namespace: flux-system + spec: + eventSeverity: info + eventSources: + - kind: GitRepository + name: cluster-config + - kind: Kustomization + name: cluster-config-cluster-config + providerRef: + name: gitops-connector + + apiVersion: notification.toolkit.fluxcd.io/v1beta1 + kind: Provider + metadata: + name: gitops-connector + namespace: flux-system + spec: + type: generic + address: http://gitops-connector:8080/gitopsphase + EOF + ``` For the details on installation, refer to the [GitOps Connector](https://github.com/microsoft/gitops-connector#installation) repository. ### Create environment variable groups #### App repository variable group+ [Create a variable group](/azure/devops/pipelines/library/variable-groups) named **az-vote-app-dev**. Set the following values: | Variable | Value | For the details on installation, refer to the [GitOps Connector](https://github. | ORGANIZATION_NAME | Name of Azure DevOps organization | | PROJECT_NAME | Name of GitOps project in Azure DevOps | | REPO_URL | Full URL for GitOps repository |-| SRC_FOLDER | `azure-vote` | +| SRC_FOLDER | `azure-vote` | | TARGET_CLUSTER | `arc-cicd-cluster` | | TARGET_NAMESPACE | `dev` | | VOTE_APP_TITLE | Voting Application | You're now ready to deploy to the `dev` and `stage` environments. #### Create environments -In Azure DevOps project create `Dev` and `Stage` environments. See [Create and target an environment](/azure/devops/pipelines/process/environments) for more details. +In your Azure DevOps project, create `Dev` and `Stage` environments. For details, see [Create and target an environment](/azure/devops/pipelines/process/environments). ### Give more permissions to the build service The CD pipeline uses the security token of the running build to authenticate to 1. Go to `Project settings` from the Azure DevOps project main page. 1. Select `Repos/Repositories`.-1. Select `Security`. +1. Select `Security`. 1. 
For the `<Project Name> Build Service (<Organization Name>)` and for the `Project Collection Build Service (<Organization Name>)` (type in the search field, if it doesn't show up), allow `Contribute`, `Contribute to pull requests`, and `Create branch`. 1. Go to `Pipelines/Settings`-1. Switch off `Protect access to repositories in YAML pipelines` option +1. Switch off `Protect access to repositories in YAML pipelines` option For more information, see:-- [Grant VC Permissions to the Build Service](/azure/devops/pipelines/scripts/git-commands?preserve-view=true&tabs=yaml&view=azure-devops#version-control )-- [Manage Build Service Account Permissions](/azure/devops/pipelines/process/access-tokens?preserve-view=true&tabs=yaml&view=azure-devops#manage-build-service-account-permissions)++* [Grant VC Permissions to the Build Service](/azure/devops/pipelines/scripts/git-commands?preserve-view=true&tabs=yaml&view=azure-devops#version-control ) +* [Manage Build Service Account Permissions](/azure/devops/pipelines/process/access-tokens?preserve-view=true&tabs=yaml&view=azure-devops#manage-build-service-account-permissions) ### Deploy the dev environment for the first time With the CI and CD pipelines created, run the CI pipeline to deploy the app for #### CI pipeline During the initial CI pipeline run, you may get a resource authorization error in reading the service connection name.+ 1. Verify the variable being accessed is AZURE_SUBSCRIPTION. 1. Authorize the use. 1. Rerun the pipeline. The CI pipeline:+ * Ensures the application change passes all automated quality checks for deployment. * Does any extra validation that couldn't be completed in the PR pipeline.- * Specific to GitOps, the pipeline also publishes the artifacts for the commit that will be deployed by the CD pipeline. + * Specific to GitOps, the pipeline also publishes the artifacts for the commit that will be deployed by the CD pipeline. * Verifies the Docker image has changed and the new image is pushed. #### CD pipeline -During the initial CD pipeline run, you'll be asked to give the pipeline access to the GitOps repository. Select View when prompted that the pipeline needs permission to access a resource. Then, select Permit to grant permission to use the GitOps repository for the current and future runs of the pipeline. +During the initial CD pipeline run, you need to give the pipeline access to the GitOps repository. Select **View** when prompted that the pipeline needs permission to access a resource. Then, select **Permit** to grant permission to use the GitOps repository for the current and future runs of the pipeline. The successful CI pipeline run triggers the CD pipeline to complete the deployment process. You'll deploy to each environment incrementally. > [!TIP] > If the CD pipeline does not automatically trigger:+> > 1. Verify the name matches the branch trigger in [`.pipelines/az-vote-cd-pipeline.yaml`](https://github.com/Azure/arc-cicd-demo-src/blob/master/.pipelines/az-vote-cd-pipeline.yaml) > * It should be `arc-cicd-demo-src CI`. > 1. Rerun the CI pipeline. -Once the template and manifest changes to the GitOps repository have been generated, the CD pipeline will create a commit, push it, and create a PR for approval. +Once the template and manifest changes to the GitOps repository have been generated, the CD pipeline creates a commit, pushes it, and creates a PR for approval. + 1. Find the PR created by the pipeline to the GitOps repository. 1. Verify the changes to the GitOps repository. 
You should see: * High-level Helm template changes. Once the template and manifest changes to the GitOps repository have been genera 1. If everything looks good, approve and complete the PR. 1. After a few minutes, Flux picks up the change and starts the deployment.-1. Monitor the Git Commit status on the Commit history tab. Once it is `succeeded` the CD pipeline will go ahead and start automated testing +1. Monitor the Git Commit status on the Commit history tab. Once it is `succeeded`, the CD pipeline starts automated testing. 1. Forward the port locally using `kubectl` and ensure the app works correctly using: ```console With this baseline set of templates and manifests representing the state on the 2. Since "Cats vs Dogs" isn't getting enough votes, change it to "Tabs vs Spaces" to drive up the vote count. -3. Commit the change in a new branch, push it, and create a pull request. - * This is the typical developer flow that will start the CI/CD lifecycle. +3. Commit the change in a new branch, push it, and create a pull request. This sequence of steps is the typical developer flow that starts the CI/CD lifecycle. ### PR validation pipeline The PR pipeline is the first line of defense against a faulty change. Usual appl The application's Dockerfile and Helm charts can use linting in a similar way to the application. -Errors found during linting range from: -* Incorrectly formatted YAML files, to -* Best practice suggestions, such as setting CPU and Memory limits for your application. +Errors found during linting range from incorrectly formatted YAML files, to best practice suggestions, such as setting CPU and Memory limits for your application. > [!NOTE] > To get the best coverage from Helm linting in a real application, you will need to substitute values that are reasonably similar to those used in a real environment. Errors found during pipeline execution appear in the test results section of the run. From here, you can:+ * Track the useful statistics on the error types. * Find the first commit on which they were detected. * Stack trace style links to the code sections that caused the error. -Once the pipeline run has finished, you have assured the quality of the application code and the template that will deploy it. You can now approve and complete the PR. The CI will run again, regenerating the templates and manifests, before triggering the CD pipeline. +Once the pipeline run has finished, you have assured the quality of the application code and the template that deploys it. You can now approve and complete the PR. The CI will run again, regenerating the templates and manifests, before triggering the CD pipeline. > [!TIP]-> In a real environment, don't forget to set branch policies to ensure the PR passes your quality checks. For more information, see the [Set branch policies](/azure/devops/repos/git/branch-policies) article. +> In a real environment, don't forget to set branch policies to ensure the PR passes your quality checks. For more information, see [Set branch policies](/azure/devops/repos/git/branch-policies). ### CD process approvals A successful CI pipeline run triggers the CD pipeline to complete the deployment process. This time, the pipeline requires you to approve each deployment environment. 1. Approve the deployment to the `dev` environment.-1. Once the template and manifest changes to the GitOps repository have been generated, the CD pipeline will create a commit, push it, and create a PR for approval. +1. 
Once the template and manifest changes to the GitOps repository have been generated, the CD pipeline creates a commit, pushes it, and creates a PR for approval. 1. Verify the changes to the GitOps repository. You should see: * High-level Helm template changes. * Low-level Kubernetes manifests that show the underlying changes to the desired state. A successful CI pipeline run triggers the CD pipeline to complete the deployment * View the Azure Vote app in your browser at `http://localhost:8080/` and verify the voting choices have changed to Tabs vs Spaces. 1. Repeat steps 1-7 for the `stage` environment. -Your deployment is now complete. This ends the CI/CD workflow. Refer to the [Azure DevOps GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/master/docs/azdo-gitops.md) in the application repository that explains in details the steps and techniques implemented in the CI/CD pipelines used in this tutorial. +The deployment is now complete. ++For a detailed overview of all the steps and techniques implemented in the CI/CD workflows used in this tutorial, see the [Azure DevOps GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/master/docs/azdo-gitops.md). + ## Implement CI/CD with GitHub This tutorial assumes familiarity with GitHub, GitHub Actions. Fork an [application repository](./conceptual-gitops-ci-cd.md#application-repo) and a [GitOps repository](./conceptual-gitops-ci-cd.md#gitops-repo). For this tutorial, use the following example repositories: * **arc-cicd-demo-src** application repository- * URL: https://github.com/Azure/arc-cicd-demo-src - * Contains the example Azure Vote App that you will deploy using GitOps. + * URL: https://github.com/Azure/arc-cicd-demo-src + * Contains the example Azure Vote App that you will deploy using GitOps. * **arc-cicd-demo-gitops** GitOps repository- * URL: https://github.com/Azure/arc-cicd-demo-gitops - * Works as a base for your cluster resources that house the Azure Vote App. + * URL: https://github.com/Azure/arc-cicd-demo-gitops + * Works as a base for your cluster resources that house the Azure Vote App. ### Connect the GitOps repository To continuously deploy your app, connect the application repository to your clus The initial GitOps repository contains only a [manifest](https://github.com/Azure/arc-cicd-demo-gitops/blob/master/arc-cicd-cluster/manifests/namespaces.yml) that creates the **dev** and **stage** namespaces corresponding to the deployment environments. The GitOps connection that you create will automatically:+ * Sync the manifests in the manifest directory. * Update the cluster state. -The CI/CD workflow will populate the manifest directory with extra manifests to deploy the app. +The CI/CD workflow populates the manifest directory with extra manifests to deploy the app. 1. [Create a new GitOps connection](./tutorial-use-gitops-flux2.md) to your newly forked **arc-cicd-demo-gitops** repository in GitHub. The CI/CD workflow will populate the manifest directory with extra manifests to --https-user <Azure Repos username> \ --https-key <Azure Repos PAT token> \ --scope cluster \- --cluster-type managedClusters \ + --cluster-type connectedClusters \ --branch master \ --kustomization name=cluster-config prune=true path=arc-cicd-cluster/manifests ``` The CI/CD workflow will populate the manifest directory with extra manifests to ### Install GitOps Connector 1. 
Add GitOps Connector repository to Helm repositories:-```console - helm repo add gitops-connector https://azure.github.io/gitops-connector/ -``` -2. Install the connector to the cluster: -```console - helm upgrade -i gitops-connector gitops-connector/gitops-connector \ - --namespace flux-system \ - --set gitRepositoryType=GITHUB \ - --set ciCdOrchestratorType=GITHUB \ - --set gitOpsOperatorType=FLUX \ - --set gitHubGitOpsRepoName=arc-cicd-demo-src \ - --set gitHubGitOpsManifestsRepoName=arc-cicd-demo-gitops \ - --set gitHubOrgUrl=https://api.github.com/repos/<Your organization> \ - --set gitOpsAppURL=https://github.com/<Your organization>/arc-cicd-demo-gitops/commit \ - --set orchestratorPAT=<GitHub PAT token> -``` -3. Configure Flux to send notifications to GitOps connector: -```console -cat <<EOF | kubectl apply -f - -apiVersion: notification.toolkit.fluxcd.io/v1beta1 -kind: Alert -metadata: - name: gitops-connector - namespace: flux-system -spec: - eventSeverity: info - eventSources: - - kind: GitRepository - name: cluster-config - - kind: Kustomization - name: cluster-config-cluster-config - providerRef: - name: gitops-connector --apiVersion: notification.toolkit.fluxcd.io/v1beta1 -kind: Provider -metadata: - name: gitops-connector - namespace: flux-system -spec: - type: generic - address: http://gitops-connector:8080/gitopsphase -EOF -``` + ```console + helm repo add gitops-connector https://azure.github.io/gitops-connector/ + ``` ++1. Install the connector to the cluster: ++ ```console + helm upgrade -i gitops-connector gitops-connector/gitops-connector \ + --namespace flux-system \ + --set gitRepositoryType=GITHUB \ + --set ciCdOrchestratorType=GITHUB \ + --set gitOpsOperatorType=FLUX \ + --set gitHubGitOpsRepoName=arc-cicd-demo-src \ + --set gitHubGitOpsManifestsRepoName=arc-cicd-demo-gitops \ + --set gitHubOrgUrl=https://api.github.com/repos/<Your organization> \ + --set gitOpsAppURL=https://github.com/<Your organization>/arc-cicd-demo-gitops/commit \ + --set orchestratorPAT=<GitHub PAT token> + ``` ++1. Configure Flux to send notifications to GitOps connector: ++ ```console + cat <<EOF | kubectl apply -f - + apiVersion: notification.toolkit.fluxcd.io/v1beta1 + kind: Alert + metadata: + name: gitops-connector + namespace: flux-system + spec: + eventSeverity: info + eventSources: + - kind: GitRepository + name: cluster-config + - kind: Kustomization + name: cluster-config-cluster-config + providerRef: + name: gitops-connector + + apiVersion: notification.toolkit.fluxcd.io/v1beta1 + kind: Provider + metadata: + name: gitops-connector + namespace: flux-system + spec: + type: generic + address: http://gitops-connector:8080/gitopsphase + EOF + ``` For the details on installation, refer to the [GitOps Connector](https://github.com/microsoft/gitops-connector#installation) repository. For the details on installation, refer to the [GitOps Connector](https://github. | ENVIRONMENT_NAME | Dev | | TARGET_NAMESPACE | `dev` | -2. Create `az-vote-app-stage` environment with the following secrets: +1. Create `az-vote-app-stage` environment with the following secrets: | Secret | Value | | -- | -- | You're now ready to deploy to the `dev` and `stage` environments. #### CI/CD Dev workflow -To start the CI/CD Dev workflow change the source code. In the application repository, update values in `.azure-vote/src/azure-vote-front/config_file.cfg` file and push the changes to the repository. +To start the CI/CD Dev workflow, change the source code. 
In the application repository, update values in `.azure-vote/src/azure-vote-front/config_file.cfg` file and push the changes to the repository. The CI/CD Dev workflow:+ * Ensures the application change passes all automated quality checks for deployment. * Does any extra validation that couldn't be completed in the PR pipeline. * Verifies the Docker image has changed and the new image is pushed. * Publishes the artifacts (Docker image tags, Manifest templates, Utils) that will be used by the following CD stages. * Deploys the application to Dev environment.- * Generates manifests to the GitOps repository - * Creates a PR to the GitOps repository for approval + * Generates manifests to the GitOps repository. + * Creates a PR to the GitOps repository for approval. 1. Find the PR created by the pipeline to the GitOps repository. 1. Verify the changes to the GitOps repository. You should see: * High-level Helm template changes. * Low-level Kubernetes manifests that show the underlying changes to the desired state. Flux deploys these manifests. 1. If everything looks good, approve and complete the PR.- 1. After a few minutes, Flux picks up the change and starts the deployment.-1. Monitor the Git Commit status on the Commit history tab. Once it is `succeeded` the `CD Stage` workflow will start +1. Monitor the Git Commit status on the Commit history tab. Once it is `succeeded`, the `CD Stage` workflow will start. 1. Forward the port locally using `kubectl` and ensure the app works correctly using:+ ```console kubectl port-forward -n dev svc/azure-vote-front 8080:80 ``` 1. View the Azure Vote app in your browser at `http://localhost:8080/`.- 1. Vote for your favorites and get ready to make some changes to the app. #### CD Stage workflow The CI/CD Dev workflow: The CD Stage workflow starts automatically once Flux successfully deploys the application to dev environment and notifies GitHub actions via GitOps Connector. The CD Stage workflow:+ * Runs application smoke tests against Dev environment * Deploys the application to Stage environment.- * Generates manifests to the GitOps repository - * Creates a PR to the GitOps repository for approval + * Generates manifests to the GitOps repository + * Creates a PR to the GitOps repository for approval -Once the manifests PR to the Stage environment is merged and Flux successfully applied all the changes, it updates Git commit status in the GitOps repository. +Once the manifests PR to the Stage environment is merged and Flux successfully applies all the changes, the Git commit status is updated in the GitOps repository. The deployment is now complete. -Your deployment is now complete. This ends the CI/CD workflow. Refer to the [GitHub GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/master/docs/azdo-gitops-githubfluxv2.md) in the application repository that explains in details the steps and techniques implemented in the CI/CD workflows used in this tutorial. +For a detailed overview of all the steps and techniques implemented in the CI/CD workflows used in this tutorial, see the [GitHub GitOps Flow diagram](https://github.com/Azure/arc-cicd-demo-src/blob/master/docs/azdo-gitops-githubfluxv2.md). ## Clean up resources If you're not going to continue to use this application, delete any resources wi --name cluster-config \ --cluster-name arc-cicd-cluster \ --resource-group myResourceGroup \- -t managedClusters --yes + -t connectedClusters --yes ``` -2. Delete GitOps Connector: +1. 
Delete GitOps Connector: + ```console helm uninstall gitops-connector -n flux-system kubectl delete alerts.notification.toolkit.fluxcd.io gitops-connector -n flux-system |
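For the container registry step in the Azure Arc tutorial changes above, the text suggests adding the image pull secret to the service account in the `dev` and `stage` namespaces but doesn't show the command. A minimal sketch, assuming the secret created earlier with `kubectl create secret docker-registry` and that the app's Pods use the `default` service account:

```console
# Attach the existing image pull secret to the default service account so Pods
# in the namespace can pull from the registry without per-Pod imagePullSecrets.
kubectl patch serviceaccount default \
  --namespace dev \
  --patch '{"imagePullSecrets": [{"name": "<secret-name>"}]}'

# Repeat for the stage namespace.
kubectl patch serviceaccount default \
  --namespace stage \
  --patch '{"imagePullSecrets": [{"name": "<secret-name>"}]}'
```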
azure-monitor | Alerts Create New Alert Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md | Then you define these elements for the resulting alert actions by using: 1. On the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, or **resource location**. - The **Available signal types** for your selected resources are at the bottom right of the pane. - > [!NOTE] > If you select a Log analytics workspace resource, keep in mind that if the workspace receives telemetry from resources in more than one subscription, alerts are sent about those resources from different subscriptions. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot that shows the select resource pane for creating a new alert rule."::: -1. Select **Include all future resources** to include any future resources added to the selected scope. -1. Select **Done**. +1. Select **Apply**. 1. Select **Next: Condition** at the bottom of the page.-1. On the **Select a signal** pane, filter the list of signals by using the signal type and monitor service: +1. On the **Select a signal** pane, you can search for the signal name or you can filter the list of signals by: - **Signal type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.- - **Monitor service**: The service sending the signal. This list is pre-populated based on the type of alert rule you selected. + - **Signal source**: The service sending the signal. The list is pre-populated based on the type of alert rule you selected. This table describes the services available for each type of alert rule: - |Signal type |Monitor service |Description | + |Signal type |Signal source |Description | |||| |Metrics|Platform |For metric signals, the monitor service is the metric namespace. "Platform" means the metrics are provided by the resource provider, namely, Azure.| | |Azure.ApplicationInsights|Customer-reported metrics, sent by the Application Insights SDK. | Then you define these elements for the resulting alert actions by using: |Resource health|Resource health|The service that provides the resource-level health status. | |Service health|Service health|The service that provides the subscription-level health status. | -1. Select the **Signal name**, and follow the steps in the following tab that corresponds to the type of alert you're creating. +1. Select the **Signal name** and **Apply**. +1. Follow the steps in the tab that corresponds to the type of alert you're creating. ### [Metric alert](#tab/metric) Then you define these elements for the resulting alert actions by using: Dimensions are name-value pairs that contain more data about the metric value. By using dimensions, you can filter the metrics and monitor specific time-series, instead of monitoring the aggregate of all the dimensional values. - If you select more than one dimension value, each time series that results from the combination will trigger its own alert and be charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, and PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data). 
Or you can use dimensions to alert only when the number of transactions is high for specific APIs. + If you select more than one dimension value, each time series that results from the combination triggers its own alert and is charged separately. For example, the transactions metric of a storage account can have an API name dimension that contains the name of the API called by each transaction (for example, GetBlob, DeleteBlob, and PutPage). You can choose to have an alert fired when there's a high number of transactions in a specific API (the aggregated data). Or you can use dimensions to alert only when the number of transactions is high for specific APIs. |Field |Description | ||| Then you define these elements for the resulting alert actions by using: |Field |Description | ||| |Threshold|Select if the threshold should be evaluated based on a static value or a dynamic value.<br>A **static threshold** evaluates the rule by using the threshold value that you configure.<br>**Dynamic thresholds** use machine learning algorithms to continuously learn the metric behavior patterns and calculate the appropriate thresholds for unexpected behavior. You can learn more about using [dynamic thresholds for metric alerts](alerts-types.md#dynamic-thresholds). |- |Operator|Select the operator for comparing the metric value against the threshold. <br>If you are using dynamic thresholds, alert rules can use tailored thresholds based on metric behavior for both upper and lower bounds in the same alert rule. Select one of these operators: <br> - Greater than the upper threshold or lower than the lower threshold (default) <br> - Greater than the upper threshold <br> - Lower than the lower threshold| + |Operator|Select the operator for comparing the metric value against the threshold. <br>If you're using dynamic thresholds, alert rules can use tailored thresholds based on metric behavior for both upper and lower bounds in the same alert rule. Select one of these operators: <br> - Greater than the upper threshold or lower than the lower threshold (default) <br> - Greater than the upper threshold <br> - Lower than the lower threshold| |Aggregation type|Select the aggregation function to apply on the data points: Sum, Count, Average, Min, or Max. | |Threshold value|If you selected a **static** threshold, enter the threshold value for the condition logic. | |Unit|If the selected metric signal supports different units, such as bytes, KB, MB, and GB, and if you selected a **static** threshold, enter the unit for the condition logic.| Then you define these elements for the resulting alert actions by using: |Field |Description | ||| |Check every|Select how often the alert rule checks if the condition is met. |- |Lookback period|Select how far back to look each time the data is checked. For example, every 1 minute youΓÇÖll be looking at the past 5 minutes.| + |Lookback period|Select how far back to look each time the data is checked. For example, every 1 minute, look back 5 minutes.| - 1. (Optional) In the **Advanced options** section, you can specify how many failures within a specific time period will trigger the alert. For example, you can specify that you only want to trigger an alert if there were three failures in the last hour. This setting is defined by your application business policy. + 1. (Optional) In the **Advanced options** section, you can specify how many failures within a specific time period trigger an alert. 
For example, you can specify that you only want to trigger an alert if there were three failures in the last hour. Your application business policy should determine this setting. Select values for these fields: Then you define these elements for the resulting alert actions by using: > [!NOTE] > If you're creating a new log alert rule, note that the current alert rule wizard is different from the earlier experience. For more information, see [Changes to the log alert rule creation experience](#changes-to-the-log-alert-rule-creation-experience). - 1. On the **Logs** pane, write a query that will return the log events for which you want to create an alert. + 1. On the **Logs** pane, write a query that returns the log events for which you want to create an alert. To use one of the predefined alert rule queries, expand the **Schema and filter** pane on the left of the **Logs** pane. Then select the **Queries** tab, and select one of the queries. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-log-rule-query-pane.png" alt-text="Screenshot that shows the Query pane when creating a new log alert rule."::: Then you define these elements for the resulting alert actions by using: :::image type="content" source="media/alerts-create-new-alert-rule/alerts-create-log-rule-logic.png" alt-text="Screenshot that shows the Alert logic section of a new log alert rule."::: - 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. This setting is defined by your application business policy. + 1. (Optional) In the **Advanced options** section, you can specify the number of failures and the alert evaluation period required to trigger an alert. For example, if you set **Aggregation granularity** to 5 minutes, you can specify that you only want to trigger an alert if there were three failures (15 minutes) in the last hour. Your application business policy determines this setting. Select values for these fields under **Number of violations to trigger the alert**: Then you define these elements for the resulting alert actions by using: :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-tags-tab.png" alt-text="Screenshot that shows the Tags tab when creating a new alert rule."::: -1. On the **Review + create** tab, a validation will run and inform you of any issues. +1. On the **Review + create** tab, the rule is validated, and lets you know about any issues. 1. When validation passes and you've reviewed the settings, select the **Create** button. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-review-create.png" alt-text="Screenshot that shows the Review and create tab when creating a new alert rule."::: |
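The alert rule changes above walk through the portal wizard (scope, signal, threshold, evaluation frequency, and lookback period). A minimal Azure CLI sketch of the same kind of metric alert rule, using a static threshold; the scope, names, and threshold values are placeholders:

```azurecli
# Create a metric alert rule: average CPU above 80% over a 5-minute window,
# evaluated every minute.
az monitor metrics alert create \
  --name "high-cpu-alert" \
  --resource-group myResourceGroup \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
  --condition "avg Percentage CPU > 80" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --severity 2 \
  --description "Average CPU greater than 80% over the last 5 minutes"
```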
azure-monitor | Alerts Manage Alert Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md | description: Manage your alert rules in the Azure portal, or using the CLI or Po Previously updated : 02/20/2023 Last updated : 03/05/2023 # Manage your alert rules |
azure-monitor | Best Practices Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-alerts.md | Each alert rule defines the severity of the alerts that it creates based on the | Sev 1 | Error | Degradation of performance or loss of availability of some aspect of an application or service. Requires attention but not immediate. | | Sev 2 | Warning | A problem that doesn't include any current loss in availability or performance, although it has the potential to lead to more severe problems if unaddressed. | | Sev 3 | Informational | Doesn't indicate a problem but provides interesting information to an operator, such as successful completion of a regular process. |-| Sev 4 | Verbose | Detailed information that isn't useful. +| Sev 4 | Verbose | Doesn't indicate a problem but provides detailed diagnostic information. | Assess the severity of the condition each rule is identifying to assign an appropriate level. Define the types of issues you assign to each severity level and your standard response to each in your alerts strategy. |
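The severity guidance above maps to the `--severity` parameter on alert rules. A small sketch, assuming an existing metric alert rule named `high-cpu-alert` (a placeholder), that re-classifies it to match an alerting strategy:

```azurecli
# Lower the severity of an existing metric alert rule from Sev 2 (Warning)
# to Sev 3 (Informational). Rule and resource group names are placeholders.
az monitor metrics alert update \
  --name "high-cpu-alert" \
  --resource-group myResourceGroup \
  --severity 3
```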
azure-monitor | Azure Monitor Workspace Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md | Last updated 01/22/2023 # Azure Monitor workspace (preview) An Azure Monitor workspace is a unique environment for data collected by Azure Monitor. Each workspace has its own data repository, configuration, and permissions. -+> [!Note] +> Log Analytics workspaces contain logs and metrics data from multiple Azure resources, whereas Azure Monitor workspaces contain only metrics related to Prometheus. + ## Contents of Azure Monitor workspace Azure Monitor workspaces will eventually contain all metric data collected by Azure Monitor. Currently, Prometheus metrics are the only data hosted in an Azure Monitor workspace. |
azure-monitor | Collect Custom Metrics Guestos Resource Manager Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vm.md | Title: Collect Windows VM metrics in Azure Monitor with template -description: Send guest OS metrics to the Azure Monitor metric database store by using a Resource Manager template for a Windows virtual machine + Title: Collect Windows VM metrics in Azure Monitor with a template +description: Send guest OS metrics to the Azure Monitor metric database store by using a Resource Manager template for a Windows virtual machine. -# Send guest OS metrics to the Azure Monitor metric store by using an Azure Resource Manager template for a Windows virtual machine -Performance data from the guest OS of Azure virtual machines is not collected automatically like other [platform metrics](./monitor-azure-resource.md#monitoring-data). Install the Azure Monitor [diagnostics extension](../agents/diagnostics-extension-overview.md) to collect guest OS metrics into the metrics database so it can be used with all features of Azure Monitor Metrics, including near-real time alerting, charting, routing, and access from a REST API. This article describes the process for sending Guest OS performance metrics for a Windows virtual machine to the metrics database using a Resource Manager template. +# Send guest OS metrics to the Azure Monitor metrics store by using an ARM template for a Windows VM -> [!NOTE] -> For details on configuring the diagnostics extension to collect guest OS metrics using the Azure portal, see [Install and configure Windows Azure diagnostics extension (WAD)](../agents/diagnostics-extension-windows-install.md). +Performance data from the guest OS of Azure virtual machines (VMs) isn't collected automatically like other [platform metrics](./monitor-azure-resource.md#monitoring-data). Install the Azure Monitor [Diagnostics extension](../agents/diagnostics-extension-overview.md) to collect guest OS metrics into the metrics database so that it can be used with all features of Azure Monitor Metrics. These features include near real time alerting, charting, routing, and access from a REST API. This article describes the process for sending guest OS performance metrics for a Windows VM to the metrics database by using an Azure Resource Manager template (ARM template). +> [!NOTE] +> For details on configuring the diagnostics extension to collect guest OS metrics by using the Azure portal, see [Install and configure the Windows Azure Diagnostics (WAD) extension](../agents/diagnostics-extension-windows-install.md). -If you're new to Resource Manager templates, learn about [template deployments](../../azure-resource-manager/management/overview.md) and their structure and syntax. +If you're new to ARM templates, learn about [template deployments](../../azure-resource-manager/management/overview.md) and their structure and syntax. ## Prerequisites - Your subscription must be registered with [Microsoft.Insights](../../azure-resource-manager/management/resource-providers-and-types.md).- - You need to have either [Azure PowerShell](/powershell/azure) or [Azure Cloud Shell](../../cloud-shell/overview.md) installed.- - Your VM resource must be in a [region that supports custom metrics](./metrics-custom-overview.md#supported-regions). - ## Set up Azure Monitor as a data sink-The Azure Diagnostics extension uses a feature called "data sinks" to route metrics and logs to different locations. 
The following steps show how to use a Resource Manager template and PowerShell to deploy a VM by using the new "Azure Monitor" data sink. +The Azure Diagnostics extension uses a feature called *data sinks* to route metrics and logs to different locations. The following steps show how to use an ARM template and PowerShell to deploy a VM by using the new Azure Monitor data sink. -## Author Resource Manager template -For this example, you can use a publicly available sample template. The starting templates are at -https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-windows. +## ARM template +For this example, you can use a publicly available sample template. The starting templates are on +[GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/vm-simple-windows). -- **Azuredeploy.json** is a preconfigured Resource Manager template for the deployment of a virtual machine.--- **Azuredeploy.parameters.json** is a parameters file that stores information such as what user name and password you would like to set for your VM. During deployment, the Resource Manager template uses the parameters that are set in this file.+- **Azuredeploy.json**: A preconfigured ARM template for the deployment of a VM. +- **Azuredeploy.parameters.json**: A parameters file that stores information like what user name and password you want to set for your VM. During deployment, the ARM template uses the parameters that are set in this file. Download and save both files locally. ### Modify azuredeploy.parameters.json-Open the *azuredeploy.parameters.json* file +1. Open the *azuredeploy.parameters.json* file. -1. Enter values for **adminUsername** and **adminPassword** for the VM. These parameters are used for remote access to the VM. To avoid having your VM hijacked, DO NOT use the values in this template. Bots scan the internet for user names and passwords in public GitHub repositories. They are likely to be testing VMs with these defaults. +1. Enter values for `adminUsername` and `adminPassword` for the VM. These parameters are used for remote access to the VM. To avoid having your VM hijacked, *don't* use the values in this template. Bots scan the internet for user names and passwords in public GitHub repositories. They're likely to be testing VMs with these defaults. -1. Create a unique dnsname for the VM. +1. Create a unique `dnsname` for the VM. ### Modify azuredeploy.json -Open the *azuredeploy.json* file --Add a storage account ID to the **variables** section of the template after the entry for **storageAccountName.** --```json -// Find these lines. -"variables": { - "storageAccountName": "[concat(uniquestring(resourceGroup().id), 'sawinvm')]", --// Add this line directly below. - "accountid": "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]", -``` +1. Open the *azuredeploy.json* file. ++1. Add a storage account ID to the `variables` section of the template after the entry for `storageAccountName`. + + ```json + // Find these lines. + "variables": { + "storageAccountName": "[concat(uniquestring(resourceGroup().id), 'sawinvm')]", + + // Add this line directly below. + "accountid": "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]", + ``` + +1. Add this Managed Service Identity (MSI) extension to the template at the top of the `resources` section. The extension ensures that Azure Monitor accepts the metrics that are being emitted. 
++ ```json + //Find this code. + "resources": [ + // Add this code directly below. + { + "type": "Microsoft.Compute/virtualMachines/extensions", + "name": "[concat(variables('vmName'), '/', 'WADExtensionSetup')]", + "apiVersion": "2017-12-01", + "location": "[resourceGroup().location]", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]" ], + "properties": { + "publisher": "Microsoft.ManagedIdentity", + "type": "ManagedIdentityExtensionForWindows", + "typeHandlerVersion": "1.0", + "autoUpgradeMinorVersion": true, + "settings": { + "port": 50342 + } + } + }, + ``` -Add this Managed Service Identity (MSI) extension to the template at the top of the **resources** section. The extension ensures that Azure Monitor accepts the metrics that are being emitted. +1. Add the `identity` configuration to the VM resource to ensure that Azure assigns a system identity to the MSI extension. This step ensures that the VM can emit guest metrics about itself to Azure Monitor. -```json -//Find this code. -"resources": [ -// Add this code directly below. + ```json + // Find this section + "subnet": { + "id": "[variables('subnetRef')]" + } + } + } + ] + } + }, {- "type": "Microsoft.Compute/virtualMachines/extensions", - "name": "[concat(variables('vmName'), '/', 'WADExtensionSetup')]", - "apiVersion": "2017-12-01", + "apiVersion": "2017-03-30", + "type": "Microsoft.Compute/virtualMachines", + "name": "[variables('vmName')]", "location": "[resourceGroup().location]",+ // add these 3 lines below + "identity": { + "type": "SystemAssigned" + }, + //end of added lines "dependsOn": [- "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]" ], + "[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]", + "[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]" + ], "properties": {- "publisher": "Microsoft.ManagedIdentity", - "type": "ManagedIdentityExtensionForWindows", - "typeHandlerVersion": "1.0", - "autoUpgradeMinorVersion": true, - "settings": { - "port": 50342 - } - } - }, -``` --Add the **identity** configuration to the VM resource to ensure that Azure assigns a system identity to the MSI extension. This step ensures that the VM can emit guest metrics about itself to Azure Monitor. --```json -// Find this section - "subnet": { - "id": "[variables('subnetRef')]" - } - } + "hardwareProfile": { + ... + ``` ++1. Add the following configuration to enable the diagnostics extension on a Windows VM. For a simple Resource Manager-based VM, you can add the extension configuration to theΓÇ»resourcesΓÇ»array for the VM. The line `"sinks": "AzMonSink"`, and the corresponding `"SinksConfig"` later in the section, enable the extension to emit metrics directly to Azure Monitor. Feel free to add or remove performance counters as needed. 
+ + ```json + "networkProfile": { + "networkInterfaces": [ + { + "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]" + } + ] + }, + "diagnosticsProfile": { + "bootDiagnostics": { + "enabled": true, + "storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))).primaryEndpoints.blob]" }- ] }-}, -{ - "apiVersion": "2017-03-30", - "type": "Microsoft.Compute/virtualMachines", - "name": "[variables('vmName')]", - "location": "[resourceGroup().location]", - // add these 3 lines below - "identity": { - "type": "SystemAssigned" },- //end of added lines - "dependsOn": [ - "[resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))]", - "[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]" - ], - "properties": { - "hardwareProfile": { - ... -``` --Add the following configuration to enable the Diagnostics extension on a Windows virtual machine. For a simple Resource Manager-based virtual machine, we can add the extension configuration to theΓÇ»resourcesΓÇ»array for the virtual machine. The line "sinks"— "AzMonSink" and the corresponding "SinksConfig" later in the section—enable the extension to emit metrics directly to Azure Monitor. Feel free to add or remove performance counters as needed. ---```json - "networkProfile": { - "networkInterfaces": [ - { - "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]" - } - ] - }, -"diagnosticsProfile": { - "bootDiagnostics": { - "enabled": true, - "storageUri": "[reference(resourceId('Microsoft.Storage/storageAccounts/', variables('storageAccountName'))).primaryEndpoints.blob]" - } -} -}, -//Start of section to add -"resources": [ -{ - "type": "Microsoft.Compute/virtualMachines/extensions", - "name": "[concat(variables('vmName'), '/', 'Microsoft.Insights.VMDiagnosticsSettings')]", - "apiVersion": "2017-12-01", - "location": "[resourceGroup().location]", - "dependsOn": [ - "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]" - ], - "properties": { - "publisher": "Microsoft.Azure.Diagnostics", - "type": "IaaSDiagnostics", - "typeHandlerVersion": "1.12", - "autoUpgradeMinorVersion": true, - "settings": { - "WadCfg": { - "DiagnosticMonitorConfiguration": { - "overallQuotaInMB": 4096, - "DiagnosticInfrastructureLogs": { - "scheduledTransferLogLevelFilter": "Error" - }, - "Directories": { - "scheduledTransferPeriod": "PT1M", - "IISLogs": { - "containerName": "wad-iis-logfiles" - }, - "FailedRequestLogs": { - "containerName": "wad-failedrequestlogs" - } - }, - "PerformanceCounters": { - "scheduledTransferPeriod": "PT1M", - "sinks": "AzMonSink", - "PerformanceCounterConfiguration": [ - { - "counterSpecifier": "\\Memory\\Available Bytes", - "sampleRate": "PT15S" + //Start of section to add + "resources": [ + { + "type": "Microsoft.Compute/virtualMachines/extensions", + "name": "[concat(variables('vmName'), '/', 'Microsoft.Insights.VMDiagnosticsSettings')]", + "apiVersion": "2017-12-01", + "location": "[resourceGroup().location]", + "dependsOn": [ + "[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]" + ], + "properties": { + "publisher": "Microsoft.Azure.Diagnostics", + "type": "IaaSDiagnostics", + "typeHandlerVersion": "1.12", + "autoUpgradeMinorVersion": true, + "settings": { + "WadCfg": { + "DiagnosticMonitorConfiguration": { + "overallQuotaInMB": 4096, + "DiagnosticInfrastructureLogs": { + "scheduledTransferLogLevelFilter": "Error" + }, + "Directories": { + "scheduledTransferPeriod": 
"PT1M", + "IISLogs": { + "containerName": "wad-iis-logfiles" },- { - "counterSpecifier": "\\Memory\\% Committed Bytes In Use", - "sampleRate": "PT15S" + "FailedRequestLogs": { + "containerName": "wad-failedrequestlogs" + } },- { - "counterSpecifier": "\\Memory\\Committed Bytes", - "sampleRate": "PT15S" + "PerformanceCounters": { + "scheduledTransferPeriod": "PT1M", + "sinks": "AzMonSink", + "PerformanceCounterConfiguration": [ + { + "counterSpecifier": "\\Memory\\Available Bytes", + "sampleRate": "PT15S" + }, + { + "counterSpecifier": "\\Memory\\% Committed Bytes In Use", + "sampleRate": "PT15S" + }, + { + "counterSpecifier": "\\Memory\\Committed Bytes", + "sampleRate": "PT15S" + } + ] + }, + "WindowsEventLog": { + "scheduledTransferPeriod": "PT1M", + "DataSource": [ + { + "name": "Application!*" + } + ] + }, + "Logs": { + "scheduledTransferPeriod": "PT1M", + "scheduledTransferLogLevelFilter": "Error" }- ] },- "WindowsEventLog": { - "scheduledTransferPeriod": "PT1M", - "DataSource": [ + "SinksConfig": { + "Sink": [ {- "name": "Application!*" + "name" : "AzMonSink", + "AzureMonitor" : {} }- ] - }, - "Logs": { - "scheduledTransferPeriod": "PT1M", - "scheduledTransferLogLevelFilter": "Error" + ] }+ }, + "StorageAccount": "[variables('storageAccountName')]" },- "SinksConfig": { - "Sink": [ - { - "name" : "AzMonSink", - "AzureMonitor" : {} - } - ] + "protectedSettings": { + "storageAccountName": "[variables('storageAccountName')]", + "storageAccountKey": "[listKeys(variables('accountid'),'2015-06-15').key1]", + "storageAccountEndPoint": "https://core.windows.net/" + } }- }, - "StorageAccount": "[variables('storageAccountName')]" - }, - "protectedSettings": { - "storageAccountName": "[variables('storageAccountName')]", - "storageAccountKey": "[listKeys(variables('accountid'),'2015-06-15').key1]", - "storageAccountEndPoint": "https://core.windows.net/" - } }- } - ] -//End of section to add -``` ---Save and close both files. + ] + //End of section to add + ``` +1. Save and close both files. -## Deploy the Resource Manager template +## Deploy the ARM template > [!NOTE]-> You must be running the Azure Diagnostics extension version 1.5 or higher AND have the **autoUpgradeMinorVersion**: property set to ΓÇÿtrueΓÇÖ in your Resource Manager template. Azure then loads the proper extension when it starts the VM. If you don't have these settings in your template, change them and redeploy the template. -+> You must be running Azure Diagnostics extension version 1.5 or higher *and* have the `autoUpgradeMinorVersion:` property set to `true` in your ARM template. Azure then loads the proper extension when it starts the VM. If you don't have these settings in your template, change them and redeploy the template. -To deploy the Resource Manager template, we leverage Azure PowerShell. +To deploy the ARM template, we use Azure PowerShell. -1. Launch PowerShell. -1. Log in to Azure using `Login-AzAccount`. +1. Start PowerShell. +1. Sign in to Azure by using `Login-AzAccount`. 1. Get your list of subscriptions by using `Get-AzSubscription`.-1. Set the subscription that you're using to create/update the virtual machine in: +1. Set the subscription that you're using to create/update the VM in: ```powershell Select-AzSubscription -SubscriptionName "<Name of the subscription>" ```+ 1. 
To create a new resource group for the VM that's being deployed, run the following command: ```powershell New-AzResourceGroup -Name "<Name of Resource Group>" -Location "<Azure Region>" ```- > [!NOTE] - > Remember to [use an Azure region that is enabled for custom metrics](./metrics-custom-overview.md). -1. Run the following commands to deploy the VM using the Resource Manager template. + > [!NOTE] + > Remember to [use an Azure region that's enabled for custom metrics](./metrics-custom-overview.md). ++1. Run the following commands to deploy the VM by using the ARM template. > [!NOTE]- > If you wish to update an existing VM, simply add *-Mode Incremental* to the end of the following command. + > If you want to update an existing VM, add *-Mode Incremental* to the end of the following command. ```powershell New-AzResourceGroupDeployment -Name "<NameThisDeployment>" -ResourceGroupName "<Name of the Resource Group>" -TemplateFile "<File path of your Resource Manager template>" -TemplateParameterFile "<File path of your parameters file>" To deploy the Resource Manager template, we leverage Azure PowerShell. 1. After your deployment succeeds, the VM should be in the Azure portal, emitting metrics to Azure Monitor. > [!NOTE]- > You might run into errors around the selected vmSkuSize. If this happens, go back to your azuredeploy.json file, and update the default value of the vmSkuSize parameter. In this case, we recommend trying "Standard_DS1_v2"). + > You might run into errors around the selected `vmSkuSize`. If this error happens, go back to your *azuredeploy.json* file and update the default value of the `vmSkuSize` parameter. In this case, we recommend that you try `"Standard_DS1_v2"`). ## Chart your metrics -1. Log in to the Azure portal. --2. On the left menu, select **Monitor**. +1. Sign in to the Azure portal. -3. On the Monitor page, select **Metrics**. +1. On the left menu, select **Monitor**. -  +1. On the **Monitor** page, select **Metrics**. -4. Change the aggregation period to **Last 30 minutes**. +  -5. In the resource drop-down menu, select the VM that you created. If you didn't change the name in the template, it should be *SimpleWinVM2*. +1. Change the aggregation period to **Last 30 minutes**. -6. In the namespaces drop-down menu, select **azure.vm.windows.guest** +1. In the resource dropdown menu, select the VM that you created. If you didn't change the name in the template, it should be **SimpleWinVM2**. -7. In the metrics drop down menu, select **Memory\%Committed Bytes in Use**. +1. In the namespaces dropdown list, select **azure.vm.windows.guest**. +1. In the metrics dropdown list, select **Memory\%Committed Bytes in Use**. ## Next steps-- Learn more about [custom metrics](./metrics-custom-overview.md).+Learn more about [custom metrics](./metrics-custom-overview.md). |
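
A minimal end-to-end PowerShell sketch of the deployment flow above follows. It only strings together the commands already described in this section; the subscription name, resource group name, region, and file paths are placeholders, and *azuredeploy.parameters.json* is an assumed name for your parameters file.

```powershell
# Sign in and select the subscription to deploy into.
Login-AzAccount
Select-AzSubscription -SubscriptionName "<Name of the subscription>"

# Create a resource group in a region that's enabled for custom metrics.
New-AzResourceGroup -Name "myCustomMetricsRG" -Location "<Azure Region>"

# Deploy the template and parameters files that you edited earlier.
# Append -Mode Incremental when updating an existing VM.
New-AzResourceGroupDeployment -Name "CustomMetricsDeployment" `
  -ResourceGroupName "myCustomMetricsRG" `
  -TemplateFile ".\azuredeploy.json" `
  -TemplateParameterFile ".\azuredeploy.parameters.json"
```

After the deployment succeeds, the VM's guest metrics should appear under the **azure.vm.windows.guest** namespace described in the charting steps.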
azure-monitor | Data Collection Rule Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-structure.md | ms.reviwer: nikeist -- # Structure of a data collection rule in Azure Monitor (preview)-[Data Collection Rules (DCRs)](data-collection-rule-overview.md) determine how to collect and process telemetry sent to Azure. Some data collection rules will be created and managed by Azure Monitor, while you may create others to customize data collection for your particular requirements. This article describes the structure of DCRs for creating and editing data collection rules in those cases where you need to work with them directly. -+[Data collection rules (DCRs)](data-collection-rule-overview.md) determine how to collect and process telemetry sent to Azure. Some DCRs will be created and managed by Azure Monitor. You might create other DCRs to customize data collection for your particular requirements. This article describes the structure of DCRs for creating and editing DCRs in those cases where you need to work with them directly. ## Custom logs-A DCR for [custom logs](../logs/logs-ingestion-api-overview.md) contains the sections below. For a sample, see [Sample data collection rule - custom logs](../logs/data-collection-rule-sample-custom-logs.md). +A DCR for [custom logs](../logs/logs-ingestion-api-overview.md) contains the following sections. For a sample, see [Sample data collection rule - custom logs](../logs/data-collection-rule-sample-custom-logs.md). ### streamDeclarations-This section contains the declaration of all the different types of data that will be sent via the HTTP endpoint directly into Log Analytics. Each stream is an object whose key represents the stream name (Must begin with *Custom-*) and whose value is the full list of top-level properties that the JSON data that will be sent will contain. Note that the shape of the data you send to the endpoint doesn't need to match that of the destination table. Rather, the output of the transform that is applied on top of the input data needs to match the destination shape. The possible data types that can be assigned to the properties are `string`, `int`, `long`, `real`, `boolean`, `dynamic`, and `datetime`. +This section contains the declaration of all the different types of data that will be sent via the HTTP endpoint directly into Log Analytics. Each stream is an object whose: ++- Key represents the stream name, which must begin with *Custom-*. +- Value is the full list of top-level properties that are contained in the JSON data that will be sent. ++The shape of the data you send to the endpoint doesn't need to match that of the destination table. Instead, the output of the transform that's applied on top of the input data needs to match the destination shape. The possible data types that can be assigned to the properties are `string`, `int`, `long`, `real`, `boolean`, `dynamic`, and `datetime`. ### destinations-This section contains a declaration of all the destinations where the data will be sent. Only Log Analytics is currently supported as a destination. Each Log Analytics destination will require the full Workspace Resource ID, as well as a friendly name that will be used elsewhere in the DCR to refer to this workspace. +This section contains a declaration of all the destinations where the data will be sent. Only Log Analytics is currently supported as a destination. 
Each Log Analytics destination requires the full workspace resource ID and a friendly name that will be used elsewhere in the DCR to refer to this workspace. ### dataFlows-This section ties the other sections together. Defines the following for each stream declared in the `streamDeclarations` section: +This section ties the other sections together. It defines the following properties for each stream declared in the `streamDeclarations` section: -- `destination` from the `destinations` section where the data will be sent. -- `transformKql` which is the [transformation](data-collection-transformations.md) applied to the data that was sent in the input shape described in the `streamDeclarations` section to the shape of the target table.-- `outputStream` section, which describes which table in the workspace specified under the `destination` property the data will be ingested into. The value of the outputStream will have the `Microsoft-[tableName]` shape when data is being ingested into a standard Log Analytics table, or `Custom-[tableName]` when ingesting data into a custom-created table. Only one destination is allowed per stream.+- `destination` from the `destinations` section where the data will be sent. +- `transformKql` section, which is the [transformation](data-collection-transformations.md) applied to the data that was sent in the input shape described in the `streamDeclarations` section to the shape of the target table. +- `outputStream` section, which describes which table in the workspace specified under the `destination` property the data will be ingested into. The value of `outputStream` has the `Microsoft-[tableName]` shape when data is being ingested into a standard Log Analytics table, or `Custom-[tableName]` when ingesting data into a custom-created table. Only one destination is allowed per stream. -## Azure Monitor agent - A DCR for [Azure Monitor agent](../agents/data-collection-rule-azure-monitor-agent.md) contains the sections below. For a sample, see [Sample data collection rule - agent](../agents/data-collection-rule-sample-agent.md). +## Azure Monitor Agent + A DCR for [Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) contains the following sections. For a sample, see [Sample data collection rule - agent](../agents/data-collection-rule-sample-agent.md). ### dataSources-Unique source of monitoring data with its own format and method of exposing its data. Examples of a data source include Windows event log, performance counters, and syslog. Each data source matches a particular data source type as described below. +This unique source of monitoring data has its own format and method of exposing its data. Examples of a data source include Windows event log, performance counters, and Syslog. Each data source matches a particular data source type as described in the following table. -Each data source has a data source type. Each type defines a unique set of properties that must be specified for each data source. The data source types currently available are shown in the following table. +Each data source has a data source type. Each type defines a unique set of properties that must be specified for each data source. The data source types currently available appear in the following table. | Data source type | Description | |:|:| Each data source has a data source type. 
Each type defines a unique set of prope | syslog | Syslog events on Linux | | windowsEventLogs | Windows event log | --### Streams -Unique handle that describes a set of data sources that will be transformed and schematized as one type. Each data source requires one or more streams, and one stream may be used by multiple data sources. All data sources in a stream share a common schema. Use multiple streams for example, when you want to send a particular data source to multiple tables in the same Log Analytics workspace. +### Streams + This unique handle describes a set of data sources that will be transformed and schematized as one type. Each data source requires one or more streams, and one stream can be used by multiple data sources. All data sources in a stream share a common schema. Use multiple streams, for example, when you want to send a particular data source to multiple tables in the same Log Analytics workspace. ### destinations-Set of destinations where the data should be sent. Examples include Log Analytics workspace and Azure Monitor Metrics. Multiple destinations are allowed for multi-homing scenario. --### dataFlows -Definition of which streams should be sent to which destinations. -+This set of destinations indicates where the data should be sent. Examples include Log Analytics workspace and Azure Monitor Metrics. Multiple destinations are allowed for multi-homing scenarios. +### dataFlows +The definition indicates which streams should be sent to which destinations. ## Next steps -- [Overview of data collection rules including methods for creating them.](data-collection-rule-overview.md)+[Overview of data collection rules and methods for creating them](data-collection-rule-overview.md) |
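
To make these sections easier to visualize, the following is a minimal, hypothetical skeleton of a custom-logs DCR rather than a complete rule. The stream name, destination name, workspace resource ID, columns, and KQL are placeholders; see the linked sample for a full definition.

```json
{
  "properties": {
    "streamDeclarations": {
      "Custom-MyAppLogs": {
        "columns": [
          { "name": "TimeGenerated", "type": "datetime" },
          { "name": "Message", "type": "string" }
        ]
      }
    },
    "destinations": {
      "logAnalytics": [
        {
          "workspaceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace>",
          "name": "myWorkspaceDestination"
        }
      ]
    },
    "dataFlows": [
      {
        "streams": [ "Custom-MyAppLogs" ],
        "destinations": [ "myWorkspaceDestination" ],
        "transformKql": "source | where Message has 'error'",
        "outputStream": "Custom-MyAppLogs_CL"
      }
    ]
  }
}
```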
azure-monitor | Data Collection Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations.md | ms.reviwer: nikeist # Data collection transformations in Azure Monitor-Transformations in Azure Monitor allow you to filter or modify incoming data before it's sent to a Log Analytics workspace. This article provides a basic description of transformations and how they are implemented. It provides links to other content for actually creating a transformation. +With transformations in Azure Monitor, you can filter or modify incoming data before it's sent to a Log Analytics workspace. This article provides a basic description of transformations and how they're implemented. It provides links to other content for creating a transformation. ## Why to use transformations-The following table describes the different goals that transformations can be used to achieve. +The following table describes the different goals that you can achieve by using transformations. | Category | Details | |:|:|-| Remove sensitive data | You may have a data source that sends information you don't want stored for privacy or compliancy reasons.<br><br>**Filter sensitive information.** Filter out entire rows or just particular columns that contain sensitive information.<br><br>**Obfuscate sensitive information**. Replace information such as digits in an IP address or telephone number with a common character.<br><br>**Send to alternate table.** Send sensitive records to an alternate table with different RBAC configuration. | -| Enrich data with additional or calculated information | Use a transformation to add information to data that provides business context or simplifies querying the data later.<br><br>**Add a column with additional information.** For example, you might add a column identifying whether an IP address in another column is internal or external.<br><br>**Add business specific information.** For example, you might add a column indicating a company division based on location information in other columns. | -| Reduce data costs | Since you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.<br><br>**Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all of the log entries that it generates. Create a transformation that filters out records that match a certain criteria.<br><br>**Remove a column from each row.** For example, your data may include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.<br><br>**Parse important data from a column.** You may have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original.<br><br>**Send certain rows to basic logs.** Send rows in your data that require on basic query capabilities to basic logs tables for a lower ingestion cost. 
| --+| Remove sensitive data | You might have a data source that sends information you don't want stored for privacy or compliancy reasons.<br><br>**Filter sensitive information.** Filter out entire rows or particular columns that contain sensitive information.<br><br>**Obfuscate sensitive information.** Replace information such as digits in an IP address or telephone number with a common character.<br><br>**Send to an alternate table.** Send sensitive records to an alternate table with different role-based access control configuration. | +| Enrich data with more or calculated information | Use a transformation to add information to data that provides business context or simplifies querying the data later.<br><br>**Add a column with more information.** For example, you might add a column identifying whether an IP address in another column is internal or external.<br><br>**Add business-specific information.** For example, you might add a column indicating a company division based on location information in other columns. | +| Reduce data costs | Because you're charged ingestion cost for any data sent to a Log Analytics workspace, you want to filter out any data that you don't require to reduce your costs.<br><br>**Remove entire rows.** For example, you might have a diagnostic setting to collect resource logs from a particular resource but not require all the log entries that it generates. Create a transformation that filters out records that match a certain criteria.<br><br>**Remove a column from each row.** For example, your data might include columns with data that's redundant or has minimal value. Create a transformation that filters out columns that aren't required.<br><br>**Parse important data from a column.** You might have a table with valuable data buried in a particular column. Use a transformation to parse the valuable data into a new column and remove the original.<br><br>**Send certain rows to basic logs.** Send rows in your data that require basic query capabilities to basic logs tables for a lower ingestion cost. | ## Supported tables-Transformations may be applied to the following tables in a Log Analytics workspace. +You can apply transformations to the following tables in a Log Analytics workspace: - Any Azure table listed in [Tables that support transformations in Azure Monitor Logs](../logs/tables-feature-support.md) - Any custom table - ## How transformations work-Transformations are performed in Azure Monitor in the [data ingestion pipeline](../essentials/data-collection.md) after the data source delivers the data and before it's sent to the destination. The data source may perform its own filtering before sending data but then rely on the transformation for further manipulation for it's sent to the destination. +Transformations are performed in Azure Monitor in the [data ingestion pipeline](../essentials/data-collection.md) after the data source delivers the data and before it's sent to the destination. The data source might perform its own filtering before sending data but then rely on the transformation for further manipulation before it's sent to the destination. -Transformations are defined in a [data collection rule (DCR)](data-collection-rule-overview.md) and use a [Kusto Query Language (KQL) statement](data-collection-transformations-structure.md) that is applied individually to each entry in the incoming data. It must understand the format of the incoming data and create output in the structure expected by the destination. 
+Transformations are defined in a [data collection rule (DCR)](data-collection-rule-overview.md) and use a [Kusto Query Language (KQL) statement](data-collection-transformations-structure.md) that's applied individually to each entry in the incoming data. It must understand the format of the incoming data and create output in the structure expected by the destination. -For example, a DCR that collects data from a virtual machine using Azure Monitor agent would specify particular data to collect from the client operating system. It could also include a transformation that would get applied to that data after it's sent to the data ingestion pipeline that further filters the data or adds a calculated column. This workflow is shown in the following diagram. +For example, a DCR that collects data from a virtual machine by using Azure Monitor Agent would specify particular data to collect from the client operating system. It could also include a transformation that would get applied to that data after it's sent to the data ingestion pipeline that further filters the data or adds a calculated column. The following diagram shows this workflow. -Another example is data sent from a custom application using the [logs ingestion API](../logs/logs-ingestion-api-overview.md). In this case, the application sends the data to a [data collection endpoint](data-collection-endpoint-overview.md) and specifies a data collection rule in the REST API call. The DCR includes the transformation and the destination workspace and table. +Another example is data sent from a custom application by using the [logs ingestion API](../logs/logs-ingestion-api-overview.md). In this case, the application sends the data to a [data collection endpoint](data-collection-endpoint-overview.md) and specifies a DCR in the REST API call. The DCR includes the transformation and the destination workspace and table. ## Workspace transformation DCR-The workspace transformation DCR is a special DCR that's applied directly to a Log Analytics workspace. It includes default transformations for one more [supported tables](../logs/tables-feature-support.md). These transformations are applied to any data sent to these tables unless that data came from another DCR. +The workspace transformation DCR is a special DCR that's applied directly to a Log Analytics workspace. It includes default transformations for one or more [supported tables](../logs/tables-feature-support.md). These transformations are applied to any data sent to these tables unless that data came from another DCR. -For example, if you create a transformation in the workspace transformation DCR for the `Event` table, it would be applied to events collected by virtual machines running the [Log Analytics agent](../agents/log-analytics-agent.md) since this agent doesn't use a DCR. The transformation would be ignored by any data sent from the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) though since it uses a DCR and would be expected to provide its own transformation. +For example, if you create a transformation in the workspace transformation DCR for the `Event` table, it would be applied to events collected by virtual machines running the [Log Analytics agent](../agents/log-analytics-agent.md) because this agent doesn't use a DCR. The transformation would be ignored by any data sent from [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) because it uses a DCR and would be expected to provide its own transformation. 
-A common use of the workspace transformation DCR is collection of [resource logs](resource-logs.md) which are configured with a [diagnostic setting](diagnostic-settings.md). This is shown in the example below. +A common use of the workspace transformation DCR is collection of [resource logs](resource-logs.md) that are configured with a [diagnostic setting](diagnostic-settings.md). The following example shows this process. ## Multiple destinations -Transformations allow you to send data to multiple destinations in a Log Analytics workspace using a single DCR. You provide a KQL query for each destination, and the results of each query are sent to their corresponding location. You can send different sets of data to different tables, or use multiple queries to send different sets of data to the same table. +With transformations, you can send data to multiple destinations in a Log Analytics workspace by using a single DCR. You provide a KQL query for each destination, and the results of each query are sent to their corresponding location. You can send different sets of data to different tables or use multiple queries to send different sets of data to the same table. -For example, you may send event data into Azure Monitor using the Logs ingestion API. Most of the events should be sent an analytics table where it could be queried regularly, while audit events should be sent to a custom table configured for [basic logs](../logs/basic-logs-configure.md) to reduce your cost. +For example, you might send event data into Azure Monitor by using the Logs Ingestion API. Most of the events should be sent an analytics table where it could be queried regularly, while audit events should be sent to a custom table configured for [basic logs](../logs/basic-logs-configure.md) to reduce your cost. -To use multiple destinations, you must currently either manually create a new DCR or [edit an existing one](data-collection-rule-edit.md). See the [Samples](#samples) section for examples of DCRs using multiple destinations. +To use multiple destinations, you must currently either manually create a new DCR or [edit an existing one](data-collection-rule-edit.md). See the [Samples](#samples) section for examples of DCRs that use multiple destinations. > [!IMPORTANT] > Currently, the tables in the DCR must be in the same Log Analytics workspace. To send to multiple workspaces from a single data source, use multiple DCRs and configure your application to send the data to each. ---## Creating a transformation -There are multiple methods to create transformations depending on the data collection method. The following table lists guidance for different methods for creating transformations. +## Create a transformation +There are multiple methods to create transformations depending on the data collection method. The following table lists guidance for different methods for creating transformations. 
| Type | Reference | |:|:|-| Logs ingestion API with transformation | [Send data to Azure Monitor Logs using REST API (Azure portal)](../logs/tutorial-logs-ingestion-portal.md)<br>[Send data to Azure Monitor Logs using REST API (Resource Manager templates)](../logs/tutorial-logs-ingestion-api.md) | -| Transformation in workspace DCR | [Add workspace transformation to Azure Monitor Logs using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Add workspace transformation to Azure Monitor Logs using Resource Manager templates](../logs/tutorial-workspace-transformations-api.md) +| Logs ingestion API with transformation | [Send data to Azure Monitor Logs by using REST API (Azure portal)](../logs/tutorial-logs-ingestion-portal.md)<br>[Send data to Azure Monitor Logs by using REST API (Azure Resource Manager templates)](../logs/tutorial-logs-ingestion-api.md) | +| Transformation in workspace DCR | [Add workspace transformation to Azure Monitor Logs by using the Azure portal](../logs/tutorial-workspace-transformations-portal.md)<br>[Add workspace transformation to Azure Monitor Logs by using Resource Manager templates](../logs/tutorial-workspace-transformations-api.md) ## Cost for transformations-There is no direct cost for transformations, but you may incur charges for the following: --- If your transformation increases the size of the incoming data, adding a calculated column for example, then you're charged at the normal rate for ingestion of that additional data.-- If your transformation reduces the incoming data by more than 50%, then you're charged for ingestion of the amount of filtered data above 50%.+There's no direct cost for transformations, but you might incur charges for the following changes: +- If your transformation increases the size of the incoming data, like by adding a calculated column, for example, you're charged at the normal rate for ingestion of that extra data. +- If your transformation reduces the incoming data by more than 50%, you're charged for ingestion of the amount of filtered data above 50%. -The formula to determine the filter ingestion charge from transformations is `[GB filtered out by transformations] - ( [Total GB ingested] / 2 )`. For example, suppose that you ingest 100 GB on a particular day, and transformations remove 70 GB. You would be charged for 70 GB - (100 GB / 2) or 20 GB. To avoid this charge, you should use other methods to filter incoming data before the transformation is applied. +The formula to determine the filter ingestion charge from transformations is `[GB filtered out by transformations] - ( [Total GB ingested] / 2 )`. For example, suppose that you ingest 100 GB on a particular day, and transformations remove 70 GB. You would be charged for 70 GB - (100 GB / 2) or 20 GB. To avoid this charge, you should use other methods to filter incoming data before the transformation is applied. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor) for current charges for ingestion and retention of log data in Azure Monitor. > [!IMPORTANT]-> If Azure Sentinel is enabled for the Log Analytics workspace, then there is no filtering ingestion charge regardless of how much data the transformation filters. -+> If Azure Sentinel is enabled for the Log Analytics workspace, there's no filtering ingestion charge regardless of how much data the transformation filters. ## Samples-Following are Resource Manager templates of sample DCRs with different patterns. 
You can use these templates as a starting point to creating DCRs with transformations for your own scenarios. +The following Resource Manager templates show sample DCRs with different patterns. You can use these templates as a starting point to creating DCRs with transformations for your own scenarios. ### Single destination -The following example is a DCR for Azure Monitor agent that sends data to the `Syslog` table. In this example, the transformation filters the data for records with *error* in the message. -+The following example is a DCR for Azure Monitor Agent that sends data to the `Syslog` table. In this example, the transformation filters the data for records with `error` in the message. ```json { The following example is a DCR for Azure Monitor agent that sends data to the `S ### Multiple Azure tables -The following example is a DCR for data from Logs Ingestion API that sends data to both the `Syslog` and `SecurityEvent` table. This requires a separate `dataFlow` for each with a different `transformKql` and `OutputStream` for each. In this example, all incoming data is sent to the `Syslog` table while malicious data is also sent to the `SecurityEvent` table. If you didn't want to replicate the malicious data in both tables, you could add a `where` statement to first query to remove those records. +The following example is a DCR for data from the Logs Ingestion API that sends data to both the `Syslog` and `SecurityEvent` tables. This DCR requires a separate `dataFlow` for each with a different `transformKql` and `OutputStream` for each. In this example, all incoming data is sent to the `Syslog` table while malicious data is also sent to the `SecurityEvent` table. If you didn't want to replicate the malicious data in both tables, you could add a `where` statement to first query to remove those records. ```json { The following example is a DCR for data from Logs Ingestion API that sends data ### Combination of Azure and custom tables -The following example is a DCR for data from Logs Ingestion API that sends data to both the `Syslog` table and a custom table with the data in a different format. This requires a separate `dataFlow` for each with a different `transformKql` and `OutputStream` for each. -+The following example is a DCR for data from the Logs Ingestion API that sends data to both the `Syslog` table and a custom table with the data in a different format. This DCR requires a separate `dataFlow` for each with a different `transformKql` and `OutputStream` for each. ```json { The following example is a DCR for data from Logs Ingestion API that sends data } ``` -- ## Next steps -- [Create a data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine using the Azure Monitor agent.+[Create a data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine by using Azure Monitor Agent. |
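
For a rough sense of what the `transformKql` property in these samples contains, the following statements are illustrative only and aren't taken from the linked templates. `source` is the virtual table that represents the incoming data; the column names are placeholders, and the projected output must match the destination table's schema.

```kusto
// Filter: keep only records whose message mentions an error,
// similar in spirit to the single-destination sample above.
source
| where SyslogMessage has "error"

// Enrich: add a calculated column for later querying.
source
| extend Environment = iff(Computer startswith "prod-", "Production", "Test")
```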
azure-monitor | Metrics Store Custom Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-store-custom-rest-api.md | Title: Send metrics to the Azure Monitor metric database using REST API -description: Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API + Title: Send metrics to the Azure Monitor metric database by using a REST API +description: Send custom metrics for an Azure resource to the Azure Monitor metrics store by using a REST API. -# Send custom metrics for an Azure resource to the Azure Monitor metric store by using a REST API +# Send custom metrics for an Azure resource to the Azure Monitor metrics store by using a REST API -This article shows you how to send custom metrics for Azure resources to the Azure Monitor metrics store via a REST API. When the metrics are in Azure Monitor, you can do all the things with them that you do with standard metrics. For example, charting, alerting, and routing them to other external tools. +This article shows you how to send custom metrics for Azure resources to the Azure Monitor metrics store via a REST API. When the metrics are in Azure Monitor, you can do all the things with them that you do with standard metrics. For example, you can generate charts and alerts and route the metrics to other external tools. ->[!NOTE] ->The REST API only permits sending custom metrics for Azure resources. -To send metrics for resources in other environments or on-premises, use [Application Insights](../app/api-custom-events-metrics.md). +>[!NOTE] +>The REST API only permits sending custom metrics for Azure resources. To send metrics for resources in other environments or on-premises, use [Application Insights](../app/api-custom-events-metrics.md). -## Create and authorize a service principal to emit metrics +## Create and authorize a service principal to emit metrics -A service principal is an application whose tokens can be used to authenticate and grant access to specific Azure resources using Azure Active Directory. This includes user-apps, services or automation tools. +A service principal is an application whose tokens can be used to authenticate and grant access to specific Azure resources by using Azure Active Directory. Resources include user apps, services, or automation tools. 1. [Register an application with Azure Active Directory](../logs/api/register-app-for-token.md) to create a service principal. -1. Save the tenant ID, new client ID, and client secret value for your app to use when requesting a token. +1. Save the tenant ID, new client ID, and client secret value for your app to use when it requests a token. -1. Give the app created as part of the previous step **Monitoring Metrics Publisher** permissions to the resource you wish to emit metrics against. If you plan to use the app to emit custom metrics against many resources, you can grant these permissions at the resource group or subscription level. - - On your resource's overview page, select **Access Control (IAM)** -1. Select **Add**, then **Add role assignment** from the dropdown. - :::image type="content" source="./media/metrics-store-custom-rest-api/access-contol-add-role-assignment.png" alt-text="A screenshot showing the Access control(IAM) page for a virtual machine."::: -1. Search for *Monitoring Metrics* in the search field. +1. 
Give the app that was created as part of the previous step **Monitoring Metrics Publisher** permissions to the resource you want to emit metrics against. If you plan to use the app to emit custom metrics against many resources, you can grant these permissions at the resource group or subscription level. ++1. On your resource's overview page, select **Access control (IAM)**. +1. Select **Add** and select **Add role assignment** from the dropdown list. ++ :::image type="content" source="./media/metrics-store-custom-rest-api/access-contol-add-role-assignment.png" alt-text="Screenshot that shows the Access control(IAM) page for a virtual machine."::: +1. Search for **Monitoring Metrics** in the search field. 1. Select **Monitoring Metrics Publisher** from the list. 1. Select **Members**.- :::image type="content" source="./media/metrics-store-custom-rest-api/add-role-assignment.png" alt-text="A screenshot showing the add role assignment page."::: -1. Search for your app in the **Select** field. + :::image type="content" source="./media/metrics-store-custom-rest-api/add-role-assignment.png" alt-text="Screenshot that shows the Add role assignment page."::: +1. Search for your app in the **Select** field. 1. Select your app from the list. 1. Click **Select**. 1. Select **Review + assign**.- :::image type="content" source="./media/metrics-store-custom-rest-api/select-members.png" alt-text="A screenshot showing the members tab of the role assignment page."::: ++ :::image type="content" source="./media/metrics-store-custom-rest-api/select-members.png" alt-text="Screenshot that shows the members tab of the role assignment page."::: ## Get an authorization token -Send the following request in the command prompt or using a client like Postman. +Send the following request in the command prompt or by using a client like Postman. ```shell curl -X POST 'https://login.microsoftonline.com/<tennant ID>/oauth2/token' \ curl -X POST 'https://login.microsoftonline.com/<tennant ID>/oauth2/token' \ --data-urlencode 'resource=https://monitor.azure.com' ``` -The response body appears as follows: +The response body appears: ```JSON { Save the access token from the response for use in the following HTTP requests. ## Send a metric via the REST API -1. Paste the following JSON into a file, and save it asΓÇ»**custommetric.json** on your local computer. Update the time parameter so that it is within the last 20 minutes. You can't put a metric into the store that's over 20 minutes old. The metric store is optimized for alerting and real-time charting. +1. Paste the following JSON into a file. Save it asΓÇ»*custommetric.json* on your local computer. Update the time parameter so that it's within the last 20 minutes. You can't put a metric into the store that's more than 20 minutes old. The metrics store is optimized for alerting and real-time charting. ```JSON { Save the access token from the response for use in the following HTTP requests. } ``` -1. Submit the following HTTP POST request using the following variables: +1. Submit the following HTTP POST request by using the following variables: - **location**: Deployment region of the resource you're emitting metrics for.- - **resourceId**: Resource ID of the Azure resource you're tracking the metric against. - - **accessToken**: The authorization token acquired from the previous step. + - **resourceId**: Resource ID of the Azure resource you're tracking the metric against. + - **accessToken**: The authorization token acquired from the previous step. 
```Shell curl -X POST 'https://<location>.monitoring.azure.com/<resourceId>/metrics' \ Save the access token from the response for use in the following HTTP requests. -d @custommetric.json ``` -1. Change the timestamp and values in the JSON file. -1. Repeat the previous two steps a number of times, to create data for several minutes. +1. Change the timestamp and values in the JSON file. +1. Repeat the previous two steps a few times to create data for several minutes. ## Troubleshooting If you receive an error message with some part of the process, consider the following troubleshooting information: -- If you can't issue metrics against a subscription or resource group, or resource, check that your application or Service Principal has the **Monitoring Metrics Publisher** role assigned in Access control (IAM).-- Check that the number of dimension names matches the number values.-- Check that you are not emitting metrics against a region that doesnΓÇÖt support custom metrics. See [supported regions](./metrics-custom-overview.md#supported-regions).+- If you can't issue metrics against a subscription or resource group, or resource, check that your application or service principal has the **Monitoring Metrics Publisher** role assigned in **Access control (IAM)**. +- Check that the number of dimension names matches the number of values. +- Check that you aren't emitting metrics against a region that doesn't support custom metrics. See [supported regions](./metrics-custom-overview.md#supported-regions). ## View your metrics 1. Sign in to the Azure portal. -1. In the left-hand menu, select **Monitor**. +1. In the menu on the left, select **Monitor**. 1. On the **Monitor** page, select **Metrics**. -  +  -1. Change the aggregation period to **Last hour**. +1. Change the aggregation period to **Last hour**. -1. In the **Scope** drop-down menu, select the resource you send the metric for. +1. In the **Scope** dropdown list, select the resource you send the metric for. -1. In the **Metric namespace** drop-down menu, select **QueueProcessing**. +1. In the **Metric Namespace** dropdown list, select **queueprocessing**. -1. In the **Metric** drop-down menu, select **QueueDepth**. +1. In the **Metric** dropdown list, select **QueueDepth**. ## Next steps -- Learn more about [custom metrics](./metrics-custom-overview.md).+Learn more about [custom metrics](./metrics-custom-overview.md). |
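
For reference, a representative shape for the *custommetric.json* payload is shown below. The dimension names and values are illustrative, and the `time` value must fall within the last 20 minutes. The metric and namespace correspond to the ones selected in the charting steps above.

```JSON
{
  "time": "2023-03-03T11:00:20Z",
  "data": {
    "baseData": {
      "metric": "QueueDepth",
      "namespace": "QueueProcessing",
      "dimNames": [ "QueueName", "MessageType" ],
      "series": [
        {
          "dimValues": [ "ImagesToResize", "JPEG" ],
          "min": 3,
          "max": 20,
          "sum": 28,
          "count": 3
        }
      ]
    }
  }
}
```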
azure-monitor | Tutorial Resource Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-resource-logs.md | Title: Collect resource logs from an Azure resource -description: Learn how to configure diagnostic settings to send resource logs from an Azure resource io a Log Analytics workspace where they can be analyzed with a log query. +description: Learn how to configure diagnostic settings to send resource logs from an Azure resource to a Log Analytics workspace where they can be analyzed with a log query. Resource logs provide insight into the detailed operation of an Azure resource a In this tutorial, you learn how to: > [!div class="checklist"]-> * Create a Log Analytics workspace in Azure Monitor -> * Create a diagnostic setting to collect resource logs -> * Create a simple log query to analyze logs -+> * Create a Log Analytics workspace in Azure Monitor. +> * Create a diagnostic setting to collect resource logs. +> * Create a simple log query to analyze logs. ## Prerequisites -To complete the steps in this tutorial, you need the following: --- An Azure resource to monitor. You can use any resource in your Azure subscription that supports diagnostic settings. To determine whether a resource supports diagnostic settings, go to its menu in the Azure portal and verify that there's a **Diagnostic settings** option in the **Monitoring** section of the menu.+To complete the steps in this tutorial, you need an Azure resource to monitor. +You can use any resource in your Azure subscription that supports diagnostic settings. To determine whether a resource supports diagnostic settings, go to its menu in the Azure portal and verify that there's a **Diagnostic settings** option in the **Monitoring** section of the menu. > [!NOTE]-> This procedure does not apply to Azure virtual machines since their **Diagnostic settings** menu is used to configure the diagnostic extension. +> This procedure doesn't apply to Azure virtual machines. Their **Diagnostic settings** menu is used to configure the diagnostic extension. ## Create a Log Analytics workspace [!INCLUDE [Create workspace](../../../includes/azure-monitor-tutorial-workspace.md)] ## Create a diagnostic setting-[Diagnostic settings](../essentials/diagnostic-settings.md) define where resource logs should be sent for a particular resource. A single diagnostic setting can have multiple [destinations](../essentials/diagnostic-settings.md#destinations), but we'll only use a Log Analytics workspace in this tutorial. +[Diagnostic settings](../essentials/diagnostic-settings.md) define where to send resource logs for a particular resource. A single diagnostic setting can have multiple [destinations](../essentials/diagnostic-settings.md#destinations), but we only use a Log Analytics workspace in this tutorial. -Under the **Monitoring** section of your resource's menu, select **Diagnostic settings** and click **Add diagnostic setting**. +Under the **Monitoring** section of your resource's menu, select **Diagnostic settings**. Then select **Add diagnostic setting**. > [!NOTE]-> Some resource may require additional selections. For example, a storage account requires you to select a resource before the **Add diagnostic setting** option is displayed. You may also notice a **Preview** label for some resources as their diagnostic settings are currently in public preview. -+> Some resources might require other selections. 
For example, a storage account requires you to select a resource before the **Add diagnostic setting** option is displayed. You might also notice a **Preview** label for some resources because their diagnostic settings are currently in preview. Each diagnostic setting has three basic parts:- - - **Name**: This has no significant effect and should simply be descriptive to you. - - **Categories**: Categories of logs to send to each of the destinations. The set of categories will vary for each Azure service. - - **Destinations**: One or more destinations to send the logs. All Azure services share the same set of possible destinations. Each diagnostic setting can define one or more destinations but no more than one destination of a particular type. -Enter a name for the diagnostic setting and select the categories that you want to collect. See the documentation for each service for a definition of its available categories. **AllMetrics** will send that same platform metrics available in Azure Monitor Metrics for the resource to the workspace. This allows you to analyze this data with log queries along with other monitoring data. Select **Send to Log Analytics workspace** and then select the workspace that you created. + - **Name**: The name has no significant effect and should be descriptive to you. + - **Categories**: Categories of logs to send to each of the destinations. The set of categories varies for each Azure service. + - **Destinations**: One or more destinations to send the logs. All Azure services share the same set of possible destinations. Each diagnostic setting can define one or more destinations but no more than one destination of a particular type. +Enter a name for the diagnostic setting and select the categories that you want to collect. See the documentation for each service for a definition of its available categories. **AllMetrics** sends the same platform metrics available in Azure Monitor Metrics for the resource to the workspace. As a result, you can analyze this data with log queries along with other monitoring data. Select **Send to Log Analytics workspace** and then select the workspace that you created. -Click **Save** to save the diagnostic settings. ++Select **Save** to save the diagnostic settings. - - ## Use a log query to retrieve logs-Data is retrieved from a Log Analytics workspace using a log query written in Kusto Query Language (KQL). A set of precreated queries is available for many Azure services so that you don't require knowledge of KQL to get started. +Data is retrieved from a Log Analytics workspace by using a log query written in Kusto Query Language (KQL). A set of pre-created queries is available for many Azure services, so you don't require knowledge of KQL to get started. -Select **Logs** from your resource's menu. Log Analytics opens with the **Queries** window that includes prebuilt queries for your **Resource type**. +Select **Logs** from your resource's menu. Log Analytics opens with the **Queries** window that includes prebuilt queries for your resource type. > [!NOTE]-> If the **Queries** window doesn't open, click **Queries** in the top right. -+> If the **Queries** window doesn't open, select **Queries** in the upper-right corner. -Browse through the available queries. Identify one to run and click **Run**. The query is added to the query window and the results returned. +Browse through the available queries. Identify one to run and select **Run**. The query is added to the query window and the results are returned. 
## Next steps

Now that you're collecting resource logs, create a log query alert to be proactively notified when interesting data is identified in your log data. |
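
If you'd rather write a query by hand than start from the prebuilt ones, the following is a minimal sketch. It assumes your resource routes its logs to the `AzureDiagnostics` table; many services use resource-specific tables instead, so substitute whatever table your diagnostic setting populates.

```kusto
// Count resource log entries by category over the last hour.
AzureDiagnostics
| where TimeGenerated > ago(1h)
| summarize Entries = count() by Category
| order by Entries desc
```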
azure-monitor | Basic Logs Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md | Configure a table for Basic logs if: | Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/acscallrecordingsummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | | Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) | | Dev Center | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) |- | Firewalls | [AZFWNetworkRule](/azure/azure-monitor/reference/tables/AZFWNetworkRule) | + | Firewalls | [AZFWFlowTrace](/azure/azure-monitor/reference/tables/AZFWFlowTrace) | | Health Data | [AHDSMedTechDiagnosticLogs](/azure/azure-monitor/reference/tables/AHDSMedTechDiagnosticLogs) | | Media Services | [AMSLiveEventOperations](/azure/azure-monitor/reference/tables/AMSLiveEventOperations)<br>[AMSKeyDeliveryRequests](/azure/azure-monitor/reference/tables/AMSKeyDeliveryRequests)<br>[AMSMediaAccountHealth](/azure/azure-monitor/reference/tables/AMSMediaAccountHealth)<br>[AMSStreamingEndpointRequests](/azure/azure-monitor/reference/tables/AMSStreamingEndpointRequests) | | Sphere | [ASCAuditLogs](/azure/azure-monitor/reference/tables/ASCAuditLogs)<br>[ASCDeviceEvents](/azure/azure-monitor/reference/tables/ASCDeviceEvents) | | Storage | [StorageBlobLogs](/azure/azure-monitor/reference/tables/StorageBlobLogs)<br>[StorageFileLogs](/azure/azure-monitor/reference/tables/StorageFileLogs)<br>[StorageQueueLogs](/azure/azure-monitor/reference/tables/StorageQueueLogs)<br>[StorageTableLogs](/azure/azure-monitor/reference/tables/StorageTableLogs) |- | Storage Mover | [StorageMoverJobRunLogs](/azure/azure-monitor/reference/tables/StorageMoverJobRunLogs) | + | Storage Mover | [StorageMoverJobRunLogs](/azure/azure-monitor/reference/tables/StorageMoverJobRunLogs)<br>[StorageMoverCopyLogsFailed](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsFailed)<br>[StorageMoverCopyLogsTransferred](/azure/azure-monitor/reference/tables/StorageMoverCopyLogsTransferred)<br> | | Virtual Network Manager | [AVNMNetworkGroupMembershipChange](/azure/azure-monitor/reference/tables/AVNMNetworkGroupMembershipChange) | > [!NOTE] |
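
As a sketch of how switching a supported table to the Basic plan might look with the Azure CLI, assuming the resource group, workspace, and table names shown here as placeholders:

```azurecli
az monitor log-analytics workspace table update \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --name StorageBlobLogs \
  --plan Basic
```

Running the same command with `--plan Analytics` switches the table back to the Analytics plan.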
chaos-studio | Troubleshooting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/troubleshooting.md | As you use Chaos Studio, you may occasionally encounter some problems. This arti ## General troubleshooting tips The following sources are useful when troubleshooting issues with Chaos Studio:-1. **The Activity Log**: The [Azure Activity Log](../azure-monitor/essentials/activity-log.md) has a record of all create, update, and delete operations in a subscription, including Chaos Studio operations like enabling a target and/or capabilities, installing the agent, and creating or running an experiment. Failures in the Activity Log indicate that a user action essential to using Chaos Studio may have failed to complete. Most service-direct faults also inject faults by executing an Azure Resource Manager operation, so the Activity Log will also have the record of faults being injected during an experiment for some service-direct faults. -2. **Experiment Details**: Experiment execution details show the status and errors of an individual experiment run. Opening a specific fault in experiment details will show the resources that failed and the error messages for a failure. [Learn more about how to access experiment details](chaos-studio-run-experiment.md#view-experiment-history-and-details). +1. **The Activity Log**: The [Azure Activity Log](../azure-monitor/essentials/activity-log.md) has a record of all create, update, and delete operations in a subscription. These records include Chaos Studio operations like enabling a target and/or capabilities, installing the agent, and creating or running an experiment. Failures in the Activity Log indicate that a user action essential to using Chaos Studio may have failed to complete. Most service-direct faults also inject faults by executing an Azure Resource Manager operation, so the Activity Log also has the record of faults that were injected during an experiment for some service-direct faults. +2. **Experiment Details**: Experiment execution details show the status and errors of an individual experiment run. Opening a specific fault in experiment details shows the resources that failed and the error messages for a failure. [Learn more about how to access experiment details](chaos-studio-run-experiment.md#view-experiment-history-and-details). 3. **Agent logs**: If using an agent-based fault, you may need to RDP or SSH in to the virtual machine to understand why the agent failed to run a fault. The instructions for accessing agent logs depend on the operating system:- * **Chaos Windows agent**: Agent logs are located in the Windows Event Log in the Application category with the source AzureChaosAgent. The agent adds fault activity and regular health check (ability to authenticate to and communicate with the Chaos Studio agent service) events to this log. + * **Chaos Windows agent**: Agent logs are in the Windows Event Log in the Application category with the source AzureChaosAgent. The agent adds fault activity and regular health check (ability to authenticate to and communicate with the Chaos Studio agent service) events to this log. * **Chaos Linux agent**: The Linux agent uses systemd to manage the agent process as a Linux service. To view the systemd journal for the agent (the events logged by the agent service), run the command `journalctl -u azure-chaos-agent`.-4. **VM extension status**: If using an agent-based fault, you may also need to verify that the VM extension is installed and healthy. 
In the Azure portal, navigate to your virtual machine and go to **Extensions** or **Extensions + applications**. Click on the ChaosAgent extension and look for the following fields: - * **Status** should show "Provisioning succeeded." Any other status indicates that the agent failed to install. Verify that all [system requirements](chaos-studio-limitations.md#limitations) are met and try re-installing the agent. - * **Handler status** should show "Ready." Any other status indicates that the agent installed but cannot connect to the Chaos Studio service. Verify that all [network requirements](chaos-studio-limitations.md#limitations) are met and that the user-assigned managed identity has been added to the virtual machine and try rebooting. +4. **VM extension status**: If using an agent-based fault, verify that the VM extension is installed and healthy. In the Azure portal, navigate to your virtual machine and go to **Extensions** or **Extensions + applications**. Click on the ChaosAgent extension and look for the following fields: + * **Status** should show "Provisioning succeeded." Any other status indicates that the agent failed to install. Verify that you meet all [system requirements](chaos-studio-limitations.md#limitations) try reinstalling the agent. + * **Handler status** should show "Ready." Any other status indicates that the agent installed but can't connect to the Chaos Studio service. Verify that you meet all [network requirements](chaos-studio-limitations.md#limitations) and that the user-assigned managed identity has been added to the virtual machine and try rebooting. ## Issues onboarding a resource -### Resources do not show up in the targets list in the Azure portal -If you do not see the resources you would like to enable in the Chaos Studio targets list, it may be due to any of the following issues: -* The resources are not in [a supported region for Chaos Studio](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio). -* The resources are not of [a supported resource type in Chaos Studio](chaos-studio-fault-providers.md). -* The resources are in a subscription or resource group that are filtered out in the filters for the target list. Change the subscription and resource group filters to see your resources. +### Resources don't show up in the targets list in the Azure portal +If you don't see the resources you would like to enable in the Chaos Studio targets list, it may be due to any of the following issues: +* The resources aren't in [a supported region for Chaos Studio](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio). +* The resources aren't of [a supported resource type in Chaos Studio](chaos-studio-fault-providers.md). +* The resources are in a subscription or resource group that is filtered out in the filters for the target list. Change the subscription and resource group filters to see your resources. ### Target and/or capability enablement fails or doesn't show correctly in the target list If you see an error when enabling targets and/or capabilities, try the following steps:-1. Verify that you have appropriate permissions to the resources you are onboarding. Enabling a target and/or capabilities requires Microsoft.Chaos/\* permission at the scope of the resource. Built-in roles such as Contributor have wildcard Read and Write permission, which includes permission to all Microsoft.Chaos operations. +1. Verify that you have appropriate permissions to the resources you're onboarding. 
Enabling a target and/or capabilities requires Microsoft.Chaos/\* permission at the scope of the resource. Built-in roles such as Contributor have wildcard Read and Write permission, which includes permission to all Microsoft.Chaos operations. 
2. Wait a few minutes for the target and capability list to update. The Azure portal uses Azure Resource Graph to gather information on target and capability onboarding and it can take up to five minutes for the update to propagate. 
3. If the resource still shows "Not enabled", try the following steps: 
 1. Attempt to enable the resource again. 
 2. If resource enablement still fails, visit the Activity Log and find the failed target create operation to see detailed error information. 
4. If the resource shows "Enabled" but onboarding capabilities failed, try the following steps:
- 1. Click the **Manage actions** button on the resource in the targets list. Check any capabilities that were not checked, and click **Save**. 
+ 1. Click the **Manage actions** button on the resource in the targets list. Check any capabilities that weren't checked, and click **Save**. 
 2. If capability enablement still fails, visit the Activity Log and find the failed target create operation to see detailed error information. 

## Prerequisite issues 

If you see an error when enabling targets and/or capabilities, try the following Some issues are caused by missing prerequisites. 

### Agent-based faults fail on a virtual machine-Agent-based faults may fail for a variety of reasons related to missing prerequisites: 
+Agent-based faults may fail for various reasons related to missing prerequisites: 
* On Linux VMs, the [CPU Pressure](chaos-studio-fault-library.md#cpu-pressure), [Physical Memory Pressure](chaos-studio-fault-library.md#physical-memory-pressure), [Disk I/O pressure](chaos-studio-fault-library.md#disk-io-pressure-linux), and [Arbitrary Stress-ng Stress](chaos-studio-fault-library.md#arbitrary-stress-ng-stress) faults all require the [stress-ng utility](https://wiki.ubuntu.com/Kernel/Reference/stress-ng) to be installed on your virtual machine. For more information on how to install stress-ng, see the fault prerequisite sections. 
* On either Linux or Windows VMs, the user-assigned managed identity provided during agent-based target enablement must also be added to the virtual machine.
-* On either Linux or Windows VMs, the system-assigned managed identity for the experiment must be granted Reader role on the VM (seemingly elevated roles like Virtual Machine Contributor do not include the \*/Read operation that is necessary for the Chaos Studio agent service to read the microsoft-agent target proxy resource on the virtual machine). 
+* On either Linux or Windows VMs, the system-assigned managed identity for the experiment must be granted Reader role on the VM (seemingly elevated roles like Virtual Machine Contributor don't include the \*/Read operation that is necessary for the Chaos Studio agent service to read the microsoft-agent target proxy resource on the virtual machine). 
++### Chaos agent won't install on Virtual Machine Scale Sets 
++Installing the Chaos agent on Virtual Machine Scale Sets may fail without showing an error if the Virtual Machine Scale Sets upgrade policy is set to **Manual**. To check the Virtual Machine Scale Sets upgrade policy: 
++1. Log in to Azure portal. 
+1. Select **Virtual Machine Scale Set**. 
+1. From the left pane menu, choose **Upgrade policy**. 
+1. 
Check the **Upgrade mode** to see if it's set to **Manual - Existing instances must be manually upgraded**. ++If the Upgrade policy is set to **Manual**, you must upgrade your Virtual Machine Scale Sets instances so that Chaos agent installation completes. ++#### Upgrade instances from Azure portal ++You can upgrade your Virtual Machine Scale Sets instances from Azure portal: ++1. Log in to Azure portal. +1. Select **Virtual Machine Scale Set**. +1. From the left pane menu, choose **Instances**. +1. Select all instances and click **Upgrade**. ++#### Upgrade instances with the Azure CLI ++You can upgrade your Virtual Machine Scale Sets instances with Azure CLI: ++- From the Azure CLI, use `az vmss update-instances` to manually upgrade your instances: ++ ```azurecli + az vmss update-instances --resource-group myResourceGroup --name myScaleSet --instance-ids {instanceIds} + ``` ++For more information, see [How to bring VMs up-to-date with the latest scale set model](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) ### AKS Chaos Mesh faults fail-AKS Chaos Mesh faults may fail for a variety of reasons related to missing prerequisites: +AKS Chaos Mesh faults may fail for various reasons related to missing prerequisites: * Chaos Mesh must first be installed on the AKS cluster before using the AKS Chaos Mesh faults. Instructions can be found in the [Chaos Mesh faults on AKS tutorial](chaos-studio-tutorial-aks-portal.md#set-up-chaos-mesh-on-your-aks-cluster). * Chaos Mesh must be version 2.0.4 or greater. You can get the Chaos Mesh version by connecting to your AKS cluster and running `helm version chaos-mesh`.-* Chaos Mesh must be installed with the namespace `chaos-testing`. Other namespace names for Chaos Mesh are not supported. +* Chaos Mesh must be installed with the namespace `chaos-testing`. Other namespace names for Chaos Mesh aren't supported. * The Azure Kubernetes Service Cluster Admin role must be assigned to the system-assigned managed identity for the chaos experiment. ## Issues creating or designing an experiment -### When adding a fault, my resource does not show in the Target Resources list -When adding a fault, if you do not see the resource you want to target with a fault in the Target Resources list, it may be due to any of the following issues: +### My resource doesn't show in the Target Resources list when I add a fault +When you add a fault, if you don't see the resource you want to target with a fault in the Target Resources list, it may be due to any of the following issues: * The **Subscription** filter is set to exclude the subscription in which your target is deployed. Click on the subscription filter and modify the selected subscriptions. * The resource hasn't been onboarded yet. Visit the **Targets** view and enable the target. After this completes, you need to close the Add Fault pane and reopen it to see an updated target list. * The resource hasn't been enabled for the target type of that fault yet. Consult the [fault library](chaos-studio-fault-library.md) to see which target type is used for the fault, then visit the **Targets** view and enable that target type (either agent-based for microsoft-agent faults or service-direct for all other target types). After this completes, you need to close the Add Fault pane and reopen it to see an updated target list.-* The resource doesn't have the capability for that fault enabled yet. 
Consult the [fault library](chaos-studio-fault-library.md) to see the capability name for the fault, then visit the **Targets** view and click **Manage actions** on the target resource. Check the box for the capability that corresponds to the fault you are trying to run and click **Save**. After this completes, you need to close the Add Fault pane and reopen it to see an updated target list. +* The resource doesn't have the capability for that fault enabled yet. Consult the [fault library](chaos-studio-fault-library.md) to see the capability name for the fault, then visit the **Targets** view and click **Manage actions** on the target resource. Check the box for the capability that corresponds to the fault you're trying to run and click **Save**. After this completes, you need to close the Add Fault pane and reopen it to see an updated target list. * The resource has just recently been onboarded and hasn't appeared in Azure Resource Graph yet. The Target Resources list is queried from Azure Resource Graph, and after enabling a new target it can take up to five minutes for the update to propagate to Azure Resource Graph. Wait a few minutes, then reopen the Add Fault pane. -### When creating an experiment, I get the error `The microsoft:agent provider requires a managed identity` +### I get the error `The microsoft:agent provider requires a managed identity` when creating an experiment -This error happens when the agent has not been deployed to your virtual machine. For installation instructions, see [Create and run an experiment that uses agent-based faults](chaos-studio-tutorial-agent-based-portal.md). +This error happens when the agent hasn't been deployed to your virtual machine. For installation instructions, see [Create and run an experiment that uses agent-based faults](chaos-studio-tutorial-agent-based-portal.md). ### When creating an experiment, I get the error `The content media type '<null>' is not supported. Only 'application/json' is supported.` -You may encounter this error if you are creating your experiment using an ARM template or the Chaos Studio REST API. The error indicates that there is malformed JSON in your experiment definition. Check to see if you have any syntax errors, such as mismatched braces or brackets ({} and \[\]), using a JSON linter like Visual Studio Code. +You may encounter this error if you're creating your experiment using an ARM template or the Chaos Studio REST API. The error indicates that there's malformed JSON in your experiment definition. Check to see if you have any syntax errors, such as mismatched braces or brackets ({} and \[\]), using a JSON linter like Visual Studio Code. ## Issues running an experiment From the **Experiments** list in the Azure portal, click on the experiment name ### My agent-based fault failed with error: Verify that the target is correctly onboarded and proper read permissions are provided to the experiment msi. -This may happen if you onboarded the agent using the Azure portal, which has a known issue: Enabling an agent-based target does not assign the user-assigned managed identity to the virtual machine or virtual machine scale set. +This may happen if you onboarded the agent using the Azure portal, which has a known issue: Enabling an agent-based target doesn't assign the user-assigned managed identity to the virtual machine or Virtual Machine Scale Set. 
-To resolve this, navigate to the virtual machine or virtual machine scale set in the Azure portal, go to **Identity**, open the **User assigned** tab, and **Add** your user-assigned identity to the virtual machine. Once complete, you may need to reboot the virtual machine for the agent to connect. +To resolve this, navigate to the virtual machine or Virtual Machine Scale Set in the Azure portal, go to **Identity**, open the **User assigned** tab, and **Add** your user-assigned identity to the virtual machine. Once complete, you may need to reboot the virtual machine for the agent to connect. |
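The resolution above uses the Azure portal. As a minimal command-line sketch of the same fix-up steps (the resource group, VM, identity, and experiment principal values below are placeholders, not values from this article), the Azure CLI can attach the user-assigned identity, confirm the agent extension provisioned, and grant the experiment's identity Reader access on the VM: 

```azurecli
# Attach the user-assigned managed identity that was supplied during agent-based target onboarding.
az vm identity assign \
  --resource-group myResourceGroup \
  --name myVM \
  --identities /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myChaosIdentity

# Confirm that the chaos agent extension installed and reports a succeeded provisioning state.
az vm extension list \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --query "[?contains(name, 'Chaos')].{name:name, provisioningState:provisioningState}" \
  --output table

# Grant the experiment's system-assigned identity Reader access on the VM so the agent service
# can read the microsoft-agent target proxy resource.
az role assignment create \
  --assignee <experiment-principal-id> \
  --role "Reader" \
  --scope /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM
```

After the identity is attached, a reboot of the virtual machine may still be needed for the agent to connect, as noted above.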
cognitive-services | Releasenotes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/releasenotes.md | Azure Cognitive Service for Speech is updated on an ongoing basis. To stay up-to ## Recent highlights +* Speech SDK 1.26.0 was released in March 2023. * Custom Speech-to-Text container disconnected mode was released in January 2023.-* Speech SDK 1.25.0 was released in January 2023. * Text-to-speech Batch synthesis API is available in public preview. * Speech-to-text REST API version 3.1 is generally available. * Speech-to-text and text-to-speech container versions were updated in October 2022. |
connectors | Apis List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/apis-list.md | Title: Connectors overview -description: Overview about connectors in Azure Logic Apps. + Title: What are connectors +description: Learn how connectors in Azure Logic Apps help you access data, events, and resources in other apps, services, and systems from workflows. ms.suite: integration Previously updated : 01/05/2023- Last updated : 03/02/2023++# As a developer, I want to learn how connectors help me access data, events, and resources in other apps, services, systems, and platforms from my workflow in Azure Logic Apps. -# About connectors in Azure Logic Apps +# What are connectors in Azure Logic Apps -When you build workflows using Azure Logic Apps, you can use *connectors* to help you quickly and easily access data, events, and resources in other apps, services, systems, protocols, and platforms - often without writing any code. A connector provides prebuilt operations that you can use as steps in your workflows. Azure Logic Apps provides hundreds of connectors that you can use. If no connector is available for the resource that you want to access, you can use the generic HTTP operation to communicate with the service, or you can [create a custom connector](#custom-connectors-and-apis). +When you build a workflow using Azure Logic Apps, you can use a *connector* to work with data, events, and resources in other apps, services, systems, and platforms - without writing code. A connector provides one or more prebuilt operations, which you use as steps in your workflow. -This overview provides a high-level introduction to connectors and how they generally work. +In a connector, each operation is either a [*trigger*](#triggers) condition that starts a workflow or a subsequent [*action*](#actions) that performs a specific task, along with properties that you can configure. While many connectors have both triggers and actions, some connectors offer only triggers, while others provide only actions. -## What are connectors? +In Azure Logic Apps, connectors are available in either a [built-in version, managed version, or both](#built-in-vs-managed). Many connectors usually require that you first [create and configure a connection](#connection-configuration) to the underlying service or system, usually so that you can authenticate access to a user account. If no connector is available for the service or system that you want to access, you can send a request using the [generic HTTP operation](connectors-native-http.md), or you can [create a custom connector](#custom-connectors-and-apis). -Technically, many connectors provide a proxy or a wrapper around an API that the underlying service uses to communicate with Azure Logic Apps. This connector provides operations that you use in your workflows to perform tasks. An operation is available either as a *trigger* or *action* with properties you can configure. Some triggers and actions also require that you first [create and configure a connection](#connection-configuration) to the underlying service or system, for example, so that you can authenticate access to a user account. For more overview information, review [Connectors overview for Azure Logic Apps, Microsoft Power Automate, and Microsoft Power Apps](/connectors). +This overview provides a high-level introduction to connectors and how they generally work. 
For more connector information, see the following documentation: - For information about the more popular and commonly used connectors in Azure Logic Apps, review the following documentation: +* [Connectors overview for services such as Power Automate and Power Apps](/connectors/connectors) +* [Built-in connectors overview for Azure Logic Apps](built-in.md) +* [Managed connectors overview for Azure Logic Apps](managed.md) +* [Managed connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) -* [Connectors reference for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors) -* [Built-in connectors for Azure Logic Apps](built-in.md) -* [Managed connectors in Azure Logic Apps](managed.md) -* [Pricing and billing models in Azure Logic Apps](../logic-apps/logic-apps-pricing.md) -* [Azure Logic Apps pricing details](https://azure.microsoft.com/pricing/details/logic-apps/) +<a name="built-in-vs-managed"></a> -### Triggers +## Built-in connectors versus managed connectors -A *trigger* specifies the event that starts the workflow and is always the first step in any workflow. Each trigger also follows a specific firing pattern that controls how the trigger monitors and responds to events. Usually, a trigger follows the *polling* pattern or *push* pattern, but sometimes, a trigger is available in both versions. +In Azure Logic Apps, connectors are either *built in* or *managed*. Some connectors have both versions. The available versions depend on whether you create a *Consumption* logic app workflow that runs in multi-tenant Azure Logic Apps or a *Standard* logic app workflow that runs in single-tenant Azure Logic Apps. For more information about logic app resource types, see [Resource types and host environment differences](../logic-apps/logic-apps-overview.md#resource-environment-differences). -- *Polling triggers* regularly check a specific service or system on a specified schedule to check for new data or a specific event. If new data is available, or the specific event happens, these triggers create and run a new instance of your workflow. This new instance can then use the data that's passed as input.+* [Built-in connectors](built-in.md) are designed to run directly and natively inside Azure Logic Apps. -- *Push triggers* listen for new data or for an event to happen, without polling. When new data is available, or when the event happens, these triggers create and run a new instance of your workflow. This new instance can then use the data that's passed as input.+* [Managed connectors](managed.md) are deployed, hosted, and managed in Azure by Microsoft. Managed connectors mostly provide a proxy or a wrapper around an API that the underlying service or system uses to communicate with Azure Logic Apps. -For example, you might want to build a workflow that does something when a file is uploaded to your FTP server. As the first step in your workflow, you can use the FTP trigger named **When a file is added or modified**, which follows the polling pattern. You can then specify a schedule to regularly check for upload events. + * In a Consumption workflow, managed connectors appear in the designer under the **Standard** or **Enterprise** labels, based on their pricing level. -A trigger also passes along any inputs and other required data into your workflow where later actions can reference and use that data throughout the workflow. 
For example, suppose you want to use Office 365 Outlook trigger named **When a new email arrives** to start a workflow when you get a new email. You can configure this trigger to pass along the content from each new email, such as the sender, subject line, body, attachments, and so on. Your workflow can then process that information by using other actions. + * In a Standard workflow, all managed connectors appear in the designer under the **Azure** label. -### Actions +For more information, see the following documentation: -An *action* is an operation that follows the trigger and performs some kind of task in your workflow. You can use multiple actions in your workflow. For example, you might start the workflow with a SQL trigger that detects new customer data in an SQL database. Following the trigger, your workflow can have a SQL action that gets the customer data. Following the SQL action, your workflow can have a different action that processes the data. +* [Pricing and billing models in Azure Logic Apps](../logic-apps/logic-apps-pricing.md) +* [Azure Logic Apps pricing details](https://azure.microsoft.com/pricing/details/logic-apps/) -## Connector categories +## Triggers -In Azure Logic Apps, most triggers and actions are available in either a *built-in* version or *managed connector* version. A few triggers and actions are available in both versions. The versions available depend on whether you create a *Consumption* logic app that runs in multi-tenant Azure Logic Apps, or a *Standard* logic app that runs in single-tenant Azure Logic Apps. +A trigger specifies the condition to meet before the workflow can start and is always the first step in any workflow. Each trigger also follows a specific firing pattern that controls how the trigger monitors and responds to events. Usually, a trigger follows either a *polling* pattern or a *push* pattern. Sometimes, both trigger versions are available. -* [Built-in connectors](built-in.md) run natively on the Azure Logic Apps runtime. +- *Polling* triggers regularly check a specific service or system on a specified schedule to check for new data or a specific event. If new data is available, or the specific event happens, these triggers create and run a new instance of your workflow. This new instance can then use the data that's passed as input. -* [Managed connectors](managed.md) are deployed, hosted, and managed by Microsoft. These connectors provide triggers and actions for cloud services, on-premises systems, or both. +- *Push* or *webhook* triggers listen for new data or for an event to happen, without polling. When new data is available, or when the event happens, these triggers create and run a new instance of your workflow. This new instance can then use the data that's passed as input. - In a *Standard* logic app, all managed connectors are organized as **Azure** connectors. However, in a *Consumption* logic app, managed connectors are organized as **Standard** or **Enterprise**, based on pricing level. +For example, suppose you want to build a workflow that runs when a file is uploaded to your FTP server. As the first step in your workflow, you can add the [FTP trigger](/connectors/ftp/#triggers) named **When a file is added or modified**, which follows a polling pattern. You then specify the schedule to regularly check for upload events. -For more information about logic app types, review [Resource types and host environment differences](../logic-apps/logic-apps-overview.md#resource-environment-differences). 
+When the trigger fires, the trigger usually passes along event outputs for subsequent actions to reference and use. For the FTP example, the trigger automatically outputs information such as the file name and path. You can also set up the trigger to include the file content. So, to process this data, you must add actions to your workflow. ++## Actions ++An action specifies a task to perform and always appears as a subsequent step in the workflow. You can use multiple actions in your workflow. For example, you might start the workflow with a [SQL Server trigger](/connectors/sql/#triggers) that checks for new customer data in an SQL database. Following the trigger, your workflow can have a [SQL Server action](/connectors/sql/#actions) that gets the customer data. Following this SQL Server action, your workflow can use a different action that processes the data, for example, a [Data Operations action](../logic-apps/logic-apps-perform-data-operations.md) that creates a CSV table. <a name="connection-configuration"></a> -## Connection configuration +## Connection permissions ++In a Consumption logic app workflow, before you can create or manage logic app resources, workflows, and their connections, you need specific permissions. For more information about these permissions, see [Secure operations - Secure access and data in Azure Logic Apps](../logic-apps/logic-apps-securing-a-logic-app.md#secure-operations). -In Consumption logic apps, before you can create or manage logic apps and their connections, you need specific permissions. For more information about these permissions, review [Secure operations - Secure access and data in Azure Logic Apps](../logic-apps/logic-apps-securing-a-logic-app.md#secure-operations). +## Connection creation, configuration, and authentication -Before you can use a managed connector's triggers or actions in your workflow, many connectors require that you first create a *connection* to the target service or system. To create a connection from within the logic app workflow designer, you have to authenticate your identity with account credentials and sometimes other connection information. For example, before your workflow can access and work with your Office 365 Outlook email account, you must authorize a connection to that account. For some built-in connectors and managed connectors, you can [set up and use a managed identity for authentication](../logic-apps/create-managed-service-identity.md#triggers-actions-managed-identity), rather than provide your credentials. +Before you can use a connector's operations in your workflow, many connectors require that you first create a *connection* to the target service or system. To create a connection from inside the workflow designer, you have to authenticate your identity with account credentials and sometimes other connection information. -Although you create connections within a workflow, these connections are actually separate Azure resources with their own resource definitions. To review these connection resource definitions, follow these steps based on whether you have a Consumption or Standard logic app: +For example, before your workflow can access and work with your Office 365 Outlook email account, you must authorize a connection to that account. For some built-in connectors and managed connectors, you can [set up and use a managed identity for authentication](../logic-apps/create-managed-service-identity.md#triggers-actions-managed-identity), rather than provide your credentials. 
-* Consumption: To view these connections in the Azure portal, review [View connections for Consumption logic apps in the Azure portal](../logic-apps/manage-logic-apps-with-azure-portal.md#view-connections). +Although you create connections within a workflow, these connections are actually separate Azure resources with their own resource definitions. To review these connection resource definitions, follow these steps based on whether you have a Consumption or Standard workflow: - To view and manage these connections in Visual Studio, review [Manage Consumption logic apps with Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md), and download your logic app from Azure into Visual Studio. For more information about connection resource definitions for Consumption logic apps, review [Connection resource definitions](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#connection-resource-definitions). +* Consumption -* Standard: To view these connections in the Azure portal, review [View connections for Standard logic apps in the Azure portal](../logic-apps/create-single-tenant-workflows-azure-portal.md#view-connections). + * To view and manage these connections in the Azure portal, see [View connections for Consumption workflows in the Azure portal](../logic-apps/manage-logic-apps-with-azure-portal.md#view-connections). - To view and manage these connections in Visual Studio Code, review [View your logic app in Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md#manage-deployed-apps-vs-code). The **connections.json** file contains the required configuration for the connections created by connectors. + * To view and manage these connections in Visual Studio, see [Manage Consumption workflows with Visual Studio](../logic-apps/manage-logic-apps-with-visual-studio.md), and download your logic app resource from Azure into Visual Studio. ++ For more information about connection resource definitions for Consumption workflows, see [Connection resource definitions](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md#connection-resource-definitions). ++* Standard ++ * To view and manage these connections in the Azure portal, see [View connections for Standard workflows in the Azure portal](../logic-apps/create-single-tenant-workflows-azure-portal.md#view-connections). ++ * To view and manage these connections in Visual Studio Code, see [View your logic app workflow in Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md#manage-deployed-apps-vs-code). The **connections.json** file contains the required configuration for the connections created by connectors. <a name="connection-security-encryption"></a> Connection configuration details, such as server address, username, and password Established connections can access the target service or system for as long as that service or system allows. For services that use Azure AD OAuth connections, such as Office 365 and Dynamics, Azure Logic Apps refreshes access tokens indefinitely. Other services might have limits on how long Logic Apps can use a token without refreshing. Some actions, such as changing your password, invalidate all access tokens. -> [!TIP] +> [!NOTE] +> > If your organization doesn't permit you to access specific resources through connectors in Azure Logic Apps, you can [block the capability to create such connections](../logic-apps/block-connections-connectors.md) using [Azure Policy](../governance/policy/overview.md). 
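As a small illustration of the point above that connections are separate Azure resources (the resource group and connection names here are placeholders), a Consumption workflow's managed connector connections appear as `Microsoft.Web/connections` resources and can be listed with a generic resource query: 

```azurecli
# List the API connection resources that back managed connector connections in a resource group.
az resource list \
  --resource-group myResourceGroup \
  --resource-type Microsoft.Web/connections \
  --output table

# Show the full resource definition for a single connection, for example one named office365.
az resource show \
  --resource-group myResourceGroup \
  --resource-type Microsoft.Web/connections \
  --name office365
```

For Standard workflows, the equivalent details live in the **connections.json** file noted above.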
-For more information about securing logic apps and connections, review [Secure access and data in Azure Logic Apps](../logic-apps/logic-apps-securing-a-logic-app.md). +For more information about securing logic app workflows and connections, see [Secure access and data in Azure Logic Apps](../logic-apps/logic-apps-securing-a-logic-app.md). <a name="firewall-access"></a> ### Firewall access for connections -If you use a firewall that limits traffic, and your logic app workflows need to communicate through that firewall, you have to set up your firewall to allow access for both the [inbound](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound](../logic-apps/logic-apps-limits-and-config.md#outbound) IP addresses used by the Azure Logic Apps platform or runtime in the Azure region where your logic app workflows exist. If your workflows also use managed connectors, such as the Office 365 Outlook connector or SQL connector, or use custom connectors, your firewall also needs to allow access for *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in your logic app's Azure region. For more information, review [Firewall configuration](../logic-apps/logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags). +If you use a firewall that limits traffic, and your logic app workflows need to communicate through that firewall, you have to set up your firewall to allow access for both the [inbound](../logic-apps/logic-apps-limits-and-config.md#inbound) and [outbound](../logic-apps/logic-apps-limits-and-config.md#outbound) IP addresses used by the Azure Logic Apps platform or runtime in the Azure region where your logic app workflows exist. ++If your workflows also use managed connectors, such as the Office 365 Outlook connector or SQL connector, or use custom connectors, your firewall also needs to allow access for *all* the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses#azure-logic-apps) in your logic app resource's Azure region. For more information, see [Firewall configuration](../logic-apps/logic-apps-limits-and-config.md#firewall-configuration-ip-addresses-and-service-tags). ## Custom connectors and APIs -In Consumption logic apps that run in multi-tenant Azure Logic Apps, you can call Swagger-based or SOAP-based APIs that aren't available as out-of-the-box connectors. You can also run custom code by creating custom API Apps. For more information, review the following documentation: +In Consumption workflows for multi-tenant Azure Logic Apps, you can call Swagger-based or SOAP-based APIs that aren't available as out-of-the-box connectors. You can also run custom code by creating custom API Apps. For more information, see the following documentation: ++* [Swagger-based or SOAP-based custom connectors for Consumption workflows](../logic-apps/custom-connector-overview.md#custom-connector-consumption) -* [Swagger-based or SOAP-based custom connectors for Consumption logic apps](../logic-apps/custom-connector-overview.md#custom-connector-consumption) +* Create a [Swagger-based](/connectors/custom-connectors/define-openapi-definition) or [SOAP-based](/connectors/custom-connectors/create-register-logic-apps-soap-connector) custom connector, which makes these APIs available to any Consumption logic app workflow in your Azure subscription. 
-* Create a [Swagger-based](/connectors/custom-connectors/define-openapi-definition) or [SOAP-based](/connectors/custom-connectors/create-register-logic-apps-soap-connector) custom connector, which makes these APIs available to any Consumption logic app in your Azure subscription. To make your custom connector public for anyone to use in Azure, [submit your connector for Microsoft certification](/connectors/custom-connectors/submit-certification). + To make your custom connector public for anyone to use in Azure, [submit your connector for Microsoft certification](/connectors/custom-connectors/submit-certification). * [Create custom API Apps](../logic-apps/logic-apps-create-api-app.md) -In Standard logic apps that run in single-tenant Azure Logic Apps, you can create natively running service provider-based custom built-in connectors that are available to any Standard logic app. For more information, review the following documentation: +In Standard workflows for single-tenant Azure Logic Apps, you can create natively running service provider-based custom built-in connectors that are available to any Standard logic app workflow. For more information, see the following documentation: -* [Service provider-based custom built-in connectors for Standard logic apps](../logic-apps/custom-connector-overview.md#custom-connector-standard) +* [Service provider-based custom built-in connectors for Standard workflows](../logic-apps/custom-connector-overview.md#custom-connector-standard) -* [Create service provider-based custom built-in connectors for Standard logic apps](../logic-apps/create-custom-built-in-connector-standard.md) +* [Create service provider-based custom built-in connectors for Standard workflows](../logic-apps/create-custom-built-in-connector-standard.md) ## ISE and connectors -For workflows that need direct access to resources in an Azure virtual network, you can create a dedicated [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) where you can build, deploy, and run your workflows on dedicated resources. For more information about creating ISEs, review [Connect to Azure virtual networks from Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment.md). +For workflows that need direct access to resources in an Azure virtual network, you can create a dedicated [integration service environment (ISE)](../logic-apps/connect-virtual-network-vnet-isolated-environment-overview.md) where you can build, deploy, and run your workflows on dedicated resources. For more information about creating ISEs, see [Connect to Azure virtual networks from Azure Logic Apps](../logic-apps/connect-virtual-network-vnet-isolated-environment.md). -Custom connectors created within an ISE don't work with the on-premises data gateway. However, these connectors can directly access on-premises data sources that are connected to an Azure virtual network hosting the ISE. So, logic apps in an ISE most likely don't need the data gateway when communicating with those resources. If you have custom connectors that you created outside an ISE that require the on-premises data gateway, logic apps in an ISE can use those connectors. +Custom connectors created within an ISE don't work with the on-premises data gateway. However, these connectors can directly access on-premises data sources that are connected to an Azure virtual network hosting the ISE. 
So, logic app workflows in an ISE most likely don't need the data gateway when communicating with those resources. If you have custom connectors that you created outside an ISE that require the on-premises data gateway, workflows in an ISE can use those connectors. -In the workflow designer, when you browse the built-in connectors or managed connectors that you want to use for logic apps in an ISE, the **CORE** label appears on built-in connectors, while the **ISE** label appears on managed connectors that are designed to work with an ISE. +In the workflow designer, when you browse the built-in connectors or managed connectors that you want to use for workflows in an ISE, the **CORE** label appears on built-in connectors, while the **ISE** label appears on managed connectors that are designed to work with an ISE. :::row::: :::column::: In the workflow designer, when you browse the built-in connectors or managed con **CORE** \ \- Built-in connectors with this label run in the same ISE as your logic apps. + Built-in connectors with this label run in the same ISE as your workflows. :::column-end::: :::column:::  In the workflow designer, when you browse the built-in connectors or managed con **ISE** \ \- Managed connectors with this label run in the same ISE as your logic apps. + Managed connectors with this label run in the same ISE as your workflows. \ \ If you have an on-premises system that's connected to an Azure virtual network, an ISE lets your workflows directly access that system without using the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). Instead, you can either use that system's **ISE** connector if available, an HTTP action, or a [custom connector](#custom-connectors-and-apis). In the workflow designer, when you browse the built-in connectors or managed con ## Known issues -The following table includes known issues for Logic Apps connectors. +The following table includes known issues for connectors in Azure Logic Apps: | Error message| Description | Resolution | |--|-||-| `Error: BadGateway. Client request id: '{GUID}'` | This error results from updating the tags on a logic app where one or more connections don't support Azure Active Directory (Azure AD) OAuth authentication, such as SFTP ad SQL, breaking those connections. | To prevent this behavior, avoid updating those tags. | -|||| +| `Error: BadGateway. Client request id: '{GUID}'` | This error results from updating the tags on a logic app resource where one or more connections don't support Azure Active Directory (Azure AD) OAuth authentication, such as SFTP ad SQL, breaking those connections. | To prevent this behavior, avoid updating those tags. | ## Next steps > [!div class="nextstepaction"]-> [Create custom APIs you can call from Logic Apps](../logic-apps/logic-apps-create-api-app.md) +> +> [Create a Consumption logic app workflow - Azure portal](../logic-apps/quickstart-create-first-logic-app-workflow.md) +> +> [Create a Standard logic app workflow - Azure portal](../logic-apps/create-single-tenant-workflows-azure-portal.md) |
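Relating to the firewall access guidance earlier in this article, the following sketch shows one way to allow outbound traffic to managed connector endpoints by using a network security group rule with the `AzureConnectors` service tag instead of maintaining individual IP address lists. The NSG name and rule priority are placeholders, and you should confirm the service tags that apply to your region and scenario in the firewall configuration article linked above: 

```azurecli
# Allow outbound HTTPS traffic from the subnet to managed connector endpoints by using the
# AzureConnectors service tag.
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myLogicAppsNsg \
  --name AllowManagedConnectorsOutbound \
  --priority 200 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --source-address-prefixes VirtualNetwork \
  --source-port-ranges '*' \
  --destination-address-prefixes AzureConnectors \
  --destination-port-ranges 443
```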
container-registry | Github Action Scan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/github-action-scan.md | Title: Scan container images using GitHub Actions 
description: Learn how to scan container images by using the Container Scan action 
-- 
 Last updated 10/11/2022 |
cosmos-db | Index Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md | Azure Cosmos DB supports two indexing modes: - **None**: Indexing is disabled on the container. This mode is commonly used when a container is used as a pure key-value store without the need for secondary indexes. It can also be used to improve the performance of bulk operations. After the bulk operations are complete, the index mode can be set to Consistent and then monitored using the [IndexTransformationProgress](how-to-manage-indexing-policy.md#dotnet-sdk) until complete. > [!NOTE]-> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query an Azure Cosmos DB container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmoslazyindexing@microsoft.com (except if you are using an Azure Cosmos DB account in [serverless](serverless.md) mode which doesn't support lazy indexing). +> Azure Cosmos DB also supports a Lazy indexing mode. Lazy indexing performs updates to the index at a much lower priority level when the engine is not doing any other work. This can result in **inconsistent or incomplete** query results. If you plan to query an Azure Cosmos DB container, you should not select lazy indexing. New containers cannot select lazy indexing. You can request an exemption by contacting cosmosdblazyindexing@microsoft.com (except if you are using an Azure Cosmos DB account in [serverless](serverless.md) mode which doesn't support lazy indexing). By default, indexing policy is set to `automatic`. It's achieved by setting the `automatic` property in the indexing policy to `true`. Setting this property to `true` allows Azure Cosmos DB to automatically index items as they're written. |
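The bulk-load pattern described above (switch indexing off, run the bulk operations, then switch back to consistent indexing) can be scripted. This is a hedged sketch rather than guidance from the article: the account, database, and container names are placeholders, the `--idx` parameter is assumed to accept inline JSON, and the consistent policy you restore should match your container's original indexing policy: 

```azurecli
# Disable indexing before a bulk load. When indexingMode is "none", automatic must be false.
az cosmosdb sql container update \
  --resource-group myResourceGroup \
  --account-name mycosmosaccount \
  --database-name mydatabase \
  --name mycontainer \
  --idx '{"indexingMode": "none", "automatic": false}'

# After the bulk operations complete, restore the consistent policy and monitor
# IndexTransformationProgress until the index rebuild finishes.
az cosmosdb sql container update \
  --resource-group myResourceGroup \
  --account-name mycosmosaccount \
  --database-name mydatabase \
  --name mycontainer \
  --idx '{"indexingMode": "consistent", "automatic": true, "includedPaths": [{"path": "/*"}], "excludedPaths": [{"path": "/\"_etag\"/?"}]}'
```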
cosmos-db | How To Write Stored Procedures Triggers Udfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-write-stored-procedures-triggers-udfs.md | Title: Write stored procedures, triggers, and UDFs in Azure Cosmos DB -description: Learn how to define stored procedures, triggers, and user-defined functions in Azure Cosmos DB +description: Learn how to define stored procedures, triggers, and user-defined functions by using the API for NoSQL in Azure Cosmos DB. Previously updated : 10/05/2021 Last updated : 03/01/2023 ms.devlang: javascript-Azure Cosmos DB provides language-integrated, transactional execution of JavaScript that lets you write **stored procedures**, **triggers**, and **user-defined functions (UDFs)**. When using the API for NoSQL in Azure Cosmos DB, you can define the stored procedures, triggers, and UDFs in JavaScript language. You can write your logic in JavaScript and execute it inside the database engine. You can create and execute triggers, stored procedures, and UDFs by using [Azure portal](https://portal.azure.com/), the [JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md) and the [Azure Cosmos DB for NoSQL client SDKs](samples-dotnet.md). +Azure Cosmos DB provides language-integrated, transactional execution of JavaScript that lets you write *stored procedures*, *triggers*, and *user-defined functions (UDFs)*. When you use the API for NoSQL in Azure Cosmos DB, you can define the stored procedures, triggers, and UDFs using JavaScript. You can write your logic in JavaScript and execute it inside the database engine. You can create and execute triggers, stored procedures, and UDFs by using the [Azure portal](https://portal.azure.com/), the [JavaScript query API in Azure Cosmos DB](javascript-query-api.md), and the [Azure Cosmos DB for NoSQL SDKs](samples-dotnet.md). -To call a stored procedure, trigger, and user-defined function, you need to register it. For more information, see [How to work with stored procedures, triggers, user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md). +To call a stored procedure, trigger, or UDF, you need to register it. For more information, see [How to work with stored procedures, triggers, and user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md). > [!NOTE]-> For partitioned containers, when executing a stored procedure, a partition key value must be provided in the request options. Stored procedures are always scoped to a partition key. Items that have a different partition key value will not be visible to the stored procedure. This also applied to triggers as well. +> For partitioned containers, when executing a stored procedure, a partition key value must be provided in the request options. Stored procedures are always scoped to a partition key. Items that have a different partition key value aren't visible to the stored procedure. This also applies to triggers. > [!NOTE]-> Server-side JavaScript features including stored procedures, triggers, and user-defined functions do not support importing modules. +> Server-side JavaScript features, including stored procedures, triggers, and UDFs, don't support importing modules. > [!TIP]-> Azure Cosmos DB supports deploying containers with stored procedures, triggers and user-defined functions. 
For more information see [Create an Azure Cosmos DB container with server-side functionality.](./manage-with-templates.md#create-sproc) +> Azure Cosmos DB supports deploying containers with stored procedures, triggers, and UDFs. For more information, see [Create an Azure Cosmos DB container with server-side functionality.](./manage-with-templates.md#create-sproc) ## <a id="stored-procedures"></a>How to write stored procedures -Stored procedures are written using JavaScript, they can create, update, read, query, and delete items inside an Azure Cosmos DB container. Stored procedures are registered per collection, and can operate on any document or an attachment present in that collection. -> [Note] -> When it comes to stored procedure, Cosmos DB has different charging policy. Since, stored can essentially execute code and consume any number of RUs, we do upfront charging for each stored procedure execution. This is a defense mechanism in backend to ensure stored procedure scripts do not impact out backend services. The amount which is charged upfront is the average charge consumed by the script in previous invocations. If the stored procedure has varied RUs per invocation i.e., lot of variance around the mean then the client may not be able to fully utilize the budget as we always reserve the average RU per operations before we start the execution. As an alternative we would suggest the client to use batch or bulk requests instead of stored procedures to avoid the variance around the RU charging. -> -Here is a simple stored procedure that returns a "Hello World" response. +Stored procedures are written using JavaScript, and they can create, update, read, query, and delete items inside an Azure Cosmos DB container. Stored procedures are registered per collection, and can operate on any document or an attachment present in that collection. ++> [!NOTE] +> Azure Cosmos DB has a different charging policy for stored procedures. Because stored procedures can execute code and consume any number of request units (RUs), each execution requires an upfront charge. This ensures that stored procedure scripts don't impact backend services. The amount charged upfront equals the average charge consumed by the script in previous invocations. The average RUs per operation is reserved before execution. If the invocations have a lot of variance in RUs, your budget utilization might be affected. As an alternative, you should use batch or bulk requests instead of stored procedures to avoid variance around RU charges. ++Here's a simple stored procedure that returns a "Hello World" response. ```javascript var helloWorldStoredProc = { var helloWorldStoredProc = { The context object provides access to all operations that can be performed in Azure Cosmos DB, as well as access to the request and response objects. In this case, you use the response object to set the body of the response to be sent back to the client. -Once written, the stored procedure must be registered with a collection. To learn more, see [How to use stored procedures in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-stored-procedures) article. +Once written, the stored procedure must be registered with a collection. To learn more, see [How to use stored procedures in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-stored-procedures). 
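As an illustrative addition (not part of the original article), one way to perform that registration step is with the Azure CLI; the account, database, container, and file names below are placeholders, and the stored procedure body is assumed to be saved to a local JavaScript file first: 

```azurecli
# Register the "Hello World" stored procedure shown above. Save its function body to a local
# file, for example spHelloWorld.js, and pass it with the @ file prefix.
az cosmosdb sql stored-procedure create \
  --resource-group myResourceGroup \
  --account-name mycosmosaccount \
  --database-name mydatabase \
  --container-name mycontainer \
  --name helloWorld \
  --body @spHelloWorld.js
```

Registering through the SDKs or the Azure portal, as described in the linked article, achieves the same result.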
-### <a id="create-an-item"></a>Create an item using stored procedure +### <a id="create-an-item"></a>Create items using stored procedures -When you create an item by using stored procedure, the item is inserted into the Azure Cosmos DB container and an ID for the newly created item is returned. Creating an item is an asynchronous operation and depends on the JavaScript callback functions. The callback function has two parameters - one for the error object in case the operation fails and another for a return value; in this case, the created object. Inside the callback, you can either handle the exception or throw an error. In case a callback is not provided and there is an error, the Azure Cosmos DB runtime will throw an error. +When you create an item by using a stored procedure, the item is inserted into the Azure Cosmos DB container and an ID for the newly created item is returned. Creating an item is an asynchronous operation and depends on the JavaScript callback functions. The callback function has two parameters: one for the error object in case the operation fails, and another for a return value, in this case, the created object. Inside the callback, you can either handle the exception or throw an error. If a callback isn't provided and there's an error, the Azure Cosmos DB runtime throws an error. -The stored procedure also includes a parameter to set the description, it's a boolean value. When the parameter is set to true and the description is missing, the stored procedure will throw an exception. Otherwise, the rest of the stored procedure continues to run. +The stored procedure also includes a parameter to set the description as a boolean value. When the parameter is set to true and the description is missing, the stored procedure throws an exception. Otherwise, the rest of the stored procedure continues to run. -The following example stored procedure takes an array of new Azure Cosmos DB items as input, inserts it into the Azure Cosmos DB container and returns the count of the items inserted. In this example, we are leveraging the ToDoList sample from the [Quickstart .NET API for NoSQL](quickstart-dotnet.md) +The following example of a stored procedure takes an array of new Azure Cosmos DB items as input, inserts it into the Azure Cosmos DB container and returns the count of the items inserted. In this example, we're using the ToDoList sample from the [Quickstart .NET API for NoSQL](quickstart-dotnet.md). ```javascript function createToDoItems(items) { function createToDoItems(items) { ### Arrays as input parameters for stored procedures -When defining a stored procedure in Azure portal, input parameters are always sent as a string to the stored procedure. Even if you pass an array of strings as an input, the array is converted to string and sent to the stored procedure. To work around this, you can define a function within your stored procedure to parse the string as an array. The following code shows how to parse a string input parameter as an array: +When you define a stored procedure in Azure portal, input parameters are always sent as a string to the stored procedure. Even if you pass an array of strings as an input, the array is converted to a string and sent to the stored procedure. To work around this, you can define a function within your stored procedure to parse the string as an array. 
The following code shows how to parse a string input parameter as an array: ```javascript function sample(arr) { function sample(arr) { ### <a id="transactions"></a>Transactions within stored procedures -You can implement transactions on items within a container by using a stored procedure. The following example uses transactions within a fantasy football gaming app to trade players between two teams in a single operation. The stored procedure attempts to read the two Azure Cosmos DB items each corresponding to the player IDs passed in as an argument. If both players are found, then the stored procedure updates the items by swapping their teams. If any errors are encountered along the way, the stored procedure throws a JavaScript exception that implicitly aborts the transaction. +You can implement transactions on items within a container by using a stored procedure. The following example uses transactions within a fantasy football gaming app to trade players between two teams in a single operation. The stored procedure attempts to read the two Azure Cosmos DB items, each corresponding to the player IDs passed in as an argument. If both players are found, then the stored procedure updates the items by swapping their teams. If any errors are encountered along the way, the stored procedure throws a JavaScript exception that implicitly aborts the transaction. ```javascript // JavaScript source code function bulkImport(items) { } ``` -### <a id="async-promises"></a>Async await with stored procedures +### <a id="async-promises"></a>Async/await with stored procedures -The following is an example of a stored procedure that uses async-await with Promises using a helper function. The stored procedure queries for an item and replaces it. +The following stored procedure example uses `async/await` with *Promises* using a helper function. The stored procedure queries for an item and replaces it. ```javascript function async_sample() { function async_sample() { ## <a id="triggers"></a>How to write triggers -Azure Cosmos DB supports pre-triggers and post-triggers. Pre-triggers are executed before modifying a database item and post-triggers are executed after modifying a database item. Triggers are not automatically executed, they must be specified for each database operation where you want them to execute. After you define a trigger, you should [register and call a pre-trigger](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) by using the Azure Cosmos DB SDKs. +Azure Cosmos DB supports pre-triggers and post-triggers. Pre-triggers are executed before modifying a database item, and post-triggers are executed after modifying a database item. Triggers aren't automatically executed. They must be specified for each database operation where you want them to execute. After you define a trigger, you should [register and call a pre-trigger](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) by using the Azure Cosmos DB SDKs. ### <a id="pre-triggers"></a>Pre-triggers -The following example shows how a pre-trigger is used to validate the properties of an Azure Cosmos DB item that is being created. In this example, we are leveraging the ToDoList sample from the [Quickstart .NET API for NoSQL](quickstart-dotnet.md), to add a timestamp property to a newly added item if it doesn't contain one. +The following example shows how a pre-trigger is used to validate the properties of an Azure Cosmos DB item that's being created. 
This example uses the ToDoList sample from the [Quickstart .NET API for NoSQL](quickstart-dotnet.md) to add a timestamp property to a newly added item if it doesn't contain one. ```javascript function validateToDoItemTimestamp() { function validateToDoItemTimestamp() { } ``` -Pre-triggers cannot have any input parameters. The request object in the trigger is used to manipulate the request message associated with the operation. In the previous example, the pre-trigger is run when creating an Azure Cosmos DB item, and the request message body contains the item to be created in JSON format. +Pre-triggers can't have any input parameters. The request object in the trigger is used to manipulate the request message associated with the operation. In the previous example, the pre-trigger is run when creating an Azure Cosmos DB item, and the request message body contains the item to be created in JSON format. -When triggers are registered, you can specify the operations that it can run with. This trigger should be created with a `TriggerOperation` value of `TriggerOperation.Create`, which means using the trigger in a replace operation as shown in the following code is not permitted. +When triggers are registered, you can specify the operations that it can run with. This trigger should be created with a `TriggerOperation` value of `TriggerOperation.Create`, which means that using the trigger in a replace operation isn't permitted. -For examples of how to register and call a pre-trigger, see [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) articles. +For examples of how to register and call a pre-trigger, see [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers). ### <a id="post-triggers"></a>Post-triggers function updateMetadata() { } ``` -One thing that is important to note is the transactional execution of triggers in Azure Cosmos DB. The post-trigger runs as part of the same transaction for the underlying item itself. An exception during the post-trigger execution will fail the whole transaction. Anything committed will be rolled back and an exception returned. +One thing that's important to note is the transactional execution of triggers in Azure Cosmos DB. The post-trigger runs as part of the same transaction for the underlying item itself. An exception during the post-trigger execution fails the whole transaction. Anything committed is rolled back and an exception is returned. -For examples of how to register and call a pre-trigger, see [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers) articles. +For examples of how to register and call a pre-trigger, see [pre-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-pre-triggers) and [post-triggers](how-to-use-stored-procedures-triggers-udfs.md#how-to-run-post-triggers). ## <a id="udfs"></a>How to write user-defined functions -The following sample creates a UDF to calculate income tax for various income brackets. This user-defined function would then be used inside a query. 
For the purposes of this example assume there is a container called "Incomes" with properties as follows: +The following sample creates a UDF to calculate income tax for various income brackets. This user-defined function would then be used inside a query. For the purposes of this example, assume there's a container called *Incomes* with properties as follows: ```json {- "name": "Satya Nadella", + "name": "Daniel Elfyn", "country": "USA", "income": 70000 } ``` -The following is a function definition to calculate income tax for various income brackets: +The following function definition calculates income tax for various income brackets: ```javascript function tax(income) { function tax(income) { } ``` -For examples of how to register and use a user-defined function, see [How to use user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-work-with-user-defined-functions) article. +For examples of how to register and use a UDF, see [How to work with user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md#how-to-work-with-user-defined-functions). ## Logging -When using stored procedure, triggers or user-defined functions, you can log the steps by enabling the script logging. A string for debugging is generated when `EnableScriptLogging` is set to true as shown in the following examples: +When using stored procedure, triggers, or UDFs, you can log the steps by enabling script logging. A string for debugging is generated when `EnableScriptLogging` is set to *true*, as shown in the following examples: # [JavaScript](#tab/javascript) Console.WriteLine(response.ScriptLog); ## Next steps -Learn more concepts and how-to write or use stored procedures, triggers, and user-defined functions in Azure Cosmos DB: +Learn more concepts and how to write or use stored procedures, triggers, and UDFs in Azure Cosmos DB: -* [How to register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md) +* [How to register and use stored procedures, triggers, and UDFs in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md) * [How to write stored procedures and triggers using JavaScript Query API in Azure Cosmos DB](how-to-write-javascript-query-api.md) -* [Working with Azure Cosmos DB stored procedures, triggers, and user-defined functions in Azure Cosmos DB](stored-procedures-triggers-udfs.md) +* [Working with Azure Cosmos DB stored procedures, triggers, and UDFs in Azure Cosmos DB](stored-procedures-triggers-udfs.md) * [Working with JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md) |
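The `async_sample` stored procedure referenced above appears only as a truncated stub in this diff. The following is a minimal sketch of the pattern the article describes — wrapping the server-side callback APIs in Promises so the body can use `async`/`await`. The helper object shape, the `SELECT * FROM c` query, and the `newProp` field are illustrative assumptions, not the article's exact sample.

```javascript
// Minimal async/await stored procedure sketch, assuming the standard
// server-side "__" shortcut for getContext().getCollection().
function async_sample() {
    const asyncHelper = {
        // Wrap the callback-based queryDocuments API in a Promise.
        queryDocuments(sqlQuery, options) {
            return new Promise((resolve, reject) => {
                const isAccepted = __.queryDocuments(__.getSelfLink(), sqlQuery, options, (err, feed, resultOptions) => {
                    if (err) reject(err);
                    else resolve({ feed, options: resultOptions });
                });
                if (!isAccepted) reject(new Error("queryDocuments was not accepted."));
            });
        },

        // Wrap the callback-based replaceDocument API in a Promise.
        replaceDocument(doc) {
            return new Promise((resolve, reject) => {
                const isAccepted = __.replaceDocument(doc._self, doc, (err, result, resultOptions) => {
                    if (err) reject(err);
                    else resolve({ result, options: resultOptions });
                });
                if (!isAccepted) reject(new Error("replaceDocument was not accepted."));
            });
        }
    };

    async function main() {
        // Query for an item, modify it, and replace it.
        const { feed } = await asyncHelper.queryDocuments("SELECT * FROM c", null);
        if (!feed || feed.length === 0) {
            getContext().getResponse().setBody("No documents found.");
            return;
        }
        const doc = feed[0];
        doc.newProp = 1; // Illustrative change only.
        await asyncHelper.replaceDocument(doc);
        getContext().getResponse().setBody("Replaced the first document.");
    }

    // Abort rolls back the implicit transaction if anything fails.
    main().catch(err => getContext().abort(err));
}
```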
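Similarly, the `validateToDoItemTimestamp` pre-trigger body is truncated in the diff. A minimal sketch of the behavior described in the prose — adding a timestamp to a newly created item that lacks one — could look like this, assuming the property is named `timestamp`:

```javascript
function validateToDoItemTimestamp() {
    var context = getContext();
    var request = context.getRequest();

    // The item to be created, as sent in the request body.
    var itemToCreate = request.getBody();

    // If the item doesn't already carry a timestamp, add one.
    if (!("timestamp" in itemToCreate)) {
        var ts = new Date();
        itemToCreate["timestamp"] = ts.getTime();
    }

    // Write the (possibly updated) item back into the request body
    // so the create operation persists it.
    request.setBody(itemToCreate);
}
```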
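The `tax` UDF body is also truncated above. A minimal sketch, with illustrative bracket thresholds and rates (assumptions, not the article's exact figures), might be:

```javascript
// Illustrative income-tax UDF: thresholds and rates are assumptions,
// shown only to demonstrate the shape of a user-defined function.
function tax(income) {
    if (income === undefined) {
        throw new Error("no input");
    }

    if (income < 1000) {
        return income * 0.1;
    } else if (income < 10000) {
        return income * 0.2;
    } else {
        return income * 0.4;
    }
}
```

Once registered, the UDF is referenced from a query with the `udf.` prefix, for example: `SELECT * FROM Incomes t WHERE udf.tax(t.income) > 20000`.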
defender-for-cloud | Concept Easm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-easm.md | description: Learn how to gain comprehensive visibility and insights over extern Previously updated : 01/24/2023 Last updated : 03/05/2023 # What is an external attack surface? Defender EASM applies Microsoft's crawling technology to discover assets that - Pinpoint attacker-exposed weaknesses, anywhere and on-demand - Gain visibility into third-party attack surfaces -EASM collects data for publicly exposed assets ("outside-in"). That data can be used by MDC CSPM ("inside-out") to assist with internet-exposure validation and discovery capabilities to provide better visibility to customers. +EASM collects data for publicly exposed assets ("outside-in"). That data can be used by Defender for Cloud CSPM ("inside-out") to assist with internet-exposure validation and discovery capabilities to provide better visibility to customers. ## Learn more |
defender-for-cloud | Defender For Cloud Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md | Today's applications require security awareness at the code, infrastructure, a | Capability | What problem does it solve? | Get started | Defender plan and pricing | | - | | -- | - |-| [Code pipeline insights](defender-for-devops-introduction.md) | Empowers security teams with the ability to protect applications and resources from code to cloud across multi-pipeline environments, including GitHub and Azure DevOps. Findings from Defender for DevOps, such as IaaC misconfigurations and exposed secrets, can then be correlated with other contextual cloud security insights to prioritize remediation in code. | Connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) repositories to Defender for Cloud | [Defender for DevOps](https://azure.microsoft.com/pricing/details/defender-for-cloud/) | +| [Code pipeline insights](defender-for-devops-introduction.md) | Empowers security teams with the ability to protect applications and resources from code to cloud across multi-pipeline environments, including GitHub and Azure DevOps. Findings from Defender for DevOps, such as IaC misconfigurations and exposed secrets, can then be correlated with other contextual cloud security insights to prioritize remediation in code. | Connect [Azure DevOps](quickstart-onboard-devops.md) and [GitHub](quickstart-onboard-github.md) repositories to Defender for Cloud | [Defender for DevOps](https://azure.microsoft.com/pricing/details/defender-for-cloud/) | ## Improve your security posture |
defender-for-cloud | Episode Twenty Seven | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-seven.md | + + Title: Demystifying Defender for Servers | Defender for Cloud in the field ++description: Learn about different deployment options in Defender for Servers + Last updated : 03/05/2023+++# Demystifying Defender for Servers | Defender for Cloud in the field ++**Episode description**: In this episode of Defender for Cloud in the Field, Tom Janetscheck joins Yuri Diogenes to talk about the different deployment options in Defender for Servers. Tom covers the different agents available and the scenarios that will be most used for each agent, including the agentless feature. Tom also talks about the different vulnerability assessment solutions available, and how to deploy Defender for Servers at scale via policy or custom automation. +<br> +<br> +<iframe src="https://aka.ms/docs/player?id=dd9d789d-6685-47f1-9947-d31966aa4372" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe> ++- [02:14](/shows/mdc-in-the-field/demystify-servers#time=02m14s) - Understanding Defender for Servers P1 and P2 +- [06:15](/shows/mdc-in-the-field/demystify-servers#time=06m15s) - Pricing model +- [07:37](/shows/mdc-in-the-field/demystify-servers#time=07m37s) - Integration with MDE +- [10:08](/shows/mdc-in-the-field/demystify-servers#time=10m08s) - Using Defender for Servers P2 without MDE +- [11:32](/shows/mdc-in-the-field/demystify-servers#time=11m32s) - Understanding the different types of agents used by Defender for Servers +- [17:11](/shows/mdc-in-the-field/demystify-servers#time=17m11s) - The case for agentless implementation +- [22:52](/shows/mdc-in-the-field/demystify-servers#time=22m52s) - Deploying Defender for Servers at scale +++## Recommended resources + - Learn more about [Defender for Servers](plan-defender-for-servers.md) + - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS) + - Join our [Tech Community](https://aka.ms/SecurityTechCommunity) + - For more about [Microsoft Security](https://msft.it/6002T9HQY) ++- Follow us on social media: ++ - [LinkedIn](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFk5TXZuQld2NlpBRV9BQlJqMktYSm95WWhCZ3xBQ3Jtc0tsQU13MkNPWGNFZzVuem5zc05wcnp0VGxybHprVTkwS2todWw0b0VCWUl4a2ZKYVktNGM1TVFHTXpmajVLcjRKX0cwVFNJaDlzTld4MnhyenBuUGRCVmdoYzRZTjFmYXRTVlhpZGc4MHhoa3N6ZDhFMA&q=https%3A%2F%2Fwww.linkedin.com%2Fshowcase%2Fmicrosoft-security%2F) + - [Twitter](https://twitter.com/msftsecurity) ++- Join our [Tech Community](https://aka.ms/SecurityTechCommunity) ++- Learn more about [Microsoft Security](https://msft.it/6002T9HQY) ++## Next steps ++> [!div class="nextstepaction"] +> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md) |
defender-for-cloud | Episode Twenty Six | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-six.md | Last updated 02/15/2023 ## Next steps > [!div class="nextstepaction"]-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md) +> [Demystifying Defender for Servers](episode-twenty-seven.md) |
defender-for-cloud | Monitoring Components | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/monitoring-components.md | The following use cases explain how deployment of the Log Analytics agent works - **A pre-existing VM extension is present**: - When the Monitoring Agent is installed as an extension, the extension configuration allows reporting to only a single workspace. Defender for Cloud doesn't override existing connections to user workspaces. Defender for Cloud will store security data from the VM in the workspace already connected, if the "Security" or "SecurityCenterFree" solution has been installed on it. Defender for Cloud may upgrade the extension version to the latest version in this process. - To see which workspace the existing extension is sending data to, run the test to [Validate connectivity with Microsoft Defender for Cloud](/archive/blogs/yuridiogenes/validating-connectivity-with-azure-security-center). Alternatively, you can open Log Analytics workspaces, select a workspace, select the VM, and look at the Log Analytics agent connection.- - If you have an environment where the Log Analytics agent is installed on client workstations and reporting to an existing Log Analytics workspace, review the list of [operating systems supported by Microsoft Defender for Cloud](security-center-os-coverage.md) to make sure your operating system is supported. For more information, see [Existing log analytics customers](./faq-azure-monitor-logs.yml). + - If you have an environment where the Log Analytics agent is installed on client workstations and reporting to an existing Log Analytics workspace, review the list of [operating systems supported by Microsoft Defender for Cloud](security-center-os-coverage.md) to make sure your operating system is supported. Learn more about [working with the Log Analytics agent](working-with-log-analytics-agent.md). |
defender-for-iot | Detect Windows Endpoints Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/detect-windows-endpoints-script.md | The script described in this article is supported for the following Windows oper - Windows NT - Windows 7 - Windows 10-- Windows Server 2003/2008/2012/2016+- Windows Server 2003/2008/2012/2016/2019 ## Prerequisites After having run the script as described [earlier](#run-the-script), import the ## Next steps -For more information, see [View detected devices on-premises](how-to-investigate-sensor-detections-in-a-device-inventory.md). +For more information, see [View detected devices on-premises](how-to-investigate-sensor-detections-in-a-device-inventory.md). |
defender-for-iot | Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/device-inventory.md | + + Title: Device inventory - Microsoft Defender for IoT +description: Learn about the Defender for IoT device inventory features available from the Azure portal, OT sensor console, and the on-premises management console. Last updated : 02/19/2023++++# Defender for IoT device inventory ++Defender for IoT's device inventory helps you identify details about specific devices, such as manufacturer, type, serial number, firmware, and more. Gathering details about your devices helps your teams proactively investigate vulnerabilities that can compromise your most critical assets. ++- **Manage all your IoT/OT devices** by building up-to-date inventory that includes all your managed and unmanaged devices ++- **Protect devices with risk-based approach** to identify risks such as missing patches, vulnerabilities and prioritize fixes based on risk scoring and automated threat modeling ++- **Update your inventory** by deleting irrelevant devices and adding organization-specific information to emphasize your organization preferences ++For example: +++## Device management options ++The Defender for IoT device inventory is available in the Azure portal, OT network sensor consoles, and the on-premises management console. ++While you can view device details from any of these locations, each location also offers extra device inventory support. The following table describes the device inventory visible supported for each location and the extra actions available from that location only: ++|Location |Description | Extra inventory support | +|||| +|**Azure portal** | Devices detected from all cloud-connected OT sensors and Enterprise IoT sensors. <br><br> | - If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) on your Azure subscription, the device inventory also includes devices detected by Microsoft Defender for Endpoint agents. <br><br>- If you also use [Microsoft Sentinel](iot-solution.md), incidents in Microsoft Sentinel are linked to related devices in Defender for IoT. <br><br>- Use Defender for IoT [workbooks](workbooks.md) for visibility into all cloud-connected device inventory, including related alerts and vulnerabilities. | +|**OT network sensor consoles** | Devices detected by that OT sensor | - View all detected devices across a network device map<br>- View related events on the **Event timeline** | +|**An on-premises management console** | Devices detected across all connected OT sensors | Enhance device data by importing data manually or via script | ++For more information, see: ++- [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md) +- [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md) +- [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md) ++> [!NOTE] +> If you have an [Enterprise IoT plan](eiot-defender-for-endpoint.md) to [integrate with Microsoft Defender for Endpoint](concept-enterprise.md), devices detected by an Enterprise IoT sensor are also listed in Defender for Endpoint. 
For more information, see: +> +> - [Defender for Endpoint device inventory](/microsoft-365/security/defender-endpoint/machines-view-overview) +> - [Defender for Endpoint device discovery](/microsoft-365/security/defender-endpoint/device-discovery) +> ++## Supported devices ++Defender for IoT's device inventory supports device types across a variety of industries and fields. ++|Devices |For example ... | +||| +|**Manufacturing**| Industrial and operational devices, such as pneumatic devices, packaging systems, industrial packaging systems, industrial robots | +|**Building** | Access panels, surveillance devices, HVAC systems, elevators , smart lighting systems | +|**Health care** | Glucose meters, monitors | +|**Transportation / Utilities** | Turnstiles, people counters, motion sensors, fire and safety systems, intercoms | +|**Energy and resources** | DCS controllers, PLCs, historian devices, HMIs | +|**Endpoint devices** | Workstations, servers, or mobile devices | +| **Enterprise** | Smart devices, printers, communication devices, or audio/video devices | +| **Retail** | Barcode scanners, humidity sensor, punch clocks | ++A *transient* device type indicates a device that was detected for only a short time. We recommend investigating these devices carefully to understand their impact on your network. ++*Unclassified* devices are devices that don't otherwise have an out-of-the-box category defined. +++## Unauthorized devices ++When you're first working with Defender for IoT, during the learning period just after deploying a sensor, all devices detected are identified as *authorized* devices. ++After the learning period is over, any new devices detected are considered to be *unauthorized* and *new* devices. We recommend checking these devices carefully for risks and vulnerabilities. For example, in the Azure portal, filter the device inventory for `Authorization == **Unauthorized**`. On the device details page, drill down and check for related vulnerabilities, alerts, and recommendations. ++The *new* status is removed as soon as you edit any of the device details move the device on an OT sensor device map. In contrast, the *unauthorized* label remains until you manually edit the device details and mark it as *authorized*. ++On an OT sensor, unauthorized devices are also included in the following reports: ++- [Attack vector reports](how-to-create-attack-vector-reports.md): Devices marked as *unauthorized* are included in an attack vector simulation as suspected rogue devices that might be a threat to the network. ++- [Risk assessment reports](how-to-create-risk-assessment-reports.md): Devices marked as *unauthorized* are listed in risk assessment reports as their risks to your network require investigation. ++## Important OT devices ++Mark OT devices as *important* to highlight them for extra tracking. On an OT sensor, important devices are included in the following reports: ++- [Attack vector reports](how-to-create-attack-vector-reports.md): Devices marked as *important* are included in an attack vector simulation as possible attack targets. ++- [Risk assessment reports](how-to-create-risk-assessment-reports.md): Devices marked as *important* are counted in risk assessment reports when calculating security scores ++## Device inventory column data ++The following table lists the columns available in the Defender for IoT device inventory on the Azure portal. Starred items **(*)** are also available from the OT sensor. ++|Name |Description +||| +|**Authorization** * |Editable. 
Determines whether or not the device is marked as *authorized*. This value may need to change as the device security changes. | +|**Business Function** | Editable. Describes the device's business function. | +| **Class** | Editable. The device's class. <br>Default: `IoT` | +|**Data source** | The source of the data, such as a micro agent, OT sensor, or Microsoft Defender for Endpoint. <br>Default: `MicroAgent`| +|**Description** * | Editable. The device's description. | +| **Device Id** | The device's Azure-assigned ID number| +| **Firmware model** | The device's firmware model.| +| **Firmware vendor** | Editable. The vendor of the device's firmware. | +| **Firmware version** * |Editable. The device's firmware version. | +|**First seen** * | The date and time the device was first seen. Shown in `MM/DD/YYYY HH:MM:SS AM/PM` format. On the OT sensor, shown as **Discovered**.| +|**Importance** | Editable. The device's important level: `Low`, `Medium`, or `High`. | +| **IPv4 Address** | The device's IPv4 address. | +|**IPv6 Address** | The device's IPv6 address.| +|**Last activity** * | The date and time the device last sent an event through to Azure or to the OT sensor, depending on where you're viewing the device inventory. Shown in `MM/DD/YYYY HH:MM:SS AM/PM` format. | +|**Location** | Editable. The device's physical location. | +| **MAC Address** * | The device's MAC address. | +|**Model** *| Editable The device's hardware model. | +|**Name** * | Mandatory, and editable. The device's name as the sensor discovered it, or as entered by the user. | +|**OS architecture** | Editable. The device's operating system architecture. | +|**OS distribution** | Editable. The device's operating system distribution, such as Android, Linux, and Haiku. | +|**OS platform** * | Editable. The device's operating system, if detected. On the OT sensor, shown as **Operating System**. | +|**OS version** | Editable. The device's operating system version, such as Windows 10 or Ubuntu 20.04.1. | +|**PLC mode** * | The device's PLC operating mode, including both the *Key* state (physical / logical) and the *Run* state (logical). If both states are the same, then only one state is listed.<br><br>- Possible *Key* states include: `Run`, `Program`, `Remote`, `Stop`, `Invalid`, and `Programming Disabled`. <br><br>- Possible *Run* states are `Run`, `Program`, `Stop`, `Paused`, `Exception`, `Halted`, `Trapped`, `Idle`, or `Offline`. | +|**Programming device** * | Editable. Defines whether the device is defined as a *Programming Device*, performing programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations. | +|**Protocols** *| The protocols that the device uses. | +| **Purdue level** | Editable. The Purdue level in which the device exists.| +|**Scanner device** * | Editable. Defines whether the device performs scanning-like activities in the network. | +|**Sensor**| The sensor the device is connected to. | +|**Serial number** *| The device's serial number. | +| **Site** | The device's site. <br><br>All Enterprise IoT sensors are automatically added to the **Enterprise network** site. | +| **Slots** | The number of slots the device has. | +| **Subtype** | Editable. The device's subtype, such as *Speaker* or *Smart TV*. <br>**Default**: `Managed Device` | +| **Tags** | Editable. The device's tags. | +|**Type** * | Editable. The device type, such as *Communication* or *Industrial*. 
<br>**Default**: `Miscellaneous` | +|**Vendor** *| The name of the device's vendor, as defined in the MAC address. | +| **VLAN** * | The device's VLAN. | +|**Zone** | The device's zone. | ++The following columns are available on OT sensors only: ++- The device's **DHCP Address** +- The device's **FQDN** address and **FQDN Last Lookup Time** +- The **Groups** that include the device, as [defined on the OT sensor's device map](how-to-work-with-the-sensor-device-map.md#create-a-custom-device-group) +- The device's **Module address** +- The device's **Rack** and **Slot** +- The number of **Unacknowledged Alerts** associated with the device ++> [!NOTE] +> The additional **Agent type** and **Agent version** columns are used by device builders. For more information, see [Microsoft Defender for IoT for device builders documentation](/azure/defender-for-iot/device-builders/). ++## Next steps ++For more information, see: ++- [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md) +- [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md) +- [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md) +- [Microsoft Defender for IoT - supported IoT, OT, ICS, and SCADA protocols](concept-supported-protocols.md) +- [Investigate devices on a device map](how-to-work-with-the-sensor-device-map.md) |
defender-for-iot | How To Activate And Set Up Your Sensor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-activate-and-set-up-your-sensor.md | You can access console tools from the side menu. Tools help you: | Tools| Description | | --|--| | Overview | View a dashboard with high-level information about your sensor deployment, alerts, traffic, and more. |-| Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zoom, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate sensor detections in the Device Map](how-to-work-with-the-sensor-device-map.md#investigate-sensor-detections-in-the-device-map). | +| Device map | View the network devices, device connections, Purdue levels, and device properties in a map. Various zoom, highlight, and filter options are available to help you gain the insight you need. For more information, see [Investigate devices on a device map](how-to-work-with-the-sensor-device-map.md) | | Device inventory | The Device inventory displays a list of device attributes that this sensor detects. Options are available to: <br /> - Sort, or filter the information according to the table fields, and see the filtered information displayed. <br /> - Export information to a CSV file. <br /> - Import Windows registry details. For more information, see [Detect Windows workstations and servers with a local script](detect-windows-endpoints-script.md).| | Alerts | Alerts are triggered when sensor engines detect changes or suspicious activity in network traffic that requires your attention. For more information, see [View and manage alerts on your OT sensor](how-to-view-alerts.md).| |
defender-for-iot | How To Enhance Port And Vlan Name Resolution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-enhance-port-and-vlan-name-resolution.md | For more information, see [On-premises users and roles for OT monitoring with De Defender for IoT automatically assigns names to most universally reserved ports, such as DHCP or HTTP. However, you might want to customize the name of a specific port to highlight it, such as when you're watching a port with unusually high detected activity. -Port names are shown in Defender for IoT when [viewing device groups from the OT sensor's device map](how-to-work-with-the-sensor-device-map.md#group-highlight-and-filters-tools), or when you create OT sensor reports that include port information. +Port names are shown in Defender for IoT when [viewing device groups from the OT sensor's device map](how-to-work-with-the-sensor-device-map.md), or when you create OT sensor reports that include port information. **To customize a port name:** |
defender-for-iot | How To Investigate All Enterprise Sensor Detections In A Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md | Use any of the following options to modify or filter the devices shown: | **Save a filter** | To save the current set of filters, select the **Save As** button that appears in the filter row.| | **Load a saved filter** | Saved filters are listed on the left, in the **Groups** pane. <br><br>1. Select the **Options** :::image type="icon" source="media/how-to-work-with-asset-inventory-information/options-menu.png"border="false"::: button in the toolbar to display the **Groups** pane. <br>2. In the **Device Inventory Filters** list, select the saved filter you want to load. | -For more information, see [Device inventory column reference](#device-inventory-column-reference). +For more information, see [Device inventory column data](device-inventory.md#device-inventory-column-data). ## Export the device inventory to CSV For example: For more information, see [Defender for IoT sensor and management console APIs](references-work-with-defender-for-iot-apis.md). -## Device inventory column reference --The following table describes the device properties shown in the **Device inventory** page on an on-premises management console. --| Name | Description | -|--|--| -| **Unacknowledged Alerts** | The number of unhandled alerts associated with this device. | -| **Business Unit** | The business unit that contains this device. | -| **Region** | The region that contains this device. | -| **Site** | The site that contains this device. | -| **Zone** | The zone that contains this device. | -| **Appliance** | The Microsoft Defender for IoT sensor that protects this device. | -| **Name** | The name of this device as Defender for IoT discovered it. | -| **Type** | The type of device, such as PLC or HMI. | -| **Vendor** | The name of the device's vendor, as defined in the MAC address. | -| **Operating System** | The OS of the device. | -| **Firmware** | The device's firmware. | -| **IP Address** | The IP address of the device. | -| **VLAN** | The VLAN of the device. | -| **MAC Address** | The MAC address of the device. | -| **Protocols** | The protocols that the device uses. | -| **Unacknowledged Alerts** | The number of unhandled alerts associated with this device. | -| **Is Authorized** | The authorization status of the device:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been authorized. | -| **Is Known as Scanner** | Whether this device performs scanning-like activities in the network. | -| **Is Programming Device** | Whether the device is a programming device:<br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations.<br />- **False**: The device isn't a programming device. | -| **Groups** | Groups in which this device participates. | -| **Last Activity** | The last activity that the device performed. | -| **Discovered** | When this device was first seen in the network. | -| **PLC mode (preview)** | The PLC operating mode includes the Key state (physical) and run state (logical). Possible **Key** states include, Run, Program, Remote, Stop, Invalid, Programming Disabled.Possible Run. The possible **Run** states are Run, Program, Stop, Paused, Exception, Halted, Trapped, Idle, Offline. 
if both states are the same, only one state is presented. | ## Next steps |
defender-for-iot | How To Investigate Sensor Detections In A Device Inventory | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md | Title: Manage your OT device inventory from a sensor console description: Learn how to view and manage OT devices (assets) from the Device inventory page on a sensor console. Previously updated : 07/21/2022 Last updated : 02/28/2023 This procedure describes how to view detected devices in the **Device inventory* :::image type="content" source="media/how-to-inventory-sensor/sensor-inventory-view-details.png" alt-text="Screenshot of the Device inventory page on an OT sensor console." lightbox="media/how-to-inventory-sensor/sensor-inventory-view-details.png"::: -For more information, see [Device inventory column reference](#device-inventory-column-reference). +For more information, see [Device inventory column data](device-inventory.md#device-inventory-column-data). ## Edit device details If you're working with a cloud-connected sensor, any edits you make in the senso **To edit device details**: -1. Select one or more devices in the grid, and then select **View full details** in the pane on the right. +1. Select a device in the grid, and then select **Edit** in the toolbar at the top of the page. ++1. In the **Edit** pane on the right, modify the device fields as needed, and then select **Save** when you're done. ++You can also open the edit pane from the device details page: ++1. Select a device in the grid, and then select **View full details** in the pane on the right. 1. In the device details page, select **Edit Properties**. Editable fields include: - Device name - Device type - OS-- Purdue layer+- Purdue level - Description+- Scanner or programming device -For more information, see [Device inventory column reference](#device-inventory-column-reference). +For more information, see [Device inventory column data](device-inventory.md#device-inventory-column-data). ## Export the device inventory to CSV For example, if you merge two devices, each with an IP address, both IP addresse **To merge devices from the device inventory:** -1. Use the SHIFT key to select two devices from the inventory, and then right-click one of them. +In the device inventory grid, select the devices you want to merge, and then select **Merge** in the toolbar at the top of the page. -1. Select **Merge** to merge the devices. This can take up to 2 minutes to complete. --1. When the **Set merge device attributes** dialog appears, enter a meaningful name for your merged device, and then select **Save**. +The devices are merged, and a confirmation message appears at the top right. ## View inactive devices You may want to delete devices from your device inventory, such as if they've be Deleted devices are removed from the **Device map** and the device inventories on the Azure portal and on-premises management console, and aren't calculated when generating reports, such as Data Mining, Risk Assessment, or Attack Vector reports. -**To delete a single device**: +**To delete one or more devices**: ++You can delete a device when it's been inactive for more than 10 minutes. -You can delete a single device when theyΓÇÖve been inactive for more than 10 minutes. +1. 
In the **Device inventory** page, select the device or devices you want to delete, and then select **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/delete-device.png" border="false"::: in the toolbar at the top of the page. -1. In the **Device inventory** page, select the device you want to delete, and then select **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/delete-device.png" border="false"::: in the toolbar at the top of the page. -1. At the prompt, select **Yes** to confirm that you want to delete the device from Defender for IoT. +1. At the prompt, select **Confirm** to confirm that you want to delete the device from Defender for IoT. -**To delete all inactive devices** +A confirmation message appears at the top right. ++**To delete all inactive devices**: This procedure is supported for the *cyberx* and admin users only. This procedure is supported for the *cyberx* and admin users only. All devices detected within the range of the filter will be deleted. If you delete a large number of devices, the delete process may take a few minutes. -## Export device inventory information --You can export device inventory information to a .csv file. --**To export:** --- Select **Export file** from the Device Inventory page. The report is generated and downloaded.--## Device inventory column reference --The following table describes the device properties shown in the **Device inventory** page on a sensor console. --| Name | Description | -|--|--| -| **Description** | A description of the device | -| **Discovered** | When this device was first seen in the network. | -| **Firmware version** | The device's firmware, if detected. | -| **FQDN** | The device's FQDN value | -| **FQDN lookup time** | The device's FQDN lookup time | -| **Groups** | The groups that this device participates in. | -| **IP Address** | The IP address of the device. | -| **Is Authorized** | The authorization status defined by the user:<br />- **True**: The device has been authorized.<br />- **False**: The device hasn't been | -| **Is Known as Scanner** | Defined as a network scanning device by the user. | -| **Is Programming device** | Defined as an authorized programming device by the user. <br />- **True**: The device performs programming activities for PLCs, RTUs, and controllers, which are relevant to engineering stations. <br />- **False**: The device isn't a programming device. | -| **Last Activity** | The last activity that the device performed. | -| **MAC Address** | The MAC address of the device. | -| **Name** | The name of the device as the sensor discovered it, or as entered by the user. | -| **Operating System** | The OS of the device, if detected. | -| **PLC mode** (preview) | The PLC operating mode that includes the Key state (physical, or logical), and the Run state (logical). Possible Key states include, `Run`, `Program`, `Remote`, `Stop`, `Invalid`, and `Programming Disabled`. Possible Run states are `Run`, `Program`, `Stop`, `Paused`, `Exception`, `Halted`, `Trapped`, `Idle`, or `Offline`. If both states are the same, then only one state is presented. | -| **Protocols** | The protocols that the device uses. | -| **Type** | The type of device as determined by the sensor, or as entered by the user. | -| **Unacknowledged Alerts** | The number of unacknowledged alerts associated with this device. | -| **Vendor** | The name of the device's vendor, as defined in the MAC address. | -| **VLAN** | The VLAN of the device. 
For more information, see [Define VLAN names](how-to-manage-the-on-premises-management-console.md#define-vlan-names). | - ## Next steps For more information, see: |
defender-for-iot | How To Manage Device Inventory For Organizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md | Use any of the following options to modify or filter the devices shown: |**Modify columns shown** | Select **Edit columns** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false":::. In the **Edit columns** pane:<br><br> - Select the **+ Add Column** button to add new columns to the grid.<br> - Drag and drop fields to change the columns order.<br>- To remove a column, select the **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/trashcan-icon.png" border="false"::: icon to the right.<br>- To reset the columns to their default settings, select **Reset** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/reset-icon.png" border="false":::. <br><br>Select **Save** to save any changes made. | | **Group devices** | From the **Group by** above the gird, select a category, such as **Class**, **Data source**, **Location**, **Purdue level**, **Site**, **Type**, **Vendor**, or **Zone**, to group the devices shown. Inside each group, devices retain the same column sorting. To remove the grouping, select **No grouping**. | -For more information, see [Device inventory column reference](#device-inventory-column-reference). +For more information, see [Device inventory column data](device-inventory.md#device-inventory-column-data). ### View full device details As you manage your network devices, you may need to update their details. For ex Your updates are saved for all selected devices. -For more information, see [Device inventory column reference](#device-inventory-column-reference). +For more information, see [Device inventory column data](device-inventory.md#device-inventory-column-data). ### Reference of editable fields The following device fields are supported for editing in the **Device inventory* |**Importance** | Select **Low**, **Normal**, or **High** to modify the device's importance. | |**Programming device** | Toggle the **Programming Device** option on or off as needed for your device. | -For more information, see [Device inventory column reference](#device-inventory-column-reference). +For more information, see [Device inventory column data](device-inventory.md#device-inventory-column-data). ## Export the device inventory to CSV A success message appears at the top right confirming that the devices have been The merged device that is now listed in the grid retains the details of the device with the most recent activity or an update to its identifying details. -## Device inventory column reference --The following table describes the device properties shown in the **Device inventory** page on the Azure portal. --| Parameter | Description | -|--|--| -| **Application** | The application that exists on the device. | -|**Authorization** |Editable. Determines whether or not the device is *authorized*. This value may change as device security changes. | -|**Business Function** | Editable. Describes the device's business function. | -| **Class** | Editable. The class of the device. <br>Default: `IoT`| -| **Data source** | The source of the data, such as a micro agent, OT sensor, or Microsoft Defender for Endpoint. <br>Default: `MicroAgent`| -| **Description** | Editable. The description of the device. | -| **Firmware vendor** | Editable. 
The vendor of the device's firmware. | -| **Firmware version** |Editable. The version of the firmware. | -| **First seen** | The date, and time the device was first seen. Presented in format MM/DD/YYYY HH:MM:SS AM/PM. | -|**Hardware Model** | Editable. Determines the device's hardware model. | -|**Hardware Vendor** |Editable. Determines the device's hardware vendor. | -| **Importance** | Editable. The level of importance of the device. | -| **IPv4 Address** | The IPv4 address of the device. | -| **IPv6 Address** | The IPv6 address of the device. | -| **Last activity** | The date, and time the device last sent an event to the cloud. Presented in format MM/DD/YYYY HH:MM:SS AM/PM. | -| **Location** | Editable. The physical location of the device. | -| **MAC Address** | The MAC address of the device. | -| **Model** | The device's model. | -| **Name** | Mandatory, and editable. The name of the device as the sensor discovered it, or as entered by the user. | -| **OS architecture** | Editable. The architecture of the operating system. | -| **OS distribution** | Editable. The distribution of the operating system, such as Android, Linux, and Haiku. | -| **OS platform** | Editable. The OS of the device, if detected. | -| **OS version** | Editable. The version of the operating system, such as Windows 10 and Ubuntu 20.04.1. | -| **PLC mode** | The PLC operating mode that includes the Key state (physical, or logical), and the Run state (logical). Possible Key states include, `Run`, `Program`, `Remote`, `Stop`, `Invalid`, and `Programming Disabled`. Possible Run states are `Run`, `Program`, `Stop`, `Paused`, `Exception`, `Halted`, `Trapped`, `Idle`, or `Offline`. If both states are the same, then only one state is presented. | -| **PLC secured** | Determines if the PLC mode is in a secure state. A possible secure state is `Run`. A possible unsecured state can be either `Program`, or `Remote`. | -|**Programming device** | Editable. Determines whether the device is a *Programming Device*. | -| **Programming time** | The last time the device was programmed. | -| **Protocols** | The protocols that the device uses. | -| **Purdue level** | Editable. The Purdue level in which the device exists. | -| **Scanner device** | Whether the device performs scanning-like activities in the network. | -| **Sensor** | The sensor the device is connected to. | -| **Site** | The site that contains this device. <br><br>All Enterprise IoT sensors are automatically added to the **Enterprise network** site.| -| **Slots** | The number of slots the device has. | -| **Subtype** | Editable. The subtype of the device, such as speaker and smart tv. <br>**Default**: `Managed Device` | -| **Tags** | Editable. Tagging data for each device. | -| **Type** | Editable. The type of device, such as communication, and industrial. <br>**Default**: `Miscellaneous` | -| **Underlying devices** | Any relevant underlying devices for the device | -| **Underlying device region** | The region for an underlying device | -| **Vendor** | The name of the device's vendor, as defined in the MAC address. | -| **VLAN** | The VLAN of the device. | -| **Zone** | The zone that contains this device. | - ## Next steps For more information, see: |
defender-for-iot | How To View Information Per Zone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-information-per-zone.md | - Title: Learn about devices on specific zones -description: Use the on-premises management console to get a comprehensive view information per specific zone --- Previously updated : 06/12/2022 -----# View information per zone ---## View a device map for a zone --View a Device map for a selected zone on a sensor. This view displays all network elements related to the selected zone, including the sensors, the devices connected to them, and other information. ----- In the **Site Management** window, select **View Zone Map** from the bar that contains the zone name.-- :::image type="content" source="media/how-to-work-with-asset-inventory-information/default-region-to-default-business-unit-v2.png" alt-text="Default region to default business unit."::: --The **Device Map** window appears. -The following tools are available for viewing devices and device information from the map. For details about each of these features, see the *Defender for IoT platform user guide*. --- **Map zoom views**: Simplified View, Connections View, and Detailed View. The displayed map view varies depending on the map's zoom level. You switch between map views by adjusting the zoom levels.-- :::image type="icon" source="media/how-to-work-with-asset-inventory-information/zoom-icon.png" border="false"::: --- **Map search and layout tools**: Tools used to display varied network segments, devices, device groups, or layers.-- :::image type="content" source="media/how-to-work-with-asset-inventory-information/search-and-layout-tools.png" alt-text="Screenshot of the Search and Layout Tools view."::: --- **Labels and indicators on devices:** For example, the number of devices grouped in a subnet in an IT network. In this example, it's 8.-- :::image type="content" source="media/how-to-work-with-asset-inventory-information/labels-and-indicators.png" alt-text="Screenshot of labels and indicators."::: --- **View device properties**: For example, the sensor that's monitoring the device and basic device properties. Right-click the device to view the device properties.-- :::image type="content" source="media/how-to-work-with-asset-inventory-information/asset-properties-v2.png" alt-text="Screenshot of the Device Properties view."::: --- **Alert associated with a device:** Right-click the device to view related alerts.-- :::image type="content" source="media/how-to-work-with-asset-inventory-information/show-alerts.png" alt-text="Screenshot of the Show Alerts view."::: --## View alerts associated with a zone --To view alerts associated with a specific zone: --- Select the alert icon from the **Zone** window. -- :::image type="content" source="media/how-to-work-with-asset-inventory-information/business-unit-view-v2.png" alt-text="The default Business Unit view with examples."::: --For more information, see [Overview: Working with alerts](how-to-work-with-alerts-on-premises-management-console.md). 
--### View the device inventory of a zone --To view the device inventory associated with a specific zone: --- Select **View Device Inventory** from the **Zone** window.-- :::image type="content" source="media/how-to-work-with-asset-inventory-information/default-business-unit.png" alt-text="The device inventory screen will appear."::: --For more information, see: --- [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)-- [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md)-- [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md)--## View additional zone information --The following additional zone information is available: --- **Zone details**: View the number of devices, alerts, and sensors associated with the zone.--- **Sensor details**: View the name, IP address, and version of each sensor assigned to the zone.--- **Connectivity status**: If a sensor is disconnected, connect from the sensor. See [Connect sensors to the on-premises management console](how-to-activate-and-set-up-your-on-premises-management-console.md#connect-sensors-to-the-on-premises-management-console). --- **Update progress**: If the connected sensor is being upgraded, upgrade statuses will appear. During the upgrade, the on-premises management console doesn't receive device information from the sensor.--## Next steps --[Gain insight into global, regional, and local threats](how-to-gain-insight-into-global-regional-and-local-threats.md) |
defender-for-iot | How To Work With Device Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-device-notifications.md | - Title: Work with device notifications in Defender for IoT -description: Notifications provide information and recommendations about network activity. Previously updated : 01/02/2022----# Work with device notifications --Discovery notifications provide information about network activity that might require your attention, along with recommendations for handling this activity. For example, you might receive a notification about an inactive device that should be reconnected, or removed if it's no longer part of the network. Notifications aren't the same as alerts. Alerts provide information about changes that might present a threat to your network. --## Notification types --The following table describes the notification event types you might receive, along with the options for handling them. When you dismiss a notification, the device information is not updated with the recommended action. If traffic is detected again, the notification is resent. --| Type | Description | Responses | -|--|--|--| -| New IP detected | A new IP address is associated with the device. Five scenarios might be detected: <br /><br /> An additional IP address was associated with a device. This device is also associated with an existing MAC address.<br /><br /> A new IP address was detected for a device that's using an existing MAC address. Currently the device does not communicate by using an IP address.<br /> <br /> A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> A new IP address was detected for a device that's using a virtual IP address. | **Set Additional IP to Device** (merge devices) <br /> <br />**Replace Existing IP** <br /> <br /> **Dismiss**<br /> Remove the notification. | -| Inactive devices | Traffic wasn't detected on a device for more than 60 days. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device) | **Delete** <br /> If this device isn't part of your network, remove it. <br /><br />**Dismiss** <br /> Remove the notification if the device is part of your network. If the device is inactive (for example, because it's mistakenly disconnected from the network), dismiss the notification and reconnect the device. | -| New OT devices | A subnet includes an OT device that's not defined in an ICS subnet. <br /><br /> Each subnet that contains at least one OT device can be defined as an ICS subnet. This helps differentiate between OT and IT devices on the map. | **Set as ICS Subnet** <br /> <br /> **Dismiss** <br />Remove the notification if the device isn't part of the subnet. | -| No subnets configured | No subnets are currently configured in your network. <br /><br /> Configure subnets for better representation in the map and the ability to differentiate between OT and IT devices. | **Open Subnets Configuration** and configure subnets. <br /><br />**Dismiss** <br /> Remove the notification. | -| Operating system changes | One or more new operating systems have been associated with the device. | Select the name of the new OS that you want to associate with the device.<br /><br /> **Dismiss** <br /> Remove the notification. | -| New subnets | New subnets were discovered. 
| **Learn**<br />Automatically add the subnet.<br />**Open Subnet Configuration**<br />Add all missing subnet information.<br />**Dismiss**<br />Remove the notification. | -| Device type changes | A new device type has been associated with the device. | **Set as {…}**<br />Associate the new type with the device.<br />**Dismiss**<br />Remove the notification. | --## View notifications --1. In Defender for IoT, select **Device Map**. -1. Select **Notifications** icon. -1. In **Discovery Notifications**, review all notifications. -1. For each notification, either accept the recommendation, or dismiss it. -1. By default, all notifications are shown. - - To filter for specific dates and times, select **Time range ==** and specify a days, weeks, or month filter. - - Select **Add filter** to filter on other device, subnet, and operating system values. ---## Respond to multiple notifications --You might need to handle several notifications simultaneously. For example: --- If IT did an OS upgrade to a large set of network servers, you can instruct the sensor to learn the new server versions for all upgraded servers. -- If a group of devices in a certain line was phased out and isn't active anymore, you can instruct the sensor to remove these devices from the console.--Respond as follows: --1. In **Discovery Notifications**, choose **Select All**, and then clear the notifications you don't need. When you choose **Select All**, Defender for IoT displays information about which notifications can be handled or dismissed simultaneously, and which need your input. -1. You can accept all recommendations, dismiss all recommendations, or handled notifications one at a time. -1. For notifications that indicate manual changes are required, such as **New IPs** and **No Subnets**, make the manual modifications as needed. --## Next steps --For more information, see [View alerts](how-to-view-alerts.md). |
defender-for-iot | How To Work With The Sensor Device Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-the-sensor-device-map.md | Title: Work with the sensor device map -description: The Device map provides a graphical representation of network devices detected. Use the map to analyze, and manage device information, network slices and generate reports. Previously updated : 02/02/2022+ Title: Investigate devices in the OT sensor or on-premises management console device map +description: Learn how to use the device map on an OT sensor or an on-premises management console, which provides a graphical representation of devices and the connections between them. Last updated : 01/25/2023 -# Investigate sensor detections in the Device map +# Investigate devices on a device map -The Device map provides a graphical representation of network devices detected, and the connections between them. Use the map to: +OT device maps provide a graphic representation of the network devices detected by the OT network sensor and the connections between them. - - Retrieve, analyze, and manage device information. +Use a device map to retrieve, analyze, and manage device information, either all at once or by network segment, such as specific interest groups or Purdue layers. If you're working in an air-gapped environment with an on-premises management console, use a *zone map* to view devices across all connected OT sensors in a specific zone. - - Analyze network slices, for example-specific groups of interest or Purdue layers. +## Prerequisites - - Generate reports, for example export device details and summaries. +To perform the procedures in this article, make sure that you have: +- An OT network sensor [installed](ot-deploy/install-software-ot-sensor.md), [activated, and configured](how-to-activate-and-set-up-your-sensor.md), with network traffic ingested -**To access the map:** +- Access to your OT sensor or on-premises management console. Users with the **Viewer** role can view data on the map. To import or export data or edit the map view, you need access as a **Security Analyst** or **Admin** user. For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md). -- Select **Device map** from the console main screen.+To view devices across multiple sensors in a zone, you'll also need an on-premises management console [installed](ot-deploy/install-software-on-premises-management-console.md), [activated, and configured](how-to-activate-and-set-up-your-on-premises-management-console.md), with multiple sensors connected and assigned to sites and zones. +## View devices on OT sensor device map -## Map search and layout tools +1. Sign into your OT sensor and select **Device map**. All devices detected by the OT sensor are displayed by default according to [Purdue layer](best-practices/understand-network-architecture.md#purdue-reference-model-and-defender-for-iot). -A variety of map tools help you gain insight into devices and connections of interest to you. -- [Basic search tools](#basic-search-tools)-- [Group highlight and filters tools](#group-highlight-and-filters-tools)-- [Map display tools](#map-display-tools)+ On the OT sensor's device map: -Your user role determines which tools are available in the Device Map window. 
For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md) and [Create and manage users on an OT network sensor](manage-users-sensor.md). + - Devices with currently active alerts are highlighted in red + - Starred devices are those that have been marked as important + - Devices with no alerts are shown in black, or grey in the zoomed-in connections view -### Basic search tools + For example: -The following basic search tools are available: -- Search by IP or MAC address-- Multicast or broadcast traffic-- Last activity: Filter the devices on the map according to the time they last communicated with other devices.+ :::image type="content" source="media/how-to-work-with-maps/device-map-default.png" alt-text="Screenshot of a default view of an OT sensor's device map." lightbox="media/how-to-work-with-maps/device-map-default.png"::: - :::image type="icon" source="media/how-to-work-with-maps/search-bar-icon-v2.png" border="false"::: +1. Zoom in and select a specific device to view the connections between it and other devices, highlighted in blue. -When you search by IP or MAC address, the map displays the device that you searched for with the devices connected to it. + When zoomed in, each device shows the following details: + - The device's host name, IP address, and subnet address, if relevant. + - The number of currently active alerts on the device. + - The device type, represented by various icons. + - The number of devices grouped in a subnet in an IT network, if relevant. This number of devices is shown in a black circle. + - Whether the device is newly detected or unauthorized. -### Group highlight and filters tools +1. Right-click a specific device and select **View properties** to drill down further to the **Map View** tab on the device's [device details page](how-to-investigate-sensor-detections-in-a-device-inventory.md#view-the-device-inventory). -Filter or highlight the map based on default and custom device groups. +### Modify the OT sensor map display -- Filtering omits the devices that aren't in the selected group.-- Highlights display all devices and highlights the selected items in the group in blue.+Use any of the following map tools to modify the data shown and how it's displayed: - :::image type="content" source="media/how-to-work-with-maps/group-highlight-and-filters-v2.png" alt-text="Screenshot of the group highlights and filters."::: +|Name |Description | +||| +|**Refresh map** | Select to refresh the map with updated data. | +| **Notifications** | Select to view [device notifications](#manage-device-notifications). | +|**Search by IP / MAC** | Filter the map to display only devices connected to a specific IP or MAC address. | +|**Multicast/broadcast** | Select to edit the filter that shows or hides multicast and broadcast devices. By default, multicast and broadcast traffic is hidden. | +|**Add filter** (Last seen) | Select to filter the devices displayed to those seen in a specific time period, from the last five minutes to the last seven days. | +|**Reset filters** | Select to reset the *Last seen* filter. | +|**Highlight** | Select to highlight the devices in a specific [device group](#built-in-device-map-groups). Highlighted devices are shown on the map in blue. <br><br>Use the **Search groups** box to search for device groups to highlight, or expand your group options, and then select the group you want to highlight.
| +|**Filter** | Select to filter the map to show only the devices in a specific [device group](#built-in-device-map-groups). <br><br>Use the **Search groups** box to search for device groups, or expand your group options, and then select the group you want to filter by. | +| **Zoom** <br>:::image type="icon" source="media/how-to-work-with-maps/zoom-in-icon-v2.png" border="false"::: / :::image type="icon" source="media/how-to-work-with-maps/zoom-out-icon-v2.png" border="false"::: | Zoom in on the map to view the connections between each device, either using the mouse or the **+**/**-** buttons on the right of the map. | +| **Fit to screen** <br>:::image type="icon" source="media/how-to-work-with-maps/fit-to-screen-icon.png" border="false"::: | Zooms out to fit all devices on the screen. | +|**Fit to selection**<br>:::image type="icon" source="media/how-to-work-with-maps/fit-to-selection-icon.png" border="false"::: | Zooms out enough to fit all selected devices on the screen. | +|**IT/OT Presentation Options** <br> :::image type="icon" source="media/how-to-work-with-maps/collapse-view-icon.png" border="false"::: |Select **Disable Display IT Networks Groups** to prevent the ability to [collapse subnets](#view-it-subnets-from-an-ot-sensor-device-map) in the map. This option is selected by default. | +|**Layout options** <br>:::image type="icon" source="media/how-to-work-with-maps/layouts-icon-v2.png" border="false"::: | Select one of the following: <br>- **Pin layout**. Select to save device locations if you've dragged them to new places on the map. <br />- **Layout by connection**. Select to view devices organized by their connections. <br />- **Layout by Purdue**. Select to view devices organized by their Purdue layers. | -**To highlight or filter devices:** +To see device details, select a device and expand the device details pane on the right. In a device details pane: -1. Select **Device map** on the side menu. +- Select **Activity Report** to jump to the device's [data mining report](how-to-create-data-mining-queries.md). +- Select **Event Timeline** to jump to the device's [event timeline](how-to-track-sensor-activity.md). +- Select **Device Details** to jump to a full [device details page](how-to-investigate-sensor-detections-in-a-device-inventory.md#view-the-device-inventory). -1. From the Groups pane, select the group you want to highlight or filter. -1. Toggle the **Highlight** or **Filter** option. -The following predefined groups are available: +### View IT subnets from an OT sensor device map -| Group name | Description | -|--|--| -| **Known applications** | Devices that use reserved ports, such as TCP. | -| **non-standard ports (default)** | Devices that use non-standard ports or ports that haven't been assigned an alias. | -| **OT protocols (default)** | Devices that handle known OT traffic. | -| **Authorization (default)** | Devices that were discovered in the network during the learning process or were officially authorized on the network. | -| **Device inventory filters** | Devices grouped according to the filters saved in the Device Inventory table. | -| **Polling intervals** | Devices grouped by polling intervals. The polling intervals are generated automatically according to cyclic channels or periods. For example, 15.0 seconds, 3.0 seconds, 1.5 seconds, or any other interval. Reviewing this information helps you learn if systems are polling too quickly or slowly. | -| **Programming** | Engineering stations, and programming machines.
| -| **Subnets** | Devices that belong to a specific subnet. | -| **VLAN** | Devices associated with a specific VLAN ID. | -| **Cross subnet connections** | Devices that communicate from one subnet to another subnet. | -| **Attack vector simulations** | Vulnerable devices detected in attack vector reports. To view these devices on the map, select the **Display on Device Map** checkbox when generating the Attack Vector. :::image type="content" source="media/how-to-work-with-maps/add-attack-v3.png" alt-text="Screenshot of the Add Attack Vector Simulations":::| -| **Last activity** | Devices grouped by the time frame they were last active, for example: One hour, six hours, one day, or seven days. | -| **Not In Active Directory** | All non-PLC devices that aren't communicating with the Active Directory. | --For information about creating custom groups, see [Define custom groups](#define-custom-groups). --### View filtered information as a map group --You can display devices from saved filters in the Device map. For more information, see [View the device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md#view-the-device-inventory). --**To view devices in the map:** --1. After creating and saving an Inventory filter, navigate to the Device map. -1. In the map page, open the Groups pane on the left. -1. Scroll down to the **Asset Inventory Filters** group. The groups you saved from the Inventory appear. --### Map display tools --| Icon | Description | -|--|--| -| :::image type="icon" source="media/how-to-work-with-maps/fit-to-screen-icon.png" border="false"::: | Fit to screen. | -| :::image type="icon" source="media/how-to-work-with-maps/fit-to-selection-icon.png" border="false"::: | Fits a group of selected devices to the center of the screen. | -| :::image type="icon" source="media/how-to-work-with-maps/collapse-view-icon.png" border="false"::: | IT/OT Presentation Options. Select **Disable Display IT Networks Groups** to prevent the ability to collapse IT networks in the map. This option is turned on by default. | -|:::image type="icon" source="media/how-to-work-with-maps/layouts-icon-v2.png" border="false"::: | Layout options, including: <br />**Pin layout**. Drag devices on the map to a new location. Use the Pin option to save those locations when you leave the map to use another option. <br />**Layout by connection**. View connections between devices. <br />**Layout by Purdue**. View the devices in the map according to Enterprise, supervisory and process control layers. <br /> | -| :::image type="icon" source="media/how-to-work-with-maps/zoom-in-icon-v2.png" border="false"::: :::image type="icon" source="media/how-to-work-with-maps/zoom-out-icon-v2.png" border="false"::: | Zoom in or out of the map. | ---### Map zoom views --Working with map views helps expedite forensics when analyzing large networks. Map views include the following options: -- - [Bird’s-eye view](#birds-eye-view) -- - [Device type and connection view](#device-type-and-connection-view) ---### Bird’s-eye view --This view provides an at-a-glance view of devices represented as follows: -- - Red dots indicate devices with alert(s) -- - Starred dots indicate devices marked as important +By default, IT devices are automatically aggregated by [subnet](how-to-control-what-traffic-is-monitored.md#configure-subnets), so that the map focuses on OT and ICS networks. 
- - Black dots indicate devices with no alerts +**To expand an IT subnet**: - :::image type="content" source="media/how-to-work-with-maps/colored-dots-v2.png" alt-text="Screenshot of a bird eye view of the map." lightbox="media/how-to-work-with-maps/colored-dots-v2.png"::: +1. Sign into your OT sensor and select **Device map**. +1. Locate your subnet on the map. You might need to zoom in on the map to view a subnet icon, which looks like several machines inside a box. For example: -### Device type and connection view + :::image type="content" source="media/how-to-work-with-maps/expand-collapse-subnets.png" alt-text="Screenshot of a subnet device on the device map."::: -This view presents devices represented as icons on the map. +1. Right-click the subnet device on the map and select **Expand Network**. - - Devices with alerts are displayed with a red ring -- - Devices without alerts are displayed with a grey ring -- - Devices displayed as a star were marked as important --Overall connections are displayed. ---**To view specific connections:** --1. Select a device in the map. -1. Specific connections between devices are displayed in blue. In addition, you'll see connections that cross various Purdue levels. -- :::image type="content" source="media/how-to-work-with-maps/connections-purdue-level.png" alt-text="Screenshot of the detailed map view." lightbox="media/how-to-work-with-maps/connections-purdue-level.png" ::: --### View IT subnets --By default, IT devices are automatically aggregated by subnet, so that the map view is focused on OT and ICS networks. The presentation of the IT network elements is collapsed to a minimum which reduces the total number of the devices presented on the map, and provides a clear picture of the OT and ICS network elements. --Each subnet is presented as a single entity on the Device map. Options are available to expand subnets to see details, collapse subnets or hide them. --**To expand an IT subnet:** -1. Right-click the icon on the map that represents the IT network and select **Expand Network**. -1. A confirmation box appears, notifying you that the layout change can't be redone. -1. Select **OK**. The IT subnet elements appear on the map. +1. In the confirmation message that appears above the map, select **OK**. **To collapse an IT subnet:** -1. From the left pane, select **Devices**. --2. Select the expanded subnet. The number in red indicates how many expanded IT subnets currently appear on the map. --3. Select the subnet(s) that you want to collapse or select **Collapse All**. The selected subnet appears collapsed on the map. --The collapse icon is updated with the updated number of collapsed IT subnets. +1. Sign into your OT sensor and select **Device map**. +1. Select one or more expanded subnets and then select **Collapse All**. -**To disable the option to collapse and expand IT subnets:** -1. Select the **Disable Display IT Network Groups**. -1. Select Confirm the dialog box that opens. -This option is available to Administrator users. -> [!NOTE] - > For information on updating default OT IT networks, see [Configure subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets). +## Create a custom device group -## Define custom groups +In addition to the OT sensor's [built-in device groups](#built-in-device-map-groups), create new custom groups as needed to use when highlighting or filtering devices on the map. -In addition to viewing predefined groups, you can define custom groups.
The groups appear in the Device map, Device inventory, and Data Mining Reports. +1. Either select **+ Create Custom Group** in the toolbar, or right-click a device in the map and then select **Add to custom group**. -> [!NOTE] -> You can also create groups from the Device Inventory. +1. In the **Add custom group** pane: -**To create a group:** + - In the **Name** field, enter a meaningful name for your group, with up to 30 characters. + - From the **Copy from groups** menu, select any groups you want to copy devices from. + - From the **Devices** menu, select any extra devices to add to your group. -1. Select **Create Custom Group** from the Device map. +## Import / export device data -1. In the Add custom group dialog box, add the name of the group. Use up to 30 characters. +Use one of the following options to import and export device data: -1. Select an existing group(s) or choose specific device(s). +- **Import Devices**. Select to import devices from a pre-configured .CSV file. +- **Export Devices**. Select to export all currently displayed devices, with full details, to a .CSV file. +- **Export Device Summary**. Select to export a high level summary of all currently displayed devices to a .CSV file. -1. Select **Submit**. -**To add devices to a custom group**: +## Edit devices -1. Right-click a device(s) on the map. +1. Sign into an OT sensor and select **Device map**. -1. Select **Add to custom group**. +1. Right-click a device to open the device options menu, and then select any of the following options: -1. Select an existing group(s) or choose specific device(s). + |Name |Description | + ||| + |**Edit properties** | Opens the edit pane where you can edit device properties, such as authorization, name, description, OS platform, device type, Purdue level and if it is a scanner or programming device. | + |**View properties** | Opens the device's details page. | + |**Authorize/Unauthorize** | Changes the device's [authorization status](device-inventory.md#unauthorized-devices). | + |**Mark as Important / Non-Important** | Changes the device's [importance](device-inventory.md#important-ot-devices) status, highlighting business critical servers on the map with a star and elsewhere, including OT sensor reports and the Azure device inventory. | + |**Show Alerts** / **Show Events** | Opens the **Alerts** or **Event Timeline** tab on the device's details page. | + | **Activity Report** | Generates an activity report for the device for the selected timespan. | + | **Simulate Attack Vectors** | Generates an [attack vector simulation](how-to-create-attack-vector-reports.md) for the selected device. | + | **Add to custom group** | Creates a new [custom group](#create-a-custom-device-group) with the selected device. | + | **Delete** |Deletes the device from the inventory. | -1. Select **Submit**. +## Merge devices -## Learn more about devices +You may want to merge devices if the OT sensor detected multiple network entities associated with a unique device, such as a PLC with four network cards, or a single laptop with both WiFi and a physical network card. -An extensive range of tools are available to learn more about devices from the Device map, including: +You can only merge [authorized devices](device-inventory.md#unauthorized-devices). -- [Device labels and indicators](#device-labels-and-indicators)+> [!IMPORTANT] +> You can't undo a device merge. If you mistakenly merged two devices, delete the devices and then wait for the sensor to rediscover both. 
+> -- [Device details](#device-details)+**To merge multiple devices**: -- [Device types](#device-types)+1. Sign into your OT sensor and select **Device map**. -- [Backplane properties](#backplane-properties)+1. Select the authorized devices you want to merge by using the SHIFT key to select more than one device, and then right-click and select **Merge**. +The devices are merged, and a confirmation message appears at the top right. Merge events are listed in the OT sensor's event timeline. -### Device labels and indicators +## Manage device notifications -The following labels and indicators may appear on devices on the map: +As opposed to alerts, which provide details about changes in your traffic that might present a threat to your network, device notifications on an OT sensor device map provide details about network activity that might require your attention, but aren't threats. -| Device label | Description | -|--|--| -| :::image type="content" source="media/how-to-work-with-maps/host-v2.png" alt-text="Screenshot of the I P host name."::: | IP address host name and IP address, or subnet addresses | -| :::image type="content" source="media/how-to-work-with-maps/amount-alerts-v2.png" alt-text="Screenshot of the number of alerts"::: | Number of alerts associated with the device | -| :::image type="icon" source="media/how-to-work-with-maps/type-v2.png" border="false"::: | Device type icon, for example storage, PLC or historian. | -| :::image type="content" source="media/how-to-work-with-maps/grouped-v2.png" alt-text="Screenshot of devices grouped together."::: | Number of devices grouped in a subnet in an IT network. In this example 8. | -| :::image type="content" source="media/how-to-work-with-maps/not-authorized-v2.png" alt-text="Screenshot of the device learning period"::: | A device that was detected after the Learning period and wasn't authorized as a network device. | -| Solid line | Logical connection between devices | -| :::image type="content" source="media/how-to-work-with-maps/new-v2.png" alt-text="Screenshot of a new device discovered after learning is complete."::: | New device discovered after Learning is complete. | +For example, you might receive a notification about an inactive device that needs to be reconnected, or removed if it's no longer part of the network. -### Device details and contextual information +**To view and handle device notifications**: -You can access detailed and contextual information about a device from the map, for example: -- Device properties, such as the device type, protocols detected, or Purdue level associated with the device. -- Backplane properties. -- Contextual information such as open alerts associated with the device.+1. Sign into the OT sensor and select **Device map** > **Notifications**. -**To view details:** -1. Right-click a device on the map. -1. Select **View properties**. -1. Navigate to the information you need. +1. In the **Discovery Notifications** pane on the right, filter notifications as needed by time range, device, subnet, or operating systems. - :::image type="content" source="media/how-to-work-with-maps/device-details-from-map.png" alt-text="Screenshot of the device details shown for the device selected in map."::: + For example: -#### Device details + :::image type="content" source="media/how-to-work-with-maps/device-notifications.png" alt-text="Screenshot of device notifications on an OT sensor's Device map page." lightbox="media/how-to-work-with-maps/device-notifications.png"::: -This section describes device details. 
+1. Each notification may have different mitigation options. Do one of the following: -| Item | Description | -|--|--| -| Name | The device name. <br /> By default, the sensor discovers the device name as it's defined in the network. For example, a name defined in the DNS server. <br /> If no such names were defined, the device IP address appears in this field. <br /> You can change a device name manually. Give your devices meaningful names that reflect their functionality. | -| Authorized status | Indicates if the device is authorized or not. During the Learning period, all the devices discovered in the network are identified as Authorized. When a device is discovered after the Learning period, it appears as Unauthorized by default. You can change this definition manually. For information on this status and manually authorizing and unauthorizing, see [Authorize and unauthorize devices](#authorize-and-unauthorize-devices). | -| Last activity | The last time the device was detected. | -| Alert | The number of open alerts associated with the device. | -| Type | The device type as detected by the sensor. | -| Vendor | The device vendor. This is determined by the leading characters of the device MAC address. This field is read-only. | -| Operating System | The device OS detected by the sensor. | -| Location | The Purdue layer identified by the sensor for this device, including: <br /> - Automatic <br /> - Process Control <br /> - Supervisory <br /> - Enterprise | -| Description | A free text field. <br /> Add more information about the device. | -| Attributes | Additional information was discovered on the device. For example, view the PLC Run and Key state, the secure status of the PLC, or information on when the state changed. <br /> The information is read only and can't be updated from the Attributes section. | -| Scanner or Programming device | **Scanner**: Enable this option if you know that this device is known as a scanner and there's no need to alert you about it. <br /> **Programming Device**: Enable this option if you know that this device is known as a programming device and is used to make programming changes. Identifying it as a programming device will prevent alerts for programming changes originating from this asset. | -| Network Interfaces | The device interfaces. A RO field. | -| Protocols | The protocols used by the device. A RO field. | -| Firmware | If Backplane information is available, firmware information won't be displayed. | -| Address | The device IP address. | -| Serial | The device serial number. | -| Module Address | The device model and slot number or ID. | -| Model | The device model number. | -| Firmware Version | The firmware version number. | + - Handle one notification at a time, selecting a specific mitigation action, or selecting **Dismiss** to close the notification with no activity. + - Select **Select All** to show which notifications can be [handled together](#handling-multiple-notifications-together). Clear selections for specific notifications, and then select **Accept All** or **Dismiss All** to handle any remaining selected notifications together. -#### Contextual information -- View contextual information about the device. --**To view:** -1. Select **Map View** to see device connections to other devices. -1. Select **Alerts** to see details about alerts associated with the device. -1. Select **Event Timeline** to review events that occurred around the time of the detection. 
+> [!NOTE] +> Selected notifications are automatically resolved if they aren't dismissed or otherwise handled within 14 days. For more information, see the action indicated in the **Auto-resolve** column in the table [below](#device-notification-responses). +> -#### Backplane properties +### Handling multiple notifications together -If a PLC contains multiple modules separated into racks and slots, the characteristics might vary between the module cards. For example, if the IP address and the MAC address are the same, the firmware might be different. +You may have situations where you'd want to handle multiple notifications together, such as: -You can use the Backplane option to review multiple controllers/cards and their nested devices as one entity with various definitions. Each slot in the Backplane view represents the underlying devices – the devices that were discovered behind it. +- IT upgraded the OS across multiple network servers and you want to learn all of the new server versions. +- A group of devices is no longer active, and you want to instruct the OT sensor to remove the devices from the OT sensor. +When you handle multiple notifications together, you may still have remaining notifications that need to be handled manually, such as for new IP addresses or no subnets detected. -A Backplane can contain up to 30 controller cards and up to 30 rack units. The total number of devices included in the multiple levels can be up to 200 devices. -The Backplane pane is shown in the Device Properties window when Backplane details are detected. +### Device notification responses -Each slot appears with the number of underlying devices and the icon that shows the module type. +The following table lists available responses for each notification, and when we recommend using each one: -| Icon | Module Type | -|--|--| -| :::image type="content" source="media/how-to-work-with-maps/power.png" alt-text="Screenshot of the Power Supply icon."::: | Power Supply | -| :::image type="content" source="media/how-to-work-with-maps/analog.png" alt-text="Screenshot the Analog I/O icon."::: | Analog I/O | -| :::image type="content" source="media/how-to-work-with-maps/comms.png" alt-text="Screenshot of the Communication Adapter icon."::: | Communication Adapter | -| :::image type="content" source="media/how-to-work-with-maps/digital.png" alt-text="Screenshot of the Digital I/O icon."::: | Digital I/O | -| :::image type="content" source="media/how-to-work-with-maps/computer-processor.png" alt-text="Screenshot of the CPU icon."::: | CPU | -| :::image type="content" source="media/how-to-work-with-maps/HMI-icon.png" alt-text="Screenshot of the HMI icon."::: | HMI | -| :::image type="content" source="media/how-to-work-with-maps/average.png" alt-text="Screenshot of the Generic icon."::: | Generic | +| Type | Description | Available responses | Auto-resolve| +|--|--|--|--| +| **New IP detected** | A new IP address is associated with the device. This may occur in the following scenarios: <br><br>- A new or additional IP address was associated with a device already detected, with an existing MAC address.<br><br> - A new IP address was detected for a device that's using a NetBIOS name. <br /><br /> - An IP address was detected as the management interface for a device associated with a MAC address. <br /><br /> - A new IP address was detected for a device that's using a virtual IP address. 
| - **Set Additional IP to Device**: Merge the devices <br />- **Replace Existing IP**: Replaces any existing IP address with the new address <br /> - **Dismiss**: Remove the notification. |**Dismiss** | +| **No subnets configured** | No subnets are currently configured in your network. <br /><br /> We recommend configuring subnets for the ability to differentiate between OT and IT devices on the map. | - **Open Subnets Configuration** and [configure subnets](how-to-control-what-traffic-is-monitored.md#configure-subnets). <br />- **Dismiss**: Remove the notification. |**Dismiss** | +| **Operating system changes** | One or more new operating systems have been associated with the device. | - Select the name of the new OS that you want to associate with the device.<br /> - **Dismiss**: Remove the notification. |No automatic handling| +| **New subnets** | New subnets were discovered. |- **Learn**: Automatically add the subnet.<br />- **Open Subnet Configuration**: Add all missing subnet information.<br />- **Dismiss**: Remove the notification. |**Dismiss** | +| **Device type changes** | A new device type has been associated with the device. | - **Set as {…}**: Associate the new type with the device.<br />- **Dismiss**: Remove the notification. |No automatic handling| +The following legacy notifications were removed in version 22.3.6. If you have an earlier OT sensor version installed, you may still have these notifications to resolve: +| Type | Description | Available responses | +|--|--|--| +| **Inactive devices** | Traffic wasn't detected on a specific device for more than 60 days. | We recommend removing the device from your network if it's no longer needed. <br><br>**Dismiss**: If the device is part of your network but currently inactive, such as if it's mistakenly disconnected from the network, dismiss the notification and reconnect the device. | +| **New OT devices** | A subnet includes an OT device that's not defined in an ICS subnet. <br><br> Each subnet that contains at least one OT device can be defined as an ICS subnet. This helps differentiate between OT and IT devices on the map. | **Set as ICS Subnet** <br><br>**Dismiss**: Remove the notification if the device isn't part of the subnet. | ## View a device map for a specific zone +If you're working with an on-premises management console with sites and zones configured, device maps are also available for each zone. +On the on-premises management console, zone maps show all network elements related to a selected zone, including OT sensors, detected devices, and more. -## Manage device information from the map +**To view a zone map**: -Under certain circumstances, you may need to update device information provided by Defender for IoT. The following options are available: +1. Sign into an on-premises management console and select **Site Management** > **View Zone Map** for the zone you want to view.
For example: -- [Update device properties](#update-device-properties)-- [Delete devices](#delete-devices)-- [Merge devices](#merge-devices)-- [Authorize and unauthorize devices](#authorize-and-unauthorize-devices)-- [Mark devices as important](#mark-devices-as-important)+ :::image type="content" source="media/how-to-work-with-asset-inventory-information/default-region-to-default-business-unit-v2.png" alt-text="Screenshot of default region to default business unit." lightbox="media/how-to-work-with-asset-inventory-information/default-region-to-default-business-unit-v2.png"::: +1. Use any of the following map tools to change your map display: -### Update device properties + |Name |Description | + ||| + |**Save current arrangement** <br> <br>:::image type="icon" source="media/how-to-work-with-maps/save-zone-map.png" border="false"::: | Saves any changes you've made in the map display. | + |**Hide multicast/broadcast addresses**<br><br>:::image type="icon" source="media/how-to-work-with-maps/hide-multi-cast-zone-map.png" border="false"::: | Selected by default. Select to show multicast and broadcast devices on the map. | + |**Present Purdue lines** <br><br>:::image type="icon" source="media/how-to-work-with-maps/present-purdue-zone-map.png" border="false"::: | Selected by default. Select to hide Purdue lines on the map. | + |**Relayout** <br> <br>:::image type="icon" source="media/how-to-work-with-maps/relayout-zone-map.png" border="false"::: | Select to reorganize the layout by Purdue lines or by zone. | + |**Scale to fit screen** <br><br> :::image type="icon" source="media/how-to-work-with-maps/scale-zone-map.png" border="false"::: | Zooms in or out on the map so that the entire map fits on the screen. | + | **Search by IP / MAC** | Select a specific IP or MAC address to highlight the device on the map. | + | **Change to a different zone map** <br><br>:::image type="icon" source="media/how-to-work-with-maps/change-zone-map.png" border="false"::: | Select to open the **Change Zone Map** dialog, where you can select a different zone map to view. | + | **Zoom** <br><br>:::image type="icon" source="media/how-to-work-with-maps/zoom-in-icon-v2.png" border="false"::: / :::image type="icon" source="media/how-to-work-with-maps/zoom-out-icon-v2.png" border="false"::: | Zoom in on the map to view the connections between each device, either using the mouse or the **+**/**-** buttons on the right of the map. | -Certain device properties can be updated manually. Information manually entered will override information discovered by Defender for IoT. +1. Zoom in to view more details per devices, such as to view the number of devices grouped in a subnet, or to expand a subnet. -**To update properties:** -1. Right-click a device from the map. -1. Select **View properties**. -1. Select **Edit properties.** +1. Right-click a device and select **View properties** to open a **Device Properties** dialog, with more details about the device. - :::image type="content" source="media/how-to-work-with-maps/edit-config.png" alt-text="Screenshot of the Edit device property pane."::: -1. Update any of the following: +1. Right-click a device shown in red and select **View alerts** to jump to the **Alerts page**, with alerts filtered only for the selected device. - - Authorized status - - Device name - - Device type. For a list of types, see [Device types](#device-types). 
- - OS - - Purdue layer - - Description - -#### Device types +## Built-in device map groups -This table lists device types you can manually assign to a device. +The following table lists the device groups available out-of-the-box on the OT sensor **Device map** page. [Create extra, custom groups](#create-a-custom-device-group) as needed for your organization. -| Category | Device Type | +| Group name | Description | |--|--|-| ICS | Engineering Station <br /> PLC <br />Historian <br />HMI <br />IED <br />DCS Controller <br />RTU <br />Industrial Packaging System <br />Industrial Scale <br />Industrial Robot <br />Slot <br />Meter <br />Variable Frequency Drive <br />Robot Controller <br />Servo Drive <br />Pneumatic Device <br />Marquee | -| IT | Domain Controller <br />DB Server <br />Workstation <br />Server <br />Terminal Station <br />Storage <br />Smart Phone <br />Tablet <br />Backup Server | -| IoT | IP Camera <br />Printer <br />Punch Clock <br />ATM <br />Smart TV <br />Game console <br />DVR <br />Door Control Panel <br />HVAC <br />Thermostat <br />Fire Alarm <br />Smart Light <br />Smart Switch <br />Fire Detector <br />IP Telephone <br />Alarm System <br />Alarm Siren <br />Motion Detector <br />Elevator <br />Humidity Sensor <br />Barcode Scanner <br />Uninterruptible Power Supply <br />People Counter System <br />Intercom <br />Turnstile | -| Network | Wireless Access Point <br />Router <br />Switch <br />Firewall <br />VPN Gateway <br />NTP Server <br />Wifi Pineapple <br />Physical Location <br />I/O Adapter <br /> Protocol Converter | --### Delete devices --You may want to delete a device if the information learned isn't relevant. For example, -- - A partner contractor at an engineering workstation connects temporarily to perform configuration updates. After the task is completed, the device is removed. -- - Due to changes in the network, some devices are no longer connected. --If you don't delete the device, the sensor will continue monitoring it. After 60 days, a notification will appear, recommending that you delete. --You may receive an alert indicating that the device is unresponsive if another device tries to access it. In this case, your network may be misconfigured. --The device will be removed from the Device Map, Device Inventory, and Data Mining reports. Other information, for example: information stored in Widgets will be maintained. --The device must be inactive for at least 10 minutes to delete it. --**To delete a device from the device map:** --1. Right-click a device on the map and select **Delete**. --### Merge devices --Under certain circumstances you may need to merge devices. This may be required if the sensor discovered separate network entities that are associated with one unique device. For example, -- - A PLC with four network cards. -- - A Laptop with WIFI and physical card. - - - A Workstation with two, or more network cards. --When merging, you instruct the sensor to combine the device properties of two devices into one. When you do this, the Device Properties window and sensor reports will be updated with the new device property details. --For example, if you merge two devices, each with an IP address, both IP addresses will appear as separate interfaces in the Device Properties window. You can only merge authorized devices. --The event timeline presents the merge event. ---You can't undo a device merge. If you mistakenly merged two devices, delete the device and wait for the sensor to rediscover both. --**To merge devices:** --1. 
Select two devices (shift-click), and then right-click one of them. --2. Select **Merge** to merge the devices. It can take up to 2 minutes complete the merge. --3. In the set merge device attributes dialog box, choose a device name. -- :::image type="content" source="media/how-to-work-with-maps/name-the-device-v2.png" alt-text="Screenshot of the attributes dialog box."::: --4. Select **Save**. --### Authorize and unauthorize devices --During the Learning period, all the devices discovered in the network are identified as authorized devices. The **Authorized** label doesn't appear on these devices in the Device map. --When a device is discovered after the Learning period, it appears as an unauthorized device. In addition to seeing unauthorized devices in the map, you can also see them in the Device Inventory. ---**New device vs unauthorized** --New devices detected after the Learning period will appear with a `New` and `Unauthorized` label. --If you move a device on the map or manually change the device properties, the `New` label is removed from the device icon. --#### Unauthorized devices - Attack Vectors and Risk Assessment reports --Unauthorized devices are included in Risk Assessment reports and Attack Vectors reports. --- **Attack Vector Reports:** Devices marked as unauthorized are resolved in the Attack Vector as suspected rogue devices that might be a threat to the network.-- :::image type="content" source="media/how-to-work-with-maps/attack-vector-reports.png" alt-text="Screenshot of the attack vector reports."::: --- **Risk Assessment Reports:** Devices marked as unauthorized are identified in Risk Assessment reports.-- :::image type="content" source="media/how-to-work-with-maps/unauthorized-risk-assessment-report.png" alt-text="Screenshot of a Risk Assessment report showing an unauthorized device."::: --**To authorize or unauthorize devices manually:** --1. Right-click the device on the map and select **Authorize** or **Unauthorize**. --### Mark devices as important --You can mark significant network devices as important, for example, business critical servers. These devices are marked with a star on the map. The star varies according to the map's zoom level. ---**To mark a device as Important:** --1. Right-click the device on the map and select **Mark as important** --#### Important devices - Attack Vectors and Risk Assessment reports --Important devices are calculated when generating Risk Assessment reports and Attack Vectors reports. -- - Attack Vector reports devices marked as important are resolved in the Attack Vector as Attack Targets. -- - Risk Assessment Reports: Devices marked as important are calculated when providing the security score in the Risk Assessment report. - -#### Important devices - Defender for IoT on the Azure portal --Devices you mark as important on your sensor are also marked as important in the Device inventory on the Defender for IoT portal on Azure. -+| **Attack vector simulations** | Vulnerable devices detected in attack vector reports, where the **Show in Device Map** option is [toggled on](how-to-create-attack-vector-reports.md).| +| **Authorization** | Devices that were either discovered during an initial learning period or were later manually marked as *authorized* devices.| +| **Cross subnet connections** | Devices that communicate from one subnet to another subnet. 
| +| **Device inventory filters** | Any devices based on a [filter](how-to-investigate-sensor-detections-in-a-device-inventory.md) created in the OT sensor's **Device inventory** page. | +| **Known applications** | Devices that use reserved ports, such as TCP. | +| **Last activity** | Devices grouped by the time frame they were last active, for example: One hour, six hours, one day, or seven days. | +| **Non-standard ports** | Devices that use non-standard ports or ports that haven't been assigned an alias. | +| **Not In Active Directory** | All non-PLC devices that aren't communicating with the Active Directory. | +| **OT protocols** | Devices that handle known OT traffic. | +| **Polling intervals** | Devices grouped by polling intervals. The polling intervals are generated automatically according to cyclic channels or periods. For example, 15.0 seconds, 3.0 seconds, 1.5 seconds, or any other interval. Reviewing this information helps you learn if systems are polling too quickly or slowly. | +| **Programming** | Engineering stations, and programming machines. | +| **Subnets** | Devices that belong to a specific subnet. | +| **VLAN** | Devices associated with a specific VLAN ID. | ## Next steps For more information, see [Investigate sensor detections in a Device Inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md).+ |
defender-for-iot | References Data Retention | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-data-retention.md | The following table lists how long device data is stored in each Defender for Io | Storage type | Details | ||| | **Azure portal** | 90 days from the date of the **Last activity** value. <br><br> For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md). |-| **OT network sensor** | The retention of device inventory data isn't limited by time. <br><br> For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md). | -| **On-premises management console** | The retention of device inventory data isn't limited by time. <br><br> For more information, see [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md). | +| **OT network sensor** | 90 days from the date of the **Last activity** value. <br><br> For more information, see [Manage your OT device inventory from a sensor console](how-to-investigate-sensor-detections-in-a-device-inventory.md). | +| **On-premises management console** | 90 days from the date of the **Last activity** value. <br><br> For more information, see [Manage your OT device inventory from an on-premises management console](how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md). | ## Alert data retention |
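To make the retention window in the table above concrete, here's a small illustrative Python sketch (not part of Defender for IoT) that checks whether a device's **Last activity** timestamp still falls inside the 90-day window; the device timestamp used here is hypothetical.

```python
from datetime import datetime, timedelta, timezone

RETENTION_WINDOW = timedelta(days=90)  # retention period from the table above

def is_within_retention(last_activity, now=None):
    """Return True if a device's Last activity value is still inside the 90-day window."""
    now = now or datetime.now(timezone.utc)
    return (now - last_activity) <= RETENTION_WINDOW

# Hypothetical device last seen 100 days ago: it has aged out of the inventory.
last_seen = datetime.now(timezone.utc) - timedelta(days=100)
print(is_within_retention(last_seen))  # False
```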
energy-data-services | How To Generate Refresh Token | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-refresh-token.md | In this article, you will learn how to generate a refresh token. The following a [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] ## Register your app with Azure AD-To use the Azure Data Manager for Energys Preview platform endpoint, you must register your app using the [Azure app registration portal](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. +To use the Azure Data Manager for Energy Preview platform endpoint, you must register your app using the [Azure app registration portal](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. To configure an app to use the OAuth 2.0 authorization code grant flow, save the following values when registering the app: |
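To show how the registration values saved above are used, here's a minimal, hedged Python sketch of the OAuth 2.0 authorization code exchange against the Microsoft identity platform v2.0 token endpoint. Requesting the `offline_access` scope is what causes a refresh token to be returned; every value below is a placeholder, and the exact API scope comes from your own app registration rather than from this article.

```python
import requests

# Placeholder values - replace with the app registration details you saved above.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"        # only needed for confidential (web) apps
REDIRECT_URI = "https://localhost:8080"  # must match the app registration
AUTH_CODE = "<authorization-code>"       # returned by the /authorize request
SCOPE = "<api-scope> offline_access"     # offline_access is what yields a refresh token

token_url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"

# Exchange the authorization code for tokens; the response includes a refresh_token
# because the request asked for the offline_access scope.
response = requests.post(
    token_url,
    data={
        "grant_type": "authorization_code",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "code": AUTH_CODE,
        "redirect_uri": REDIRECT_URI,
        "scope": SCOPE,
    },
)
tokens = response.json()
print(tokens.get("refresh_token"))
```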
energy-data-services | Overview Ddms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/overview-ddms.md | Domain data management services (DDMS) store, access, and retrieve metadata and ### Frictionless Exploration and Production (E&P) -The Azure Data Manager for Energy Preview DDMS service enables energy companies to access their data in a manner that is fast, portable, testable and extendible. As a result, they'll achieve unparalleled streaming performance and use the standards and output from OSDU™. The Azure DDMS service will onboard the OSDU™ DDMS and SLB proprietary DMS. Microsoft also continues to contribute to the OSDU™ community DDMS to ensure compatibility and architectural alignment. +The Azure Data Manager for Energy Preview DDMS service enables energy companies to access their data in a manner that is fast, portable, testable and extendible. As a result, they can achieve unparalleled streaming performance and use the standards and output from OSDU™. The Azure DDMS service includes the OSDU™ DDMS and SLB proprietary DMS. Microsoft also continues to contribute to the OSDU™ community DDMS to ensure compatibility and architectural alignment. ### Seamless connection between applications and data -Customers can deploy applications on top of Azure Data Manager for Energy Preview that have been developed as per the OSDU™ standard. They're able to connect applications to Core Services and DDMS without spending extensive cycles on deployment. Customers can also easily connect DELFI to Azure Data Manager for Energy Preview, eliminating the cycles associated with Petrel deployments and connection to data management systems. By connecting applications to DDMS service, Geoscientists can execute integrated E&P workflows with unparalleled performance on Azure and use OSDU™ core services. For example, a geophysicist can pick well ties on a seismic volume in Petrel and stream data from the seismic DMS. +You can deploy applications on top of Azure Data Manager for Energy Preview that have been developed as per the OSDU™ standard. You can connect them to Core Services and DDMS without spending extensive cycles on deployment. You can also easily connect DELFI to Azure Data Manager for Energy Preview, eliminating the cycles associated with Petrel deployments and connection to data management systems. By connecting applications to the DDMS service, Geoscientists can execute integrated E&P workflows with unparalleled performance on Azure and use OSDU™ core services. For example, a geophysicist can pick well ties on a seismic volume in Petrel and stream data from the seismic DMS. ## Types of DMS -Below are the OSDU™ DMS the service supports - +The service supports the following OSDU™ DMS: ### OSDU™ - Seismic DMS Due to this extraordinary data size, geoscientists working on-premises struggle The seismic DMS is part of the OSDU™ platform and enables users to connect seismic data to cloud storage to applications. It allows secure access to metadata associated with seismic data to efficiently retrieve and handle large blocks of data for OpenVDS, ZGY, and other seismic data formats. The DMS therefore enables users to stream huge amounts of data in OSDU™ compliant applications in real time. Enabling the seismic DMS on Azure Data Manager for Energy Preview opens a pathway for Azure customers to bring their seismic data to the cloud and take advantage of Azure storage and high performance computing.
-## OSDU™ - Wellbore DMS +### OSDU™ - Wellbore DMS -Well Logs are measurements taken while drilling, which tells energy companies information about the subsurface. Ultimately, they reveal whether hydrocarbons are present (or if the well is dry). Logs contain many attributes that inform geoscientists about the type of rock, its quality, and whether it contains oil, water, gas, or a mix. Energy companies use these attributes to determine the quality of a reservoir – how much oil or gas is present, its quality, and ultimately, economic viability. Maintaining Well Log data and ensuring easy access to historical logs is critical to energy companies. The Wellbore DMS facilitates access to this data in any OSDU™ compliant application. The Wellbore DMS was contributed by SLB to OSDU™. +Well Logs are measurements taken while drilling, which tell energy companies information about the subsurface. Ultimately, they reveal whether hydrocarbons are present (or if the well is dry). Logs contain many attributes that inform geoscientists about the type of rock, its quality, and whether it contains oil, water, gas, or a mix. Energy companies use these attributes to determine the quality of a reservoir – how much oil or gas is present, its quality, and ultimately, economic viability. Maintaining Well Log data and ensuring easy access to historical logs is critical to energy companies. The Wellbore DMS facilitates access to this data in any OSDU™ compliant application. -Well Log data can come in different formats. It's most often indexed by depth or time and the increment of these measurements can vary. Well Logs typically contain multiple attributes for each vertical measurement. Well Logs can therefore be small or for more modern Well Logs that use high frequency data, greater than 1 Gb. Well Log data is smaller than seismic; however, users will want to look at upwards of hundreds of wells at a time. This scenario is common in mature areas that have been heavily drilled such as the Permian Basin in West Texas. +Well Log data can come in different formats. It's indexed by depth or time and the increment of these measurements can vary. Well Logs typically contain multiple attributes for each vertical measurement. Well Logs can therefore be small or, for more modern Well Logs that use high frequency data, greater than 1 GB. Well Log data is smaller than seismic data; however, there are hundreds of wells associated with any oil exploration project. This scenario is common in mature areas that have been heavily drilled such as the Permian Basin in West Texas. -Geoscientists therefore want to access numerous well logs in a single session. They often are looking at all historical drilling programs in an area. As a result, they'll look at Well Log data that was collected using a wide variety of instruments and technology. This data will vary widely in format, quality, and sampling. The Wellbore DMS resolves this data through the OSDU™ schemas to deliver the data to the consuming applications. +As a geoscientist, you want to access numerous well logs in a single session. You often look at all historical drilling programs in an area. As a result, you can look at Well Log data that was collected using a wide variety of instruments and technology. This data can vary widely in format, quality, and sampling. The Wellbore DMS resolves this data through the OSDU™ schemas to deliver the data to the consuming applications.
Here are the services that the Wellbore DMS offers - Here are the services that the Wellbore DMS offers - - **Ingestion** - connection to file, interpretation software, system of records, and acquisition systems - **Contextualization** (Contextualized Access) -## OSDU™ - Well Delivery DMS +### OSDU™ - Well Delivery DMS The Well Delivery DMS stores critical drilling domain information related to the planning and execution of a well. Throughout a drilling program, engineers and domain experts need to access a wide variety of data types including activities, trajectories, risks, subsurface information, equipment used, fluid and cementing, rig utilization, and reports. Integrating this collection of data types together is the cornerstone of drilling insights. At the same time, until now, there was no industry wide standardization or enforced format. The common standards the Well Delivery DMS enables are critical to the Drilling Value Chain as it connects a diverse group of personas including operations, oil companies, service companies, logistics companies, etc. +### SLB™ - Petrel Data Services +Geoscientists working in [Petrel](https://www.software.slb.com/products/petrel) build Petrel Projects to store, track, share, and communicate their technical work. A Petrel project stores associated data in a ```.PET``` manifest file. It also keeps track of your windows and setup within Petrel. Petrel Data Services is an open DMS and doesn't require any additional licensing to get started. You can ingest Petrel projects to Petrel Data Services using OpenAPIs. By moving to Petrel on Azure Data Manager for Energy Preview, you can use the Petrel Data Services Project Explorer UI to discover all the Petrel projects across your organization. You can create and save projects as well as track version history and experience unparalleled performance. This enables you to collaborate in real time with data permanently stored in Azure Data Manager for Energy. ++Additionally, Petrel Data Services serves to liberate data stored in Petrel ```.PET``` files to their respective DDMS for search and utilization in external applications. For example, you can upload a Petrel project containing many well logs to Azure Data Manager for Energy Preview. With data liberation, once the project is saved, the Wellbore data liberation service is triggered and that well log is extracted to the wellbore DMS. The association with the ```.PET``` Petrel project is tracked through lineage and you can use that well log in any ISV open app ecosystem. Petrel Data Services offers round trip data liberation and consumption for seismic, wellbore, and Petrel Project data. + ## Next steps-Learn more about DDMS concepts below. +Learn more about DDMS concepts. > [!div class="nextstepaction"] > [DDMS Concepts](concepts-ddms.md) |
energy-data-services | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/release-notes.md | Title: Release notes for Microsoft Azure Data Manager for Energy Preview #Required; page title is displayed in search results. Include the brand. -description: This topic provides release notes of Azure Data Manager for Energy Preview releases, improvements, bug fixes, and known issues. #Required; article description that is displayed in search results. +description: This article provides release notes of Azure Data Manager for Energy Preview releases, improvements, bug fixes, and known issues. #Required; article description that is displayed in search results. Azure Data Manager for Energy Preview is updated on an ongoing basis. To stay up ## February 2023 -### Product Access Update +### Compliant with M14 OSDU™ release -Beginning on February 15, 2023, customers of Microsoft Energy Data Services can search for and provision their instances of the product without a request for access. Customers can go directly to the Azure Marketplace to create an instance under their selected subscription. +Azure Data Manager for Energy Preview is now compliant with the M14 OSDU™ milestone release. With this release, you can take advantage of the latest features and capabilities available in the [OSDU™ M14](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M14-Release-Notes). +### Product Billing Enabled -### Product Billing Update +Billing for Azure Data Manager for Energy Preview is enabled. During Preview, the price for each instance is based on a fixed per-hour consumption. [Pricing information for Azure Data Manager for Energy Preview.](https://azure.microsoft.com/pricing/details/energy-data-services/#pricing) -Microsoft Energy Data Services will begin billing February 15, 2023. Prices will be based on a fixed per-hour consumption rate at a 50 percent discount during preview. -- No upfront costs or termination fees: pay only for what you use.-- No charges for storage, data transfers or compute overage during preview. -### OSDU™ Milestone Upgrade +### Available on Azure Marketplace -Azure Data Manager for Energy Preview is now compliant with the M14 OSDU™ milestone release. With this release you can take advantage of the latest features and capabilities available in the [OSDU™ M14](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M14-Release-Notes). +You can go directly to the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.MicrosoftEnergyDataServices?tab=Overview) to create an Azure Data Manager for Energy Preview instance in your subscription. You don't need to raise a support ticket with Microsoft to provision an instance anymore. ++### Support for Petrel Data Services +Azure Data Manager for Energy Preview supports [Petrel Data Services](overview-ddms.md#) that allows you to use [Petrel](https://www.software.slb.com/products/petrel) from SLB™ with Azure Data Manager for Energy as its data store. You can view your Petrel projects, liberate data from Petrel, and collaborate in real time with data permanently stored in Azure Data Manager for Energy. ### Enable Resource sharing (CORS)-CORS provides a secure way to allow one origin (the origin domain) to call APIs in another origin. With this feature you can set CORS rules for each Azure Data Manager for Energy instance.
When you set CORS rules for the instance it gets applied automatically across all the services and storage accounts linked with Microsoft Energy Data services.[Learn more.]( ../energy-data-services/how-to-enable-CORS.md) ++CORS provides a secure way to allow one origin (the origin domain) to call APIs in another origin. You can set CORS rules for each Azure Data Manager for Energy Preview instance. When you set CORS rules for the instance they get applied automatically across all the services and storage accounts linked with Azure Data Manager for Energy Preview. [How to enable CORS.]( ../energy-data-services/how-to-enable-CORS.md) ## January 2023 ### Managed Identity Support -You can use a managed identity to authenticate to any [service that supports Azure AD (Active Directory) authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md) with Azure Data Manager for Energy Preview. For example, you can write a script in Azure Function to ingest data in Azure Data Manager for Energy Preview. Now, you can use managed identity to connect to Azure Data Manager for Energy Preview using system or user assigned managed identity from other Azure services. [Learn more.]( ../energy-data-services/how-to-use-managed-identity.md) +You can use a managed identity to authenticate to any [service that supports Azure AD (Active Directory) authentication](../active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md) with Azure Data Manager for Energy Preview. For example, you can write a script in Azure Function to ingest data in Azure Data Manager for Energy Preview. Now, you can use managed identity to connect to Azure Data Manager for Energy Preview using system or user assigned managed identity from other Azure services. [Learn more.](../energy-data-services/how-to-use-managed-identity.md) ### Availability zone support -Availability Zones are physically separate locations within an Azure region made up of one or more datacenters equipped with independent power, cooling, and networking. Availability Zones provide in-region High Availability and protection against local disasters. Azure Data Manager for Energy Preview supports zone-redundant instance by default and there's no setup required by the Customer. [Learn more.](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=energy-data-services®ions=all) +Availability Zones are physically separate locations within an Azure region made up of one or more datacenters equipped with independent power, cooling, and networking. Availability Zones provide in-region High Availability and protection against local disasters. Azure Data Manager for Energy Preview supports zone-redundant instance by default and there's no setup required by the customer. [Learn more.](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=energy-data-services®ions=all) <hr width=100%> ## December 2022 -### Lockbox +### Support for Lockbox -Most operations, support, and troubleshooting performed by Microsoft personnel do not require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Azure Data Manager for Energy Preview provides you an interface to review and approve or reject data access requests. Azure Data Manager for Energy Preview now supports Lockbox. [Learn more](../security/fundamentals/customer-lockbox-overview.md). 
+Most operations, support, and troubleshooting performed by Microsoft personnel don't require access to customer data. In those rare circumstances where such access is required, Customer Lockbox for Azure Data Manager for Energy Preview provides you with an interface to review, approve or reject data access requests. Azure Data Manager for Energy Preview now supports Lockbox. [Learn more](../security/fundamentals/customer-lockbox-overview.md). <hr width=100%> Most operations, support, and troubleshooting performed by Microsoft personnel d ### Support for Private Links -Azure Private Link on Azure Data Manager for Energy Preview provides private access to the service. This means traffic between your private network and Azure Data Manager for Energy Preview travels over the Microsoft backbone network therefore limiting any exposure over the internet. By using Azure Private Link, you can connect to an Azure Data Manager for Energy Preview instance from your virtual network via a private endpoint, which is a set of private IP addresses in a subnet within the virtual network. You can then limit access to your Azure Data Manager for Energy Preview instance over these private IP addresses. [Create a private endpoint for Azure Data Manager for Energy Preview](how-to-set-up-private-links.md). +Azure Private Link on Azure Data Manager for Energy Preview provides private access to the service. With Azure Private Link, traffic between your private network and Azure Data Manager for Energy Preview travels over the Microsoft backbone network, therefore limiting any exposure over the internet. By using Azure Private Link, you can connect to an Azure Data Manager for Energy Preview instance from your virtual network via a private endpoint. You can limit access to your Azure Data Manager for Energy Preview instance over these private IP addresses. [Create a private endpoint for Azure Data Manager for Energy Preview](how-to-set-up-private-links.md). ### Encryption at Rest using Customer Managed Keys+ Azure Data Manager for Energy Preview supports customer managed encryption keys (CMK). All data in Azure Data Manager for Energy Preview is encrypted with Microsoft-managed keys by default. In addition to Microsoft-managed key, you can use your own encryption key to protect the data in Azure Data Manager for Energy Preview. When you specify a customer-managed key, that key is used to protect and control access to the Microsoft-managed key that encrypts your data. [Data security and encryption in Azure Data Manager for Energy Preview](how-to-manage-data-security-and-encryption.md). Azure Data Manager for Energy Preview supports customer managed encryption keys ### Key Announcement: Preview Release -Azure Data Manager for Energy Preview is now available in public preview. Information on latest releases, bug fixes, & deprecated functionality for Azure Data Manager for Energy Preview will be updated monthly. Keep tracking this page. +Azure Data Manager for Energy is now available in preview. Information on latest releases, bug fixes, & deprecated functionality for Azure Data Manager for Energy Preview will be updated monthly. -Azure Data Manager for Energy Preview is developed in alignment with the emerging requirements of the OSDU™ Technical Standard, Version 1.0. and is currently aligned with Mercury Release(R3), [Milestone-12](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M12-Release-Notes). 
+Azure Data Manager for Energy Preview is developed in alignment with the emerging requirements of the OSDU™ technical standard, version 1.0. and is currently aligned with Mercury Release(R3), [Milestone-12](https://community.opengroup.org/osdu/governance/project-management-committee/-/wikis/M12-Release-Notes). ### Partition & User Management -- New data partitions can be [created dynamically](how-to-add-more-data-partitions.md) as needed post provisioning of the platform (up to five). Earlier, data partitions could only be created when provisioning a new instance.+- New data partitions can be [created after provisioning an Azure Data Manager for Energy Preview instance](how-to-add-more-data-partitions.md). Earlier, data partitions could only be created when provisioning a new instance. - The domain name for entitlement groups for [user management](how-to-manage-users.md) has been changed to "dataservices.energy". ### Data Ingestion -- Enabled support for user context in ingestion ([ADR: Issue 52](https://community.opengroup.org/osdu/platform/data-flow/ingestion/home/-/issues/52)) - - User identity is preserved and passed on to all ingestion workflow related services using the newly introduced _x-on-behalf-of_ header. A user needs to have appropriate service level entitlements on all dependent services involved in the ingestion workflow and only users with appropriate data level entitlements can modify data. -- Workflow service payload is restricted to a maximum of 2 MB. If it exceeds, the service will throw an HTTP 413 error. This restriction is placed to prevent workflow requests from overwhelming the server.+- Azure Data Manager for Energy Preview supports user context in ingestion ([ADR: Issue 52](https://community.opengroup.org/osdu/platform/data-flow/ingestion/home/-/issues/52)) + - User identity is preserved and passed on to all ingestion workflow related services using the newly introduced _x-on-behalf-of_ header. You need to have appropriate service level entitlements on all dependent services involved in the ingestion workflow to modify data. +- Workflow service payload is restricted to a maximum of 2 MB. If it exceeds, the service throws an HTTP 413 error. This restriction is placed to prevent workflow requests from overwhelming the server. - Azure Data Manager for Energy Preview uses Azure Data Factory (ADF) to run large scale ingestion workloads. ### Search -- Improved security as Elasticsearch images are now pulled from Microsoft's internal Azure Container Registry instead of public repositories.-- Improved security by enabling encryption in transit for Elasticsearch, Registration, and Notification services.+Azure Data Manager for Energy Preview is more secure as Elasticsearch images are now pulled from Microsoft's internal Azure Container Registry instead of public repositories. In addition, Elastic search, registration, and notification services are now encrypted in transit further enhancing the security of the product. ### Monitoring -- Diagnostic settings can be exported from [Airflow](how-to-integrate-airflow-logs-with-azure-monitor.md) and [Elasticsearch](how-to-integrate-elastic-logs-with-azure-monitor.md) to Azure Monitor.+Azure Data Manager for Energy Preview supports diagnostic settings for [Airflow logs](how-to-integrate-airflow-logs-with-azure-monitor.md) and [Elasticsearch logs](how-to-integrate-elastic-logs-with-azure-monitor.md). You can configure Azure Monitor to view these logs in the storage location of your choice. 
### Region Availability -- Currently, Azure Data Manager for Energy Preview is being offered in the following regions - South Central US, East US, West Europe, and North Europe.+Currently, Azure Data Manager for Energy Preview is available in the following regions - South Central US, East US, West Europe, and North Europe. |
healthcare-apis | How To Use Mapping Debugger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-mapping-debugger.md | -In this article, you'll learn how to use the MedTech service Mapping debugger in the Azure portal. The Mapping debugger is a tool used for creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations for persistence in the FHIR service. This new self-service tool allows you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. +In this article, you'll learn how to use the MedTech service Mapping debugger in the Azure portal. The Mapping debugger is a tool used for creating, updating, and troubleshooting the MedTech service device and FHIR destination mappings. The Mapping debugger can also be used for uploading test device messages to see how they'll look after being processed into normalized messages and transformed into FHIR Observations for persistence in the FHIR service. This self-service tool allows you to easily view and make inline adjustments in real-time, without ever having to leave the Azure portal. > [!TIP] > To learn about how the MedTech service transforms and persists device message data into the FHIR service, see [Understand the device message data transformation](understand-service.md). |
iot-hub | Iot Hub Create Through Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-through-portal.md | Title: Use the Azure portal to create an IoT Hub | Microsoft Docs + Title: Use the Azure portal to create an IoT Hub description: How to create, manage, and delete Azure IoT hubs through the Azure portal. Includes information about pricing tiers, scaling, security, and messaging configuration. You can change the settings of an existing IoT hub after it's created from the I **Pricing and scale**: Migrate to a different tier or set the number of IoT Hub units. -**IP Filter**: Specify a range of IP addresses that will be accepted or rejected by the IoT hub. +**IP Filter**: Specify a range of IP addresses for the IoT hub to accept or reject. **Properties**: A list of properties that you can copy and use elsewhere, such as the resource ID, resource group, location, and so on. For more detailed information about the access granted by specific permissions, [!INCLUDE [iot-hub-include-create-device](../../includes/iot-hub-include-create-device.md)] +## Disable or delete a device in an IoT hub ++If you want to keep a device in your IoT hub's identity registry, but want to prevent it from connecting, then you can change its status to *disabled*. ++1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub. ++1. Select **Devices** from the navigation menu. ++1. Select the name of the device that you want to disable to view its device details page. ++1. On the device details page, set the **Enable connection to IoT Hub** parameter to **Disable**. ++ :::image type="content" source="./media/iot-hub-create-through-portal/disable-device.png" alt-text="Screenshot that shows disabling a device connection."::: ++If you want to remove a device from your IoT hub's identity registry, you can delete its registration. ++1. From the **Devices** page of your IoT hub, select the checkbox next to the device that you want to delete. ++1. Select **Delete** to remove the device registration. ++ :::image type="content" source="./media/iot-hub-create-through-portal/delete-device.png" alt-text="Screenshot that shows deleting a device."::: + ## Delete an IoT hub To delete an IoT hub, open your IoT hub in the Azure portal, then choose **Delete**. |
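The disable and delete steps in the entry above can also be done from the command line. A minimal, hedged Azure CLI sketch follows; the hub and device names are placeholders, and the `az iot hub device-identity` commands come from the azure-iot CLI extension:

```azurecli
# Disable the device so it can no longer connect; the generic --set works on any writable identity property.
az iot hub device-identity update --hub-name MyIotHub --device-id MyDevice --set status=disabled

# Re-enable it later if needed.
az iot hub device-identity update --hub-name MyIotHub --device-id MyDevice --set status=enabled

# Or remove the registration from the identity registry entirely.
az iot hub device-identity delete --hub-name MyIotHub --device-id MyDevice
```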
iot-hub | Iot Hub Devguide Identity Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md | You can disable devices by updating the **status** property of an identity in th * If you think a device is compromised or has become unauthorized for any reason. + >[!IMPORTANT] + >IoT Hub doesn't check certificate revocation lists when authenticating devices with certificate-based authentication. If you have a device that needs to be blocked from connecting to IoT Hub because of a potentially compromised certificate, you should disable the device in the identity registry. + This feature isn't available for modules. +For more information, see [Disable or delete a device in an IoT hub](./iot-hub-create-through-portal.md#disable-or-delete-a-device-in-an-iot-hub). + ## Import and export device identities Use asynchronous operations on the [IoT Hub resource provider endpoint](iot-hub-devguide-endpoints.md) to export device identities in bulk from an IoT hub's identity registry. Exports are long-running jobs that use a customer-supplied blob container to save device identity data read from the identity registry. Device identities can also be exported and imported from an IoT Hub via the Serv ## Device provisioning -The device data that a given IoT solution stores depends on the specific requirements of that solution. But, as a minimum, a solution must store device identities and authentication keys. Azure IoT Hub includes an identity registry that can store values for each device such as IDs, authentication keys, and status codes. A solution can use other Azure services such as Table storage, Blob storage, or Azure Cosmos DB to store any additional device data. +The device data that a given IoT solution stores depends on the specific requirements of that solution. But, as a minimum, a solution must store device identities and authentication keys. Azure IoT Hub includes an identity registry that can store values for each device such as IDs, authentication keys, and status codes. A solution can use other Azure services such as Table storage, Blob storage, or Azure Cosmos DB to store other device data. *Device provisioning* is the process of adding the initial device data to the stores in your solution. To enable a new device to connect to your hub, you must add a device ID and keys to the IoT Hub identity registry. As part of the provisioning process, you might need to initialize device-specific data in other solution stores. You can also use the Azure IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning to one or more IoT hubs without requiring human intervention. To learn more, see the [provisioning service documentation](../iot-dps/index.yml). ## Device and module lifecycle notifications -IoT Hub can notify your IoT solution when a device identity is created or deleted by sending lifecycle notifications. To do so, your IoT solution needs to create a route and set the data source equal to *DeviceLifecycleEvents*. By default, no lifecycle notifications are sent, that is, no such routes pre-exist. By creating a route with Data Source equal to *DeviceLifecycleEvents*, lifecycle events will be sent for both device identities and module identities; however, the message contents will differ depending on whether the events are generated for module identities or device identities. 
It should be noted that for IoT Edge modules, the module identity creation flow is different than for other modules, as a result for IoT Edge modules the create notification is only sent if the corresponding IoT Edge Device for the updated IoT Edge module identity is running. For all other modules, lifecycle notifications are sent whenever the module identity is updated on the IoT Hub side. To learn more about the properties and body returned in the notification message, see [Non-telemetry event schemas](iot-hub-non-telemetry-event-schema.md). +IoT Hub can notify your IoT solution when a device identity is created or deleted by sending lifecycle notifications. To do so, your IoT solution needs to create a route and set the data source equal to *DeviceLifecycleEvents*. By default, no lifecycle notifications are sent, that is, no such routes pre-exist. By creating a route with Data Source equal to *DeviceLifecycleEvents*, lifecycle events are sent for both device identities and module identities; however, the message contents differ depending on whether the events are generated for module identities or device identities. Note that for IoT Edge modules, the module identity creation flow is different from that of other modules; as a result, for IoT Edge modules the create notification is only sent if the corresponding IoT Edge Device for the updated IoT Edge module identity is running. For all other modules, lifecycle notifications are sent whenever the module identity is updated on the IoT Hub side. To learn more about the properties and body returned in the notification message, see [Non-telemetry event schemas](iot-hub-non-telemetry-event-schema.md). ## Device identity properties Device identities are represented as JSON documents with the following propertie | status |required |An access indicator. Can be **Enabled** or **Disabled**. If **Enabled**, the device is allowed to connect. If **Disabled**, this device can't access any device-facing endpoint. | | statusReason |optional |A 128 character-long string that stores the reason for the device identity status. All UTF-8 characters are allowed. | | statusUpdateTime |read-only |A temporal indicator, showing the date and time of the last status update. |-| connectionState |read-only |A field indicating connection status: either **Connected** or **Disconnected**. This field represents the IoT Hub view of the device connection status. **Important**: This field should be used only for development/debugging purposes. The connection state is updated only for devices using MQTT or AMQP. Also, it's based on protocol-level pings (MQTT pings, or AMQP pings), and it can have a maximum delay of only 5 minutes. For these reasons, there can be false positives, such as devices reported as connected but that are disconnected. | +| connectionState |read-only |A field indicating connection status: either **Connected** or **Disconnected**. This field represents the IoT Hub view of the device connection status. **Important**: This field should be used only for development/debugging purposes. The connection state is updated only for devices using MQTT or AMQP. Also, it's based on protocol-level pings (MQTT pings, or AMQP pings), and it can have a maximum delay of only 5 minutes. For these reasons, there can be false positives, such as disconnected devices reported as connected. | | connectionStateUpdatedTime |read-only |A temporal indicator, showing the date and last time the connection state was updated. 
| | lastActivityTime |read-only |A temporal indicator, showing the date and last time the device connected, received, or sent a message. This property is eventually consistent, but could be delayed up to 5 to 10 minutes. For this reason, it shouldn't be used in production scenarios. | Module identities are represented as JSON documents with the following propertie | authentication |optional |A composite object containing authentication information and security materials. For more information, see [Authentication Mechanism](/rest/api/iothub/service/modules/get-identity#authenticationmechanism) in the REST API documentation. | | managedBy | optional | Identifies who manages this module. For instance, this value is "IoT Edge" if the edge runtime owns this module. | | cloudToDeviceMessageCount | read-only | The number of cloud-to-module messages currently queued to be sent to the module. |-| connectionState |read-only |A field indicating connection status: either **Connected** or **Disconnected**. This field represents the IoT Hub view of the device connection status. **Important**: This field should be used only for development/debugging purposes. The connection state is updated only for devices using MQTT or AMQP. Also, it's based on protocol-level pings (MQTT pings, or AMQP pings), and it can have a maximum delay of only 5 minutes. For these reasons, there can be false positives, such as devices reported as connected but that are disconnected. | +| connectionState |read-only |A field indicating connection status: either **Connected** or **Disconnected**. This field represents the IoT Hub view of the device connection status. **Important**: This field should be used only for development/debugging purposes. The connection state is updated only for devices using MQTT or AMQP. Also, it's based on protocol-level pings (MQTT pings, or AMQP pings), and it can have a maximum delay of only 5 minutes. For these reasons, there can be false positives, such as disconnected devices reported as connected. | | connectionStateUpdatedTime |read-only |A temporal indicator, showing the date and last time the connection state was updated. | | lastActivityTime |read-only |A temporal indicator, showing the date and last time the device connected, received, or sent a message. | Now that you've learned how to use the IoT Hub identity registry, you may be int * [Schedule jobs on multiple devices](iot-hub-devguide-jobs.md) -To try out some of the concepts described in this article, see the following IoT Hub tutorial: --* [Get started with Azure IoT Hub](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) --To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see: +To explore using the IoT Hub Device Provisioning Service to enable zero-touch, just-in-time provisioning, see: * [Azure IoT Hub Device Provisioning Service](../iot-dps/index.yml) |
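To inspect a few of the identity-registry properties listed in the tables above (status, connectionState, lastActivityTime) for a single device, here's a hedged Azure CLI sketch; the hub and device names are placeholders, and the command requires the azure-iot CLI extension:

```azurecli
# Query selected identity-registry fields for one device. Remember that connectionState and
# lastActivityTime are best-effort values and shouldn't drive production logic.
az iot hub device-identity show --hub-name MyIotHub --device-id MyDevice \
  --query "{status:status, connectionState:connectionState, lastActivityTime:lastActivityTime}"
```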
iot-hub | Iot Hub X509ca Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-x509ca-overview.md | Title: Overview of Azure IoT Hub X.509 CA security | Microsoft Docs + Title: Overview of Azure IoT Hub X.509 CA security description: Overview - how to authenticate devices to IoT Hub using X.509 Certificate Authorities. This article describes how to use X.509 certificate authority (CA) certificates [!INCLUDE [iot-hub-include-x509-ca-signed-support-note](../../includes/iot-hub-include-x509-ca-signed-support-note.md)] -The X.509 CA feature enables device authentication to IoT Hub using a certificate authority (CA). It simplifies the initial device enrollment process as well as supply chain logistics during device manufacturing. If you aren't familiar with X.509 CA certificates, see [Understand how X.509 CA certificates are used in the IoT industry](iot-hub-x509ca-concept.md) for more information. +The X.509 CA feature enables device authentication to IoT Hub using a certificate authority (CA). It simplifies the initial device enrollment process and supply chain logistics during device manufacturing. If you aren't familiar with X.509 CA certificates, see [Understand how X.509 CA certificates are used in the IoT industry](iot-hub-x509ca-concept.md) for more information. ## Get an X.509 CA certificate The X.509 CA certificate is at the top of the chain of certificates for each of your devices. You may purchase or create one depending on how you intend to use it. -For production environments, we recommend that you purchase an X.509 CA certificate from a public root certificate authority. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. Consider this option if your devices are part of an open IoT network where they will interact with third-party products or services. +For production environments, we recommend that you purchase an X.509 CA certificate from a public root certificate authority. Purchasing a CA certificate has the benefit of the root CA acting as a trusted third party to vouch for the legitimacy of your devices. Consider this option if your devices are part of an open IoT network where they interact with third-party products or services. You may also create a self-signed X.509 CA for experimentation or for use in closed IoT networks. -Regardless of how you obtain your X.509 CA certificate, make sure to keep its corresponding private key secret and protected at all times. This is necessary for building trust in the X.509 CA authentication. +Regardless of how you obtain your X.509 CA certificate, make sure to keep its corresponding private key secret and protected always. This precaution is necessary for building trust in the X.509 CA authentication. Learn how to [create a self-signed CA certificate](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md), which you can use for testing. Learn how to [create a self-signed CA certificate](https://github.com/Azure/azur The owner of an X.509 CA certificate can cryptographically sign an intermediate CA that can in turn sign another intermediate CA, and so on, until the last intermediate CA terminates this process by signing a device certificate. The result is a cascaded chain of certificates known as a *certificate chain of trust*. In real life this plays out as delegation of trust towards signing devices. 
This delegation is important because it establishes a cryptographically verifiable chain of custody and avoids sharing of signing keys. - + The device certificate (also called a leaf certificate) must have the *subject name* set to the **device ID** (`CN=deviceId`) that was used when registering the IoT device in Azure IoT Hub. This setting is required for authentication. Learn how to [create a certificate chain](https://github.com/Azure/azure-iot-sdk ## Register the X.509 CA certificate to IoT Hub -Register your X.509 CA certificate to IoT Hub where it will be used to authenticate your devices during registration and connection. Registering the X.509 CA certificate is a two-step process that includes uploading the certificate file and then establishing proof of possession. +Register your X.509 CA certificate to IoT Hub, which uses it to authenticate your devices during registration and connection. Registering the X.509 CA certificate is a two-step process that includes uploading the certificate file and then establishing proof of possession. The upload process entails uploading a file that contains your certificate. This file should never contain any private keys. -The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub has to verify that you really own the CA certificate. It does so by generating a random challenge that you sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as recommended, then only you will possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, you complete this step by uploading a file containing the results. +The proof of possession step involves a cryptographic challenge and response process between you and IoT Hub. Given that digital certificate contents are public and therefore susceptible to eavesdropping, IoT Hub has to verify that you really own the CA certificate. It does so by generating a random challenge that you sign with the CA certificate's corresponding private key. If you kept the private key secret and protected as recommended, then only you possess the knowledge to complete this step. Secrecy of private keys is the source of trust in this method. After signing the challenge, you complete this step by uploading a file containing the results. Learn how to [register your CA certificate](./tutorial-x509-prove-possession.md) ## Create a device on IoT Hub -To prevent device impersonation, IoT Hub requires that you let it know what devices to expect. You do this by creating a device entry in the IoT hub's device registry. This process is automated when using [IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md). +To prevent device impersonation, IoT Hub requires that you let it know what devices to expect. You do this by creating a device entry in the IoT hub's device registry. This process is automated when using [IoT Hub Device Provisioning Service](../iot-dps/about-iot-dps.md). Learn how to [manually create a device in IoT Hub](./iot-hub-create-through-portal.md#register-a-new-device-in-the-iot-hub). A successful device connection to IoT Hub completes the authentication process a Learn how to [complete this device connection step](./tutorial-x509-prove-possession.md). 
+## Revoke a device certificate ++IoT Hub doesn't check certificate revocation lists from the certificate authority when authenticating devices with certificate-based authentication. If you have a device that needs to be blocked from connecting to IoT Hub because of a potentially compromised certificate, you should disable the device in the identity registry. For more information, see [Disable or delete a device in an IoT hub](./iot-hub-create-through-portal.md#disable-or-delete-a-device-in-an-iot-hub). + ## Next Steps Learn about [the value of X.509 CA authentication](iot-hub-x509ca-concept.md) in IoT. |
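The registration, proof-of-possession, device-creation, and revocation steps referenced in this entry can be outlined with the Azure CLI. This is a hedged sketch rather than the tutorial's exact flow: hub, certificate, and device names, file paths, and the `<etag>` values are placeholders, and the `az iot hub device-identity` commands require the azure-iot CLI extension.

```azurecli
# Upload the CA certificate (public portion only; never upload the private key).
az iot hub certificate create --hub-name MyIotHub --name my-root-ca --path ./rootCA.pem

# Start proof of possession: get a verification code to sign with the CA's private key.
az iot hub certificate generate-verification-code --hub-name MyIotHub --name my-root-ca --etag <etag>

# Upload the signed verification certificate to complete proof of possession.
az iot hub certificate verify --hub-name MyIotHub --name my-root-ca --path ./verification.pem --etag <etag>

# Register a device that authenticates through the CA chain (the certificate subject CN must match the device ID).
az iot hub device-identity create --hub-name MyIotHub --device-id MyDevice --auth-method x509_ca

# "Revoke" a potentially compromised device by disabling it in the identity registry.
az iot hub device-identity update --hub-name MyIotHub --device-id MyDevice --set status=disabled
```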
operator-nexus | Howto Baremetal Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-baremetal-functions.md | This article describes how to perform lifecycle management operations on Bare Me 1. Install the latest version of the [appropriate CLI extensions](./howto-install-cli-extensions.md)-1. Ensure that the target bare metal machine (server) must be `powered-on` and have its `readyState` set to True +1. Ensure that the target bare metal machine (server) has its `poweredState` set to `On` and its `readyState` set to `True` 1. Get the Resource group name that you created for `network cloud cluster resource` ## Power-off bare metal machines You can make a BMM unschedulable by executing the [`cordon`](#make-a-bmm-unsched On the execution of the `cordon` command, Operator Nexus workloads are not scheduled on the BMM when cordon is set; any attempt to create a workload on a `cordoned` BMM results in the workload being set to `pending` state. Existing workloads continue to run.-The cordon command supports an `evacuate` parameter with the default `false` value. -On executing the `cordon` command, with the value `true` for the `evacuate` +The cordon command supports an `evacuate` parameter with the default `False` value. +On executing the `cordon` command, with the value `True` for the `evacuate` parameter, the workloads that are running on the BMM are `stopped` and the BMM is set to `pending` state. ```azurecli parameter, the workloads that are running on the BMM are `stopped` and the BMM i --resource-group "resourceGroupName" ``` -The `evacuate "True"` removes workloads from that node while `evacuate "FALSE"` only prevents the scheduling of new workloads. +The `evacuate "True"` removes workloads from that node while `evacuate "False"` only prevents the scheduling of new workloads. ## Make a BMM schedulable (uncordon) state on the BMM are `restarted` when the BMM is `uncordoned`. The existing BMM image can be **reinstalled** using the `reimage` command but will not install a new image. Make sure the BMM's workloads are drained using the [`cordon`](#make-a-bmm-unschedulable-cordon)-command, with `evacuate "TRUE"`, prior to executing the `reimage` command. +command, with `evacuate "True"`, prior to executing the `reimage` command. ```azurecli az networkcloud baremetalmachine reimage --name "bareMetalMachineName" \ |
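For reference alongside the truncated snippet in this entry, here's a hedged sketch of the cordon and uncordon calls described above. It assumes the `networkcloud` CLI extension and uses placeholder names; verify the parameters with `az networkcloud baremetalmachine cordon --help` on your installed version.

```azurecli
# Cordon the BMM and evacuate its workloads before maintenance such as a reimage.
az networkcloud baremetalmachine cordon \
  --evacuate "True" \
  --name "bareMetalMachineName" \
  --resource-group "resourceGroupName"

# Make the BMM schedulable again once maintenance is complete.
az networkcloud baremetalmachine uncordon \
  --name "bareMetalMachineName" \
  --resource-group "resourceGroupName"
```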
operator-nexus | Howto Install Cli Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md | -# Install Azure CLI extensions +# Prepare to install Azure CLI extensions +This how-to guide explains the steps for installing the Azure CLI and the CLI extensions required to interact with Operator Nexus. -Install the following CLI extensions: +Installation of the following CLI extensions is required: +`networkcloud` (for Microsoft.NetworkCloud APIs), `managednetworkfabric` (for Microsoft.ManagedNetworkFabric APIs) and `hybridaks` (for AKS-Hybrid APIs). -- `networkcloud` (for Microsoft.NetworkCloud APIs)-- `managednetworkfabric` (for Microsoft.ManagedNetworkFabric APIs)-- `hybridaks` (for AKS-Hybrid APIs)+If you haven't already installed Azure CLI: [Install Azure CLI][installation-instruction]. The aka.ms links download the latest available version of the extension. -- If you haven't already installed Azure CLI: [Install Azure CLI][installation-instruction]. The aka.ms links download the latest available version of the extension.--- Install `networkcloud` CLI extension+## Install `networkcloud` CLI extension - Remove any previously installed version of the extension Install the following CLI extensions: az networkcloud --help ``` -- Install `managednetworkfabric` CLI extension+## Install `managednetworkfabric` CLI extension - Remove any previously installed version of the extension Install the following CLI extensions: az nf --help ``` -- Install AKS-Hybrid (`hybridaks`) CLI extension+## Install AKS-Hybrid (`hybridaks`) CLI extension - Remove any previously installed version of the extension Install the following CLI extensions: az hybridaks --help ``` -- Install other needed extensions+## Install other Azure extensions ```azurecli az extension add --yes --upgrade --name customlocation hybridaks 0.1.6 ssh 1.1.3 ``` -<!-- LINKS - Internal --> -[howto-configure-network fabric]: ./howto-configure-network fabric.md -[quickstarts-tenant-workload-deployment]: ./quickstarts-tenant-workload-deployment.md - <!-- LINKS - External --> [installation-instruction]: https://aka.ms/azcli |
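The remove-then-install pattern repeated for each extension in this entry looks roughly like the sketch below. The wheel location is a placeholder because the entry only references the aka.ms download links:

```azurecli
# See which extensions (and versions) are already installed.
az extension list --output table

# Remove a previously installed version, then install the downloaded build.
az extension remove --name networkcloud
az extension add --source <URL-or-path-to-networkcloud-wheel> --yes
```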
postgresql | Concepts Business Continuity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md | Last updated 11/30/2021 [!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)] -**Business continuity** in Azure Database for PostgreSQL - Flexible Server refers to the mechanisms, policies, and procedures that enable your business to continue operating in the face of disruption, particularly to its computing infrastructure. In most of the cases, flexible server will handle the disruptive events happens that might happen in the cloud environment and keep your applications and business processes running. However, there are some events that cannot be handled automatically such as: +**Business continuity** in Azure Database for PostgreSQL - Flexible Server refers to the mechanisms, policies, and procedures that enable your business to continue operating in the face of disruption, particularly to its computing infrastructure. In most cases, flexible server handles disruptive events that might happen in the cloud environment and keeps your applications and business processes running. However, there are some events that can't be handled automatically, such as: - User accidentally deletes or updates a row in a table. - Earthquake causes a power outage and temporarily disables a data center or an availability zone. - Database patching required to fix a bug or security issue. -Flexible server provides features that protect data and mitigates downtime for your mission critical databases in the event of planned and unplanned downtime events. Built on top of the Azure infrastructure that already offers robust resiliency and availability, flexible server has business continuity features that provide additional fault-protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - which is the recovery time objective (RTO) and data loss exposure - which is the recovery point objective (RPO). For example, your business-critical database requires much stricter uptime requirements compared to a test database. +Flexible server provides features that protect data and mitigate downtime for your mission critical databases in the event of planned and unplanned downtime events. Built on top of the Azure infrastructure that already offers robust resiliency and availability, flexible server has business continuity features that provide more fault protection, address recovery time requirements, and reduce data loss exposure. As you architect your applications, you should consider the downtime tolerance - which is the recovery time objective (RTO) and data loss exposure - which is the recovery point objective (RPO). For example, your business-critical database requires stricter uptime requirements compared to a test database. The table below illustrates the features that Flexible server offers. | **Feature** | **Description** | **Considerations** | | - | -- | |-| **Automatic backups** | Flexible server automatically performs daily backups of your database files and continuously backs up transaction logs. Backups can be retained from 7 days up to 35 days. You will be able to restore your database server to any point in time within your backup retention period. RTO is dependent on the size of the data to restore + the time to perform log recovery. It can be from few minutes up to 12 hours. 
For more details, see [Concepts - Backup and Restore](./concepts-backup-restore.md). |Backup data remains within the region. | +| **Automatic backups** | Flexible server automatically performs daily backups of your database files and continuously backs up transaction logs. Backups can be retained from 7 days up to 35 days. You're able to restore your database server to any point in time within your backup retention period. RTO is dependent on the size of the data to restore + the time to perform log recovery. It can be from few minutes up to 12 hours. For more details, see [Concepts - Backup and Restore](./concepts-backup-restore.md). |Backup data remains within the region. | | **Zone redundant high availability** | Flexible server can be deployed with zone redundant high availability(HA) configuration where primary and standby servers are deployed in two different availability zones within a region. This HA configuration protects your databases from zone-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica. RTO in most cases is expected to be less than 120s. RPO is expected to be zero (no data loss). For more information, see [Concepts - High availability](./concepts-high-availability.md). | Supported in general purpose and memory optimized compute tiers. Available only in regions where multiple zones are available. | | **Same zone high availability** | Flexible server can be deployed with same zone high availability(HA) configuration where primary and standby servers are deployed in the same availability zone in a region. This HA configuration protects your databases from node-level failures and also helps with reducing application downtime during planned and unplanned downtime events. Data from the primary server is replicated to the standby replica in synchronous mode. In the event of any disruption to the primary server, the server is automatically failed over to the standby replica. RTO in most cases is expected to be less than 120s. RPO is expected to be zero (no data loss). For more information, see [Concepts - High availability](./concepts-high-availability.md). | Supported in general purpose and memory optimized compute tiers. | | **Premium-managed disks** | Database files are stored in a highly durable and reliable premium-managed storage. This provides data redundancy with three copies of replica stored within an availability zone with automatic data recovery capabilities. For more information, see [Managed disks documentation](../../virtual-machines/managed-disks-overview.md). | Data stored within an availability zone. |-| **Zone redundant backup** | Flexible server backups are automatically and securely stored in a zone redundant storage within a region if the region supports AZs. During a zone-level failure where your server is provisioned, and if your server is not configured with zone redundancy, you can still restore your database using the latest restore point in a different zone. For more information, see [Concepts - Backup and Restore](./concepts-backup-restore.md).| Only applicable in regions where multiple zones are available.| +| **Zone redundant backup** | Flexible server backups are automatically and securely stored in a zone redundant storage within a region if the region supports AZs. 
During a zone-level failure where your server is provisioned, and if your server isn't configured with zone redundancy, you can still restore your database using the latest restore point in a different zone. For more information, see [Concepts - Backup and Restore](./concepts-backup-restore.md).| Only applicable in regions where multiple zones are available.| | **Geo redundant backup** | Flexible server backups are copied to a remote region. that helps with disaster recovery situation in the event of the primary server region is down. | This feature is currently enabled in selected regions. It takes a longer RTO and a higher RPO depending on the size of the data to restore and amount of recovery to perform. | | **Read Replica** | Cross Region read replicas can be deployed to protect your databases from region-level failures. Read replicas are updated asynchronously using PostgreSQL's physical replication technology, and may lag the primary. For more information, see [Concepts - Read Replicas](./concepts-read-replicas.md).| Supported in general purpose and memory optimized compute tiers. | Below are some planned maintenance scenarios. These events typically incur up to | **Scenario** | **Process**| | - | -- | | <b>Compute scaling (User-initiated)| During compute scaling operation, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, storage is detached, and then it is shut down. A new flexible server with the same database server name is provisioned with the scaled compute configuration. The storage is then attached to the new server and the database is started which performs recovery if necessary before accepting client connections. |-| <b>Scaling up storage (User-initiated) | When a scaling up storage operation is initiated, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is scaled to the desired size and then attached to the new server. A recovery is performed if needed before accepting client connections. Note that scaling down of the storage size is not supported. | +| <b>Scaling up storage (User-initiated) | When a scaling up storage operation is initiated, active checkpoints are allowed to complete, client connections are drained, and any uncommitted transactions are canceled. After that the server is shut down. The storage is scaled to the desired size and then attached to the new server. A recovery is performed if needed before accepting client connections. Note that scaling down of the storage size is not supported. | | <b>New software deployment (Azure-initiated) | New features rollout or bug fixes automatically happen as part of serviceΓÇÖs planned maintenance, and you can schedule when those activities to happen. For more information, check your [portal](https://aka.ms/servicehealthpm). | | <b>Minor version upgrades (Azure-initiated) | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. The database server is automatically restarted with the new minor version. For more information, see [documentation](../concepts-monitoring.md#planned-maintenance-notification). 
You can also check your [portal](https://aka.ms/servicehealthpm).| When the flexible server is configured with **high availability**, the flexible ## Unplanned downtime mitigation Unplanned downtimes can occur as a result of unforeseen disruptions such as underlying hardware fault, networking issues, and software bugs. If the database server configured with high availability goes down unexpectedly, then the standby replica is activated and the clients can resume their operations. If not configured with high availability (HA), then if the restart attempt fails, a new database server is automatically provisioned. While an unplanned downtime cannot be avoided, flexible server helps mitigate the downtime by automatically performing recovery operations without requiring human intervention. - -### Unplanned downtime: failure scenarios and service recovery ++Though we continuously strive to provide high availability, there are times when the Azure Database for PostgreSQL - Flexible Server service does incur an outage, causing unavailability of the databases and thus impacting your application. When our service monitoring detects issues that cause widespread connectivity errors, failures or performance issues, the service automatically declares an outage to keep you informed. ### Service Outage ++In the event of an Azure Database for PostgreSQL - Flexible Server service outage, you will be able to see additional details related to the outage in the following places. ++ * **Azure Portal Banner** +If your subscription is identified as impacted, there will be an outage alert for a Service Issue in your Azure portal **Notifications**. + :::image type="content" source="./media/business-continuity/notification-service-issue-example.png" alt-text=" Screenshot showing notifications in Azure Portal."::: +* **Help + support** or **Support + troubleshooting** +When you create a support ticket from **Help + support** or **Support + troubleshooting**, there will be information about any issues impacting your resources. Select View outage details for more information and a summary of impact. There will also be an alert in the New support request page. +* **Service Health** +The **Service Health** page in the Azure portal contains information about Azure data center status globally. Search for "service health" in the search bar in the Azure portal, then view Service issues in the Active events category. You can also view the health of individual resources in the **Resource health** page of any resource under the Help menu. A sample screenshot of the Service Health page follows, with information about an active service issue in Southeast Asia. +### Unplanned downtime: failure scenarios and service recovery Below are some unplanned failure scenarios and the recovery process. |
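The point-in-time restore and the user-initiated compute/storage scaling scenarios described in this business-continuity entry map to the following hedged Azure CLI sketch; the server names, SKU, storage size, and restore timestamp are placeholders.

```azurecli
# Restore a new server from the automatic backups to a point in time within the retention window.
az postgres flexible-server restore \
  --resource-group myResourceGroup \
  --name myserver-restored \
  --source-server myserver \
  --restore-time "2023-03-01T10:00:00Z"

# User-initiated compute scaling (one of the planned-maintenance scenarios above).
az postgres flexible-server update \
  --resource-group myResourceGroup \
  --name myserver \
  --tier GeneralPurpose \
  --sku-name Standard_D4s_v3

# User-initiated storage scale-up, in GiB (scaling down isn't supported).
az postgres flexible-server update \
  --resource-group myResourceGroup \
  --name myserver \
  --storage-size 256
```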
purview | Concept Policies Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-devops.md | -# Concepts for Microsoft Purview DevOps policies +# What can I accomplish with Microsoft Purview DevOps policies? -This article discusses concepts related to managing access to data sources in your data estate from within the Microsoft Purview governance portal. In particular, it focuses on DevOps policies. +This article discusses concepts related to managing access to data sources in your data estate using the Microsoft Purview governance portal. In particular, it focuses on DevOps policies. > [!Note]-> This capability is different from access control for Microsoft Purview itself, which is described in [Access control in Microsoft Purview](./catalog-permissions.md). +> This capability is different from the internal access control for Microsoft Purview itself, which is described in [Access control in Microsoft Purview](./catalog-permissions.md). ## Overview-Access to system metadata is crucial for IT operations and other DevOps personnel to perform their job. That access can be granted and revoked efficiently and at-scale through Microsoft Purview DevOps policies. +Access to system metadata is crucial for IT/DevOps personnel to ensure that critical database systems are healthy, are performing to expectations and are secure. That access can be granted and revoked efficiently and at-scale through Microsoft Purview DevOps policies. ### Microsoft Purview access policies vs. DevOps policies-Microsoft Purview access policies enable customers to manage access to different data systems across their entire data estate, all from a central location in the cloud. These policies are access grants that can be created through Microsoft Purview Studio, avoiding the need for code. They dictate whether a set of Azure AD principals (users, groups, etc.) should be allowed or denied a specific type of access to a data source or asset within it. These policies get communicated to the data sources where they get natively enforced. +Microsoft Purview access policies enable customers to manage access to different data systems across their entire data estate, all from a central location in the cloud. You can think about these policies as access grants that can be created through Microsoft Purview Studio, avoiding the need for code. They dictate whether a list of Azure AD principals (users, groups, etc.) should be allowed or denied a specific type of access to a data source or asset within it. These policies get communicated by Microsoft Purview to the data sources, where they get natively enforced. -DevOps policies are a special type of Microsoft Purview access policies. They grant access to database system metadata instead of user data. They simplify access provisioning for IT operations and security auditing functions. DevOps policies only grant access, that is, they don't deny access. +DevOps policies are a special type of Microsoft Purview access policies. They grant access to database system metadata instead of user data. They simplify access provisioning for IT operations and security auditing personnel. DevOps policies only grant access, that is, they don't deny access. ## Elements of a DevOps policy-A DevOps policy is defined by three elements: The *data resource path*, the *role* and the *subject*. In essence, the DevOps policy assigns the *subject* to the *role* for the scope of the *data resource path*. 
+A DevOps policy is defined by three elements: The *subject*, the *data resource* and the *role*. In essence, the DevOps policy assigns the *role*'s related permissions to the *subject* and gets enforced in the scope of the *data resource*'s path. #### The subject-Is a set of Azure AD users, groups or service principals. --#### The role -The role maps to a set of actions that the policy permits on the data resource. DevOps policies support a couple of roles: *SQL Performance Monitor* and *SQL Security Auditor*. The DevOps policy how-to docs detail the role definition for each data source, that is, the mapping between the role in Microsoft Purview and the actions that get permitted in the data source. For example, the role definition for SQL Performance Monitor and SQL Security Auditor includes Connect actions at server and database level on the data source side. +This is a list of Azure AD users, groups or service principals that are granted access. #### The data resource-Microsoft Purview DevOps policies currently support SQL-type data sources and can be configured on individual data sources, resource groups and subscriptions. DevOps policies can only be created if the data source is first registered in Microsoft Purview with the option *Data use management enabled*. The data resource path is the composition of subscription > resource group > data source. +This is the scope where the policy gets enforced. The data resource path is the composition of subscription > resource group > data source. Microsoft Purview DevOps policies currently support SQL-type data sources and can be configured on individual data sources, but also on entire resource groups and subscriptions. DevOps policies can only be created after the data resource is registered in Microsoft Purview with the option *Data use management* enabled. ++#### The role +The role maps to a set of actions that the policy permits on the data resource. DevOps policies support a couple of roles: *SQL Performance Monitor* and *SQL Security Auditor*. Both these roles provide access to SQL's system metadata, and more specifically to Dynamic Management Views (DMVs) and Dynamic Management Functions (DMFs). But the set of DMVs/DMFs granted by these roles is different. We provide some popular examples at the end of this document. Also, the DevOps policies how-to docs detail the role definition for each data source type, that is, the mapping between the role in Microsoft Purview and the actions that get permitted in that type of data source. For example, the role definition for SQL Performance Monitor and SQL Security Auditor includes Connect actions at server and database level on the data source side. -#### Hierarchical enforcement of policies +## Hierarchical enforcement of policies A DevOps policy on a data resource is enforced on the data resource itself and all children contained by it. For example, a DevOps policy on an Azure subscription applies to all resource groups, to all policy-enabled data sources within each resource group, and to all databases contained within each data source. ## A sample scenario to demonstrate the concept and the benefits-Bob and Alice are DevOps users at their company. Given their role, they need to log in to dozens of Azure SQL logical servers to monitor their performance so that critical DevOps processes don’t break. Their manager, Mateo, creates an Azure AD group and includes Alice and Bob. 
He then uses Microsoft Purview DevOps policies (Policy 1 in the diagram below) to grant this Azure AD group access at resource group level, to Resource Group 1, which hosts the Azure SQL servers. +Bob and Alice are involved with the DevOps process at their company. Given their role, they need to log in to dozens of SQL servers on-premises and Azure SQL logical servers to monitor their performance so that critical DevOps processes don’t break. Their manager, Mateo, puts all these SQL data sources into Resource Group 1. He then creates an Azure AD group and includes Alice and Bob. Next, he uses Microsoft Purview DevOps policies (Policy 1 in the diagram below) to grant this Azure AD group access to Resource Group 1, which hosts the Azure SQL servers. . #### These are the benefits:-- Mateo doesn't have to create local logins in each logical server-- The policies from Microsoft Purview improve security by helping limit local privileged access. This is what we call PoLP (Principle of Least Privilege). In the scenario, Mateo only grants the minimum access necessary that Bob and Alice need to perform the task of monitoring performance.-- When new Azure SQL servers are added to the Resource Group, Mateo doesn't need to update the policies in Microsoft Purview for them to be effective on the new logical servers.-- If Alice or Bob leave their job and get backfilled, Mateo just updates the Azure AD group, without having to make any changes to the servers or to the policies he created in Microsoft Purview.-- At any point in time, Mateo or the company’s auditor can see what access has been granted directly in Microsoft Purview Studio.+- Mateo doesn't have to create local logins in each SQL server. +- The policies from Microsoft Purview improve security by limiting local privileged access. They support the Principle of Least Privilege (PoLP). In the scenario, Mateo only grants the minimum access necessary that Bob and Alice need to perform the task of monitoring system health and performance. +- When new SQL servers are added to the resource group, Mateo doesn't need to update the policy in Microsoft Purview for it to be enforced on the new SQL servers. +- If Alice or Bob leaves their job and gets backfilled, Mateo just updates the Azure AD group, without having to make any changes to the servers or to the policies he created in Microsoft Purview. +- At any point in time, Mateo or the company’s auditor can see all the permissions that were granted directly in Microsoft Purview Studio. ++| **Principle** | **Benefit** | +|-|-| +|*Simplify* |The role definitions SQL Performance Monitor and SQL Security Auditor capture the permissions that typical IT/DevOps personas need to execute their job.| +| |Reduce the need for permission expertise for each data source type.| +||| +|*Reduce effort* |Graphical interface lets you navigate the data object hierarchy quickly.| +| |Supports policies on entire Azure resource groups and subscriptions.| +||| +|*Enhance security*|Access is granted centrally and can be easily reviewed and revoked.| +| |Reduces the need for privileged accounts to configure access directly at the data source.| +| |Supports the Principle of Least Privilege via data resource scopes and the role definitions.| +||| ++## Mapping of popular DMVs/DMFs +SQL dynamic metadata includes a list of more than 700 DMVs/DMFs. We list here as an illustration some of the most popular ones, mapped to their role definition in Microsoft Purview DevOps policies and linked to the URL, along with their description. 
++| **Accessible by DevOps role** | **Popular DMV / DMF** | **Description** | +|-|-|-| +|||| +| *SQL Performance Monitor* | [sys.dm_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-requests-transact-sql)|Monitors the current activity and performance of the server| +||[sys.dm_os_wait_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-wait-stats-transact-sql)|Identifies performance bottlenecks to enable system tuning| +|| [sys.dm_exec_query_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-query-stats-transact-sql)|Identifies queries that are consuming a lot of resources or taking a long time to execute| +|| [sys.dm_exec_sessions](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-sessions-transact-sql)|Shows information about all active user connections and internal tasks| +|| [sys.dm_os_waiting_tasks](/sql/relational-databases/system-dynamic-management-views/sys-dm-os-waiting-tasks-transact-sql)|Helps identify and troubleshoot blocking issues within SQL Server| +|| [sys.dm_exec_procedure_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-exec-procedure-stats-transact-sql)|Returns how many times a procedure was executed, the total duration, reads, writes and more| +|||| +| *SQL Security Auditor* |[sys.dm_server_audit_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-server-audit-status-transact-sql)|Returns audit details such as the location of the target, size and status of the audit itself| +|||| +| Both *SQL Performance Monitor* and *SQL Security Auditor*|[sys.dm_audit_actions](/sql/relational-databases/system-dynamic-management-views/sys-dm-audit-actions-transact-sql)|Returns a row for every audit action that can be reported in the audit log and every audit action group that can be configured as part of SQL Server Audit| +||[sys.dm_audit_class_type_map](/sql/relational-databases/system-dynamic-management-views/sys-dm-audit-class-type-map-transact-sql)|When events are fired, they record the object type, not the securable class. This DMV maps the class_type field in the audit log to the class_desc field in sys.dm_audit_actions| +|||| ## More info - DevOps policies can be created, updated and deleted by any user holding the *Policy Author* role at root collection level in Microsoft Purview. |
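To make the DMV/DMF mapping above concrete, here's a minimal T-SQL sketch of the kind of queries each role enables. It's illustrative only and not taken from the article: it assumes an Azure AD user covered by a DevOps policy connects with their own credentials, and it uses only DMVs already listed in the table above.

```sql
-- Illustrative only: run as an Azure AD user covered by a
-- SQL Performance Monitor DevOps policy (no local login required).

-- Current activity: who is connected and what each request is waiting on.
SELECT r.session_id, r.status, r.command, r.wait_type,
       r.total_elapsed_time, s.login_name, s.host_name
FROM sys.dm_exec_requests AS r
JOIN sys.dm_exec_sessions AS s
    ON s.session_id = r.session_id
WHERE s.is_user_process = 1;

-- Top waits: a common starting point for performance tuning.
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

-- Illustrative only: run as a user covered by a SQL Security Auditor policy.
-- Check that server audits are enabled and where they write to.
SELECT name, status_desc, audit_file_path
FROM sys.dm_server_audit_status;
```

Per the table above, a user who holds only one of the two roles would be expected to reach only the corresponding subset of these DMVs.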
purview | How To Policies Devops Arc Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-arc-sql-server.md | SELECT * FROM sys.dm_server_external_policy_principal_assigned_actions This section contains a reference for how actions in Microsoft Purview data policies map to specific actions in Azure Arc-enabled SQL Server. -| **Microsoft Purview policy action** | **Data source specific actions** | +| **DevOps role definition** | **Data source specific actions** | |-|--| | | | | *SQL Performance Monitor* |Microsoft.Sql/sqlservers/Connect | |
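The DMV named in this entry, sys.dm_server_external_policy_principal_assigned_actions, can serve as a quick check on the server itself. The sketch below is illustrative only; it assumes it's run on the Arc-enabled SQL Server instance by a principal that already has access to this DMV.

```sql
-- Sketch only: list the actions (for example, Connect) that Microsoft Purview
-- DevOps policies have assigned to external (Azure AD) principals on this
-- Arc-enabled SQL Server instance.
SELECT * FROM sys.dm_server_external_policy_principal_assigned_actions;
```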
purview | How To Policies Devops Azure Sql Db | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-policies-devops-azure-sql-db.md | |
purview | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/overview.md | Atop the Data Map, there are purpose-built apps that create environments for dat |App |Description | |-|--| |[Data Catalog](#data-catalog-app) | Finds trusted data sources by browsing and searching your data assets. The data catalog aligns your assets with friendly business terms and data classification to identify data sources. |-|[Data Estate Insights](#data-estate-insights-app) | Gives you an overview of your data estate to help you discover what kinds of data you have and where. | +|[Data Estate Insights](#data-estate-insights-app) | Gives you an overview of your data estate to help you discover what kinds of data you have and where it is. | |[Data Sharing](#data-sharing-app) | Allows you to securely share data internally or across organizations with business partners and customers. | |[Data Policy](#data-policy-app) | A set of central, cloud-based experiences that help you provision access to data securely and at scale. |+||| ## Data Catalog app For more information, see our [introduction to Data Sharing](concept-data-share. ## Data Policy app Microsoft Purview Data Policy is a set of central, cloud-based experiences that help you manage access to data sources and datasets securely and at scale. - Manage access to data sources from a single-pane-of-glass, cloud-based experience-- At-scale access provisioning+- Enables at-scale access provisioning - Introduces a new data-plane permission model that is external to data sources-- Seamless integration with Microsoft Purview Data Map and Catalog:+- It is seamlessly integrated with Microsoft Purview Data Map and Catalog: - Search for data assets and grant access only to what is required via fine-grained policies.- - Path to support SaaS, on-premises, and multicloud data sources - - Path to leverage all associated metadata for policies + - Path to support SaaS, on-premises, and multicloud data sources. + - Path to create policies that leverage any metadata associated with the data objects. - Based on role definitions that are simple and abstracted (for example: Read, Modify) For more information, see our introductory guides: * [Data owner access policies](concept-policies-data-owner.md) (preview): Provision fine-grained to broad access to users and groups via an intuitive authoring experience. * [Self-service access policies](concept-self-service-data-access-policy.md) (preview): Workflow approval and automatic provisioning of access requests initiated by business analysts who discover data assets in Microsoft Purview’s catalog.-* [DevOps policies](concept-policies-devops.md): Provision access for IT operations and other DevOps users from Microsoft Purview Studio, enabling them to monitor SQL database system health and security, while limiting insider threat. +* [DevOps policies](concept-policies-devops.md): Provision IT operations personnel with access to SQL system metadata, so that they can monitor performance and health, and audit security, while limiting the insider threat. 
++Here are the benefits of the Data Policy app: ++| **Principle** | **Benefit** | +|-|-| +|*Simplify* |Permissions are bundled into role definitions, such as Read and Modify, that are abstracted and consistent across data source types.| +| |Reduce the need for permission expertise for each data source type.| +||| +|*Reduce effort* |Graphical interface lets you navigate the data object hierarchy quickly.| +| |Supports policies on entire Azure resource groups and subscriptions.| +||| +|*Enhance security*|Access is granted centrally and can be easily reviewed and revoked.| +| |Reduces the need for privileged accounts to configure access directly at the data source.| +| |Supports the Principle of Least Privilege via data resource scopes and common role definitions.| +||| ## Traditional challenges that Microsoft Purview seeks to address Discovering and understanding data sources and their use is the primary purpose At the same time, users can contribute to the catalog by tagging, documenting, and annotating data sources that have already been registered. They can also register new data sources, which are then discovered, understood, and consumed by the community of catalog users. -Lastly, Microsoft Purview Data Policy app applies the metadata in the Data Map, providing a superior solution to keep your data secure. -* Structure and simplify the process of granting/revoking access. -* Reduce the effort of access provisioning. -* Access decision in Microsoft data systems has negligible latency penalty. -* Enhanced security: - - Easier to review access/revoke it in a central vs. distributed access provisioning model. - - Reduced need for privileged accounts to configure access. - - Support Principle of Least Privilege (give people the appropriate level of access, limiting to the minimum permissions and the least data objects). +Lastly, the Microsoft Purview Data Policy app provides a superior solution for keeping your data secure. ## In-region data residency |
sap | Configure Control Plane | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-control-plane.md | description: Configure your deployment control plane for the SAP on Azure Deploy Previously updated : 12/28/2022 Last updated : 03/05/2023 This table shows the parameters related to the deployer virtual machine. > | `deployer_private_ip_address` | Defines the Private IP address to use | Optional | > | `deployer_enable_public_ip` | Defines if the deployer has a public IP | Optional | > | `auto_configure_deployer` | Defines deployer will be configured with the required software (Terraform and Ansible) | Optional |+> | `add_system_assigned_identity` | Defines whether the deployer is assigned a system-assigned identity | Optional | The Virtual Machine image is defined using the following structure: The table below defines the parameters used for defining the Key Vault informati > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | > | -- | -- | -- |-> | `use_custom_dns_a_registration` | Use an existing Private DNS zone | Optional | +> | `dns_label` | DNS name of the private DNS zone | Optional | +> | `use_custom_dns_a_registration` | Uses an external system for DNS; set to false for Azure-native DNS | Optional | > | `management_dns_subscription_id` | Subscription ID for the subscription containing the Private DNS Zone | Optional | > | `management_dns_resourcegroup_name` | Resource group containing the Private DNS Zone | Optional |-> | `dns_label` | DNS name of the private DNS zone | Optional | ### Other parameters The table below defines the parameters used for defining the Key Vault informati > [!div class="mx-tdCol2BreakAll "] > | Variable | Description | Type | Notes | > | -- | - | -- | -- | > | `firewall_deployment` | Boolean flag controlling if an Azure firewall is to be deployed | Optional | | > | `bastion_deployment` | Boolean flag controlling if Azure Bastion host is to be deployed | Optional | |+> | `bastion_sku` | SKU for the Azure Bastion host to be deployed (Basic/Standard) | Optional | | > | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. | Optional | Use only for test deployments | > | `use_private_endpoint` | Use private endpoints | Optional | > | `use_service_endpoint` | Use service endpoints for subnets | Optional | |
sap | Plan Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/plan-deployment.md | The SAP library provides storage for SAP installation media, Bill of Material (B Most SAP application landscapes are partitioned in different tiers. In SDAF these are called workload zones; for example, you might have different workload zones for development, quality assurance, and production. See [workload zones](deployment-framework.md#deployment-components). +The default naming convention for workload zones is `[ENVIRONMENT]-[REGIONCODE]-[NETWORK]-INFRASTRUCTURE`, for example, `DEV-WEEU-SAP01-INFRASTRUCTURE` for a development environment hosted in the West Europe region using the SAP01 virtual network, or `PRD-WEEU-SAP02-INFRASTRUCTURE` for a production environment hosted in the West Europe region using the SAP02 virtual network. ++`SAP01` and `SAP02` are the logical names of the Azure virtual networks; these names can be used to further partition the environments. If you need two Azure virtual networks for the same workload zone, for example in a multi-subscription scenario where you host development environments in two subscriptions, you can use a different logical name for each virtual network, such as `DEV-WEEU-SAP01-INFRASTRUCTURE` and `DEV-WEEU-SAP02-INFRASTRUCTURE`. + The workload zone provides the following services for the SAP Applications: * Azure Virtual Network, for a virtual network, subnets and network security groups. |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | Microsoft Sentinel customers (who are also AADIP subscribers) with [Microsoft 36 | **2** | Choose the **Show all alerts** AADIP integration. | Create automation rules to automatically close incidents with unwanted alerts.<br><br>Disable any [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. | | **3** | Don't use Microsoft 365 Defender for AADIP alerts:<br>Choose the **Turn off all alerts** option for AADIP integration. | Leave enabled those [**Microsoft Security** analytics rules](detect-threats-built-in.md) that create incidents from AADIP alerts. | + See the [Microsoft 365 Defender documentation](/microsoft-365/security/defender/investigate-alerts#configure-aad-ip-alert-service) for instructions on how to take the prescribed actions in Microsoft 365 Defender. + - If you don't have your [AADIP connector](data-connectors-reference.md#azure-active-directory-identity-protection) enabled, you must enable it. Be sure **not** to enable incident creation on the connector page. If you don't enable the connector, you may receive AADIP incidents without any data in them. - If you're enabling your Microsoft 365 Defender connector for the first time now, the AADIP connection was made automatically behind the scenes. You won't need to do anything else. |
virtual-machines | Image Builder Json | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md | Image Builder will read these commands; these commands are written out to the AI Azure Image Builder supports three distribution targets: -- **managedImage** - managed image.+- **ManagedImage** - Managed image. - **sharedImage** - Azure Compute Gallery. - **VHD** - VHD in a storage account. |