Updates from: 10/03/2022 01:06:32
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory App Resilience Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-resilience-continuous-access-evaluation.md
You can test your application by signing in a user to the application then using
When these conditions are met, the app can extract the claims challenge from the API response header as follows:

```javascript
-const authenticateHeader = response.headers.get('www-authenticate');
-const claimsChallenge = parseChallenges(authenticateHeader).claims;
-
-// ...
+try {
+ const response = await fetch(apiEndpoint, options);
+
+ if (response.status === 401 && response.headers.get('www-authenticate')) {
+ const authenticateHeader = response.headers.get('www-authenticate');
+ const claimsChallenge = parseChallenges(authenticateHeader).claims;
+
+ // use the claims challenge to acquire a new access token...
+ }
+} catch(error) {
+ // ...
+}
+// helper function to parse the www-authenticate header
function parseChallenges(header) {
    const schemeSeparator = header.indexOf(' ');
    const challenges = header.substring(schemeSeparator + 1).split(',');
Your app would then use the claims challenge to acquire a new access token for the resource.

```javascript
+const tokenRequest = {
+    claims: window.atob(claimsChallenge), // decode the base64 string
+    scopes: ['User.Read'],
+    account: msalInstance.getActiveAccount()
+};
+
 let tokenResponse;
 try {
- tokenResponse = await msalInstance.acquireTokenSilent({
- claims: window.atob(claimsChallenge), // decode the base64 string
- scopes: scopes, // e.g ['User.Read', 'Contacts.Read']
- account: account, // current active account
- });
-
+ tokenResponse = await msalInstance.acquireTokenSilent(tokenRequest);
 } catch (error) {
     if (error instanceof InteractionRequiredAuthError) {
- tokenResponse = await msalInstance.acquireTokenPopup({
- claims: window.atob(claimsChallenge), // decode the base64 string
- scopes: scopes, // e.g ['User.Read', 'Contacts.Read']
- account: account, // current active account
- });
+ tokenResponse = await msalInstance.acquireTokenPopup(tokenRequest);
    }
}
```
const msalConfig = {
    auth: {
        clientId: 'Enter_the_Application_Id_Here',
        clientCapabilities: ["CP1"]
- // the remaining settings
- // ...
+ // remaining settings...
    }
}
active-directory Claims Challenge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/claims-challenge.md
_clientApp = PublicClientApplicationBuilder.Create(App.ClientId)
.WithDefaultRedirectUri() .WithAuthority(authority) .WithClientCapabilities(new [] {"cp1"})
- .Build();*
+ .Build();
```

Those using Microsoft.Identity.Web can add the following code to the configuration file:
Those using Microsoft.Identity.Web can add the following code to the configurati
{ "AzureAd": { "Instance": "https://login.microsoftonline.com/",
- // the remaining settings
- // ...
- "ClientCapabilities": [ "cp1" ]
+ "ClientId": 'Enter_the_Application_Id_Here'
+ "ClientCapabilities": [ "cp1" ],
+ // remaining settings...
},
```

#### [JavaScript](#tab/JavaScript)
-Those using MSAL.js can add `clientCapabilities` property to the configuration object.
+Those using MSAL.js or MSAL Node can add the `clientCapabilities` property to the configuration object. Note: this option is available to both public and confidential client applications.
```javascript
const msalConfig = {
    auth: {
        clientId: 'Enter_the_Application_Id_Here',
        clientCapabilities: ["CP1"]
- // the remaining settings
- // ...
+ // remaining settings...
    }
}
else
### [JavaScript](#tab/JavaScript)
+The following snippet illustrates a custom Express.js middleware:
+ ```javascript
+ const checkIsClientCapableOfClaimsChallenge = (req, res, next) => {
+     // req.authInfo contains the decoded access token payload
+     if (req.authInfo['xms_cc'] && req.authInfo['xms_cc'].includes('CP1')) {
+         // Return formatted claims challenge as this client understands this
-
} else {
- return res.status(403).json({ error: 'Client is not capable' });
+ return res.status(403).json({ error: 'Client is not capable' });
    }
}
active-directory Mark App As Publisher Verified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md
If you are already enrolled in the Microsoft Partner Network (MPN) and have met
For more details on specific benefits, requirements, and frequently asked questions see the [overview](publisher-verification-overview.md). - ## Mark your app as publisher verified Make sure you have met the [pre-requisites](publisher-verification-overview.md#requirements), then follow these steps to mark your app(s) as Publisher Verified.
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
Lifecycle Workflows come with many pre-configured tasks that are designed to aut
Lifecycle Workflow's built-in tasks each include an identifier, known as **taskDefinitionID**, and can be used to create either new workflows from scratch, or inserted into workflow templates so that they fit the needs of your organization. For more information on templates available for use with Lifecycle Workflows, see: [Lifecycle Workflow Templates](lifecycle-workflow-templates.md). -
-Lifecycle Workflows currently support the following tasks:
-
-|Task |taskdefinitionID |Category |
-||||
-|[Send welcome email to new hire](lifecycle-workflow-tasks.md#send-welcome-email-to-new-hire) | 70b29d51-b59a-4773-9280-8841dfd3f2ea | Joiner |
-|[Generate Temporary Access Pass and send via email to user's manager](lifecycle-workflow-tasks.md#generate-temporary-access-pass-and-send-via-email-to-users-manager) | 1b555e50-7f65-41d5-b514-5894a026d10d | Joiner |
-|[Add user to groups](lifecycle-workflow-tasks.md#add-user-to-groups) | 22085229-5809-45e8-97fd-270d28d66910 | Joiner, Leaver
-|[Add user to teams](lifecycle-workflow-tasks.md#add-user-to-teams) | e440ed8d-25a1-4618-84ce-091ed5be5594 | Joiner, Leaver
-|[Enable user account](lifecycle-workflow-tasks.md#enable-user-account) | 6fc52c9d-398b-4305-9763-15f42c1676fc | Joiner, Leaver
-|[Run a custom task extension](lifecycle-workflow-tasks.md#run-a-custom-task-extension) | 4262b724-8dba-4fad-afc3-43fcbb497a0e | Joiner, Leaver
-|[Disable user account](lifecycle-workflow-tasks.md#disable-user-account) | 1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950 | Leaver
-|[Remove user from selected group](lifecycle-workflow-tasks.md#remove-user-from-selected-groups) | 1953a66c-751c-45e5-8bfe-01462c70da3c | Leaver
-|[Remove users from all groups](lifecycle-workflow-tasks.md#remove-users-from-all-groups) | b3a31406-2a15-4c9a-b25b-a658fa5f07fc | Leaver
-|[Remove user from teams](lifecycle-workflow-tasks.md#remove-user-from-teams) | 06aa7acb-01af-4824-8899-b14e5ed788d6 | Leaver |
-|[Remove user from all teams](lifecycle-workflow-tasks.md#remove-users-from-all-teams) | 81f7b200-2816-4b3b-8c5d-dc556f07b024 | Leaver |
-|[Remove all license assignments from user](lifecycle-workflow-tasks.md#remove-all-license-assignments-from-user) | 8fa97d28-3e52-4985-b3a9-a1126f9b8b4e | Leaver
-|[Delete user](lifecycle-workflow-tasks.md#delete-user) | 8d18588d-9ad3-4c0f-99d0-ec215f0e3dff | Leaver |
-|[Send email to manager before user last day](lifecycle-workflow-tasks.md#send-email-to-manager-before-user-last-day) | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 | Leaver |
-|[Send email on users last day](lifecycle-workflow-tasks.md#send-email-on-users-last-day) | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 | Leaver |
-|[Send offboarding email to users manager after their last day](lifecycle-workflow-tasks.md#send-offboarding-email-to-users-manager-after-their-last-day) | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce | Leaver |
## Common task parameters (preview)
active-directory How To Use Vm Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-token.md
GET 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-0
| `Metadata` | An HTTP request header field required by managed identities. This information is used as a mitigation against server side request forgery (SSRF) attacks. This value must be set to "true", in all lower case. | | `object_id` | (Optional) A query string parameter, indicating the object_id of the managed identity you would like the token for. Required, if your VM has multiple user-assigned managed identities.| | `client_id` | (Optional) A query string parameter, indicating the client_id of the managed identity you would like the token for. Required, if your VM has multiple user-assigned managed identities.|
-| `mi_res_id` | (Optional) A query string parameter, indicating the mi_res_id (Azure Resource ID) of the managed identity you would like the token for. Required, if your VM has multiple user-assigned managed identities. |
+| `msi_res_id` | (Optional) A query string parameter, indicating the msi_res_id (Azure Resource ID) of the managed identity you would like the token for. Required, if your VM has multiple user-assigned managed identities. |
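For example, a request run from within the VM that selects a specific user-assigned managed identity by its Azure resource ID might look like the following sketch (the subscription, resource group, and identity names are placeholders; the `resource` value targets Azure Resource Manager):

```bash
# Hypothetical example: request a token from the IMDS endpoint for a specific
# user-assigned managed identity, identified by msi_res_id (its Azure resource ID).
curl -H "Metadata: true" \
  "http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https://management.azure.com/&msi_res_id=/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<IDENTITY_NAME>"
```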
Sample response:
aks Operator Best Practices Cluster Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-cluster-security.md
spec:
``` > [!NOTE]
-> We recommend you review [Azure AD workload identity][workload-identity-overview] (preview).
-> This authentication method replaces pod-managed identity (preview), which integrates with the
-> Kubernetes native capabilities to federate with any external identity providers on behalf of the
-> application.
+> Alternatively, you can use [Pod Identity](./use-azure-ad-pod-identity.md), though this is in Public Preview. It has a pod (NMI) that runs as a DaemonSet on each node in the AKS cluster. NMI intercepts security token requests to the Azure Instance Metadata Service on each node, redirects them to itself, validates whether the pod has access to the identity it's requesting a token for, and fetches the token from the Azure AD tenant on behalf of the application.
+>
## Secure container access to resources
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kub
| 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | 1.25 GA | | 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | 1.26 GA | | 1.24 | Apr-22-22 | May 2022 | Jul 2022 | 1.27 GA
-| 1.25 | Aug 2022 | Sept 2022 | Nov 2022 | 1.28 GA
+| 1.25 | Aug 2022 | Oct 2022 | Nov 2022 | 1.28 GA
## FAQ
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
+
+ Title: Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
+description: Learn how to vertically autoscale your pod on an Azure Kubernetes Service (AKS) cluster.
+ Last updated: 09/30/2022
+# Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
+
+This article provides an overview of Vertical Pod Autoscaler (VPA) (preview) in Azure Kubernetes Service (AKS), which is based on the open-source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) project. When configured, it automatically sets resource requests and limits on containers per workload, based on past usage. This ensures pods are scheduled onto nodes that have the required CPU and memory resources.
+
+## Benefits
+
+Vertical Pod Autoscaler provides the following benefits:
+
+* It analyzes and adjusts processor and memory resources to *right size* your applications. VPA isn't only responsible for scaling up, but also for scaling down, based on the pods' resource use over time.
+
+* A pod is evicted if its resource requests need to change and its scaling mode is set to *auto* or *recreate*.
+
+* Sets CPU and memory constraints for individual containers by specifying a resource policy.
+
+* Ensures nodes have the correct resources for pod scheduling.
+
+* Provides configurable logging of any adjustments made to processor or memory resources.
+
+* Improves cluster resource utilization and frees up CPU and memory for other pods.
+
+## Limitations
+
+* Vertical Pod autoscaling supports a maximum of 500 `VerticalPodAutoscaler` objects per cluster.
+* With this preview release, you can't change the `controlledValue` and `updateMode` of the `managedCluster` object.
+
+## Before you begin
+
+* The AKS cluster must be running Kubernetes version 1.24 or higher.
+
+* The Azure CLI version 2.0.64 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+* The `aks-preview` extension version 0.5.102 or later.
+
+* `kubectl` must be connected to the cluster on which you want to install the VPA (see the sketch after this list).
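The following is a minimal sketch of satisfying the last two prerequisites, assuming the Azure CLI and `kubectl` are already installed; the cluster and resource group names are placeholders:

```azurecli
# Install (or update) the aks-preview extension required for the VPA preview.
az extension add --name aks-preview
az extension update --name aks-preview

# Point kubectl at the target cluster and confirm connectivity.
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl cluster-info
```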
+
+## API Object
+
+The Vertical Pod Autoscaler is an API resource in the Kubernetes autoscaling API group. The version supported in this preview release is 0.11, which can be found in the [Kubernetes autoscaler repo][github-autoscaler-repo-v011].
+
+## Register the VPA provider feature
++
+To register the *AKS-VPAPreview* preview feature, run the following command:
+
+```azurecli
+az feature register --namespace Microsoft.ContainerService --name AKS-VPAPreview
+```
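A possible follow-up, sketched here on the assumption that the usual preview-feature workflow applies, is to check the registration state and refresh the resource provider once the feature shows *Registered*:

```azurecli
# Check the registration state of the preview feature.
az feature show --namespace Microsoft.ContainerService --name AKS-VPAPreview --query properties.state

# When the state shows "Registered", refresh the Microsoft.ContainerService provider.
az provider register --namespace Microsoft.ContainerService
```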
+
+## Deploy, upgrade, or disable VPA on a cluster
+
+In this section, you deploy, upgrade, or disable the Vertical Pod Autoscaler on your cluster.
+
+1. To enable VPA on a new cluster, use the `--enable-vpa` parameter with the [az aks create][az-aks-create] command.
+
+ ```azurecli
+ az aks create -n myAKSCluster -g myResourceGroup --enable-vpa
+ ```
+
+ After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+2. Optionally, to enable VPA on an existing cluster, use the `--enable-vpa` parameter with the [az aks update][az-aks-update] command.
+
+ ```azurecli
+ az aks update -n myAKSCluster -g myResourceGroup --enable-vpa
+ ```
+
+ After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+3. Optionally, to disable VPA on an existing cluster, use the `--disable-vpa` parameter with the [az aks update][az-aks-update] command.
+
+ ```azurecli
+ az aks update -n myAKSCluster -g myResourceGroup --disable-vpa
+ ```
+
+ After a few minutes, the command completes and returns JSON-formatted information about the cluster.
+
+4. To verify that the Vertical Pod Autoscaler pods have been created successfully, use the [kubectl get][kubectl-get] command.
+
+```bash
+kubectl get pods -n kube-system
+```
+
+The output of the command includes the following results specific to the VPA pods. The pods should show a *running* status.
+
+```output
+NAME READY STATUS RESTARTS AGE
+vpa-admission-controller-7867874bc5-vjfxk 1/1 Running 0 41m
+vpa-recommender-5fd94767fb-ggjr2 1/1 Running 0 41m
+vpa-updater-56f9bfc96f-jgq2g 1/1 Running 0 41m
+```
+
+## Test your Vertical Pod Autoscaler installation
+
+The following steps create a deployment with two pods, each running a single container that requests 100 millicores and tries to utilize slightly above 500 millicores. Also created is a VPA config pointing at the deployment. The VPA observes the behavior of the pods, and after about five minutes, they're updated with a higher CPU request.
+
+1. Create a file named `hamster.yaml` and copy in the following manifest of the Vertical Pod Autoscaler example from the [kubernetes/autoscaler][kubernetes-autoscaler-github-repo] GitHub repository.
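    One way to fetch that manifest, sketched here on the assumption that the raw GitHub URL mirrors the repository path linked above:

    ```bash
    # Download the upstream hamster example manifest referenced in this step.
    curl -o hamster.yaml https://raw.githubusercontent.com/kubernetes/autoscaler/master/vertical-pod-autoscaler/examples/hamster.yaml
    ```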
+
+1. Deploy the `hamster.yaml` Vertical Pod Autoscaler example using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+ ```bash
+ kubectl apply -f hamster.yaml
+ ```
+
+ The command completes, and the output confirms that the example deployment and the `VerticalPodAutoscaler` resource were created.
+
+1. Run the following [kubectl get][kubectl-get] command to get the pods from the hamster example application:
+
+ ```bash
+ kubectl get pods -l app=hamster
+ ```
+
+ The example output resembles the following:
+
+ ```bash
+ hamster-78f9dcdd4c-hf7gk 1/1 Running 0 24s
+ hamster-78f9dcdd4c-j9mc7 1/1 Running 0 24s
+ ```
+
+1. Use the [kubectl describe][kubectl-describe] command on one of the pods to view its CPU and memory reservation. Replace "exampleID" with one of the pod IDs returned in your output from the previous step.
+
+ ```bash
+ kubectl describe pod hamster-exampleID
+ ```
+
+ The example output is a snippet of the information about the pod:
+
+ ```bash
+ hamster:
+ Container ID: containerd://
+ Image: k8s.gcr.io/ubuntu-slim:0.1
+ Image ID: sha256:
+ Port: <none>
+ Host Port: <none>
+ Command:
+ /bin/sh
+ Args:
+ -c
+ while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
+ State: Running
+ Started: Wed, 28 Sep 2022 15:06:14 -0400
+ Ready: True
+ Restart Count: 0
+ Requests:
+ cpu: 100m
+ memory: 50Mi
+ Environment: <none>
+ ```
+
+ In this example, the pod has 100 millicpu and 50 mebibytes (MiB) of memory reserved. For this sample application, the pod needs more than 100 millicpu to run, so there isn't enough CPU capacity available. The pod also reserves much less memory than it needs. The Vertical Pod Autoscaler *vpa-recommender* deployment analyzes the pods hosting the hamster application to see whether the CPU and memory requirements are appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values.
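    To watch that analysis happen, one option (a sketch; the deployment name is taken from the kube-system pods listed earlier) is to follow the recommender's logs:

    ```bash
    # Follow the vpa-recommender logs while it analyzes the hamster pods.
    kubectl logs -n kube-system deployment/vpa-recommender --tail=20
    ```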
+
+1. Wait for the vpa-updater to launch a new hamster pod. This should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
+
+ ```bash
+ kubectl get --watch pods -l app=hamster
+ ```
+
+1. When a new hamster pod is started, describe the pod by running the [kubectl describe][kubectl-describe] command and view the updated CPU and memory reservations.
+
+ ```bash
+ kubectl describe pod hamster-<exampleID>
+ ```
+
+ The example output is a snippet of the information describing the pod:
+
+ ```bash
+ State: Running
+ Started: Wed, 28 Sep 2022 15:09:51 -0400
+ Ready: True
+ Restart Count: 0
+ Requests:
+ cpu: 587m
+ memory: 262144k
+ Environment: <none>
+ ```
+
+ In the previous output, you can see that the CPU reservation increased to 587 millicpu, which is over five times the original value. The memory increased to 262,144 kilobytes, which is around 250 mebibytes (MiB), or five times the original value. This pod was under-resourced, and the Vertical Pod Autoscaler corrected the estimate with a much more appropriate value.
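    As a quick way to compare the requests across the hamster pods, you could use a JSONPath query such as the following sketch:

    ```bash
    # Print each hamster pod's name and its container resource requests.
    kubectl get pods -l app=hamster -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources.requests}{"\n"}{end}'
    ```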
+
+1. To view updated recommendations from VPA, run the [kubectl describe][kubectl-describe] command to describe the hamster-vpa resource information.
+
+ ```bash
+ kubectl describe vpa/hamster-vpa
+ ```
+
+ The output lists the recommendation for the hamster pods, including the lower bound, target, and upper bound for CPU and memory, similar to the fields shown in the `kubectl get vpa` output later in this article.
+
+## Set Pod Autoscaler requests automatically
+
+Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automatically set resource requests on Pods when the updateMode is set to **Auto** or **Recreate**.
+
+1. Enable VPA for your cluster by running the following command. Replace cluster name `myAKSCluster` with the name of your AKS cluster and replace `myResourceGroup` with the name of the resource group the cluster is hosted in.
+
+ ```azurecli
+ az aks update -n myAKSCluster -g myResourceGroup --enable-vpa
+ ```
+
+2. Create a file named `azure-autodeploy.yaml`, and copy in the following manifest.
+
+ ```yml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: vpa-auto-deployment
+ spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: vpa-auto-deployment
+ template:
+ metadata:
+ labels:
+ app: vpa-auto-deployment
+ spec:
+ containers:
+ - name: mycontainer
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 50Mi
+ command: ["/bin/sh"]
+ args: ["-c", "while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done"]
+ ```
+
+ This manifest describes a deployment that has two Pods. Each Pod has one container that requests 100 milliCPU and 50 MiB of memory.
+
+3. Create the deployment and its pods with the [kubectl create][kubectl-create] command, as shown in the following example:
+
+ ```bash
+ kubectl create -f azure-autodeploy.yaml
+ ```
+
+ The command completes, and the output confirms that the deployment was created.
+
+4. Run the following [kubectl get][kubectl-get] command to get the pods:
+
+ ```bash
+ kubectl get pods
+ ```
+
+ The output resembles the following example showing the name and status of the pods:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ vpa-auto-deployment-54465fb978-kchc5 1/1 Running 0 52s
+ vpa-auto-deployment-54465fb978-nhtmj 1/1 Running 0 52s
+ ```
+
+5. Create a file named `azure-vpa-auto.yaml`, and copy in the following manifest that describes a `VerticalPodAutoscaler`:
+
+ ```yml
+ apiVersion: autoscaling.k8s.io/v1
+ kind: VerticalPodAutoscaler
+ metadata:
+ name: vpa-auto
+ spec:
+ targetRef:
+ apiVersion: "apps/v1"
+ kind: Deployment
+ name: vpa-auto-deployment
+ updatePolicy:
+ updateMode: "Auto"
+ ```
+
+ The `targetRef.name` value specifies that any Pod that is controlled by a deployment named `vpa-auto-deployment` belongs to this `VerticalPodAutoscaler`. The `updateMode` value of `Auto` means that the Vertical Pod Autoscaler controller can delete a Pod, adjust the CPU and memory requests, and then start a new Pod.
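    The *Auto* mode means evictions can happen at any time. If you later want to keep receiving recommendations while pausing automatic evictions, one option (a sketch using the standard VPA spec fields, not part of the original walkthrough) is to switch the update mode to *Off* on the created resource:

    ```bash
    # Pause automatic evictions while keeping recommendations.
    kubectl patch vpa vpa-auto --type merge -p '{"spec":{"updatePolicy":{"updateMode":"Off"}}}'
    ```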
+
+6. Create the `VerticalPodAutoscaler` resource using the [kubectl create][kubectl-create] command:
+
+ ```bash
+ kubectl create -f azure-vpa-auto.yaml
+ ```
+
+7. Wait a few minutes, and view the running Pods again by running the following [kubectl get][kubectl-get] command:
+
+ ```bash
+ kubectl get pods
+ ```
+
+ The output resembles the following example, showing that the pod names have changed and the status of the pods:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ vpa-auto-deployment-54465fb978-qbhc4 1/1 Running 0 2m49s
+ vpa-auto-deployment-54465fb978-vbj68 1/1 Running 0 109s
+ ```
+
+8. Get detailed information about one of your running Pods by using the [kubectl get][kubectl-get] command. Replace `podName` with the name of one of your Pods that you retrieved in the previous step.
+
+ ```bash
+ kubectl get pod podName --output yaml
+ ```
+
+ The output resembles the following example, showing that the Vertical Pod Autoscaler controller has increased the memory request to 262144k and CPU request to 25 milliCPU.
+
+ ```output
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ annotations:
+ vpaObservedContainers: mycontainer
+ vpaUpdates: 'Pod resources updated by vpa-auto: container 0: cpu request, memory
+ request'
+ creationTimestamp: "2022-09-29T16:44:37Z"
+ generateName: vpa-auto-deployment-54465fb978-
+ labels:
+ app: vpa-auto-deployment
+
+ spec:
+ containers:
+ - args:
+ - -c
+ - while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
+ command:
+ - /bin/sh
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ imagePullPolicy: IfNotPresent
+ name: mycontainer
+ resources:
+ requests:
+ cpu: 25m
+ memory: 262144k
+ ```
+
+9. To get detailed information about the Vertical Pod Autoscaler and its recommendations for CPU and memory, use the [kubectl get][kubectl-get] command:
+
+ ```bash
+ kubectl get vpa vpa-auto --output yaml
+ ```
+
+ The output resembles the following example:
+
+ ```output
+ recommendation:
+ containerRecommendations:
+ - containerName: mycontainer
+ lowerBound:
+ cpu: 25m
+ memory: 262144k
+ target:
+ cpu: 25m
+ memory: 262144k
+ uncappedTarget:
+ cpu: 25m
+ memory: 262144k
+ upperBound:
+ cpu: 230m
+ memory: 262144k
+ ```
+
+ The results show that the `target` attribute specifies the container doesn't need to change its CPU or memory requests to run optimally. Your results may vary, with higher target CPU and memory recommendations.
+
+ The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a Pod and replace it with a new Pod. If a Pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the Pod and replaces it with a Pod that meets the target attribute.
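    To observe these replacements as they happen, one option (a sketch; the exact event wording can vary by VPA version) is to filter cluster events for entries emitted by the VPA components:

    ```bash
    # List recent events and keep only VPA-related entries, such as eviction notices.
    kubectl get events --sort-by='.metadata.creationTimestamp' | grep -i vpa
    ```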
+
+## Next steps
+
+This article showed you how to automatically adjust resource requests, such as CPU and memory, for your pods to match application requirements. You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks].
+
+<!-- EXTERNAL LINKS -->
+[kubernetes-autoscaler-github-repo]: https://github.com/kubernetes/autoscaler/blob/master/vertical-pod-autoscaler/examples/hamster.yaml
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+[github-autoscaler-repo-v011]: https://github.com/kubernetes/autoscaler/blob/vpa-release-0.11/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go
+
+<!-- INTERNAL LINKS -->
+[get-started-with-aks]: /azure/architecture/reference-architectures/containers/aks-start-here
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[az-aks-upgrade]: /cli/azure/aks#az-aks-upgrade
+[az-aks-update]: /cli/azure/aks#az-aks-update
+[horizontal-pod-autoscaling]: concepts-scale.md#horizontal-pod-autoscaler
+[scale-applications-in-aks]: tutorial-kubernetes-scale.md
automation Automation Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-send-email.md
#Customer intent: As a developer, I want understand runbooks so that I can use it to automate e-mails.
-# Send an email from am Automation runbook
+# Send an email from an Automation runbook
You can send an email from a runbook with [SendGrid](https://sendgrid.com/solutions) using PowerShell.
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
A conceptual overview of this feature is available in [Cluster connect - Azure A
```azurepowershell $CLUSTER_NAME = <cluster-name> $RESOURCE_GROUP = <resource-group-name>
- $ARM_ID_CLUSTER = (az connectedk8s show -n $CLUSTER_NAME -g $RESOURCE_GROUP --query id -o tsv)
+ $ARM_ID_CLUSTER = (Get-AzConnectedKubernetes -ResourceGroupName $RESOURCE_GROUP -Name $CLUSTER_NAME).Id
```
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/overview.md
To deliver this experience, you need to deploy the [Azure Arc resource bridge](.
## Supported VMware vSphere versions
-Azure Arc-enabled VMware vSphere (preview) works with VMware vSphere version 6.7 and 7.
+Azure Arc-enabled VMware vSphere (preview) works with vCenter Server versions 6.7 and 7.
> [!NOTE] > Azure Arc-enabled VMware vSphere (preview) supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it is not recommended to use Arc-enabled VMware vSphere with it at this point.
Azure Arc-enabled VMware vSphere doesn't store/process customer data outside the
## Next steps - [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md)+
+- [Support matrix for Arc enabled VMware vSphere](support-matrix-for-arc-enabled-vmware-vsphere.md)
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
First, the script deploys a virtual appliance called [Azure Arc resource bridge
- An Azure subscription. -- A resource group in the subscription where you're a member of the *Owner/Contributor* role.
+- A resource group in the subscription where you have the *Owner*, *Contributor*, or *Azure Arc VMware Private Clouds Onboarding* role for onboarding.
+
+### Azure Arc Resource Bridge
+
+- Azure Arc Resource Bridge IP needs access to the URLs listed [here](../vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md#resource-bridge-networking-requirements).
### vCenter Server
A typical onboarding that uses the script takes 30 to 60 minutes. During the pro
| **vCenter password** | Enter the password for the vSphere account. | | **Data center selection** | Select the name of the datacenter (as shown in the vSphere client) where the Azure Arc resource bridge's VM should be deployed. | | **Network selection** | Select the name of the virtual network or segment to which the VM must be connected. This network should allow the appliance to communicate with vCenter Server and the Azure endpoints (or internet). |
-| **Static IP / DHCP** | If you have DHCP server in your network and want to use it, enter **y**. Otherwise, enter **n**. </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: Comma-separated list of DNS servers. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. </br> 6. **VLAN ID** (optional) |
+| **Static IP / DHCP** | If you have DHCP server in your network and want to use it, enter **y**. Otherwise, enter **n**. If you are using a DHCP server, reserve the IP address assigned to the Azure Arc Resource Bridge VM (Appliance VM IP). </br>When you choose a static IP configuration, you're asked for the following information: </br> 1. **Static IP address prefix**: Network address in CIDR notation. For example: **192.168.0.0/24**. </br> 2. **Static gateway**: Gateway address. For example: **192.168.0.0**. </br> 3. **DNS servers**: IP address(es) of DNS server(s) used by Azure Arc Resource Bridge VM for DNS resolution. VM must be able to resolve external sites, like mcr.microsoft.com and the vCenter server. </br> 4. **Start range IP**: Minimum size of two available IP addresses is required. One IP address is for the VM, and the other is reserved for upgrade scenarios. Provide the starting IP address of that range. Ensure the Start range IP has internet access. </br> 5. **End range IP**: Last IP address of the IP range requested in the previous field. Ensure the End range IP has internet access. </br> 6. **VLAN ID** (optional) |
| **Resource pool** | Select the name of the resource pool to which the Azure Arc resource bridge's VM will be deployed. | | **Data store** | Select the name of the datastore to be used for the Azure Arc resource bridge's VM. | | **Folder** | Select the name of the vSphere VM and the template folder where the Azure Arc resource bridge's VM will be deployed. | | **VM template Name** | Provide a name for the VM template that will be created in your vCenter Server instance based on the downloaded OVA file. For example: **arc-appliance-template**. |
-| **Control Pane IP** address | Provide a static IP address that's outside the DHCP range but still available on the network. Ensure that this IP address isn't assigned to any other machine on the network. Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address.|
+| **Control Plane IP** address | Provide a static IP address that's outside the DHCP range but still available on the network. Ensure that this IP address isn't assigned to any other machine on the network. Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. Control Plane IP must have internet access. |
| **Appliance proxy settings** | Enter **y** if there's a proxy in your appliance network. Otherwise, enter **n**. </br> You need to populate the following boxes when you have a proxy set up: </br> 1. **Http**: Address of the HTTP proxy server. </br> 2. **Https**: Address of the HTTPS proxy server. </br> 3. **NoProxy**: Addresses to be excluded from the proxy. </br> 4. **CertificateFilePath**: For SSL-based proxies, the path to the certificate to be used. After the command finishes running, your setup is complete. You can now use the capabilities of Azure Arc-enabled VMware vSphere.
azure-arc Support Matrix For Arc Enabled Vmware Vsphere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md
+
+ Title: Support matrix for Arc-enabled VMware vSphere (preview)
+description: In this article, you'll learn about the support matrix for Arc-enabled VMware vSphere, including supported vCenter Server versions, network requirements, and more.
+ Last updated: 09/30/2022
+# Customer intent: As a VI admin, I want to understand the support matrix for Arc-enabled VMware vSphere.
++
+# Support matrix for Arc-enabled VMware vSphere (preview)
+
+This article documents the prerequisites and support requirements for using [Azure Arc-enabled VMware vSphere (preview)](overview.md) to manage your VMware vSphere VMs through Azure Arc.
+
+To use Arc-enabled VMware vSphere, you must deploy an Azure Arc resource bridge in your VMware vSphere environment. The resource bridge provides an ongoing connection between your VMware vCenter Server and Azure. Once you've connected your VMware vCenter Server to Azure, components on the resource bridge discover your vCenter inventory. You can enable them in Azure and start performing virtual hardware and guest OS operations on them using Azure Arc.
++
+## VMware vSphere requirements
+
+### Supported vCenter Server versions
+
+- vCenter Server version 6.7 or 7.
+
+### Required vSphere account privileges
+
+You need a vSphere account that can:
+- Read all inventory.
+- Deploy and update VMs to all the resource pools (or clusters), networks, and VM templates that you want to use with Azure Arc.
+
+This account is used for the ongoing operation of Azure Arc-enabled VMware vSphere (preview) and the deployment of the Azure Arc resource bridge (preview) VM.
+
+### Resource bridge resource requirements
+
+For Arc-enabled VMware vSphere, the resource bridge has the following minimum virtual hardware requirements:
+
+- 16 GB of memory
+- 4 vCPUs
+- An external virtual switch that can provide access to the internet directly or through a proxy. If internet access is through a proxy or firewall, ensure [these URLs](#resource-bridge-networking-requirements) are allow-listed.
+
+### Resource bridge networking requirements
+
+The following firewall URL exceptions are needed for the Azure Arc resource bridge VM:
+
+| **Service** | **Port** | **URL** | **Direction** | **Notes**|
+| | | | | |
+| Microsoft container registry | 443 | https://mcr.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images for installation. |
+| Azure Arc Identity service | 443 | https://*.his.arc.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Manages identity and access control for Azure resources |
+| Azure Arc configuration service | 443 | https://*.dp.kubernetesconfiguration.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Used for Kubernetes cluster configuration. |
+| Cluster connect service | 443 | https://*.servicebus.windows.net | Appliance VM IP and control plane endpoint need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
+| Guest Notification service | 443 | https://guestnotificationservice.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Used to connect on-premises resources to Azure. |
+| SFS API endpoint | 443 | msk8s.api.cdp.microsoft.com | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
+| Resource bridge (appliance) Dataplane service | 443 | https://*.dp.prod.appliances.azure.com | Appliance VM IP and control plane endpoint need outbound connection. | Communicate with resource provider in Azure. |
+| Resource bridge (appliance) container image download | 443 | *.blob.core.windows.net, https://ecpacr.azurecr.io | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
+| Resource bridge (appliance) image download | 80 | *.dl.delivery.mp.microsoft.com | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Download the Arc resource bridge OS images. |
+| Azure Arc for K8s container image download | 443 | https://azurearcfork8sdev.azurecr.io | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
+| ADHS telemetry service | 443 | adhs.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. Runs inside the appliance/mariner OS. | Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any K8s control plane. |
+| Microsoft events data service | 443 | v20.events.data.microsoft.com | Appliance VM IP and control plane endpoint need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
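A quick outbound spot-check from a machine on the appliance network can help validate these exceptions before onboarding. The following sketch assumes `curl` is available and only verifies HTTPS reachability, not full functionality:

```bash
# Verify outbound HTTPS reachability to a few of the required endpoints.
for url in https://mcr.microsoft.com https://guestnotificationservice.azure.com https://msk8s.api.cdp.microsoft.com; do
  curl -sSI --connect-timeout 10 "$url" > /dev/null && echo "OK   $url" || echo "FAIL $url"
done
```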
+
+## Azure permissions required
+
+Following are the minimum Azure roles required for various operations:
+
+| **Operation** | **Minimum role required** | **Scope** |
+| | | |
+| Onboarding your vCenter Server to Arc | Azure Arc VMware Private Clouds Onboarding | On the subscription or resource group into which you want to onboard |
+| Administering Arc-enabled VMware vSphere | Azure Arc VMware Administrator | On the subscription or resource group where vCenter server resource is created |
+| VM Provisioning | Azure Arc VMware Private Cloud User | On the subscription or resource group that contains the resource pool/cluster/host, datastore and virtual network resources, or on the resources themselves |
+| VM Provisioning | Azure Arc VMware VM Contributor | On the subscription or resource group where you want to provision VMs |
+| VM Operations | Azure Arc VMware VM Contributor | On the subscription or resource group that contains the VM, or on the VM itself |
+
+Any role with higher permissions, such as *Owner* or *Contributor*, on the same scope will also allow you to perform all the operations listed above.
+
+## Guest management (Arc agent) requirements
+
+With Arc-enabled VMware vSphere, you can install the Arc connected machine agent on your VMs at scale and use Azure management services on the VMs. There are additional requirements for this capability:
+
+To enable guest management (install the Arc connected machine agent), ensure that:
+
+- The VM is powered on.
+- The VM has VMware tools installed and running.
+- The resource bridge has access to the host on which the VM is running.
+- The VM is running a [supported operating system](#supported-operating-systems).
+- The VM has internet connectivity directly or through a proxy. If the connection is through a proxy, ensure [these URLs](#networking-requirements) are allow-listed.
+
+### Supported operating systems
+
+The officially supported versions of the Windows and Linux operating system for the Azure Connected Machine agent are listed [here](../servers/prerequisites.md#supported-operating-systems). Only x86-64 (64-bit) architectures are supported. x86 (32-bit) and ARM-based architectures, including x86-64 emulation on arm64, aren't supported operating environments.
+
+### Software requirements
+
+Windows operating systems:
+
+* .NET Framework 4.6 or later is required. [Download the .NET Framework](/dotnet/framework/install/guide-for-developers).
+* Windows PowerShell 5.1 is required. [Download Windows Management Framework 5.1](https://www.microsoft.com/download/details.aspx?id=54616).
+
+Linux operating systems:
+
+* systemd
+* wget (to download the installation script)
+
+### Networking requirements
+
+The following firewall URL exceptions are needed for the Azure Arc agents:
+
+| **URL** | **Description** |
+| | |
+| aka.ms | Used to resolve the download script during installation |
+| download.microsoft.com | Used to download the Windows installation package |
+| packages.microsoft.com | Used to download the Linux installation package |
+| login.windows.net | Azure Active Directory |
+| login.microsoftonline.com | Azure Active Directory |
+| pas.windows.net | Azure Active Directory |
+| management.azure.com | Azure Resource Manager - to create or delete the Arc server resource |
+| *.his.arc.azure.com | Metadata and hybrid identity services |
+| *.guestconfiguration.azure.com | Extension management and guest configuration services |
+| guestnotificationservice.azure.com, *.guestnotificationservice.azure.com | Notification service for extension and connectivity scenarios |
+| azgn*.servicebus.windows.net | Notification service for extension and connectivity scenarios |
+| *.servicebus.windows.net | For Windows Admin Center and SSH scenarios |
+| *.blob.core.windows.net | Download source for Azure Arc-enabled servers extensions |
+| dc.services.visualstudio.com | Agent telemetry |
++
+## Next steps
+
+- [Connect VMware vCenter to Azure Arc using the helper script](quick-start-connect-vcenter-to-arc-using-script.md)
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-high-availability.md
Title: High availability for Azure Cache for Redis description: Learn about Azure Cache for Redis high availability features and options + Last updated 03/29/2022
As with any cloud systems, unplanned outages can occur that result in a virtual machines (VM) instance, an Availability Zone, or a complete Azure region going down. We recommend customers have a plan in place to handle zone or regional outages.
-This article presents the information for customers to create a *business continuity and disaster recovery plan* for their Azure Cache for Redis, or Azure Cache for Redis Enterprise implementation.
+This article presents the information for customers to create a _business continuity and disaster recovery plan_ for their Azure Cache for Redis, or Azure Cache for Redis Enterprise implementation.
Various high availability options are available in the Standard, Premium, and Enterprise tiers:
Various high availability options are available in the Standard, Premium, and En
Applicable tiers: **Standard**, **Premium**, **Enterprise**, **Enterprise Flash**
-Azure Cache for Redis, in the Standard or Premium tier, has a high availability architecture that ensures your managed instance is functioning, even when outages affect the underlying virtual machines (VMs). Whether the outage is planned or unplanned outages, Azure Cache for Redis delivers much greater percentage availability rates than what's attainable by hosting Redis on a single VM.
+Azure Cache for Redis has a high availability architecture that ensures your managed instance is functioning, even when outages affect the underlying virtual machines (VMs). Whether the outage is planned or unplanned outages, Azure Cache for Redis delivers much greater percentage availability rates than what's attainable by hosting Redis on a single VM.
-An Azure Cache for Redis in the Standard or Premium tier runs on a pair of Redis servers by default. The two servers are hosted on dedicated VMs. Open-source Redis allows only one server to handle data write requests.
+An Azure Cache for Redis in the applicable tiers runs on a pair of Redis servers by default. The two servers are hosted on dedicated VMs. Open-source Redis allows only one server to handle data write requests.
-With Azure Cache for Redis, one server is the *primary* node, while the other is the *replica*. After it provisions the server nodes, Azure Cache for Redis assigns primary and replica roles to them. The primary node usually is responsible for servicing write and read requests from clients. On a write operation, it commits a new key and a key update to its internal memory and replies immediately to the client. It forwards the operation to the *replica* asynchronously.
+With Azure Cache for Redis, one server is the _primary_ node, while the other is the _replica_. After it provisions the server nodes, Azure Cache for Redis assigns primary and replica roles to them. The primary node usually is responsible for servicing write and read requests from clients. On a write operation, it commits a new key and a key update to its internal memory and replies immediately to the client. It forwards the operation to the _replica_ asynchronously.
:::image type="content" source="media/cache-high-availability/replication.png" alt-text="Data replication setup":::
With Azure Cache for Redis, one server is the *primary* node, while the other is
> >
-If the *primary* node in a cache is unavailable, the *replica* promotes itself to become the new primary automatically. This process is called a *failover*. The replica waits for a sufficiently long time before taking over in case that the primary node recovers quickly. When a failover happens, Azure Cache for Redis provisions a new VM and joins it to the cache as the replica node. The replica does a full data synchronization with the primary so that it has another copy of the cache data.
+If the _primary_ node in a cache is unavailable, the _replica_ automatically promotes itself to become the new primary. This process is called a _failover_. A failover is simply the two nodes, primary and replica, trading roles, with one of the nodes possibly going offline for a few minutes. In most failovers, the primary and replica nodes coordinate the handover so you have near zero time without a primary.
+
+The former primary goes offline briefly to receive updates from the new primary. Then the node, now a replica, comes back online and rejoins the cache fully synchronized. The key point is that when a node is unavailable, it's a temporary condition, and it comes back online.
+
+A typical failover sequence looks like this, when a primary needs to go down for maintenance:
+
+1. Primary and replica nodes negotiate a coordinated failover and trade roles.
+1. Replica (formerly primary) goes offline for a reboot.
+1. A few seconds or minutes later, the replica comes back online.
+1. Replica syncs the data from the primary.
A primary node can go out of service as part of a planned maintenance activity, such as an update to Redis software or the operating system. It also can stop working because of unplanned events such as failures in underlying hardware, software, or network. [Failover and patching for Azure Cache for Redis](cache-failover.md) provides a detailed explanation on types of failovers. An Azure Cache for Redis goes through many failovers during its lifetime. The design of the high availability architecture makes these changes inside a cache as transparent to its clients as possible.
A zone redundant cache provides automatic failover. When the current primary nod
### Enterprise and Enterprise Flash tiers
-A cache in either Enterprise tier runs on a Redis Enterprise *cluster*. It always requires an odd number of server nodes to form a quorum. By default, it has three nodes, each hosted on a dedicated VM.
+A cache in either Enterprise tier runs on a Redis Enterprise _cluster_. It always requires an odd number of server nodes to form a quorum. By default, it has three nodes, each hosted on a dedicated VM.
-- An Enterprise cache has two same-sized *data nodes* and one smaller *quorum node*.
+- An Enterprise cache has two same-sized _data nodes_ and one smaller _quorum node_.
- An Enterprise Flash cache has three same-sized data nodes.
-The Enterprise cluster divides Azure Cache for Redis data into partitions internally. Each partition has a *primary* and at least one *replica*. Each data node holds one or more partitions. The Enterprise cluster ensures that the primary and replica(s) of any partition are never collocated on the same data node. Partitions replicate data asynchronously from primaries to their corresponding replicas.
+The Enterprise cluster divides Azure Cache for Redis data into partitions internally. Each partition has a _primary_ and at least one _replica_. Each data node holds one or more partitions. The Enterprise cluster ensures that the primary and replica(s) of any partition are never collocated on the same data node. Partitions replicate data asynchronously from primaries to their corresponding replicas.
-When a data node becomes unavailable or a network split happens, a failover similar to the one described in [Standard replication](#standard-replication-for-high-availability) takes place. The Enterprise cluster uses a quorum-based model to determine which surviving nodes participates in a new quorum. It also promotes replica partitions within these nodes to primaries as needed.
+When a data node becomes unavailable or a network split happens, a failover similar to the one described in [Standard replication](#standard-replication-for-high-availability) takes place. The Enterprise cluster uses a quorum-based model to determine which surviving nodes participate in a new quorum. It also promotes replica partitions within these nodes to primaries as needed.
## Persistence
For more information on force-unlinking, see [Force-Unlink if there's region out
Applicable tiers: **Standard**, **Premium**, **Enterprise**, **Enterprise Flash**
-If you experience a regional outage, consider recreating your cache in a different region and updating your application to connect to the new cache instead. It's important to understand that data will be lost during a regional outage. Your application code should be resilient to data loss.
+If you experience a regional outage, consider recreating your cache in a different region, and updating your application to connect to the new cache instead. It's important to understand that data will be lost during a regional outage. Your application code should be resilient to data loss.
-Once the affected region is restored, your unavailable Azure Cache for Redis is automatically restored and available for use again. For more strategies for moving your cache to a different region, see [Move Azure Cache for Redis instances to different regions](./cache-moving-resources.md).
+Once the affected region is restored, your unavailable Azure Cache for Redis is automatically restored, and available for use again. For more strategies for moving your cache to a different region, see [Move Azure Cache for Redis instances to different regions](./cache-moving-resources.md).
## Next steps
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
Title: Manage the Azure Log Analytics agent
-description: This article describes the different management tasks that you will typically perform during the lifecycle of the Log Analytics Windows or Linux agent deployed on a machine.
+description: This article describes the different management tasks that you'll typically perform during the lifecycle of the Log Analytics Windows or Linux agent deployed on a machine.
# Manage and maintain the Log Analytics agent for Windows and Linux
-After initial deployment of the Log Analytics Windows or Linux agent in Azure Monitor, you may need to reconfigure the agent, upgrade it, or remove it from the computer if it has reached the retirement stage in its lifecycle. You can easily manage these routine maintenance tasks manually or through automation, which reduces both operational error and expenses.
+After initial deployment of the Log Analytics Windows or Linux agent in Azure Monitor, you might need to reconfigure the agent, upgrade it, or remove it from the computer if it has reached the retirement stage in its lifecycle. You can easily manage these routine maintenance tasks manually or through automation, which reduces both operational error and expenses.
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)] ## Upgrade the agent
-Upgrade to the latest release of the Log Analytics agent for Windows and Linux manually or automatically based on your deployment scenario and the environment the VM is running in:
+Upgrade to the latest release of the Log Analytics agent for Windows and Linux manually or automatically based on your deployment scenario and the environment the VM is running in.
| Environment | Installation method | Upgrade method | |--|-|-|
-| Azure VM | Log Analytics agent VM extension for Windows/Linux | Agent is automatically upgraded [after the VM model changes](../../virtual-machines/extensions/features-linux.md#how-agents-and-extensions-are-updated), unless you configured your Azure Resource Manager template to opt out by setting the property _autoUpgradeMinorVersion_ to **false**. Once deployed, however, the extension will not upgrade minor versions unless redeployed, even with this property set to true. Only Linux agent supports automatic update post deployment with _enableAutomaticUpgrade_ property(See [Enable Auto-Update for the Linux Agent](#enable-auto-update-for-the-linux-agent)). Major version upgrade is always manual(See [VirtualMachineExtensionInner.AutoUpgradeMinorVersion Property](https://docs.azure.cn/dotnet/api/microsoft.azure.management.compute.fluent.models.virtualmachineextensioninner.autoupgrademinorversion?view=azure-dotnet)). |
-| Custom Azure VM images | Manual install of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle.|
-| Non-Azure VMs | Manual install of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. |
+| Azure VM | Log Analytics agent VM extension for Windows/Linux | The agent is automatically upgraded [after the VM model changes](../../virtual-machines/extensions/features-linux.md#how-agents-and-extensions-are-updated), unless you configured your Azure Resource Manager template to opt out by setting the property `autoUpgradeMinorVersion` to **false**. Once deployed, however, the extension won't upgrade minor versions unless redeployed, even with this property set to **true**. Only the Linux agent supports automatic update post deployment with `enableAutomaticUpgrade` property (see [Enable Auto-update for the Linux agent](#enable-auto-update-for-the-linux-agent)). Major version upgrade is always manual (see [VirtualMachineExtensionInner.AutoUpgradeMinorVersion Property](https://docs.azure.cn/dotnet/api/microsoft.azure.management.compute.fluent.models.virtualmachineextensioninner.autoupgrademinorversion?view=azure-dotnet)). |
+| Custom Azure VM images | Manual installation of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent must be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle.|
+| Non-Azure VMs | Manual installation of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent must be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle. |
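For the Azure VM scenario, the Linux agent's post-deployment auto-update is controlled through the extension's `enableAutomaticUpgrade` setting. The following is a sketch of turning it on with the Azure CLI; the extension name and publisher are assumed from the standard Log Analytics agent for Linux deployment, and the VM and resource group names are placeholders:

```azurecli
# Enable automatic minor-version upgrade for the Log Analytics agent for Linux extension.
az vm extension set \
  --resource-group myResourceGroup \
  --vm-name myLinuxVM \
  --name OmsAgentForLinux \
  --publisher Microsoft.EnterpriseCloud.Monitoring \
  --enable-auto-upgrade true
```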
-### Upgrade Windows agent
+### Upgrade the Windows agent
-To update the agent on a Windows VM to the latest version not installed using the Log Analytics VM extension, you either run from the Command Prompt, script or other automation solution, or by using the MMASetup-\<platform\>.msi Setup Wizard.
+To update the agent on a Windows VM to the latest version when the agent wasn't installed by using the Log Analytics VM extension, run the update from the command prompt, a script, or another automation solution, or use the **MMASetup-\<platform\>.msi** Setup Wizard.
-You can download the latest version of the Windows agent from your Log Analytics workspace, by performing the following steps.
+To download the latest version of the Windows agent from your Log Analytics workspace:
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the Azure portal, click **All services**. In the list of resources, type **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**.
+1. In the Azure portal, select **All services**. In the list of resources, enter **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**.
-3. In your list of Log Analytics workspaces, select the workspace.
+1. In your list of Log Analytics workspaces, select the workspace.
-4. In your Log Analytics workspace, select **Agents Management** tile, and then **Windows Servers**.
+1. In your Log Analytics workspace, select the **Agents Management** tile and then select **Windows Servers**.
-5. From the **Windows Servers** page, select the appropriate **Download Windows Agent** version to download depending on the processor architecture of the Windows operating system.
+1. On the **Windows Servers** screen, select the appropriate **Download Windows Agent** version to download depending on the processor architecture of the Windows operating system.
>[!NOTE]
->During the upgrade of the Log Analytics agent for Windows, it does not support configuring or reconfiguring a workspace to report to. To configure the agent, you need to follow one of the supported methods listed under [Add or remove a workspace](#add-or-remove-a-workspace).
+>The Log Analytics agent for Windows doesn't support configuring or reconfiguring a workspace to report to during an upgrade. To configure the agent, follow one of the supported methods listed under [Add or remove a workspace](#add-or-remove-a-workspace).
>
-#### To upgrade using the Setup Wizard
+#### Upgrade using the Setup Wizard
1. Sign on to the computer with an account that has administrative rights.
-2. Execute **MMASetup-\<platform\>.exe** to start the Setup Wizard.
+1. Execute **MMASetup-\<platform\>.exe** to start the **Setup Wizard**.
-3. On the first page of the Setup Wizard, click **Next**.
+1. On the first page of the **Setup Wizard**, select **Next**.
-4. In the **Microsoft Monitoring Agent Setup** dialog box, click **I agree** to accept the license agreement.
+1. In the **Microsoft Monitoring Agent Setup** dialog, select **I agree** to accept the license agreement.
-5. In the **Microsoft Monitoring Agent Setup** dialog box, click **Upgrade**. The status page displays the progress of the upgrade.
+1. In the **Microsoft Monitoring Agent Setup** dialog, select **Upgrade**. The status page displays the progress of the upgrade.
-6. When the **Microsoft Monitoring Agent configuration completed successfully.** page appears, click **Finish**.
+1. When the **Microsoft Monitoring Agent configuration completed successfully** page appears, select **Finish**.
-#### To upgrade from the command line
+#### Upgrade from the command line
1. Sign on to the computer with an account that has administrative rights.
-2. To extract the agent installation files, from an elevated command prompt run `MMASetup-<platform>.exe /c` and it will prompt you for the path to extract files to. Alternatively, you can specify the path by passing the arguments `MMASetup-<platform>.exe /c /t:<Full Path>`.
+1. To extract the agent installation files, run `MMASetup-<platform>.exe /c` from an elevated command prompt, and it will prompt you for the path to extract files to. Alternatively, you can specify the path by passing the arguments `MMASetup-<platform>.exe /c /t:<Full Path>`.
-3. Run the following command, where D:\ is the location for the upgrade log file.
+1. Run the following command, where D:\ is the location for the upgrade log file:
   ```dos
   setup.exe /qn /l*v D:\logs\AgentUpgrade.log AcceptEndUserLicenseAgreement=1
   ```
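If you script the upgrade end to end, a minimal PowerShell sketch along these lines combines the extract and silent-install steps. The download location, platform suffix, and log path are placeholders.

```powershell
# Extract the installer and run the silent upgrade; paths and the platform suffix are placeholders.
Start-Process -FilePath 'C:\Temp\MMASetup-AMD64.exe' -ArgumentList '/c /t:C:\Temp\MMA' -Wait
Start-Process -FilePath 'C:\Temp\MMA\setup.exe' `
    -ArgumentList '/qn /l*v D:\logs\AgentUpgrade.log AcceptEndUserLicenseAgreement=1' -Wait
```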
-### Upgrade Linux agent
+### Upgrade the Linux agent
Upgrade from prior versions (>1.0.0-47) is supported. Performing the installation with the `--upgrade` command will upgrade all components of the agent to the latest version.
-Run the following command to upgrade the agent.
+Run the following command to upgrade the agent:
`sudo sh ./omsagent-*.universal.x64.sh --upgrade`
-### Enable Auto-Update for the Linux Agent
+### Enable auto-update for the Linux agent
-We recommend enabling [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) using these commands to update the agent automatically:
+We recommend that you enable [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) by using these commands to update the agent automatically.
+
+# [PowerShell](#tab/PowerShellLinux)
-# [Powershell](#tab/PowerShellLinux)
```powershell
Set-AzVMExtension \
    -ResourceGroupName myResourceGroup \
    -SettingString '{"workspaceId":"myWorkspaceId","skipDockerProviderInstall": true}' \
    -EnableAutomaticUpgrade $true
```

# [Azure CLI](#tab/CLILinux)

```powershell
az vm extension set \
    --resource-group myResourceGroup \
```
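As a fuller point of reference, a PowerShell sketch of enabling automatic upgrade for the Linux extension might look like the following; the VM name, location, type handler version, and workspace values are placeholders to adjust for your environment.

```powershell
# Enable automatic extension upgrade for the Log Analytics (OMS) Linux VM extension.
# VM name, location, type handler version, and workspace values are placeholders.
Set-AzVMExtension `
    -ResourceGroupName 'myResourceGroup' `
    -VMName 'myVM' `
    -Location 'eastus' `
    -Name 'OmsAgentForLinux' `
    -Publisher 'Microsoft.EnterpriseCloud.Monitoring' `
    -ExtensionType 'OmsAgentForLinux' `
    -TypeHandlerVersion '1.14' `
    -SettingString '{"workspaceId":"myWorkspaceId","skipDockerProviderInstall": true}' `
    -ProtectedSettingString '{"workspaceKey":"myWorkspaceKey"}' `
    -EnableAutomaticUpgrade $true
```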
## Add or remove a workspace
+Add or remove a workspace using the Windows agent or the Linux agent.
+ ### Windows agent
-The steps in this section are necessary when you want to not only reconfigure the Windows agent to report to a different workspace or to remove a workspace from its configuration, but also when you want to configure the agent to report to more than one workspace (commonly referred to as multi-homing). Configuring the Windows agent to report to multiple workspaces can only be performed after initial setup of the agent and using the methods described below.
+
+The steps in this section are necessary not only when you want to reconfigure the Windows agent to report to a different workspace or remove a workspace from its configuration, but also when you want to configure the agent to report to more than one workspace. (This practice is commonly referred to as multihoming.) Configuring the Windows agent to report to multiple workspaces can only be performed after initial setup of the agent and by using the methods described in this section.
#### Update settings from Control Panel

1. Sign on to the computer with an account that has administrative rights.
-2. Open **Control Panel**.
+1. Open Control Panel.
-3. Select **Microsoft Monitoring Agent** and then click the **Azure Log Analytics** tab.
+1. Select **Microsoft Monitoring Agent** and then select the **Azure Log Analytics** tab.
-4. If removing a workspace, select it and then click **Remove**. Repeat this step for any other workspace you want the agent to stop reporting to.
+1. If you're removing a workspace, select it and then select **Remove**. Repeat this step for any other workspace you want the agent to stop reporting to.
-5. If adding a workspace, click **Add** and on the **Add a Log Analytics Workspace** dialog box, paste the Workspace ID and Workspace Key (Primary Key). If the computer should report to a Log Analytics workspace in Azure Government cloud, select Azure US Government from the Azure Cloud drop-down list.
+1. If you're adding a workspace, select **Add**. In the **Add a Log Analytics Workspace** dialog, paste the workspace ID and workspace key (primary key). If the computer should report to a Log Analytics workspace in Azure Government cloud, select **Azure US Government** from the **Azure Cloud** dropdown list.
-6. Click **OK** to save your changes.
+1. Select **OK** to save your changes.
#### Remove a workspace using PowerShell
$mma.ReloadConfiguration()
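A minimal sketch of the removal flow using the agent's `AgentConfigManager.MgmtSvcCfg` scripting object follows; the workspace ID is a placeholder, and the method names shown are assumed from that interface.

```powershell
# Remove a workspace from the Windows agent and reload its configuration.
# '<workspace id>' is a placeholder for the workspace the agent should stop reporting to.
$workspaceId = '<workspace id>'
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$mma.RemoveCloudWorkspace($workspaceId)
$mma.ReloadConfiguration()
```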
>

### Linux agent

The following steps demonstrate how to reconfigure the Linux agent if you decide to register it with a different workspace or to remove a workspace from its configuration.
-1. To verify it is registered to a workspace, run the following command:
+1. To verify the agent is registered to a workspace, run the following command:
`/opt/microsoft/omsagent/bin/omsadmin.sh -l`
The following steps demonstrate how to reconfigure the Linux agent if you decide
`Primary Workspace: <workspaceId> Status: Onboarded(OMSAgent Running)`
- It is important that the status also shows the agent is running, otherwise the following steps to reconfigure the agent will not complete successfully.
+ It's important that the status also shows the agent is running. Otherwise, the following steps to reconfigure the agent won't finish successfully.
-2. If it is already registered with a workspace, remove the registered workspace by running the following command. Otherwise if it is not registered, proceed to the next step.
+1. If the agent is already registered with a workspace, remove the registered workspace by running the following command. Otherwise, if it isn't registered, proceed to the next step.
`/opt/microsoft/omsagent/bin/omsadmin.sh -X`
-3. To register with a different workspace, run the following command:
+1. To register with a different workspace, run the following command:
`/opt/microsoft/omsagent/bin/omsadmin.sh -w <workspace id> -s <shared key> [-d <top level domain>]`
-4. To verify your changes took effect, run the following command:
+1. To verify your changes took effect, run the following command:
`/opt/microsoft/omsagent/bin/omsadmin.sh -l`
The following steps demonstrate how to reconfigure the Linux agent if you decide
`Primary Workspace: <workspaceId> Status: Onboarded(OMSAgent Running)`
-The agent service does not need to be restarted in order for the changes to take effect.
+The agent service doesn't need to be restarted for the changes to take effect.
## Update proxy settings
-Log Analytics Agent (MMA) does not use the system proxy settings. Hence, user has to pass proxy setting while installing MMA and these settings will be stored under MMA configuration(registry) on VM. To configure the agent to communicate to the service through a proxy server or [Log Analytics gateway](./gateway.md) after deployment, use one of the following methods to complete this task.
+
+Log Analytics Agent (MMA) doesn't use the system proxy settings. As a result, you have to pass proxy settings while you install MMA. These settings will be stored under MMA configuration (registry) on the VM. To configure the agent to communicate to the service through a proxy server or [Log Analytics gateway](./gateway.md) after deployment, use one of the following methods to complete this task.
### Windows agent
+Use a Windows agent.
#### Update settings using Control Panel

1. Sign on to the computer with an account that has administrative rights.
-2. Open **Control Panel**.
-
-3. Select **Microsoft Monitoring Agent** and then click the **Proxy Settings** tab.
+1. Open Control Panel.
-4. Click **Use a proxy server** and provide the URL and port number of the proxy server or gateway. If your proxy server or Log Analytics gateway requires authentication, type the username and password to authenticate and then click **OK**.
+1. Select **Microsoft Monitoring Agent** and then select the **Proxy Settings** tab.
+1. Select **Use a proxy server** and provide the URL and port number of the proxy server or gateway. If your proxy server or Log Analytics gateway requires authentication, enter the username and password to authenticate and then select **OK**.
#### Update settings using PowerShell
$healthServiceSettings.SetProxyInfo($ProxyDomainName, $ProxyUserName, $cred.GetN
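# A minimal sketch of the surrounding proxy configuration flow, assuming the
# AgentConfigManager.MgmtSvcCfg scripting object used above; the proxy URL and
# user name are placeholders, and the credential is collected interactively.
$ProxyDomainName = 'https://myproxy.contoso.com:30443'
$ProxyUserName = 'proxyuser'
$cred = Get-Credential -UserName $ProxyUserName -Message 'Enter the proxy password'
$healthServiceSettings = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$healthServiceSettings.SetProxyInfo($ProxyDomainName, $ProxyUserName, $cred.GetNetworkCredential().Password)
$healthServiceSettings.ReloadConfiguration()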
```

### Linux agent
-Perform the following steps if your Linux computers need to communicate through a proxy server or Log Analytics gateway. The proxy configuration value has the following syntax `[protocol://][user:password@]proxyhost[:port]`. The *proxyhost* property accepts a fully qualified domain name or IP address of the proxy server.
-1. Edit the file `/etc/opt/microsoft/omsagent/proxy.conf` by running the following commands and change the values to your specific settings.
+Perform the following steps if your Linux computers need to communicate through a proxy server or Log Analytics gateway. The proxy configuration value has the following syntax: `[protocol://][user:password@]proxyhost[:port]`. The `proxyhost` property accepts a fully qualified domain name or IP address of the proxy server.
+
+1. Edit the file `/etc/opt/microsoft/omsagent/proxy.conf` by running the following commands and change the values to your specific settings:
   ```
   proxyconf="https://proxyuser:proxypassword@proxyserver01:30443"
   sudo echo $proxyconf >>/etc/opt/microsoft/omsagent/proxy.conf
   sudo chown omsagent:omiusers /etc/opt/microsoft/omsagent/proxy.conf
   ```
-2. Restart the agent by running the following command:
+1. Restart the agent by running the following command:
   ```
   sudo /opt/microsoft/omsagent/bin/service_control restart [<workspace id>]
   ```
- If you see "cURL failed to perform on this base url" in the log, you can try removing '\n' in proxy.conf EOF to resolve the failure:
+
+ If you see `cURL failed to perform on this base url` in the log, try removing the trailing `'\n'` at the end of `proxy.conf` to resolve the failure:
   ```
   od -c /etc/opt/microsoft/omsagent/proxy.conf
   cat /etc/opt/microsoft/omsagent/proxy.conf | tr -d '\n' > /etc/opt/microsoft/omsagent/proxy2.conf
   ```

## Uninstall agent
-Use one of the following procedures to uninstall the Windows or Linux agent using the command line or setup wizard.
+
+Use one of the following procedures to uninstall the Windows or Linux agent by using the command line or **Setup Wizard**.
### Windows agent
+Use the Windows agent.
#### Uninstall from Control Panel

1. Sign on to the computer with an account that has administrative rights.
-2. In **Control Panel**, click **Programs and Features**.
+1. In Control Panel, select **Programs and Features**.
-3. In **Programs and Features**, click **Microsoft Monitoring Agent**, click **Uninstall**, and then click **Yes**.
+1. In **Programs and Features**, select **Microsoft Monitoring Agent** > **Uninstall** > **Yes**.
>[!NOTE]
->The Agent Setup Wizard can also be run by double-clicking **MMASetup-\<platform\>.exe**, which is available for download from a workspace in the Azure portal.
+>The **Agent Setup Wizard** can also be run by double-clicking `MMASetup-<platform>.exe`, which is available for download from a workspace in the Azure portal.
#### Uninstall from the command line
-The downloaded file for the agent is a self-contained installation package created with IExpress. The setup program for the agent and supporting files are contained in the package and need to be extracted in order to properly uninstall using the command line shown in the following example.
+
+The downloaded file for the agent is a self-contained installation package created with IExpress. The setup program for the agent and supporting files are contained in the package and must be extracted to properly uninstall by using the command line shown in the following example.
1. Sign on to the computer with an account that has administrative rights.
-2. To extract the agent installation files, from an elevated command prompt run `extract MMASetup-<platform>.exe` and it will prompt you for the path to extract files to. Alternatively, you can specify the path by passing the arguments `extract MMASetup-<platform>.exe /c:<Path> /t:<Path>`. For more information on the command-line switches supported by IExpress, see [Command-line switches for IExpress](https://www.betaarchive.com/wiki/index.php?title=Microsoft_KB_Archive/197147) and then update the example to suit your needs.
+1. To extract the agent installation files, from an elevated command prompt run `extract MMASetup-<platform>.exe` and it will prompt you for the path to extract files to. Alternatively, you can specify the path by passing the arguments `extract MMASetup-<platform>.exe /c:<Path> /t:<Path>`. For more information on the command-line switches supported by IExpress, see [Command-line switches for IExpress](https://www.betaarchive.com/wiki/index.php?title=Microsoft_KB_Archive/197147) and then update the example to suit your needs.
-3. At the prompt, type `%WinDir%\System32\msiexec.exe /x <Path>:\MOMAgent.msi /qb`.
+1. At the prompt, enter `%WinDir%\System32\msiexec.exe /x <Path>:\MOMAgent.msi /qb`.
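To script the same uninstall, a short sketch like this can run msiexec non-interactively; the extraction path is a placeholder.

```powershell
# Uninstall the agent silently from the extracted package; 'C:\Temp\MMA' is a placeholder path.
Start-Process -FilePath "$env:WinDir\System32\msiexec.exe" `
    -ArgumentList '/x C:\Temp\MMA\MOMAgent.msi /qb' -Wait
```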
### Linux agent
-To remove the agent, run the following command on the Linux computer. The *--purge* argument completely removes the agent and its configuration.
+
+To remove the agent, run the following command on the Linux computer. The `--purge` argument completely removes the agent and its configuration.
`wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh && sh onboard_agent.sh --purge`

## Configure agent to report to an Operations Manager management group
+Use the Windows agent or the Linux agent.
### Windows agent

Perform the following steps to configure the Log Analytics agent for Windows to report to a System Center Operations Manager management group.

[!INCLUDE [log-analytics-agent-note](../../../includes/log-analytics-agent-note.md)]

1. Sign on to the computer with an account that has administrative rights.
-2. Open **Control Panel**.
+1. Open Control Panel.
-3. Click **Microsoft Monitoring Agent** and then click the **Operations Manager** tab.
+1. Select **Microsoft Monitoring Agent** and then select the **Operations Manager** tab.
-4. If your Operations Manager servers have integration with Active Directory, click **Automatically update management group assignments from AD DS**.
+1. If your Operations Manager servers have integration with Active Directory, select **Automatically update management group assignments from AD DS**.
-5. Click **Add** to open the **Add a Management Group** dialog box.
+1. Select **Add** to open the **Add a Management Group** dialog.
-6. In **Management group name** field, type the name of your management group.
+1. In the **Management group name** field, enter the name of your management group.
-7. In the **Primary management server** field, type the computer name of the primary management server.
+1. In the **Primary management server** field, enter the computer name of the primary management server.
-8. In the **Management server port** field, type the TCP port number.
+1. In the **Management server port** field, enter the TCP port number.
-9. Under **Agent Action Account**, choose either the Local System account or a local domain account.
+1. Under **Agent Action Account**, choose either the local system account or a local domain account.
-10. Click **OK** to close the **Add a Management Group** dialog box and then click **OK** to close the **Microsoft Monitoring Agent Properties** dialog box.
+1. Select **OK** to close the **Add a Management Group** dialog. Then select **OK** to close the **Microsoft Monitoring Agent Properties** dialog.
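The same management group configuration can also be scripted. The sketch below assumes the `AgentConfigManager.MgmtSvcCfg` scripting object exposes an `AddManagementGroup` method with the arguments shown; the management group name, server name, and port are placeholders.

```powershell
# Add a management group to the Windows agent; the AddManagementGroup method and its
# argument order are assumptions about the AgentConfigManager scripting interface.
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$mma.AddManagementGroup('<management group name>', '<primary management server>', 5723)
$mma.ReloadConfiguration()
```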
### Linux agent

Perform the following steps to configure the Log Analytics agent for Linux to report to a System Center Operations Manager management group.

[!INCLUDE [log-analytics-agent-note](../../../includes/log-analytics-agent-note.md)]
-1. Edit the file `/etc/opt/omi/conf/omiserver.conf`
+1. Edit the file `/etc/opt/omi/conf/omiserver.conf`.
-2. Ensure that the line beginning with `httpsport=` defines the port 1270. Such as: `httpsport=1270`
+1. Ensure that the line beginning with `httpsport=` defines port 1270. For example: `httpsport=1270`.
-3. Restart the OMI server: `sudo /opt/omi/bin/service_control restart`
+1. Restart the OMI server by using the following command:
-## Next steps
+ `sudo /opt/omi/bin/service_control restart`
-- Review [Troubleshooting the Linux agent](agent-linux-troubleshoot.md) if you encounter issues while installing or managing the Linux agent.
+## Next steps
-- Review [Troubleshooting the Windows agent](agent-windows-troubleshoot.md) if you encounter issues while installing or managing the Windows agent.
+- Review [Troubleshooting the Linux agent](agent-linux-troubleshoot.md) if you encounter issues while you install or manage the Linux agent.
+- Review [Troubleshooting the Windows agent](agent-windows-troubleshoot.md) if you encounter issues while you install or manage the Windows agent.
azure-monitor Agent Windows Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows-troubleshoot.md
Title: Troubleshoot issues with Log Analytics agent for Windows
+ Title: Troubleshoot issues with the Log Analytics agent for Windows
description: Describe the symptoms, causes, and resolution for the most common issues with the Log Analytics agent for Windows in Azure Monitor. Last updated 03/31/2022
-# How to troubleshoot issues with the Log Analytics agent for Windows
-
-This article provides help troubleshooting errors you might experience with the Log Analytics agent for Windows in Azure Monitor and suggests possible solutions to resolve them.
+# Troubleshoot issues with the Log Analytics agent for Windows
+This article provides help in troubleshooting errors you might experience with the Log Analytics agent for Windows in Azure Monitor and suggests possible solutions to resolve them.
## Log Analytics Troubleshooting Tool
-The Log Analytics Agent Windows Troubleshooting Tool is a collection of PowerShell scripts designed to help find and diagnose issues with the Log Analytics Agent. It is automatically included with the agent upon installation. Running the tool should be the first step in diagnosing an issue.
+The Log Analytics agent for Windows Troubleshooting Tool is a collection of PowerShell scripts designed to help find and diagnose issues with the Log Analytics agent. It's automatically included with the agent upon installation. Running the tool should be the first step in diagnosing an issue.
+
+### Use the Troubleshooting Tool
+
+1. Open the PowerShell prompt as administrator on the machine where the Log Analytics agent is installed.
+1. Go to the directory where the tool is located:
-### How to use
-1. Open PowerShell prompt as Administrator on the machine where Log Analytics Agent is installed.
-1. Navigate to the directory where the tool is located.
- * `cd "C:\Program Files\Microsoft Monitoring Agent\Agent\Troubleshooter"`
-1. Execute the main script using this command:
- * `.\GetAgentInfo.ps1`
+ `cd "C:\Program Files\Microsoft Monitoring Agent\Agent\Troubleshooter"`
+1. Execute the main script by using this command:
+
+ `.\GetAgentInfo.ps1`
1. Select a troubleshooting scenario.
-1. Follow instructions on the console. (Note: trace logs steps requires manual intervention to stop log collection. Based upon the reproducibility of the issue, wait for the time duration and press 's' to stop log collection and proceed to the next step).
+1. Follow the instructions on the console. The trace log steps require manual intervention to stop log collection: based on how long the issue takes to reproduce, wait for that duration, and then select "s" to stop log collection and proceed to the next step.
- Locations of the results file is logged upon completion and a new explorer window highlighting it is opened.
+ The location of the results file is logged upon completion, and a new File Explorer window highlighting it opens.
### Installation
-The Troubleshooting Tool is automatically included upon installation of the Log Analytics Agent build 10.20.18053.0 and onwards.
+
+The Troubleshooting Tool is automatically included upon installation of the Log Analytics Agent build 10.20.18053.0 and onward.
### Scenarios covered
-Below is a list of scenarios checked by the Troubleshooting Tool:
-
-- Agent not reporting data or heartbeat data missing
-- Agent extension deployment failing
-- Agent crashing
-- Agent consuming high CPU/memory
-- Installation/uninstallation failures
-- Custom logs issue
-- OMS Gateway issue
-- Performance counters issue
-- Collect all logs
+
+The Troubleshooting Tool checks the following scenarios:
+
+- The agent isn't reporting data or heartbeat data is missing.
+- The agent extension deployment is failing.
+- The agent is crashing.
+- The agent is consuming high CPU or memory.
+- Installation and uninstallation experience failures.
+- Custom logs have issues.
+- OMS Gateway has issues.
+- Performance counters have issues.
+- Agent logs can't be collected.
>[!NOTE]
->Please run the Troubleshooting tool when you experience an issue. When opening a ticket, having the logs initially will greatly help our support team troubleshoot your issue quicker.
+>Run the Troubleshooting Tool when you experience an issue. Having the logs initially will help our support team troubleshoot your issue faster.
## Important troubleshooting sources
- To assist with troubleshooting issues related to Log Analytics agent for Windows, the agent logs events to the Windows Event Log, specifically under *Application and Services\Operations Manager*.
+ To assist with troubleshooting issues related to the Log Analytics agent for Windows, the agent logs events to the Windows Event Log, specifically under *Application and Services\Operations Manager*.
## Connectivity issues
-If the agent is communicating through a proxy server or firewall, there may be restrictions in place preventing communication from the source computer and the Azure Monitor service. In case communication is blocked, because of misconfiguration, registration with a workspace might fail while attempting to install the agent or configure the agent post-setup to report to an additional workspace. Agent communication may fail after successful registration. This section describes the methods to troubleshoot this type of issue with the Windows agent.
+If the agent is communicating through a proxy server or firewall, restrictions might be in place that prevent communication from the source computer and the Azure Monitor service. If communication is blocked because of misconfiguration, registration with a workspace might fail while attempting to install the agent or configure the agent post-setup to report to another workspace. Agent communication might fail after successful registration. This section describes the methods to troubleshoot this type of issue with the Windows agent.
-Double check that the firewall or proxy is configured to allow the following ports and URLs described in the following table. Also confirm HTTP inspection is not enabled for web traffic, as it can prevent a secure TLS channel between the agent and Azure Monitor.
+Double-check that the firewall or proxy is configured to allow the following ports and URLs described in the following table. Also confirm that HTTP inspection isn't enabled for web traffic. It can prevent a secure TLS channel between the agent and Azure Monitor.
-|Agent Resource|Ports |Direction |Bypass HTTPS inspection|
+|Agent resource|Ports |Direction |Bypass HTTPS inspection|
|||--|--|
|*.ods.opinsights.azure.com |Port 443 |Outbound|Yes |
|*.oms.opinsights.azure.com |Port 443 |Outbound|Yes |
|*.blob.core.windows.net |Port 443 |Outbound|Yes |
|*.agentsvc.azure-automation.net |Port 443 |Outbound|Yes |
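As a quick spot check from the agent machine, a sketch like the following tests outbound TCP 443 reachability to two of the workspace endpoints; replace `<workspace-id>` with your workspace ID.

```powershell
# Spot-check outbound connectivity to the agent endpoints over TCP 443.
# Replace <workspace-id> with your Log Analytics workspace ID.
Test-NetConnection -ComputerName '<workspace-id>.ods.opinsights.azure.com' -Port 443
Test-NetConnection -ComputerName '<workspace-id>.oms.opinsights.azure.com' -Port 443
```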
-For firewall information required for Azure Government, see [Azure Government management](../../azure-government/compare-azure-government-global-azure.md#azure-monitor). If you plan to use the Azure Automation Hybrid Runbook Worker to connect to and register with the Automation service to use runbooks or management solutions in your environment, it must have access to the port number and the URLs described in [Configure your network for the Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md#network-planning).
+For firewall information required for Azure Government, see [Azure Government management](../../azure-government/compare-azure-government-global-azure.md#azure-monitor). If you plan to use the Azure Automation Hybrid Runbook Worker to connect to and register with the Automation service to use runbooks or management solutions in your environment, it must have access to the port number and the URLs described in [Configure your network for the Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md#network-planning).
-There are several ways you can verify if the agent is successfully communicating with Azure Monitor.
+There are several ways you can verify if the agent is successfully communicating with Azure Monitor:
-- Enable the [Azure Log Analytics Agent Health assessment](../insights/solution-agenthealth.md) in the workspace. From the Agent Health dashboard, view the **Count of unresponsive agents** column to quickly see if the agent is listed. --- Run the following query to confirm the agent is sending a heartbeat to the workspace it is configured to report to. Replace `<ComputerName>` with the actual name of the machine.
+- Enable the [Azure Log Analytics Agent Health assessment](../insights/solution-agenthealth.md) in the workspace. From the Agent Health dashboard, view the **Count of unresponsive agents** column to quickly see if the agent is listed.
+- Run the following query to confirm the agent is sending a heartbeat to the workspace it's configured to report to. Replace `<ComputerName>` with the actual name of the machine.
    ```
    Heartbeat
    | where Computer == "<ComputerName>"
    | summarize arg_max(TimeGenerated, * ) by Computer
    ```
- If the computer is successfully communicating with the service, the query should return a result. If the query did not return a result, first verify the agent is configured to report to the correct workspace. If it is configured correctly, proceed to step 3 and search the Windows Event Log to identify if the agent is logging what issue might be preventing it from communicating with Azure Monitor.
+ If the computer is successfully communicating with the service, the query should return a result. If the query didn't return a result, first verify the agent is configured to report to the correct workspace. If it's configured correctly, proceed to step 3 and search the Windows Event Log to identify if the agent is logging the issue that might be preventing it from communicating with Azure Monitor.
-- Another method to identify a connectivity issue is by running the **TestCloudConnectivity** tool. The tool is installed by default with the agent in the folder *%SystemRoot%\Program Files\Microsoft Monitoring Agent\Agent*. From an elevated command prompt, navigate to the folder and run the tool. The tool returns the results and highlights where the test failed (for example, if it was related to a particular port/URL that was blocked).
+- Another method to identify a connectivity issue is by running the **TestCloudConnectivity** tool. The tool is installed by default with the agent in the folder *%SystemRoot%\Program Files\Microsoft Monitoring Agent\Agent*. From an elevated command prompt, go to the folder and run the tool. The tool returns the results and highlights where the test failed. For example, perhaps it was related to a particular port or URL that was blocked.
- ![TestCloudConnection tool execution results](./media/agent-windows-troubleshoot/output-testcloudconnection-tool-01.png)
+ ![Screenshot that shows TestCloudConnection tool execution results.](./media/agent-windows-troubleshoot/output-testcloudconnection-tool-01.png)
-- Filter the *Operations Manager* event log by **Event sources** - *Health Service Modules*, *HealthService*, and *Service Connector* and filter by **Event Level** *Warning* and *Error* to confirm if it has written events from the following table. If they are, review the resolution steps included for each possible event.
+- Filter the *Operations Manager* event log by **Event sources** *Health Service Modules*, *HealthService*, and *Service Connector* and by **Event Level** *Warning* and *Error* to confirm whether it has written events from the following table. If it has, review the resolution steps included for each possible event.
|Event ID |Source |Description |Resolution | ||-||--|
- |2133 & 2129 |Health Service |Connection to the service from the agent failed |This error can occur when the agent cannot communicate directly or through a firewall/proxy server to the Azure Monitor service. Verify agent proxy settings or that the network firewall/proxy allows TCP traffic from the computer to the service.|
- |2138 |Health Service Modules |Proxy requires authentication |Configure the agent proxy settings and specify the username/password required to authenticate with the proxy server. |
- |2129 |Health Service Modules |Failed connection/Failed TLS negotiation |Check your network adapter TCP/IP settings and agent proxy settings.|
- |2127 |Health Service Modules |Failure sending data received error code |If it only happens periodically during the day, it could just be a random anomaly that can be ignored. Monitor to understand how often it happens. If it happens often throughout the day, first check your network configuration and proxy settings. If the description includes HTTP error code 404 and it's the first time that the agent tries to send data to the service, it will include a 500 error with an inner 404 error code. 404 means not found, which indicates that the storage area for the new workspace is still being provisioned. On next retry, data will successfully write to the workspace as expected. An HTTP error 403 might indicate a permission or credentials issue. There is more information included with the 403 error to help troubleshoot the issue.|
- |4000 |Service Connector |DNS name resolution failed |The machine could not resolve the Internet address used when sending data to the service. This might be DNS resolver settings on your machine, incorrect proxy settings, or maybe a temporary DNS issue with your provider. If it happens periodically, it could be caused by a transient network-related issue.|
- |4001 |Service Connector |Connection to the service failed. |This error can occur when the agent cannot communicate directly or through a firewall/proxy server to the Azure Monitor service. Verify agent proxy settings or that the network firewall/proxy allows TCP traffic from the computer to the service.|
 |4002 |Service Connector |The service returned HTTP status code 403 in response to a query. Check with the service administrator for the health of the service. The query will be retried later. |This error is written during the agent's initial registration phase and you'll see a URL similar to the following: *https://\<workspaceID>.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest*. An error code 403 means forbidden and can be caused by a mistyped Workspace ID or key, or the data and time is incorrect on the computer. If the time is +/- 15 minutes from current time, then onboarding fails. To correct this, update the date and/or timezone of your Windows computer.|
+ |2133 & 2129 |Health Service |Connection to the service from the agent failed. |This error can occur when the agent can't communicate directly or through a firewall or proxy server to the Azure Monitor service. Verify agent proxy settings or that the network firewall or proxy allows TCP traffic from the computer to the service.|
+ |2138 |Health Service Modules |Proxy requires authentication. |Configure the agent proxy settings and specify the username/password required to authenticate with the proxy server. |
+ |2129 |Health Service Modules |Failed connection. Failed TLS negotiation. |Check your network adapter TCP/IP settings and agent proxy settings.|
+ |2127 |Health Service Modules |Failure sending data received error code. |If it only happens periodically during the day, it could be a random anomaly that can be ignored. Monitor to understand how often it happens. If it happens often throughout the day, first check your network configuration and proxy settings. If the description includes HTTP error code 404 and it's the first time that the agent tries to send data to the service, it will include a 500 error with an inner 404 error code. The 404 error code means "not found," which indicates that the storage area for the new workspace is still being provisioned. On the next retry, data will successfully write to the workspace as expected. An HTTP error 403 might indicate a permission or credentials issue. More information is included with the 403 error to help troubleshoot the issue.|
+ |4000 |Service Connector |DNS name resolution failed. |The machine couldn't resolve the internet address used when it sent data to the service. This issue might be DNS resolver settings on your machine, incorrect proxy settings, or a temporary DNS issue with your provider. If it happens periodically, it could be caused by a transient network-related issue.|
+ |4001 |Service Connector |Connection to the service failed. |This error can occur when the agent can't communicate directly or through a firewall or proxy server to the Azure Monitor service. Verify agent proxy settings or that the network firewall or proxy allows TCP traffic from the computer to the service.|
+ |4002 |Service Connector |The service returned HTTP status code 403 in response to a query. Check with the service administrator for the health of the service. The query will be retried later. |This error is written during the agent's initial registration phase. You'll see a URL similar to *https://\<workspaceID>.oms.opinsights.azure.com/AgentService.svc/AgentTopologyRequest*. A 403 error code means "forbidden" and can be caused by a mistyped Workspace ID or key. The date and time might also be incorrect on the computer. If the time is +/- 15 minutes from current time, onboarding fails. To correct this issue, update the date and/or time of your Windows computer.|
## Data collection issues
-After the agent is installed and reports to its configured workspace or workspaces, it may stop receiving configuration, collecting or forwarding performance, logs, or other data to the service depending on what is enabled and targeting the computer. It is necessary to determine if:
+After the agent is installed and reports to its configured workspace or workspaces, it might stop receiving configuration and collecting or forwarding performance, logs, or other data to the service depending on what's enabled and targeting the computer. You need to determine:
-- Is it a particular data type or all data that is not available in the workspace?
+- Is it a particular data type or all data that's not available in the workspace?
- Is the data type specified by a solution or specified as part of the workspace data collection configuration?
-- How many computers are affected? Is it a single or multiple computers reporting to the workspace?
-- Was it working and did it stop at a particular time of day, or has it never been collected?
-- Is the log search query you are using syntactically correct?
+- How many computers are affected? Is it a single computer or multiple computers reporting to the workspace?
+- Was it working and did it stop at a particular time of day, or has it never been collected?
+- Is the log search query you're using syntactically correct?
- Has the agent ever received its configuration from Azure Monitor?

The first step in troubleshooting is to determine if the computer is sending a heartbeat event.

```
Heartbeat
| summarize arg_max(TimeGenerated, * ) by Computer
```
-If the query returns results, then you need to determine if a particular data type is not collected and forwarded to the service. This could be caused by the agent not receiving updated configuration from the service, or some other symptom preventing the agent from operating normally. Perform the following steps to further troubleshoot.
+If the query returns results, you need to determine if a particular data type isn't collected and forwarded to the service. This issue could be caused by the agent not receiving updated configuration from the service or some other symptom that prevents the agent from operating normally. Perform the following steps to further troubleshoot.
-1. Open an elevated command prompt on the computer and restart the agent service by typing `net stop healthservice && net start healthservice`.
-2. Open the *Operations Manager* event log and search for **event IDs** *7023, 7024, 7025, 7028* and *1210* from **Event source** *HealthService*. These events indicate the agent is successfully receiving configuration from Azure Monitor and they are actively monitoring the computer. The event description for event ID 1210 will also specify on the last line all of the solutions and Insights that are included in the scope of monitoring on the agent.
+1. Open an elevated command prompt on the computer and restart the agent service by entering `net stop healthservice && net start healthservice`.
+1. Open the *Operations Manager* event log and search for **event IDs** *7023, 7024, 7025, 7028*, and *1210* from **Event source** *HealthService*. These events indicate the agent is successfully receiving configuration from Azure Monitor and they're actively monitoring the computer. The event description for event ID 1210 will also specify on the last line all of the solutions and Insights that are included in the scope of monitoring on the agent.
- ![Event ID 1210 description](./media/agent-windows-troubleshoot/event-id-1210-healthservice-01.png)
+ ![Screenshot that shows an Event ID 1210 description.](./media/agent-windows-troubleshoot/event-id-1210-healthservice-01.png)
-3. If after several minutes you do not see the expected data in the query results or visualization, depending on if you are viewing the data from a solution or Insight, from the *Operations Manager* event log, search for **Event sources** *HealthService* and *Health Service Modules* and filter by **Event Level** *Warning* and *Error* to confirm if it has written events from the following table.
+1. Wait several minutes. If you don't see the expected data in the query results or visualization, depending on if you're viewing the data from a solution or Insight, from the *Operations Manager* event log, search for **Event sources** *HealthService* and *Health Service Modules*. Filter by **Event Level** *Warning* and *Error* to confirm if it has written events from the following table.
|Event ID |Source |Description |Resolution |
- ||-||
- |8000 |HealthService |This event will specify if a workflow related to performance, event, or other data type collected is unable to forward to the service for ingestion to the workspace. | Event ID 2136 from source HealthService is written together with this event and can indicate the agent is unable to communicate with the service, possibly due to misconfiguration of the proxy and authentication settings, network outage, or the network firewall/proxy does not allow TCP traffic from the computer to the service.|
- |10102 and 10103 |Health Service Modules |Workflow could not resolve data source. |This can occur if the specified performance counter or instance does not exist on the computer or is incorrectly defined in the workspace data settings. If this is a user-specified [performance counter](data-sources-performance-counters.md#configuring-performance-counters), verify the information specified is following the correct format and exists on the target computers. |
- |26002 |Health Service Modules |Workflow could not resolve data source. |This can occur if the specified Windows event log does not exist on the computer. This error can be safely ignored if the computer is not expected to have this event log registered, otherwise if this is a user-specified [event log](data-sources-windows-events.md#configure-windows-event-logs), verify the information specified is correct. |
+ ||-||--|
+ |8000 |HealthService |This event will specify if a workflow related to performance, event, or other data type collected is unable to forward to the service for ingestion to the workspace. | Event ID 2136 from source HealthService is written together with this event and can indicate the agent is unable to communicate with the service. Possible reasons might be misconfiguration of the proxy and authentication settings, network outage, or the network firewall or proxy doesn't allow TCP traffic from the computer to the service.|
+ |10102 and 10103 |Health Service Modules |Workflow couldn't resolve the data source. |This issue can occur if the specified performance counter or instance doesn't exist on the computer or is incorrectly defined in the workspace data settings. If this is a user-specified [performance counter](data-sources-performance-counters.md#configuring-performance-counters), verify the information specified follows the correct format and exists on the target computers. |
+ |26002 |Health Service Modules |Workflow couldn't resolve the data source. |This issue can occur if the specified Windows event log doesn't exist on the computer. This error can be safely ignored if the computer isn't expected to have this event log registered. Otherwise, if this is a user-specified [event log](data-sources-windows-events.md#configure-windows-event-logs), verify the information specified is correct. |
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
Invoke-RestMethod -Uri $request -Headers $AuthenticationHeader -Method PUT -Body
#2. Create Monitored Object
+# "location" property value under the "body" section should be the Azure region where the MO object would be stored. It should be the "same region" where you created the Data Collection Rule. This is the location of the region from where agent communications would happen.
+ $request = "https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/$TenantID`?api-version=2021-09-01-preview" $body = @' {
$body = @'
$Respond = Invoke-RestMethod -Uri $request -Headers $AuthenticationHeader -Method PUT -Body $body -Verbose
$RespondID = $Respond.id
-#########
+##########################
#3. Associate DCR to Monitored Object
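# A rough sketch of this association step: PUT a data collection rule association onto the
# monitored object whose resource ID was returned in $RespondID. The $DCRId value, the
# association name, and the API version are placeholders/assumptions to adjust as needed.
$DCRId = '/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Insights/dataCollectionRules/<dcr-name>'
$associationName = 'assoc01'
$request = "https://management.azure.com$RespondID/providers/Microsoft.Insights/dataCollectionRuleAssociations/$associationName`?api-version=2021-09-01-preview"
$body = @"
{
  "properties": {
    "dataCollectionRuleId": "$DCRId"
  }
}
"@
Invoke-RestMethod -Uri $request -Headers $AuthenticationHeader -Method PUT -Body $body -Verbose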
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
Title: Monitor data from virtual machines with Azure Monitor agent
-description: Describes how to collect events and performance data from virtual machines using the Azure Monitor agent.
+ Title: Monitor data from virtual machines with Azure Monitor Agent
+description: Describes how to collect events and performance data from virtual machines by using Azure Monitor Agent.
Last updated 06/23/2022
-# Collect data from virtual machines with the Azure Monitor agent
+# Collect data from virtual machines with Azure Monitor Agent
-This article describes how to collect events and performance counters from virtual machines using the Azure Monitor agent.
+This article describes how to collect events and performance counters from virtual machines by using Azure Monitor Agent.
-To collect data from virtual machines using the Azure Monitor agent, you'll:
+To collect data from virtual machines by using Azure Monitor Agent, you'll:
-1. Create [data collection rules (DCR)](../essentials/data-collection-rule-overview.md) that define which data Azure Monitor agent sends to which destinations.
+1. Create [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) that define which data Azure Monitor Agent sends to which destinations.
1. Associate the data collection rule to specific virtual machines.
- You can associate virtual machines to multiple data collection rules. This allows you to define each data collection rule to address a particular requirement, and associate the data collection rules to virtual machines based on the specific data you want to collect from each machine.
+ You can associate virtual machines to multiple data collection rules. Define each data collection rule to address a particular requirement. Associate one or more data collection rules to a virtual machine based on the specific data you want the machine to collect.
## Create data collection rule and association
-To send data to Log Analytics, create the data collection rule in the **same region** as your Log Analytics workspace. You can still associate the rule to machines in other supported regions.
+To send data to Log Analytics, create the data collection rule in the *same region* as your Log Analytics workspace. You can still associate the rule to machines in other supported regions.
### [Portal](#tab/portal)
-1. From the **Monitor** menu, select **Data Collection Rules**.
-1. Select **Create** to create a new Data Collection Rule and associations.
+1. On the **Monitor** menu, select **Data Collection Rules**.
+1. Select **Create** to create a new data collection rule and associations.
- [ ![Screenshot showing the Create button on the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
-
-1. Provide a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**.
+ [ ![Screenshot that shows the Create button on the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rules-updated.png#lightbox)
- **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
+1. Enter a **Rule name** and specify a **Subscription**, **Resource Group**, **Region**, and **Platform Type**:
- **Platform Type** specifies the type of resources this rule can apply to. Custom allows for both Windows and Linux types.
+ - **Region** specifies where the DCR will be created. The virtual machines and their associations can be in any subscription or resource group in the tenant.
- [ ![Screenshot showing the Basics tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
+ - **Platform Type** specifies the type of resources this rule can apply to. The **Custom** option allows for both Windows and Linux types.
-1. On the **Resources** tab, add the resources (virtual machines, virtual machine scale sets, Arc for servers) to which to associate the data collection rule. The portal will install Azure Monitor Agent on resources that don't already have it installed, and will also enable Azure Managed Identity.
+ [ ![Screenshot that shows the Basics tab of the Data Collection Rule screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-basics-updated.png#lightbox)
- > [!IMPORTANT]
- > The portal enables System-Assigned managed identity on the target resources, in addition to existing User-Assigned Identities (if any). For existing applications, unless you specify the User-Assigned identity in the request, the machine will default to using System-Assigned Identity instead.
+1. On the **Resources** tab, add the resources to which to associate the data collection rule. Resources can be virtual machines, virtual machine scale sets, and Azure Arc for servers. The Azure portal installs Azure Monitor Agent on resources that don't already have it installed. The portal also enables Azure Managed Identity.
- If you need network isolation using private links, select existing endpoints from the same region for the respective resources, or [create a new endpoint](../essentials/data-collection-endpoint-overview.md).
+ > [!IMPORTANT]
+ > The portal enables system-assigned managed identity on the target resources, along with existing user-assigned identities, if there are any. For existing applications, unless you specify the user-assigned identity in the request, the machine defaults to using system-assigned identity instead.
- [ ![Screenshot showing the Resources tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
+ If you need network isolation using private links, select existing endpoints from the same region for the respective resources or [create a new endpoint](../essentials/data-collection-endpoint-overview.md).
+ [ ![Screenshot that shows the Resources tab of the Data Collection Rule screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox)
1. On the **Collect and deliver** tab, select **Add data source** to add a data source and set a destination.
1. Select a **Data source type**.
-1. Select which data you want to collect. For performance counters, you can select from a predefined set of objects and their sampling rate. For events, you can select from a set of logs and severity levels.
+1. Select which data you want to collect. For performance counters, you can select from a predefined set of objects and their sampling rate. For events, you can select from a set of logs and severity levels.
- [ ![Screenshot of Azure portal form to select basic performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
+ [ ![Screenshot that shows the Azure portal form to select basic performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-basic-updated.png#lightbox)
-1. Select **Custom** to collect logs and performance counters that are not [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to [filter events using XPath queries](#filter-events-using-xpath-queries). You can then specify an [XPath](https://www.w3schools.com/xml/xpath_syntax.asp) to collect any specific values. See [Sample DCR](data-collection-rule-sample-agent.md) for an example.
+1. Select **Custom** to collect logs and performance counters that aren't [currently supported data sources](azure-monitor-agent-overview.md#data-sources-and-destinations) or to [filter events by using XPath queries](#filter-events-using-xpath-queries). You can then specify an [XPath](https://www.w3schools.com/xml/xpath_syntax.asp) to collect any specific values. For an example, see [Sample DCR](data-collection-rule-sample-agent.md).
- [ ![Screenshot of Azure portal form to select custom performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
+ [ ![Screenshot that shows the Azure portal form to select custom performance counters in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-data-source-custom-updated.png#lightbox)
-1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types - for instance multiple Log Analytics workspaces (known as "multi-homing").
+1. On the **Destination** tab, add one or more destinations for the data source. You can select multiple destinations of the same or different types. For instance, you can select multiple Log Analytics workspaces, which is also known as multihoming.
- You can send Windows event and Syslog data sources can to Azure Monitor Logs only. You can send performance counters to both Azure Monitor Metrics and Azure Monitor Logs.
+ You can send Windows event and Syslog data sources to Azure Monitor Logs only. You can send performance counters to both Azure Monitor Metrics and Azure Monitor Logs.
- [ ![Screenshot of Azure portal form to add a data source in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
+ [ ![Screenshot that shows the Azure portal form to add a data source in a data collection rule.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-destination.png#lightbox)
-1. Select **Add Data Source** and then **Review + create** to review the details of the data collection rule and association with the set of virtual machines.
+1. Select **Add data source** and then select **Review + create** to review the details of the data collection rule and association with the set of virtual machines.
1. Select **Create** to create the data collection rule.

> [!NOTE]
To send data to Log Analytics, create the data collection rule in the **same reg
### [API](#tab/api)
-1. Create a DCR file using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
+1. Create a DCR file by using the JSON format shown in [Sample DCR](data-collection-rule-sample-agent.md).
-2. Create the rule using the [REST API](/rest/api/monitor/datacollectionrules/create#examples).
+1. Create the rule by using the [REST API](/rest/api/monitor/datacollectionrules/create#examples).
-3. Create an association for each virtual machine to the data collection rule using the [REST API](/rest/api/monitor/datacollectionruleassociations/create#examples).
+1. Create an association for each virtual machine to the data collection rule by using the [REST API](/rest/api/monitor/datacollectionruleassociations/create#examples).
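For illustration only, the following PowerShell sketch shows one way to make those REST calls with `Invoke-AzRestMethod`. The subscription, resource group, rule name, file path, and API version are placeholders; verify the current API version against the REST API reference before you use it.

```powershell
# Sketch only: create a DCR from a local JSON definition by calling the ARM REST API.
# Assumes you're signed in with Connect-AzAccount and have a rule file based on the sample DCR.
$subscriptionId = "<subscription-id>"
$resourceGroup  = "<resource-group>"
$ruleName       = "myDataCollectionRule"
$apiVersion     = "2021-09-01-preview"   # assumption: confirm the current api-version in the REST reference

# Read the JSON rule definition created in step 1.
$body = Get-Content -Path ".\rule-file.json" -Raw

Invoke-AzRestMethod `
    -Path "/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Insights/dataCollectionRules/${ruleName}?api-version=$apiVersion" `
    -Method PUT `
    -Payload $body
```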
### [PowerShell](#tab/powershell)
To send data to Log Analytics, create the data collection rule in the **same reg
| Action | Command | |:|:|
-| Get rule(s) | [Get-AzDataCollectionRule](/powershell/module/az.monitor/get-azdatacollectionrule?view=azps-5.4.0&preserve-view=true) |
+| Get rules | [Get-AzDataCollectionRule](/powershell/module/az.monitor/get-azdatacollectionrule?view=azps-5.4.0&preserve-view=true) |
| Create a rule | [New-AzDataCollectionRule](/powershell/module/az.monitor/new-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
| Update a rule | [Set-AzDataCollectionRule](/powershell/module/az.monitor/set-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
| Delete a rule | [Remove-AzDataCollectionRule](/powershell/module/az.monitor/remove-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
-| Update 'Tags' for a rule | [Update-AzDataCollectionRule](/powershell/module/az.monitor/update-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
+| Update "Tags" for a rule | [Update-AzDataCollectionRule](/powershell/module/az.monitor/update-azdatacollectionrule?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
**Data collection rule associations**

| Action | Command |
|:|:|
-| Get association(s) | [Get-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/get-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
+| Get associations | [Get-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/get-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
| Create an association | [New-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/new-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |
| Delete an association | [Remove-AzDataCollectionRuleAssociation](/powershell/module/az.monitor/remove-azdatacollectionruleassociation?view=azps-6.0.0&viewFallbackFrom=azps-5.4.0&preserve-view=true) |

### [Azure CLI](#tab/cli)
-This is enabled as part of Azure CLI **monitor-control-service** Extension. [View all commands](/cli/azure/monitor/data-collection/rule)
+This capability is enabled as part of the Azure CLI monitor-control-service extension. [View all commands](/cli/azure/monitor/data-collection/rule).
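For example, a sketch of creating a rule from a JSON rule file and associating it with a virtual machine might look like the following. The command and parameter names come from the monitor-control-service extension and can change between extension versions, so verify them against the linked command reference.

```azurecli
# Sketch only: create a DCR from a JSON rule file, then associate it with a VM.
az monitor data-collection rule create \
    --resource-group my-resource-group \
    --location eastus \
    --name myDataCollectionRule \
    --rule-file rule-file.json

az monitor data-collection rule association create \
    --name myAssociation \
    --rule-id "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Insights/dataCollectionRules/myDataCollectionRule" \
    --resource "/subscriptions/<subscription-id>/resourceGroups/my-resource-group/providers/Microsoft.Compute/virtualMachines/my-vm"
```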
### [Resource Manager template](#tab/arm)
-See [Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md) for sample templates.
+For sample templates, see [Azure Resource Manager template samples for data collection rules in Azure Monitor](./resource-manager-data-collection-rules.md).
## Filter events using XPath queries
-Since you're charged for any data you collect in a Log Analytics workspace, collect only the data you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
-To specify additional filters, use Custom configuration and specify an XPath that filters out the events you don't need. XPath entries are written in the form `LogName!XPathQuery`. For example, you may want to return only events from the Application event log with an event ID of 1035. The XPathQuery for these events would be `*[System[EventID=1035]]`. Since you want to retrieve the events from the Application event log, the XPath is `Application!*[System[EventID=1035]]`
+You're charged for any data you collect in a Log Analytics workspace, so collect only the data you need. The basic configuration in the Azure portal provides you with a limited ability to filter out events.
+
+To specify more filters, use custom configuration and specify an XPath that filters out the events you don't need. XPath entries are written in the form `LogName!XPathQuery`. For example, you might want to return only events from the Application event log with an event ID of 1035. The `XPathQuery` for these events would be `*[System[EventID=1035]]`. Because you want to retrieve the events from the Application event log, the XPath is `Application!*[System[EventID=1035]]`.
-### Extracting XPath queries from Windows Event Viewer
-In Windows, you can use Event Viewer to extract XPath queries as shown below.
+### Extract XPath queries from Windows Event Viewer
-When you paste the XPath query into the field on the **Add data source** screen, (step 5 in the picture below), you must append the log type category followed by '!'.
+In Windows, you can use Event Viewer to extract XPath queries as shown in the screenshots.
-[ ![Screenshot of steps in Azure portal showing the steps to create an XPath query in the Windows Event Viewer.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
+When you paste the XPath query into the field on the **Add data source** screen, as shown in step 5, you must append the log type category followed by an exclamation point (!).
-See [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations) for a list of limitations in the XPath supported by Windows event log.
+[ ![Screenshot that shows the steps to create an XPath query in the Windows Event Viewer.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
+
+For a list of limitations in the XPath supported by Windows event log, see [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations).
> [!TIP]
-> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPathQuery locally on your machine first. The following script shows an example.
->
+> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPath query locally on your machine first. The following script shows an example:
+>
> ```powershell
> $XPath = '*[System[EventID=1035]]'
> Get-WinEvent -LogName 'Application' -FilterXPath $XPath
> ```
>
-> - **In the cmdlet above, the value of the *-LogName* parameter is the initial part of the XPath query until the '!'. The rest of the XPath query goes into the *$XPath* parameter.**
+> - In the preceding cmdlet, the value of the `-LogName` parameter is the initial part of the XPath query until the exclamation point (!). The rest of the XPath query goes into the `$XPath` parameter.
> - If the script returns events, the query is valid.
-> - If you receive the message *No events were found that match the specified selection criteria.*, the query may be valid, but there are no matching events on the local machine.
-> - If you receive the message *The specified query is invalid* , the query syntax is invalid.
+> - If you receive the message "No events were found that match the specified selection criteria," the query might be valid but there are no matching events on the local machine.
+> - If you receive the message "The specified query is invalid," the query syntax is invalid.
-Examples of filtering events using a custom XPath:
+Examples of using a custom XPath to filter events:
| Description | XPath |
|:|:|
Examples of filtering events using a custom XPath:
## Next steps -- [Collect text logs using Azure Monitor agent.](data-collection-text-log.md)-- Learn more about the [Azure Monitor Agent](azure-monitor-agent-overview.md).
+- [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md).
+- Learn more about [Azure Monitor Agent](azure-monitor-agent-overview.md).
- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
azure-monitor Data Sources Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-syslog.md
Title: Collect Syslog data sources with Log Analytics agent in Azure Monitor
-description: Syslog is an event logging protocol that is common to Linux. This article describes how to configure collection of Syslog messages in Log Analytics and details of the records they create.
+ Title: Collect Syslog data sources with the Log Analytics agent in Azure Monitor
+description: Syslog is an event logging protocol that's common to Linux. This article describes how to configure collection of Syslog messages in Log Analytics and details the records they create.
Last updated 04/06/2022
-# Collect Syslog data sources with Log Analytics agent
-Syslog is an event logging protocol that is common to Linux. Applications will send messages that may be stored on the local machine or delivered to a Syslog collector. When the Log Analytics agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent. The agent then sends the message to Azure Monitor where a corresponding record is created.
+# Collect Syslog data sources with the Log Analytics agent
+Syslog is an event logging protocol that's common to Linux. Applications send messages that might be stored on the local machine or delivered to a Syslog collector. When the Log Analytics agent for Linux is installed, it configures the local Syslog daemon to forward messages to the agent. The agent then sends the messages to Azure Monitor where a corresponding record is created.
> [!NOTE]
-> Azure Monitor supports collection of messages sent by rsyslog or syslog-ng, where rsyslog is the default daemon. The default syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux version (sysklog) is not supported for syslog event collection. To collect syslog data from this version of these distributions, the [rsyslog daemon](http://rsyslog.com) should be installed and configured to replace sysklog.
+> Azure Monitor supports collection of messages sent by rsyslog or syslog-ng, where rsyslog is the default daemon. The default Syslog daemon on version 5 of Red Hat Enterprise Linux, CentOS, and Oracle Linux (sysklog) isn't supported for Syslog event collection. To collect Syslog data from this version of these distributions, the [rsyslog daemon](http://rsyslog.com) should be installed and configured to replace sysklog.
-
-![Syslog collection](media/data-sources-syslog/overview.png)
+![Diagram that shows Syslog collection.](media/data-sources-syslog/overview.png)
The following facilities are supported with the Syslog collector:
The following facilities are supported with the Syslog collector:
* local0-local7

For any other facility, [configure a Custom Logs data source](data-sources-custom-logs.md) in Azure Monitor.
-
-## Configuring Syslog
+
+## Configure Syslog
+
The Log Analytics agent for Linux will only collect events with the facilities and severities that are specified in its configuration. You can configure Syslog through the Azure portal or by managing configuration files on your Linux agents.

### Configure Syslog in the Azure portal
+
Configure Syslog from the [Agent configuration menu](../agents/agent-data-sources.md#configuring-data-sources) for the Log Analytics workspace. This configuration is delivered to the configuration file on each Linux agent.
-You can add a new facility by clicking **Add facility**. For each facility, only messages with the selected severities will be collected. Check the severities for the particular facility that you want to collect. You cannot provide any additional criteria to filter messages.
+You can add a new facility by selecting **Add facility**. For each facility, only messages with the selected severities will be collected. Select the severities for the particular facility that you want to collect. You can't provide any other criteria to filter messages.
-[![Configure Syslog](media/data-sources-syslog/configure.png)](media/data-sources-syslog/configure.png#lightbox)
+[![Screenshot that shows configuring Syslog.](media/data-sources-syslog/configure.png)](media/data-sources-syslog/configure.png#lightbox)
-By default, all configuration changes are automatically pushed to all agents. If you want to configure Syslog manually on each Linux agent, then uncheck the box *Apply below configuration to my machines*.
+By default, all configuration changes are automatically pushed to all agents. If you want to configure Syslog manually on each Linux agent, clear the **Apply below configuration to my machines** checkbox.
### Configure Syslog on Linux agent
-When the [Log Analytics agent is installed on a Linux client](../vm/monitor-virtual-machine.md), it installs a default syslog configuration file that defines the facility and severity of the messages that are collected. You can modify this file to change the configuration. The configuration file is different depending on the Syslog daemon that the client has installed.
+
+When the [Log Analytics agent is installed on a Linux client](../vm/monitor-virtual-machine.md), it installs a default Syslog configuration file that defines the facility and severity of the messages that are collected. You can modify this file to change the configuration. The configuration file is different depending on the Syslog daemon that the client has installed.
> [!NOTE]
-> If you edit the syslog configuration, you must restart the syslog daemon for the changes to take effect.
+> If you edit the Syslog configuration, you must restart the Syslog daemon for the changes to take effect.
> > #### rsyslog
-The configuration file for rsyslog is located at **/etc/rsyslog.d/95-omsagent.conf**. Its default contents are shown below. This collects syslog messages sent from the local agent for all facilities with a level of warning or higher.
+
+The configuration file for rsyslog is located at `/etc/rsyslog.d/95-omsagent.conf`. Its default contents are shown in the following example. This example collects Syslog messages sent from the local agent for all facilities with a level of warning or higher.
```config
kern.warning @127.0.0.1:25224
local6.warning @127.0.0.1:25224
local7.warning @127.0.0.1:25224
```
-You can remove a facility by removing its section of the configuration file. You can limit the severities that are collected for a particular facility by modifying that facility's entry. For example, to limit the user facility to messages with a severity of error or higher you would modify that line of the configuration file to the following:
+You can remove a facility by removing its section of the configuration file. You can limit the severities that are collected for a particular facility by modifying that facility's entry. For example, to limit the user facility to messages with a severity of error or higher, you would modify that line of the configuration file to the following example:
```config
user.error @127.0.0.1:25224
```

#### syslog-ng
-The configuration file for syslog-ng is location at **/etc/syslog-ng/syslog-ng.conf**. Its default contents are shown below. This collects syslog messages sent from the local agent for all facilities and all severities.
+
+The configuration file for syslog-ng is located at `/etc/syslog-ng/syslog-ng.conf`. Its default contents are shown in this example. This example collects Syslog messages sent from the local agent for all facilities and all severities.
```config #
filter f_user_oms { level(alert,crit,debug,emerg,err,info,notice,warning) and fa
log { source(src); filter(f_user_oms); destination(d_oms); };
```
-You can remove a facility by removing its section of the configuration file. You can limit the severities that are collected for a particular facility by removing them from its list. For example, to limit the user facility to just alert and critical messages, you would modify that section of the configuration file to the following:
+You can remove a facility by removing its section of the configuration file. You can limit the severities that are collected for a particular facility by removing them from its list. For example, to limit the user facility to only alert and critical messages, you would modify that section of the configuration file as shown in the following example:
```config
#OMS_facility = user
filter f_user_oms { level(alert,crit) and facility(user); };
log { source(src); filter(f_user_oms); destination(d_oms); };
```
-### Collecting data from additional Syslog ports
-The Log Analytics agent listens for Syslog messages on the local client on port 25224. When the agent is installed, a default syslog configuration is applied and found in the following location:
+### Collect data from other Syslog ports
+
+The Log Analytics agent listens for Syslog messages on the local client on port 25224. When the agent is installed, a default Syslog configuration is applied and found in the following location:
* Rsyslog: `/etc/rsyslog.d/95-omsagent.conf`
* Syslog-ng: `/etc/syslog-ng/syslog-ng.conf`
-You can change the port number by creating two configuration files: a FluentD config file and a rsyslog-or-syslog-ng file depending on the Syslog daemon you have installed.
+You can change the port number by creating two configuration files: a FluentD config file and an rsyslog or syslog-ng config file, depending on the Syslog daemon you have installed.
-* The FluentD config file should be a new file located in: `/etc/opt/microsoft/omsagent/conf/omsagent.d` and replace the value in the **port** entry with your custom port number.
+* The FluentD config file should be a new file located in `/etc/opt/microsoft/omsagent/conf/omsagent.d`. Replace the value in the `port` entry with your custom port number.
```xml <source>
You can change the port number by creating two configuration files: a FluentD co
type filter_syslog ```
-* For rsyslog, you should create a new configuration file located in: `/etc/rsyslog.d/` and replace the value %SYSLOG_PORT% with your custom port number.
+* For rsyslog, you should create a new configuration file located in `/etc/rsyslog.d/` and replace the value `%SYSLOG_PORT%` with your custom port number.
> [!NOTE]
> If you modify this value in the configuration file `95-omsagent.conf`, it will be overwritten when the agent applies a default configuration.
You can change the port number by creating two configuration files: a FluentD co
auth.warning @127.0.0.1:%SYSLOG_PORT%
```
-* The syslog-ng config should be modified by copying the example configuration shown below and adding the custom modified settings to the end of the syslog-ng.conf configuration file located in `/etc/syslog-ng/`. Do **not** use the default label **%WORKSPACE_ID%_oms** or **%WORKSPACE_ID_OMS**, define a custom label to help distinguish your changes.
+* The syslog-ng config should be modified by copying the example configuration shown next and adding the custom modified settings to the end of the `syslog-ng.conf` configuration file located in `/etc/syslog-ng/`. Do *not* use the default label `%WORKSPACE_ID%_oms` or `%WORKSPACE_ID_OMS`. Define a custom label to help distinguish your changes.
> [!NOTE]
- > If you modify the default values in the configuration file, they will be overwritten when the agent applies a default configuration.
+ > If you modify the default values in the configuration file, they'll be overwritten when the agent applies a default configuration.
> ```config
You can change the port number by creating two configuration files: a FluentD co
log { source(s_src); filter(f_custom_filter); destination(d_custom_dest); }; ```
-After completing the changes, the Syslog and the Log Analytics agent service needs to be restarted to ensure the configuration changes take effect.
+After you finish the changes, restart the Syslog and the Log Analytics agent service to ensure the configuration changes take effect.
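For example, on a distribution that uses systemd with the rsyslog daemon, the restart might look like the following sketch. The `service_control` path assumes a default Log Analytics agent (omsagent) installation.

```bash
# Restart the Syslog daemon (rsyslog in this example; restart syslog-ng instead if that's your daemon).
sudo systemctl restart rsyslog

# Restart the Log Analytics agent for Linux. Optionally append your workspace ID as an argument.
sudo /opt/microsoft/omsagent/bin/service_control restart
```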
## Syslog record properties
-Syslog records have a type of **Syslog** and have the properties in the following table.
+
+Syslog records have a type of **Syslog** and have the properties shown in the following table.
| Property | Description | |: |: |
Syslog records have a type of **Syslog** and have the properties in the followin
| EventTime |Date and time that the event was generated. |

## Log queries with Syslog records
+
The following table provides different examples of log queries that retrieve Syslog records.

| Query | Description |
|: |: |
-| Syslog |All Syslogs. |
-| Syslog &#124; where SeverityLevel == "error" |All Syslog records with severity of error. |
-| Syslog &#124; summarize AggregatedValue = count() by Computer |Count of Syslog records by computer. |
-| Syslog &#124; summarize AggregatedValue = count() by Facility |Count of Syslog records by facility. |
+| Syslog |All Syslogs |
+| Syslog &#124; where SeverityLevel == "error" |All Syslog records with severity of error |
+| Syslog &#124; summarize AggregatedValue = count() by Computer |Count of Syslog records by computer |
+| Syslog &#124; summarize AggregatedValue = count() by Facility |Count of Syslog records by facility |
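These building blocks can also be combined. For example, the following sketch charts error-severity Syslog records per computer over the last day; adjust the time range and severity value to match your data.

```kusto
// Count error-severity Syslog records per computer, bucketed by hour, over the last 24 hours.
Syslog
| where TimeGenerated > ago(24h)
| where SeverityLevel == "error"
| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 1h)
| render timechart
```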
## Next steps
+
* Learn about [log queries](../logs/log-query-overview.md) to analyze the data collected from data sources and solutions.
-* Use [Custom Fields](../logs/custom-fields.md) to parse data from syslog records into individual fields.
+* Use [custom fields](../logs/custom-fields.md) to parse data from Syslog records into individual fields.
* [Configure Linux agents](../vm/monitor-virtual-machine.md) to collect other types of data.
azure-monitor Diagnostics Extension Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-overview.md
Title: Azure Diagnostics extension overview
-description: Use Azure diagnostics for debugging, measuring performance, monitoring, traffic analysis in cloud services, virtual machines and service fabric
+description: Use Azure Diagnostics for debugging, measuring performance, monitoring, and performing traffic analysis in cloud services, virtual machines, and service fabric.
Last updated 04/06/2022
# Azure Diagnostics extension overview
-Azure Diagnostics extension is an [agent in Azure Monitor](../agents/agents-overview.md) that collects monitoring data from the guest operating system of Azure compute resources including virtual machines. This article provides an overview of Azure Diagnostics extension including specific functionality that it supports and options for installation and configuration.
+
+Azure Diagnostics extension is an [agent in Azure Monitor](../agents/agents-overview.md) that collects monitoring data from the guest operating system of Azure compute resources including virtual machines. This article provides an overview of Azure Diagnostics extension, the specific functionality that it supports, and options for installation and configuration.
> [!NOTE]
-> Azure Diagnostics extension is one of the agents available to collect monitoring data from the guest operating system of compute resources. See [Overview of the Azure Monitor agents](../agents/agents-overview.md) for a description of the different agents and guidance on selecting the appropriate agents for your requirements.
+> Azure Diagnostics extension is one of the agents available to collect monitoring data from the guest operating system of compute resources. For a description of the different agents and guidance on selecting the appropriate agents for your requirements, see [Overview of the Azure Monitor agents](../agents/agents-overview.md).
## Primary scenarios
-The primary scenarios addressed by the diagnostics extension are:
-Use the Azure Diagnostics extension if you need to:
+Use Azure Diagnostics extension if you need to:
- Send data to Azure Storage for archiving or to analyze it with tools such as [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md).-- Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [Metrics Explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only).
+- Send data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md) to analyze it with [metrics explorer](../essentials/metrics-getting-started.md) and to take advantage of features such as near-real-time [metric alerts](../alerts/alerts-metric-overview.md) and [autoscale](../autoscale/autoscale-overview.md) (Windows only).
- Send data to third-party tools by using [Azure Event Hubs](./diagnostics-extension-stream-event-hubs.md).-- Collect [Boot Diagnostics](/troubleshoot/azure/virtual-machines/boot-diagnostics) to investigate VM boot issues.-
-Limitations of the Azure Diagnostics extension:
+- Collect [boot diagnostics](/troubleshoot/azure/virtual-machines/boot-diagnostics) to investigate VM boot issues.
-- Can only be used with Azure resources.-- Limited ability to send data to Azure Monitor Logs.
+Limitations of Azure Diagnostics extension:
+- It can only be used with Azure resources.
+- It has limited ability to send data to Azure Monitor Logs.
## Comparison to Log Analytics agent
-The Log Analytics agent in Azure Monitor can also be used to collect monitoring data from the guest operating system of virtual machines. You may choose to use either or both depending on your requirements. See [Overview of the Azure Monitor agents](../agents/agents-overview.md) for a detailed comparison of the Azure Monitor agents.
+
+The Log Analytics agent in Azure Monitor can also be used to collect monitoring data from the guest operating system of virtual machines. You can choose to use either or both depending on your requirements. For a comparison of the Azure Monitor agents, see [Overview of the Azure Monitor agents](../agents/agents-overview.md).
The key differences to consider are: -- Azure Diagnostics Extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises.-- Azure Diagnostics extension sends data to Azure Storage, [Azure Monitor Metrics](../essentials/data-platform-metrics.md) (Windows only) and Event Hubs. The Log Analytics agent collects data to [Azure Monitor Logs](../logs/data-platform-logs.md).
+- Azure Diagnostics extension can be used only with Azure virtual machines. The Log Analytics agent can be used with virtual machines in Azure, other clouds, and on-premises.
+- Azure Diagnostics extension sends data to Azure Storage, [Azure Monitor Metrics](../essentials/data-platform-metrics.md) (Windows only), and Azure Event Hubs. The Log Analytics agent collects data to [Azure Monitor Logs](../logs/data-platform-logs.md).
- The Log Analytics agent is required for [solutions](../monitor-reference.md#insights-and-curated-visualizations), [VM insights](../vm/vminsights-overview.md), and other services such as [Microsoft Defender for Cloud](../../security-center/index.yml). ## Costs
-There is no cost for Azure Diagnostic Extension, but you may incur charges for the data ingested. Check [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for the destination where you're collecting data.
+
+There's no cost for Azure Diagnostics extension, but you might incur charges for the data ingested. Check [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for the destination where you're collecting data.
## Data collected
+
The following tables list the data that can be collected by the Windows and Linux diagnostics extension.

### Windows diagnostics extension (WAD)
-| Data Source | Description |
+| Data source | Description |
| | |
-| Windows Event logs | Events from Windows event log. |
+| Windows event logs | Events from Windows event log. |
| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads. |
-| IIS Logs | Usage information for IIS web sites running on the guest operating system. |
+| IIS logs | Usage information for IIS websites running on the guest operating system. |
| Application logs | Trace messages written by your application. |
-| .NET EventSource logs |Code writing events using the .NET [EventSource](/dotnet/api/system.diagnostics.tracing.eventsource) class |
-| [Manifest based ETW logs](/windows/desktop/etw/about-event-tracing) |Event Tracing for Windows events generated by any process. |
+| .NET EventSource logs |Code writing events using the .NET [EventSource](/dotnet/api/system.diagnostics.tracing.eventsource) class. |
+| [Manifest-based ETW logs](/windows/desktop/etw/about-event-tracing) |Event tracing for Windows events generated by any process. |
| Crash dumps (logs) | Information about the state of the process if an application crashes. |
-| File based logs | Logs created by your application or service. |
+| File-based logs | Logs created by your application or service. |
| Agent diagnostic logs | Information about Azure Diagnostics itself. |
-
### Linux diagnostics extension (LAD)
-| Data Source | Description |
+| Data source | Description |
| | |
-| Syslog | Events sent to the Linux event logging system. |
-| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads. |
-| Log files | Entries sent to a file based log. |
+| Syslog | Events sent to the Linux event logging system |
+| Performance counters | Numerical values measuring performance of different aspects of operating system and workloads |
+| Log files | Entries sent to a file-based log |
## Data destinations
-The Azure Diagnostic extension for both Windows and Linux always collect data into an Azure Storage account. See [Install and configure Windows Azure diagnostics extension (WAD)](diagnostics-extension-windows-install.md) and [Use Linux Diagnostic Extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md) for a list of specific tables and blobs where this data is collected.
-Configure one or more *data sinks* to send data to other additional destinations. The following sections list the sinks available for the Windows and Linux diagnostics extension.
+The Azure Diagnostics extension for both Windows and Linux always collects data into an Azure Storage account. For a list of specific tables and blobs where this data is collected, see [Install and configure Azure Diagnostics extension for Windows](diagnostics-extension-windows-install.md) and [Use Azure Diagnostics extension for Linux to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md).
+
+Configure one or more *data sinks* to send data to other destinations. The following sections list the sinks available for the Windows and Linux diagnostics extension.
### Windows diagnostics extension (WAD)

| Destination | Description |
|:|:|
| Azure Monitor Metrics | Collect performance data to Azure Monitor Metrics. See [Send Guest OS metrics to the Azure Monitor metric database](../essentials/collect-custom-metrics-guestos-resource-manager-vm.md). |
-| Event hubs | Use Azure Event Hubs to send data outside of Azure. See [Streaming Azure Diagnostics data to Event Hubs](diagnostics-extension-stream-event-hubs.md) |
-| Azure Storage blobs | Write to data to blobs in Azure Storage in addition to tables. |
+| Event hubs | Use Azure Event Hubs to send data outside of Azure. See [Streaming Azure Diagnostics data to Azure Event Hubs](diagnostics-extension-stream-event-hubs.md). |
+| Azure Storage blobs | Write data to blobs in Azure Storage in addition to tables. |
| Application Insights | Collect data from applications running in your VM to Application Insights to integrate with other application monitoring. See [Send diagnostic data to Application Insights](diagnostics-extension-to-application-insights.md). |
-You can also collect WAD data from storage into a Log Analytics workspace to analyze it with Azure Monitor Logs although the Log Analytics agent is typically used for this functionality. It can send data directly to a Log Analytics workspace and supports solutions and insights that provide additional functionality. See [Collect Azure diagnostic logs from Azure Storage](../agents/diagnostics-extension-logs.md).
-
+You can also collect WAD data from storage into a Log Analytics workspace to analyze it with Azure Monitor Logs, although the Log Analytics agent is typically used for this functionality. It can send data directly to a Log Analytics workspace and supports solutions and insights that provide more functionality. See [Collect Azure diagnostic logs from Azure Storage](../agents/diagnostics-extension-logs.md).
### Linux diagnostics extension (LAD)
+
LAD writes data to tables in Azure Storage. It supports the sinks in the following table.

| Destination | Description |
|:|:|
| Event hubs | Use Azure Event Hubs to send data outside of Azure. |
-| Azure Storage blobs | Write to data to blobs in Azure Storage in addition to tables. |
+| Azure Storage blobs | Write data to blobs in Azure Storage in addition to tables. |
| Azure Monitor Metrics | Install the Telegraf agent in addition to LAD. See [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](../essentials/collect-custom-metrics-linux-telegraf.md). |
-
## Installation and configuration
-The Diagnostic extension is implemented as a [virtual machine extension](../../virtual-machines/extensions/overview.md) in Azure, so it supports the same installation options using Resource Manager templates, PowerShell, and CLI. See [Virtual machine extensions and features for Windows](../../virtual-machines/extensions/features-windows.md) and [Virtual machine extensions and features for Linux](../../virtual-machines/extensions/features-linux.md) for general details on installing and maintaining virtual machine extensions.
-You can also install and configure both the Windows and Linux diagnostic extension in the Azure portal under **Diagnostic settings** in the **Monitoring** section of the virtual machine's menu.
+The diagnostics extension is implemented as a [virtual machine extension](../../virtual-machines/extensions/overview.md) in Azure, so it supports the same installation options using Azure Resource Manager templates, PowerShell, and the Azure CLI. For information on installing and maintaining virtual machine extensions, see [Virtual machine extensions and features for Windows](../../virtual-machines/extensions/features-windows.md) and [Virtual machine extensions and features for Linux](../../virtual-machines/extensions/features-linux.md).
-See the following articles for details on installing and configuring the diagnostics extension for Windows and Linux.
+You can also install and configure both the Windows and Linux diagnostics extension in the Azure portal under **Diagnostic settings** in the **Monitoring** section of the virtual machine's menu.
-- [Install and configure Windows Azure diagnostics extension (WAD)](diagnostics-extension-windows-install.md)-- [Use Linux Diagnostic Extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md)
+See the following articles for information on installing and configuring the diagnostics extension for Windows and Linux:
+
+- [Install and configure Azure Diagnostics extension for Windows](diagnostics-extension-windows-install.md)
+- [Use Linux diagnostics extension to monitor metrics and logs](../../virtual-machines/extensions/diagnostics-linux.md)
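As an illustrative sketch only, applying a diagnostics configuration to an existing Windows VM with PowerShell might look like the following. The cmdlet comes from the Az.Compute module; the resource names and configuration file path are placeholders, and the preceding articles cover the full configuration schema.

```powershell
# Sketch only: apply a WAD configuration file to an existing Windows VM.
Set-AzVMDiagnosticsExtension `
    -ResourceGroupName "my-resource-group" `
    -VMName "my-windows-vm" `
    -DiagnosticsConfigurationPath ".\diagnostics-config.xml"
```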
## Other documentation
-### Azure Cloud Service (classic) Web and Worker Roles
-- [Introduction to Cloud Service Monitoring](../../cloud-services/cloud-services-how-to-monitor.md)
+See the following articles for more information.
+
+### Azure Cloud Services (classic) web and worker roles
+
+- [Introduction to Azure Cloud Services monitoring](../../cloud-services/cloud-services-how-to-monitor.md)
- [Enabling Azure Diagnostics in Azure Cloud Services](../../cloud-services/cloud-services-dotnet-diagnostics.md)-- [Application Insights for Azure cloud services](../app/azure-web-apps-net-core.md)<br>[Trace the flow of a Cloud Services application with Azure Diagnostics](../../cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md)
+- [Application Insights for Azure Cloud Services](../app/azure-web-apps-net-core.md)<br>
+- [Trace the flow of an Azure Cloud Services application with Azure Diagnostics](../../cloud-services/cloud-services-dotnet-diagnostics-trace-flow.md)
### Azure Service Fabric-- [Monitor and diagnose services in a local machine development setup](../../service-fabric/service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally.md)
-## Next steps
+[Monitor and diagnose services in a local machine development setup](../../service-fabric/service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally.md)
+## Next steps
-* Learn to [use Performance Counters in Azure Diagnostics](../../cloud-services/diagnostics-performance-counters.md).
-* If you have trouble with diagnostics starting or finding your data in Azure storage tables, see [TroubleShooting Azure Diagnostics](diagnostics-extension-troubleshooting.md)
+* Learn to [use performance counters in Azure Diagnostics](../../cloud-services/diagnostics-performance-counters.md).
+* If you have trouble with diagnostics starting or finding your data in Azure Storage tables, see [Troubleshooting Azure Diagnostics](diagnostics-extension-troubleshooting.md).
azure-monitor Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/data-sources.md
This article describes common sources of monitoring data collected by Azure Moni
Some of these data sources use the [new data ingestion pipeline](essentials/data-collection.md) in Azure Monitor. This article will be updated as other data sources transition to this new data collection method.
+> [!NOTE]
+> Access to data in Log Analytics workspaces is governed as outlined in [Manage access to Log Analytics workspaces](https://learn.microsoft.com/azure/azure-monitor/logs/manage-access).
+>
+
## Application tiers

Sources of monitoring data from Azure applications can be organized into tiers, the highest tiers being your application itself and the lower tiers being components of the Azure platform. The method of accessing data from each tier varies. The application tiers are summarized in the table below, and the sources of monitoring data in each tier are presented in the following sections. See [Monitoring data locations in Azure](monitor-reference.md) for a description of each data location and how you can access its data.
azure-monitor Log Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-overview.md
Last updated 06/28/2022
# Overview of Log Analytics in Azure Monitor
-Log Analytics is a tool in the Azure portal that's used to edit and run log queries with data in Azure Monitor Logs.
+Log Analytics is a tool in the Azure portal that's used to edit and run log queries against data in the Azure Monitor Logs store.
You might write a simple query that returns a set of records and then use features of Log Analytics to sort, filter, and analyze them. Or you might write a more advanced query to perform statistical analysis and visualize the results in a chart to identify a particular trend.
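For example, a simple query and a more advanced one might look like the following sketches, which assume the workspace receives agent data in the standard `Heartbeat` and `Perf` tables.

```kusto
// Simple: return the 10 most recent heartbeat records.
Heartbeat
| take 10

// More advanced: average processor time per computer in 15-minute bins, rendered as a chart.
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 15m)
| render timechart
```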
azure-resource-manager Scenarios Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scenarios-rbac.md
If you don't explicitly specify the scope, Bicep uses the file's `targetScope`.
::: code language="bicep" source="~/azure-docs-bicep-samples/samples/scenarios-rbac/scope-default.bicep" highlight="4" ::: > [!TIP]
-> Ensure you use the smallest scope required for your requirements.
+> Use the smallest scope that you need to meet your requirements.
> > For example, if you need to grant a managed identity access to a single storage account, it's good security practice to create the role assignment at the scope of the storage account, not at the resource group or subscription scope.
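A minimal sketch of that narrower scope, assuming an existing storage account and a principal ID and role definition ID passed in as parameters, might look like this:

```bicep
param storageAccountName string
param principalId string
param roleDefinitionId string // assumption: the GUID of a built-in role such as Storage Blob Data Reader

resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' existing = {
  name: storageAccountName
}

// Scope the role assignment to the storage account rather than the resource group or subscription.
resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
  name: guid(storageAccount.id, principalId, roleDefinitionId)
  scope: storageAccount
  properties: {
    principalId: principalId
    roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleDefinitionId)
    principalType: 'ServicePrincipal'
  }
}
```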
azure-resource-manager Request Limits And Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/request-limits-and-throttling.md
Title: Request limits and throttling description: Describes how to use throttling with Azure Resource Manager requests when subscription limits have been reached. Previously updated : 12/01/2021 Last updated : 09/30/2022 # Throttling Resource Manager requests
For information about throttling in other resource providers, see:
* [Azure Key Vault throttling guidance](../../key-vault/general/overview-throttling.md) * [AKS troubleshooting](../../aks/troubleshooting.md#im-receiving-429too-many-requests-errors)
+* [Managed identities](../../active-directory/managed-identities-azure-resources/managed-identities-faq.md#are-there-any-rate-limits-that-apply-to-managed-identities)
## Error code
azure-video-indexer Restricted Viewer Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/restricted-viewer-role.md
Users with this role are **unable** to perform the following tasks:
## Using an ARM API
-To generate a Video Indexer restricted viewer access token via API, see [documentation](https://aka.ms/vi-restricted-doc).
+To generate a Video Indexer restricted viewer access token via API, see [documentation](/rest/api/videoindexer/generate/access-token).
## Restricted Viewer Video Indexer website experience
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
You can access Azure Video Indexer capabilities in three ways:
* API integration: All of Azure Video Indexer's capabilities are available through a REST API, which lets you integrate the solution into your apps and infrastructure. To get started, seeΓÇ»[Use Azure Video Indexer REST API](video-indexer-use-apis.md). * Embeddable widget: Lets you embed the Azure Video Indexer insights, player, and editor experiences into your app. For more information, seeΓÇ»[Embed visual widgets in your application](video-indexer-embed-widgets.md).
-If you're using the website, the insights are added as metadata and are visible in the portal. If you're using APIs, the insights are available as a JSON file. This quickstart shows you how to sign in to the Azure Video Indexer [website](https://www.videoindexer.ai/) and how to upload your first video.
--
-## Sign up for Azure Video Indexer
-
-To start developing with Azure Video Indexer, browse to the [Azure Video Indexer](https://www.videoindexer.ai/) website and sign up.
- Once you start using Azure Video Indexer, all your stored data and uploaded content are encrypted at rest with a Microsoft managed key.
-You can access Azure Video Indexer capabilities in three ways:
-
-* Azure Video Indexer portal: An easy-to-use solution that lets you evaluate the product, manage the account, and customize models.
-
- For more information about the portal, see [Get started with the Azure Video Indexer website](video-indexer-get-started.md).
-* API integration: All of Azure Video Indexer's capabilities are available through a REST API, which lets you integrate the solution into your apps and infrastructure.
-
- To get started as a developer, seeΓÇ»[Use Azure Video Indexer REST API](video-indexer-use-apis.md).
-* Embeddable widget: Lets you embed the Azure Video Indexer insights, player, and editor experiences into your app.
-
- For more information, seeΓÇ»[Embed visual widgets in your application](video-indexer-embed-widgets.md).
-If you're using the website, the insights are added as metadata and are visible in the portal. If you're using APIs, the insights are available as a JSON file.
> [!NOTE]
> Review [planned Azure Video Indexer website authentication changes](./release-notes.md#planned-azure-video-indexer-website-authenticatication-changes).
-## Upload a video using the Azure Video Indexer website
+This quickstart shows you how to sign in to the Azure Video Indexer [website](https://www.videoindexer.ai/) and how to upload your first video.
++
+## Sign up and upload a video
### Supported browsers
The following list shows the supported browsers that you can use for the Azure V
See the [input container/file formats](/azure/media-services/latest/encode-media-encoder-standard-formats-reference) article for a list of file formats that you can use with Azure Video Indexer.
-### Upload a video
+### Upload
1. Sign in on the [Azure Video Indexer](https://www.videoindexer.ai/) website.
1. To upload a video, press the **Upload** button or link.
container-registry Container Registry Soft Delete Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-soft-delete-policy.md
The default retention period is seven days. It's possible to set the retention p
The auto-purge runs every 24 hours. The auto-purge always considers the current value of `retention days` before permanently deleting the soft deleted artifacts. For example, after five days of soft deleting the artifact, if the user changes the value of retention days from seven to 14 days, the artifact will only expire after 14 days from the initial soft delete.
-## Preview limitations
+## Preview limitations
* ACR currently doesn't support manually purging soft deleted artifacts. * The soft delete policy doesn't support a geo-replicated registry. * ACR doesn't allow enabling both the retention policy and the soft delete policy. See [retention policy for untagged manifests.](container-registry-retention-policy.md)
+## Known issues
+
+>* Enabling the soft delete policy with AZ through an ARM template leaves the registry stuck in the `creation` state. To avoid this issue, we recommend deleting the registry and re-creating it with the soft delete policy disabled.
+>* Accessing the manage deleted artifacts blade after disabling the soft delete policy returns an error message with a 405 status.
+>* Customers with restrictions on restore permissions will see a "File not found" error.
## Enable soft delete policy for registry - CLI

1. Update the soft delete policy for a given `MyRegistry` ACR with a retention period set between 1 and 90 days, as in the sketch that follows.
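The following is an unverified sketch of that update, based on the preview `az acr config soft-delete` command group; confirm the exact syntax against the current Azure CLI reference before use.

```azurecli
# Sketch only: enable the soft delete policy on MyRegistry with a 7-day retention period.
az acr config soft-delete update --registry MyRegistry --days 7 --status enabled

# Check the current soft delete policy.
az acr config soft-delete show --registry MyRegistry
```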
You can also enable a registry's soft delete policy in the [Azure portal](https:
## Next steps
-* Learn more about options to [delete images and repositories](container-registry-delete.md) in Azure Container Registry.
+* Learn more about options to [delete images and repositories](container-registry-delete.md) in Azure Container Registry.
defender-for-iot How To Set Up High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-high-availability.md
The installation and configuration procedures are performed in four main stages:
1. Install an on-premises management console secondary appliance. For more information, see [About the Defender for IoT Installation](how-to-install-software.md).
-1. Pair the primary and secondary on-premises management console appliances as described [here](https://infrascale.secure.force.com/pkb/articles/Support_Article/How-to-access-your-Appliance-Management-Console). The primary on-premises management console must manage at least two sensors in order to carry out the setup.
+1. Pair the primary and secondary on-premises management console appliances. The primary on-premises management console must manage at least two sensors in order to carry out the setup.
+
+ For more information, see [Create the primary and secondary pair](#create-the-primary-and-secondary-pair).
## High availability requirements
load-testing How To Create And Run Load Test With Jmeter Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-and-run-load-test-with-jmeter-script.md
Previously updated : 06/10/2022 Last updated : 10/02/2022 adobe-target: true
Use cases for creating a load test with an existing JMeter script include:
## Create an Apache JMeter script
-If you don't have an existing Apache JMeter script, you'll create a sample script to load test a single web application endpoint. For more information about creating an Apache JMeter script, see [Getting started with Apache JMeter](https://jmeter.apache.org/usermanual/get-started.html).
+If you already have a script, you can skip to [Create a load test](#create-a-load-test). In this section, you'll create a sample JMeter test script to load test a single web endpoint.
-If you already have a script, you can skip to [Create a load test](#create-a-load-test).
+You can also use the [Apache JMeter test script recorder](https://jmeter.apache.org/usermanual/jmeter_proxy_step_by_step.html) to record the requests while navigating the application in a browser. Alternatively, [import cURL commands](https://jmeter.apache.org/usermanual/curl.html) to generate the requests in the JMeter test script.
+
+To create a sample JMeter test script:
1. Create a *SampleTest.jmx* file on your local machine:
If you already have a script, you can skip to [Create a load test](#create-a-loa
</ThreadGroup> <hashTree> <HTTPSamplerProxy guiclass="HttpTestSampleGui" testclass="HTTPSamplerProxy" testname="HTTP request" enabled="true">
- <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="User Defined Variables" enabled="true">
+ <elementProp name="HTTPsampler.Arguments" elementType="Arguments" guiclass="HTTPArgumentsPanel" testclass="Arguments" testname="Sample web test" enabled="true">
<collectionProp name="Arguments.arguments"/> </elementProp> <stringProp name="HTTPSampler.domain"></stringProp>
If you already have a script, you can skip to [Create a load test](#create-a-loa
## Create a load test
-To create a load test in Azure Load Testing, you have to specify a JMeter script. This script defines the [test plan](./how-to-create-manage-test.md#test-plan) for the load test. You can create multiple load tests in an Azure Load Testing resource.
+When you create a load test in Azure Load Testing, you specify a JMeter script to define the [load test plan](./how-to-create-manage-test.md#test-plan). An Azure Load Testing resource can contain multiple load tests.
-> [!NOTE]
-> When you [create a quick test by using a URL](./quickstart-create-and-run-load-test.md), Azure Load Testing automatically generates the JMeter script.
+When you [create a quick test by using a URL](./quickstart-create-and-run-load-test.md), Azure Load Testing automatically generates the corresponding JMeter script.
To create a load test using an existing JMeter script in the Azure portal:
To create a load test using an existing JMeter script in the Azure portal:
1. Select **Review + create**. Review all settings, and then select **Create** to create the load test.
- :::image type="content" source="./media/how-to-create-and-run-load-test-with-jmeter-script/create-new-test-review.png" alt-text="Screenshot that shows the tab for reviewing and creating a test." :::
-
-> [!NOTE]
-> You can update the test configuration at any time, for example to upload a different JMX file. Choose your test in the list of tests, and then select **Edit**.
+You can update the test configuration at any time, for example to upload a different JMX file. Choose your test in the list of tests, and then select **Edit**.
## Run the load test
-When Azure Load Testing starts your load test, it will first deploy the JMeter script and any other files onto test engine instances and run the test.
+When Azure Load Testing starts your load test, it first deploys the JMeter script and any other files onto test engine instances, and then starts the load test.
If you selected **Run test after creation**, your load test will start automatically. To manually start the load test you created earlier, perform the following steps:
If you selected **Run test after creation**, your load test will start automatic
> [!TIP] > You can stop a load test at any time from the Azure portal.
-While the test runs and after it finishes, you can view the test run details, statistics, and metrics in the test run dashboard.
+1. Notice the test run details, statistics, and client metrics in the Azure portal.
+ :::image type="content" source="./media/how-to-create-and-run-load-test-with-jmeter-script/test-run-aggregated-by-percentile.png" alt-text="Screenshot that shows the test run dashboard." :::
-## Next steps
+ Use the run statistics and error information to identify performance and stability issues for your application under load.
-- To learn more about [creating and managing tests](./how-to-create-manage-test.md).
+## Next steps
-- To learn how to export test results, see [Export test results](./how-to-export-test-results.md).
+You've created a cloud-based load test based on an existing JMeter test script. For Azure-hosted applications, you can also [monitor server-side metrics](./how-to-monitor-server-side-metrics.md) for further application insights.
-- To learn how to monitor server side metrics, see [Monitor server side metrics](./how-to-monitor-server-side-metrics.md).
+- Learn how to [export test results](./how-to-export-test-results.md).
+- Learn how to [parameterize a load test with environment variables](./how-to-parameterize-load-tests.md).
+- Learn how to [configure your test for high-scale load](./how-to-high-scale-load.md).
machine-learning Migrate To V2 Managed Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-managed-online-endpoints.md
[Managed online endpoints](concept-endpoints.md#what-are-online-endpoints) help to deploy your ML models in a turnkey manner. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. Managed online endpoints take care of serving, scaling, securing, and monitoring your models, freeing you from the overhead of setting up and managing the underlying infrastructure. Details can be found on [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-managed-online-endpoints.md).
-You can deploy directly to the new compute target with your previous models and environments, or use the [scripts](https://aka.ms/moeonboard) (preview) provided by us to export the current services and then deploy to the new compute without affecting your existing services. If you regularly create and delete Azure Container Instances (ACI) services, we strongly recommend the deploying directly and not using the scripts.
+You can deploy directly to the new compute target with your previous models and environments, or use the [scripts](https://aka.ms/moeonboard) (preview) that we provide to export the current services and then deploy to the new compute without affecting your existing services. If you regularly create and delete Azure Container Instances (ACI) web services, we strongly recommend deploying directly and not using the scripts.
> [!IMPORTANT] > The scripts are preview and are provided without a service level agreement.
marketplace Add Publishers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/add-publishers.md
An organization can have multiple publishers associated with a commercial market
1. In the upper-right, select **Settings** (gear icon) > **Account settings**. 1. Under **Organization Profile**, select **Identifiers**. 1. In the **Publisher** section, select **Add publisher**.
-1. Choose the MPN ID that you want to associate with the publisher.
+1. Choose the PartnerID (formerly MPN ID) that you want to associate with the publisher.
1. Update the **Publisher details** on the form.
- - **Publisher location**: Select the MPN ID you want to use for this new user.
+ - **Publisher location**: Select the PartnerID you want to use for this new user.
- **Publisher name**: The name that's displayed in the commercial marketplace with the offer. - **PublisherID**: An identifier that's used by Partner Center to uniquely identify the publisher. The default value for this field maps to an existing and unique Publisher ID in the system. Because the Publisher ID can't be reused, this field needs to be updated. - **Contact information**: Update the contact information when necessary.
marketplace Azure Partner Customer Usage Attribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-partner-customer-usage-attribution.md
Unlike the tracking IDs that Partner Center creates on your behalf for Azure app
[guid]::NewGuid() ```
-You should create a unique GUID for each product and distribution channel. You can use a single GUID for a product's multiple distribution channels if you don't want reporting to be split. Reporting occurs by Microsoft Partner Network ID and GUID.
+You should create a unique GUID for each product and distribution channel. You can use a single GUID for a product's multiple distribution channels if you don't want reporting to be split. Reporting occurs by PartnerID and GUID.
### Register GUIDs
marketplace Co Sell Solution Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/co-sell-solution-migration.md
As a Microsoft partner enrolled in the commercial marketplace, you can:
## Prerequisites to continue co-selling with Microsoft
-Ensure you have an active Microsoft Partner Network membership and are enrolled in the commercial marketplace in Partner Center.
+Ensure you have an active Microsoft Cloud Partner Program membership and are enrolled in the commercial marketplace in Partner Center.
-- Join the Microsoft Partner Network [at no cost](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership). As a partner, you'll have access to exclusive resources, programs, tools, and connections to grow your business.
+- Join the Microsoft Cloud Partner Program [at no cost](https://partner.microsoft.com/dashboard/account/v3/enrollment/introduction/partnership). As a partner, you'll have access to exclusive resources, programs, tools, and connections to grow your business.
- If you do not have an account in commercial marketplace, [enroll now](create-account.md) to continue co-selling with Microsoft and access the full publishing experience. ## Publishing updates for attaining co-sell-ready status
Follow these steps before importing your solutions from OCP GTM:
:::image type="content" source="media/co-sell-migrate/welcome-overveiw.png" alt-text="Displays overview page":::
-1. To begin migrating, select the **Solutions** tab, which displays all the solutions associated to your MPN IDs.
+1. To begin migrating, select the **Solutions** tab, which displays all the solutions associated to your PartnerIDs.
:::image type="content" source="media/co-sell-migrate/solutions-tab.png" alt-text="Partner Center Overview page, Solutions tab.":::
marketplace Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-account.md
To create an account in the commercial marketplace program in Partner Center, ma
**There are two ways to create an account**: -- If you're new to Partner Center and don't have a Microsoft Partner Network (MPN) account, continue to [Create a Partner Center account and enroll in the commercial marketplace](#create-a-partner-center-account-and-enroll-in-the-commercial-marketplace).-- If you're already enrolled in the Microsoft Partner Network or a developer program, create an account directly from Partner Center. Go to [Use an existing Partner Center account to enroll in the commercial marketplace](#use-an-existing-partner-center-account-to-enroll-in-the-commercial-marketplace).
+- If you're new to Partner Center and don't have a Microsoft Cloud Partner Program account, continue to [Create a Partner Center account and enroll in the commercial marketplace](#create-a-partner-center-account-and-enroll-in-the-commercial-marketplace).
+- If you're already enrolled in the Microsoft Cloud Partner Program or a developer program, create an account directly from Partner Center. Go to [Use an existing Partner Center account to enroll in the commercial marketplace](#use-an-existing-partner-center-account-to-enroll-in-the-commercial-marketplace).
### Create a Partner Center account and enroll in the commercial marketplace
Sign in with a work account so that you can link your company's work email accou
#### Agree to the terms and conditions
-As part of the commercial marketplace registration process, you need to agree to the terms and conditions in the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). If you're new to Microsoft Partner Network, you also need to agree to the terms and conditions in the Microsoft Partner Network Agreement.
+As part of the commercial marketplace registration process, you need to agree to the terms and conditions in the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement). If you're new to the Microsoft Cloud Partner Program, you also need to agree to the terms and conditions in the Microsoft Cloud Partner Program Agreement.
You've now created a commercial marketplace account in Partner Center. Continue to [Add new publishers to the commercial marketplace](add-publishers.md).
You've now created a commercial marketplace account in Partner Center. Continue
Follow the instructions in this section to create a commercial marketplace account if you already have an enrollment in Microsoft Partner Center. There are two types of existing enrollments that you can use to set up your commercial marketplace account. Choose the scenario that applies to you: *What if I'm already enrolled in the Microsoft Partner Network?*-- [Use an existing Microsoft Partner Network account](#use-an-existing-microsoft-partner-network-account) to create your account.
+- [Use an existing Microsoft Cloud Partner Program account](#use-an-existing-microsoft-cloud-partner-program-account) to create your account.
*What if I'm already enrolled in a developer program?* - [Use an existing developer program enrollment](#use-a-developer-program-enrollment) to create your account. For both enrollment types, you sign in to Partner Center with your existing credentials. Be sure to have your account and publisher profile information available.
-#### Use an existing Microsoft Partner Network account
+#### Use an existing Microsoft Cloud Partner Program account
-When you use your existing Microsoft Partner Network account to enroll in the commercial marketplace program in Partner Center, we link your company's work email account domain to your new commercial marketplace account.
+When you use your existing Microsoft Cloud Partner Program account to enroll in the commercial marketplace program in Partner Center, we link your company's work email account domain to your new commercial marketplace account.
You can then assign the appropriate user roles and permissions to your users, so they can have access to the commercial marketplace program in Partner Center. **Enroll in the commercial marketplace**
-1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165507) with your Microsoft Partner Network account.
+1. Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165507) with your Microsoft Cloud Partner Program account.
>[!NOTE] > You must have an **account admin** or a **global admin** role to sign in to Microsoft Partner Network.
You can then assign the appropriate user roles and permissions to your users, so
1. Under **Commercial Marketplace**, select **Get Started**.
- Microsoft Partner Network detects your subscription and displays the **Publisher profile** pane.
+ Microsoft Cloud Partner Program detects your subscription and displays the **Publisher profile** pane.
-1. Select the MPN ID you want to link to your publisher account and enter your company name.
+1. Select the PartnerID you want to link to your publisher account and enter your company name.
1. Read the terms and conditions in the [Microsoft Publisher Agreement](/legal/marketplace/msft-publisher-agreement), and then select **Accept and continue** to complete your enrollment.
marketplace Find Tenant Object Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/find-tenant-object-id.md
Last updated 12/08/2021
This article describes how to find the Tenant ID, Object ID, and partner association details, along with their respective subscription IDs.
-If you need to get screenshots of these items in Azure Cloud Shell to use for debugging assistance, jump down to [Find Tenant, Object, and Partner ID association for debugging](#find-ids-for-debugging).
+If you need to get screenshots of these items in Azure Cloud Shell to use for debugging assistance, jump down to [Find Tenant, Object, and PartnerID association for debugging](#find-ids-for-debugging).
>[!Note] > Only the owner of a subscription has the privileges to perform these steps.
If you need to get screenshots of these items in Azure Cloud Shell to use for de
:::image type="content" source="media/tenant-and-object-id/subscriptions-screen-1.png" alt-text="The Subscriptions screen in the Azure portal.":::
-## Find Partner ID
+## Find PartnerID
1. Navigate to the Subscriptions page as described in the previous section. 2. Select a subscription.
If you need to get screenshots of these items in Azure Cloud Shell to use for de
## Find IDs for debugging
-This section describes how to find tenant, object, and partner ID association for debugging purposes.
+This section describes how to find tenant, object, and PartnerID (formerly MPN ID) association for debugging purposes.
1. Go to the [Azure portal](https://portal.azure.com/). 2. Open Azure Cloud Shell by selecting the PowerShell icon at the top-right.
marketplace Gtm Marketing Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-marketing-best-practices.md
Last updated 05/10/2022
Keep marketing best practices in mind as you create and list offers to the commercial marketplace, provide customer trials, and connect with Microsoft customers and the partner community. If you optimize your offer listings and go-to-market campaigns, you can accelerate your customer acquisition. Download the [Azure Marketplace & AppSource best practice guide](https://aka.ms/marketplacebestpracticesguide) to learn how to get the most out of your online marketing efforts.
-To learn more about how the Microsoft Partner Network can help you grow your business, see [Go to market with Microsoft](https://partner.microsoft.com/reach-customers/gtm). Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165290) to create and configure your offer.
+To learn more about how the Microsoft Cloud Partner Program can help you grow your business, see [Go to market with Microsoft](https://partner.microsoft.com/reach-customers/gtm). Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2165290) to create and configure your offer.
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-your-marketplace-benefits.md
Based on your eligibility, you will be contacted by a member of the Rewards team
List and trial offers receive one-time use benefits. Transact offers are eligible for evergreen benefit engagement. As transacting partners, as you grow your billed sales through the commercial marketplace, you unlock greater benefits per billed sales (or seats sold) tier.
-The minimum requirement to publish in the online stores is an MPNID, so these benefits are available to all partners regardless of MPN competency status or partner type. Every partner is empowered to grow your business through the commercial marketplace as a platform.
+The minimum requirement to publish in the online stores is an MPN ID, so these benefits are available to all partners regardless of competency status or partner type. Every partner is empowered to grow their business through the commercial marketplace as a platform.
You will get support in understanding the resources available to you and in implementing best practices, which you can also [review on your own](https://onedrive.live.com/view.aspx?resid=6C423AE231DA44BB!1039&ithint=file%2cdocx&authkey=!AFs7CHF5_XGje3k).
marketplace Manage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/manage-account.md
You can also select the **Update** link to change your contact info, such as pub
In the left-menu, under **Organization profile**, select **Identifiers** to see the following information: -- **MPN IDs**: Any MPN IDs associated with your account-- **CSP**: MPN IDs associated with the CSP program for this account.
+- **MPN IDs**: Any PartnerIDs associated with your account
+- **CSP**: PartnerIDs associated with the CSP program for this account.
- **Publisher**: Seller IDs associated with your account - **Tracking GUIDs**: Any tracking GUIDs associated with your account
If you deploy a product by using a template and it is available on both Azure Ma
- Product A in Azure Marketplace - Product A on GitHub
-Reporting is done by the partner value (Microsoft Partner ID) and the GUIDs. You can also track GUIDs at a more granular level aligning to each plan within your offer.
+Reporting is done by the partner value (Microsoft PartnerID) and the GUIDs. You can also track GUIDs at a more granular level aligning to each plan within your offer.
For more information, see the [Tracking Azure customer usage with GUIDs FAQ](azure-partner-customer-usage-attribution.md#faq)).
marketplace Marketplace Rewards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-rewards.md
You will be contacted by a member of the Rewards team when your offer goes live,
For Transact partners, as you grow your billed sales through the commercial marketplace platform, you unlock greater benefits per tier.
-The minimum requirement to publish in the online stores is an MPNID, so these benefits are available to all partners regardless of MPN competency status or partner type. Each partner is empowered to grow their business through the commercial marketplace as a platform.
+The minimum requirement to publish in the online stores is a PartnerID, so these benefits are available to all partners regardless of competency status or partner type. Each partner is empowered to grow their business through the commercial marketplace as a platform.
You will get support in understanding the resources available to you and in implementing the best practices, which you can also [review on your own](https://partner.microsoft.com/asset/collection/azure-marketplace-and-appsource-publisher-toolkit#/).
marketplace Monetize Addins Through Microsoft Commercial Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/monetize-addins-through-microsoft-commercial-marketplace.md
Your offer must also use the SaaS fulfillment APIs to integrate with Commercial
To begin submitting your SaaS offer, you must create an account in the Commercial Marketplace program in Partner Center. This account must be associated with a company. - If you're new to Partner Center, and have never enrolled in the Microsoft Partner Network, see [Create an account using the Partner Center enrollment page](/azure/marketplace/partner-center-portal/create-account#create-an-account-using-the-partner-center-enrollment-page).-- If you're already enrolled in the Microsoft Partner Network or in a Partner Center developer program, see [Create an account using existing Microsoft Partner Center enrollments](/azure/marketplace/partner-center-portal/create-account#create-an-account-using-existing-microsoft-partner-center-enrollments) for information about how to create your account.
+- If you're already enrolled in the Microsoft Cloud Partner Program or in a Partner Center developer program, see [Create an account using existing Microsoft Partner Center enrollments](/azure/marketplace/partner-center-portal/create-account#create-an-account-using-existing-microsoft-partner-center-enrollments) for information about how to create your account.
### Register a SaaS application
marketplace Review Publish Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/review-publish-offer.md
There are three levels of validation included in the certification process for e
#### Publisher business eligibility
-Each offer type checks a set of required base eligibility criteria. This criteria may include publisher MPN status, competencies held, competency levels, and so on.
+Each offer type checks a set of required base eligibility criteria. These criteria may include publisher Microsoft Cloud Partner Program status, competencies held, competency levels, and so on.
#### Content validation
marketplace User Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/user-roles.md
In order to access capabilities related to marketplace or your developer account
| Marketer | &#10004;&#160;&#160;Respond to customer reviews<br>&#10004;&#160;&#160;View non-financial reports<br>&#x2718;&#160;&#160;Make changes to apps or settings | > [!NOTE]
-> For the Commercial Marketplace program, the Global admin, Business Contributor, Financial Contributor, and Marketer roles are not used. Assigning these roles to users has no effect. Only the Manager and Developer roles grant permissions to users.
+> For the commercial marketplace program, the Global admin, Business Contributor, Financial Contributor, and Marketer roles are not used. Assigning these roles to users has no effect. Only the Manager and Developer roles grant permissions to users.
-For more information about managing roles and permissions in other areas of Partner Center, such as Azure Active Directory (AD), Cloud Solution Provider (CSP), Control Panel Vendor (CPV), Guest users, or Microsoft Partner Network (MPN), see [Assign users roles and permissions in Partner Center](/partner-center/permissions-overview).
+For more information about managing roles and permissions in other areas of Partner Center, such as Azure Active Directory (AD), Cloud Solution Provider (CSP), Control Panel Vendor (CPV), Guest users, or Microsoft Partner Network, see [Assign users roles and permissions in Partner Center](/partner-center/permissions-overview).
> [!NOTE] > Any user management, role assignment activities done on these lines will be in context of the account you are on. Refer to section on switching between seller if you need to manage a different account.
orbital Organize Stac Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/organize-stac-data.md
+
+ Title: Organize spaceborne geospatial data with STAC - Azure Orbital Analytics
+description: Create an implementation of SpatioTemporal Asset Catalog (STAC) creation to structure geospatial data.
+Last updated: 09/29/2022
+# Organize spaceborne geospatial data with SpatioTemporal Asset Catalog (STAC)
+
+This reference architecture shows an end-to-end implementation of [SpatioTemporal Asset Catalog (STAC)](https://stacspec.org) creation to structure geospatial data. In this document, we'll catalog the publicly available [National Agriculture Imagery Program (NAIP)](https://catalog.data.gov/dataset/national-agriculture-imagery-program-naip) data set by using geospatial libraries on Azure. The architecture can be adapted to take data sets from other sources, such as satellite imagery providers, [Azure Orbital Ground Station (AOGS)](https://azure.microsoft.com/products/orbital/), or Bring Your Own Data (BYOD).
+
+The implementation consists of four stages: Data acquisition, Metadata generation, Cataloging, and Data discovery via [STAC FastAPI](https://github.com/stac-utils/stac-fastapi). This article also shows how to build STAC based on a new data source or on bring-your-own-data.
+
+An implementation of this architecture is available on [GitHub](https://github.com/Azure/Azure-Orbital-STAC).
+
+This article is intended for users with intermediate levels of skill in working with spaceborne geospatial data. Refer to the table in the [glossary](#glossary) for the definition of commonly used STAC terms. For more details visit the official [stacspec](https://stacspec.org/en) page.
+
+## Scenario details
+
+Spaceborne data collection is becoming increasingly common. There are various data providers of spatiotemporal assets such as Imagery, Synthetic Aperture Radar (SAR), Point Clouds, and so forth. Data providers don't have a standard way of providing users access to their spatiotemporal data. Users of spatiotemporal data are often burdened with building unique workflows for each different collection of data they want to consume. Developers are required to develop new tools and libraries to interact with the spatiotemporal data.
+
+The STAC community has defined a specification to remove these complexities and spur common tooling. The STAC specification is a common language to describe geospatial information, so it can more easily be worked with, indexed, and discovered. Many deployed products are built on top of STAC; one example is [Microsoft Planetary Computer](https://planetarycomputer.microsoft.com/docs/overview/about), which provides a multi-petabyte STAC catalog of global environmental data for sustainability research.
+
+Our [sample solution](https://github.com/Azure/Azure-Orbital-STAC) uses open source tools such as STAC FastAPI, [pystac](https://github.com/stac-utils/pystac), [Microsoft Planetary Computer APIs](https://github.com/microsoft/planetary-computer-apis) and open standard geospatial libraries (listed in the [Components](#components) section) to run the solution on Azure.
++
+### Potential use cases
+
+STAC has become an industry standard to support how geospatial data should be structured and queried. It has been used in many production deployments for various use cases.
+
+Here are a couple of examples:
+
+- A satellite data provider company needs to make their data easy to discover and access. The provider builds STAC Catalogs to index all of its historic archive data sets and also the incoming refresh data on a daily basis. A web client UI is built on top of STAC APIs that allows users to browse the catalogs and search for their desired images based on the area of interest (AOI), date/time range, and other parameters.
+
+- A geospatial data analysis company needs to build a database of spaceborne data including imagery, Digital Elevation Model (DEM), and 3D types that it has acquired from various data sources. The database will serve its geographic information system (GIS) analysis solution to aggregate different data sets for machine learning model-based object detection analysis. To support a standard data access layer, the company decides to implement an open source compatible STAC API interface for the GIS analysis solution to interact with the database in a scalable and performant way.
++
+## Architecture
++
+Download a [Visio file](https://download.microsoft.com/download/5/6/4/564196b7-dd01-468a-af21-1da16489f298/stac_arch.vsdx) for this architecture.
+
+### Dataflow
++
+Download a [Visio file](https://download.microsoft.com/download/5/6/4/564196b7-dd01-468a-af21-1da16489f298/stac_data_flow.vsdx) for this dataflow.
+
+The following sections describe the four stages in the architecture.
+
+**Data acquisition**
+
+- Spaceborne data is provided by various data providers including [Airbus](https://oneatlas.airbus.com/home), [NAIP/USDA (via the Planetary Computer API)](https://planetarycomputer.microsoft.com/dataset/naip), and [Maxar](https://www.maxar.com).
+- In the sample solution we use the NAIP dataset provided by [Microsoft Planetary Computer](https://planetarycomputer.microsoft.com/docs/overview/about).
+
+**Metadata generation**
+
+- Data providers define the metadata describing provider, license terms, keywords, etc. This metadata forms the STAC Collection.
+- Data providers may provide metadata describing the geospatial assets. In our sample, we use metadata provided by [NAIP](https://www.usgs.gov/centers/eros/science/national-agriculture-imagery-program-naip-data-dictionary) and [FGDC](https://www.fgdc.gov/metadata). More metadata is extracted from the assets by using standard geospatial libraries, as shown in the sketch after this list. This metadata forms the STAC Items.
+- This STAC Collection and Items are used to build the STAC Catalog that helps users discover the spatiotemporal assets using STAC APIs.
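+
+A rough sketch of this step, assuming `rasterio`, `shapely`, and `pystac` are installed and using a hypothetical local GeoTiff path, item ID, and capture date, might look like the following:
+
+```python
+from datetime import datetime, timezone
+
+import pystac
+import rasterio
+from rasterio.warp import transform_bounds
+from shapely.geometry import box, mapping
+
+# Read the asset and reproject its bounds to WGS84 (EPSG:4326), as STAC expects.
+asset_path = "naip_sample_tile.tif"  # hypothetical local file
+with rasterio.open(asset_path) as src:
+    bbox = list(transform_bounds(src.crs, "EPSG:4326", *src.bounds))
+
+# Build a STAC Item whose geometry is the asset footprint.
+item = pystac.Item(
+    id="naip-sample-tile",
+    geometry=mapping(box(*bbox)),
+    bbox=bbox,
+    datetime=datetime(2020, 6, 1, tzinfo=timezone.utc),  # capture date from provider metadata
+    properties={},
+)
+item.add_asset(
+    "image",
+    pystac.Asset(href=asset_path, media_type=pystac.MediaType.COG, roles=["data"]),
+)
+
+print(item.to_dict())  # JSON metadata that the cataloging stage ingests
+```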
+
+**Cataloging**
+
+- STAC Catalog
+
+ - STAC Catalog is a top-level object that logically groups other Catalog, Collection, and Item Objects. As part of the deployment of this solution, we create a STAC Catalog under which all the collections and items are organized.
+
+- STAC Collection
+
+ - It's a related group of STAC Items that is made available by a data provider.
+ - Search queries for discovering the assets are scoped at the STAC Collection level.
+ - It's generated for a data provider (NAIP in this case), and this JSON metadata is uploaded to an Azure Storage container.
+ - The upload of a [STAC Collection](https://stacspec.org/en/about/stac-spec/) metadata file triggers a message to Azure Service Bus.
+ - The processor processes this metadata on the Azure Kubernetes Service cluster and ingests it into the STAC Catalog database (a PostgreSQL database). There are different processors for different data providers, and each processor subscribes to the respective Service Bus topic.
+
+- STAC Item and asset
+ - An asset is a file to be cataloged (raster data in the form of [GeoTiff](https://www.ogc.org/standards/geotiff), [Cloud Optimized GeoTiff](https://www.cogeo.org/), and so forth). Metadata describing the asset, together with the metadata extracted from the asset, is uploaded to the Storage Account under the appropriate storage container.
+ - The assets (GeoTiff files) are then uploaded to the Storage Account under the appropriate storage container after their corresponding metadata has been uploaded successfully, as sketched after this list.
+ - Each asset and its associated metadata uploaded to the Storage Account triggers a message to the Service Bus. This metadata forms the STAC Item in the catalog database.
+ - The processor processes this metadata on the Azure Kubernetes Service cluster and ingests it into the STAC Catalog database (a PostgreSQL database).
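+
+As a rough illustration of this upload order, the following sketch uses the `azure-storage-blob` package to upload the STAC Item metadata first and the GeoTiff asset afterward; the connection string, container name, and blob paths are placeholders:
+
+```python
+from azure.storage.blob import BlobServiceClient
+
+service = BlobServiceClient.from_connection_string("<storage-connection-string>")
+container = service.get_container_client("stac-naip")  # hypothetical container name
+
+# Upload the STAC Item metadata first; its arrival triggers a Service Bus message.
+with open("naip_sample_tile.json", "rb") as metadata:
+    container.upload_blob("naip/sample-tile.json", metadata, overwrite=True)
+
+# Then upload the asset itself, once its metadata upload has succeeded.
+with open("naip_sample_tile.tif", "rb") as asset:
+    container.upload_blob("naip/sample-tile.tif", asset, overwrite=True)
+```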
+
+**Data discovery**
+
+- STAC API is based on open source STAC FastAPI.
+- STAC API layer is implemented on Azure Kubernetes Service and the APIs are exposed using [API Management Service](https://azure.microsoft.com/products/api-management/).
+- STAC APIs are used to discover the geospatial data in your Catalog. These APIs are based on STAC specifications and understand the STAC metadata defined and indexed in the STAC Catalog database (PostgreSQL server).
+- Based on the search criteria, you can quickly locate your data from a large dataset, as shown in the sketch after this list.
+ - Querying the STAC Collection, Items & Assets:
+ - A query is submitted by a user to look up one or more STAC Collection, Items & Assets through the STAC FastAPI.
+ - STAC FastAPI queries the data in the PostgreSQL database to retrieve the STAC Collection, Items & references to Assets.
+ - The result is served back to the user by the STAC FastAPI.
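+
+For example, a search scoped to a collection, bounding box, and date range could be issued against the STAC FastAPI endpoint as in the following sketch (the API Management URL, collection ID, and coordinates are placeholders):
+
+```python
+import requests
+
+# POST a STAC API search: scope to a collection, an area of interest, and a date range.
+search_body = {
+    "collections": ["naip"],
+    "bbox": [-122.6, 47.4, -122.2, 47.8],
+    "datetime": "2020-01-01T00:00:00Z/2020-12-31T23:59:59Z",
+    "limit": 10,
+}
+response = requests.post("https://<your-apim-endpoint>/search", json=search_body)
+response.raise_for_status()
+
+# Each returned feature is a STAC Item with links to its assets.
+for item in response.json()["features"]:
+    print(item["id"], list(item["assets"]))
+```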
+
+### Components
+
+The following Azure services are used in this architecture.
+
+- [Key Vault](/azure/key-vault/general/basic-concepts) stores and controls access to secrets such as tokens, passwords, and API keys. Key Vault also creates and controls encryption keys and manages security certificates.
+- [Service Bus](https://azure.microsoft.com/services/service-bus/) is part of a broader [Azure messaging](/azure/service-bus-messaging/service-bus-messaging-overview) infrastructure that supports queueing, publish/subscribe, and more advanced integration patterns.
+- [Azure Data Lake Storage](https://azure.microsoft.com/services/storage/data-lake-storage/) is dedicated to big data analytics, and is built on [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs).
+- [Azure Virtual Network](/azure/virtual-network/virtual-networks-overview) enables Azure resources to securely communicate with each other, the internet, and on-premises networks.
+- [Azure Database for PostgreSQL - Flexible Server](/azure/postgresql/flexible-server/overview) is a fully managed database service designed to provide more granular control and flexibility over database management functions and configuration settings. It has richer capabilities such as zone resilient high availability (HA), predictable performance, maximum control, custom maintenance window, cost optimization controls, and simplified developer experience suitable for your enterprise workloads.
+- [API Management Services](https://azure.microsoft.com/services/api-management/) offers a scalable, multicloud API management platform for securing, publishing and analyzing APIs.
+- [Azure Kubernetes Services](/azure/aks/intro-kubernetes) offers the quickest way to start developing and deploying cloud-native apps, with built-in code-to-cloud pipelines and guardrails.
+- [Container Registry](/azure/container-registry/container-registry-intro) to store and manage your container images and related artifacts.
+- [Virtual Machine](/azure/virtual-machines/overview) (VM) gives you the flexibility of virtualization for a wide range of computing solutions. In a fully secured deployment, a user connects to a VM via Azure Bastion (described in the next item below) to perform a range of operations like copying files to storage accounts, running Azure CLI commands, and interacting with other services.
+- [Azure Bastion](/azure/bastion/bastion-overview) enables you to securely and seamlessly RDP & SSH to your VMs in Azure virtual network, without the need of public IP on the VM, directly from the Azure portal, and without the need of any other client/agent or any piece of software.
+- [Application Insights](/azure/azure-monitor/app/app-insights-overview) provides extensible application performance management and monitoring for live web apps.
+- [Log Analytics](/azure/azure-monitor/logs/log-analytics-overview) is a tool to edit and run log queries from data collected by Azure Monitor logs and interactively analyze the results.
+
+The following Geospatial libraries are also used:
+
+- [GDAL](https://gdal.org/) is a library of tools for manipulating spaceborne data. GDAL works on raster and vector data types. It's a good tool to know if you're working with spaceborne data.
+- [Rasterio](https://rasterio.readthedocs.io/en/latest/intro.html) is a module for raster processing. You can use it to read and write several different raster formats in Python. Rasterio is based on GDAL. When the module is imported, Python automatically registers all known GDAL drivers for reading supported formats.
+- [Shapely](https://shapely.readthedocs.io/en/stable/manual.html#introduction) is a Python package for set-theoretic analysis and manipulation of planar features. It uses (via Python's ctypes module) functions from the widely deployed GEOS library.
+- [pyproj](https://pyproj4.github.io/pyproj/stable/examples.html) performs cartographic transformations. It converts from longitude and latitude to native map projection x, y coordinates, and vice versa, by using [PROJ](https://proj.org/). A small example combining these libraries follows this list.
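+
+As a small illustration of how these libraries fit together, the following sketch reprojects a hypothetical asset footprint from a projected CRS to longitude/latitude (the CRS and coordinates are arbitrary examples):
+
+```python
+from pyproj import Transformer
+from shapely.geometry import box
+from shapely.ops import transform
+
+# Reproject a footprint from a projected CRS (UTM zone 17N here) to WGS84 lon/lat.
+to_wgs84 = Transformer.from_crs("EPSG:32617", "EPSG:4326", always_xy=True)
+utm_footprint = box(500000, 4649776, 510000, 4659776)  # arbitrary example bounds
+wgs84_footprint = transform(to_wgs84.transform, utm_footprint)
+
+print(wgs84_footprint.bounds)  # (min lon, min lat, max lon, max lat)
+```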
+
+## Considerations
+
+- The sample solution demonstrates STAC's core JSON support that is needed to interact with any geospatial data collection. While STAC standardizes metadata fields, naming conventions, query language, and catalog structure, users should additionally consider [STAC Extensions](https://stac-extensions.github.io/) to support metadata fields specific to their Assets.
+
+- In the sample implementation, components that process the asset to extract metadata have a set number of replicas. Scaling this component allows you to process your assets faster. However, scaling isn't dynamic. If a large number of assets needs to be cataloged, consider scaling out these replicas.
+
+### Adding a new data source
+
+To catalog more data sources or to catalog your own data source, consider the following options.
+
+- Define the STAC Collection for your data source (see the sketch after this list). Search queries are scoped at the STAC Collection level. Consider how the user will search STAC Items and Assets in your collection.
+- Generate the STAC Item metadata. More metadata may be derived from geospatial assets by using standard tools and libraries. Define and implement a process to capture supplemental metadata for the assets; this metadata makes the STAC Items richer and, in turn, makes data discovery through the APIs easier.
+- Once this metadata (in the form of a STAC Collection and STAC Items) is available for a data source, this sample solution can be used to build your STAC Catalog using the same flow. Once cataloged, the data is queryable using standard STAC APIs.
+- The processor component of this architecture is extensible to include custom code that can be developed and run as containers in the Azure Kubernetes Service cluster. It's intended to provide a way for different representations of geospatial data to be cataloged as assets.
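+
+For example, a STAC Collection for a new data source could be defined with `pystac` along the following lines (the collection ID, extents, and license value are hypothetical):
+
+```python
+from datetime import datetime, timezone
+
+import pystac
+
+collection = pystac.Collection(
+    id="my-aerial-imagery",  # hypothetical new data source
+    description="Aerial imagery acquired from <provider>",
+    extent=pystac.Extent(
+        spatial=pystac.SpatialExtent([[-124.8, 24.5, -66.9, 49.4]]),
+        temporal=pystac.TemporalExtent([[datetime(2020, 1, 1, tzinfo=timezone.utc), None]]),
+    ),
+    license="proprietary",
+)
+
+# The resulting JSON is what gets uploaded to the storage container to trigger ingestion.
+print(collection.to_dict())
+```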
+
+### Security
+
+Security provides assurances against deliberate attacks and the abuse of your valuable data and systems. For more information, see [Overview of the security pillar](/azure/architecture/framework/security/overview).
+
+- Azure Kubernetes Service [Container Security](https://learn.microsoft.com/azure/aks/concepts-security) ensures that the processors, which are built and run as containers, are secure.
+- API Management Service [Security baseline](https://learn.microsoft.com/azure/aks/concepts-security) provides recommendations on how to secure your cloud solutions on Azure.
+- [Azure Database for PostgreSQL Security](https://learn.microsoft.com/azure/postgresql/flexible-server/concepts-security) covers in-depth the security at multiple layers when data is stored in PostgreSQL Flexible Server including data at rest and data in transit scenarios.
+
+### Cost optimization
+
+Cost optimization is about looking at ways to reduce unnecessary expenses and improve operational efficiencies. For more information, see [Overview of the cost optimization pillar](https://learn.microsoft.com/azure/architecture/framework/cost/overview).
+
+As this solution is intended for learning and development, we have used minimal configuration for the Azure resources. This minimal configuration runs a sample solution on a sample dataset.
+
+Users can also adjust the configuration to meet their workload and scaling needs and improve performance. For instance, you can swap Standard HDDs for Premium SSDs in your AKS cluster or scale API Management Services to premium SKUs.
+
+### Performance efficiency
+
+Performance efficiency is the ability of your workload to scale to meet the demands placed on it by users in an efficient manner. For more information, see [Performance efficiency pillar overview](/azure/architecture/framework/scalability/overview). Additionally, the following guidance can be useful in maximizing performance efficiency:
+
+- [Monitor and tune](/azure/postgresql/single-server/tutorial-monitor-and-tune) provides a way to monitor your data and tune your database to improve performance.
+- [Performance tuning a distributed application](/azure/architecture/performance/) walks through a few different scenarios and shows how to identify key metrics and improve performance.
+- [Baseline architecture for an Azure Kubernetes Service (AKS) cluster](/azure/architecture/reference-architectures/containers/aks/baseline-aks) recommends baseline infrastructure architecture to deploy an Azure Kubernetes Service (AKS) cluster on Azure.
+- [Improve the performance of an API by adding a caching policy in Azure API Management](/training/modules/improve-api-performance-with-apim-caching-policy/) is a training module on improving performance through Caching Policy.
+
+## Deploy this scenario
+
+We built a [sample solution](https://github.com/Azure/Azure-Orbital-STAC) that can be deployed into your subscription. This solution enables users to validate the overall data flow from STAC metadata, ingestion to discovering the assets using standard STAC APIs. The deployment instructions and the validation steps are documented in the [README](https://github.com/Azure/Azure-Orbital-STAC/blob/main/deploy/README.md) file.
+
+At a high level, this deployment does the following:
+
+- Deploys various infrastructure components such as Azure Kubernetes Services, Azure PostgreSQL Server, Azure Key Vault, Azure Storage account, Azure Service Bus, and so forth, in the private network.
+- Deploys Azure API Management service and publishes the endpoint for STAC FastAPI.
+- Packages the code and its dependencies, builds the Docker container images, and pushes them to Azure Container Registry.
+
+ :::image type="content" source="media/stac-deploy.png" alt-text="Diagram of STAC deployment services." lightbox="media/stac-deploy.png":::
+
+Download a [Visio file](https://download.microsoft.com/download/5/6/4/564196b7-dd01-468a-af21-1da16489f298/stac_deploy.vsdx) for this implementation.
+
+## Next steps
+
+If you want to start building this, we have put together a [sample solution](https://github.com/Azure/Azure-Orbital-STAC) discussed briefly above. Below are some useful links to get started on STAC & model implementation.
+
+- [STAC Overview](https://github.com/radiantearth/stac-spec/blob/master/overview.md)
+- [STAC tutorial](https://stacspec.org/en/tutorials/)
+- [Microsoft Planetary Computer API](https://github.com/Microsoft/planetary-computer-apis)
+
+## Related resources
+
+- [Microsoft Planetary Computer](https://planetarycomputer.microsoft.com/docs/overview/about) lets users apply the power of the cloud to accelerate environmental sustainability and Earth science. Many of the Planetary Computer components are also open source.
+- [The STAC specification](https://stacspec.org/en)
+- [STAC FastAPI](https://stac-utils.github.io/stac-fastapi/)
+- [PySTAC](https://pystac.readthedocs.io/en/stable/)
+- [PgSTAC](https://stac-utils.github.io/pgstac/pgstac/)
+- [pyPgSTAC](https://stac-utils.github.io/pgstac/pypgstac/)
+- [NAIP](https://datagateway.nrcs.usda.gov/GDGHome_DirectDownLoad.aspx)
+- [FGDC](https://www.fgdc.gov/metadata)
+
+## Glossary
+
+|STAC term|Definition|
+|---|---|
+|Asset|Any file that represents spaceborne data captured in a certain space and time.|
+|STAC Specification|Allows you to describe the geospatial data so it can be easily indexed and discovered.|
+|STAC Item|The core atomic unit, representing a single spatiotemporal asset as a GeoJSON feature plus metadata like datetime and reference links.|
+|STAC Catalog|A simple, flexible JSON file that provides a structure to organize metadata such as STAC Items, Collections, and other Catalogs.|
+|STAC Collection|Provides additional information such as the extents, license, keywords, providers, and so forth, that describe STAC Items within the Collection.|
+|STAC API|Provides a RESTful endpoint that enables search of STAC Items, specified in OpenAPI.|
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
To begin the migration in either Online or Offline mode, you can get started wit
Assign [contributor roles](./how-to-set-up-azure-ad-app-portal.md#add-contributor-privileges-to-an-azure-resource) to source server, target server and the migration resource group. In case of private access for source/target server, add Contributor privileges to the corresponding VNet as well.
+#### Verify replication privileges for Single server's admin user
+
+ Run the following query to check whether the Single Server's admin user has replication privileges.
+
+```sql
+ SELECT usename, userepl FROM pg_catalog.pg_user;
+```
+
+ Verify that the **userepl** column for the single server's admin user has the value **true**. If it's set to **false**, grant the replication privilege to the admin user by running the following query on the single server.
+
+ ```
+ ALTER ROLE <adminusername> WITH REPLICATION;
+```
+ #### Allow-list required extensions If you are using any PostgreSQL extensions on the Single Server, it has to be allow-listed on the Flexible Server before initiating the migration using the steps below:
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [AcrPush](#acrpush) | Push artifacts to or pull artifacts from a container registry. | 8311e382-0749-4cb8-b61a-304f252e45ec | > | [AcrQuarantineReader](#acrquarantinereader) | Pull quarantined images from a container registry. | cdda3590-29a3-44f6-95f2-9f980659eb04 | > | [AcrQuarantineWriter](#acrquarantinewriter) | Push quarantined images to or pull quarantined images from a container registry. | c8d4ff99-41c3-41a8-9f60-21dfdad59608 |
+> | [Azure Kubernetes Fleet Manager RBAC Admin](#azure-kubernetes-fleet-manager-rbac-admin) | This role grants admin access - provides write permissions on most objects within a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces. | 434fb43a-c01c-447e-9f67-c3ad923cfaba |
+> | [Azure Kubernetes Fleet Manager RBAC Cluster Admin](#azure-kubernetes-fleet-manager-rbac-cluster-admin) | Lets you manage all resources in the fleet manager cluster. | 18ab4d3d-a1bf-4477-8ad9-8359bc988f69 |
+> | [Azure Kubernetes Fleet Manager RBAC Reader](#azure-kubernetes-fleet-manager-rbac-reader) | Allows read-only access to see most objects in a namespace. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces. | 30b27cfc-9c84-438e-b0ce-70e35255df80 |
+> | [Azure Kubernetes Fleet Manager RBAC Writer](#azure-kubernetes-fleet-manager-rbac-writer) | Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces. | 5af6afb3-c06c-4fa4-8848-71a8aee05683 |
> | [Azure Kubernetes Service Cluster Admin Role](#azure-kubernetes-service-cluster-admin-role) | List cluster admin credential action. | 0ab0b1a8-8aac-4efd-b8c2-3ee1fb270be8 | > | [Azure Kubernetes Service Cluster User Role](#azure-kubernetes-service-cluster-user-role) | List cluster user credential action. | 4abbcc35-e782-43d8-92c5-2d3f1bd2253f | > | [Azure Kubernetes Service Contributor Role](#azure-kubernetes-service-contributor-role) | Grants access to read and write Azure Kubernetes Service clusters | ed7f3fbd-7b88-4dd4-9017-9adb7ce333f8 |
Push quarantined images to or pull quarantined images from a container registry.
} ```
+### Azure Kubernetes Fleet Manager RBAC Admin
+
+This role grants admin access - provides write permissions on most objects within a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/read | Get fleet |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/listCredentials/action | List fleet credentials |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/controllerrevisions/read | Reads controllerrevisions |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/daemonsets/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/deployments/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/statefulsets/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/authorization.k8s.io/localsubjectaccessreviews/write | Writes localsubjectaccessreviews |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/autoscaling/horizontalpodautoscalers/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/batch/cronjobs/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/batch/jobs/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/configmaps/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/endpoints/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/events.k8s.io/events/read | Reads events |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/events/read | Reads events |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/daemonsets/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/deployments/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/ingresses/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/networkpolicies/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/limitranges/read | Reads limitranges |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/namespaces/read | Reads namespaces |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/networking.k8s.io/ingresses/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/networking.k8s.io/networkpolicies/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/persistentvolumeclaims/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/policy/poddisruptionbudgets/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/rbac.authorization.k8s.io/rolebindings/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/rbac.authorization.k8s.io/roles/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/replicationcontrollers/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/resourcequotas/read | Reads resourcequotas |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/secrets/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/serviceaccounts/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/services/* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "This role grants admin access - provides write permissions on most objects within a a namespace, with the exception of ResourceQuota object and the namespace object itself. Applying this role at cluster scope will give access across all namespaces.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/434fb43a-c01c-447e-9f67-c3ad923cfaba",
+ "name": "434fb43a-c01c-447e-9f67-c3ad923cfaba",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/subscriptions/operationresults/read",
+ "Microsoft.Resources/subscriptions/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.ContainerService/fleets/read",
+ "Microsoft.ContainerService/fleets/listCredentials/action"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.ContainerService/fleets/apps/controllerrevisions/read",
+ "Microsoft.ContainerService/fleets/apps/daemonsets/*",
+ "Microsoft.ContainerService/fleets/apps/deployments/*",
+ "Microsoft.ContainerService/fleets/apps/statefulsets/*",
+ "Microsoft.ContainerService/fleets/authorization.k8s.io/localsubjectaccessreviews/write",
+ "Microsoft.ContainerService/fleets/autoscaling/horizontalpodautoscalers/*",
+ "Microsoft.ContainerService/fleets/batch/cronjobs/*",
+ "Microsoft.ContainerService/fleets/batch/jobs/*",
+ "Microsoft.ContainerService/fleets/configmaps/*",
+ "Microsoft.ContainerService/fleets/endpoints/*",
+ "Microsoft.ContainerService/fleets/events.k8s.io/events/read",
+ "Microsoft.ContainerService/fleets/events/read",
+ "Microsoft.ContainerService/fleets/extensions/daemonsets/*",
+ "Microsoft.ContainerService/fleets/extensions/deployments/*",
+ "Microsoft.ContainerService/fleets/extensions/ingresses/*",
+ "Microsoft.ContainerService/fleets/extensions/networkpolicies/*",
+ "Microsoft.ContainerService/fleets/limitranges/read",
+ "Microsoft.ContainerService/fleets/namespaces/read",
+ "Microsoft.ContainerService/fleets/networking.k8s.io/ingresses/*",
+ "Microsoft.ContainerService/fleets/networking.k8s.io/networkpolicies/*",
+ "Microsoft.ContainerService/fleets/persistentvolumeclaims/*",
+ "Microsoft.ContainerService/fleets/policy/poddisruptionbudgets/*",
+ "Microsoft.ContainerService/fleets/rbac.authorization.k8s.io/rolebindings/*",
+ "Microsoft.ContainerService/fleets/rbac.authorization.k8s.io/roles/*",
+ "Microsoft.ContainerService/fleets/replicationcontrollers/*",
+ "Microsoft.ContainerService/fleets/replicationcontrollers/*",
+ "Microsoft.ContainerService/fleets/resourcequotas/read",
+ "Microsoft.ContainerService/fleets/secrets/*",
+ "Microsoft.ContainerService/fleets/serviceaccounts/*",
+ "Microsoft.ContainerService/fleets/services/*"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Kubernetes Fleet Manager RBAC Admin",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
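+
+As an illustration only, a role such as this one can be assigned at fleet scope with the Azure SDK for Python. The following sketch assumes a recent version of the `azure-mgmt-authorization` and `azure-identity` packages and uses placeholder subscription, resource group, fleet, and principal values:
+
+```python
+import uuid
+
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.authorization import AuthorizationManagementClient
+from azure.mgmt.authorization.models import RoleAssignmentCreateParameters
+
+subscription_id = "<subscription-id>"
+client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
+
+# Scope the assignment to a specific fleet resource.
+fleet_scope = (
+    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
+    "/providers/Microsoft.ContainerService/fleets/<fleet-name>"
+)
+
+# Role definition ID of Azure Kubernetes Fleet Manager RBAC Admin (from the table above).
+role_definition_id = (
+    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
+    "/roleDefinitions/434fb43a-c01c-447e-9f67-c3ad923cfaba"
+)
+
+client.role_assignments.create(
+    scope=fleet_scope,
+    role_assignment_name=str(uuid.uuid4()),
+    parameters=RoleAssignmentCreateParameters(
+        role_definition_id=role_definition_id,
+        principal_id="<user-or-group-object-id>",
+    ),
+)
+```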
+
+### Azure Kubernetes Fleet Manager RBAC Cluster Admin
+
+Lets you manage all resources in the fleet manager cluster.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/read | Get fleet |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/listCredentials/action | List fleet credentials |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Lets you manage all resources in the fleet manager cluster.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/18ab4d3d-a1bf-4477-8ad9-8359bc988f69",
+ "name": "18ab4d3d-a1bf-4477-8ad9-8359bc988f69",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/subscriptions/operationresults/read",
+ "Microsoft.Resources/subscriptions/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.ContainerService/fleets/read",
+ "Microsoft.ContainerService/fleets/listCredentials/action"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.ContainerService/fleets/*"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Kubernetes Fleet Manager RBAC Cluster Admin",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Azure Kubernetes Fleet Manager RBAC Reader
+
+Allows read-only access to see most objects in a namespace. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/read | Get fleet |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/listCredentials/action | List fleet credentials |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/controllerrevisions/read | Reads controllerrevisions |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/daemonsets/read | Reads daemonsets |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/deployments/read | Reads deployments |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/statefulsets/read | Reads statefulsets |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/autoscaling/horizontalpodautoscalers/read | Reads horizontalpodautoscalers |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/batch/cronjobs/read | Reads cronjobs |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/batch/jobs/read | Reads jobs |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/configmaps/read | Reads configmaps |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/endpoints/read | Reads endpoints |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/events.k8s.io/events/read | Reads events |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/events/read | Reads events |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/daemonsets/read | Reads daemonsets |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/deployments/read | Reads deployments |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/ingresses/read | Reads ingresses |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/networkpolicies/read | Reads networkpolicies |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/limitranges/read | Reads limitranges |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/namespaces/read | Reads namespaces |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/networking.k8s.io/ingresses/read | Reads ingresses |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/networking.k8s.io/networkpolicies/read | Reads networkpolicies |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/persistentvolumeclaims/read | Reads persistentvolumeclaims |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/policy/poddisruptionbudgets/read | Reads poddisruptionbudgets |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/replicationcontrollers/read | Reads replicationcontrollers |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/replicationcontrollers/read | Reads replicationcontrollers |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/resourcequotas/read | Reads resourcequotas |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/serviceaccounts/read | Reads serviceaccounts |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/services/read | Reads services |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allows read-only access to see most objects in a namespace. It does not allow viewing roles or role bindings. This role does not allow viewing Secrets, since reading the contents of Secrets enables access to ServiceAccount credentials in the namespace, which would allow API access as any ServiceAccount in the namespace (a form of privilege escalation). Applying this role at cluster scope will give access across all namespaces.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/30b27cfc-9c84-438e-b0ce-70e35255df80",
+ "name": "30b27cfc-9c84-438e-b0ce-70e35255df80",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/subscriptions/operationresults/read",
+ "Microsoft.Resources/subscriptions/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.ContainerService/fleets/read",
+ "Microsoft.ContainerService/fleets/listCredentials/action"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.ContainerService/fleets/apps/controllerrevisions/read",
+ "Microsoft.ContainerService/fleets/apps/daemonsets/read",
+ "Microsoft.ContainerService/fleets/apps/deployments/read",
+ "Microsoft.ContainerService/fleets/apps/statefulsets/read",
+ "Microsoft.ContainerService/fleets/autoscaling/horizontalpodautoscalers/read",
+ "Microsoft.ContainerService/fleets/batch/cronjobs/read",
+ "Microsoft.ContainerService/fleets/batch/jobs/read",
+ "Microsoft.ContainerService/fleets/configmaps/read",
+ "Microsoft.ContainerService/fleets/endpoints/read",
+ "Microsoft.ContainerService/fleets/events.k8s.io/events/read",
+ "Microsoft.ContainerService/fleets/events/read",
+ "Microsoft.ContainerService/fleets/extensions/daemonsets/read",
+ "Microsoft.ContainerService/fleets/extensions/deployments/read",
+ "Microsoft.ContainerService/fleets/extensions/ingresses/read",
+ "Microsoft.ContainerService/fleets/extensions/networkpolicies/read",
+ "Microsoft.ContainerService/fleets/limitranges/read",
+ "Microsoft.ContainerService/fleets/namespaces/read",
+ "Microsoft.ContainerService/fleets/networking.k8s.io/ingresses/read",
+ "Microsoft.ContainerService/fleets/networking.k8s.io/networkpolicies/read",
+ "Microsoft.ContainerService/fleets/persistentvolumeclaims/read",
+ "Microsoft.ContainerService/fleets/policy/poddisruptionbudgets/read",
+ "Microsoft.ContainerService/fleets/replicationcontrollers/read",
+ "Microsoft.ContainerService/fleets/replicationcontrollers/read",
+ "Microsoft.ContainerService/fleets/resourcequotas/read",
+ "Microsoft.ContainerService/fleets/serviceaccounts/read",
+ "Microsoft.ContainerService/fleets/services/read"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Kubernetes Fleet Manager RBAC Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Azure Kubernetes Fleet Manager RBAC Writer
+
+Allows read/write access to most objects in a namespace. This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/operationresults/read | Get the subscription operation results. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/read | Gets the list of subscriptions. |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/read | Get fleet |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/listCredentials/action | List fleet credentials |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/controllerrevisions/read | Reads controllerrevisions |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/daemonsets/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/deployments/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/apps/statefulsets/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/autoscaling/horizontalpodautoscalers/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/batch/cronjobs/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/batch/jobs/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/configmaps/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/endpoints/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/events.k8s.io/events/read | Reads events |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/events/read | Reads events |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/daemonsets/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/deployments/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/ingresses/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/extensions/networkpolicies/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/limitranges/read | Reads limitranges |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/namespaces/read | Reads namespaces |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/networking.k8s.io/ingresses/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/networking.k8s.io/networkpolicies/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/persistentvolumeclaims/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/policy/poddisruptionbudgets/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/replicationcontrollers/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/replicationcontrollers/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/resourcequotas/read | Reads resourcequotas |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/secrets/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/serviceaccounts/* | |
+> | [Microsoft.ContainerService](resource-provider-operations.md#microsoftcontainerservice)/fleets/services/* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Allows read/write access to most objects in a namespace.This role does not allow viewing or modifying roles or role bindings. However, this role allows accessing Secrets as any ServiceAccount in the namespace, so it can be used to gain the API access levels of any ServiceAccount in the namespace. Applying this role at cluster scope will give access across all namespaces.",
+ "id": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/5af6afb3-c06c-4fa4-8848-71a8aee05683",
+ "name": "5af6afb3-c06c-4fa4-8848-71a8aee05683",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Resources/subscriptions/operationresults/read",
+ "Microsoft.Resources/subscriptions/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.ContainerService/fleets/read",
+ "Microsoft.ContainerService/fleets/listCredentials/action"
+ ],
+ "notActions": [],
+ "dataActions": [
+ "Microsoft.ContainerService/fleets/apps/controllerrevisions/read",
+ "Microsoft.ContainerService/fleets/apps/daemonsets/*",
+ "Microsoft.ContainerService/fleets/apps/deployments/*",
+ "Microsoft.ContainerService/fleets/apps/statefulsets/*",
+ "Microsoft.ContainerService/fleets/autoscaling/horizontalpodautoscalers/*",
+ "Microsoft.ContainerService/fleets/batch/cronjobs/*",
+ "Microsoft.ContainerService/fleets/batch/jobs/*",
+ "Microsoft.ContainerService/fleets/configmaps/*",
+ "Microsoft.ContainerService/fleets/endpoints/*",
+ "Microsoft.ContainerService/fleets/events.k8s.io/events/read",
+ "Microsoft.ContainerService/fleets/events/read",
+ "Microsoft.ContainerService/fleets/extensions/daemonsets/*",
+ "Microsoft.ContainerService/fleets/extensions/deployments/*",
+ "Microsoft.ContainerService/fleets/extensions/ingresses/*",
+ "Microsoft.ContainerService/fleets/extensions/networkpolicies/*",
+ "Microsoft.ContainerService/fleets/limitranges/read",
+ "Microsoft.ContainerService/fleets/namespaces/read",
+ "Microsoft.ContainerService/fleets/networking.k8s.io/ingresses/*",
+ "Microsoft.ContainerService/fleets/networking.k8s.io/networkpolicies/*",
+ "Microsoft.ContainerService/fleets/persistentvolumeclaims/*",
+ "Microsoft.ContainerService/fleets/policy/poddisruptionbudgets/*",
+ "Microsoft.ContainerService/fleets/replicationcontrollers/*",
+ "Microsoft.ContainerService/fleets/replicationcontrollers/*",
+ "Microsoft.ContainerService/fleets/resourcequotas/read",
+ "Microsoft.ContainerService/fleets/secrets/*",
+ "Microsoft.ContainerService/fleets/serviceaccounts/*",
+ "Microsoft.ContainerService/fleets/services/*"
+ ],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Kubernetes Fleet Manager RBAC Writer",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+ ### Azure Kubernetes Service Cluster Admin Role List cluster admin credential action. [Learn more](../aks/control-kubeconfig-access.md)
role-based-access-control Role Assignments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments.md
+
+ Title: Understand Azure role assignments - Azure RBAC
+description: Learn about Azure role assignments in Azure role-based access control (Azure RBAC) for fine-grained access management of Azure resources.
+
+documentationcenter: ''
++++ Last updated : 10/03/2022++
+# Understand Azure role assignments
+
+Role assignments enable you to grant a principal (such as a user, a group, a managed identity, or a service principal) access to a specific Azure resource. This article describes the details of role assignments.
+
+## Role assignment
+
+Access to Azure resources is granted by creating a role assignment, and access is revoked by removing a role assignment.
+
+A role assignment has several components, including:
+
+- The *principal*, or *who* is assigned the role.
+- The *role* that they're assigned.
+- The *scope* at which the role is assigned.
+- The *name* of the role assignment, and a *description* that helps you to explain why the role has been assigned.
+
+For example, you can use Azure RBAC to assign roles like:
+
+- User Sally has owner access to the storage account *contoso123* in the resource group *ContosoStorage*.
+- Everybody in the Cloud Administrators group in Azure Active Directory has reader access to all resources in the resource group *ContosoStorage*.
+- The managed identity associated with an application is allowed to restart virtual machines within Contoso's subscription.
+
+The following shows an example of the properties in a role assignment when displayed using [Azure PowerShell](role-assignments-list-powershell.md):
+
+```json
+{
+ "RoleAssignmentName": "00000000-0000-0000-0000-000000000000",
+ "RoleAssignmentId": "/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.Authorization/roleAssignments/00000000-0000-0000-0000-000000000000",
+ "Scope": "/subscriptions/11111111-1111-1111-1111-111111111111",
+ "DisplayName": "User Name",
+ "SignInName": "user@contoso.com",
+ "RoleDefinitionName": "Contributor",
+ "RoleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c",
+ "ObjectId": "22222222-2222-2222-2222-222222222222",
+ "ObjectType": "User",
+ "CanDelegate": false,
+ "Description": null,
+ "ConditionVersion": null,
+ "Condition": null
+}
+```
+
+The following shows an example of the properties in a role assignment when displayed using the [Azure CLI](role-assignments-list-cli.md), or the [REST API](role-assignments-list-rest.md):
+
+```json
+{
+ "canDelegate": null,
+ "condition": null,
+ "conditionVersion": null,
+ "description": null,
+ "id": "/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.Authorization/roleAssignments/00000000-0000-0000-0000-000000000000",
+ "name": "00000000-0000-0000-0000-000000000000",
+ "principalId": "22222222-2222-2222-2222-222222222222",
+ "principalName": "user@contoso.com",
+ "principalType": "User",
+ "roleDefinitionId": "/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.Authorization/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c",
+ "roleDefinitionName": "Contributor",
+ "scope": "/subscriptions/11111111-1111-1111-1111-111111111111",
+ "type": "Microsoft.Authorization/roleAssignments"
+}
+```
+
+The following table describes what the role assignment properties mean.
+
+| Property | Description |
+| | |
+| `RoleAssignmentName`<br />`name` | The name of the role assignment, which is a globally unique identifier (GUID). |
+| `RoleAssignmentId`<br />`id` | The unique ID of the role assignment, which includes the name. |
+| `Scope`<br />`scope` | The Azure resource identifier that the role assignment is scoped to. |
+| `RoleDefinitionId`<br />`roleDefinitionId` | The unique ID of the role. |
+| `RoleDefinitionName`<br />`roleDefinitionName` | The name of the role. |
+| `ObjectId`<br />`principalId` | The Azure Active Directory (Azure AD) object identifier for the principal who has the role assigned. |
+| `ObjectType`<br />`principalType` | The type of Azure AD object that the principal represents. Valid values include `User`, `Group`, and `ServicePrincipal`. |
+| `DisplayName` | For role assignments for users, the display name of the user. |
+| `SignInName`<br />`principalName` | The unique principal name (UPN) of the user, or the name of the application associated with the service principal. |
+| `Description`<br />`description` | The description of the role assignment. |
+| `Condition`<br />`condition` | Condition statement built using one or more actions from role definition and attributes. |
+| `ConditionVersion`<br />`conditionVersion` | The condition version number. Defaults to 2.0 and is the only supported version. |
+| `CanDelegate`<br />`canDelegate` | Not implemented. |
+
+## Scope
+
+When you create a role assignment, you need to specify the scope at which it's applied. The scope represents the resource, or set of resources, that the principal is allowed to access. You can scope a role assignment to a single resource, a resource group, a subscription, or a management group.
+
+> [!TIP]
+> Use the smallest scope that you need to meet your requirements.
+>
+> For example, if you need to grant a managed identity access to a single storage account, it's good security practice to create the role assignment at the scope of the storage account, not at the resource group or subscription scope.
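+
+As a minimal Bicep sketch (an illustration, not part of this article), the following assigns a hypothetical `principalId` the Storage Blob Data Reader role at the scope of a single storage account named *contoso123*, rather than at the resource group or subscription scope:
+
+```bicep
+// Illustrative sketch: the storage account name and principalId are assumptions, not values from this article.
+param principalId string
+
+// Reference the existing storage account that the principal needs to access.
+resource storageAccount 'Microsoft.Storage/storageAccounts@2022-05-01' existing = {
+  name: 'contoso123'
+}
+
+// Built-in role ID for Storage Blob Data Reader.
+var roleDefinitionId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '2a2b9908-6ea1-4ae2-8e65-a410df84e7d1')
+
+resource roleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
+  name: guid(storageAccount.id, principalId, roleDefinitionId)
+  scope: storageAccount // the role assignment applies only to this storage account
+  properties: {
+    roleDefinitionId: roleDefinitionId
+    principalId: principalId
+    principalType: 'ServicePrincipal' // set to 'User', 'Group', or 'ServicePrincipal' to match the principal
+  }
+}
+```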
+
+For more information about scope, see [Understand scope](scope-overview.md).
+
+## Role to assign
+
+A role assignment is associated with a role definition. The role definition specifies the permissions that the principal should have within the role assignment's scope.
+
+You can assign a built-in role definition or a custom role definition. When you create a role assignment, some tooling requires that you use the role definition ID while other tooling allows you to provide the name of the role.
+
+For more information about role definitions, see [Understand role definitions](role-definitions.md).
+
+## Principal
+
+Principals include users, security groups, managed identities, workload identities, and service principals. Principals are created and managed in your Azure Active Directory (Azure AD) tenant. You can assign a role to any principal. Use the Azure AD *object ID* to identify the principal that you want to assign the role to.
+
+When you create a role assignment by using Azure PowerShell, the Azure CLI, Bicep, or another infrastructure as code (IaC) technology, you specify the *principal type*. Principal types include *User*, *Group*, and *ServicePrincipal*. It's important to specify the correct principal type. Otherwise, you might get intermittent deployment errors, especially when you work with service principals and managed identities.
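+
+For example, here's a rough Bicep sketch (an illustration, not part of this article) that assigns a role to a user-assigned managed identity; because managed identities are backed by service principals, the principal type is set to `ServicePrincipal`. The identity name and role are assumptions:
+
+```bicep
+// Reference an existing user-assigned managed identity (hypothetical name).
+resource appIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' existing = {
+  name: 'contoso-app-identity'
+}
+
+// Built-in role ID for Reader.
+var readerRoleId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'acdd72a7-3385-48ef-bd42-f606fba81ae7')
+
+resource identityRoleAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
+  name: guid(resourceGroup().id, appIdentity.id, readerRoleId)
+  properties: {
+    roleDefinitionId: readerRoleId
+    principalId: appIdentity.properties.principalId
+    principalType: 'ServicePrincipal' // managed identities use the ServicePrincipal type, not User
+  }
+}
+```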
+
+## Name
+
+A role assignment's resource name must be a globally unique identifier (GUID).
+
+Role assignment resource names must be unique within the Azure Active Directory tenant, even if the scope of the role assignment is narrower.
+
+> [!TIP]
+> When you create a role assignment by using the Azure portal, Azure PowerShell, or the Azure CLI, the creation process gives the role assignment a unique name for you automatically.
+>
+> If you create a role assignment by using Bicep or another infrastructure as code (IaC) technology, you need to carefully plan how you name your role assignments. For more information, see [Create Azure RBAC resources by using Bicep](../azure-resource-manager/bicep/scenarios-rbac.md).
+
+### Resource deletion behavior
+
+When you delete a user, group, service principal, or managed identity from Azure AD, it's a good practice to delete any role assignments. They aren't deleted automatically. Any role assignments that refer to a deleted principal ID become invalid.
+
+If you try to reuse a role assignment's name for another role assignment, the deployment will fail. This issue is more likely to occur when you use Bicep or an Azure Resource Manager template (ARM template) to deploy your role assignments, because you have to explicitly set the role assignment name when you use these tools. To work around this behavior, you should either remove the old role assignment before you recreate it, or ensure that you use a unique name when you deploy a new role assignment.
+
+## Description
+
+You can add a text description to a role assignment. While descriptions are optional, it's a good practice to add them to your role assignments. Provide a short justification for why the principal needs the assigned role. When somebody audits the role assignments, descriptions can help to understand why they've been created and whether they're still applicable.
+
+## Conditions
+
+Some roles support *role assignment conditions* based on attributes in the context of specific actions. A role assignment condition is an additional check that you can optionally add to your role assignment to provide more fine-grained access control.
+
+For example, you can add a condition that requires an object to have a specific tag for the user to read the object.
+
+You typically build conditions using a visual condition editor, but here's what an example condition looks like in code:
+
+```
+((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})) OR (@resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEqualsIgnoreCase 'Cascade'))
+```
+
+The preceding condition allows users to read blobs with a blob index tag key of *Project* and a value of *Cascade*.
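+
+A condition like this is attached to the role assignment itself through its `condition` and `conditionVersion` properties, together with an optional `description`. Here's an illustrative Bicep sketch (not from this article); the principal and role are assumptions, and the condition text is the example shown above:
+
+```bicep
+// Illustrative sketch only.
+param principalId string
+
+// Built-in role ID for Storage Blob Data Reader.
+var blobReaderRoleId = subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '2a2b9908-6ea1-4ae2-8e65-a410df84e7d1')
+
+resource conditionalAssignment 'Microsoft.Authorization/roleAssignments@2022-04-01' = {
+  name: guid(resourceGroup().id, principalId, blobReaderRoleId)
+  properties: {
+    roleDefinitionId: blobReaderRoleId
+    principalId: principalId
+    principalType: 'User'
+    description: 'Read access restricted to blobs tagged Project=Cascade'
+    conditionVersion: '2.0'
+    condition: '((!(ActionMatches{\'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read\'} AND NOT SubOperationMatches{\'Blob.List\'})) OR (@resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEqualsIgnoreCase \'Cascade\'))'
+  }
+}
+```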
+
+For more information about conditions, see [What is Azure attribute-based access control (Azure ABAC)?](conditions-overview.md)
+
+## Next steps
+
+* [Understand role definitions](role-definitions.md)
role-based-access-control Role Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-definitions.md
For more information about `AssignableScopes` for custom roles, see [Azure custo
## Next steps
+* [Understand role assignments](role-assignments.md)
* [Azure built-in roles](built-in-roles.md) * [Azure custom roles](custom-roles.md) * [Azure resource provider operations](resource-provider-operations.md)
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
Set the `principalType` property to `ServicePrincipal` when creating the role as
### Symptom - ARM template role assignment returns BadRequest status
-When you try to deploy an ARM template that assigns a role to a service principal you get the error:
+When you try to deploy a Bicep file or ARM template that assigns a role to a service principal, you get the error:
`Tenant ID, application ID, principal ID, and scope are not allowed to be updated. (code: RoleAssignmentUpdateNotPermitted)`
+For example, if you create a role assignment for a managed identity, then you delete the managed identity and recreate it, the new managed identity has a different principal ID. If you try to deploy the role assignment again and use the same role assignment name, the deployment fails.
+ **Cause** The role assignment `name` is not unique, and it is viewed as an update.
+Role assignments are uniquely identified by their name, which is a globally unique identifier (GUID). You can't create two role assignments with the same name, even in different Azure subscriptions. You also can't change the properties of an existing role assignment.
+ **Solution**
-Provide an idempotent unique value for the role assignment `name`
+Provide an idempotent unique value for the role assignment `name`. It's a good practice to create a deterministic GUID from the scope, principal ID, and role ID together by using the `guid()` function, as in this example:
+
+# [Bicep](#tab/bicep)
+
+```bicep
+resource roleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = {
+ name: guid(resourceGroup().id, principalId, roleDefinitionId)
+ properties: {
+ roleDefinitionId: roleDefinitionId
+ principalId: principalId
+ principalType: principalType
+ }
+}
```+
+# [ARM template](#tab/armtemplate)
+
+```json
{
- "type": "Microsoft.Authorization/roleAssignments",
- "apiVersion": "2018-09-01-preview",
- "name": "[guid(concat(resourceGroup().id, variables('resourceName'))]",
- "properties": {
- "roleDefinitionId": "[variables('roleDefinitionId')]",
- "principalId": "[variables('principalId')]"
- }
+ "type": "Microsoft.Authorization/roleAssignments",
+ "apiVersion": "2020-10-01-preview",
+ "name": "[guid(resourceGroup().id, variables('principalId'), variables('roleDefinitionId'))]",
+ "properties": {
+ "roleDefinitionId": "[variables('roleDefinitionId')]",
+ "principalId": "[variables('principalId')]",
+ "principalType": "[variables('principalType')]"
+ }
} ``` ++
+For more information, see [Create Azure RBAC resources by using Bicep](../azure-resource-manager/bicep/scenarios-rbac.md).
+ ### Symptom - Role assignments with identity not found In the list of role assignments for the Azure portal, you notice that the security principal (user, group, service principal, or managed identity) is listed as **Identity not found** with an **Unknown** type.
CanDelegate : False
Similarly, if you list this role assignment using Azure CLI, you might see an empty `principalName`. For example, [az role assignment list](/cli/azure/role/assignment#az-role-assignment-list) returns a role assignment that is similar to the following output:
-```
+```json
{ "canDelegate": null, "id": "/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.Authorization/roleAssignments/22222222-2222-2222-2222-222222222222",
sentinel Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/migration.md
This article discusses the reasons for migrating from a legacy SIEM, and describ
In this guide, you learn how to migrate your legacy SIEM to Microsoft Sentinel. Follow your migration process through this series of articles, in which you'll learn how to navigate different steps in the process.
+> [!NOTE]
+> For a guided migration process, join the Microsoft Sentinel Migration and Modernization Program. The program helps you simplify and accelerate the migration by providing best practice guidance, resources, and expert help at every stage. To learn more, reach out to your account team.
+ |Step |Article | ||| |Plan your migration |**You are here** |
When planning the discover phase, use the following guidance to identify your us
In this article, you learned how to plan and prepare for your migration. > [!div class="nextstepaction"]
-> [Track your migration with a workbook](migration-track.md)
+> [Track your migration with a workbook](migration-track.md)
sentinel Configure Audit Log Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-audit-log-rules.md
+
+ Title: Configure SAP audit log monitoring rules with Microsoft Sentinel
+description: Monitor the SAP audit logs using Microsoft Sentinel built-in analytics rules, to easily manage your SAP logs, reducing noise with no compromise to security value.
+++ Last updated : 08/19/2022
+#Customer.intent: As a security operator, I want to monitor the SAP audit logs and easily manage the logs, so I can reduce noise without compromising security value.
++
+# Configure SAP audit log monitoring rules
+
+The SAP audit log records audit and security actions on SAP systems, like failed sign-in attempts or other suspicious actions. This article describes how to monitor the SAP audit log using Microsoft Sentinel built-in analytics rules.
+
+With these rules, you can monitor all audit log events, or get alerts only when anomalies are detected. This way, you can better manage your SAP logs, reducing noise with no compromise to your security value.
+
+You use two analytics rules to monitor and analyze your SAP audit log data:
+
+- **SAP - Dynamic Deterministic Audit Log Monitor (PREVIEW)**. Alerts on any SAP audit log events with minimal configuration. You can configure the rule for an even lower false-positive rate. [Learn how to configure the rule](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-for-sap-news-dynamic-sap-security-audit-log/ba-p/3326842).
+- **SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)**. Alerts on SAP audit log events when anomalies are detected, using machine learning capabilities and with no coding required. [Learn how to configure the rule](#set-up-the-sapdynamic-anomaly-based-audit-log-monitor-alerts-preview-rule-for-anomaly-detection).
+
+The two [SAP Audit log monitor rules](sap-solution-security-content.md#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) are delivered ready to run out of the box, and allow for further fine-tuning using the [SAP_Dynamic_Audit_Log_Monitor_Configuration and SAP_User_Config watchlists](sap-solution-security-content.md#available-watchlists).
+
+## Anomaly detection
+
+When trying to identify security events in a diverse activity log like the SAP audit log, you need to balance the configuration effort, and the amount of noise the alerts produce.
+
+With the SAP audit log module in the Sentinel for SAP solution, you can choose:
+- Which events you want to look at deterministically, using customized, predefined thresholds and filters.
+- Which events you want to leave out, so the machine can learn the parameters on its own.
+
+Once you mark an SAP audit log event type for anomaly detection, the alerting engine checks the events recently streamed from the SAP audit log. The engine checks if the events seem normal, considering the history it has learned.
+
+Microsoft Sentinel checks an event or group of events for anomalies. It tries to match the event or group of events with previously seen activities of the same kind, at the user and system levels. The algorithm learns the network characteristics of the user at the subnet mask level, and according to seasonality.
+
+With this ability, you can look for anomalies in previously quieted event types, such as user sign-in events. For example, if the user JohnDoe signs in hundreds of times an hour, you can now let Microsoft Sentinel decide whether that behavior is suspicious. Is this John from accounting, repeatedly refreshing a financial dashboard with multiple data sources, or a DDoS attack forming up?
+
+## Set up the SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW) rule for anomaly detection
+
+If your SAP audit log data isn't already streaming data into the Microsoft Sentinel workspace, learn how to [deploy the solution](deployment-overview.md).
+
+1. From the Microsoft Sentinel navigation menu, under **Content management**, select **Content hub (Preview)**.
+1. Check if your **Continuous threat monitoring for SAP** application has updates.
+1. From the navigation menu, under **Analytics**, enable these three audit log alerts:
+ - **SAP - Dynamic Deterministic Audit Log Monitor**. Runs every 10 minutes and focuses on the SAP audit log events marked as **Deterministic**.
+ - **SAP - (Preview) Dynamic Anomaly based Audit Log Monitor Alerts**. Runs hourly and focuses on SAP events marked as **AnomaliesOnly**.
+ - **SAP - Missing configuration in the Dynamic Security Audit Log Monitor**. Runs daily to provide configuration recommendations for the SAP audit log module.
+
+Microsoft Sentinel now scans the entire SAP audit log at regular intervals for deterministic security events and anomalies. You can view the incidents this log generates on the **Incidents** page.
+
+As with any machine learning solution, anomaly detection performs better over time, and works best with an SAP audit log history of seven days or more.
+
+### Configure event types with the SAP_Dynamic_Audit_Log_Monitor_Configuration watchlist
+
+You can further configure event types that produce too many incidents using the [SAP_Dynamic_Audit_Log_Monitor_Configuration](sap-solution-security-content.md#available-watchlists) watchlist. Here are a few options for reducing incidents.
+
+|Option |Description |
+|||
+|Set severities and disable unwanted events |By default, both the deterministic rules and the rules based on anomalies create alerts for events marked with medium and high severities. You can set these severities specifically for production and non-production environments. For example, you can set a debugging activity event as high severity in production systems, and disable those events in non-production systems. |
+|Exclude users by their SAP roles or SAP profiles |Microsoft Sentinel for SAP ingests the SAP user's authorization profile, including direct and indirect role assignments, groups and profiles, so that you can speak the SAP language in your SIEM.<br><br>You can configure an SAP event to exclude users based on their SAP roles and profiles. In the watchlist, add the roles or profiles that group your RFC interface users in the **RolesTagsToExclude** column, next to the **Generic table access by RFC** event. From now on, you'll get alerts only for users that are missing these roles. |
+|Exclude users by their SOC tags |With tags, you can come up with your own grouping, without relying on complicated SAP definitions or even without SAP authorization. This method is useful for SOC teams that want to create their own grouping for SAP users.<br><br>Conceptually, excluding users by tags works like name tags: you can set multiple events in the configuration with multiple tags. You don't get alerts for a user with a tag associated with a specific event. For example, you don't want specific service accounts to be alerted for **Generic table access by RFC** events, but can't find an SAP role or an SAP profile that groups these users. In this case, you can add the **GenTableRFCReadOK** tag next to the relevant event in the watchlist, and then go to the **SAP_User_Config** watchlist and assign the interface users the same tag. |
+|Specify a frequency threshold per event type and system role |Works like a speed limit. For example, you can decide that the noisy **User Master Record Change** events only trigger alerts if more than 12 activities are observed in an hour, by the same user in a production system. If a user exceeds the 12 per hour limit (for example, 2 events in a 10-minute window), an incident is triggered. |
+|Determinism or anomalies |If you know the event's characteristics, you can use the deterministic capabilities. If you aren't sure how to correctly configure the event, the machine learning capabilities can decide. |
+|SOAR capabilities |You can use Microsoft Sentinel to further orchestrate, automate and respond to incidents that can be applied to the SAP audit log dynamic alerts. Learn about [Security Orchestration, Automation, and Response (SOAR)](../automation.md). |
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
Follow your deployment journey through this series of articles, in which you'll
| **4. Deploy data connector agent** | [Deploy and configure the container hosting the data connector agent](deploy-data-connector-agent-container.md) | | **5. Deploy SAP security content** | [Deploy SAP security content](deploy-sap-security-content.md) | **6. Microsoft Sentinel Solution for SAP** | [Configure Microsoft Sentinel Solution for SAP](deployment-solution-configuration.md)
-| **7. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure Microsoft Sentinel for SAP data connector to use SNC](configure-snc.md)
+| **7. Optional steps** | - [Configure auditing](configure-audit.md)<br>- [Configure Microsoft Sentinel for SAP data connector to use SNC](configure-snc.md)<br>- [Configure audit log monitoring rules](configure-audit-log-rules.md)
## Next steps
sentinel Deployment Solution Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-solution-configuration.md
By default, all analytics rules provided in the Microsoft Sentinel Solution for
5. Sensitive privilege user password change and login 6. Brute force (RFC) 7. Function module tested
-8. The SAP audit log monitoring analytics rules
-
-#### Configuring the SAP audit log monitoring analytics rules
-The two [SAP Audit log monitor rules](sap-solution-security-content.md#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) are delivered as ready to run out of the box, and allow for further fine tuning using watchlists:
-- **SAP_Dynamic_Audit_Log_Monitor_Configuration**
- The **SAP_Dynamic_Audit_Log_Monitor_Configuration** is a watchlist detailing all available SAP standard audit log message IDs and can be extended to contain additional message IDs you might create on your own using ABAP enhancements on your SAP NetWeaver systems.This watchlist allows for customizing an SAP message ID (=event type), at different levels:
- - Severities per production/ non-production systems -for example, debugging activity gets ΓÇ£HighΓÇ¥ for production systems, and ΓÇ£DisabledΓÇ¥ for other systems
- - Assigning different thresholds for production/ non-production systems- which are considered as ΓÇ£speed limitsΓÇ¥. Setting a threshold of 60 events an hour, will trigger an incident if more than 30 events were observed within 30 minutes
- - Assigning Rule Types- either ΓÇ£DeterministicΓÇ¥ or ΓÇ£AnomaliesOnlyΓÇ¥ determines by which manner this event is considered
- - Roles and Tags to Exclude- specific users can be excluded from specific event types. This field can either accept SAP roles, SAP profiles or Tags:
- - Listing SAP roles or SAP profiles ([see User Master data collection](sap-solution-deploy-alternate.md#configuring-user-master-data-collection)) would exclude any user bearing those roles/ profiles from these event types for the same SAP system. For example, specifying the ΓÇ£BASIC_BO_USERSΓÇ¥ ABAP role for the RFC related event types will ensure Business Objects users won't trigger incidents when making massive RFC calls.
- - Listing tags to be used as identifiers. Tagging an event type works just like specifying SAP roles or profiles, except that tags can be created within the Sentinel workspace, allowing the SOC personnel freedom in excluding users per activity without the dependency on the SAP team. For example, the audit message IDs AUB (authorization changes) and AUD (User master record changes) are assigned with the tag ΓÇ£MassiveAuthChangesΓÇ¥. Users assigned with this tag are excluded from the checks for these activities. Running the workspace function **SAPAuditLogConfigRecommend** will produce a list of recommended tags to be assigned to users, such as 'Add the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV using the SAP_User_Config watchlist'
-- **SAP_User_Config**
- This configuration-based watchlist is there to allow for specifying user related tags and other active directory identifiers for the SAP user. Tags are then used for identifying the user in specific contexts. For example, assigning the user GRC_ADMIN with the tag ΓÇ£MassiveAuthChangesΓÇ¥ will prevent incidents from being created on user master record and authorization events made by GRC_ADMIN.
-
-More information is available [in this blog](https://aka.ms/Sentinel4sapDynamicDeterministicAuditRuleBlog)
----
+8. The SAP audit log monitoring analytics rules
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
SAPAuditLogAnomalies(LearningTime = 14d, DetectingTime=0h, SelectedSystems= dyna
See [Built-in SAP analytics rules for monitoring the SAP audit log](sap-solution-security-content.md#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) for more information. ### SAPAuditLogConfigRecommend
-The **SAPAuditLogConfigRecommend** is a helper function designed to offer recommendations for the configuration of the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](sap-solution-security-content.md#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview) analytics rule. See detailed explanation in the [Configuring the SAP audit log monitoring analytics rules](deployment-solution-configuration.md#configuring-the-sap-audit-log-monitoring-analytics-rules) guide.
+The **SAPAuditLogConfigRecommend** is a helper function designed to offer recommendations for the configuration of the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](sap-solution-security-content.md#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview) analytics rule. Learn how to [configure the rules](configure-audit-log-rules.md).
### SAPUsersGetVIP
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-security-content.md
Use the following built-in workbooks to visualize and monitor data ingested via
For more information, see [Tutorial: Visualize and monitor your data](../monitor-your-data.md) and [Deploy Microsoft Sentinel Solution for SAP](deployment-overview.md). ## Built-in analytics rules+ ### Built-in SAP analytics rules for monitoring the SAP audit log+ The SAP Audit log data is used across many of the analytics rules of the Microsoft Sentinel Solution for SAP. Some analytics rules look for specific events on the log, while others correlate indications from several logs to produce high fidelity alerts and incidents. In addition, there are two analytics rules which are designed to accommodate the entire set of standard SAP audit log events (183 different events), and any other custom events you may choose to log using the SAP audit log.
+Both SAP audit log monitoring analytics rules share the same data sources and the same configuration but differ in one critical aspect. While the "SAP - Dynamic Deterministic Audit Log Monitor" requires deterministic alert thresholds and user exclusion rules, the "SAP - Dynamic Anomaly-based Audit Log Monitor Alerts (PREVIEW)" applies additional machine learning algorithms to filter out background noise in an unsupervised manner. For this reason, by default, most event types (or SAP message IDs) of the SAP audit log are being sent to the "Anomaly based" analytics rule, while the easier to define event types are sent to the deterministic analytics rule. This setting, along with other related settings, can be further configured to suit any system conditions.
+ #### SAP - Dynamic Deterministic Audit Log Monitor A dynamic analytics rule that is intended for covering the entire set of SAP audit log event types which have a deterministic definition in terms of user population, event thresholds.
+- [Configure the rule with the SAP_Dynamic_Audit_Log_Monitor_Configuration watchlist](#available-watchlists)
+- Learn more about how to [configure the rule](configure-audit-log-rules.md#set-up-the-sapdynamic-anomaly-based-audit-log-monitor-alerts-preview-rule-for-anomaly-detection) (full procedure)
+ #### SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW) A dynamic analytics rule designed to learn normal system behavior, and alert on activities observed on the SAP audit log that are considered anomalous. Apply this rule on the SAP audit log event types which are harder to define in terms of user population, network attributes and thresholds.
-Both SAP audit log monitoring analytics rules share the same data sources and the same configuration but differ in one critical aspect. While the ΓÇ£SAP - Dynamic Deterministic Audit Log MonitorΓÇ¥ requires deterministic alert thresholds and user exclusion rules, the ΓÇ£SAP - Dynamic Anomaly-based Audit Log Monitor Alerts (PREVIEW)ΓÇ¥ applies additional machine learning algorithms to filter out background noise in an unsupervised manner. For this reason, by default, most event types (or SAP message IDs) of the SAP audit log are being sent to the ΓÇ£Anomaly basedΓÇ¥ analytics rule, while the easier to define event types are sent to the deterministic analytics rule. This setting, along with other related settings, can be further configured to suit any system conditions. See the [Configuring the SAP audit log monitoring analytics rules](deployment-solution-configuration.md#configuring-the-sap-audit-log-monitoring-analytics-rules)
--
-More information is available [in this blog](https://aka.ms/Sentinel4sapDynamicDeterministicAuditRuleBlog )
+Learn more:
+- [Configure the rule with the SAP_Dynamic_Audit_Log_Monitor_Configuration and SAP_User_Config watchlists](#available-watchlists)
+- Learn more about how to [configure the rule](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/microsoft-sentinel-for-sap-news-dynamic-sap-security-audit-log/ba-p/3326842#feedback-success) (full procedure)
The following tables list the built-in [analytics rules](deploy-sap-security-content.md) that are included in the Microsoft Sentinel Solution for SAP, deployed from the Microsoft Sentinel Solutions marketplace.
These watchlists provide the configuration for the Microsoft Sentinel Solution f
| <a name="programs"></a>**SAP - Obsolete Programs** | Obsolete ABAP programs (reports), whose execution should be governed. <br><br>- **ABAPProgram**:ABAP Program, such as TH_ RSPFLDOC <br>- **Description**: A meaningful ABAP program description | | <a name="transactions"></a>**SAP - Transactions for ABAP Generations** | Transactions for ABAP generations whose execution should be governed. <br><br>- **TransactionCode**:Transaction Code, such as SE11. <br>- **Description**: A meaningful Transaction Code description | | <a name="servers"></a>**SAP - FTP Servers** | FTP Servers for identification of unauthorized connections. <br><br>- **Client**:such as 100. <br>- **FTP_Server_Name**: FTP server name, such as http://contoso.com/ <br>-**FTP_Server_Port**:FTP server port, such as 22. <br>- **Description**A meaningful FTP Server description |
-| <a name="objects"></a>**SAP_Dynamic_Audit_Log_Monitor_Configuration** | Configure the SAP audit log alerts by assigning each message ID a severity level as required by you, per system role (production, non-production). This watchlist details all available SAP standard audit log message IDs and can be extended to contain additional message IDs you might create on your own using ABAP enhancements on their SAP NetWeaver systems. This watchlist also allows for configuring a designated team to handle each of the event types, and excluding users by SAP roles, SAP profiles or by tags from the SAP_User_Config watchlist. This watchlist is one of the core components used for [configuring ](deployment-solution-configuration.md#configuring-the-sap-audit-log-monitoring-analytics-rules) the [built-inSAP analytics rules for monitoring the SAP audit log](#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) <br><br>- **MessageID**: The SAP Message ID, or event type, such as `AUD` (User master record changes), or `AUB ` (authorization changes) <br>- **DetailedDescription**: A markdown enabled description to be shown on the incident pane <br>- **ProductionSeverity**: The desired severity for the incident to be created with for production systems `High`, `Medium`. Can be set as `Disabled` <br>- **NonProdSeverity**: The desired severity for the incident to be created with for non-production systems `High`, `Medium`. Can be set as `Disabled` <br>- **ProductionThreshold** The "Per hour" count of events to be considered as suspicious for production systems `60` <br>- **NonProdThreshold** The "Per hour" count of events to be considered as suspicious for non-production systems `10` <br>- **RolesTagsToExclude**: This field accepts SAP role name, SAP profile names or tags from the SAP_User_Config watchlist. These are then used to exclude the associated users from specific event types <br>- **RuleType**: Use `Deterministic` for the event type to be sent off to the [SAP - Dynamic Deterministic Audit Log Monitor](#sapdynamic-deterministic-audit-log-monitor), or `AnomaliesOnly` to have this event covered by the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview)
-| <a name="objects"></a>**SAP_User_Config** | allows for fine tuning alerts by excluding /including users in specific contexts and is also used for [configuring ](deployment-solution-configuration.md#configuring-the-sap-audit-log-monitoring-analytics-rules) the [built-inSAP analytics rules for monitoring the SAP audit log](#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log) <br><br> **SAPUser**: The SAP user <br> **Tags**: Tags are used to identify users against certain activity. For example Adding the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV will prevent RFC related incidents to be created for this specific user <br>**Other active directory user identifiers** <br>- AD User Identifier <br>- User On-Premises Sid <br>- User Principal Name
-|
--
+| <a name="objects"></a>**SAP_Dynamic_Audit_Log_Monitor_Configuration** | Configure the SAP audit log alerts by assigning each message ID a severity level as required by you, per system role (production, non-production). This watchlist details all available SAP standard audit log message IDs. The watchlist can be extended to contain additional message IDs you might create on your own using ABAP enhancements on their SAP NetWeaver systems. This watchlist also allows for configuring a designated team to handle each of the event types, and excluding users by SAP roles, SAP profiles or by tags from the **SAP_User_Config** watchlist. This watchlist is one of the core components used for [configuring](configure-audit-log-rules.md) the [built-in SAP analytics rules for monitoring the SAP audit log](#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log). <br><br>- **MessageID**: The SAP Message ID, or event type, such as `AUD` (User master record changes), or `AUB` (authorization changes). <br>- **DetailedDescription**: A markdown enabled description to be shown on the incident pane. <br>- **ProductionSeverity**: The desired severity for the incident to be created with for production systems `High`, `Medium`. Can be set as `Disabled`. <br>- **NonProdSeverity**: The desired severity for the incident to be created with for non-production systems `High`, `Medium`. Can be set as `Disabled`. <br>- **ProductionThreshold** The "Per hour" count of events to be considered as suspicious for production systems `60`. <br>- **NonProdThreshold** The "Per hour" count of events to be considered as suspicious for non-production systems `10`. <br>- **RolesTagsToExclude**: This field accepts SAP role name, SAP profile names or tags from the SAP_User_Config watchlist. These are then used to exclude the associated users from specific event types. See options for role tags at the end of this list. <br>- **RuleType**: Use `Deterministic` for the event type to be sent off to the [SAP - Dynamic Deterministic Audit Log Monitor](#sapdynamic-deterministic-audit-log-monitor), or `AnomaliesOnly` to have this event covered by the [SAP - Dynamic Anomaly based Audit Log Monitor Alerts (PREVIEW)](#sapdynamic-anomaly-based-audit-log-monitor-alerts-preview).<br><br>For the **RolesTagsToExclude** field:<br>- If you list SAP roles or [SAP profiles](sap-solution-deploy-alternate.md#configuring-user-master-data-collection), this excludes any user with the listed roles or profiles from these event types for the same SAP system. For example, if you define the `BASIC_BO_USERS` ABAP role for the RFC related event types, Business Objects users won't trigger incidents when making massive RFC calls.<br>- Tagging an event type is similar to specifying SAP roles or profiles, but tags can be created in the workspace, so SOC teams can exclude users by activity without depending on the SAP team. For example, the audit message IDs AUB (authorization changes) and AUD (user master record changes) are assigned the `MassiveAuthChanges` tag. Users assigned this tag are excluded from the checks for these activities. Running the workspace `SAPAuditLogConfigRecommend` function produces a list of recommended tags to be assigned to users, such as `Add the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV using the SAP_User_Config watchlist`.
+| <a name="objects"></a>**SAP_User_Config** | Allows for fine tuning alerts by excluding /including users in specific contexts and is also used for [configuring](configure-audit-log-rules.md) the [built-in SAP analytics rules for monitoring the SAP audit log](#built-in-sap-analytics-rules-for-monitoring-the-sap-audit-log). <br><br> - **SAPUser**: The SAP user <br> - **Tags**: Tags are used to identify users against certain activity. For example Adding the tags ["GenericTablebyRFCOK"] to user SENTINEL_SRV will prevent RFC related incidents to be created for this specific user <br>**Other active directory user identifiers** <br>- AD User Identifier <br>- User On-Premises Sid <br>- User Principal Name |
## Next steps
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
> > You can also contribute! Join us in the [Microsoft Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
+## October 2022
+
+### Out of the box anomaly detection on the SAP audit log (Preview)
+
+The SAP audit log records audit and security events on SAP systems, such as failed sign-in attempts, among more than 200 other security-related actions. Customers monitor the SAP audit log and generate alerts and incidents out of the box using Microsoft Sentinel built-in analytics rules.
+
+The Microsoft Sentinel for SAP solution now includes the [**SAP - Dynamic Anomaly Detection analytics** rule](https://aka.ms/Sentinel4sapDynamicAnomalyAuditRuleBlog), adding an out of the box capability to identify suspicious anomalies across the SAP audit log events.
+
+Now, together with the existing ability to identify threats deterministically based on predefined patterns and thresholds, customers can easily identify suspicious anomalies in the SAP security log, out of the box, with no coding required.
+
+You can fine-tune the new capability by editing the [SAP_Dynamic_Audit_Log_Monitor_Configuration and SAP_User_Config watchlists](sap-solution-security-content.md#available-watchlists).
+
+Learn more:
+- [Learn about the new feature (blog)](https://aka.ms/Sentinel4sapDynamicAnomalyAuditRuleBlog)
+- [Use the new rule for anomaly detection](sap/configure-audit-log-rules.md#anomaly-detection)
+ ## September 2022 - [Create automation rule conditions based on custom details (Preview)](#create-automation-rule-conditions-based-on-custom-details-preview)
Microsoft Sentinel allows you to flag the entity as malicious, right from within
Learn how to [add an entity to your threat intelligence](add-entity-to-threat-intelligence.md). - ## August 2022 - [Heads up: Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#heads-up-microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip)
spring-apps How To Configure Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-configure-ingress.md
Title: How to configure ingress for Azure Spring Apps
-description: Describes how to configure ingress for Azure Spring Apps.
+ Title: Customize the ingress configuration in Azure Spring Apps
+description: Learn how to customize the ingress configuration in Azure Spring Apps.
Previously updated : 05/27/2022 Last updated : 09/29/2022
**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to set and update the ingress configuration in Azure Spring Apps by using the Azure portal and Azure CLI.
+This article shows you how to set and update an application's ingress settings in Azure Spring Apps by using the Azure portal and Azure CLI.
-The Azure Spring Apps service uses an underlying ingress controller to handle application traffic management. Currently, the following ingress setting is supported for customization.
+The Azure Spring Apps service uses an underlying ingress controller to handle application traffic management. The following ingress settings are supported for customization.
-| Name | Ingress setting | Default value | Valid range | Description |
-|-|--||-|-|
-| ingress-read-timeout | proxy-read-timeout | 300 | \[1,1800\] | The timeout in seconds for reading a response from a proxied server. |
+| Name | Ingress setting | Default value | Valid range | Description |
+|-|||-|--|
+| `ingress-read-timeout` | `proxy-read-timeout` | 300 | \[1,1800\] | The timeout in seconds for reading a response from a proxied server. |
+| `ingress-send-timeout` | `proxy-send-timeout` | 60 | \[1,1800\] | The timeout in seconds for transmitting a request to the proxied server. |
+| `session-affinity` | `affinity` | None | Session, None | The type of affinity that causes a request to be routed to the same pod replica that responded to the previous request. Set `session-affinity` to `Cookie` to enable session affinity. In the Azure portal, you must also select the **Enable session affinity** checkbox. |
+| `session-max-age` | `session-cookie-max-age` | 0 | \[0,7 days\] | The time in seconds until the cookie expires, corresponding to the `Max-Age` cookie directive. If you set `session-max-age` to 0, the expiration period is equal to the browser session period. |
+| `backend-protocol` | `backend-protocol` | Default | Default, GRPC | Sets the backend protocol to indicate how NGINX should communicate with the backend service. `Default` means HTTP/HTTPS/WebSocket. This setting applies only to client-to-app traffic; it doesn't restrict the protocols you can use for app-to-app traffic within the same service instance. |
## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [The Azure CLI](/cli/azure/install-azure-cli).-- The Azure Spring Apps extension. Use the following command to remove previous versions and install the latest extension. If you previously installed the spring-cloud extension, uninstall it to avoid configuration and version mismatches.
+- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- [Azure CLI](/cli/azure/install-azure-cli) with the Azure Spring Apps extension. Use the following command to remove previous versions and install the latest extension. If you previously installed the spring-cloud extension, uninstall it to avoid configuration and version mismatches.
```azurecli az extension remove --name spring
The Azure Spring Apps service uses an underlying ingress controller to handle ap
az extension remove --name spring-cloud ```
-## Set the ingress configuration when creating a service
+## Set the ingress configuration
-You can set the ingress configuration when creating a service by using the following CLI command.
+Use the following Azure CLI command to set the ingress configuration when you create an app.
```azurecli
-az spring create \
+az spring app create \
--resource-group <resource-group-name> \
+ --service <service-name> \
 --name <app-name> \
- --ingress-read-timeout 300
+ --ingress-read-timeout 300 \
+ --ingress-send-timeout 60 \
+ --session-affinity Cookie \
+ --session-max-age 1800 \
+ --backend-protocol Default
```
-This command will create a service with ingress read timeout set to 300 seconds.
+This command creates an app with the following settings:
-## Update the ingress configuration for an existing service
+- Ingress read timeout: 300 seconds
+- Ingress send timeout: 60 seconds
+- Session affinity: Cookie
+- Session cookie max age: 1800 seconds
+- Backend protocol: Default
+
+## Update the ingress settings for an existing app
### [Azure portal](#tab/azure-portal)
-To update the ingress configuration for an existing service, use the following steps:
+Use the following steps to update the ingress settings for an application hosted by an existing service instance.
1. Sign in to the portal using an account associated with the Azure subscription that contains the Azure Spring Apps instance.
-2. Navigate to the **Networking** pane, then select the **Ingress configuration** tab.
-3. Update the ingress configuration, and then select **Save**.
+1. Navigate to the **Apps** pane, and then select the app you want to configure.
+1. Navigate to the **Configuration** pane, and then select the **Ingress settings** tab.
+1. Update the ingress settings, and then select **Save**.
- :::image type="content" source="media/how-to-configure-ingress/config-ingress-read-timeout.png" lightbox="media/how-to-configure-ingress/config-ingress-read-timeout.png" alt-text="Screenshot of Azure portal example for config ingress read timeout.":::
+ :::image type="content" source="media/how-to-configure-ingress/ingress-settings.jpg" lightbox="media/how-to-configure-ingress/ingress-settings.jpg" alt-text="Screenshot of Azure portal Configuration page showing the Ingress settings tab.":::
### [Azure CLI](#tab/azure-cli)
-To update the ingress configuration for an existing service, use the following command:
+Use the following command to update the ingress settings for an existing app.
```azurecli
-az spring update \
+az spring app update \
--resource-group <resource-group-name> \
+ --service <service-name> \
 --name <app-name> \
- --ingress-read-timeout 600
+ --ingress-read-timeout 600 \
+ --ingress-send-timeout 600 \
+ --session-affinity None \
+ --session-max-age 0 \
+ --backend-protocol GRPC
```
-This command will update the ingress read timeout to 600 seconds.
+This command updates the app with the following settings:
+
+- Ingress read timeout: 600 seconds
+- Ingress send timeout: 600 seconds
+- Session affinity: None
+- Session cookie max age: 0
+- Backend protocol: GRPC
+++
+## FAQ
+
+- How do you enable gRPC?
+
+ Set the backend protocol to *GRPC*. A brief CLI sketch of this change appears after this list.
+
+- How do you enable WebSocket?
+
+ WebSocket is enabled by default if you set the backend protocol to *Default*. The WebSocket connection limit is 20,000. When you reach that limit, new connection attempts fail.
+
+ You can also use RSocket based on WebSocket.
+
+- What is the difference between ingress config and ingress settings?
+
+ Ingress config can still be used in the Azure CLI and SDK, and that setting applies to all apps within the service instance. Once an app has been configured with ingress settings, ingress config no longer affects it. We don't recommend using ingress config in new scripts, because we plan to stop supporting it in the future.
+
+- When ingress settings are used together with App Gateway/APIM, what happens when you set the timeout in both Azure Spring Apps ingress and the App Gateway/APIM?
+
+ The shorter timeout is used.
+
+- Do you need extra config in App Gateway/APIM if you need to have end-to-end support for gRPC or WebSocket?
+
+ You don't need extra config as long as the App Gateway supports gRPC.
+
+- Is configurable port supported?
+
+ Configurable ports aren't currently supported; only ports 80 and 443 are supported.
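
For example, based on the `az spring app update` command shown earlier in this article, enabling gRPC for an existing app might look like the following sketch. The resource group, service, and app names are placeholders.

```azurecli
# Hedged sketch: switch an existing app's backend protocol to gRPC
az spring app update \
    --resource-group <resource-group-name> \
    --service <service-name> \
    --name <app-name> \
    --backend-protocol GRPC
```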
## Next steps -- [Learn more about ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers)-- [Learn more about NGINX ingress controller](https://kubernetes.github.io/ingress-nginx)
+- [Ingress controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers)
+- [NGINX ingress controller](https://kubernetes.github.io/ingress-nginx)
storage Blobfuse2 Commands Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-commands-mount.md
Title: How to use the BlobFuse2 mount command to mount a blob storage container as a file system in Linux, or to display and manage existing mount points (preview). | Microsoft Docs
+ Title: How to use the BlobFuse2 mount command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points (preview). | Microsoft Docs
-description: Learn how to use the BlobFuse2 mount command to mount a blob storage container as a file system in Linux, or to display and manage existing mount points (preview).
+description: Learn how to use the BlobFuse2 mount command to mount a Blob Storage container as a file system in Linux, or to display and manage existing mount points (preview).
Previously updated : 08/02/2022 Last updated : 10/01/2022 # How to use the BlobFuse2 mount command (preview)
-Use the `blobfuse2 mount` command to mount a blob storage container as a file system in Linux, or to display existing mount points.
+Use the `blobfuse2 mount` command to mount a Blob Storage container as a file system in Linux, or to display existing mount points.
> [!IMPORTANT] > BlobFuse2 is the next generation of BlobFuse and is currently in preview.
The supported subcommands for `blobfuse2 mount` are:
| Command | Description | |--|--|
-| [all](blobfuse2-commands-mount-all.md) | Mounts all Azure blob containers in a specified storage account |
+| [all](blobfuse2-commands-mount-all.md) | Mounts all blob containers in a specified storage account |
| [list](blobfuse2-commands-mount-list.md) | Lists all BlobFuse2 mount points | Select one of the command links in the table above to view the documentation for the individual subcommands, including the arguments and flags they support.
The following flags apply only to command `blobfuse2 mount`:
> [!NOTE] > The following examples assume you have already created a configuration file in the current directory.
-1. Mount an individual Azure blob storage container to a new directory using the settings from a configuration file, and with foreground mode disabled:
+1. Mount an individual Azure Blob Storage container to a new directory using the settings from a configuration file, and with foreground mode disabled:
```bash ~$ mkdir bf2a
The following flags apply only to command `blobfuse2 mount`:
1 : /home/<user>/bf2a ```
-1. Mount all blob storage containers in the storage account specified in the configuration file to the path specified in the command. (Each container will be a subdirectory under the directory specified):
+1. Mount all Blob Storage containers in the storage account specified in the configuration file to the path specified in the command. (Each container will be a subdirectory under the directory specified):
```bash ~$ mkdir bf2all
The following flags apply only to command `blobfuse2 mount`:
2 : /home/<user>/bf2all/blobfuse2b ```
-1. Mount a fast storage device, then mount a blob storage container specifying the path to the mounted disk as the BlobFuse2 file caching location:
+1. Mount a fast storage device, then mount a Blob Storage container specifying the path to the mounted disk as the BlobFuse2 file caching location:
```bash ~$ sudo mkdir /mnt/resource/blobfuse2tmp -p
The following flags apply only to command `blobfuse2 mount`:
1 : /home/<user>/bf2a/blobfuse2a ```
-1. Mount a blob storage container in read-only mode and skipping the automatic BlobFuse2 version check:
+1. Mount a Blob Storage container in read-only mode, skipping the automatic BlobFuse2 version check:
```bash blobfuse2 mount ./mount_dir --config-file=./config.yaml --read-only --disable-version-check=true ```
-1. Mount a blob storage container using an existing configuration file, but override the container name (mounting another container in the same storage account):
+1. Mount a Blob Storage container using an existing configuration file, but override the container name (mounting another container in the same storage account):
```bash blobfuse2 mount ./mount_dir2 --config-file=./config.yaml --container-name=container2
storage Blobfuse2 Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-configuration.md
Previously updated : 08/02/2022 Last updated : 09/29/2022
Using a configuration file is the preferred method, but the other methods can be
## Configuration file
-Creating a configuration file is the preferred method of establishing settings for BlobFuse2. Once you have provided the desired settings in the file, reference the configuration file when using the `blobfuse2 mount` or other commands. Example:
+Creating a configuration file is the preferred method of establishing settings for BlobFuse2. Once you have specified the desired settings in the file, reference the configuration file when using the `blobfuse2 mount` or other commands. Example:
````bash blobfuse2 mount ./mount --config-file=./config.yaml
storage Blobfuse2 How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md
Title: How to mount an Azure blob storage container on Linux with BlobFuse2 (preview) | Microsoft Docs
+ Title: How to mount an Azure Blob Storage container on Linux with BlobFuse2 (preview) | Microsoft Docs
-description: How to mount an Azure blob storage container on Linux with BlobFuse2 (preview).
+description: How to mount an Azure Blob Storage container on Linux with BlobFuse2 (preview).
Previously updated : 09/26/2022 Last updated : 10/01/2022
-# How to mount an Azure blob storage container on Linux with BlobFuse2 (preview)
+# How to mount an Azure Blob Storage container on Linux with BlobFuse2 (preview)
-[BlobFuse2](blobfuse2-what-is.md) is a virtual file system driver for Azure Blob storage. BlobFuse2 allows you to access your existing Azure block blob data in your storage account through the Linux file system. For more details see [What is BlobFuse2? (preview)](blobfuse2-what-is.md).
+[BlobFuse2](blobfuse2-what-is.md) is a virtual file system driver for Azure Blob Storage. BlobFuse2 allows you to access your existing Azure block blob data in your storage account through the Linux file system. For more details see [What is BlobFuse2? (preview)](blobfuse2-what-is.md).
> [!IMPORTANT] > BlobFuse2 is the next generation of BlobFuse and is currently in preview.
-> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> This preview version is provided without a service level agreement, and might not be suitable for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). > > If you need to use BlobFuse in a production environment, BlobFuse v1 is generally available (GA). For information about the GA version, see:
This guide shows you how to install and configure BlobFuse2, mount an Azure blob
There are 2 basic options for installing BlobFuse2:
-1. [Install BlobFuse2 Binary](#option-1-install-blobfuse2-binary-preferred)
+1. [Install the BlobFuse2 Binary](#option-1-install-the-blobfuse2-binary-preferred)
1. [Build it from source](#option-2-build-from-source)
-### Option 1: Install BlobFuse2 Binary (preferred)
+### Option 1: Install the BlobFuse2 Binary (preferred)
For supported distributions see [the BlobFuse2 releases page](https://github.com/Azure/azure-storage-fuse/releases). For libfuse support information, refer to [the BlobFuse2 README](https://github.com/Azure/azure-storage-fuse/blob/main/README.md#distinctive-features-compared-to-blobfuse-v1x).
lsb_release -a
If there are no binaries available for your distribution, you can [build the binaries from source code](#option-2-build-from-source).
-#### Install the BlobFuse2 binaries
+#### Install the BlobFuse2 binaries
To install BlobFuse2:
To build the BlobFuse2 binaries from source:
## Configure BlobFuse2
-You can configure BlobFuse2 with a variety of settings. Some of the common settings used include:
+You can configure BlobFuse2 with a variety of settings. Some of the typical settings used include:
- Logging location and options-- Temporary cache file path
+- Temporary file path for caching
- Information about the Azure storage account and blob container to be mounted
-The settings can be configured in a yaml configuration file, using environment variables, or as parameters passed to the BlobFuse2 commands. The preferred method is to use the yaml configuration file.
+The settings can be configured in a yaml configuration file, using environment variables, or as parameters passed to the BlobFuse2 commands. The preferred method is to use the configuration file.
-For details about all of the configuration parameters for BlobFuse2, consult the complete reference material for each:
+For details about each of the configuration parameters for BlobFuse2 and how to specify them, consult the references below:
- [Complete BlobFuse2 configuration reference (preview)](blobfuse2-configuration.md) - [Configuration file reference (preview)](blobfuse2-configuration.md#configuration-file)
For details about all of the configuration parameters for BlobFuse2, consult the
The basic steps for configuring BlobFuse2 in preparation for mounting are:
-1. [Configure a temporary path for caching or streaming](#configure-a-temporary-path-for-caching)
+1. [Configure caching](#configure-caching)
1. [Create an empty directory for mounting the blob container](#create-an-empty-directory-for-mounting-the-blob-container) 1. [Authorize access to your storage account](#authorize-access-to-your-storage-account)
-### Configure a temporary path for caching
+### Configure caching
-BlobFuse2 provides native-like performance by requiring a temporary path in the file system to buffer and cache any open files. For this temporary path, choose the most performant disk available, or use a ramdisk for the best performance.
+BlobFuse2 provides native-like performance by using local file-caching techniques. The caching configuration and behavior vary, depending on whether you're streaming large files or accessing smaller files.
+
+#### Configure caching for streaming large files
+
+BlobFuse2 supports streaming for both read and write operations as an alternative to disk caching for files. In streaming mode, BlobFuse2 caches blocks of large files in memory for both reading and writing. The configuration settings related to caching for streaming are under the `stream:` settings in your configuration file as follows:
+
+```yml
+stream:
+ block-size-mb: For read-only mode, the size in MB of each block to be cached in memory while streaming. For read/write mode, the size of newly created blocks.
+ max-buffers: The total number of buffers to store blocks in.
+ buffer-size-mb: The size in MB of each buffer.
+```
+
+See [the sample streaming configuration file](https://github.com/Azure/azure-storage-fuse/blob/main/sampleStreamingConfig.yaml) to get started quickly with some settings for a basic streaming scenario.
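As a hedged illustration only, the following sketch appends example stream settings to an existing configuration file and mounts a container with it. The values, file names, and mount path are illustrative assumptions rather than recommendations, and your configuration file still needs your storage account settings (for example, from the sample file linked above).

```bash
# Append illustrative stream settings to an existing BlobFuse2 configuration file
# (block sizes and buffer counts here are assumptions, not tuning guidance)
cat >> ./config.yaml <<'EOF'
stream:
  block-size-mb: 8
  max-buffers: 40
  buffer-size-mb: 8
EOF

# Mount a container in streaming mode using that configuration file
mkdir -p ./stream_mount
blobfuse2 mount ./stream_mount --config-file=./config.yaml
```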
+
+#### Configure caching for smaller files
+
+Smaller files are cached to a temporary path specified under `file_cache:` in the configuration file as follows:
+
+```yml
+file_cache:
+ path: <path to local disk cache>
+```
> [!NOTE]
-> BlobFuse2 stores all open file contents in the temporary path. Make sure to have enough space to accommodate all open files.
+> BlobFuse2 stores all open file contents in the temporary path. Make sure to have enough space to contain all open files.
>
-#### Choose a caching disk option
-
-There are 3 common options for configuring the temporary path for caching:
+There are 3 common options for configuring the temporary path for file caching:
- [Use a local high-performing disk](#use-a-local-high-performing-disk)
- [Use a ramdisk](#use-a-ramdisk) (a ramdisk setup sketch follows this list)
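For the ramdisk option, the following is one hedged way to prepare a RAM-backed cache path with standard Linux commands; the size, paths, and user name are illustrative assumptions. The directory you create here is what you would reference as the `path` under `file_cache:` in your configuration file.

```bash
# Create a RAM-backed (tmpfs) file system to hold the BlobFuse2 file cache
# (size, paths, and user are illustrative)
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=16g tmpfs /mnt/ramdisk

# Create the cache directory and give your user ownership of it
sudo mkdir -p /mnt/ramdisk/blobfuse2tmp
sudo chown <youruser> /mnt/ramdisk/blobfuse2tmp
```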
storage Blobfuse2 What Is https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-what-is.md
Previously updated : 09/26/2022 Last updated : 10/01/2022
A full list of BlobFuse2 features is in the [BlobFuse2 README](https://github.co
- Mount an Azure storage blob container or Data Lake Storage Gen2 file system on Linux - Use basic file system operations, such as mkdir, opendir, readdir, rmdir, open, read, create, write, close, unlink, truncate, stat, and rename-- Local caching to improve subsequent access times
+- Local file caching to improve subsequent access times
- Streaming to support reading and writing large files
+- Gain insights into mount activities and resource usage using BlobFuse2 Health Monitor
- Parallel downloads and uploads to improve access time for large files - Multiple mounts to the same container for read-only workloads
Blobfuse2 has more feature support and improved performance in multiple user sce
- Improved caching - More management support through new Azure CLI commands - Additional logging support
+- The addition of write-streaming for large files (read-streaming was previously supported)
- Gain insights into mount activities and resource usage using BlobFuse2 Health Monitor - Compatibility and upgrade options for existing BlobFuse v1 users - Version checking and upgrade prompting
In many ways, BlobFuse2-mounted storage can be used just like the native Linux f
However, there are some key differences in the way BlobFuse2 behaves: -- **Readdir count of hardlinks**:
+- **Readdir count of hard links**:
For performance reasons, BlobFuse2 does not correctly report the hard links inside a directory. The number of hard links for empty directories is returned as 2. The number for non-empty directories is always returned as 3, regardless of the actual number of hard links.
However, there are some key differences in the way BlobFuse2 behaves:
BlobFuse2 doesn't support extended-attributes (x-attrs) operations.
+- **Write-streaming**:
+
+ Concurrent streaming of read and write operations on large file data can produce unpredictable results. Simultaneously writing to the same blob from different threads is not supported.
+ ### Data integrity
-When a file is written to, the data is first persisted into cache on a local disk. The data is written to blob storage only after the file handle is closed. If there's an issue attempting to persist the data to blob storage, you receive an error message.
+The file caching behavior plays an important role in the integrity of the data read from and written to a Blob Storage file system mount. Streaming mode is recommended for large files and supports streaming for both read and write operations. BlobFuse2 caches the blocks of streaming files in memory. For smaller files that don't consist of blocks, the entire file is stored in memory. The second mode, file caching, is recommended for workloads that don't contain large files; in this mode, files are stored on disk in their entirety.
+
+BlobFuse2 supports both read and write operations. Continuous synchronization of data written to storage by using other APIs or other mounts of BlobFuse2 isn't guaranteed. For data integrity, it's recommended that multiple sources don't modify the same blob, especially at the same time. If one or more applications attempt to write to the same file simultaneously, the results could be unexpected. Depending on the timing of multiple write operations and the freshness of the cache for each, the result could be that the last writer wins and previous writes are lost, or generally that the updated file isn't in the desired state.
+
+#### File caching on disk
+
+When a file is written to, the data is first persisted into cache on a local disk. The data is written to blob storage only after the file handle is closed. If there's an issue attempting to persist the data to blob storage, you will receive an error message.
+
+#### Streaming
-BlobFuse2 supports both read and write operations. Continuous synchronization of data written to storage by using other APIs or other mounts of BlobFuse2 aren't guaranteed. For data integrity, it's recommended that multiple sources don't modify the same blob, especially at the same time. If one or more applications attempt to write to the same file simultaneously, the results can be unexpected. Depending on the timing of multiple write operations and the freshness of the cache for each, the result could be that the last writer wins and previous writes are lost, or generally that the updated file isn't in the desired state.
+For streaming during both read and write operations, blocks of data are cached in memory as they are read or updated. Updates are flushed to Azure Storage when a file is closed or when the buffer is filled with dirty blocks.
-> [!WARNING]
-> In cases where multiple file handles are open to the same file, simultaneous write operations could result in data loss.
+Reading the same blob from multiple simultaneous threads is supported. However, simultaneous write operations could result in unexpected file data outcomes, including data loss. Performing simultaneous read operations and a single write operation is supported, but the data being read from some threads might not be current.
### Permissions
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
Title: How to mount Azure Blob storage as a file system on Linux with BlobFuse v1 | Microsoft Docs
+ Title: How to mount Azure Blob Storage as a file system on Linux with BlobFuse v1 | Microsoft Docs
-description: Learn how to mount an Azure Blob storage container with BlobFuse v1, a virtual file system driver on Linux.
+description: Learn how to mount an Azure Blob Storage container with BlobFuse v1, a virtual file system driver on Linux.
Previously updated : 09/26/2022 Last updated : 10/01/2022
-# How to mount Blob storage as a file system with BlobFuse v1
-
-## Overview
+# How to mount Azure Blob Storage as a file system with BlobFuse v1
> [!IMPORTANT] > [BlobFuse2](blobfuse2-what-is.md) is the latest version of BlobFuse and has many significant improvements over the version discussed in this article, BlobFuse v1. To learn about the improvements made in BlobFuse2, see [the list of BlobFuse2 enhancements](blobfuse2-what-is.md#blobfuse2-enhancements). BlobFuse2 is currently in preview and might not be suitable for production workloads.
->
-> This article is about the original version of BlobFuse. It is simply referred to as "BlobFuse" in many cases, but is also referred to as "BlobFuse v1" in this and other articles to distinguish it from the next generation of BlobFuse, BlobFuse2.
-[BlobFuse](https://github.com/Azure/azure-storage-fuse) is a virtual file system driver for Azure Blob storage. BlobFuse allows you to access your existing block blob data in your storage account through the Linux file system. BlobFuse uses the virtual directory scheme with the forward-slash '/' as a delimiter.
+[BlobFuse](https://github.com/Azure/azure-storage-fuse) is a virtual file system driver for Azure Blob Storage. BlobFuse allows you to access your existing block blob data in your storage account through the Linux file system. BlobFuse uses the virtual directory scheme with the forward-slash '/' as a delimiter.
-This guide shows you how to use BlobFuse v1, and mount a Blob storage container on Linux and access data. To learn more about BlobFuse, see the [readme](https://github.com/Azure/azure-storage-fuse) and [wiki](https://github.com/Azure/azure-storage-fuse/wiki).
+This guide shows you how to use BlobFuse v1 to mount a Blob Storage container on Linux and access your data. To learn more about BlobFuse v1, see the [readme](https://github.com/Azure/azure-storage-fuse) and [wiki](https://github.com/Azure/azure-storage-fuse/wiki).
> [!WARNING] > BlobFuse doesn't guarantee 100% POSIX compliance as it simply translates requests into [Blob REST APIs](/rest/api/storageservices/blob-service-rest-api). For example, rename operations are atomic in POSIX, but not in BlobFuse.
storage Customer Managed Keys Configure Existing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-existing-account.md
Previously updated : 08/31/2022 Last updated : 09/29/2022
The managed identity that authorizes access to the key vault may be either a use
### Use a user-assigned managed identity to authorize access
-A user-assigned is a standalone Azure resource. You must create the user-assigned identity before you configure customer-managed keys. To learn how to create and manage a user-assigned managed identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
-
-#### [Azure portal](#tab/azure-portal)
-
-When you configure customer-managed keys with the Azure portal, you can select an existing user-assigned identity through the portal user interface. For details, see [Configure customer-managed keys for an existing account](#configure-customer-managed-keys-for-an-existing-account).
-
-#### [PowerShell](#tab/azure-powershell)
-
-To authorize access to the key vault with a user-assigned managed identity, you'll need the resource ID and principal ID of the user-assigned managed identity. Call [Get-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/get-azuserassignedidentity) to get the user-assigned managed identity and assign it to a variable that you'll reference in subsequent steps:
-
-```azurepowershell
-$userIdentity = Get-AzUserAssignedIdentity -Name <user-assigned-identity> -ResourceGroupName <resource-group>
-$principalId = $userIdentity.PrincipalId
-```
-
-#### [Azure CLI](#tab/azure-cli)
-
-To authorize access to the key vault with a user-assigned managed identity, you'll need the resource ID and principal ID of the user-assigned managed identity. Call [az identity show](/cli/azure/identity#az-identity-show) command to get the user-assigned managed identity, then save the resource ID and principal ID to variables. You'll need these values in subsequent steps:
-
-```azurecli
-userIdentityId=$(az identity show --name sample-user-assigned-identity --resource-group storagesamples-rg --query id)
-principalId=$(az identity show --name sample-user-assigned-identity --resource-group storagesamples-rg --query principalId)
-```
-- ### Use a system-assigned managed identity to authorize access
A system-assigned managed identity is associated with an instance of an Azure se
Only existing storage accounts can use a system-assigned identity to authorize access to the key vault. New storage accounts must use a user-assigned identity, if customer-managed keys are configured on account creation.
+The system-assigned managed identity must have permissions to access the key in the key vault. Assign the **Key Vault Crypto Service Encryption User** role to the system-assigned managed identity with key vault scope to grant these permissions.
+ #### [Azure portal](#tab/azure-portal)
-When you configure customer-managed keys with the Azure portal with a system-assigned managed identity, the system-assigned managed identity is assigned to the storage account for you under the covers. For details, see [Configure customer-managed keys for an existing account](#configure-customer-managed-keys-for-an-existing-account).
+Before you can configure customer-managed keys with a system-assigned managed identity, you must assign the **Key Vault Crypto Service Encryption User** role to the system-assigned managed identity, scoped to the key vault. This role grants the system-assigned managed identity permissions to access the key in the key vault. For more information on assigning Azure RBAC roles with the Azure portal, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+
+When you configure customer-managed keys in the Azure portal with a system-assigned managed identity, the system-assigned managed identity is assigned to the storage account for you under the covers.
#### [PowerShell](#tab/azure-powershell)
-To assign a system-assigned managed identity to your storage account, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount):
+To assign a system-assigned managed identity to your storage account, first call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount):
```azurepowershell
-$storageAccount = Set-AzStorageAccount -ResourceGroupName <resource_group> `
- -Name <storage-account> `
+$accountName = "<storage-account>"
+
+$storageAccount = Set-AzStorageAccount -ResourceGroupName $rgName `
+ -Name $accountName `
-AssignIdentity ```
-Next, get the principal ID for the system-assigned managed identity, and save it to a variable. You'll need this value in the next step to create the key vault access policy:
+Next, assign the required RBAC role to the system-assigned managed identity, scoped to the key vault. Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples:
```azurepowershell $principalId = $storageAccount.Identity.PrincipalId+
+New-AzRoleAssignment -ObjectId $storageAccount.Identity.PrincipalId `
+ -RoleDefinitionName "Key Vault Crypto Service Encryption User" `
+ -Scope $keyVault.ResourceId
``` #### [Azure CLI](#tab/azure-cli)
-To authenticate access to the key vault with a system-assigned managed identity, assign the system-assigned managed identity to the storage account by calling [az storage account update](/cli/azure/storage/account#az-storage-account-update):
+To authenticate access to the key vault with a system-assigned managed identity, first assign the system-assigned managed identity to the storage account by calling [az storage account update](/cli/azure/storage/account#az-storage-account-update):
```azurecli
+accountName="<storage-account>"
+ az storage account update \
- --name <storage-account> \
- --resource-group <resource_group> \
+ --name $accountName \
+ --resource-group $rgName \
--assign-identity ```
-Next, get the principal ID for the system-assigned managed identity, and save it to a variable. You'll need this value in the next step to create the key vault access policy:
+Next, assign the required RBAC role to the system-assigned managed identity, scoped to the key vault. Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples:
```azurecli
-principalId = $(az storage account show --name <storage-account> --resource-group <resource_group> --query identity.principalId)
-```
---
-## Configure the key vault access policy
-
-The next step is to configure the key vault access policy. The key vault access policy grants permissions to the managed identity that will be used to authorize access to the key vault. To learn more about key vault access policies, see [Azure Key Vault Overview](../../key-vault/general/overview.md#securely-store-secrets-and-keys) and [Azure Key Vault security overview](../../key-vault/general/security-features.md#key-vault-authentication-options).
-
-### [Azure portal](#tab/azure-portal)
-
-To learn how to configure the key vault access policy with the Azure portal, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
-
-### [PowerShell](#tab/azure-powershell)
-
-To configure the key vault access policy with PowerShell, call [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy), providing the variable for the principal ID that you previously retrieved for the managed identity.
-
-```azurepowershell
-Set-AzKeyVaultAccessPolicy `
- -VaultName $keyVault.VaultName `
- -ObjectId $principalId `
- -PermissionsToKeys wrapkey,unwrapkey,get
-```
-
-To learn more about assigning the key vault access policy with PowerShell, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
-
-### [Azure CLI](#tab/azure-cli)
-
-To configure the key vault access policy with PowerShell, call [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy), providing the variable for the principal ID that you previously retrieved for the managed identity.
+principalId=$(az storage account show --name $accountName \
+ --resource-group $rgName \
+ --query identity.principalId \
+ --output tsv)
-```azurecli
-az keyvault set-policy \
- --name <key-vault> \
- --resource-group <resource_group>
- --object-id $principalId \
- --key-permissions get unwrapKey wrapKey
+az role assignment create --assignee-object-id $principalId \
+ --role "Key Vault Crypto Service Encryption User" \
+ --scope $kvResourceId
```
-To learn more about assigning the key vault access policy with Azure CLI, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
- ## Configure customer-managed keys for an existing account
To configure customer-managed keys for an existing account with automatic updati
Next, call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings, omitting the key version. Include the **-KeyvaultEncryption** option to enable customer-managed keys for the storage account. ```azurepowershell
-Set-AzStorageAccount -ResourceGroupName <resource-group> `
- -AccountName <storage-account> `
+$accountName = "<storage-account>"
+
+Set-AzStorageAccount -ResourceGroupName $rgName `
+ -AccountName $accountName `
-KeyvaultEncryption ` -KeyName $key.Name ` -KeyVaultUri $keyVault.VaultUri
To configure customer-managed keys for an existing account with automatic updati
Next, call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings, omitting the key version. Include the `--encryption-key-source` parameter and set it to `Microsoft.Keyvault` to enable customer-managed keys for the account. ```azurecli
-key_vault_uri=$(az keyvault show \
- --name <key-vault> \
- --resource-group <resource_group> \
+accountName="<storage-account>"
+
+keyVaultUri=$(az keyvault show \
+ --name $kvName \
+ --resource-group $rgName \
--query properties.vaultUri \ --output tsv)
-az storage account update
- --name <storage-account> \
- --resource-group <resource_group> \
- --encryption-key-name <key> \
+
+az storage account update \
+ --name $accountName \
+ --resource-group $rgName \
+ --encryption-key-name $keyName \
--encryption-key-source Microsoft.Keyvault \
- --encryption-key-vault $key_vault_uri
+ --encryption-key-vault $keyVaultUri
```
To configure customer-managed keys with manual updating of the key version, expl
Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples. ```azurepowershell
-Set-AzStorageAccount -ResourceGroupName <resource-group> `
- -AccountName <storage-account> `
+$accountName = "<storage-account>"
+
+Set-AzStorageAccount -ResourceGroupName $rgName `
+ -AccountName $accountName `
-KeyvaultEncryption ` -KeyName $key.Name ` -KeyVersion $key.Version `
To configure customer-managed keys with manual updating of the key version, expl
Remember to replace the placeholder values in brackets with your own values. ```azurecli
-key_vault_uri=$(az keyvault show \
- --name <key-vault> \
- --resource-group <resource_group> \
+accountName="<storage-account>"
+
+keyVaultUri=$(az keyvault show \
+ --name $kvName \
+ --resource-group $rgName \
--query properties.vaultUri \ --output tsv)
-key_version=$(az keyvault key list-versions \
- --name <key> \
- --vault-name <key-vault> \
+
+keyVersion=$(az keyvault key list-versions \
+ --name $keyName \
+ --vault-name $kvName \
--query [-1].kid \ --output tsv | cut -d '/' -f 6)
-az storage account update
- --name <storage-account> \
- --resource-group <resource_group> \
- --encryption-key-name <key> \
- --encryption-key-version $key_version \
+
+az storage account update \
+ --name $accountName \
+ --resource-group $rgName \
+ --encryption-key-name $keyName \
+ --encryption-key-version $keyVersion \
--encryption-key-source Microsoft.Keyvault \
- --encryption-key-vault $key_vault_uri
+ --encryption-key-vault $keyVaultUri
``` When you manually update the key version, you'll need to update the storage account's encryption settings to use the new version. First, query for the key vault URI by calling [az keyvault show](/cli/azure/keyvault#az-keyvault-show), and for the key version by calling [az keyvault key list-versions](/cli/azure/keyvault/key#az-keyvault-key-list-versions). Then call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
storage Customer Managed Keys Configure New Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-new-account.md
Previously updated : 08/31/2022 Last updated : 09/29/2022
To learn how to configure customer-managed keys for an existing storage account,
## Use a user-assigned managed identity to authorize access to the key vault
-When you enable customer-managed keys for a new storage account, you must specify a user-assigned managed identity. The user-assigned managed identity will be used to authorize access to the key vault that contains the key. The user-assigned managed identity must have permissions to access the key in the key vault.
-
-A user-assigned is a standalone Azure resource. To learn more about user-assigned managed identities, see [Managed identity types](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types). To learn how to create and manage a user-assigned managed identity, see [Manage user-assigned managed identities](../../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md).
-
-Both new and existing storage accounts can use a user-assigned identity to authorize access to the key vault. You must create the user-assigned identity before you configure customer-managed keys.
-
-### [Azure portal](#tab/azure-portal)
-
-When you configure customer-managed keys with the Azure portal, you can select an existing user-assigned identity through the portal user interface. For details, see [Configure customer-managed keys for a new storage account](#configure-customer-managed-keys-for-a-new-storage-account).
-
-### [PowerShell](#tab/azure-powershell)
-
-To authorize access to the key vault with a user-assigned managed identity, you'll need the resource ID and principal ID of the user-assigned managed identity. Call [Get-AzUserAssignedIdentity](/powershell/module/az.managedserviceidentity/get-azuserassignedidentity) to get the user-assigned managed identity and assign it to a variable that you'll reference in subsequent steps:
-
-```azurepowershell
-$userIdentity = Get-AzUserAssignedIdentity -Name <user-assigned-identity> -ResourceGroupName <resource-group>
-```
-
-### [Azure CLI](#tab/azure-cli)
-
-To authorize access to the key vault with a user-assigned managed identity, you'll need the resource ID and principal ID of the user-assigned managed identity. Call [az identity show](/cli/azure/identity#az-identity-show) command to get the user-assigned managed identity, then save the resource ID and principal ID to variables. You'll need these values in subsequent steps:
-
-```azurecli
-userIdentityId=$(az identity show --name sample-user-assigned-identity --resource-group storagesamples-rg --query id)
-principalId=$(az identity show --name sample-user-assigned-identity --resource-group storagesamples-rg --query principalId)
-```
---
-## Configure the key vault access policy
-
-The next step is to configure the key vault access policy. The key vault access policy grants permissions to the user-assigned managed identity that will be used to authorize access to the key vault. To learn more about key vault access policies, see [Azure Key Vault Overview](../../key-vault/general/overview.md#securely-store-secrets-and-keys) and [Azure Key Vault security overview](../../key-vault/general/security-features.md#key-vault-authentication-options).
-
-### [Azure portal](#tab/azure-portal)
-
-To learn how to configure the key vault access policy with the Azure portal, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
-
-### [PowerShell](#tab/azure-powershell)
-
-To configure the key vault access policy with PowerShell, call [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy), providing the variable for the principal ID that you previously retrieved for the user-assigned managed identity.
-
-```azurepowershell
-Set-AzKeyVaultAccessPolicy `
- -VaultName $keyVault.VaultName `
- -ObjectId $userIdentity.PrincipalId `
- -PermissionsToKeys wrapkey,unwrapkey,get
-```
-
-To learn more about assigning the key vault access policy with PowerShell, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
-
-### [Azure CLI](#tab/azure-cli)
-
-To configure the key vault access policy with PowerShell, call [az keyvault set-policy](/cli/azure/keyvault#az-keyvault-set-policy), providing the variable for the principal ID that you previously retrieved for the user-assigned managed identity.
-
-```azurecli
-az keyvault set-policy \
- --name <key-vault> \
- --resource-group <resource_group>
- --object-id $principalId \
- --key-permissions get unwrapKey wrapKey
-```
-
-To learn more about assigning the key vault access policy with Azure CLI, see [Assign an Azure Key Vault access policy](../../key-vault/general/assign-access-policy.md).
-- ## Configure customer-managed keys for a new storage account
You can also configure customer-managed keys with manual updating of the key ver
To configure customer-managed keys for a new storage account with automatic updating of the key version, call [New-AzStorageAccount](/powershell/module/az.storage/new-azstorageaccount), as shown in the following example. Use the variable you created previously for the resource ID for the user-assigned managed identity. You'll also need the key vault URI and key name: ```azurepowershell
-New-AzStorageAccount -ResourceGroupName <resource-group> `
- -Name <storage-account> `
+$accountName = "<storage-account>"
+
+New-AzStorageAccount -ResourceGroupName $rgName `
+ -Name $accountName `
-Kind StorageV2 ` -SkuName Standard_LRS ` -Location $location `
New-AzStorageAccount -ResourceGroupName <resource-group> `
To configure customer-managed keys for a new storage account with automatic updating of the key version, call [az storage account create](/cli/azure/storage/account#az-storage-account-create), as shown in the following example. Use the variable you created previously for the resource ID for the user-assigned managed identity. You'll also need the key vault URI and key name: ```azurecli
+accountName="<storage-account>"
+ az storage account create \
- --name <storage-account> \
- --resource-group <resource-group> \
- --location <location> \
+ --name $accountName \
+ --resource-group $rgName \
+ --location $location \
--sku Standard_LRS \ --kind StorageV2 \ --identity-type SystemAssigned,UserAssigned \
- --user-identity-id <user-assigned-managed-identity> \
+ --user-identity-id $identityResourceId \
--encryption-key-vault <key-vault-uri> \
- --encryption-key-name <key-name> \
+ --encryption-key-name $keyName \
--encryption-key-source Microsoft.Keyvault \
- --key-vault-user-identity-id <user-assigned-managed-identity>
+ --key-vault-user-identity-id $identityResourceId
```
To configure customer-managed keys with manual updating of the key version, expl
Remember to replace the placeholder values in brackets with your own values and to use the variables defined in the previous examples. ```azurepowershell
-New-AzStorageAccount -ResourceGroupName <resource-group> `
- -Name <storage-account> `
+$accountName = "<storage-account>"
+
+New-AzStorageAccount -ResourceGroupName $rgName `
+ -Name $accountName `
-Kind StorageV2 ` -SkuName Standard_LRS ` -Location $location `
New-AzStorageAccount -ResourceGroupName <resource-group> `
-KeyVaultUserAssignedIdentityId $userIdentity.Id ``` - When you manually update the key version, you'll need to update the storage account's encryption settings to use the new version. First, call [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey) to get the latest version of the key. Then call [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example. # [Azure CLI](#tab/azure-cli)
To configure customer-managed keys with manual updating of the key version, expl
Remember to replace the placeholder values in brackets with your own values. ```azurecli
+accountName="<storage-account>"
+ key_vault_uri=$(az keyvault show \ --name <key-vault> \ --resource-group <resource_group> \ --query properties.vaultUri \ --output tsv)+ key_version=$(az keyvault key list-versions \ --name <key> \ --vault-name <key-vault> \ --query [-1].kid \ --output tsv | cut -d '/' -f 6)+ az storage account create \
- --name <storage-account> \
- --resource-group <resource-group> \
- --location <location> \
+ --name $accountName \
+ --resource-group $rgName \
+ --location $location \
--sku Standard_LRS \ --kind StorageV2 \ --identity-type SystemAssigned,UserAssigned \
- --user-identity-id <user-assigned-managed-identity> \
- --encryption-key-vault $key_vault_uri \
- --encryption-key-name <key-name> \
+ --user-identity-id $identityResourceId \
+ --encryption-key-vault $keyVaultUri \
+ --encryption-key-name $keyName \
--encryption-key-source Microsoft.Keyvault \
- --encryption-key-version $key_version \
- --key-vault-user-identity-id <user-assigned-managed-identity>
+ --encryption-key-version $keyVersion \
+ --key-vault-user-identity-id $identityResourceId
``` When you manually update the key version, you'll need to update the storage account's encryption settings to use the new version. First, query for the key vault URI by calling [az keyvault show](/cli/azure/keyvault#az-keyvault-show), and for the key version by calling [az keyvault key list-versions](/cli/azure/keyvault/key#az-keyvault-key-list-versions). Then call [az storage account update](/cli/azure/storage/account#az-storage-account-update) to update the storage account's encryption settings to use the new version of the key, as shown in the previous example.
storage Customer Managed Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-overview.md
Previously updated : 08/31/2022 Last updated : 09/30/2022
You can switch between customer-managed keys and Microsoft-managed keys at any t
> [!IMPORTANT] > Customer-managed keys rely on managed identities for Azure resources, a feature of Azure AD. Managed identities do not currently support cross-tenant scenarios. When you configure customer-managed keys in the Azure portal, a managed identity is automatically assigned to your storage account under the covers. If you subsequently move the subscription, resource group, or storage account from one Azure AD tenant to another, the managed identity associated with the storage account is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see **Transferring a subscription between Azure AD directories** in [FAQs and known issues with managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
-Azure storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more information about keys, see [About keys](../../key-vault/keys/about-keys.md).
+ The key vault that stores the key must have both soft delete and purge protection enabled. Azure storage encryption supports RSA and RSA-HSM keys of sizes 2048, 3072 and 4096. For more information about keys, see [About keys](../../key-vault/keys/about-keys.md).
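For example, the following Azure CLI sketch creates a key vault with purge protection enabled; soft delete is enabled by default for new vaults. The vault name, resource group, and location are placeholders, not values from this article.

```azurecli
# Create a key vault with purge protection enabled (soft delete is on by default for new vaults)
az keyvault create \
    --name <key-vault> \
    --resource-group <resource-group> \
    --location <location> \
    --enable-purge-protection true
```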
Using a key vault or managed HSM has associated costs. For more information, see [Key Vault pricing](https://azure.microsoft.com/pricing/details/key-vault/).
synapse-analytics Restore Sql Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/backuprestore/restore-sql-pool.md
Previously updated : 08/24/2022 Last updated : 09/26/2022
In this article, you learn how to restore an existing dedicated SQL pool in Azur
8. Select either **Automatic Restore Points** or **User-Defined Restore Points**.
- ![Restore points](../media/sql-pools/restore-point.PNG)
+ ![Restore points](../media/sql-pools/restore-point.PNG)
- If the dedicated SQL pool doesn't have any automatic restore points, wait a few hours, or create a user defined restore point before restoring. For User-Defined Restore Points, select an existing one or create a new one.
+ * If the dedicated SQL pool doesn't have any automatic restore points, wait a few hours, or create a user-defined restore point before restoring. For User-Defined Restore Points, select an existing one or create a new one.
- If you are restoring a geo-backup, select the workspace located in the source region and the dedicated SQL pool you want to restore.
+ * If you want to restore a dedicated SQL pool from a different workspace, select **New dedicated SQL pool** from your current workspace. Under the **Additional settings** tab, find **Use existing data** and select the **Restore point** option. As shown in the above screenshot, you can then select the **Server or workspace** name from which to restore.
+
+ * If you are restoring a geo-backup, select the workspace located in the source region and the dedicated SQL pool you want to restore.
+
+ > [!NOTE]
+ > You cannot perform an in-place restore of a SQL pool with the same name as an existing pool, regardless of whether the SQL pool is in the same workspace or a different workspace.
9. Select **Review + Create**.
In this article, you learn how to restore an existing dedicated SQL pool in Azur
![ Restore Overview](../media/sql-pools/restore-sqlpool-01.png)
-4. Select either **Automatic Restore Points** or **User-Defined Restore Points**.
+4. Select either **Automatic Restore Points** or **User-Defined Restore Points**.
If the dedicated SQL pool doesn't have any automatic restore points, wait a few hours or create a user-defined restore point before restoring.