Updates from: 09/02/2024 01:07:42
Service Microsoft Docs article Related commit history on GitHub Change details
azure-arc Alternate Key Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/alternate-key-based.md
+
+ Title: Alternate key-based configuration for Cloud Ingest Edge Volumes
+description: Learn about an alternate key-based configuration for Cloud Ingest Edge Volumes.
+Last updated: 08/26/2024
+# Alternate: Key-based authentication configuration for Cloud Ingest Edge Volumes
+
+This article describes an alternate configuration for [Cloud Ingest Edge Volumes](cloud-ingest-edge-volume-configuration.md) (blob upload with local purge) with key-based authentication.
+
+This configuration is an alternative option for use with key-based authentication methods. You should review the recommended configuration using system-assigned managed identities in [Cloud Ingest Edge Volumes configuration](cloud-ingest-edge-volume-configuration.md).
+
+## Prerequisites
+
+1. Create a storage account [following these instructions](/azure/storage/common/storage-account-create?tabs=azure-portal).
+
+ > [!NOTE]
+ > When you create a storage account, it's recommended that you create it under the same resource group and region/location as your Kubernetes cluster.
+
+1. Create a container in the storage account that you created in the previous step, [following these instructions](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
+
+## Create a Kubernetes secret
+
+Edge Volumes supports the following three authentication methods:
+
+- Shared Access Signature (SAS) Authentication (recommended)
+- Connection String Authentication
+- Storage Key Authentication
+
+After you complete authentication for one of these methods, proceed to the [Create a Cloud Ingest Persistent Volume Claim (PVC)](#create-a-cloud-ingest-persistent-volume-claim-pvc) section.
+
+### [Shared Access Signature (SAS) authentication](#tab/sas)
+
+### Create a Kubernetes secret using Shared Access Signature (SAS) authentication
+
+You can configure SAS authentication using YAML and `kubectl`, or by using the Azure CLI.
+
+To find your `storageaccountsas`, perform the following procedure:
+
+1. Navigate to your storage account in the Azure portal.
+1. Expand **Security + networking** on the left blade and then select **Shared access signature**.
+1. Under **Allowed resource types**, select **Service > Container > Object**.
+1. Under **Allowed permissions**, unselect **Immutable storage** and **Permanent delete**.
+1. Under **Start and expiry date/time**, choose your desired end date and time.
+1. At the bottom, select **Generate SAS and connection string**.
+1. The values listed under **SAS token** are used for the `storageaccountsas` variables in the next section.
+
+#### Shared Access Signature (SAS) authentication using YAML and `kubectl`
+
+1. Create a file named `sas.yaml` with the following contents. Replace `metadata::name`, `metadata::namespace`, and `storageaccountsas` with your own values.
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ ### This name should look similar to "kharrisStorageAccount-secret" where "kharrisStorageAccount" is replaced with your storage account name
+ name: <your-storage-acct-name-secret>
+ # Use a namespace that matches your intended consuming pod, or "default"
+ namespace: <your-intended-consuming-pod-or-default>
+ stringData:
+ authType: SAS
+ # Container level SAS (must have ? prefixed)
+ storageaccountsas: "?..."
+ type: Opaque
+ ```
+
+1. To apply `sas.yaml`, run:
+
+ ```bash
+ kubectl apply -f "sas.yaml"
+ ```
+
+#### Shared Access Signature (SAS) authentication using CLI
+
+- If you want to scope SAS authentication at the container level, use the following commands. In the first command, update `YOUR_CONTAINER_NAME` and the `--expiry` date; in the second command, update `YOUR_NAMESPACE`, `YOUR_STORAGE_ACCT_NAME`, and `YOUR_SAS` (the SAS token returned by the first command):
+
+ ```bash
+ az storage container generate-sas [OPTIONAL auth via --connection-string "..."] --name YOUR_CONTAINER_NAME --permissions acdrw --expiry '2025-02-02T01:01:01Z'
+ kubectl create secret generic -n "YOUR_NAMESPACE" "YOUR_STORAGE_ACCT_NAME"-secret --from-literal=storageaccountsas="YOUR_SAS"
+ ```
+
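+Whichever method you used, you can optionally confirm that the secret exists and contains a `storageaccountsas` entry before you continue; the placeholders below match the ones used above:
+
+```bash
+# Values under "data" are base64 encoded.
+kubectl get secret -n "YOUR_NAMESPACE" "YOUR_STORAGE_ACCT_NAME"-secret -o yaml
+```
+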
+### [Connection string authentication](#tab/connectionstring)
+
+### Create a Kubernetes secret using connection string authentication
+
+You can configure connection string authentication using YAML and `kubectl`, or by using Azure CLI.
+
+To find your `storageaccountconnectionstring`, perform the following procedure:
+
+1. Navigate to your storage account in the Azure portal.
+1. Expand **Security + networking** on the left blade and then select **Shared access signature**.
+1. Under **Allowed resource types**, select **Service > Container > Object**.
+1. Under **Allowed permissions**, unselect **Immutable storage** and **Permanent delete**.
+1. Under **Start and expiry date/time**, choose your desired end date and time.
+1. At the bottom, select **Generate SAS and connection string**.
+1. The values listed under **Connection string** are used for the `storageaccountconnectionstring` variables in the next section.
+
+For more information, see [Create a connection string using a shared access signature](/azure/storage/common/storage-configure-connection-string?toc=%2Fazure%2Fstorage%2Fblobs%2Ftoc.json&bc=%2Fazure%2Fstorage%2Fblobs%2Fbreadcrumb%2Ftoc.json#create-a-connection-string-using-a-shared-access-signature).
+
+#### Connection string authentication using YAML and `kubectl`
+
+1. Create a file named `connectionString.yaml` with the following contents. Replace `metadata::name`, `metadata::namespace`, and `storageaccountconnectionstring` with your own values.
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ ```yaml
+ apiVersion: v1
+ kind: Secret
+ metadata:
+ ### This name should look similar to "kharrisStorageAccount-secret" where "kharrisStorageAccount" is replaced with your storage account name
+ name: <your-storage-acct-name-secret>
+ # Use a namespace that matches your intended consuming pod or "default"
+ namespace: <your-intended-consuming-pod-or-default>
+ stringData:
+ authType: CONNECTION_STRING
+      # Connection string, which can contain either a storage key or a SAS.
+      # Keep the storageaccountconnectionstring line that matches your choice and comment out the other.
+ # - Storage key example -
+ storageaccountconnectionstring: "DefaultEndpointsProtocol=https;AccountName=YOUR_ACCT_NAME_HERE;AccountKey=YOUR_ACCT_KEY_HERE;EndpointSuffix=core.windows.net"
+ # - SAS example -
+ storageaccountconnectionstring: "BlobEndpoint=https://YOUR_BLOB_ENDPOINT_HERE;SharedAccessSignature=YOUR_SHARED_ACCESS_SIG_HERE"
+ type: Opaque
+ ```
+
+1. To apply `connectionString.yaml`, run:
+
+ ```bash
+ kubectl apply -f "connectionString.yaml"
+ ```
+
+#### Connection string authentication using CLI
+
+A connection string can contain a storage key or SAS.
+
+- For a storage key connection string, run the following commands. You must update `YOUR_STORAGE_ACCT_NAME` in the first command, and the `your_namespace`, `your_storage_acct_name`, and `your_secret` values in the second command, where `your_secret` is the connection string returned by the first command:
+
+ ```bash
+ az storage account show-connection-string --name YOUR_STORAGE_ACCT_NAME --output tsv
+ kubectl create secret generic -n "your_namespace" "your_storage_acct_name"-secret --from-literal=storageaccountconnectionstring="your_secret"
+ ```
+
+- For a SAS connection string, run the following commands. You must update the `your_storage_acct_name` and `your_sas_token` values from the first command, and the `your_namespace`, `your_storage_acct_name`, and `your_secret` values from the second command:
+
+ ```bash
+    az storage account show-connection-string --name your_storage_acct_name --sas-token "your_sas_token" --output tsv
+ kubectl create secret generic -n "your_namespace" "your_storage_acct_name"-secret --from-literal=storageaccountconnectionstring="your_secret"
+ ```
+
+### [Storage key authentication](#tab/storagekey)
+
+### Create a Kubernetes secret using storage key authentication
+
+1. Create a file named `add-key.sh` with the following contents. No edits to the contents are necessary:
+
+ ```bash
+ #!/usr/bin/env bash
+
+ while getopts g:n:s: flag
+ do
+ case "${flag}" in
+ g) RESOURCE_GROUP=${OPTARG};;
+ s) STORAGE_ACCOUNT=${OPTARG};;
+ n) NAMESPACE=${OPTARG};;
+ esac
+ done
+
+ SECRET=$(az storage account keys list -g $RESOURCE_GROUP -n $STORAGE_ACCOUNT --query [0].value --output tsv)
+
+ kubectl create secret generic -n "${NAMESPACE}" "${STORAGE_ACCOUNT}"-secret --from-literal=storageaccountkey="${SECRET}" --from-literal=storageaccountname="${STORAGE_ACCOUNT}"
+ ```
+
+1. Once you create the file, change the write permissions on the file and execute the shell script using the following commands. Running these commands creates a secret named `{your_storage_account}-secret`. This secret name is used for the `secretName` value when you configure the Persistent Volume (PV).
+
+ ```bash
+ chmod +x add-key.sh
+ ./add-key.sh -g "$your_resource_group_name" -s "$your_storage_account_name" -n "$your_kubernetes_namespace"
+ ```
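+
+   Optionally, you can confirm the secret was stored correctly by decoding one of its values; this reuses the shell variables from the previous command:
+
+   ```bash
+   # Prints the storage account name stored in the secret; an empty result means the script didn't run as expected.
+   kubectl get secret -n "$your_kubernetes_namespace" "$your_storage_account_name"-secret -o jsonpath='{.data.storageaccountname}' | base64 -d; echo
+   ```
+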
+---
+## Create a Cloud Ingest Persistent Volume Claim (PVC)
+
+1. Create a file named `cloudIngestPVC.yaml` with the following contents. You must edit the `metadata::name` value, and add a name for your Persistent Volume Claim. This name is referenced on the last line of `deploymentExample.yaml` in the next step. You must also update the `metadata::namespace` value with your intended consuming pod. If you don't have an intended consuming pod, the `metadata::namespace` value is `default`:
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ ```yml
+ kind: PersistentVolumeClaim
+ apiVersion: v1
+ metadata:
+ ### Create a name for the PVC ###
+      name: <create-a-pvc-name-here>
+ ### Use a namespace that matches your intended consuming pod, or "default" ###
+ namespace: <your-intended-consuming-pod-or-default>
+ spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 2Gi
+ storageClassName: cloud-backed-sc
+ ```
+
+2. To apply `cloudIngestPVC.yaml`, run:
+
+ ```bash
+ kubectl apply -f "cloudIngestPVC.yaml"
+ ```
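+
+Optionally, confirm the claim was created before you continue, using the namespace you set in `metadata::namespace`:
+
+```bash
+kubectl get pvc -n <your-intended-consuming-pod-or-default>
+```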
+
+## Attach sub-volume to Edge Volume
+
+1. Get the name of your Edge Volume using the following command:
+
+ ```bash
+ kubectl get edgevolumes
+ ```
+
+1. Create a file named `edgeSubvolume.yaml` and copy the following contents. Update the variables with your information:
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ - `metadata::name`: Create a name for your sub-volume.
+ - `spec::edgevolume`: This name was retrieved from the previous step using `kubectl get edgevolumes`.
+ - `spec::path`: Create your own subdirectory name under the mount path. Note that the following example already contains an example name (`exampleSubDir`). If you change this path name, line 33 in `deploymentExample.yaml` must be updated with the new path name. If you choose to rename the path, don't use a preceding slash.
+ - `spec::auth::authType`: Depends on what authentication method you used in the previous steps. Accepted inputs include `sas`, `connection_string`, and `key`.
+ - `spec::auth::secretName`: If you used storage key authentication, your `secretName` is `{your_storage_account_name}-secret`. If you used connection string or SAS authentication, your `secretName` was specified by you.
+ - `spec::auth::secretNamespace`: Matches your intended consuming pod, or `default`.
+ - `spec::container`: The container name in your storage account.
+ - `spec::storageaccountendpoint`: Navigate to your storage account in the Azure portal. On the **Overview** page, near the top right of the screen, select **JSON View**. You can find the `storageaccountendpoint` link under **properties::primaryEndpoints::blob**. Copy the entire link (for example, `https://mytest.blob.core.windows.net/`).
+
+ ```yaml
+ apiVersion: "arccontainerstorage.azure.net/v1"
+ kind: EdgeSubvolume
+ metadata:
+ name: <create-a-subvolume-name-here>
+ spec:
+ edgevolume: <your-edge-volume-name-here>
+ path: exampleSubDir # If you change this path, line 33 in deploymentExample.yaml must be updated. Don't use a preceding slash.
+ auth:
+        authType: <your-auth-type> # sas, connection_string, or key, matching the authentication method you used
+ secretName: <your-secret-name>
+ secretNamespace: <your_namespace>
+ storageaccountendpoint: <your_storage_account_endpoint>
+ container: <your-blob-storage-account-container-name>
+ ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration
+ ```
+
+2. To apply `edgeSubvolume.yaml`, run:
+
+ ```bash
+ kubectl apply -f "edgeSubvolume.yaml"
+ ```
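+
+Optionally, confirm the sub-volume resource was created. This assumes the custom resource's plural name is `edgesubvolumes`:
+
+```bash
+kubectl get edgesubvolumes
+kubectl describe edgesubvolume <create-a-subvolume-name-here>
+```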
+
+### Optional: Modify the `ingestPolicy` from the default
+
+1. If you want to change the `ingestPolicy` from the default `edgeingestpolicy-default`, create a file named `myedgeingest-policy.yaml` with the following contents. Update the following variables with your preferences.
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ - `metadata::name`: Create a name for your **ingestPolicy**. This name must be updated and referenced in the spec::ingestPolicy section of your `edgeSubvolume.yaml`.
+ - `spec::ingest::order`: The order in which dirty files are uploaded. This is best effort, not a guarantee (defaults to **oldest-first**). Options for order are: **oldest-first** or **newest-first**.
+ - `spec::ingest::minDelaySec`: The minimum number of seconds before a dirty file is eligible for ingest (defaults to 60). This number can range between 0 and 31536000.
+ - `spec::eviction::order`: How files are evicted (defaults to **unordered**). Options for eviction order are: **unordered** or **never**.
+ - `spec::eviction::minDelaySec`: The number of seconds before a clean file is eligible for eviction (defaults to 300). This number can range between 0 and 31536000.
+
+ ```yaml
+ apiVersion: arccontainerstorage.azure.net/v1
+ kind: EdgeIngestPolicy
+ metadata:
+ name: <create-a-policy-name-here> # This will need to be updated and referenced in the spec::ingestPolicy section of the edgeSubvolume.yaml
+ spec:
+ ingest:
+ order: <your-ingest-order>
+ minDelaySec: <your-min-delay-sec>
+ eviction:
+ order: <your-eviction-order>
+ minDelaySec: <your-min-delay-sec>
+ ```
+
+1. To apply `myedgeingest-policy.yaml`, run:
+
+ ```bash
+ kubectl apply -f "myedgeingest-policy.yaml"
+ ```
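+
+For the new policy to take effect, reference it from your sub-volume and re-apply the sub-volume; a minimal sketch using the policy name placeholder above:
+
+```bash
+# In edgeSubvolume.yaml, set spec::ingestPolicy to your new policy name:
+#   ingestPolicy: <create-a-policy-name-here>
+kubectl apply -f "edgeSubvolume.yaml"
+```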
+
+## Attach your app (Kubernetes native application)
+
+1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `deploymentExample.yaml` with the following contents. Replace `containers::name` and `volumes::persistentVolumeClaim::claimName` with your values. If you updated the path name from `edgeSubvolume.yaml`, `exampleSubDir` on line 33 must be updated with your new path name.
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: cloudingestedgevol-deployment ### This will need to be unique for every volume you choose to create
+ spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ name: wyvern-testclientdeployment
+ template:
+ metadata:
+ name: wyvern-testclientdeployment
+ labels:
+ name: wyvern-testclientdeployment
+ spec:
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: app
+ operator: In
+ values:
+ - wyvern-testclientdeployment
+ topologyKey: kubernetes.io/hostname
+ containers:
+ ### Specify the container in which to launch the busy box. ###
+ - name: <create-a-container-name-here>
+ image: mcr.microsoft.com/azure-cli:2.57.0@sha256:c7c8a97f2dec87539983f9ded34cd40397986dcbed23ddbb5964a18edae9cd09
+ command:
+ - "/bin/sh"
+ - "-c"
+        - "dd if=/dev/urandom of=/data/exampleSubDir/esaingesttestfile count=16 bs=1M && while true; do ls /data > /dev/null 2>&1 || break; sleep 1; done"
+ volumeMounts:
+ ### This name must match the following volumes::name attribute ###
+ - name: wyvern-volume
+ ### This mountPath is where the PVC will be attached to the pod's filesystem ###
+ mountPath: "/data"
+ volumes:
+ ### User-defined 'name' that is used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ###
+ - name: wyvern-volume
+ persistentVolumeClaim:
+ ### This claimName must refer to your PVC metadata::name
+ claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml>
+ ```
+
+1. To apply `deploymentExample.yaml`, run:
+
+ ```bash
+ kubectl apply -f "deploymentExample.yaml"
+ ```
+
+1. Use `kubectl get pods` to find the name of your pod. Copy this name; you use it in the next step.
+
+ > [!NOTE]
+ > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods will appear using `kubectl get pods`. You can choose either pod name to use for the next step.
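+
+   If other pods are running in the namespace, you can narrow the listing by the deployment's label selector and confirm the rollout finished; the names below are taken from `deploymentExample.yaml` (add `-n <your-namespace>` if you deployed outside your current namespace):
+
+   ```bash
+   kubectl get pods -l name=wyvern-testclientdeployment
+   kubectl rollout status deployment/cloudingestedgevol-deployment
+   ```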
+
+1. Run the following command and replace `POD_NAME_HERE` with your copied value from the last step:
+
+ ```bash
+    kubectl exec -it POD_NAME_HERE -- sh
+ ```
+
+1. Change directories (`cd`) into the `/data` mount path as specified in your `deploymentExample.yaml`.
+
+1. You should see a directory with the name you specified as your `path` in Step 2 of the [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume) section. Now, `cd` into `your_path_name_here`, replacing `your_path_name_here` with your path name.
+
+1. As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`.
+
+1. In the Azure portal, navigate to your storage account and find the container specified from Step 2 of [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume). When you select your container, you should see `file1.txt` populated within the container. If the file hasn't yet appeared, wait approximately 1 minute; Edge Volumes waits a minute before uploading.
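+
+If you prefer the command line, the write test above condenses to the following; the `az` command at the end is an optional check to run from a machine signed in to the Azure CLI, with your own storage account and container names substituted:
+
+```bash
+cd /data/exampleSubDir     # or your own path name, inside the pod shell
+echo "Hello World" > file1.txt
+# Wait roughly a minute for the upload, then from an Azure CLI session:
+# az storage blob list --account-name <your-storage-account> --container-name <your-container> --auth-mode login --output table
+```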
+
+## Next steps
+
+After completing these steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring, or third-party monitoring with Prometheus and Grafana.
+
+[Monitor your deployment](monitor-deployment-edge-volumes.md)
azure-arc Alternate Onelake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/alternate-onelake.md
+
+ Title: Alternate OneLake configuration for Cloud Ingest Edge Volumes
+description: Learn about an alternate Cloud Ingest Edge Volumes configuration.
+Last updated: 08/26/2024
+# Alternate: OneLake configuration for Cloud Ingest Edge Volumes
+
+This article describes an alternate configuration for [Cloud Ingest Edge Volumes](cloud-ingest-edge-volume-configuration.md) (blob upload with local purge) for OneLake Lakehouses.
+
+This configuration is an alternative option that you can use with key-based authentication methods. You should review the recommended configuration using the system-assigned managed identities described in [Cloud Ingest Edge Volumes configuration](cloud-ingest-edge-volume-configuration.md).
+
+## Configure OneLake for Extension Identity
+
+### Add Extension Identity to OneLake workspace
+
+1. Navigate to your OneLake portal; for example, `https://youraccount.powerbi.com`.
+1. Create or navigate to your workspace.
+ :::image type="content" source="media/onelake-workspace.png" alt-text="Screenshot showing workspace ribbon in portal." lightbox="media/onelake-workspace.png":::
+1. Select **Manage Access**.
+ :::image type="content" source="media/onelake-manage-access.png" alt-text="Screenshot showing manage access screen in portal." lightbox="media/onelake-manage-access.png":::
+1. Select **Add people or groups**.
+1. Enter your extension name from your Azure Container Storage enabled by Azure Arc installation. This must be unique within your tenant.
+ :::image type="content" source="media/add-extension-name.png" alt-text="Screenshot showing add extension name screen." lightbox="media/add-extension-name.png":::
+1. Change the drop-down for permissions from **Viewer** to **Contributor**.
+ :::image type="content" source="media/onelake-set-contributor.png" alt-text="Screenshot showing set contributor screen." lightbox="media/onelake-set-contributor.png":::
+1. Select **Add**.
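+
+If you're unsure of the exact extension name to enter, you can list it with the Azure CLI; a sketch that assumes the `k8s-extension` CLI extension is installed and uses the same extension type referenced elsewhere in these articles:
+
+```bash
+az k8s-extension list --cluster-name <your-cluster-name> --resource-group <your-resource-group> --cluster-type connectedClusters --query "[?extensionType=='microsoft.arc.containerstorage'].name" --output tsv
+```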
+
+### Create a Cloud Ingest Persistent Volume Claim (PVC)
+
+1. Create a file named `cloudIngestPVC.yaml` with the following contents. Modify the `metadata::name` value with a name for your Persistent Volume Claim. This name is referenced on the last line of `deploymentExample.yaml` in the next step. You must also update the `metadata::namespace` value with your intended consuming pod. If you don't have an intended consuming pod, the `metadata::namespace` value is `default`.
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ ```yaml
+ kind: PersistentVolumeClaim
+ apiVersion: v1
+ metadata:
+      ### Create a name for your PVC ###
+ name: <create-a-pvc-name-here>
+ ### Use a namespace that matches your intended consuming pod, or "default" ###
+ namespace: <intended-consuming-pod-or-default-here>
+ spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 2Gi
+ storageClassName: cloud-backed-sc
+ ```
+
+1. To apply `cloudIngestPVC.yaml`, run:
+
+ ```bash
+ kubectl apply -f "cloudIngestPVC.yaml"
+ ```
+
+### Attach sub-volume to Edge Volume
+
+You can use the following process to create a sub-volume using Extension Identity to connect to your OneLake LakeHouse.
+
+1. Get the name of your Edge Volume using the following command:
+
+ ```bash
+ kubectl get edgevolumes
+ ```
+
+1. Create a file named `edgeSubvolume.yaml` and copy/paste the following contents. The following variables must be updated with your information:
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ - `metadata::name`: Create a name for your sub-volume.
+ - `spec::edgevolume`: This name was retrieved from the previous step using `kubectl get edgevolumes`.
+ - `spec::path`: Create your own subdirectory name under the mount path. Note that the following example already contains an example name (`exampleSubDir`). If you change this path name, line 33 in `deploymentExample.yaml` must be updated with the new path name. If you choose to rename the path, don't use a preceding slash.
+   - `spec::container`: Details of your OneLake Lakehouse (for example, `<WORKSPACE>/<DATA_LAKE>/Files`).
+ - `spec::storageaccountendpoint`: Your storage account endpoint is the prefix of your Power BI web link. For example, if your OneLake page is `https://contoso-motors.powerbi.com/`, then your endpoint is `https://contoso-motors.dfs.fabric.microsoft.com`.
+
+ ```yaml
+ apiVersion: "arccontainerstorage.azure.net/v1"
+ kind: EdgeSubvolume
+ metadata:
+ name: <create-a-subvolume-name-here>
+ spec:
+ edgevolume: <your-edge-volume-name-here>
+      path: exampleSubDir # If you change this path, line 33 in deploymentExample.yaml must be updated. Don't use a preceding slash.
+ auth:
+ authType: MANAGED_IDENTITY
+ storageaccountendpoint: "https://<Your AZ Site>.dfs.fabric.microsoft.com/" # Your AZ site is the root of your Power BI OneLake interface URI, such as https://contoso-motors.powerbi.com
+ container: "<WORKSPACE>/<DATA_LAKE>/Files" # Details of your One Lake Data Lake Lakehouse
+ ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration
+ ```
+
+2. To apply `edgeSubvolume.yaml`, run:
+
+ ```bash
+ kubectl apply -f "edgeSubvolume.yaml"
+ ```
+
+#### Optional: Modify the `ingestPolicy` from the default
+
+1. If you want to change the `ingestPolicy` from the default `edgeingestpolicy-default`, create a file named `myedgeingest-policy.yaml` with the following contents. The following variables must be updated with your preferences:
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ - `metadata::name`: Create a name for your `ingestPolicy`. This name must be updated and referenced in the `spec::ingestPolicy` section of your `edgeSubvolume.yaml`.
+ - `spec::ingest::order`: The order in which dirty files are uploaded. This is best effort, not a guarantee (defaults to `oldest-first`). Options for order are: `oldest-first` or `newest-first`.
+ - `spec::ingest::minDelaySec`: The minimum number of seconds before a dirty file is eligible for ingest (defaults to 60). This number can range between 0 and 31536000.
+ - `spec::eviction::order`: How files are evicted (defaults to `unordered`). Options for eviction order are: `unordered` or `never`.
+ - `spec::eviction::minDelaySec`: The number of seconds before a clean file is eligible for eviction (defaults to 300). This number can range between 0 and 31536000.
+
+ ```yaml
+ apiVersion: arccontainerstorage.azure.net/v1
+ kind: EdgeIngestPolicy
+ metadata:
+ name: <create-a-policy-name-here> # This will need to be updated and referenced in the spec::ingestPolicy section of the edgeSubvolume.yaml
+ spec:
+ ingest:
+ order: <your-ingest-order>
+ minDelaySec: <your-min-delay-sec>
+ eviction:
+ order: <your-eviction-order>
+ minDelaySec: <your-min-delay-sec>
+ ```
+
+1. To apply `myedgeingest-policy.yaml`, run:
+
+ ```bash
+ kubectl apply -f "myedgeingest-policy.yaml"
+ ```
+
+## Attach your app (Kubernetes native application)
+
+1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `deploymentExample.yaml` with the following contents. Replace the values for `containers::name` and `volumes::persistentVolumeClaim::claimName` with your own. If you updated the path name from `edgeSubvolume.yaml`, `exampleSubDir` on line 33 must be updated with your new path name.
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: cloudingestedgevol-deployment ### This must be unique for each deployment you choose to create.
+ spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ name: wyvern-testclientdeployment
+ template:
+ metadata:
+ name: wyvern-testclientdeployment
+ labels:
+ name: wyvern-testclientdeployment
+ spec:
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: app
+ operator: In
+ values:
+ - wyvern-testclientdeployment
+ topologyKey: kubernetes.io/hostname
+ containers:
+ ### Specify the container in which to launch the busy box. ###
+ - name: <create-a-container-name-here>
+ image: mcr.microsoft.com/azure-cli:2.57.0@sha256:c7c8a97f2dec87539983f9ded34cd40397986dcbed23ddbb5964a18edae9cd09
+ command:
+ - "/bin/sh"
+ - "-c"
+        - "dd if=/dev/urandom of=/data/exampleSubDir/esaingesttestfile count=16 bs=1M && while true; do ls /data > /dev/null 2>&1 || break; sleep 1; done"
+ volumeMounts:
+ ### This name must match the following volumes::name attribute ###
+ - name: wyvern-volume
+ ### This mountPath is where the PVC is attached to the pod's filesystem ###
+ mountPath: "/data"
+ volumes:
+ ### User-defined name that's used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ###
+ - name: wyvern-volume
+ persistentVolumeClaim:
+ ### This claimName must refer to your PVC metadata::name
+ claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml>
+ ```
+
+1. To apply `deploymentExample.yaml`, run:
+
+ ```bash
+ kubectl apply -f "deploymentExample.yaml"
+ ```
+
+1. Use `kubectl get pods` to find the name of your pod. Copy this name, as you need it in the next step.
+
+ > [!NOTE]
+ > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods appear using `kubectl get pods`. You can choose either pod name to use for the next step.
+
+1. Run the following command and replace `POD_NAME_HERE` with your copied value from the previous step:
+
+ ```bash
+ kubectl exec -it POD_NAME_HERE -- sh
+ ```
+
+1. Change directories into the `/data` mount path as specified in `deploymentExample.yaml`.
+
+1. You should see a directory with the name you specified as your `path` in Step 2 of the [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume) section. Now, `cd` into `YOUR_PATH_NAME_HERE`, replacing `YOUR_PATH_NAME_HERE` with your path name.
+
+1. As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`.
+
+1. In the Fabric portal, navigate to your OneLake workspace and open the Lakehouse specified in step 2 of [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume). In the **Files** section, you should find `file1.txt`. If the file hasn't yet appeared, wait approximately 1 minute; Edge Volumes waits a minute before uploading.
+
+## Next steps
+
+After you complete these steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring, or 3rd-party monitoring with Prometheus and Grafana.
+
+[Monitor Your Deployment](monitor-deployment-edge-volumes.md)
azure-arc Attach App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/attach-app.md
+
+ Title: Attach your application using the Azure IoT Operations data processor or Kubernetes native application (preview)
+description: Learn how to attach your app using the Azure IoT Operations data processor or Kubernetes native application in Azure Container Storage enabled by Azure Arc Cache Volumes.
+Last updated: 08/26/2024
+zone_pivot_groups: attach-app
+# Attach your application (preview)
+
+This article assumes you created a Persistent Volume (PV) and a Persistent Volume Claim (PVC). For information about creating a PV, see [Create a persistent volume](create-persistent-volume.md). For information about creating a PVC, see [Create a Persistent Volume Claim](create-persistent-volume-claim.md).
+
+## Configure the Azure IoT Operations data processor
+
+When you use Azure IoT Operations (AIO), the Data Processor is spawned without any mounts for Cache Volumes. You can perform the following tasks:
+
+- Add a mount for the Cache Volumes PVC you created previously.
+- Reconfigure all pipelines' output stage to output to the Cache Volumes mount you just created.
+
+## Add Cache Volumes to your aio-dp-runner-worker-0 pods
+
+These pods are part of a **statefulSet**. You can't edit the statefulSet in place to add mount points. Instead, follow this procedure:
+
+1. Dump the statefulSet to yaml:
+
+ ```bash
+ kubectl get statefulset -o yaml -n azure-iot-operations aio-dp-runner-worker > stateful_worker.yaml
+ ```
+
+1. Edit the statefulSet to include the new mounts for Cache Volumes in volumeMounts and volumes:
+
+ ```yaml
+ volumeMounts:
+ - mountPath: /etc/bluefin/config
+ name: config-volume
+ readOnly: true
+ - mountPath: /var/lib/bluefin/registry
+ name: nfs-volume
+ - mountPath: /var/lib/bluefin/local
+ name: runner-local
+ ### Add the next 2 lines ###
+ - mountPath: /mnt/esa
+ name: esa4
+
+ volumes:
+ - configMap:
+ defaultMode: 420
+ name: file-config
+ name: config-volume
+ - name: nfs-volume
+ persistentVolumeClaim:
+ claimName: nfs-provisioner
+ ### Add the next 3 lines ###
+ - name: esa4
+ persistentVolumeClaim:
+ claimName: esa4
+ ```
+
+1. Delete the existing statefulSet:
+
+ ```bash
+ kubectl delete statefulset -n azure-iot-operations aio-dp-runner-worker
+ ```
+
+ This deletes all `aio-dp-runner-worker-n` pods. This is an outage-level event.
+
+1. Create a new statefulSet of aio-dp-runner-worker(s) with the Cache Volumes mounts:
+
+ ```bash
+ kubectl apply -f stateful_worker.yaml -n azure-iot-operations
+ ```
+
+ When the `aio-dp-runner-worker-n` pods start, they include mounts to Cache Volumes. The PVC should convey this in the state.
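+
+   To confirm the recreated statefulSet includes the new volume and that the claim is in use, you can run the following checks (assuming the `esa4` claim name from the earlier example):
+
+   ```bash
+   kubectl get statefulset aio-dp-runner-worker -n azure-iot-operations -o jsonpath='{.spec.template.spec.volumes[*].name}'; echo
+   kubectl get pvc esa4 -n azure-iot-operations
+   ```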
+
+1. Once you reconfigure your Data Processor workers to have access to the Cache Volumes, you must manually update the pipeline configuration to use a local path that corresponds to the mounted location of your Cache Volume on the worker pods.
+
+ In order to modify the pipeline, use `kubectl edit pipeline <name of your pipeline>`. In that pipeline, replace your output stage with the following YAML:
+
+ ```yaml
+ output:
+ batch:
+ path: .payload
+ time: 60s
+ description: An example file output stage
+ displayName: Sample File output
+ filePath: '{{{instanceId}}}/{{{pipelineId}}}/{{{partitionId}}}/{{{YYYY}}}/{{{MM}}}/{{{DD}}}/{{{HH}}}/{{{mm}}}/{{{fileNumber}}}'
+ format:
+ type: jsonStream
+ rootDirectory: /mnt/esa
+ type: output/file@v1
+ ```
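+
+To spot-check that the output stage writes to the Cache Volume, you can exec into one of the worker pods and list the mount path; this assumes the `/mnt/esa` mount and the worker pod naming shown above:
+
+```bash
+kubectl exec -it aio-dp-runner-worker-0 -n azure-iot-operations -- ls -R /mnt/esa
+```
+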
+## Configure a Kubernetes native application
+
+1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `configPod.yaml` with the following contents:
+
+ ```yaml
+ kind: Deployment
+ apiVersion: apps/v1
+ metadata:
+ name: example-static
+ labels:
+ app: example-static
+      ### Uncomment the next line and add your namespace only if you are not using the default namespace (for example, if you are using azure-iot-operations), as specified on line 6 of your pvc.yaml. If you are not using the default namespace, all future kubectl commands require "-n YOUR_NAMESPACE" to be added to the end of your command.
+ # namespace: YOUR_NAMESPACE
+ spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: example-static
+ template:
+ metadata:
+ labels:
+ app: example-static
+ spec:
+ containers:
+ - image: mcr.microsoft.com/cbl-mariner/base/core:2.0
+ name: mariner
+ command:
+ - sleep
+ - infinity
+ volumeMounts:
+ ### This name must match the 'volumes.name' attribute in the next section. ###
+ - name: blob
+ ### This mountPath is where the PVC is attached to the pod's filesystem. ###
+ mountPath: "/mnt/blob"
+ volumes:
+ ### User-defined 'name' that's used to link the volumeMounts. This name must match 'volumeMounts.name' as specified in the previous section. ###
+ - name: blob
+ persistentVolumeClaim:
+ ### This claimName must refer to the PVC resource 'name' as defined in the PVC config. This name must match what your PVC resource was actually named. ###
+ claimName: YOUR_CLAIM_NAME_FROM_YOUR_PVC
+ ```
+
+ > [!NOTE]
+ > If you are using your own namespace, all future `kubectl` commands require `-n YOUR_NAMESPACE` to be appended to the command. For example, you must use `kubectl get pods -n YOUR_NAMESPACE` instead of the standard `kubectl get pods`.
+
+1. To apply this .yaml file, run the following command:
+
+ ```bash
+ kubectl apply -f "configPod.yaml"
+ ```
+
+1. Use `kubectl get pods` to find the name of your pod. Copy this name, as you need it for the next step.
+
+1. Run the following command and replace `POD_NAME_HERE` with your copied value from the previous step:
+
+ ```bash
+ kubectl exec -it POD_NAME_HERE -- bash
+ ```
+
+1. Change directories into the `/mnt/blob` mount path as specified from your `configPod.yaml`.
+
+1. As an example, to write a file, run `touch file.txt`.
+
+1. In the Azure portal, navigate to your storage account and find the container. This is the same container you specified in your `pv.yaml` file. When you select your container, you see `file.txt` populated within the container.
+## Next steps
+
+After you complete these steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring or third-party monitoring with Prometheus and Grafana:
+
+[Third-party monitoring](third-party-monitoring.md)
azure-arc Azure Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/azure-monitor-kubernetes.md
+
+ Title: Azure Monitor and Kubernetes monitoring (preview)
+description: Learn how to monitor your deployment using Azure Monitor and Kubernetes monitoring in Azure Container Storage enabled by Azure Arc.
+Last updated: 08/26/2024
+# Azure Monitor and Kubernetes monitoring (preview)
+
+This article describes how to monitor your deployment using Azure Monitor and Kubernetes monitoring.
+
+## Azure Monitor
+
+[Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) is a full-stack monitoring service that you can use to monitor Azure resources for their availability, performance, and operation.
+
+## Azure Monitor metrics
+
+[Azure Monitor metrics](/azure/azure-monitor/essentials/data-platform-metrics) is a feature of Azure Monitor that collects data from monitored resources into a time-series database.
+
+These metrics can originate from a number of different sources, including native platform metrics, native custom metrics via [Azure Monitor agent Application Insights](/azure/azure-monitor/insights/insights-overview), and [Azure Managed Prometheus](/azure/azure-monitor/essentials/prometheus-metrics-overview).
+
+Prometheus metrics can be stored in an [Azure Monitor workspace](/azure/azure-monitor/essentials/azure-monitor-workspace-overview) for subsequent visualization via [Azure Managed Grafana](/azure/managed-grafana/overview).
+
+### Metrics configuration
+
+To configure the scraping of Prometheus metrics data into Azure Monitor, see the [Azure Monitor managed service for Prometheus scrape configuration](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration#enable-pod-annotation-based-scraping) article, which builds upon [this configmap](https://aka.ms/azureprometheus-addon-settings-configmap). Azure Container Storage enabled by Azure Arc specifies the `prometheus.io/scrape:true` and `prometheus.io/port` values, and relies on the default of `prometheus.io/path: '/metrics'`. You must specify the Azure Container Storage enabled by Azure Arc installation namespace under `pod-annotation-based-scraping` to properly scope your metrics' ingestion.
+
+Once the Prometheus configuration has been completed, follow the [Azure Managed Grafana instructions](/azure/managed-grafana/overview) to create an [Azure Managed Grafana instance](/azure/managed-grafana/quickstart-managed-grafana-portal).
+
+## Azure Monitor logs
+
+[Azure Monitor logs](/azure/azure-monitor/logs/data-platform-logs) is a feature of Azure Monitor that collects and organizes log and performance data from monitored resources, and can be used to [analyze this data in many ways](/azure/azure-monitor/logs/data-platform-logs#what-can-you-do-with-azure-monitor-logs).
+
+### Logs configuration
+
+If you want to access log data via Azure Monitor, you must enable [Azure Monitor Container Insights](/azure/azure-monitor/containers/container-insights-overview) on your Arc-enabled Kubernetes cluster, and then analyze the collected data with [a collection of views](/azure/azure-monitor/containers/container-insights-analyze) and [workbooks](/azure/azure-monitor/containers/container-insights-reports).
+
+Additionally, you can use [Azure Monitor Log Analytics](/azure/azure-monitor/logs/log-analytics-tutorial) to query collected log data.
+
+## Next steps
+
+[Azure Container Storage enabled by Azure Arc overview](overview.md)
azure-arc Blob Index Metadata Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/blob-index-metadata-tags.md
+
+ Title: Blob index and metadata tags
+description: Learn about blob index and metadata tags in Edge Volumes.
+Last updated: 08/26/2024
+# Blob index and metadata tags
+
+Cloud Ingest Edge Volumes now supports generating blob index tags and blob metadata tags directly from Azure Container Storage enabled by Azure Arc. To do this, you add extended attributes to the files in your Cloud Ingest Edge Volume, and Edge Volumes translates them into your selected index or metadata tags.
+
+## Blob index tags
+
+To generate a blob index tag, create an extended attribute using the prefix `azindex`, followed by the desired key and its corresponding value for the index tag. Edge Volumes then propagates these values to the blob, where they appear as an index tag key/value pair.
+
+> [!NOTE]
+> Index tags are only supported for storage accounts that don't use hierarchical namespace (HNS).
+
+### Example 1: index tags
+
+The following example creates the blob index tag `location=chicagoplant2` on `logfile1`:
+
+```bash
+$ attr -s azindex.location -V chicagoplant2 logfile1
+Attribute "azindex.location" set to a 13 byte value for logfile1:
+chicagoplant2
+```
+
+### Example 2: index tags
+
+The following example creates the blob index tag `datecreated=1705523841` on `logfile2`:
+
+```bash
+$ attr -s azindex.datecreated -V $(date +%s) logfile2
+Attribute "azindex.datecreated" set to a 10 byte value for logfile2:
+1705523841
+```
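+
+To confirm which attributes are set on a file before it's uploaded, you can read them back with the same `attr` utility; for example, using `logfile1` from the first example:
+
+```bash
+attr -l logfile1                    # list the extended attribute names set on the file
+attr -g azindex.location logfile1   # print the value of a single attribute
+```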
+
+## Blob metadata tags
+
+To generate a blob metadata tag, create an extended attribute using the prefix `azmeta`, followed by the desired key and its corresponding value for the metadata tag. Edge Volumes then propagates these values to the blob, where they appear as a metadata key/value pair.
+
+> [!NOTE]
+> Metadata tags are supported for HNS and non-HNS accounts.
+
+> [!NOTE]
+> HNS blobs also receive `x-ms-meta-is_adls=true` to indicate that the blob was created with Datalake APIs.
+
+### Example 1: metadata tags
+
+The following example creates the blob metadata tag `x-ms-meta-location=chicagoplant2` on `logfile1`:
+
+```bash
+$ attr -s azmeta.location -V chicagoplant2 logfile1
+Attribute "azmeta.location" set to a 13 byte value for logfile1:
+chicagoplant2
+```
+
+### Example 2: metadata tags
+
+The following example creates the blob metadata tag `x-ms-meta-datecreated=1705523841` on `logfile2`:
+
+```bash
+$ attr -s azmeta.datecreated -V $(date +%s) logfile2
+Attribute "azmeta.datecreated" set to a 10 byte value for logfile2:
+1705523841
+```
+
+## Next steps
+
+[Azure Container Storage enabled by Azure Arc overview](overview.md)
azure-arc Cache Volumes Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/cache-volumes-overview.md
+
+ Title: Cache Volumes overview
+description: Learn about the Cache Volumes offering from Azure Container Storage enabled by Azure Arc.
+Last updated: 08/26/2024
+# Overview of Cache Volumes
+
+This article describes the Cache Volumes offering from Azure Container Storage enabled by Azure Arc.
+
+## How does Cache Volumes work?
+Cache Volumes works by performing the following operations:
+
+- **Write** - Your file is processed locally and saved in the cache. If the file doesn't change within 3 seconds, Cache Volumes automatically uploads it to your chosen blob destination.
+- **Read** - If the file is already in the cache, the file is served from the cache memory. If it isn't available in the cache, the file is pulled from your chosen blob storage target.
+
+## Next steps
+
+- [Prepare Linux](prepare-linux.md)
+- [How to install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md)
+- [Create a persistent volume](create-persistent-volume.md)
+- [Monitor your deployment](azure-monitor-kubernetes.md)
azure-arc Cloud Ingest Edge Volume Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/cloud-ingest-edge-volume-configuration.md
+
+ Title: Cloud Ingest Edge Volumes configuration
+description: Learn about Cloud Ingest Edge Volumes configuration for Edge Volumes.
+Last updated: 08/26/2024
+# Cloud Ingest Edge Volumes configuration
+
+This article describes the configuration for *Cloud Ingest Edge Volumes* (blob upload with local purge).
+
+## What is Cloud Ingest Edge Volumes?
+
+*Cloud Ingest Edge Volumes* facilitates limitless data ingestion from edge to blob, including ADLSgen2. Files written to this storage type are seamlessly transferred to blob storage and, once the upload is confirmed, are purged locally. This removal frees up space for new data. Moreover, this storage option supports data integrity in disconnected environments, enabling local storage and synchronization upon reconnection to the network.
+
+For example, you can write a file to your cloud ingest PVC, and a process runs a scan to check for new files every minute. Once identified, the file is sent for uploading to your designated blob destination. Following confirmation of a successful upload, Cloud Ingest Edge Volume waits for five minutes, and then deletes the local version of your file.
+
+## Prerequisites
+
+1. Create a storage account [following the instructions here](/azure/storage/common/storage-account-create?tabs=azure-portal).
+
+ > [!NOTE]
+ > When you create your storage account, it's recommended that you create it under the same resource group and region/location as your Kubernetes cluster.
+
+1. Create a container in the storage account that you created previously, [following the instructions here](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
+
+## Configure Extension Identity
+
+Edge Volumes allows the use of a system-assigned extension identity for access to blob storage. This section describes how to use the system-assigned extension identity to grant access to your storage account, allowing you to upload cloud ingest volumes to these storage systems.
+
+It's recommended that you use Extension Identity. If your final destination is blob storage or ADLSgen2, see the following instructions. If your final destination is OneLake, follow the instructions in [Configure OneLake for Extension Identity](alternate-onelake.md).
+
+While it's not recommended, if you prefer to use key-based authentication, follow the instructions in [Key-based authentication](alternate-key-based.md).
+
+### Obtain Extension Identity
+
+#### [Azure portal](#tab/portal)
+
+#### Azure portal
+
+1. Navigate to your Arc-connected cluster.
+1. Select **Extensions**.
+1. Select your Azure Container Storage enabled by Azure Arc extension.
+1. Note the Principal ID under **Cluster Extension Details**.
+
+#### [Azure CLI](#tab/cli)
+
+#### Azure CLI
+
+In Azure CLI, enter your values for the exports (`CLUSTER_NAME`, `RESOURCE_GROUP`) and run the following command:
+
+```bash
+export CLUSTER_NAME=<your-cluster-name-here>
+export RESOURCE_GROUP=<your-resource-group-here>
+export EXTENSION_TYPE=${1:-"microsoft.arc.containerstorage"}
+az k8s-extension list --cluster-name ${CLUSTER_NAME} --resource-group ${RESOURCE_GROUP} --cluster-type connectedClusters | jq --arg extType ${EXTENSION_TYPE} 'map(select(.extensionType == $extType)) | .[] | .identity.principalId' -r
+```
+---
+### Configure blob storage account for Extension Identity
+
+#### Add Extension Identity permissions to a storage account
+
+1. Navigate to your storage account in the Azure portal.
+1. Select **Access Control (IAM)**.
+1. Select **Add** > **Add role assignment**.
+1. Select **Storage Blob Data Owner**, then select **Next**.
+1. Select **+Select Members**.
+1. To add your principal ID to the **Selected Members:** list, paste the ID and select **+** next to the identity.
+1. Click **Select**.
+1. To review and assign permissions, select **Next**, then select **Review + Assign**.
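+
+If you prefer the Azure CLI to the portal for this role assignment, the following sketch assumes the principal ID you noted in the previous section and your own storage account name and resource group:
+
+```bash
+PRINCIPAL_ID=<principal-id-from-extension-identity>
+STORAGE_ACCOUNT_ID=$(az storage account show --name <your-storage-account> --resource-group <your-resource-group> --query id --output tsv)
+az role assignment create --assignee-object-id "$PRINCIPAL_ID" --assignee-principal-type ServicePrincipal --role "Storage Blob Data Owner" --scope "$STORAGE_ACCOUNT_ID"
+```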
+
+## Create a Cloud Ingest Persistent Volume Claim (PVC)
+
+1. Create a file named `cloudIngestPVC.yaml` with the following contents. Edit the `metadata::name` line and create a name for your Persistent Volume Claim. This name is referenced on the last line of `deploymentExample.yaml` in the next step. Also, update the `metadata::namespace` value with your intended consuming pod. If you don't have an intended consuming pod, the `metadata::namespace` value is `default`.
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ ```yaml
+ kind: PersistentVolumeClaim
+ apiVersion: v1
+ metadata:
+ ### Create a name for your PVC ###
+ name: <create-persistent-volume-claim-name-here>
+      ### Use a namespace that matches your intended consuming pod, or "default" ###
+ namespace: <intended-consuming-pod-or-default-here>
+ spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 2Gi
+ storageClassName: cloud-backed-sc
+ ```
+
+1. To apply `cloudIngestPVC.yaml`, run:
+
+ ```bash
+ kubectl apply -f "cloudIngestPVC.yaml"
+ ```
+
+## Attach sub-volume to Edge Volume
+
+To create a sub-volume using extension identity to connect to your storage account container, use the following process:
+
+1. Get the name of your Edge Volume using the following command:
+
+ ```bash
+ kubectl get edgevolumes
+ ```
+
+1. Create a file named `edgeSubvolume.yaml` and copy the following contents. These variables must be updated with your information:
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ - `metadata::name`: Create a name for your sub-volume.
+ - `spec::edgevolume`: This name was retrieved from the previous step using `kubectl get edgevolumes`.
+ - `spec::path`: Create your own subdirectory name under the mount path. Note that the following example already contains an example name (`exampleSubDir`). If you change this path name, line 33 in `deploymentExample.yaml` must be updated with the new path name. If you choose to rename the path, don't use a preceding slash.
+ - `spec::container`: The container name in your storage account.
+ - `spec::storageaccountendpoint`: Navigate to your storage account in the Azure portal. On the **Overview** page, near the top right of the screen, select **JSON View**. You can find the `storageaccountendpoint` link under **properties::primaryEndpoints::blob**. Copy the entire link (for example, `https://mytest.blob.core.windows.net/`).
+
+ ```yaml
+ apiVersion: "arccontainerstorage.azure.net/v1"
+ kind: EdgeSubvolume
+ metadata:
+ name: <create-a-subvolume-name-here>
+ spec:
+ edgevolume: <your-edge-volume-name-here>
+ path: exampleSubDir # If you change this path, line 33 in deploymentExample.yaml must be updated. Don't use a preceding slash.
+ auth:
+ authType: MANAGED_IDENTITY
+ storageaccountendpoint: "https://<STORAGE ACCOUNT NAME>.blob.core.windows.net/"
+ container: <your-blob-storage-account-container-name>
+ ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration
+ ```
+
+2. To apply `edgeSubvolume.yaml`, run:
+
+ ```bash
+ kubectl apply -f "edgeSubvolume.yaml"
+ ```
+
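+As an alternative to the portal JSON view described above for finding the value of `spec::storageaccountendpoint`, you can query it with the Azure CLI (assuming you're signed in with `az login`):
+
+```bash
+az storage account show --name <your-storage-account-name> --query primaryEndpoints.blob --output tsv
+```
+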
+### Optional: Modify the `ingestPolicy` from the default
+
+1. If you want to change the `ingestPolicy` from the default `edgeingestpolicy-default`, create a file named `myedgeingest-policy.yaml` with the following contents. The following variables must be updated with your preferences:
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ - `metadata::name`: Create a name for your **ingestPolicy**. This name must be updated and referenced in the `spec::ingestPolicy` section of your `edgeSubvolume.yaml`.
+ - `spec::ingest::order`: The order in which dirty files are uploaded. This is best effort, not a guarantee (defaults to **oldest-first**). Options for order are: **oldest-first** or **newest-first**.
+ - `spec::ingest::minDelaySec`: The minimum number of seconds before a dirty file is eligible for ingest (defaults to 60). This number can range between 0 and 31536000.
+ - `spec::eviction::order`: How files are evicted (defaults to **unordered**). Options for eviction order are: **unordered** or **never**.
+ - `spec::eviction::minDelaySec`: The number of seconds before a clean file is eligible for eviction (defaults to 300). This number can range between 0 and 31536000.
+
+ ```yaml
+ apiVersion: arccontainerstorage.azure.net/v1
+ kind: EdgeIngestPolicy
+ metadata:
+ name: <create-a-policy-name-here> # This must be updated and referenced in the spec::ingestPolicy section of the edgeSubvolume.yaml
+ spec:
+ ingest:
+ order: <your-ingest-order>
+ minDelaySec: <your-min-delay-sec>
+ eviction:
+ order: <your-eviction-order>
+ minDelaySec: <your-min-delay-sec>
+ ```
+
+1. To apply `myedgeingest-policy.yaml`, run:
+
+ ```bash
+ kubectl apply -f "myedgeingest-policy.yaml"
+ ```
+
+## Attach your app (Kubernetes native application)
+
+1. To configure a generic single pod (Kubernetes native application) against the Persistent Volume Claim (PVC), create a file named `deploymentExample.yaml` with the following contents. Modify the `containers::name` and `volumes::persistentVolumeClaim::claimName` values. If you updated the path name from `edgeSubvolume.yaml`, `exampleSubDir` on line 33 must be updated with your new path name.
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: cloudingestedgevol-deployment ### This must be unique for each deployment you choose to create.
+ spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ name: wyvern-testclientdeployment
+ template:
+ metadata:
+ name: wyvern-testclientdeployment
+ labels:
+ name: wyvern-testclientdeployment
+ spec:
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: app
+ operator: In
+ values:
+ - wyvern-testclientdeployment
+ topologyKey: kubernetes.io/hostname
+ containers:
+ ### Specify the container in which to launch the busy box. ###
+ - name: <create-a-container-name-here>
+ image: mcr.microsoft.com/azure-cli:2.57.0@sha256:c7c8a97f2dec87539983f9ded34cd40397986dcbed23ddbb5964a18edae9cd09
+ command:
+ - "/bin/sh"
+ - "-c"
+        - "dd if=/dev/urandom of=/data/exampleSubDir/esaingesttestfile count=16 bs=1M && while true; do ls /data > /dev/null 2>&1 || break; sleep 1; done"
+ volumeMounts:
+ ### This name must match the volumes::name attribute below ###
+ - name: wyvern-volume
+ ### This mountPath is where the PVC is attached to the pod's filesystem ###
+ mountPath: "/data"
+ volumes:
+ ### User-defined 'name' that's used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ###
+ - name: wyvern-volume
+ persistentVolumeClaim:
+ ### This claimName must refer to your PVC metadata::name (Line 5)
+ claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml>
+ ```
+
+1. To apply `deploymentExample.yaml`, run:
+
+ ```bash
+ kubectl apply -f "deploymentExample.yaml"
+ ```
+
+1. Use `kubectl get pods` to find the name of your pod. Copy this name to use in the next step.
+
+ > [!NOTE]
+ > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods appear using `kubectl get pods`. You can choose either pod name to use for the next step.
+
+1. Run the following command and replace `POD_NAME_HERE` with your copied value from the last step:
+
+ ```bash
+ kubectl exec -it POD_NAME_HERE -- sh
+ ```
+
+1. Change directories into the `/data` mount path as specified from your `deploymentExample.yaml`.
+
+1. You should see a directory with the name you specified as your `path` in Step 2 of the [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume) section. Change directories into `YOUR_PATH_NAME_HERE`, replacing `YOUR_PATH_NAME_HERE` with your path name.
+
+1. As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`.
+
+1. In the Azure portal, navigate to your storage account and find the container specified from Step 2 of [Attach sub-volume to Edge Volume](#attach-sub-volume-to-edge-volume). When you select your container, you should find `file1.txt` populated within the container. If the file hasn't yet appeared, wait approximately 1 minute; Edge Volumes waits a minute before uploading.
+
+## Next steps
+
+After you complete these steps, you can begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring or 3rd-party monitoring with Prometheus and Grafana.
+
+[Monitor your deployment](monitor-deployment-edge-volumes.md)
azure-arc Create Persistent Volume Claim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/create-persistent-volume-claim.md
+
+ Title: Create a Persistent Volume Claim (PVC) (preview)
+description: Learn how to create a Persistent Volume Claim (PVC) in Cache Volumes.
+Last updated: 08/26/2024
+# Create a Persistent Volume Claim (PVC) (preview)
+
+A Persistent Volume Claim (PVC) is a request for storage against the persistent volume, which you can then mount into a Kubernetes pod.
+
+The storage size requested in the PVC doesn't affect the ceiling of blob storage used in the cloud to support this local cache. Make a note of the name of this PVC, as you need it when you create your application pod.
+
+## Create PVC
+
+1. Create a file named **pvc.yaml** with the following contents:
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ ### Create a name for your PVC ###
+ name: CREATE_A_NAME_HERE
+      ### Use a namespace that matches your intended consuming pod, or "default" ###
+ namespace: INTENDED_CONSUMING_POD_OR_DEFAULT_HERE
+ spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 5Gi
+ storageClassName: esa
+ volumeMode: Filesystem
+ ### This name references your PV name in your PV config ###
+ volumeName: INSERT_YOUR_PV_NAME
+ ```
+
+ > [!NOTE]
+ > If you intend to use your PVC with the Azure IoT Operations Data Processor, use `azure-iot-operations` as the `namespace` on line 7.
+
+1. To apply this .yaml file, run:
+
+ ```bash
+ kubectl apply -f "pvc.yaml"
+ ```
+
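+After the claim is applied, you can optionally confirm that it bound to the persistent volume named in `volumeName`. This is a minimal check; the name and namespace are the placeholders you chose in **pvc.yaml**.
+
+```bash
+# STATUS should report "Bound" once the claim is matched to your PV.
+kubectl get pvc CREATE_A_NAME_HERE -n INTENDED_CONSUMING_POD_OR_DEFAULT_HERE
+```
+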
+## Next steps
+
+After you create a Persistent Volume Claim (PVC), attach your app (Azure IoT Operations Data Processor or Kubernetes Native Application):
+
+[Attach your app](attach-app.md)
azure-arc Create Persistent Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/create-persistent-volume.md
+
+ Title: Create a persistent volume (preview)
+description: Learn about creating persistent volumes in Cache Volumes.
+++ Last updated : 08/26/2024+++
+# Create a persistent volume (preview)
+
+This article describes how to create a persistent volume using storage key authentication.
+
+## Prerequisites
+
+This section describes the prerequisites for creating a persistent volume (PV).
+
+1. Create a storage account [following the instructions here](/azure/storage/common/storage-account-create?tabs=azure-portal).
+
+ > [!NOTE]
+ > When you create your storage account, create it under the same resource group as your Kubernetes cluster. It is recommended that you also create it under the same region/location as your Kubernetes cluster.
+
+1. Create a container in the storage account that you created in the previous step, [following the instructions here](/azure/storage/blobs/storage-quickstart-blobs-portal#create-a-container).
+
+## Storage key authentication configuration
+
+1. Create a file named **add-key.sh** with the following contents. No edits or changes are necessary:
+
+ ```bash
+ #!/usr/bin/env bash
+
+ while getopts g:n:s: flag
+ do
+ case "${flag}" in
+ g) RESOURCE_GROUP=${OPTARG};;
+ s) STORAGE_ACCOUNT=${OPTARG};;
+ n) NAMESPACE=${OPTARG};;
+ esac
+ done
+
+ SECRET=$(az storage account keys list -g "$RESOURCE_GROUP" -n "$STORAGE_ACCOUNT" --query "[0].value" --output tsv)
+
+ kubectl create secret generic -n "${NAMESPACE}" "${STORAGE_ACCOUNT}"-secret --from-literal=azurestorageaccountkey="${SECRET}" --from-literal=azurestorageaccountname="${STORAGE_ACCOUNT}"
+ ```
+
+1. After you create the file, make it executable and run the shell script using the following commands. Running these commands creates a secret named `{YOUR_STORAGE_ACCOUNT}-secret`. This secret name is used for the `secretName` value when you configure your PV:
+
+ ```bash
+ chmod +x add-key.sh
+ ./add-key.sh -g "$YOUR_RESOURCE_GROUP_NAME" -s "$YOUR_STORAGE_ACCOUNT_NAME" -n "$YOUR_KUBERNETES_NAMESPACE"
+ ```
+
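+Optionally, you can confirm the secret exists before you reference it from your PV. This is a minimal sketch; the placeholders match the values you passed to **add-key.sh**.
+
+```bash
+# The secret name follows the {YOUR_STORAGE_ACCOUNT}-secret pattern created by add-key.sh.
+kubectl get secret "${YOUR_STORAGE_ACCOUNT_NAME}-secret" -n "$YOUR_KUBERNETES_NAMESPACE"
+```
+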
+## Create Persistent Volume (PV)
+
+You must create a Persistent Volume (PV) for Cache Volumes to create a local instance and bind to a remote blob storage account.
+
+Make a note of the `metadata::name` value, as you must specify it in the `spec::volumeName` of the PVC that binds to it. Use the storage account and container that you created as part of the [prerequisites](#prerequisites).
+
+1. Create a file named **pv.yaml**:
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ ### Create a name here ###
+ name: CREATE_A_NAME_HERE
+ spec:
+ capacity:
+ ### This storage capacity value is not enforced at this layer. ###
+ storage: 10Gi
+ accessModes:
+ - ReadWriteMany
+ persistentVolumeReclaimPolicy: Retain
+ storageClassName: esa
+ csi:
+ driver: edgecache.csi.azure.com
+ readOnly: false
+ ### Make sure this volumeHandle is unique in the cluster. Use the same value as metadata::name, which you also specify in the spec::volumeName of the PVC. ###
+ volumeHandle: YOUR_NAME_FROM_METADATA_NAME_IN_LINE_4_HERE
+ volumeAttributes:
+ protocol: edgecache
+ edgecache-storage-auth: AccountKey
+ ### Fill in the next two/three values with your information. ###
+ secretName: YOUR_SECRET_NAME_HERE ### From the previous step, this name is "{YOUR_STORAGE_ACCOUNT}-secret" ###
+ ### If you use a non-default namespace, uncomment the following line and add your namespace. ###
+ ### secretNamespace: YOUR_NAMESPACE_HERE
+ containerName: YOUR_CONTAINER_NAME_HERE
+ ```
+
+1. To apply this .yaml file, run:
+
+ ```bash
+ kubectl apply -f "pv.yaml"
+ ```
+
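+Optionally, you can confirm the persistent volume was created before you move on. This is a minimal check; the name is the placeholder from `metadata::name` in **pv.yaml**.
+
+```bash
+# STATUS shows "Available" until a PVC binds to this volume.
+kubectl get pv CREATE_A_NAME_HERE
+```
+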
+## Next steps
+
+- [Create a persistent volume claim](create-persistent-volume-claim.md)
+- [Azure Container Storage enabled by Azure Arc overview](overview.md)
azure-arc Install Cache Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/install-cache-volumes.md
+
+ Title: Install Cache Volumes (preview)
+description: Learn how to install the Cache Volumes offering from Azure Container Storage enabled by Azure Arc.
+++ Last updated : 08/26/2024+++
+# Install Azure Container Storage enabled by Azure Arc Cache Volumes (preview)
+
+This article describes the steps to install the Azure Container Storage enabled by Azure Arc extension.
+
+## Optional: increase cache disk size
+
+Currently, the cache disk size defaults to 8 GB. If you're satisfied with the cache disk size, see the next section, [Install the Azure Container Storage enabled by Azure Arc extension](#install-the-azure-container-storage-enabled-by-azure-arc-extension).
+
+If you use Edge Essentials, require a larger cache disk size, and already created a **config.json** file, append the key and value pair (`"cachedStorageSize": "20Gi"`) to your existing **config.json**. Don't erase the previous contents of **config.json**.
+
+If you require a larger cache disk size, create **config.json** with the following contents:
+
+```json
+{
+ "cachedStorageSize": "20Gi"
+}
+```
+
+## Prepare the `azure-arc-containerstorage` namespace
+
+In this step, you prepare a namespace in Kubernetes for `azure-arc-containerstorage` and add it to your Open Service Mesh (OSM) configuration for link security. If you want to use a namespace other than `azure-arc-containerstorage`, substitute it in the `export extension_namespace`:
+
+```bash
+export extension_namespace=azure-arc-containerstorage
+kubectl create namespace "${extension_namespace}"
+kubectl label namespace "${extension_namespace}" openservicemesh.io/monitored-by=osm
+kubectl annotate namespace "${extension_namespace}" openservicemesh.io/sidecar-injection=enabled
+# Disable OSM permissive mode.
+kubectl patch meshconfig osm-mesh-config \
+ -n "arc-osm-system" \
+ -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":'"false"'}}}' \
+ --type=merge
+```
+
+## Install the Azure Container Storage enabled by Azure Arc extension
+
+Install the Azure Container Storage enabled by Azure Arc extension using the following command:
+
+> [!NOTE]
+> If you created a **config.json** file from the previous steps in [Prepare Linux](prepare-linux.md), append `--config-file "config.json"` to the following `az k8s-extension create` command. Any values set at installation time persist throughout the installation lifetime (including manual and auto-upgrades).
+
+```bash
+az k8s-extension create --resource-group "${YOUR-RESOURCE-GROUP}" --cluster-name "${YOUR-CLUSTER-NAME}" --cluster-type connectedClusters --name hydraext --extension-type microsoft.arc.containerstorage
+```
+
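+To confirm the extension installed successfully, you can query its provisioning state. This optional sketch assumes you kept the extension name `hydraext` from the previous command.
+
+```bash
+# Expected output: Succeeded
+az k8s-extension show --resource-group "${YOUR-RESOURCE-GROUP}" --cluster-name "${YOUR-CLUSTER-NAME}" --cluster-type connectedClusters --name hydraext --query "provisioningState" --output tsv
+```
+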
+## Next steps
+
+Once you complete these prerequisites, you can begin to [create a Persistent Volume (PV) with Storage Key Authentication](create-persistent-volume.md).
azure-arc Install Edge Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/install-edge-volumes.md
+
+ Title: Install Edge Volumes (preview)
+description: Learn how to install the Edge Volumes offering from Azure Container Storage enabled by Azure Arc.
+++ Last updated : 08/26/2024++
+# Install Azure Container Storage enabled by Azure Arc Edge Volumes (preview)
+
+This article describes the steps to install the Azure Container Storage enabled by Azure Arc extension.
+
+## Prepare the `azure-arc-containerstorage` namespace
+
+In this step, you prepare a namespace in Kubernetes for `azure-arc-containerstorage` and add it to your Open Service Mesh (OSM) configuration for link security. If you want to use a namespace other than `azure-arc-containerstorage`, substitute it in the `export extension_namespace`:
+
+```bash
+export extension_namespace=azure-arc-containerstorage
+kubectl create namespace "${extension_namespace}"
+kubectl label namespace "${extension_namespace}" openservicemesh.io/monitored-by=osm
+kubectl annotate namespace "${extension_namespace}" openservicemesh.io/sidecar-injection=enabled
+# Disable OSM permissive mode.
+kubectl patch meshconfig osm-mesh-config \
+ -n "arc-osm-system" \
+ -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":'"false"'}}}' \
+ --type=merge
+```
+
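+To check that the namespace is labeled for OSM and that permissive traffic mode is disabled, you can run the following optional verification. It reuses the `extension_namespace` variable from the preceding block.
+
+```bash
+kubectl get namespace "${extension_namespace}" --show-labels
+# Expected output: false
+kubectl get meshconfig osm-mesh-config -n arc-osm-system -o jsonpath='{.spec.traffic.enablePermissiveTrafficPolicyMode}'
+```
+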
+## Install the Azure Container Storage enabled by Azure Arc extension
+
+Install the Azure Container Storage enabled by Azure Arc extension using the following command:
+
+```azurecli
+az k8s-extension create --resource-group "${YOUR-RESOURCE-GROUP}" --cluster-name "${YOUR-CLUSTER-NAME}" --cluster-type connectedClusters --name azure-arc-containerstorage --extension-type microsoft.arc.containerstorage
+```
+
+> [!NOTE]
+> By default, the `--release-namespace` parameter is set to `azure-arc-containerstorage`. If you want to override this setting, add the `--release-namespace` flag to the previous command and populate it with your details, as shown in the example after these notes. Any values set at installation time persist throughout the installation lifetime (including manual and auto-upgrades).
+
+> [!IMPORTANT]
+> If you use OneLake, you must use a unique extension name for the `--name` variable in the `az k8s-extension create` command.
+
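+For example, the following sketch installs the extension into a custom release namespace. `YOUR_CUSTOM_NAMESPACE` is a placeholder and should match the namespace you prepared in the previous section instead of `azure-arc-containerstorage`.
+
+```bash
+az k8s-extension create --resource-group "${YOUR-RESOURCE-GROUP}" --cluster-name "${YOUR-CLUSTER-NAME}" --cluster-type connectedClusters --name azure-arc-containerstorage --extension-type microsoft.arc.containerstorage --release-namespace "YOUR_CUSTOM_NAMESPACE"
+```
+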
+## Configuration operator
+
+### Configuration CRD
+
+The Azure Container Storage enabled by Azure Arc extension uses a Custom Resource Definition (CRD) in Kubernetes to configure the storage service. Before you publish this CRD on your Kubernetes cluster, the Azure Container Storage enabled by Azure Arc extension is dormant and uses minimal resources. Once your CRD is applied with the configuration options, the appropriate storage classes, CSI driver, and service pods are deployed to provide services. In this way, you can customize Azure Container Storage enabled by Azure Arc to meet your needs, and it can be reconfigured without reinstalling the Arc Kubernetes extension. Common configurations are described here; however, this CRD also offers the capability to configure non-standard configurations for Kubernetes clusters with differing storage capabilities.
+
+#### [Single node or 2-node cluster](#tab/single)
+
+#### Single node or 2-node cluster with Ubuntu or Edge Essentials
+
+If you run a single node or 2-node cluster with **Ubuntu** or **Edge Essentials**, follow these instructions:
+
+1. Create a file named **edgeConfig.yaml** with the following contents:
+
+ ```yaml
+ apiVersion: arccontainerstorage.azure.net/v1
+ kind: EdgeStorageConfiguration
+ metadata:
+ name: edge-storage-configuration
+ spec:
+ defaultDiskStorageClasses:
+ - "default"
+ - "local-path"
+ serviceMesh: "osm"
+ ```
+
+1. To apply this .yaml file, run:
+
+ ```bash
+ kubectl apply -f "edgeConfig.yaml"
+ ```
+
+#### [Multi-node cluster](#tab/multi)
+
+#### Multi-node cluster with Ubuntu or Edge Essentials
+
+If you run a Kubernetes cluster with three or more nodes using **Ubuntu** or **Edge Essentials**, follow these instructions. This configuration installs the ACStor storage subsystem, which provides fault-tolerant, replicated storage for clusters with three or more nodes:
+
+1. Create a file named **edgeConfig.yaml** with the following contents:
+
+ > [!NOTE]
+ > To relocate storage to a different location on disk, update `diskMountPoint` with your desired path.
+
+ ```yaml
+ apiVersion: arccontainerstorage.azure.net/v1
+ kind: EdgeStorageConfiguration
+ metadata:
+ name: edge-storage-configuration
+ spec:
+ defaultDiskStorageClasses:
+ - acstor-arccontainerstorage-storage-pool
+ serviceMesh: "osm"
+ ---
+ apiVersion: arccontainerstorage.azure.net/v1
+ kind: ACStorConfiguration
+ metadata:
+ name: acstor-configuration
+ spec:
+ diskMountPoint: /mnt
+ diskCapacity: 10Gi
+ createStoragePool:
+ enabled: true
+ replicas: 3
+ ```
+
+1. To apply this .yaml file, run:
+
+ ```bash
+ kubectl apply -f "edgeConfig.yaml"
+ ```
+
+#### [Arc-connected AKS/AKS Arc](#tab/arc)
+
+#### Arc-connected AKS or AKS Arc
+
+If you run a single-node or multi-node cluster with **Arc-connected AKS** or **AKS enabled by Arc**, follow these instructions:
+
+1. Create a file named **edgeConfig.yaml** with the following contents:
+
+ ```yaml
+ apiVersion: arccontainerstorage.azure.net/v1
+ kind: EdgeStorageConfiguration
+ metadata:
+ name: edge-storage-configuration
+ spec:
+ defaultDiskStorageClasses:
+ - "default"
+ - "local-path"
+ serviceMesh: "osm"
+ ```
+
+1. To apply this .yaml file, run:
+
+ ```bash
+ kubectl apply -f "edgeConfig.yaml"
+ ```
+++
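+After you apply the configuration CRD, the extension deploys its storage classes, CSI driver, and service pods. The following optional check assumes the default `azure-arc-containerstorage` release namespace.
+
+```bash
+# Storage classes provided by the extension should now be listed.
+kubectl get storageclass
+# Service pods should be running in the extension namespace.
+kubectl get pods -n azure-arc-containerstorage
+```
+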
+## Next steps
+
+- [Configure your Local Shared Edge volumes](local-shared-edge-volumes.md)
+- [Configure your Cloud Ingest Edge Volumes](cloud-ingest-edge-volume-configuration.md)
azure-arc Jumpstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/jumpstart.md
+
+ Title: Azure Container Storage enabled by Azure Arc using Azure Arc Jumpstart (preview)
+description: Learn about Azure Arc Jumpstart and Azure Container Storage enabled by Azure Arc.
+++ Last updated : 08/26/2024+++
+# Azure Arc Jumpstart and Azure Container Storage enabled by Azure Arc
+
+Azure Container Storage enabled by Azure Arc partnered with [Azure Arc Jumpstart](https://azurearcjumpstart.com/) to produce both a new Arc Jumpstart scenario and Azure Arc Jumpstart Drops, furthering the capabilities of edge computing solutions. This partnership led to an innovative scenario in which a computer vision AI model detects defects in bolts from real-time video streams, with the identified defects securely stored using Azure Container Storage enabled by Azure Arc on an AKS Edge Essentials instance. This scenario showcases the powerful integration of Azure Arc with AI and edge storage technologies.
+
+Additionally, Azure Container Storage enabled by Azure Arc contributed to Azure Arc Jumpstart Drops, a curated collection of resources that simplify deployment and management for developers and IT professionals. These tools, including Kubernetes files and scripts, are designed to streamline edge storage solutions and demonstrate the practical applications of Microsoft's cutting-edge technology.
+
+## Azure Arc Jumpstart scenario using Azure Container Storage enabled by Azure Arc
+
+Azure Container Storage enabled by Azure Arc collaborated with the [Azure Arc Jumpstart](https://azurearcjumpstart.com/) team to implement a scenario in which a computer vision AI model detects defects in bolts by analyzing video from a supply line video feed streamed over Real-Time Streaming Protocol (RTSP). The identified defects are then stored in a container within a storage account using Azure Container Storage enabled by Azure Arc.
+
+In this automated setup, Azure Container Storage enabled by Azure Arc is deployed on an [AKS Edge Essentials](/azure/aks/hybrid/aks-edge-overview) single-node instance, running in an Azure virtual machine. An Azure Resource Manager template is provided to create the necessary Azure resources and configure the **LogonScript.ps1** custom script extension. This extension handles AKS Edge Essentials cluster creation, Azure Arc onboarding for the Azure VM and AKS Edge Essentials cluster, and Azure Container Storage enabled by Azure Arc deployment. Once AKS Edge Essentials is deployed, Azure Container Storage enabled by Azure Arc is installed as a Kubernetes service that exposes a CSI driven storage class for use by applications in the Edge Essentials Kubernetes cluster.
+
+For more information, see the following articles:
+
+- [Watch the Jumpstart scenario on YouTube](https://youtu.be/Qnh2UH1g6Q4).
+- [See the Jumpstart documentation](https://aka.ms/esajumpstart).
+- [See the Jumpstart architecture diagrams](https://aka.ms/arcposters).
+
+## Azure Arc Jumpstart Drops for Azure Container Storage enabled by Azure Arc
+
+Azure Container Storage enabled by Azure Arc created Jumpstart Drops as part of another collaboration with [Azure Arc Jumpstart](https://azurearcjumpstart.com/).
+
+[Jumpstart Drops](https://aka.ms/jumpstartdrops) is a curated online collection of tools, scripts, and other assets that simplify the daily tasks of developers, IT, OT, and day-2 operations professionals. Jumpstart Drops is designed to showcase the power of Microsoft's products and services and promote mutual support and knowledge sharing among community members.
+
+For more information, see the article [Create an Azure Container Storage enabled by Azure Arc instance on a Single Node Ubuntu K3s system](https://arcjumpstart.com/create_an_edge_storage_accelerator_(esa)_instance_on_a_single_node_ubuntu_k3s_system).
+
+This Jumpstart Drop provides Kubernetes files to create an Azure Container Storage enabled by Azure Arc Cache Volumes instance on a single-node Ubuntu K3s installation.
+
+## Next steps
+
+- [Azure Container Storage enabled by Azure Arc overview](overview.md)
+- [AKS Edge Essentials overview](/azure/aks/hybrid/aks-edge-overview)
azure-arc Local Shared Edge Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/local-shared-edge-volumes.md
+
+ Title: Local Shared Edge Volume configuration for Edge Volumes
+description: Learn about Local Shared Edge Volume configuration for Edge Volumes.
++++ Last updated : 08/26/2024++
+# Local Shared Edge Volumes
+
+This article describes the configuration for Local Shared Edge Volumes (highly available, durable local storage).
+
+## What is a Local Shared Edge Volume?
+
+The *Local Shared Edge Volumes* feature provides highly available, failover-capable storage, local to your Kubernetes cluster. This shared storage type remains independent of cloud infrastructure, making it ideal for scratch space, temporary storage, and locally persistent data that might be unsuitable for cloud destinations.
+
+## Create a Local Shared Edge Volumes Persistent Volume Claim (PVC) and configure a pod against the PVC
+
+1. Create a file named `localSharedPVC.yaml` with the following contents. Modify the `metadata::name` value with a name for your Persistent Volume Claim, and specify a `metadata::namespace` that matches your intended consuming pod. The `metadata::name` value is referenced on the last line of `deploymentExample.yaml` in the next step.
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ ```yaml
+ kind: PersistentVolumeClaim
+ apiVersion: v1
+ metadata:
+ ### Create a name for your PVC ###
+ name: <create-a-pvc-name-here>
+ ### Use a namespace that matches your intended consuming pod, or "default" ###
+ namespace: <intended-consuming-pod-or-default-here>
+ spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 2Gi
+ storageClassName: unbacked-sc
+ ```
+
+1. Create a file named `deploymentExample.yaml` with the following contents. Add the values for `containers::name` and `volumes::persistentVolumeClaim::claimName`:
+
+ [!INCLUDE [lowercase-note](includes/lowercase-note.md)]
+
+ ```yaml
+ apiVersion: apps/v1
+ kind: Deployment
+ metadata:
+ name: localsharededgevol-deployment ### This name must be unique for every volume you create
+ spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ name: wyvern-testclientdeployment
+ template:
+ metadata:
+ name: wyvern-testclientdeployment
+ labels:
+ name: wyvern-testclientdeployment
+ spec:
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchExpressions:
+ - key: app
+ operator: In
+ values:
+ - wyvern-testclientdeployment
+ topologyKey: kubernetes.io/hostname
+ containers:
+ ### Specify the container in which to launch the busy box. ###
+ - name: <create-a-container-name-here>
+ image: 'mcr.microsoft.com/mirror/docker/library/busybox:1.35'
+ command:
+ - "/bin/sh"
+ - "-c"
+ - "dd if=/dev/urandom of=/data/esalocalsharedtestfile count=16 bs=1M && while true; do ls /data &> || break; sleep 1; done"
+ volumeMounts:
+ ### This name must match the following volumes::name attribute ###
+ - name: wyvern-volume
+ ### This mountPath is where the PVC will be attached to the pod's filesystem ###
+ mountPath: /data
+ volumes:
+ ### User-defined name that is used to link the volumeMounts. This name must match volumeMounts::name as previously specified. ###
+ - name: wyvern-volume
+ persistentVolumeClaim:
+ ### This claimName must refer to your PVC metadata::name from localSharedPVC.yaml.
+ claimName: <your-pvc-metadata-name-from-line-5-of-pvc-yaml>
+ ```
+
+1. To apply these YAML files, run:
+
+ ```bash
+ kubectl apply -f "localSharedPVC.yaml"
+ kubectl apply -f "deploymentExample.yaml"
+ ```
+
+1. Run `kubectl get pods` to find the name of your pod. Copy this name, as it's needed in the next step.
+
+ > [!NOTE]
+ > Because `spec::replicas` from `deploymentExample.yaml` was specified as `2`, two pods appear using `kubectl get pods`. You can choose either pod name to use for the next step.
+
+1. Run the following command and replace `POD_NAME_HERE` with your copied value from the previous step:
+
+ ```bash
+ kubectl exec -it POD_NAME_HERE -- sh
+ ```
+
+1. Change directories to the `/data` mount path, as specified in `deploymentExample.yaml`.
+
+1. As an example, create a file named `file1.txt` and write to it using `echo "Hello World" > file1.txt`.
+
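+Because a Local Shared Edge Volume is shared across the deployment's replicas, you can optionally confirm that the same data is visible from the other pod. This is a minimal sketch; `OTHER_POD_NAME_HERE` is a placeholder for the replica you didn't exec into earlier.
+
+```bash
+# List both replicas created by deploymentExample.yaml.
+kubectl get pods -l name=wyvern-testclientdeployment
+# file1.txt and the esalocalsharedtestfile test file should be visible from either replica.
+kubectl exec -it OTHER_POD_NAME_HERE -- ls -l /data
+```
+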
+After you complete the previous steps, begin monitoring your deployment using Azure Monitor and Kubernetes Monitoring, or third-party monitoring with Prometheus and Grafana.
+
+## Next steps
+
+[Monitor your deployment](monitor-deployment-edge-volumes.md)
azure-arc Monitor Deployment Edge Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/monitor-deployment-edge-volumes.md
+
+ Title: Monitor your Azure Container Storage enabled by Azure Arc Edge Volumes deployment (preview)
+description: Learn how to monitor your Azure Container Storage enabled by Azure Arc Edge Volumes deployment.
+++ Last updated : 08/26/2024+++
+# Monitor your Edge Volumes deployment (preview)
+
+This article describes how to monitor your Azure Container Storage enabled by Azure Arc Edge Volumes deployment.
+
+## Deployment monitoring overviews
+
+For information about how to monitor your Edge Volumes deployment using Azure Monitor and Kubernetes Monitoring, or third-party monitoring with Prometheus and Grafana, see the following Azure Container Storage enabled by Azure Arc articles:
+
+- [Third-party monitoring with Prometheus and Grafana](third-party-monitoring.md)
+- [Azure Monitor and Kubernetes Monitoring](azure-monitor-kubernetes.md)
+
+## Next steps
+
+[Azure Container Storage enabled by Azure Arc overview](overview.md)
azure-arc Multi Node Cluster Edge Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/multi-node-cluster-edge-volumes.md
+
+ Title: Prepare Linux for Edge Volumes using a multi-node cluster (preview)
+description: Learn how to prepare Linux for Edge Volumes with a multi-node cluster using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
++++ Last updated : 08/26/2024
+zone_pivot_groups: platform-select
++
+# Prepare Linux for Edge Volumes using a multi-node cluster (preview)
+
+This article describes how to prepare Linux using a multi-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux.md#prerequisites).
+
+## Prepare Linux with AKS enabled by Azure Arc
+
+Install and configure Open Service Mesh (OSM) using the following commands:
+
+```azurecli
+az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
+```
++++
+## Prepare Linux with Ubuntu
+
+This section describes how to prepare Linux with Ubuntu if you run a multi-node cluster.
+
+First, install and configure Open Service Mesh (OSM) using the following command:
+
+```azurecli
+az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
+```
++
+## Next steps
+
+[Install Extension](install-edge-volumes.md)
azure-arc Multi Node Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/multi-node-cluster.md
+
+ Title: Prepare Linux for Cache Volumes using a multi-node cluster (preview)
+description: Learn how to prepare Linux for Cache Volumes with a multi-node cluster using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
++++ Last updated : 08/26/2024
+zone_pivot_groups: platform-select
++
+# Prepare Linux using a multi-node cluster (preview)
+
+This article describes how to prepare Linux using a multi-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux.md#prerequisites).
+
+## Prepare Linux with AKS enabled by Azure Arc
+
+Install and configure Open Service Mesh (OSM) using the following commands:
+
+```azurecli
+az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
+kubectl patch meshconfig osm-mesh-config -n "arc-osm-system" -p '{"spec":{"featureFlags":{"enableWASMStats": false }, "traffic":{"outboundPortExclusionList":[443,2379,2380], "inboundPortExclusionList":[443,2379,2380]}}}' --type=merge
+```
+++
+5. Create a file named **config.json** with the following contents:
+
+ ```json
+ {
+ "acstor.capacityProvisioner.tempDiskMountPoint": /var
+ }
+ ```
+
+ > [!NOTE]
+ > The location/path of this file is referenced later, when you install the Cache Volumes Arc extension.
++
+## Prepare Linux with Ubuntu
+
+This section describes how to prepare Linux with Ubuntu if you run a multi-node cluster.
+
+1. Install and configure Open Service Mesh (OSM) using the following command:
+
+ ```azurecli
+ az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
+ kubectl patch meshconfig osm-mesh-config -n "arc-osm-system" -p '{"spec":{"featureFlags":{"enableWASMStats": false }, "traffic":{"outboundPortExclusionList":[443,2379,2380], "inboundPortExclusionList":[443,2379,2380]}}}' --type=merge
+ ```
+++
+## Next steps
+
+[Install Azure Container Storage enabled by Azure Arc](install-cache-volumes.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/overview.md
+
+ Title: What is Azure Container Storage enabled by Azure Arc? (preview)
+description: Learn about Azure Container Storage enabled by Azure Arc.
+++ Last updated : 08/26/2024++++
+# What is Azure Container Storage enabled by Azure Arc? (preview)
+
+> [!IMPORTANT]
+> Azure Container Storage enabled by Azure Arc is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Azure Container Storage enabled by Azure Arc is a first-party storage system designed for Arc-connected Kubernetes clusters. Azure Container Storage enabled by Azure Arc can be deployed to write files to a "ReadWriteMany" persistent volume claim (PVC) where they are then transferred to Azure Blob Storage. Azure Container Storage enabled by Azure Arc offers a range of features to support Azure IoT Operations and other Arc services. Azure Container Storage enabled by Azure Arc with high availability and fault-tolerance will be fully supported and generally available (GA) in the second half of 2024.
+
+## What does Azure Container Storage enabled by Azure Arc do?
+
+Azure Container Storage enabled by Azure Arc serves as a native persistent storage system for Arc-connected Kubernetes clusters. Its primary role is to provide a reliable, fault-tolerant file system that allows data to be tiered to Azure. For Azure IoT Operations (AIO) and other Arc Services, Azure Container Storage enabled by Azure Arc is crucial in making Kubernetes clusters stateful. Key features of Azure Container Storage enabled by Azure Arc for Arc-connected K8s clusters include:
+
+- **Tolerance to node failures:** When configured as a 3 node cluster, Azure Container Storage enabled by Azure Arc replicates data between nodes (triplication) to ensure high availability and tolerance to single node failures.
+- **Data synchronization to Azure:** Azure Container Storage enabled by Azure Arc is configured with a storage target, so data written to volumes is automatically tiered to Azure Blob (block blob, ADLSgen-2 or OneLake) in the cloud.
+- **Low latency operations:** Arc services, such as AIO, can expect low latency for read and write operations.
+- **Simple connection:** Customers can easily connect to an Azure Container Storage enabled by Azure Arc volume using a CSI driver to start making Persistent Volume Claims against their storage.
+- **Flexibility in deployment:** Azure Container Storage enabled by Azure Arc can be deployed as part of AIO or as a standalone solution.
+- **Observable:** Azure Container Storage enabled by Azure Arc supports industry standard Kubernetes monitoring logs and metrics facilities, and supports Azure Monitor Agent observability.
+- **Designed with integration in mind:** Azure Container Storage enabled by Azure Arc integrates seamlessly with AIO's Data Processor to ease the shuttling of data from your edge to Azure.
+- **Platform neutrality:** Azure Container Storage enabled by Azure Arc is a Kubernetes storage system that can run on any Arc Kubernetes supported platform. Validation was done for specific platforms, including Ubuntu + CNCF K3s/K8s, Windows IoT + AKS-EE, and Azure Stack HCI + AKS-HCI.
+
+## What are the different Azure Container Storage enabled by Azure Arc offerings?
+
+The original Azure Container Storage enabled by Azure Arc offering is [*Cache Volumes*](cache-volumes-overview.md). The newest offering is [*Edge Volumes*](install-edge-volumes.md).
+
+## What are Azure Container Storage enabled by Azure Arc Edge Volumes?
+
+The first addition to the Edge Volumes offering is *Local Shared Edge Volumes*, providing highly available, failover-capable storage, local to your Kubernetes cluster. This shared storage type remains independent of cloud infrastructure, making it ideal for scratch space, temporary storage, and locally persistent data unsuitable for cloud destinations.
+
+The second new offering is *Cloud Ingest Edge Volumes*, which facilitates limitless data ingestion from edge to Blob, including ADLSgen2 and OneLake. Files written to this storage type are seamlessly transferred to Blob storage and subsequently purged from the local cache once confirmed uploaded, ensuring space availability for new data. Moreover, this storage option supports data integrity in disconnected environments, enabling local storage and synchronization upon reconnection to the network.
+
+Tailored for IoT applications, Edge Volumes not only eliminates local storage concerns and ingest limitations, but also optimizes local resource utilization and reduces storage requirements.
+
+### How does Edge Volumes work?
+
+You write to Edge Volumes as if it were your local file system. For a Local Shared Edge Volume, your data is stored and left untouched. For a Cloud Ingest Edge Volume, the volume checks for new data to mark for upload every minute, and then uploads that new data to your specified cloud destination. Five minutes after the confirmed upload to the cloud, the local copy is purged, allowing you to keep your local volume clear of old data and continue to receive new data.
+
+Get started with [Edge Volumes](prepare-linux-edge-volumes.md).
+
+### Supported Azure regions for Azure Container Storage enabled by Azure Arc
+
+Azure Container Storage enabled by Azure Arc is only available in the following Azure regions:
+
+- East US
+- East US 2
+- West US
+- West US 2
+- West US 3
+- North Europe
+- West Europe
+
+## Next steps
+
+- [Prepare Linux](prepare-linux-edge-volumes.md)
+- [How to install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md)
azure-arc Prepare Linux Edge Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/prepare-linux-edge-volumes.md
+
+ Title: Prepare Linux for Edge Volumes (preview)
+description: Learn how to prepare Linux in Azure Container Storage enabled by Azure Arc Edge Volumes using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
++++ Last updated : 08/30/2024+++
+# Prepare Linux for Edge Volumes (preview)
+
+The article describes how to prepare Linux for Edge Volumes using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
+
+> [!NOTE]
+> The minimum supported Linux kernel version is 5.1. At this time, there are known issues with 6.4 and 6.2.
+
+## Prerequisites
+
+> [!NOTE]
+> Azure Container Storage enabled by Azure Arc is only available in the following regions: East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe.
+
+### Uninstall previous instance of Azure Container Storage enabled by Azure Arc extension
+
+If you previously installed a version of Azure Container Storage enabled by Azure Arc earlier than **2.1.0-preview**, you must uninstall that previous instance in order to install the newer version. If you installed the **1.2.0-preview** release or earlier, [use these instructions](release-notes.md#if-i-installed-the-120-preview-or-any-earlier-release-how-do-i-uninstall-the-extension). Versions after **2.1.0-preview** are upgradeable and do not require this uninstall.
+
+1. To delete the old version of the extension, you must first clean up the Kubernetes resources that hold references to it. Any pending resources can delay the cleanup of the extension. There are at least two ways to clean up these resources: either use `kubectl delete <resource_type> <resource_name>`, or "unapply" the YAML files that were used to create the resources. The resources that need to be deleted are typically the pods, the PVCs they reference, and the subvolume CRD (if Cloud Ingest Edge Volume was configured). Alternatively, the following four YAML files can be passed to `kubectl delete -f` using the following commands in the specified order. Update these variables with your information:
+
+ - `YOUR_DEPLOYMENT_FILE_NAME_HERE`: Add your deployment file names. In the example in this article, the file name used was `deploymentExample.yaml`. If you created multiple deployments, each one must be deleted on a separate line.
+ - `YOUR_PVC_FILE_NAME_HERE`: Add your Persistent Volume Claim file names. In the example in this article, if you used the Cloud Ingest Edge Volume, the file name used was `cloudIngestPVC.yaml`. If you used the Local Shared Edge Volume, the file name used was `localSharedPVC.yaml`. If you created multiple PVCs, each one must be deleted on a separate line.
+ - `YOUR_EDGE_SUBVOLUME_FILE_NAME_HERE`: Add your Edge subvolume file names. In the example in this article, the file name used was `edgeSubvolume.yaml`. If you created multiple subvolumes, each one must be deleted on a separate line.
+ - `YOUR_EDGE_STORAGE_CONFIGURATION_FILE_NAME_HERE`: Add your Edge storage configuration file name here. In the example in this article, the file name used was `edgeConfig.yaml`.
+
+ ```bash
+ kubectl delete -f "<YOUR_DEPLOYMENT_FILE_NAME_HERE.yaml>"
+ kubectl delete -f "<YOUR_PVC_FILE_NAME_HERE.yaml>"
+ kubectl delete -f "<YOUR_EDGE_SUBVOLUME_FILE_NAME_HERE.yaml>"
+ kubectl delete -f "<YOUR_EDGE_STORAGE_CONFIGURATION_FILE_NAME_HERE.yaml>"
+ ```
+
+1. After you delete the files for your deployments, PVCs, Edge subvolumes, and Edge storage configuration from the previous step, you can uninstall the extension using the following command. Replace `YOUR_RESOURCE_GROUP_NAME_HERE`, `YOUR_CLUSTER_NAME_HERE`, and `YOUR_EXTENSION_NAME_HERE` with your respective information:
+
+ ```azurecli
+ az k8s-extension delete --resource-group YOUR_RESOURCE_GROUP_NAME_HERE --cluster-name YOUR_CLUSTER_NAME_HERE --cluster-type connectedClusters --name YOUR_EXTENSION_NAME_HERE
+ ```
++
+## Next steps
+
+- [Prepare Linux using a single-node cluster](single-node-cluster-edge-volumes.md)
+- [Prepare Linux using a multi-node cluster](multi-node-cluster-edge-volumes.md)
azure-arc Prepare Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/prepare-linux.md
+
+ Title: Prepare Linux (preview)
+description: Learn how to prepare Linux in Azure Container Storage enabled by Azure Arc using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
++++ Last updated : 08/26/2024+++
+# Prepare Linux (preview)
+
+The article describes how to prepare Linux using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
+
+> [!NOTE]
+> The minimum supported Linux kernel version is 5.1. At this time, there are known issues with 6.4 and 6.2.
+
+## Prerequisites
+
+> [!NOTE]
+> Azure Container Storage enabled by Azure Arc is only available in the following regions: East US, East US 2, West US, West US 2, West US 3, North Europe, West Europe.
+
+### Arc-connected Kubernetes cluster
+
+These instructions assume that you already have an Arc-connected Kubernetes cluster. To connect an existing Kubernetes cluster to Azure Arc, [see these instructions](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli).
+
+If you want to use Azure Container Storage enabled by Azure Arc with Azure IoT Operations, follow the [instructions to create a cluster for Azure IoT Operations](/azure/iot-operations/get-started/quickstart-deploy?tabs=linux).
+
+Use Ubuntu 22.04 on Standard D8s v3 machines with three SSDs attached for more storage.
+
+## Single-node and multi-node clusters
+
+A single-node cluster is commonly used for development or testing purposes due to its simplicity in setup and minimal resource requirements. These clusters offer a lightweight and straightforward environment for developers to experiment with Kubernetes without the complexity of a multi-node setup. Additionally, in situations where resources such as CPU, memory, and storage are limited, a single-node cluster is more practical. Its ease of setup and minimal resource requirements make it a suitable choice in resource-constrained environments.
+
+However, single-node clusters come with limitations, mostly in the form of missing features, including their lack of high availability, fault tolerance, scalability, and performance.
+
+A multi-node Kubernetes configuration is typically used for production, staging, or large-scale scenarios because of features such as high availability, fault tolerance, scalability, and performance. A multi-node cluster also introduces challenges and trade-offs, including complexity, overhead, cost, and efficiency considerations. For example, setting up and maintaining a multi-node cluster requires extra knowledge, skills, tools, and resources (network, storage, compute). The cluster must handle coordination and communication among nodes, leading to potential latency and errors. Additionally, running a multi-node cluster is more resource-intensive and is costlier than a single-node cluster. Optimization of resource usage among nodes is crucial for maintaining cluster and application efficiency and performance.
+
+In summary, a [single-node Kubernetes cluster](single-node-cluster.md) might be suitable for development, testing, and resource-constrained environments. A [multi-node cluster](multi-node-cluster.md) is more appropriate for production deployments, high availability, scalability, and scenarios in which distributed applications are a requirement. This choice ultimately depends on your specific needs and goals for your deployment.
+
+## Minimum hardware requirements
+
+### Single-node or 2-node cluster
+
+- Standard_D8ds_v5 VM recommended
+- Equivalent specifications per node:
+ - 4 CPUs
+ - 16 GB RAM
+
+### Multi-node cluster
+
+- Standard_D8as_v5 VM recommended
+- Equivalent specifications per node:
+ - 8 CPUs
+ - 32 GB RAM
+
+32 GB RAM serves as a buffer; however, 16 GB RAM should suffice. Edge Essentials configurations require 8 CPUs with 10 GB RAM per node, making 16 GB RAM the minimum requirement.
+
+## Minimum storage requirements
+
+### Edge Volumes requirements
+
+When you use the fault tolerant storage option, Edge Volumes allocates disk space out of a fault tolerant storage pool, which is made up of the storage exported by each node in the cluster.
+
+The storage pool is configured to use 3-way replication to ensure fault tolerance. When an Edge Volume is provisioned, it allocates disk space from the storage pool, and allocates storage on 3 of the replicas.
+
+For example, in a 3-node cluster with 20 GB of disk space per node, the cluster has a storage pool of 60 GB. However, due to replication, it has an effective storage size of 20 GB.
+
+When an Edge Volume is provisioned with a requested size of 10 GB, it allocates a reserved system volume (statically sized to 1 GB) and a data volume (sized to the requested volume size, for example 10 GB). The reserved system volume consumes 3 GB (3 x 1 GB) of disk space in the storage pool, and the data volume will consume 30 GB (3 x 10 GB) of disk space in the storage pool, for a total of 33 GB.
+
+### Cache Volumes requirements
+
+Cache Volumes requires at least 4 GB per node of storage. For example, if you have a 3-node cluster, you need at least 12 GB of storage.
+
+## Next steps
+
+To continue preparing Linux, see the following instructions for single-node or multi-node clusters:
+
+- [Single-node clusters](single-node-cluster.md)
+- [Multi-node clusters](multi-node-cluster.md)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/release-notes.md
+
+ Title: Azure Container Storage enabled by Azure Arc FAQ and release notes (preview)
+description: Learn about new features and known issues in Azure Container Storage enabled by Azure Arc.
+++ Last updated : 08/30/2024+++
+# Azure Container Storage enabled by Azure Arc FAQ and release notes (preview)
+
+This article provides information about new features and known issues in Azure Container Storage enabled by Azure Arc, and answers some frequently asked questions.
+
+## Release notes
+
+### Version 2.1.0-preview
+
+- CRD operator
+- Cloud Ingest Tunable Timers
+- Uninstall during version updates
+- Added regions: West US, West US 2, North Europe
+
+### Version 1.2.0-preview
+
+- Extension identity and OneLake support: Azure Container Storage enabled by Azure Arc now allows use of a system-assigned extension identity for access to blob storage or OneLake lake houses.
+- Security fixes: security maintenance (package/module version updates).
+
+### Version 1.1.0-preview
+
+- Kernel versions: the minimum supported Linux kernel version is 5.1. Currently there are known issues with 6.4 and 6.2.
+
+## FAQ
+
+### Uninstall previous instance of the Azure Container Storage enabled by Azure Arc extension
+
+#### If I installed the 1.2.0-preview or any earlier release, how do I uninstall the extension?
+
+If you previously installed a version of Azure Container Storage enabled by Azure Arc earlier than **2.1.0-preview**, you must uninstall that previous instance in order to install the newer version.
+
+> [!NOTE]
+> The extension name for Azure Container Storage enabled by Azure Arc was previously **Edge Storage Accelerator**. If you still have this instance installed, the extension is referred to as **microsoft.edgestorageaccelerator** in the Azure portal.
+
+1. Before you can delete the extension, you must delete your configPods, Persistent Volume Claims, and Persistent Volumes using the following commands in this order. Replace `YOUR_POD_FILE_NAME_HERE`, `YOUR_PVC_FILE_NAME_HERE`, and `YOUR_PV_FILE_NAME_HERE` with your respective file names. If you have more than one of each type, add one line per instance:
+
+ ```bash
+ kubectl delete -f "YOUR_POD_FILE_NAME_HERE.yaml"
+ kubectl delete -f "YOUR_PVC_FILE_NAME_HERE.yaml"
+ kubectl delete -f "YOUR_PV_FILE_NAME_HERE.yaml"
+ ```
+
+1. After you delete your configPods, PVCs, and PVs in the previous step, you can uninstall the extension using the following command. Replace `YOUR_RESOURCE_GROUP_NAME_HERE`, `YOUR_CLUSTER_NAME_HERE`, and `YOUR_EXTENSION_NAME_HERE` with your respective information:
+
+ ```azurecli
+ az k8s-extension delete --resource-group YOUR_RESOURCE_GROUP_NAME_HERE --cluster-name YOUR_CLUSTER_NAME_HERE --cluster-type connectedClusters --name YOUR_EXTENSION_NAME_HERE
+ ```
+
+1. If you installed the extension before the **1.1.0-preview** release (released on 4/19/24) and have a pre-existing `config.json` file, the `config.json` schema changed. Remove the old `config.json` file using `rm config.json`.
+
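+To confirm the extension was removed, you can list the remaining extensions on the cluster. This is an optional check; replace the placeholders with your details.
+
+```bash
+# The uninstalled extension should no longer appear in this list.
+az k8s-extension list --resource-group YOUR_RESOURCE_GROUP_NAME_HERE --cluster-name YOUR_CLUSTER_NAME_HERE --cluster-type connectedClusters --output table
+```
+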
+### Encryption
+
+#### What types of encryption are used by Azure Container Storage enabled by Azure Arc?
+
+There are three types of encryption that might be interesting for an Azure Container Storage enabled by Azure Arc customer:
+
+- **Cluster to Blob Encryption**: Data in transit from the cluster to blob is encrypted using standard HTTPS protocols. Data is decrypted once it reaches the cloud.
+- **Encryption Between Nodes**: This encryption is covered by Open Service Mesh (OSM) that is installed as part of setting up your Azure Container Storage enabled by Azure Arc cluster. It uses standard TLS encryption protocols.
+- **On Disk Encryption**: Encryption at rest. Not currently supported by Azure Container Storage enabled by Azure Arc.
+
+#### Is data encrypted in transit?
+
+Yes, data in transit is encrypted using standard HTTPS protocols. Data is decrypted once it reaches the cloud.
+
+#### Is data encrypted at rest?
+
+Data persisted by the Azure Container Storage enabled by Azure Arc extension is encrypted at rest if the underlying platform provides encrypted disks.
+
+### ACStor Triplication
+
+#### What is ACStor triplication?
+
+ACStor triplication stores data across three different nodes, each with its own hard drive. This intended behavior ensures data redundancy and reliability.
+
+#### Can ACStor triplication occur on a single physical device?
+
+No, ACStor triplication isn't designed to operate on a single physical device with three attached hard drives.
+
+## Next steps
+
+[Azure Container Storage enabled by Azure Arc overview](overview.md)
azure-arc Single Node Cluster Edge Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/single-node-cluster-edge-volumes.md
+
+ Title: Prepare Linux for Edge Volumes using a single-node or 2-node cluster (preview)
+description: Learn how to prepare Linux for Edge Volumes with a single-node or 2-node cluster in Azure Container Storage enabled by Azure Arc using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
++++ Last updated : 08/26/2024
+zone_pivot_groups: platform-select
++
+# Prepare Linux for Edge Volumes using a single-node or two-node cluster (preview)
+
+This article describes how to prepare Linux using a single-node or two-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux-edge-volumes.md#prerequisites).
+
+## Prepare Linux with AKS enabled by Azure Arc
+
+This section describes how to prepare Linux with AKS enabled by Azure Arc if you run a single-node or two-node cluster.
+
+1. Install Open Service Mesh (OSM) using the following command:
+
+ ```azurecli
+ az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
+ ```
++++
+## Next steps
+
+[Install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md)
azure-arc Single Node Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/single-node-cluster.md
+
+ Title: Prepare Linux for Cache Volumes using a single-node or 2-node cluster (preview)
+description: Learn how to prepare Linux for Cache Volumes with a single-node or 2-node cluster in Azure Container Storage enabled by Azure Arc using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
++++ Last updated : 08/26/2024
+zone_pivot_groups: platform-select
++
+# Prepare Linux for Cache Volumes using a single-node or 2-node cluster (preview)
+
+This article describes how to prepare Linux using a single-node or 2-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux.md#prerequisites).
+
+## Prepare Linux with AKS enabled by Azure Arc
+
+This section describes how to prepare Linux with AKS enabled by Azure Arc if you run a single-node or 2-node cluster.
+
+1. Install Open Service Mesh (OSM) using the following command:
+
+ ```azurecli
+ az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
+ ```
+
+1. Disable **ACStor** by creating a file named **config.json** with the following contents:
+
+ ```json
+ {
+ "feature.diskStorageClass": "default",
+ "acstorController.enabled": false
+ }
+ ```
+++
+5. Disable **ACStor** by creating a file named **config.json** with the following contents:
+
+ ```json
+ {
+ "acstorController.enabled": false,
+ "feature.diskStorageClass": "local-path"
+ }
+ ```
+++
+3. Disable **ACStor** by creating a file named **config.json** with the following contents:
+
+ ```json
+ {
+ "acstorController.enabled": false,
+ "feature.diskStorageClass": "local-path"
+ }
+ ```
++
+## Next steps
+
+[Install Azure Container Storage enabled by Azure Arc](install-edge-volumes.md)
azure-arc Support Feedback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/support-feedback.md
+
+ Title: Support and feedback for Azure Container Storage enabled by Azure Arc (preview)
+description: Learn how to get support and provide feedback on Azure Container Storage enabled by Azure Arc.
+++ Last updated : 08/26/2024+++
+# Support and feedback for Azure Container Storage enabled by Azure Arc (preview)
+
+If you experience an issue or need support during the preview, see the following video and steps to request support for Azure Container Storage enabled by Azure Arc in the Azure portal:
+
+> [!VIDEO f477de99-2036-41a3-979a-586a39b1854f]
+
+1. Navigate to the Arc-connected Kubernetes cluster on which you're experiencing issues with the Azure Container Storage enabled by Azure Arc extension.
+1. To expand the menu, select **Settings** on the left blade.
+1. Select **Extensions**.
+1. Select the name for **Type**: `microsoft.arc.containerstorage`. In this example, the name is `hydraext`.
+1. Select **Help** on the left blade to expand the menu.
+1. Select **Support + Troubleshooting**.
+1. In the search text box, describe the issue you are facing in a few words.
+1. Select "Go" to the right of the search text box.
+1. For **Which service you are having an issue with**, make sure that **Edge Storage Accelerator - Preview** is selected. If not, you might need to search for **Edge Storage Accelerator - Preview** in the drop-down.
+1. Select **Next** after you select **Edge Storage Accelerator - Preview**.
+1. **Subscription** should already be populated with the subscription that you used to set up your Kubernetes cluster. If not, select the subscription to which your Arc-connected Kubernetes cluster is linked.
+1. For **Resource**, select **General question** from the drop-down menu.
+1. Select **Next**.
+1. For **Problem type**, from the drop-down menu, select the problem type that best describes your issue.
+1. For **Problem subtype**, from the drop-down menu, select the subtype that best describes your issue. The subtype options vary based on your selected **Problem type**.
+1. Select **Next**.
+1. Based on the issue, there might be documentation available to help you triage your issue. If these articles are not relevant or don't solve the issue, select **Create a support request** at the top.
+1. After you select **Create a support request** at the top, the fields in the **Problem description** section should already be populated with the details that you provided earlier. If you want to change anything, you can do so in this window.
+1. Select **Next** once you verify that the information in the **Problem description** section is accurate.
+1. In the **Recommended solution** section, recommended solutions appear based on the information you entered. If the recommended solutions are not helpful, select **Next** to continue filing a support request.
+1. In the **Additional details** section, populate the **Problem details** with your information.
+1. Once all required fields are complete, select **Next**.
+1. Review your information from the previous sections, then select **Create**.
+
+## Release notes
+
+See the [release notes for Azure Container Storage enabled by Azure Arc](release-notes.md) for information about new features and known issues.
+
+## Next steps
+
+[What is Azure Container Storage enabled by Azure Arc?](overview.md)
azure-arc Third Party Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/container-storage/third-party-monitoring.md
+
+ Title: Third-party monitoring with Prometheus and Grafana (preview)
+description: Learn how to monitor your Azure Container Storage enabled by Azure Arc deployment using third-party monitoring with Prometheus and Grafana.
+++ Last updated : 08/26/2024+++
+# Third-party monitoring with Prometheus and Grafana (preview)
+
+This article describes how to monitor your deployment using third-party monitoring with Prometheus and Grafana.
+
+## Metrics
+
+### Configure an existing Prometheus instance for use with Azure Container Storage enabled by Azure Arc
+
+This guidance assumes that you previously worked with and/or configured Prometheus for Kubernetes. If you haven't previously done so, [see this overview](/azure/azure-monitor/containers/kubernetes-monitoring-enable#enable-prometheus-and-grafana) for more information about how to enable Prometheus and Grafana.
+
+[See the metrics configuration section](azure-monitor-kubernetes.md#metrics-configuration) for information about the required Prometheus scrape configuration. Once you configure Prometheus metrics, you can deploy [Grafana](/azure/azure-monitor/visualize/grafana-plugin) to monitor and visualize your Azure services and applications.
+
+## Logs
+
+The Azure Container Storage enabled by Azure Arc logs are accessible through the Azure Kubernetes Service [kubelet logs](/azure/aks/kubelet-logs). You can also collect this log data using the [syslog collection feature in Azure Monitor Container Insights](/azure/azure-monitor/containers/container-insights-syslog).
+
+## Next steps
+
+[Azure Container Storage enabled by Azure Arc overview](overview.md)
azure-arc How To Single Node K3s https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/how-to-single-node-k3s.md
- Title: Install Edge Storage Accelerator (ESA) on a single-node K3s cluster using Ubuntu or AKS Edge Essentials (preview)
-description: Learn how to create a single-node K3s cluster for Edge Storage Accelerator and install Edge Storage Accelerator on your Ubuntu or Edge Essentials environment.
---- Previously updated : 04/08/2024--
-# Install Edge Storage Accelerator on a single-node K3s cluster (preview)
-
-This article shows how to set up a single-node [K3s cluster](https://docs.k3s.io/) for Edge Storage Accelerator (ESA) using Ubuntu or [AKS Edge Essentials](/azure/aks/hybrid/aks-edge-overview), based on the instructions provided in the Edge Storage Accelerator documentation.
-
-## Prerequisites
-
-Before you begin, ensure you have the following prerequisites in place:
-- A machine capable of running K3s, meeting the minimum system requirements.
-- Basic understanding of Kubernetes concepts.
-
-Follow these steps to create a single-node K3s cluster using Ubuntu or Edge Essentials.
-
-## Step 1: Create and configure a K3s cluster on Ubuntu
-
-Follow the [Azure IoT Operations K3s installation instructions](/azure/iot-operations/get-started/quickstart-deploy?tabs=linux#connect-a-kubernetes-cluster-to-azure-arc) to install K3s on your machine.
-
-## Step 2: Prepare Linux using a single-node cluster
-
-See [Prepare Linux using a single-node cluster](single-node-cluster.md) to set up a single-node K3s cluster.
-
-## Step 3: Install Edge Storage Accelerator
-
-Follow the instructions in [Install Edge Storage Accelerator](install-edge-storage-accelerator.md) to install Edge Storage Accelerator on your single-node Ubuntu K3s cluster.
-
-## Step 4: Create Persistent Volume (PV)
-
-Create a Persistent Volume (PV) by following the steps in [Create a PV](create-pv.md).
-
-## Step 5: Create Persistent Volume Claim (PVC)
-
-To bind with the PV created in the previous step, create a Persistent Volume Claim (PVC). See [Create a PVC](create-pvc.md) for guidance.
-
-## Step 6: Attach application to Edge Storage Accelerator
-
-Follow the instructions in [Edge Storage Accelerator: Attach your app](attach-app.md) to attach your application.
-
-## Next steps
--- [K3s Documentation](https://k3s.io/)-- [Azure IoT Operations K3s installation instructions](/azure/iot-operations/get-started/quickstart-deploy?tabs=linux#connect-a-kubernetes-cluster-to-azure-arc)-- [Azure Arc documentation](/azure/azure-arc/)
azure-arc Install Edge Storage Accelerator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/install-edge-storage-accelerator.md
- Title: Install Edge Storage Accelerator (preview)
-description: Learn how to install Edge Storage Accelerator.
--- Previously updated : 03/12/2024---
-# Install Edge Storage Accelerator (preview)
-
-This article describes the steps to install Edge Storage Accelerator.
-
-## Optional: increase cache disk size
-
-Currently, the cache disk size defaults to 8 GiB. If you're satisfied with the cache disk size, move to the next section, [Install the Edge Storage Accelerator Arc Extension](#install-edge-storage-accelerator-arc-extension).
-
-If you use Edge Essentials, require a larger cache disk size, and already created a **config.json** file, append the key and value pair (`"cachedStorageSize": "20Gi"`) to your existing **config.json**. Don't erase the previous contents of **config.json**.
-
-If you require a larger cache disk size, create **config.json** with the following contents:
-
-```json
-{
- "cachedStorageSize": "20Gi"
-}
-```
-
-## Install Edge Storage Accelerator Arc extension
-
-Install the Edge Storage Accelerator Arc extension using the following command:
-
-> [!NOTE]
-> If you created a **config.json** file from the previous steps in [Prepare Linux](prepare-linux.md), append `--config-file "config.json"` to the following `az k8s-extension create` command. Any values set at installation time persist throughout the installation lifetime (inclusive of manual and auto-upgrades).
-
-```bash
-az k8s-extension create --resource-group "${YOUR-RESOURCE-GROUP}" --cluster-name "${YOUR-CLUSTER-NAME}" --cluster-type connectedClusters --name hydraext --extension-type microsoft.edgestorageaccelerator
-```
-
-## Next steps
-
-Once you complete these prerequisites, you can begin to [create a Persistent Volume (PV) with Storage Key Authentication](create-pv.md).
azure-arc Jumpstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/jumpstart.md
- Title: Azure Arc Jumpstart scenario using Edge Storage Accelerator (preview)
-description: Learn about an Azure Arc Jumpstart scenario that uses Edge Storage Accelerator.
--- Previously updated : 04/18/2024---
-# Azure Arc Jumpstart scenario using Edge Storage Accelerator
-
-Edge Storage Accelerator (ESA) collaborated with the [Azure Arc Jumpstart](https://azurearcjumpstart.com/) team to implement a scenario in which a computer vision AI model detects defects in bolts by analyzing video from a supply line video feed streamed over Real-Time Streaming Protocol (RTSP). The identified defects are then stored in a container within a storage account using Edge Storage Accelerator.
-
-## Scenario description
-
-In this automated setup, ESA is deployed on an [AKS Edge Essentials](/azure/aks/hybrid/aks-edge-overview) single-node instance, running in an Azure virtual machine. An Azure Resource Manager template is provided to create the necessary Azure resources and configure the **LogonScript.ps1** custom script extension. This extension handles AKS Edge Essentials cluster creation, Azure Arc onboarding for the Azure VM and AKS Edge Essentials cluster, and Edge Storage Accelerator deployment. Once AKS Edge Essentials is deployed, ESA is installed as a Kubernetes service that exposes a CSI driven storage class for use by applications in the Edge Essentials Kubernetes cluster.
-
-For more information, see the following articles:
--- [Watch the ESA Jumpstart scenario on YouTube](https://youtu.be/Qnh2UH1g6Q4)-- [Visit the ESA Jumpstart documentation](https://aka.ms/esajumpstart)-- [Visit the ESA Jumpstart architecture diagrams](https://aka.ms/arcposters)-
-## Next steps
--- [Edge Storage Accelerator overview](overview.md)-- [AKS Edge Essentials overview](/azure/aks/hybrid/aks-edge-overview)
azure-arc Multi Node Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/multi-node-cluster.md
- Title: Prepare Linux using a multi-node cluster (preview)
-description: Learn how to prepare Linux with a multi-node cluster in Edge Storage Accelerator using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
---- Previously updated : 04/08/2024
-zone_pivot_groups: platform-select
--
-# Prepare Linux using a multi-node cluster (preview)
-
-This article describes how to prepare Linux using a multi-node cluster, and assumes you [fulfilled the prerequisites](prepare-linux.md#prerequisites).
-
-## Prepare Linux with AKS enabled by Azure Arc
-
-Install and configure Open Service Mesh (OSM) using the following commands:
-
-```bash
-az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
-kubectl patch meshconfig osm-mesh-config -n "arc-osm-system" -p '{"spec":{"featureFlags":{"enableWASMStats": false }, "traffic":{"outboundPortExclusionList":[443,2379,2380], "inboundPortExclusionList":[443,2379,2380]}}}' --type=merge
-```
--
-## Prepare Linux with AKS Edge Essentials
-
-This section describes how to prepare Linux with AKS Edge Essentials if you run a multi-node cluster.
-
-1. On each node in your cluster, set the number of **HugePages** to 512 using the following command:
-
- ```bash
- Invoke-AksEdgeNodeCommand -NodeType "Linux" -Command 'echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages'
- Invoke-AksEdgeNodeCommand -NodeType "Linux" -Command 'echo "vm.nr_hugepages=512" | sudo tee /etc/sysctl.d/99-hugepages.conf'
- ```
-
-1. On each node in your cluster, install the specific kernel using:
-
- ```bash
- Invoke-AksEdgeNodeCommand -NodeType "Linux" -Command 'sudo apt install linux-modules-extra-`uname -r`'
- ```
-
- > [!NOTE]
- > The minimum supported version is 5.1. At this time, there are known issues with 6.4 and 6.2.
-
-1. On each node in your cluster, increase the maximum number of files using the following command:
-
- ```bash
- Invoke-AksEdgeNodeCommand -NodeType "Linux" -Command 'echo -e "LimitNOFILE=1048576" | sudo tee -a /etc/systemd/system/containerd.service.d/override.conf'
- ```
-
-1. Install and configure Open Service Mesh (OSM) using the following commands:
-
- ```bash
- az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
- kubectl patch meshconfig osm-mesh-config -n "arc-osm-system" -p '{"spec":{"featureFlags":{"enableWASMStats": false }, "traffic":{"outboundPortExclusionList":[443,2379,2380], "inboundPortExclusionList":[443,2379,2380]}}}' --type=merge
- ```
-
-1. Create a file named **config.json** with the following contents:
-
- ```json
- {
- "acstor.capacityProvisioner.tempDiskMountPoint": /var
- }
- ```
-
- > [!NOTE]
- > The location/path of this file is referenced later, when installing the Edge Storage Accelerator Arc extension.
--
-## Prepare Linux with Ubuntu
-
-This section describes how to prepare Linux with Ubuntu if you run a multi-node cluster.
-
-1. Install and configure Open Service Mesh (OSM) using the following command:
-
- ```bash
- az k8s-extension create --resource-group "YOUR_RESOURCE_GROUP_NAME" --cluster-name "YOUR_CLUSTER_NAME" --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
- kubectl patch meshconfig osm-mesh-config -n "arc-osm-system" -p '{"spec":{"featureFlags":{"enableWASMStats": false }, "traffic":{"outboundPortExclusionList":[443,2379,2380], "inboundPortExclusionList":[443,2379,2380]}}}' --type=merge
- ```
-
-1. Run the following command to determine if you set `fs.inotify.max_user_instances` to 1024:
-
- ```bash
- sysctl fs.inotify.max_user_instances
- ```
-
- After you run this command, if it outputs less than 1024, run the following command to increase the maximum number of files and reload the **sysctl** settings:
-
- ```bash
- echo 'fs.inotify.max_user_instances = 1024' | sudo tee -a /etc/sysctl.conf
- sudo sysctl -p
- ```
-
-1. Install the specific kernel using:
-
- ```bash
- sudo apt install linux-modules-extra-`uname -r`
- ```
-
- > [!NOTE]
- > The minimum supported version is 5.1. At this time, there are known issues with 6.4 and 6.2.
-
-1. On each node in your cluster, set the number of **HugePages** to 512 using the following command:
-
- ```bash
- HUGEPAGES_NR=512
- echo $HUGEPAGES_NR | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
- echo "vm.nr_hugepages=$HUGEPAGES_NR" | sudo tee /etc/sysctl.d/99-hugepages.conf
- ```
--
-## Next steps
-
-[Install Edge Storage Accelerator](install-edge-storage-accelerator.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/overview.md
- Title: What is Edge Storage Accelerator? (preview)
-description: Learn about Edge Storage Accelerator.
--- Previously updated : 04/08/2024---
-# What is Edge Storage Accelerator? (preview)
-
-> [!IMPORTANT]
-> Edge Storage Accelerator is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> For access to the preview, you can [complete this questionnaire](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR19S7i8RsvNAg8hqZuHbEyxUNTEzN1lDT0s3SElLTDc5NlEzQTE2VVdKNi4u) with details about your environment and use case. Once you submit your responses, one of the ESA team members will get back to you with an update on your request.
-
-Edge Storage Accelerator (ESA) is a first-party storage system designed for Arc-connected Kubernetes clusters. ESA can be deployed to write files to a "ReadWriteMany" persistent volume claim (PVC) where they are then transferred to Azure Blob Storage. ESA offers a range of features to support Azure IoT Operations and other Arc Services. ESA with high availability and fault-tolerance will be fully supported and generally available (GA) in the second half of 2024.
-
-## What does Edge Storage Accelerator do?
-
-Edge Storage Accelerator (ESA) serves as a native persistent storage system for Arc-connected Kubernetes clusters. Its primary role is to provide a reliable, fault-tolerant file system that allows data to be tiered to Azure. For Azure IoT Operations (AIO) and other Arc Services, ESA is crucial in making Kubernetes clusters stateful. Key features of ESA for Arc-connected K8s clusters include:
--- **Tolerance to Node Failures:** When configured as a 3 node cluster, ESA replicates data between nodes (triplication) to ensure high availability and tolerance to single node failures.-- **Data Synchronization to Azure:** ESA is configured with a storage target, so data written to ESA volumes is automatically tiered to Azure Blob (block blob, ADLSgen-2 or OneLake) in the cloud.-- **Low Latency Operations:** Arc services, such as AIO, can expect low latency for read and write operations.-- **Simple Connection:** Customers can easily connect to an ESA volume using a CSI driver to start making Persistent Volume Claims against their storage.-- **Flexibility in Deployment:** ESA can be deployed as part of AIO or as a standalone solution.-- **Observable:** ESA supports industry standard Kubernetes monitoring logs and metrics facilities, and supports Azure Monitor Agent observability.-- **Designed with Integration in Mind:** ESA integrates seamlessly with AIO's Data Processor to ease the shuttling of data from your edge to Azure. -- **Platform Neutrality:** ESA is a Kubernetes storage system that can run on any Arc Kubernetes supported platform. Validation was done for specific platforms, including Ubuntu + CNCF K3s/K8s, Windows IoT + AKS-EE, and Azure Stack HCI + AKS-HCI.-
-## How does Edge Storage Accelerator work?
--- **Write** - Your file is processed locally and saved in the cache. When the file doesn't change within 3 seconds, ESA automatically uploads it to your chosen blob destination.-- **Read** - If the file is already in the cache, the file is served from the cache memory. If it isn't available in the cache, the file is pulled from your chosen blob storage target.-
-## Supported Azure Regions
-
-Edge Storage Accelerator is only available in the following Azure regions:
--- East US-- East US 2-- West US 3-- West Europe-
-## Next steps
--- [Prepare Linux](prepare-linux.md)-- [How to install Edge Storage Accelerator](install-edge-storage-accelerator.md)-- [Create a persistent volume](create-pv.md)-- [Monitor your deployment](azure-monitor-kubernetes.md)
azure-arc Prepare Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/prepare-linux.md
- Title: Prepare Linux (preview)
-description: Learn how to prepare Linux in Edge Storage Accelerator using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
---- Previously updated : 04/08/2024--
-# Prepare Linux (preview)
-
-The article describes how to prepare Linux using AKS enabled by Azure Arc, Edge Essentials, or Ubuntu.
-
-> [!NOTE]
-> The minimum supported Linux kernel version is 5.1. At this time, there are known issues with 6.4 and 6.2.
-
-## Prerequisites
-
-> [!NOTE]
-> Edge Storage Accelerator is only available in the following regions: East US, East US 2, West US 3, West Europe.
-
-### Arc-connected Kubernetes cluster
-
-These instructions assume that you already have an Arc-connected Kubernetes cluster. To connect an existing Kubernetes cluster to Azure Arc, [see these instructions](/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli).
-
-If you want to use Edge Storage Accelerator with Azure IoT Operations, follow the [instructions to create a cluster for Azure IoT Operations](/azure/iot-operations/get-started/quickstart-deploy?tabs=linux).
-
-Use Ubuntu 22.04 on Standard D8s v3 machines with three SSDs attached for additional storage.
-
-## Single-node and multi-node clusters
-
-A single-node cluster is commonly used for development or testing purposes due to its simplicity in setup and minimal resource requirements. These clusters offer a lightweight and straightforward environment for developers to experiment with Kubernetes without the complexity of a multi-node setup. Additionally, in situations where resources such as CPU, memory, and storage are limited, a single-node cluster is more practical. Its ease of setup and minimal resource requirements make it a suitable choice in resource-constrained environments.
-
-However, single-node clusters come with limitations, mostly in the form of missing features, including their lack of high availability, fault tolerance, scalability, and performance.
-
-A multi-node Kubernetes configuration is typically used for production, staging, or large-scale scenarios because of its advantages, including high availability, fault tolerance, scalability, and performance. A multi-node cluster also introduces challenges and trade-offs, including complexity, overhead, cost, and efficiency considerations. For example, setting up and maintaining a multi-node cluster requires additional knowledge, skills, tools, and resources (network, storage, compute). The cluster must handle coordination and communication among nodes, leading to potential latency and errors. Additionally, running a multi-node cluster is more resource-intensive and is costlier than a single-node cluster. Optimization of resource usage among nodes is crucial for maintaining cluster and application efficiency and performance.
-
-In summary, a [single-node Kubernetes cluster](single-node-cluster.md) might be suitable for development, testing, and resource-constrained environments, while a [multi-node cluster](multi-node-cluster.md) is more appropriate for production deployments, high availability, scalability, and scenarios where distributed applications are a requirement. This choice ultimately depends on your specific needs and goals for your deployment.
-
-## Minimum hardware requirements
-
-### Single-node or 2-node cluster
--- Standard_D8ds_v4 VM recommended-- Equivalent specifications per node:
- - 4 CPUs
- - 16GB RAM
-
-### Multi-node cluster
--- Standard_D8as_v4 VM recommended-- Equivalent specifications per node:
- - 8 CPUs
- - 32GB RAM
-
-32GB RAM serves as a buffer; however, 16GB RAM should suffice. Edge Essentials configurations require 8 CPUs with 10GB RAM per node, making 16GB RAM the minimum requirement.
-
-## Next steps
-
-To continue preparing Linux, see the following instructions for single-node or multi-node clusters:
--- [Single-node clusters](single-node-cluster.md)-- [Multi-node clusters](multi-node-cluster.md)
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/edge-storage-accelerator/release-notes.md
- Title: Edge Storage Accelerator release notes (preview)
-description: Learn about new features and known issues in Edge Storage Accelerator.
--- Previously updated : 04/08/2024---
-# Edge Storage Accelerator release notes (preview)
-
-This article provides information about new features and known issues in Edge Storage Accelerator.
-
-## Version 1.1.0-preview
--- Kernel versions: the minimum supported Linux kernel version is 5.1. Currently there are known issues with 6.4 and 6.2.-
-## Version 1.2.0-preview
--- Extension identity and OneLake support: ESA now allows use of a system-assigned extension identity for access to blob storage or OneLake lake houses.-- Security fixes: security maintenance (package/module version updates).-
-## Next steps
-
-[Edge Storage Accelerator overview](overview.md)
azure-functions Flex Consumption How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-how-to.md
Title: Create and manage function apps in a Flex Consumption plan description: "Learn how to create function apps hosted in the Flex Consumption plan in Azure Functions and how to modify specific settings for an existing function app." Previously updated : 05/21/2024 Last updated : 08/21/2024 zone_pivot_groups: programming-languages-set-functions
az functionapp scale config always-ready set --resource-group <RESOURCE_GROUP> -
To remove always ready instances, use the [`az functionapp scale config always-ready delete`](/cli/azure/functionapp/scale/config/always-ready#az-functionapp-scale-config-always-ready-delete) command, as in this example that removes all always ready instances from both the HTTP triggers group and also a function named `hello_world`: ```azurecli
-az functionapp scale config always-ready delete --resource-group <RESOURCE_GROUP> --name <APP_NAME> --setting-names http hello_world
+az functionapp scale config always-ready delete --resource-group <RESOURCE_GROUP> --name <APP_NAME> --setting-names http function:hello_world
``` ### [Azure portal](#tab/azure-portal)
azure-functions Flex Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-plan.md
Title: Azure Functions Flex Consumption plan hosting
description: Running your function code in the Azure Functions Flex Consumption plan provides virtual network integration, dynamic scale (to zero), and reduced cold starts. Previously updated : 07/26/2024 Last updated : 08/22/2024 # Customer intent: As a developer, I want to understand the benefits of using the Flex Consumption plan so I can get the scalability benefits of Azure Functions without having to pay for resources I don't need.
Concurrency is a key factor that determines how Flex Consumption function apps s
This _per-function scaling_ behavior is a part of the hosting platform, so you don't need to configure your app or change the code. For more information, see [Per-function scaling](event-driven-scaling.md#per-function-scaling) in the Event-driven scaling article.
-In per function scaling, HTTP, Blob (Event Grid), and Durable triggers are special cases. All HTTP triggered functions in the app are grouped and scale together in the same instances, and all Durable triggered functions (Orchestration, Activity, or Entity triggers) are grouped and scale together in the same instances, and all Blob (Event Grid) functions are grouped and scale together in the same instances. All other functions in the app are scaled individually into their own instances.
+In per-function scaling, scale decisions for certain trigger types are made at the level of a scale group rather than for each function individually. This table shows the defined set of function scale groups:
+
+| Scale groups | Triggers in group | Settings value |
+| - | - | |
+| HTTP triggers |[HTTP trigger](functions-bindings-http-webhook-trigger.md)<br/>[SignalR trigger](functions-bindings-signalr-service-trigger.md) | `http` |
+| Blob storage triggers<br/>(Event Grid-based) | [Blob storage trigger](functions-bindings-storage-blob-trigger.md) | `blob`|
+| Durable Functions | [Orchestration trigger](./durable/durable-functions-bindings.md#orchestration-trigger)<br/>[Activity trigger](./durable/durable-functions-bindings.md#activity-trigger)<br/>[Entity trigger](./durable/durable-functions-bindings.md#entity-trigger) | `durable` |
+
+All other functions in the app are scaled individually in their own set of instances, which are referenced using the convention `function:<NAMED_FUNCTION>`.
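+
+As a hedged illustration of how these settings values are used, the following Azure CLI sketch sets always ready instance counts for the HTTP triggers group and for an individual, non-grouped function. The resource group, app name, and function name are placeholders, and you should verify the exact parameter syntax in the Flex Consumption how-to article.
+
+```azurecli
+# Sketch: "http" targets the HTTP triggers scale group; "function:hello_world"
+# targets a single function that isn't part of a scale group.
+az functionapp scale config always-ready set \
+    --resource-group <RESOURCE_GROUP> \
+    --name <APP_NAME> \
+    --settings http=2 function:hello_world=1
+```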
## Always ready instances
In Flex Consumption, many of the standard application settings and site configur
Keep these other considerations in mind when using Flex Consumption plan during the current preview: + **VNet Integration** Ensure that the `Microsoft.App` Azure resource provider is enabled for your subscription by [following these instructions](/azure/azure-resource-manager/management/resource-providers-and-types#register-resource-provider). The subnet delegation required by Flex Consumption apps is `Microsoft.App/environments`.
-+ **Triggers**: All triggers are fully supported except for Kafka, Azure SQL, and SignalR triggers. The Blob storage trigger only supports the [Event Grid source](./functions-event-grid-blob-trigger.md). Non-C# function apps must use version `[4.0.0, 5.0.0)` of the [extension bundle](./functions-bindings-register.md#extension-bundles), or a later version.
++ **Triggers**: All triggers are fully supported except for Kafka and Azure SQL triggers. The Blob storage trigger only supports the [Event Grid source](./functions-event-grid-blob-trigger.md). Non-C# function apps must use version `[4.0.0, 5.0.0)` of the [extension bundle](./functions-bindings-register.md#extension-bundles), or a later version. + **Regions**: + Not all regions are currently supported. To learn more, see [View currently supported regions](flex-consumption-how-to.md#view-currently-supported-regions). + There is a temporary limitation where App Service quota limits for creating new apps are also being applied to Flex Consumption apps. If you see the following error "This region has quota of 0 instances for your subscription. Try selecting different region or SKU." please raise a support ticket so that your app creation can be unblocked.
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
Title: Automate function app resource deployment to Azure
description: Learn how to build, validate, and use a Bicep file or an Azure Resource Manager template to deploy your function app and related Azure resources. ms.assetid: d20743e3-aab6-442c-a836-9bcea09bfd32 Previously updated : 07/16/2024 Last updated : 08/22/2024 zone_pivot_groups: functions-hosting-plan
In a Flex Consumption plan, you configure your function app in Azure with two ty
| Application configuration | `functionAppConfig` | | Application settings | `siteConfig.appSettings` collection |
-These configurations are maintained in `functionAppConfig`:
+These application configurations are maintained in `functionAppConfig`:
| Behavior | Setting in `functionAppConfig`| | | |
+| [Always ready instances](flex-consumption-plan.md#always-ready-instances) | `scaleAndConcurrency.alwaysReady` |
+| [Deployment source](#deployment-sources) | `deployment` |
+| [Instance memory size](flex-consumption-plan.md#instance-memory) | `scaleAndConcurrency.instanceMemoryMB` |
+| [HTTP trigger concurrency](functions-concurrency.md#http-trigger-concurrency) | `scaleAndConcurrency.triggers.http.perInstanceConcurrency` |
| [Language runtime](functions-app-settings.md#functions_worker_runtime) | `runtime.name` | | [Language version](supported-languages.md) | `runtime.version` | | [Maximum instance count](event-driven-scaling.md#flex-consumption-plan) | `scaleAndConcurrency.maximumInstanceCount` |
-| [Instance memory size](flex-consumption-plan.md#instance-memory) | `scaleAndConcurrency.instanceMemoryMB` |
-| [Deployment source](#deployment-sources) | `deployment` |
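+
+To make the mapping in this table concrete, here's a minimal Bicep sketch of a `functionAppConfig` block that uses the settings named above. Values are placeholders, the `deployment` block is omitted, and you should treat this as an illustrative fragment rather than a complete, validated template.
+
+```bicep
+// Sketch only: shows where the settings from the preceding table live.
+functionAppConfig: {
+  runtime: {
+    name: 'dotnet-isolated'
+    version: '8.0'
+  }
+  scaleAndConcurrency: {
+    maximumInstanceCount: 100
+    instanceMemoryMB: 2048
+    alwaysReady: [
+      {
+        name: 'http'
+        instanceCount: 2
+      }
+    ]
+    triggers: {
+      http: {
+        perInstanceConcurrency: 16
+      }
+    }
+  }
+}
+```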
The Flex Consumption plan also supports these application settings:
These application settings are required for container deployments:
::: zone-end Keep these considerations in mind when working with site and application settings using Bicep files or ARM templates:
- ::: zone pivot="consumption-plan,premium-plan,dedicated-plan"
++ The optional `alwaysReady` setting contains an array of one or more `{name,instanceCount}` objects, with one for each [per-function scale group](flex-consumption-plan.md#per-function-scaling). These are the scale groups being used to make always-ready scale decisions. This example sets always-ready counts for both the `http` group and a single function named `helloworld`, which is of a non-grouped trigger type:
+ ### [Bicep](#tab/bicep)
+ ```bicep
+ alwaysReady: [
+ {
+ name: 'http'
+ instanceCount: 2
+ }
+ {
+ name: 'function:helloworld'
+ instanceCount: 1
+ }
+ ]
+ ```
+ ### [ARM template](#tab/json)
+ ```json
+ "alwaysReady": [
+ {
+ "name": "http",
+ "instanceCount": 2
+ },
+ {
+ "name": "function:helloworld",
+ "instanceCount": 1
+ }
+ ]
+ ```
+ There are important considerations for when you should set `WEBSITE_CONTENTSHARE` in an automated deployment. For detailed guidance, see the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) reference. ::: zone-end ::: zone pivot="container-apps,azure-arc,premium-plan,dedicated-plan"
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
This article describes how to enable and configure OpenTelemetry-based data collection within [Application Insights](app-insights-overview.md#application-insights-overview). The Azure Monitor OpenTelemetry Distro:
-* Provides an [OpenTelemetry distribution](https://opentelemetry.io/docs/concepts/distributions/#what-is-a-distribution) which includes support for features specific to Azure Monitor,
-* Enables [automatic](opentelemetry-add-modify.md#automatic-data-collection) telemetry by including OpenTelemetry instrumentation libraries for collecting traces, metrics, logs, and exceptions,
-* Allows collecting [custom](opentelemetry-add-modify.md#collect-custom-telemetry) telemetry, and
+* Provides an [OpenTelemetry distribution](https://opentelemetry.io/docs/concepts/distributions/#what-is-a-distribution) which includes support for features specific to Azure Monitor.
+* Enables [automatic](opentelemetry-add-modify.md#automatic-data-collection) telemetry by including OpenTelemetry instrumentation libraries for collecting traces, metrics, logs, and exceptions.
+* Allows collecting [custom](opentelemetry-add-modify.md#collect-custom-telemetry) telemetry.
* Supports [Live Metrics](live-stream.md) to monitor and collect more telemetry from live, in-production web applications. For more information about the advantages of using the Azure Monitor OpenTelemetry Distro, see [Why should I use the Azure Monitor OpenTelemetry Distro](#why-should-i-use-the-azure-monitor-opentelemetry-distro).
backup Quick Backup Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-portal.md
Title: Quickstart - Back up a VM with the Azure portal by using Azure Backup description: In this Quickstart, learn how to create a Recovery Services vault, enable protection on an Azure VM, and back up the VM, with the Azure portal. Previously updated : 02/26/2024 Last updated : 09/02/2024 ms.devlang: azurecli
Sign in to the [Azure portal](https://portal.azure.com).
[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]
+> [!IMPORTANT]
+> If you want to protect [Azure Files](azure-file-share-backup-overview.md), after you create the vault, [configure backup for Azure Files, and then initiate an on-demand backup](backup-azure-files.md). Learn more about the [best practices for Azure Files backup](backup-azure-files.md?tabs=backup-center#best-practices).
+ ## Apply a backup policy To apply a backup policy to your Azure VMs, follow these steps:
dev-box How To Configure Multiple Monitors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-multiple-monitors.md
+
+ Title: Configure multiple monitors for your dev box
+
+description: Learn how to configure multiple monitors in remote desktop clients, so you can use multiple monitors when connecting to a dev box.
++++ Last updated : 08/30/2024++
+#Customer intent: As a dev box user, I want to use multiple monitors when connecting to my dev box so that I can have more screen real estate to work with.
++
+# Use multiple monitors on a dev box
+
+In this article, you configure a remote desktop client to use two or more monitors when you connect to your dev box. Using multiple monitors gives you more screen real estate to work with. You can spread your work across multiple screens, or use one screen for your development environment and another for documentation, email, or messaging.
+
+When you connect to your cloud-hosted developer machine in Microsoft Dev Box by using a remote desktop client, you can take advantage of a multi-monitor setup. The following table lists remote desktop clients that support multiple monitors and provides links to the instructions for configuring multiple monitors in each client.
+
+| Client | Multiple monitor support | Configure multiple monitors |
+|--|:-:|--|
+| Windows App | <sub>:::image type="icon" source="./media/how-to-configure-multiple-monitors/yes.svg" border="false":::</sub> | [Configure display settings in Windows App](/windows-app/display-settings?tabs=windows) |
+| Microsoft Remote Desktop client| <sub>:::image type="icon" source="./media/how-to-configure-multiple-monitors/yes.svg" border="false":::</sub> | [Microsoft Remote Desktop client](/azure/dev-box/how-to-configure-multiple-monitors?tabs=windows-client#configure-remote-desktop-to-use-multiple-monitors) |
+| Microsoft Store Remote Desktop client | <sub>:::image type="icon" source="./media/how-to-configure-multiple-monitors/no.svg" border="false":::</sub> | Does not support multiple monitors |
+| Remote Desktop Connection (MSTSC) | <sub>:::image type="icon" source="./media/how-to-configure-multiple-monitors/yes.svg" border="false":::</sub> | [Microsoft Remote Desktop Connection](/azure/dev-box/how-to-configure-multiple-monitors?tabs=windows-connection#configure-remote-desktop-to-use-multiple-monitors) |
+| Microsoft Remote Desktop for macOS | <sub>:::image type="icon" source="./media/how-to-configure-multiple-monitors/yes.svg" border="false":::</sub> | [Microsoft Remote Desktop for macOS](/azure/dev-box/how-to-configure-multiple-monitors?tabs=macOS#configure-remote-desktop-to-use-multiple-monitors) |
+
+## Prerequisites
+
+To complete the steps in this article, you must install the appropriate Remote Desktop client on your local machine.
+
+## Configure Remote Desktop to use multiple monitors
+
+Use the following steps to configure Remote Desktop to use multiple monitors.
+
+# [Microsoft Remote Desktop client](#tab/windows-client)
+
+1. Open the Remote Desktop client.
+
+ :::image type="content" source="./media/how-to-configure-multiple-monitors/remote-desktop-app.png" alt-text="Screenshot of the Windows 11 start menu with Remote desktop showing and open highlighted.":::
+
+1. Right-click the dev box you want to configure, and then select **Settings**.
+
+1. On the settings pane, turn off **Use default settings**.
+
+ :::image type="content" source="media/how-to-configure-multiple-monitors/turn-off-default-settings.png" alt-text="Screenshot showing the Use default settings slider.":::
+
+1. In **Display Settings**, in the **Display configuration** list, select the displays to use and configure the options:
+
+ | Value | Description | Options |
+ ||||
+ | All displays | Remote desktop uses all available displays. | - Use only a single display when in windowed mode. <br> - Fit the remote session to the window. |
+ | Single display | Remote desktop uses a single display. | - Start the session in full screen mode. <br> - Fit the remote session to the window. <br> - Update the resolution when the window is resized. |
+ | Select displays | Remote Desktop uses only the monitors you select. | - Maximize the session to the current displays. <br> - Use only a single display when in windowed mode. <br> - Fit the remote connection session to the window. |
+
+ :::image type="content" source="media/how-to-configure-multiple-monitors/remote-desktop-select-display.png" alt-text="Screenshot showing the Remote Desktop display settings, highlighting the option to select the number of displays.":::
+
+1. Close the settings pane, and then select your dev box to begin the Remote Desktop session.
+
+# [Microsoft Remote Desktop Connection](#tab/windows-connection)
+
+1. Open a Remote Desktop Connection.
+
+ :::image type="content" source="media/how-to-configure-multiple-monitors/remote-desktop-connection-open.png" alt-text="Screenshot of the Start menu showing the Remote Desktop Connection." lightbox="media/how-to-configure-multiple-monitors/remote-desktop-connection-open.png":::
+
+1. In **Computer**, enter the name of your dev box, then select **Show Options**.
+
+ :::image type="content" source="media/how-to-configure-multiple-monitors/remote-desktop-connection-show-options.png" alt-text="Screenshot of the Remote Desktop Connection dialog box with Show options highlighted." lightbox="media/how-to-configure-multiple-monitors/remote-desktop-connection-show-options.png":::
+
+1. On the **Display** tab, select **Use all my monitors for the remote session**.
+
+ :::image type="content" source="media/how-to-configure-multiple-monitors/remote-desktop-connection-all-monitors.png" alt-text="Screenshot of the Remote Desktop Connection Display tab and Use all my monitors for the current session highlighted." lightbox="media/how-to-configure-multiple-monitors/remote-desktop-connection-all-monitors.png":::
+
+1. Select **Connect** to start the Remote Desktop session.
+
+# [Microsoft Remote Desktop for macOS](#tab/macOS)
+
+1. Open Remote Desktop.
+
+1. Select **PCs**.
+
+1. On the Connections menu, select **Edit PC**.
+
+1. Select **Display**.
+
+1. On the Display tab, select **Use all monitors**, and then select **Save**.
+
+ :::image type="content" source="media/how-to-configure-multiple-monitors/remote-desktop-for-mac.png" alt-text="Screenshot showing the Edit PC dialog box with the display configuration options.":::
+
+1. Select your dev box to begin the Remote Desktop session.
+
+
+
+> [!TIP]
+> For more information about the Microsoft remote desktop clients currently available, see:
+> - [Remote Desktop clients for Azure Virtual Desktop](/azure/virtual-desktop/users/remote-desktop-clients-overview)
+> - [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows](/azure/virtual-desktop/users/connect-windows)
+
+## Related content
+
+- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)
+- Learn how to [connect to a dev box through the browser](./quickstart-create-dev-box.md#connect-to-a-dev-box)
dev-box Tutorial Configure Multiple Monitors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-configure-multiple-monitors.md
- Title: 'Tutorial: Configure multiple monitors for your dev box'-
-description: In this tutorial, you configure an RDP client to use multiple monitors when connecting to a dev box.
---- Previously updated : 07/26/2024--
-#Customer intent: As a dev box user, I want to use multiple monitors when connecting to my dev box so that I can have more screen real estate to work with.
--
-# Tutorial: Use multiple monitors on a dev box
-
-In this tutorial, you configure a remote desktop protocol (RDP) client to use dual or more monitors when you connect to your dev box.
-
-Using multiple monitors gives you more screen real estate to work with. You can spread your work across multiple screens, or use one screen for your development environment and another for documentation, email, or messaging.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Configure the remote desktop client for multiple monitors.
-
-## Prerequisites
-
-To complete this tutorial, you must [install the Remote desktop app](tutorial-connect-to-dev-box-with-remote-desktop-app.md#download-the-remote-desktop-client-for-windows) on your local machine.
-
-## Configure Remote Desktop to use multiple monitors
-
-When you connect to your cloud-hosted developer machine in Microsoft Dev Box by using a remote desktop app, you can take advantage of a multi-monitor setup. Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both support up to 16 monitors.
-
-> [!IMPORTANT]
-> The Windows Store version of Microsoft Remote Desktop doesn't support multiple monitors. For more information, see [Get started with the Microsoft Store client](/windows-server/remote/remote-desktop-services/clients/windows).
-
-Use the following steps to configure Remote Desktop to use multiple monitors.
-
-# [Microsoft Remote Desktop app](#tab/windows-app)
-
-1. Open the Remote Desktop app.
-
- :::image type="content" source="./media/tutorial-configure-multiple-monitors/remote-desktop-app.png" alt-text="Screenshot of the Windows 11 start menu with Remote desktop showing and open highlighted.":::
-
-1. Right-click the dev box you want to configure, and then select **Settings**.
-
-1. On the settings pane, turn off **Use default settings**.
-
- :::image type="content" source="media/tutorial-configure-multiple-monitors/turn-off-default-settings.png" alt-text="Screenshot showing the Use default settings slider.":::
-
-1. In **Display Settings**, in the **Display configuration** list, select the displays to use and configure the options:
-
- | Value | Description | Options |
- ||||
- | All displays | Remote desktop uses all available displays. | - Use only a single display when in windowed mode. <br> - Fit the remote session to the window. |
- | Single display | Remote desktop uses a single display. | - Start the session in full screen mode. <br> - Fit the remote session to the window. <br> - Update the resolution on when a window is resized. |
- | Select displays | Remote Desktop uses only the monitors you select. | - Maximize the session to the current displays. <br> - Use only a single display when in windowed mode. <br> - Fit the remote connection session to the window. |
-
- :::image type="content" source="media/tutorial-configure-multiple-monitors/remote-desktop-select-display.png" alt-text="Screenshot showing the Remote Desktop display settings, highlighting the option to select the number of displays.":::
-
-1. Close the settings pane, and then select your dev box to begin the Remote Desktop session.
-
-# [Microsoft Remote Desktop Connection](#tab/windows-connection)
-
-1. Open a Remote Desktop Connection.
-
- :::image type="content" source="media/tutorial-configure-multiple-monitors/remote-desktop-connection-open.png" alt-text="Screenshot of the Start menu showing the Remote Desktop Connection." lightbox="media/tutorial-configure-multiple-monitors/remote-desktop-connection-open.png":::
-
-1. In **Computer**, enter the name of your dev box, then select **Show Options**.
-
- :::image type="content" source="media/tutorial-configure-multiple-monitors/remote-desktop-connection-show-options.png" alt-text="Screenshot of the Remote Desktop Connection dialog box with Show options highlighted." lightbox="media/tutorial-configure-multiple-monitors/remote-desktop-connection-show-options.png":::
-
-1. On the **Display** tab, select **Use all my monitors for the remote session**.
-
- :::image type="content" source="media/tutorial-configure-multiple-monitors/remote-desktop-connection-all-monitors.png" alt-text="Screenshot of the Remote Desktop Connection Display tab and Use all my monitors for the current session highlighted." lightbox="media/tutorial-configure-multiple-monitors/remote-desktop-connection-all-monitors.png":::
-
-1. Select **Connect** to start the Remote Desktop session.
-
-# [Non-Windows](#tab/non-Windows)
-
-1. Open Remote Desktop.
-
-1. Select **PCs**.
-
-1. On the Connections menu, select **Edit PC**.
-
-1. Select **Display**.
-
-1. On the Display tab, select **Use all monitors**, and then select **Save**.
-
- :::image type="content" source="media/tutorial-configure-multiple-monitors/remote-desktop-for-mac.png" alt-text="Screenshot showing the Edit PC dialog box with the display configuration options.":::
-
-1. Select your dev box to begin the Remote Desktop session.
-
-
-
-> [!TIP]
-> For more information about the Microsoft remote desktop clients currently available, see:
-> - [Remote Desktop clients for Remote Desktop Services and remote PCs](https://aka.ms/rdapps)
-> - [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows](/azure/virtual-desktop/users/connect-windows)
-
-## Clean up resources
-
-Dev boxes incur costs whenever they're running. When you finish using your dev box, shut down or stop it to avoid incurring unnecessary costs.
-
-You can stop a dev box from the developer portal:
-
-1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
-
-1. For the dev box that you want to stop, select More options (**...**), and then select **Stop**.
-
- :::image type="content" source="./media/tutorial-configure-multiple-monitors/stop-dev-box.png" alt-text="Screenshot of the menu command to stop a dev box.":::
-
-The dev box might take a few moments to stop.
-
-## Related content
--- [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)-- Learn how to [connect to a dev box through the browser](./quickstart-create-dev-box.md#connect-to-a-dev-box)-
dev-box Tutorial Connect To Dev Box With Remote Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app.md
Previously updated : 01/30/2024 Last updated : 08/30/2024 +
+#customer intent: As a developer, I want to connect to my dev box by using a remote desktop client so that I can access my development environment from a different device.
# Tutorial: Use a remote desktop client to connect to a dev box
To complete this tutorial, you must have access to a dev box through the develop
You can use a remote desktop client application to access your dev box in Microsoft Dev Box. Remote desktop clients are available for many operating systems and devices, including mobile devices running iOS, iPadOS or Android.
-Select the relevant tab to view the steps to download and use the Remote Desktop client application from Windows or non-Windows operating systems.
+For information about Microsoft Remote Desktop clients for macOS, iOS/iPadOS, and Android/Chrome OS, see: [Remote Desktop clients for Remote Desktop Services and remote PCs](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).
+
+Select the relevant tab to view the steps to download and use the Remote Desktop client application from Windows or macOS.
# [Windows](#tab/windows)
To open the Remote Desktop client:
:::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/open-windows-desktop.png" alt-text="Screenshot of the option to open the Windows Remote Desktop client in the connection dialog.":::
-# [Non-Windows](#tab/non-Windows)
+# [macOS](#tab/macOS)
### Download the Remote Desktop client
-To use a non-Windows Remote Desktop client to connect to your dev box:
+To use a macOS Remote Desktop client to connect to your dev box:
1. Sign in to the [developer portal](https://aka.ms/devbox-portal).
To use a non-Windows Remote Desktop client to connect to your dev box:
1. Your dev box appears in the Remote Desktop client's **Workspaces** area. Double-click the dev box to connect.
- :::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/non-windows-rdp-connect-dev-box.png" alt-text="Screenshot of a dev box in a non-Windows Remote Desktop client Workspace." lightbox="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/non-windows-rdp-connect-dev-box.png":::
+ :::image type="content" source="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/non-windows-rdp-connect-dev-box.png" alt-text="Screenshot of a dev box in a macOS Remote Desktop client Workspace." lightbox="./media/tutorial-connect-to-dev-box-with-remote-desktop-app/non-windows-rdp-connect-dev-box.png":::
## Clean up resources
healthcare-apis Configure Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-import-data.md
For this step, you need to get the request URL and JSON body:
3. Select **JSON View**. 4. Select the API version as **2022-06-01** or later.
-To specify the Azure storage account in JSON view, you need to use the [REST API](/rest/api/healthcareapis/fhir-services/create-or-update) to update the FHIR service.
+
+Because the JSON view is in **READ** mode, you need to use the [REST API](/rest/api/healthcareapis/services/create-or-update) to update the FHIR service and specify the Azure storage account.
[![Screenshot of selections for opening the JSON view.](media/bulk-import/fhir-json-view.png)](media/bulk-import/fhir-json-view.png#lightbox)
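+
+As a rough sketch of the request body you might send through the REST API, the storage account is referenced under `importConfiguration` in the service properties. The property names below follow the import configuration documentation, but treat the exact shape as an assumption and validate it against the REST API reference before use; the storage account name is a placeholder.
+
+```json
+{
+  "properties": {
+    "importConfiguration": {
+      "enabled": true,
+      "initialImportMode": true,
+      "integrationDataStore": "<your-storage-account-name>"
+    }
+  }
+}
+```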
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
To achieve the best performance with the `import` operation, consider these fact
- The data must be in the same tenant as the FHIR service.
+- To obtain an access token, see [Access Token](using-rest-client.md)
+ ### Make a call
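+
+As a hedged sketch of the call shape: you POST to `<your-FHIR-service-URL>/$import` with the `Prefer: respond-async` and `Content-Type: application/fhir+json` headers, passing a `Parameters` resource along the lines shown below. The parameter names follow the FHIR `$import` pattern used by the service, but the storage URL, resource type, and mode value are placeholders; confirm the exact names and supported values against the request samples in this article.
+
+```json
+{
+  "resourceType": "Parameters",
+  "parameter": [
+    { "name": "inputFormat", "valueString": "application/fhir+ndjson" },
+    { "name": "mode", "valueString": "IncrementalLoad" },
+    {
+      "name": "input",
+      "part": [
+        { "name": "type", "valueString": "Patient" },
+        { "name": "url", "valueUri": "https://<storage-account>.blob.core.windows.net/<container>/Patient.ndjson" }
+      ]
+    }
+  ]
+}
+```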
The `import` operation fails and returns `500 Internal Server Error`. The respon
Cause: You reached the storage limit of the FHIR service. Solution: Reduce the size of your data or consider Azure API for FHIR, which has a higher storage limit.
+#### 423 Locked
+
+**Behavior:** The `import` operation fails and returns `423 Locked`. The response body includes this content:
+
+```json
+{
+ "resourceType": "OperationOutcome",
+ "id": "13876ec9-3170-4525-87ec-9e165052d70d",
+ "issue": [
+ {
+ "severity": "error",
+ "code": "processing",
+ "diagnostics": "import operation failed for reason: Service is locked for initial import mode."
+ }
+ ]
+}
+```
+**Cause:** The FHIR service is configured with Initial import mode, which blocks other operations.
+
+**Solution:** Turn off the FHIR service's Initial import mode, or select Incremental mode.
+
## Limitations

- The maximum number of files allowed for each `import` operation is 10,000.
- The number of files ingested into the FHIR server with the same `lastUpdated` field value, down to the millisecond, can't exceed 10,000.
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Previously updated : 07/25/2024 Last updated : 08/29/2024 #Customer intent: I need to understand the Azure Red Hat OpenShift support policies for OpenShift 4.0.
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
### Cluster management * Don't remove or modify the 'arosvc.azurecr.io' cluster pull secret.
-* Don't override any of the cluster's MachineConfig objects (for example, the kubelet configuration) in any way.
+* Don't create new MachineConfig objects or modify existing ones, unless explicitly supported in the Azure Red Hat OpenShift documentation.
+* Don't create new KubeletConfig objects or modify existing ones, unless explicitly supported in the Azure Red Hat OpenShift documentation.
* Don't set any unsupportedConfigOverrides options. Setting these options prevents minor version upgrades. * Don't place policies within your subscription or management group that prevent SREs from performing normal maintenance against the Azure Red Hat OpenShift cluster. For example, don't require tags on the Azure Red Hat OpenShift RP-managed cluster resource group. * Don't circumvent the deny assignment that is configured as part of the service, or perform administrative tasks normally prohibited by the deny assignment. * OpenShift relies on the ability to automatically tag Azure resources. If you have configured a tagging policy, don't apply more than 10 user-defined tags to resources in the managed resource group. - ## Incident management An incident is an event that results in a degradation or outage Azure Red Hat OpenShift services. Incidents are raised by a customer or Customer Experience and Engagement (CEE) member through a [support case](openshift-service-definitions.md#support), directly by the centralized monitoring and alerting system, or directly by a member of the ARO Site Reliability Engineer (SRE) team.
sentinel Geographical Availability Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/geographical-availability-data-residency.md
Microsoft Sentinel can run on workspaces in the following regions:
| **North America**| **Canada** | • Canada Central<br>• Canada East |
| | **United States** | • Central US<br>• East US<br>• East US 2<br>• East US 2 EUAP<br>• North Central US<br>• South Central US<br>• West US<br>• West US 2<br>• West US 3<br>• West Central US<br><br>**Azure government** <br>• USGov Arizona<br>• USGov Virginia<br>• USNat East<br>• USNat West<br>• USSec East<br>• USSec West|
|**South America** | **Brazil** | • Brazil South<br>• Brazil Southeast |
-|**Asia** | |• East Asia<br>• Southeast Asia |
+|**Asia and Middle East** | |• East Asia<br>• Southeast Asia |
| | **China 21Vianet**| • China East 2<br>• China North 3|
| | **India**| • Central India<br>• Jio India West<br>• Jio India Central|
-| | **Israel** | • Israel |
+| | **Israel** | • Israel Central |
| | **Japan** | • Japan East<br>• Japan West|
| | **Korea**| • Korea Central<br>• Korea South|
| | **Qatar** | • Qatar Central|