Updates from: 01/05/2023 02:05:21
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Admin Units Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-assign-roles.md
The following Azure AD roles can be assigned with administrative unit scope. Add
| [SharePoint Administrator](permissions-reference.md#sharepoint-administrator) | Can manage Microsoft 365 groups in the assigned administrative unit only. For SharePoint sites associated with Microsoft 365 groups in an administrative unit, can also update site properties (site name, URL, and external sharing policy) using the Microsoft 365 admin center. Cannot use the SharePoint admin center or SharePoint APIs to manage sites. |
| [Teams Administrator](permissions-reference.md#teams-administrator) | Can manage Microsoft 365 groups in the assigned administrative unit only. Can manage team members in the Microsoft 365 admin center for teams associated with groups in the assigned administrative unit only. Cannot use the Teams admin center. |
| [Teams Devices Administrator](permissions-reference.md#teams-devices-administrator) | Can perform management related tasks on Teams certified devices. |
-| [User Administrator](permissions-reference.md#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins within the assigned administrative unit only. |
+| [User Administrator](permissions-reference.md#user-administrator) | Can manage all aspects of users and groups, including resetting passwords for limited admins within the assigned administrative unit only. Cannot currently manage users' profile photographs. |
| [&lt;Custom role&gt;](custom-create.md) | Can perform actions that apply to users, groups, or devices, according to the definition of the custom role. |

Certain role permissions apply only to non-administrator users when assigned with the scope of an administrative unit. In other words, administrative unit scoped [Helpdesk Administrators](permissions-reference.md#helpdesk-administrator) can reset passwords for users in the administrative unit only if those users do not have administrator roles. The following permissions are restricted when the target of an action is another administrator:
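For context, a role can be assigned at administrative unit scope through Microsoft Graph. A minimal sketch, assuming placeholder object IDs; the role definition ID shown is the commonly documented template ID for User Administrator:

```bash
# Sketch: assign User Administrator scoped to an administrative unit via
# Microsoft Graph (the IDs below are placeholders).
az rest --method post \
  --url "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments" \
  --body '{
    "principalId": "<user-object-id>",
    "roleDefinitionId": "fe930be7-5e62-47db-91af-98c3a49a38b1",
    "directoryScopeId": "/administrativeUnits/<admin-unit-id>"
  }'
```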
aks Support Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/support-policies.md
Base infrastructure as a service (IaaS) cloud components, such as compute or net
With AKS, you get a fully managed *control plane*. The control plane contains all of the components and services you need to operate and provide Kubernetes clusters to end users. All Kubernetes components are maintained and operated by Microsoft.
-Microsoft manages and monitors the following components through the control pane:
+Microsoft manages and monitors the following components through the control plane:
* Kubelet or Kubernetes API servers
* Etcd or a compatible key-value store, providing Quality of Service (QoS), scalability, and runtime
When a technical support issue is root-caused by one or more upstream bugs, AKS
* Rough timelines for the issue's inclusion, based on the upstream release cadence.
-[add-ons]: integrations.md#add-ons
+[add-ons]: integrations.md#add-ons
api-management Identity Provider Adal Retirement Sep 2025 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/breaking-changes/identity-provider-adal-retirement-sep-2025.md
Your service is impacted by this change if:
## What is the deadline for the change?
-On 30 September, 2025, these identity providers will stop functioning. To avoid disruption of your developer portal, you need to update your Azure AD applications and identity provider configuration in Azure API Management by that date. Your developer portal might be at a security risk after Microsoft ADAL support ends in December 2022.
+On 30 September, 2025, these identity providers will stop functioning. To avoid disruption of your developer portal, you need to update your Azure AD applications and identity provider configuration in Azure API Management by that date. Your developer portal might be at a security risk after Microsoft ADAL support ends on June 1, 2023. Learn more in [the official announcement](/azure/active-directory/fundamentals/whats-new#adal-end-of-support-announcement).
Developer portal sign-in and sign-up with Azure AD or Azure AD B2C will stop working past 30 September, 2025 if you don't update your ADAL-based Azure AD or Azure AD B2C identity providers. This new authentication method is more secure, as it relies on the OAuth 2.0 authorization code flow with PKCE and uses an up-to-date software library.
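For readers planning the migration, the client library for an existing Azure AD identity provider can also be switched through the ARM API. A hedged sketch; the `clientLibrary` property name and `MSAL-2` value are assumptions based on recent API Management REST API versions, not taken from this commit:

```bash
# Sketch (assumed property name/value): switch the developer portal's
# Azure AD identity provider from ADAL to MSAL.
az rest --method patch \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apim-name>/identityProviders/aad?api-version=2022-08-01" \
  --body '{"properties": {"clientLibrary": "MSAL-2"}}'
```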
application-gateway Ingress Controller Add Health Probes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-add-health-probes.md
spec:
spec:
  containers:
    - name: aspnetapp
- image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
+ image: mcr.microsoft.com/dotnet/samples:aspnetapp
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
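Since this article covers health probes picked up by AGIC, a rough sketch of the full pod manifest with a readiness probe may help; the probe path `/`, port 80, and timings are illustrative assumptions, not part of the diff above:

```bash
# Sketch: a readiness probe AGIC can translate into an Application Gateway
# health probe (path and timings are assumptions for the sample app).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: aspnetapp
  labels:
    app: aspnetapp
spec:
  containers:
    - name: aspnetapp
      image: mcr.microsoft.com/dotnet/samples:aspnetapp
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
      readinessProbe:
        httpGet:
          path: /
          port: 80
        periodSeconds: 3
        timeoutSeconds: 1
EOF
```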
application-gateway Ingress Controller Install New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-install-new.md
Follow the steps below to create an Azure Active Directory (Azure AD) [service p
The `appId` and `password` values from the JSON output will be used in the following steps.
-1. Use the `appId` from the previous command's output to get the `objectId` of the new service principal:
+1. Use the `appId` from the previous command's output to get the `id` of the new service principal:
```azurecli
- objectId=$(az ad sp show --id $appId --query "objectId" -o tsv)
+ objectId=$(az ad sp show --id $appId --query "id" -o tsv)
```
The output of this command is `objectId`, which will be used in the Azure Resource Manager template below.
Kubernetes. We'll use it to install the `application-gateway-kubernetes-ingress`
nano helm-config.yaml
```
+ > [!NOTE]
+ > **For deploying to Sovereign Clouds (e.g., Azure Government)**, the `appgw.environment` configuration parameter must be added and set to the appropriate value as documented below.
+
Values:
- `verbosityLevel`: Sets the verbosity level of the AGIC logging infrastructure. See [Logging Levels](https://github.com/Azure/application-gateway-kubernetes-ingress/blob/463a87213bbc3106af6fce0f4023477216d2ad78/docs/troubleshooting.md#logging-levels) for possible values.
+ - `appgw.environment`: Sets the cloud environment. Possible values: `AZURECHINACLOUD`, `AZUREGERMANCLOUD`, `AZUREPUBLICCLOUD`, `AZUREUSGOVERNMENTCLOUD`
- `appgw.subscriptionId`: The Azure Subscription ID in which Application Gateway resides. Example: `a123b234-a3b4-557d-b2df-a0bc12de1234`
- `appgw.resourceGroup`: Name of the Azure Resource Group in which Application Gateway was created. Example: `app-gw-resource-group`
- `appgw.name`: Name of the Application Gateway. Example: `applicationgatewayd0f0`
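Once `helm-config.yaml` is filled in, the install itself follows the chart referenced earlier in the article. A minimal sketch; the release name and chart version are illustrative assumptions, not from this commit:

```bash
# Sketch: install AGIC with the edited configuration.
helm install ingress-azure \
  -f helm-config.yaml \
  application-gateway-kubernetes-ingress/ingress-azure \
  --version 1.5.0
```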
metadata:
app: aspnetapp
spec:
  containers:
- - image: "mcr.microsoft.com/dotnet/core/samples:aspnetapp"
+ - image: "mcr.microsoft.com/dotnet/samples:aspnetapp"
      name: aspnetapp-image
      ports:
        - containerPort: 80
kubectl apply -f aspnetapp.yaml
## Other Examples

This [how-to guide](ingress-controller-expose-service-over-http-https.md) contains more examples on how to expose an AKS
-service via HTTP or HTTPS, to the Internet with Application Gateway.
+service via HTTP or HTTPS, to the Internet with Application Gateway.
application-gateway Ingress Controller Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ingress-controller-troubleshoot.md
metadata:
app: test-agic-app
spec:
  containers:
- - image: "mcr.microsoft.com/dotnet/core/samples:aspnetapp"
+ - image: "mcr.microsoft.com/dotnet/samples:aspnetapp"
      name: aspnetapp-image
      ports:
        - containerPort: 80
application-gateway Understanding Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/understanding-pricing.md
Monthly price estimates are based on 730 hours of usage per month.
Fixed Price = $0.246 * 730 (Hours) = $179.58
-Variable Costs = $0.008 * 2 (capacity units) * 730 (Hours) = $11.68
+Variable Costs = $0.008 * 2 (Instance Units) * 10 (Capacity Units) * 730 (Hours) = $116.80
DDoS Network Protection Cost = $2,944 * 1 (month) = $2,944
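A quick arithmetic check of the example above, using the numbers as given:

```bash
# Fixed: 0.246 * 730 hours
echo "0.246 * 730" | bc            # 179.580
# Variable: 0.008 * 2 instance units * 10 capacity units * 730 hours
echo "0.008 * 2 * 10 * 730" | bc   # 116.800
```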
container-registry Container Registry Oci Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oci-artifacts.md
To demonstrate this capability, this article shows how to use the [OCI Registry
## Prerequisites

* **Azure container registry** - Create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md).
-* **ORAS tool** - Download and install a current ORAS release for your operating system from the [GitHub repo](https://github.com/deislabs/oras/releases). The tool is released as a compressed tarball (`.tar.gz` file). Extract and install the file using standard procedures for your operating system.
+* **ORAS tool** - Download and install ORAS CLI v0.16.0 for your operating system from the [ORAS installation guide](https://oras.land/cli/).
* **Azure Active Directory service principal (optional)** - To authenticate directly with ORAS, create a [service principal](container-registry-auth-service-principal.md) to access your registry. Ensure that the service principal is assigned a role such as AcrPush so that it has permissions to push and pull artifacts.
* **Azure CLI (optional)** - To use an individual identity, you need a local installation of the Azure CLI. Version 2.0.71 or later is recommended. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
* **Docker (optional)** - To use an individual identity, you must also have Docker installed locally, to authenticate with the registry. Docker provides packages that easily configure Docker on any [macOS][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system.
To demonstrate this capability, this article shows how to use the [OCI Registry
## Sign in to a registry
-This section shows two suggested workflows to sign into the registry, depending on the identity used. Choose the method appropriate for your environment.
-
-### Sign in with ORAS
-
-Using a [service principal](container-registry-auth-service-principal.md) with push rights, run the `oras login` command to sign in to the registry using the service principal application ID and password. Specify the fully qualified registry name (all lowercase), in this case *myregistry.azurecr.io*. The service principal application ID is passed in the environment variable `$SP_APP_ID`, and the password in the variable `$SP_PASSWD`.
-
-```bash
-oras login myregistry.azurecr.io --username $SP_APP_ID --password $SP_PASSWD
-```
-
-To read the password from Stdin, use `--password-stdin`.
+This section shows two suggested workflows to sign in to the registry, depending on the identity used. Choose one of the two methods below, as appropriate for your environment.
### Sign in with Azure CLI
az acr login --name myregistry
> [!NOTE]
> `az acr login` uses the Docker client to set an Azure Active Directory token in the `docker.config` file. The Docker client must be installed and running to complete the individual authentication flow.
+### Sign in with ORAS
+
+This section shows options to sign in to the registry. Choose the method appropriate for your environment.
+
+Run `oras login` to authenticate with the registry. You may pass [registry credentials](container-registry-authentication.md) appropriate for your scenario, such as service principal credentials, user identity, or a repository-scoped token (preview).
+
+- Authenticate with your [individual Azure AD identity](container-registry-authentication.md?tabs=azure-cli#individual-login-with-azure-ad) to use an AD token. Always use `00000000-0000-0000-0000-000000000000` as the user name; the AD token is passed through the `PASSWORD` variable.
+
+ ```azurecli
+ USER_NAME="00000000-0000-0000-0000-000000000000"
+ PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken)
+ ```
+
+- Authenticate with a [repository scoped token](container-registry-repository-scoped-permissions.md) (Preview) to use non-AD based tokens.
+
+ ```azurecli
+ USER_NAME="oras-token"
+ PASSWORD=$(az acr token create -n $USER_NAME \
+ -r $ACR_NAME \
+ --repository $REPO content/write \
+ --only-show-errors \
+ --query "credentials.passwords[0].value" -o tsv)
+ ```
+
+- Authenticate with an Azure Active Directory [service principal with pull and push permissions](container-registry-auth-service-principal.md#create-a-service-principal) (AcrPush role) to the registry.
+
+ ```azurecli
+ SERVICE_PRINCIPAL_NAME="oras-sp"
+ ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
+ PASSWORD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_NAME \
+  --scopes $ACR_REGISTRY_ID \
+ --role acrpush \
+ --query "password" --output tsv)
+ USER_NAME=$(az ad sp list --display-name $SERVICE_PRINCIPAL_NAME --query "[].appId" --output tsv)
+ ```
+
+  Supply the credentials to `oras login` after authentication is configured.
+
+ ```bash
+ oras login $REGISTRY \
+ --username $USER_NAME \
+ --password $PASSWORD
+ ```
+
+To read the password from Stdin, use `--password-stdin`.
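For example, a hedged variant of the login shown above that keeps the password out of shell history:

```bash
# Pipe the password to oras login instead of passing it as an argument.
echo "$PASSWORD" | oras login myregistry.azurecr.io \
  --username "$USER_NAME" \
  --password-stdin
```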
+
## Push an artifact

Create a text file in a local working directory with some sample text. For example, in a bash shell:
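A minimal sketch; the file name is taken from the `rm artifact.txt` cleanup step later in the walkthrough:

```bash
# Create a sample text file to push as an artifact.
echo "Here is an artifact" > artifact.txt
```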
Use the `oras push` command to push this text file to your registry. The followi
```bash
oras push myregistry.azurecr.io/samples/artifact:1.0 \
- --manifest-config :application/vnd.unknown.config.v1+json \
+  --config :application/vnd.unknown.v1 \
  ./artifact.txt:application/vnd.unknown.layer.v1+txt
```
oras push myregistry.azurecr.io/samples/artifact:1.0 \
```cmd
.\oras.exe push myregistry.azurecr.io/samples/artifact:1.0 ^
- --manifest-config NUL:application/vnd.unknown.config.v1+json ^
+ --config NUL:application/vnd.unknown.v1 ^
  .\artifact.txt:application/vnd.unknown.layer.v1+txt
```
rm artifact.txt
Run `oras pull` to pull the artifact:

```bash
-oras pull myregistry.azurecr.io/samples/artifact:1.0 \
- --media-type application/vnd.unknown.layer.v1+txt
+oras pull myregistry.azurecr.io/samples/artifact:1.0
```

Verify that the pull was successful:
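For instance, printing the downloaded file confirms the pull, assuming the artifact pushed above:

```bash
# The pulled layer lands in the current directory as artifact.txt.
cat ./artifact.txt
```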
container-registry Container Registry Oras Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oras-artifacts.md
# Push and pull supply chain artifacts using Azure Registry (Preview)
-Use an Azure container registry to store and manage a graph of artifacts, including signatures, software bill of materials (SBoM), security scan results or other types.
+Use an Azure container registry to store and manage a graph of supply chain artifacts alongside container images, including signatures, software bill of materials (SBoM), security scan results, or other types.
![Graph of artifacts, including a container image, signature and signed software bill of materials](./media/container-registry-artifacts/oras-artifact-graph.svg)
-To demonstrate this capability, this article shows how to use the [OCI Registry as Storage (ORAS)](https://oras.land) tool to push and pull a graph of artifacts to an Azure container registry.
+To demonstrate this capability, this article shows how to use the [OCI Registry as Storage (ORAS)](https://oras.land) tool to push and pull a graph of supply chain artifacts to an Azure container registry.
-ORAS Artifacts support is a preview feature and subject to [limitations](#preview-limitations). It requires [zone redundancy](zone-redundancy.md), which is available in the Premium service tier. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md).
+A supply chain artifact is a type of [OCI Artifact Manifest][oci-artifact-manifest]. OCI Artifact Manifest support is a preview feature and subject to [limitations](#preview-limitations).
## Prerequisites
-* **ORAS CLI** - The ORAS CLI enables attach, copy, push, discover, pull of artifacts to an ORAS Artifacts enabled registry.
+* **ORAS CLI** - The ORAS CLI enables attaching, copying, pushing, discovering, and pulling artifacts to an OCI Artifact Manifest enabled registry.
* **Azure CLI** - To create an identity, list and delete repositories, you need a local installation of the Azure CLI. Version 2.29.1 or later is recommended. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
* **Docker (optional)** - To complete the walkthrough, a container image is referenced. You can use Docker installed locally to build and push a container image, or reference an existing container image. Docker provides packages that easily configure Docker on any [macOS][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system.

## Preview limitations
-ORAS Artifacts support is not available in the government or China clouds, but available in all other regions.
+OCI Artifact Manifest support is not available in the government or China clouds, but available in all other regions.
## ORAS installation
-Download and install a preview ORAS release for your operating system. See [ORAS installation instructions][oras-install-docs] for how to extract and install the file for your operating system. This article uses ORAS CLI 0.14.1 to demonstrate how to manage supply chain artifacts in ACR.
+Download and install a preview ORAS release for your operating system. See [ORAS installation instructions][oras-install-docs] for how to extract and install ORAS for your operating system. This article uses ORAS CLI 0.16.0 to demonstrate how to manage supply chain artifacts in ACR.
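As a convenience, a sketch of a Linux install of that release, assuming the standard oras-project GitHub release asset naming:

```bash
VERSION="0.16.0"
# Download, extract, and install the ORAS CLI binary.
curl -LO "https://github.com/oras-project/oras/releases/download/v${VERSION}/oras_${VERSION}_linux_amd64.tar.gz"
mkdir -p oras-install/
tar -zxf oras_${VERSION}_linux_amd64.tar.gz -C oras-install/
sudo mv oras-install/oras /usr/local/bin/
rm -rf oras_${VERSION}_linux_amd64.tar.gz oras-install/
```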
## Configure a registry
If needed, run the [az group create](/cli/azure/group#az-group-create) command t
```azurecli
az group create --name $ACR_NAME --location southcentralus
```
-### Create ORAS Artifact enabled registry
+### Create OCI Artifact Manifest enabled registry
-Preview support for ORAS Artifacts requires Zone Redundancy, which requires a Premium service tier, in the South Central US region. Run the [az acr create](/cli/azure/acr#az-acr-create) command to create an ORAS Artifacts enabled registry. See the `az acr create` command help for more registry options.
+Preview support for OCI Artifact Manifest requires Zone Redundancy, which requires a Premium service tier in the South Central US region. Run the [az acr create](/cli/azure/acr#az-acr-create) command to create an OCI Artifact Manifest enabled registry. See the `az acr create` command help for more registry options.
```azurecli
az acr create \
az acr create \
  --output jsonc
```
-In the command output, note the `zoneRedundancy` property for the registry. When enabled, the registry is zone redundant, and ORAS Artifact enabled.
+In the command output, note the `zoneRedundancy` property for the registry. When enabled, the registry is zone redundant, and OCI Artifact Manifest enabled.
```output
{
Attach the multi-file artifact as a reference.
```bash
oras attach $IMAGE \
  ./readme.md:application/markdown \
- ./readme-details.md:application/markdown
+ ./readme-details.md:application/markdown \
  --artifact-type readme/example
```

## Discovering artifact references
-The ORAS Artifacts Specification defines a [referrers API][oras-artifacts-referrers] for discovering references to a `subject` artifact. The `oras discover` command can show the list of references to the container image.
+The [OCI v1.1 Specification][oci-spec] defines a [referrers API][oci-artifacts-referrers] for discovering references to a `subject` artifact. The `oras discover` command can show the list of references to the container image.
Using `oras discover`, view the graph of artifacts now stored in the registry.
myregistry.azurecr.io/net-monitor:v1
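A hedged example of the discovery call itself; `-o tree` renders the reference graph rooted at the subject image:

```bash
# Show references to $IMAGE as a tree (output format flag in ORAS CLI 0.16).
oras discover -o tree $IMAGE
```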
## Creating deep graphs of artifacts
-The ORAS Artifacts specification enables deep graphs, enabling signed software bill of materials (SBoM) and other artifact types.
+The OCI v1.1 Specification enables deep graphs, enabling signed software bill of materials (SBoM) and other artifact types.
### Create a sample SBoM
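A minimal sketch, assuming the conventions used elsewhere in this article; the JSON content and `sbom/example` artifact type are illustrative:

```bash
# Create a placeholder SBoM and attach it to the image as a reference.
echo '{"version": "0.0.0.0", "artifact": "net-monitor:v1"}' > sbom.json
oras attach $IMAGE \
  ./sbom.json:application/json \
  --artifact-type sbom/example
```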
Artifacts that are pushed as references typically do not have tags as they are
```bash
SBOM_DIGEST=$(oras discover -o json \
  --artifact-type sbom/example \
- $IMAGE | jq -r ".referrers[0].digest")
+ $IMAGE | jq -r ".manifests[0].digest")
```

Create a signature of an SBoM
To pull a referenced type, the digest of reference is discovered with the `oras
```bash
DOC_DIGEST=$(oras discover -o json \
  --artifact-type 'readme/example' \
- $IMAGE | jq -r ".referrers[0].digest")
+ $IMAGE | jq -r ".manifests[0].digest")
```

### Create a clean directory for downloading
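A sketch of the download step, assuming the repository name from the sample image above:

```bash
mkdir -p ./download
# Pull the referenced document by its discovered digest into ./download.
oras pull -o ./download "myregistry.azurecr.io/net-monitor@$DOC_DIGEST"
```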
ls ./download
## View the repository and tag listing
-ORAS Artifacts enables artifact graphs to be pushed, discovered, pulled and copied without having to assign tags. This enables a tag listing to focus on the artifacts users think about, as opposed to the signatures and SBoMs that are associated with the container images, helm charts and other artifacts.
+OCI Artifact Manifest enables artifact graphs to be pushed, discovered, pulled and copied without having to assign tags. This enables a tag listing to focus on the artifacts users think about, as opposed to the signatures and SBoMs that are associated with the container images, helm charts and other artifacts.
### View a list of tags
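For example, tags can be listed with the Azure CLI; the repository name here assumes the sample image above, and artifacts pushed purely as references won't appear:

```bash
az acr repository show-tags \
  --name $ACR_NAME \
  --repository net-monitor \
  --output jsonc
```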
The signature is untagged, but tracked as an `oras.artifact.manifest` reference t
```

## Delete all artifacts in the graph
-Support for the ORAS Artifacts specification enables deleting the graph of artifacts associated with the root artifact. Use the [az acr repository delete][az-acr-repository-delete] command to delete the signature, SBoM and the signature of the SBoM.
+Support for the OCI v1.1 Specification enables deleting the graph of artifacts associated with the root artifact. Use the [az acr repository delete][az-acr-repository-delete] command to delete the signature, SBoM and the signature of the SBoM.
```azurecli
az acr repository delete \
az acr manifest list-metadata \
## Next steps

* Learn more about [the ORAS CLI](https://oras.land/cli/)
-* Learn more about [ORAS Artifacts][oras-artifacts] for how to push, discover, pull, copy a graph of supply chain artifacts
+* Learn more about [OCI Artifact Manifest][oci-artifact-manifest] for how to push, discover, pull, copy a graph of supply chain artifacts
<!-- LINKS - external -->
[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms
az acr manifest list-metadata \
[docker-windows]: https://docs.docker.com/docker-for-windows/
[oras-install-docs]: https://oras.land/cli/
[oras-docs]: https://oras.land/
-[oras-artifacts]: https://github.com/oras-project/artifacts-spec/
+[oci-artifacts-referrers]: https://github.com/opencontainers/distribution-spec/blob/main/spec.md#listing-referrers/
+[oci-artifact-manifest]: https://github.com/opencontainers/image-spec/blob/main/artifact.md/
+[oci-spec]: https://github.com/opencontainers/distribution-spec/blob/main/spec.md/
+
<!-- LINKS - internal -->
[az-acr-repository-show]: /cli/azure/acr/repository?#az_acr_repository_show
[az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
Now you can use the SPN to automatically access EA APIs. The SPN has the Departm
| `properties.principalId` | It is the value of Object ID. See [Find your SPN and tenant ID](#find-your-spn-and-tenant-id). |
| `properties.principalTenantId` | See [Find your SPN and tenant ID](#find-your-spn-and-tenant-id). |
- | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}/enrollmentAccounts/196987/billingRoleDefinitions/a0bcee42-bf30-4d1b-926a-48d21664ef71` |
+ | `properties.roleDefinitionId` | `/providers/Microsoft.Billing/billingAccounts/{BillingAccountID}/enrollmentAccounts/{enrollmentAccountID}/billingRoleDefinitions/a0bcee42-bf30-4d1b-926a-48d21664ef71` |
The billing account name is the same parameter that you used in the API parameters. It's the enrollment ID that you see in the EA portal and the Azure portal.
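Putting the pieces together, a hedged sketch of the REST call that creates the assignment; the IDs are placeholders, and the API version follows the EA billing RBAC pattern this article describes:

```bash
az rest --method put \
  --url "https://management.azure.com/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/enrollmentAccounts/<EnrollmentAccountID>/billingRoleAssignments/<NewRoleAssignmentGuid>?api-version=2019-10-01-preview" \
  --body '{
    "properties": {
      "principalId": "<SPN-object-id>",
      "principalTenantId": "<tenant-id>",
      "roleDefinitionId": "/providers/Microsoft.Billing/billingAccounts/<BillingAccountID>/enrollmentAccounts/<EnrollmentAccountID>/billingRoleDefinitions/a0bcee42-bf30-4d1b-926a-48d21664ef71"
    }
  }'
```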
marketplace Marketplace Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-containers.md
For a Kubernetes application-based offer, the following requirements apply:
| Requirement | Details |
|: |: |
| Billing and metering | Support one of the PerCore, PerEveryCoreInCluster, or BYOL billing models. |
-| Artifacts packaged as a Cloud Native Application Bundle (CNAB) | The Helm chart, manifest, createUiDefinition.json, and Azure Resource Manager template must be packaged as a CNAB. For more information, see [prepare technical assets][azure-kubernetes-technical-assets]. |
+| Artifacts packaged as a Cloud Native Application Bundle (CNAB) | The Helm chart, manifest, createUiDefinition.json, and Azure Resource Manager template must be packaged as a CNAB. For more information, see [Prepare technical assets](azure-container-technical-assets.md). |
| Hosting in an Azure Container Registry repository | The CNAB must be hosted in an Azure Container Registry repository. For more information about working with Azure Container Registry, see [Quickstart: Create a private container registry by using the Azure portal](../container-registry/container-registry-get-started-portal.md).<br><br> |

## Next steps
notification-hubs Notification Hubs Push Notification Registration Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-push-notification-registration-management.md
public async Task<HttpResponseMessage> Put(DeviceInstallation deviceUpdate)
}
```
-### Example code to register with a notification hub from a device using a registration ID
+### Example code to register with a notification hub from a backend using a registration ID
From your app backend, you can perform basic CRUDS operations on registrations. For example:
storage Data Lake Storage Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-best-practices.md
Every workload has different requirements on how the data is consumed, but these
In IoT workloads, there can be a great deal of data being ingested that spans across numerous products, devices, organizations, and customers. It's important to pre-plan the directory layout for organization, security, and efficient processing of the data for down-stream consumers. A general template to consider might be the following layout:
-`*{Region}/{SubjectMatter(s)}/{yyyy}/{mm}/{dd}/{hh}/*`
+- *{Region}/{SubjectMatter(s)}/{yyyy}/{mm}/{dd}/{hh}/*
For example, landing telemetry for an airplane engine within the UK might look like the following structure:
-`*UK/Planes/BA1293/Engine1/2017/08/11/12/*`
+- *UK/Planes/BA1293/Engine1/2017/08/11/12/*
-In this example, by putting the date at the end of the directory structure, you can use ACLs to more easily secure regions and subject matters to specific users and groups. If you put the data structure at the beginning, it would be much more difficult to secure these regions and subject matters. For example, if you wanted to provide access only to UK data or certain planes, you'd need to apply a separate permission for numerous directories under every hour directory. This structure would also exponentially increase the number of directories as time went on.
+In this example, by putting the date at the end of the directory structure, you can use ACLs to more easily secure regions and subject matters to specific users and groups. If you put the date structure at the beginning, it would be much more difficult to secure these regions and subject matters. For example, if you wanted to provide access only to UK data or certain planes, you'd need to apply a separate permission for numerous directories under every hour directory. This structure would also exponentially increase the number of directories as time went on.
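To make the ACL point concrete, a sketch of granting a group read access to everything under the UK region; the account, filesystem, and object ID are placeholder assumptions:

```bash
# One recursive ACL update on the region directory covers all dates beneath it.
az storage fs access update-recursive \
  --acl "group:<group-object-id>:r-x" \
  --path "UK" \
  --file-system "telemetry" \
  --account-name "<storage-account>"
```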
#### Batch jobs structure
A commonly used approach in batch processing is to place data into an "in" direc
Sometimes file processing is unsuccessful due to data corruption or unexpected formats. In such cases, a directory structure might benefit from a **/bad** folder where files can be moved for further inspection. The batch job might also handle the reporting or notification of these *bad* files for manual intervention. Consider the following template structure:
-`*{Region}/{SubjectMatter(s)}/In/{yyyy}/{mm}/{dd}/{hh}/*\`
-`*{Region}/{SubjectMatter(s)}/Out/{yyyy}/{mm}/{dd}/{hh}/*\`
-`*{Region}/{SubjectMatter(s)}/Bad/{yyyy}/{mm}/{dd}/{hh}/*`
+- *{Region}/{SubjectMatter(s)}/In/{yyyy}/{mm}/{dd}/{hh}/*
+- *{Region}/{SubjectMatter(s)}/Out/{yyyy}/{mm}/{dd}/{hh}/*
+- *{Region}/{SubjectMatter(s)}/Bad/{yyyy}/{mm}/{dd}/{hh}/*
For example, a marketing firm receives daily data extracts of customer updates from their clients in North America. It might look like the following snippet before and after being processed:
-`*NA/Extracts/ACMEPaperCo/In/2017/08/14/updates_08142017.csv*\`
-`*NA/Extracts/ACMEPaperCo/Out/2017/08/14/processed_updates_08142017.csv*`
+- *NA/Extracts/ACMEPaperCo/In/2017/08/14/updates_08142017.csv*
+- *NA/Extracts/ACMEPaperCo/Out/2017/08/14/processed_updates_08142017.csv*
In the common case of batch data being processed directly into databases such as Hive or traditional SQL databases, there isn't a need for an **/in** or **/out** directory because the output already goes into a separate folder for the Hive table or external database. For example, daily extracts from customers would land into their respective directories. Then, a service such as [Azure Data Factory](../../data-factory/introduction.md), [Apache Oozie](https://oozie.apache.org/), or [Apache Airflow](https://airflow.apache.org/) would trigger a daily Hive or Spark job to process and write the data into a Hive table.
For Hive workloads, partition pruning of time-series data can help some queries
Pipelines that ingest time-series data often place their files with a structured naming for files and folders. Below is a common example we see for data that is structured by date:
-*\DataSet\YYYY\MM\DD\datafile_YYYY_MM_DD.tsv*
+*/DataSet/YYYY/MM/DD/datafile_YYYY_MM_DD.tsv*
Notice that the datetime information appears both as folders and in the filename. For date and time, the following is a common pattern:
-*\DataSet\YYYY\MM\DD\HH\mm\datafile_YYYY_MM_DD_HH_mm.tsv*
+*/DataSet/YYYY/MM/DD/HH/mm/datafile_YYYY_MM_DD_HH_mm.tsv*
Again, the choice you make with the folder and file organization should optimize for the larger file sizes and a reasonable number of files in each folder.
storage Data Lake Storage Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-known-issues.md
In the storage browser that appears in the Azure portal, you can't access a file
Third-party applications that use REST APIs will continue to work if you use them with Data Lake Storage Gen2. Applications that call Blob APIs will likely work.
-## Storage Analytics logs (classic)
-
-The setting for retention days is not yet supported, but you can delete logs manually by using any supported tool such as Azure Storage Explorer, REST or an SDK.
## Windows Azure Storage Blob (WASB) driver
storage Lifecycle Management Policy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-policy-configure.md
For a blob snapshot or version, the condition that is checked is the number of d
## Optionally enable access time tracking
-Before you configure a lifecycle management policy, you can choose to enable blob access time tracking. When access time tracking is enabled, a lifecycle management policy can include an action based on the time that the blob was last accessed with a read or write operation.
+Before you configure a lifecycle management policy, you can choose to enable blob access time tracking. When access time tracking is enabled, a lifecycle management policy can include an action based on the time that the blob was last accessed with a read or write operation. To minimize the effect on read access latency, only the first read of the last 24 hours updates the last access time. Subsequent reads in the same 24-hour period don't update the last access time. If a blob is modified between reads, the last access time is the more recent of the two values.
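Access time tracking itself can be enabled with the Azure CLI, for example (resource names are placeholders):

```bash
# Enable last access time tracking on the storage account before a policy
# references the last-accessed condition.
az storage account blob-service-properties update \
  --resource-group <resource-group> \
  --account-name <storage-account> \
  --enable-last-access-tracking true
```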
#### [Portal](#tab/azure-portal)
A lifecycle management policy must be read or written in full. Partial updates a
## See also
- [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
-- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)
+- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)
virtual-desktop Troubleshoot Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-insights.md
This article presents known issues and solutions for common problems in Azure Virtual Desktop Insights.

>[!IMPORTANT]
->[The Log Analytics Agent is currently being deprecated](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). While Azure Virtual Desktop Insights currently uses the Log Analytics Agent for Azure Virtual Desktop support, you'll eventually need to migrate to Azure Virtual Desktop Insights by August 31, 2024. We'll provide instructions for how to migrate when we release the update that allows Azure Virtual Desktop Insights to support the Azure Virtual Desktop Insights Agent. Until then, continue to use the Log Analytics Agent.
+>[The Log Analytics Agent is currently being deprecated](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). While Azure Virtual Desktop Insights currently uses the Log Analytics Agent for Azure Virtual Desktop support, you'll eventually need to migrate to Azure Virtual Desktop Insights by August 31, 2024. We'll provide instructions for how to migrate when we release the update that allows Azure Virtual Desktop Insights to support the Azure Monitor Agent. Until then, continue to use the Log Analytics Agent.
## Issues with configuration and setup
vpn-gateway Vpn Gateway Troubleshoot Site To Site Disconnected Intermittently https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-disconnected-intermittently.md
Make sure that the on-premises VPN device is set to have **one VPN tunnel per su
### Step 5 Check for Security Association Limitations
-The virtual network gateway has limit of 200 subnet Security Association pairs. If the number of Azure virtual network subnets multiplied times by the number of local subnets is greater than 200, you may see sporadic subnets disconnecting.
+The virtual network gateway has a limit of 200 subnet Security Association pairs. If the number of Azure virtual network subnets multiplied by the number of local subnets is greater than 200, you might see sporadic subnets disconnecting.
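As a hypothetical worked example:

```bash
# 15 Azure virtual network subnets x 14 local subnets = 210 SA pairs,
# which exceeds the 200-pair limit.
echo $((15 * 14))   # 210
```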
### Step 6 Check on-premises VPN device external interface address
web-application-firewall Waf Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/waf-sentinel.md
Using Sentinel ingested WAF logs, you can use Sentinel analytics rules to automa
Azure WAF also comes with built-in Sentinel detection rule templates for SQLi, XSS, and Log4J attacks. These templates can be found under the Analytics tab in the 'Rule Templates' section of Sentinel. You can use these templates or define your own templates based on the WAF logs.
-The automation section of these rules can help you automatically respond to the incident by running a playbook An example of such a playbook to respond to attack can be found in network security GitHub repository [here](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Playbook%20-%20WAF%20Sentinel%20Playbook%20Block%20IP%20-%20New). This playbook automatically creates WAF policy custom rules to block the source IPs of the attacker as detected by the WAF analytics detection rules.
+The automation section of these rules can help you automatically respond to the incident by running a playbook. An example of such a playbook to respond to an attack can be found in the network security GitHub repository [here](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Playbook%20-%20WAF%20Sentinel%20Playbook%20Block%20IP%20-%20New). This playbook automatically creates WAF policy custom rules to block the source IPs of the attacker as detected by the WAF analytics detection rules.
## Next steps