Updates from: 01/12/2024 02:13:51
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json) - `2023-07-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) - `2023-08-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-
+- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
#### Example request
-You can make requests using [Azure AI Search](./concepts/use-your-data.md?tabs=ai-search#ingesting-your-data) and [Azure Cosmos DB for MongoDB vCore](./concepts/use-your-data.md?tabs=mongo-db#ingesting-your-data).
+You can make requests using [Azure AI Search](./concepts/use-your-data.md?tabs=ai-search#ingesting-your-data), [Azure Cosmos DB for MongoDB vCore](./concepts/use-your-data.md?tabs=mongo-db#ingesting-your-data), [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning), [Pinecone](https://www.pinecone.io/), and [Elasticsearch](https://www.elastic.co/).
##### Azure AI Search
curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/exten
' ```
+##### Elasticsearch
+
+```console
+curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-12-01-preview \
+-H "Content-Type: application/json" \
+-H "api-key: YOUR_API_KEY" \
+-d \
+{
+ "messages": [
+ {
+ "role": "system",
+ "content": "you are a helpful assistant that talks like a pirate"
+ },
+ {
+ "role": "user",
+ "content": "can you tell me how to care for a parrot?"
+ }
+ ],
+ "dataSources": [
+ {
+ "type": "Elasticsearch",
+ "parameters": {
+ "endpoint": "{search endpoint}",
+ "indexName": "{index name}",
+ "authentication": {
+ "type": "KeyAndKeyId",
+ "key": "{key}",
+ "keyId": "{key id}"
+ }
+ }
+ }
+ ]
+}
+```
+
+##### Azure Machine Learning
+
+```console
+curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-12-01-preview \
+-H "Content-Type: application/json" \
+-H "api-key: YOUR_API_KEY" \
+-d \
+'
+{
+ "messages": [
+ {
+ "role": "system",
+ "content": "you are a helpful assistant that talks like a pirate"
+ },
+ {
+ "role": "user",
+ "content": "can you tell me how to care for a parrot?"
+ }
+ ],
+ "dataSources": [
+ {
+ "type": "AzureMLIndex",
+ "parameters": {
+ "projectResourceId": "/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/Microsoft.MachineLearningServices/workspaces/{workspace-id}",
+ "name": "my-project",
+ "version": "5"
+ }
+ }
+ ]
+}
+'
+```
+
+##### Pinecone
+
+```console
+curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-12-01-preview \
+-H "Content-Type: application/json" \
+-H "api-key: YOUR_API_KEY" \
+-d \
+'
+{
+ "messages": [
+ {
+ "role": "system",
+ "content": "you are a helpful assistant that talks like a pirate"
+ },
+ {
+ "role": "user",
+ "content": "can you tell me how to care for a parrot?"
+ }
+ ],
+ "dataSources": [
+ {
+ "type": "Pinecone",
+ "parameters": {
+ "authentication": {
+ "type": "APIKey",
+ "apiKey": "{api key}"
+ },
+ "environment": "{environment name}",
+ "indexName": "{index name}",
+ "embeddingDependency": {
+ "type": "DeploymentName",
+ "deploymentName": "{embedding deployment name}"
+ },
+ "fieldsMapping": {
+ "titleField": "title",
+ "urlField": "url",
+ "filepathField": "filepath",
+ "contentFields": [
+ "content"
+ ],
+ "contentFieldsSeparator": "\n"
+ }
+ }
+ }
+ ]
+}
+'
+```
+ #### Example response ```json
curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/exten
} ``` ++ | Parameters | Type | Required? | Default | Description | |--|--|--|--|--| | `messages` | array | Required | null | The messages to generate chat completions for, in the chat format. |
The following parameters can be used inside of the `parameters` field inside of
| Parameters | Type | Required? | Default | Description |
|--|--|--|--|--|
-| `type` | string | Required | null | The data source to be used for the Azure OpenAI on your data feature. For Azure AI Search the value is `AzureCognitiveSearch`. For Azure Cosmos DB for MongoDB vCore, the value is `AzureCosmosDB`. |
+| `type` | string | Required | null | The data source to be used for the Azure OpenAI on your data feature. For Azure AI Search, the value is `AzureCognitiveSearch`. For Azure Cosmos DB for MongoDB vCore, the value is `AzureCosmosDB`. For Elasticsearch, the value is `Elasticsearch`. For Azure Machine Learning, the value is `AzureMLIndex`. For Pinecone, the value is `Pinecone`. |
| `indexName` | string | Required | null | The search index to be used. |
| `inScope` | boolean | Optional | true | If set, this value limits responses to the grounding data content. |
| `topNDocuments` | number | Optional | 5 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI Studio. |
The following parameters can be used inside of the `parameters` field inside of
| `strictness` | number | Optional | 3 | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. |
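For orientation, here's a minimal sketch of how these common tuning parameters sit inside a `dataSources` entry, using Azure AI Search as the example. The `endpoint` placeholder and any authentication properties your data source requires are assumptions not covered by the preceding table; see the per-source request examples earlier in this article for the full shape.

```json
"dataSources": [
  {
    "type": "AzureCognitiveSearch",
    "parameters": {
      "endpoint": "{search endpoint}",
      "indexName": "{index name}",
      "inScope": true,
      "topNDocuments": 5,
      "strictness": 3
    }
  }
]
```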
-**The following parameters are used for Azure AI Search**
+### Azure AI Search parameters
+
+The following parameters are used for Azure AI Search.
| Parameters | Type | Required? | Default | Description |
|--|--|--|--|--|
The following parameters can be used inside of the `parameters` field inside of
| `queryType` | string | Optional | simple | Indicates which query option will be used for Azure AI Search. Available types: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, `vectorSemanticHybrid`. |
| `fieldsMapping` | dictionary | Optional for Azure AI Search. | null | Defines which [fields](./concepts/use-your-data.md?tabs=ai-search#index-field-mapping) you want to map when you add your data source. |
+The following parameters are used inside of the `authentication` field, which enables you to use Azure OpenAI [without public network access](./how-to/use-your-data-securely.md).
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `type` | string | Required | null | The authentication type. |
+| `managedIdentityResourceId` | string | Required | null | The resource ID of the user-assigned managed identity to use for authentication. |
+
+```json
+"authentication": {
+ "type": "UserAssignedManagedIdentity",
+ "managedIdentityResourceId": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{resource-name}"
+},
+```
+ The following parameters are used inside of the `fieldsMapping` field. | Parameters | Type | Required? | Default | Description |
The following parameters are used inside of the `fieldsMapping` field.
| `urlField` | string | Optional | null | The field in your index that contains the original URL of each document. |
| `filepathField` | string | Optional | null | The field in your index that contains the original file name of each document. |
| `contentFields` | dictionary | Optional | null | The fields in your index that contain the main text content of each document. |
-| `contentFieldsSeparator` | string | Optional | null | The separator for the your content fields. Use `\n` by default. |
+| `contentFieldsSeparator` | string | Optional | null | The separator for the content fields. Use `\n` by default. |
```json "fieldsMapping": {
The following parameters are used inside of the `fieldsMapping` field.
} ```
-**The following parameters are used for Azure Cosmos DB for MongoDB vCore**
+The following parameters are used inside of the optional `embeddingDependency` parameter, which contains details of a vectorization source that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `deploymentName` | string | Optional | null | The embedding model deployment name, located within the same Azure OpenAI resource. This enables you to use vector search without an Azure OpenAI API key and without Azure OpenAI public network access. |
+| `type` | string | Optional | null | The type of vectorization source to use. For an embedding model deployment name in the same Azure OpenAI resource, the value is `DeploymentName`. |
+
+```json
+"embeddingDependency": {
+ "type": "DeploymentName",
+ "deploymentName": "{embedding deployment name}"
+},
+```
+
+### Azure Cosmos DB for MongoDB vCore parameters
+
+The following parameters are used for Azure Cosmos DB for MongoDB vCore.
| Parameters | Type | Required? | Default | Description |
|--|--|--|--|--|
The following parameters are used inside of the `fieldsMapping` field.
| `containerName` | string | Required | null | Azure Cosmos DB for MongoDB vCore only. The Azure Cosmos Mongo vCore container name in the database. |
| `type` (found inside of `embeddingDependencyType`) | string | Required | null | Indicates the embedding model dependency. |
| `deploymentName` (found inside of `embeddingDependencyType`) | string | Required | null | The embedding model deployment name. |
-| `fieldsMapping` | dictionary | Required for Azure Cosmos DB for MongoDB vCore. | null | Index data column mapping. When using Azure Cosmos DB for MongoDB vCore, the value `vectorFields` is required, which indicates the fields that store vectors. |
+| `fieldsMapping` | dictionary | Required for Azure Cosmos DB for MongoDB vCore. | null | Index data column mapping. When you use Azure Cosmos DB for MongoDB vCore, the value `vectorFields` is required, which indicates the fields that store vectors. |
+
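Unlike the other data sources, no complete `dataSources` example for Azure Cosmos DB for MongoDB vCore appears earlier in this article, so here's a minimal sketch. Only `containerName`, the embedding dependency, and the `vectorFields` mapping come from the preceding table; the `databaseName`, `indexName`, and connection-string authentication properties are assumptions shown for illustration only.

```json
"dataSources": [
  {
    "type": "AzureCosmosDB",
    "parameters": {
      "authentication": {
        "type": "ConnectionString",
        "connectionString": "{connection string}"
      },
      "databaseName": "{database name}",
      "containerName": "{container name}",
      "indexName": "{index name}",
      "embeddingDependency": {
        "type": "DeploymentName",
        "deploymentName": "{embedding deployment name}"
      },
      "fieldsMapping": {
        "vectorFields": [
          "{vector field name}"
        ]
      }
    }
  }
]
```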
+The following parameters are used inside of the optional `embeddingDependency` parameter, which contains details of a vectorization source that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `deploymentName` | string | Optional | null | The embedding model deployment name, located within the same Azure OpenAI resource. This enables you to use vector search without an Azure OpenAI API key and without Azure OpenAI public network access. |
+| `type` | string | Optional | null | The type of vectorization source to use. For an embedding model deployment name in the same Azure OpenAI resource, the value is `DeploymentName`. |
+
+```json
+"embeddingDependency": {
+ "type": "DeploymentName",
+ "deploymentName": "{embedding deployment name}"
+},
+```
+
+### Elasticsearch parameters
+
+The following parameters are used for Elasticsearch.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `endpoint` | string | Required | null | The endpoint for connecting to Elasticsearch. |
+| `indexName` | string | Required | null | The name of the Elasticsearch index. |
+| `type` (found inside of `authentication`) | string | Required | null | The authentication to be used. For Elasticsearch, the value is `KeyAndKeyId`. |
+| `key` (found inside of `authentication`) | string | Required | null | The key used to connect to Elasticsearch. |
+| `keyId` (found inside of `authentication`) | string | Required | null | The key ID used to connect to Elasticsearch. |
+
+The following parameters are used inside of the `fieldsMapping` field.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `titleField` | string | Optional | null | The field in your index that contains the original title of each document. |
+| `urlField` | string | Optional | null | The field in your index that contains the original URL of each document. |
+| `filepathField` | string | Optional | null | The field in your index that contains the original file name of each document. |
+| `contentFields` | dictionary | Optional | null | The fields in your index that contain the main text content of each document. |
+| `contentFieldsSeparator` | string | Optional | null | The separator for the content fields. Use `\n` by default. |
+| `vectorFields` | dictionary | Optional | null | The names of fields that represent vector data. |
+
+```json
+"fieldsMapping": {
+ "titleField": "myTitleField",
+ "urlField": "myUrlField",
+ "filepathField": "myFilePathField",
+ "contentFields": [
+ "myContentField"
+ ],
+ "contentFieldsSeparator": "\n",
+ "vectorFields": [
+ "myVectorField"
+ ]
+}
+```
+
+The following parameters are used inside of the optional `embeddingDependency` parameter, which contains details of a vectorization source that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `deploymentName` | string | Optional | null | The embedding model deployment name, located within the same Azure OpenAI resource. This enables you to use vector search without an Azure OpenAI API key and without Azure OpenAI public network access. |
+| `type` | string | Optional | null | The type of vectorization source to use. For an embedding model deployment name in the same Azure OpenAI resource, the value is `DeploymentName`. |
+
+```json
+"embeddingDependency": {
+ "type": "DeploymentName",
+ "deploymentName": "{embedding deployment name}"
+},
+```
+
+### Azure Machine Learning parameters
+
+The following parameters are used for Azure Machine Learning.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `projectResourceId` | string | Required | null | The project resource ID. |
+| `name` | string | Required | null | The name of the Azure Machine Learning project. |
+| `version` | string | Required | null | The version of the Azure Machine Learning vector index. |
+
+The following parameters are used inside of the optional `embeddingDependency` parameter, which contains details of a vectorization source that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `deploymentName` | string | Optional | null | The embedding model deployment name, located within the same Azure OpenAI resource. This enables you to use vector search without an Azure OpenAI API key and without Azure OpenAI public network access. |
+| `type` | string | Optional | null | The type of vectorization source to use. For an embedding model deployment name in the same Azure OpenAI resource, the value is `DeploymentName`. |
+
+```json
+"embeddingDependency": {
+ "type": "DeploymentName",
+ "deploymentName": "{embedding deployment name}"
+},
+```
+
+### Pinecone parameters
+
+The following parameters are used for Pinecone.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `type` (found inside of `authentication`) | string | Required | null | The authentication to be used. For Pinecone, the value is `APIKey`. |
+| `apiKey` (found inside of `authentication`) | string | Required | null | The API key for Pinecone. |
+| `environment` | string | Required | null | The name of the Pinecone environment. |
+| `indexName` | string | Required | null | The name of the Pinecone index. |
+| `embeddingDependency` | string | Required | null | The embedding dependency for vector search. |
+| `type` (found inside of `embeddingDependency`) | string | Required | null | The type of dependency. For Pinecone the value is `DeploymentName`. |
+| `deploymentName` (found inside of `embeddingDependency`) | string | Required | null | The name of the deployment. |
+| `titleField` (found inside of `fieldsMapping`) | string | Required | null | The name of the index field to use as a title. |
+| `urlField` (found inside of `fieldsMapping`) | string | Required | null | The name of the index field to use as a URL. |
+| `filepathField` (found inside of `fieldsMapping`) | string | Required | null | The name of the index field to use as a file path. |
+| `contentFields` (found inside of `fieldsMapping`) | string | Required | null | The names of the index fields that should be treated as content. |
+| `vectorFields` | dictionary | Optional | null | The names of fields that represent vector data. |
+| `contentFieldsSeparator` (found inside of `fieldsMapping`) | string | Required | null | The separator for the content fields. Use `\n` by default. |
+
+The following parameters are used inside of the optional `embeddingDependency` parameter, which contains details of a vectorization source that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.
+
+| Parameters | Type | Required? | Default | Description |
+|--|--|--|--|--|
+| `deploymentName` | string | Optional | null | The embedding model deployment name, located within the same Azure OpenAI resource. This enables you to use vector search without an Azure OpenAI API key and without Azure OpenAI public network access. |
+| `type` | string | Optional | null | The type of vectorization source to use. For an embedding model deployment name in the same Azure OpenAI resource, the value is `DeploymentName`. |
+
+```json
+"embeddingDependency": {
+ "type": "DeploymentName",
+ "deploymentName": "{embedding deployment name}"
+},
+```
### Start an ingestion job
ai-services Personal Voice Create Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-create-consent.md
Previously updated : 12/1/2023 Last updated : 1/10/2024
With the personal voice feature, it's required that every voice be created with explicit consent from the user. A recorded statement from the user is required acknowledging that the customer (Azure AI Speech resource owner) will create and use their voice.
-To add user consent to the personal voice project, you get the prerecorded consent audio file from a publicly accessible URL (`Consents_Create`) or upload the audio file (`Consents_Post`). In this article, you add consent from a URL.
+To add user consent to the personal voice project, you provide the prerecorded consent audio file [from a publicly accessible URL](#add-consent-from-a-url) (`Consents_Create`) or [upload the audio file](#add-consent-from-a-file) (`Consents_Post`).
## Consent statement
You can get the consent statement text for each locale from the text to speech G
"I [state your first and last name] am aware that recordings of my voice will be used by [state the name of the company] to create and use a synthetic version of my voice." ```
+## Add consent from a file
+
+In this scenario, the audio files must be available locally.
+
+To add consent to a personal voice project from a local audio file, use the `Consents_Post` operation of the custom voice API. Construct the request body according to the following instructions:
+
+- Set the required `projectId` property. See [create a project](./personal-voice-create-project.md).
+- Set the required `voiceTalentName` property. The voice talent name can't be changed later.
+- Set the required `companyName` property. The company name can't be changed later.
+- Set the required `audiodata` property with the consent audio file.
+- Set the required `locale` property. This should be the locale of the consent. The locale can't be changed later. You can find the text to speech locale list [here](/azure/ai-services/speech-service/language-support?tabs=tts).
+
+Make an HTTP POST request using the URI as shown in the following `Consents_Post` example.
+- Replace `YourResourceKey` with your Speech resource key.
+- Replace `YourResourceRegion` with your Speech resource region.
+- Replace `JessicaConsentId` with a consent ID of your choice. The case-sensitive ID will be used in the consent's URI and can't be changed later.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourResourceKey" -F 'description="Consent for Jessica voice"' -F 'projectId="ProjectId"' -F 'voiceTalentName="Jessica Smith"' -F 'companyName="Contoso"' -F 'audiodata=@"D:\PersonalVoiceTest\jessica-consent.wav"' -F 'locale="en-US"' "https://YourResourceRegion.api.cognitive.microsoft.com/customvoice/consents/JessicaConsentId?api-version=2023-12-01-preview"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "id": "JessicaConsentId",
+ "description": "Consent for Jessica voice",
+ "projectId": "ProjectId",
+ "voiceTalentName": "Jessica Smith",
+ "companyName": "Contoso",
+ "locale": "en-US",
+ "status": "NotStarted",
+ "createdDateTime": "2023-04-01T05:30:00.000Z",
+ "lastActionDateTime": "2023-04-02T10:15:30.000Z"
+}
+```
+
+The response header contains the `Operation-Location` property. Use this URI to get details about the `Consents_Post` operation. Here's an example of the response header:
+
+```HTTP 201
+Operation-Location: https://eastus.api.cognitive.microsoft.com/customvoice/operations/070f7986-ef17-41d0-ba2b-907f0f28e314?api-version=2023-12-01-preview
+Operation-Id: 070f7986-ef17-41d0-ba2b-907f0f28e314
+```
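As a hedged illustration of checking that operation, you can send a GET request to the `Operation-Location` URI with the same resource key header used above; the operation ID here is just the example value from the response header, and your region host will differ.

```azurecli-interactive
curl -v -X GET -H "Ocp-Apim-Subscription-Key: YourResourceKey" "https://YourResourceRegion.api.cognitive.microsoft.com/customvoice/operations/070f7986-ef17-41d0-ba2b-907f0f28e314?api-version=2023-12-01-preview"
```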
+ ## Add consent from a URL
+In this scenario, the audio files must already be stored in an Azure Blob Storage container.
+ To add consent to a personal voice project from the URL of an audio file, use the `Consents_Create` operation of the custom voice API. Construct the request body according to the following instructions: - Set the required `projectId` property. See [create a project](./personal-voice-create-project.md).
ai-services Personal Voice Create Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-create-project.md
Previously updated : 12/1/2023 Last updated : 1/10/2024
ai-services Personal Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-create-voice.md
Previously updated : 12/1/2023 Last updated : 1/10/2024
To use personal voice in your application, you need to get a speaker profile ID.
You create a speaker profile ID based on the speaker's verbal consent statement and an audio prompt (a clean human voice sample between 50 - 90 seconds). The user's voice characteristics are encoded in the `speakerProfileId` property that's used for text to speech. For more information, see [use personal voice in your application](./personal-voice-how-to-use.md).
-## Create personal voice
+> [!NOTE]
+> The personal voice ID and speaker profile ID aren't the same. You can choose the personal voice ID, but the speaker profile ID is generated by the service. The personal voice ID is used to manage the personal voice. The speaker profile ID is used for text to speech.
+
+You provide the audio files [from a publicly accessible URL](#create-personal-voice-from-a-url) (`PersonalVoices_Create`) or [upload the audio files](#create-personal-voice-from-a-file) (`PersonalVoices_Post`).
-To create a personal voice and get the speaker profile ID, use the `PersonalVoices_Create` operation of the custom voice API.
+## Create personal voice from a file
-Before calling this API, please store audio files in Azure Blob. In the example below, audio files are https://contoso.blob.core.windows.net/voicecontainer/jessica/*.wav.
+In this scenario, the audio files must be available locally.
-Construct the request body according to the following instructions:
+To create a personal voice and get the speaker profile ID, use the `PersonalVoices_Post` operation of the custom voice API. Construct the request body according to the following instructions:
+
+- Set the required `projectId` property. See [create a project](./personal-voice-create-project.md).
+- Set the required `consentId` property. See [add user consent](./personal-voice-create-consent.md).
+- Set the required `audiodata` property. You can specify one or more audio files in the same request.
+
+Make an HTTP POST request using the URI as shown in the following `PersonalVoices_Post` example.
+- Replace `YourResourceKey` with your Speech resource key.
+- Replace `YourResourceRegion` with your Speech resource region.
+- Replace `JessicaPersonalVoiceId` with a personal voice ID of your choice. The case-sensitive ID will be used in the personal voice's URI and can't be changed later.
+
+```azurecli-interactive
+curl -v -X POST -H "Ocp-Apim-Subscription-Key: YourResourceKey" -F 'projectId="ProjectId"' -F 'consentId="JessicaConsentId"' -F 'audiodata=@"D:\PersonalVoiceTest\CNVSample001.wav"' -F 'audiodata=@"D:\PersonalVoiceTest\CNVSample002.wav"' "
+https://YourResourceRegion.api.cognitive.microsoft.com/customvoice/personalvoices/JessicaPersonalVoiceId?api-version=2023-12-01-preview"
+```
+
+You should receive a response body in the following format:
+
+```json
+{
+ "id": "JessicaPersonalVoiceId",
+ "speakerProfileId": "3059912f-a3dc-49e3-bdd0-02e449df1fe3",
+ "projectId": "ProjectId",
+ "consentId": "JessicaConsentId",
+ "status": "NotStarted",
+ "createdDateTime": "2023-04-01T05:30:00.000Z",
+ "lastActionDateTime": "2023-04-02T10:15:30.000Z"
+}
+```
+
+Use the `speakerProfileId` property to integrate personal voice in your text to speech application. For more information, see [use personal voice in your application](./personal-voice-how-to-use.md).
+
+The response header contains the `Operation-Location` property. Use this URI to get details about the `PersonalVoices_Post` operation. Here's an example of the response header:
+
+```HTTP 201
+Operation-Location: https://eastus.api.cognitive.microsoft.com/customvoice/operations/1321a2c0-9be4-471d-83bb-bc3be4f96a6f?api-version=2023-12-01-preview
+Operation-Id: 1321a2c0-9be4-471d-83bb-bc3be4f96a6f
+```
+
+## Create personal voice from a URL
+
+In this scenario, the audio files must already be stored in an Azure Blob Storage container.
+
+To create a personal voice and get the speaker profile ID, use the `PersonalVoices_Create` operation of the custom voice API. Construct the request body according to the following instructions:
- Set the required `projectId` property. See [create a project](./personal-voice-create-project.md). - Set the required `consentId` property. See [add user consent](./personal-voice-create-consent.md).
Construct the request body according to the following instructions:
- Set the required `extensions` property to the extensions of the audio files. - Optionally, set the `prefix` property to set a prefix for the blob name.
-> [!NOTE]
-> The personal voice ID and speaker profile ID aren't same. You can choose the personal voice ID, but the speaker profile ID is generated by the service. The personal voice ID is used to manage the personal voice. The speaker profile ID is used for text to speech.
- Make an HTTP PUT request using the URI as shown in the following `PersonalVoices_Create` example. - Replace `YourResourceKey` with your Speech resource key. - Replace `YourResourceRegion` with your Speech resource region.
ai-services Personal Voice How To Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-how-to-use.md
Previously updated : 11/15/2023 Last updated : 1/10/2024
ai-services Personal Voice Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-overview.md
Previously updated : 12/1/2023 Last updated : 1/10/2024
ai-studio Cli Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/cli-install.md
You can install the Azure AI CLI locally as described previously, or run it usin
### Option 1: Using VS Code (web) in Azure AI Studio
-VS Code (web) in Azure AI Studio creates and runs the development container on a compute instance. To get started with this approach, follow the instructions in [How to work with Azure AI Studio projects in VS Code (Web)](vscode-web.md).
+VS Code (web) in Azure AI Studio creates and runs the development container on a compute instance. To get started with this approach, follow the instructions in [Work with Azure AI projects in VS Code](develop-in-vscode.md).
Our prebuilt development environments are based on a docker container that has the Azure AI SDK generative packages, the Azure AI CLI, the Prompt flow SDK, and other tools. It's configured to run VS Code remotely inside of the container. The docker container is similar to [this Dockerfile](https://github.com/Azure/aistudio-copilot-sample/blob/main/.devcontainer/Dockerfile), and is based on [Microsoft's Python 3.10 Development Container Image](https://mcr.microsoft.com/en-us/product/devcontainers/python/about).
As mentioned in step 2 above, your flow.dag.yaml should reference connection and
If you're working in your own development environment (including Codespaces), you might need to manually update these fields so that your flow runs connected to Azure resources.
-If you launched VS Code from the AI Studio, you are in an Azure-connected custom container experience, and you can work directly with flows stored in the `shared` folder. These flow files are the same underlying files prompt flow references in the Studio, so they should already be configured with your project connections and deployments. To learn more about the folder structure in the VS Code container experience, see [Get started with Azure AI projects in VS Code (Web)](vscode-web.md)
+If you launched VS Code from the AI Studio, you are in an Azure-connected custom container experience, and you can work directly with flows stored in the `shared` folder. These flow files are the same underlying files prompt flow references in the Studio, so they should already be configured with your project connections and deployments. To learn more about the folder structure in the VS Code container experience, see [Work with Azure AI projects in VS Code](develop-in-vscode.md)
## ai chat
ai help
## Next steps -- [Try the Azure AI CLI from Azure AI Studio in a browser](vscode-web.md)
+- [Try the Azure AI CLI from Azure AI Studio in a browser](develop-in-vscode.md)
ai-studio Create Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-compute.md
To create a compute instance in Azure AI Studio:
:::image type="content" source="../media/compute/compute-scheduling.png" alt-text="Screenshot of the option to enable idle shutdown and create a schedule." lightbox="../media/compute/compute-scheduling.png"::: > [!IMPORTANT]
- > The compute can't be idle if you have [prompt flow runtime](./create-manage-runtime.md) in **Running** status on the compute. You need to delete any active runtime before the compute instance can be eligible for idle shutdown. You also can't have any active [VS Code (Web)](./vscode-web.md) sessions hosted on the compute instance.
+ > The compute can't be idle if you have [prompt flow runtime](./create-manage-runtime.md) in **Running** status on the compute. You need to delete any active runtime before the compute instance can be eligible for idle shutdown. You also can't have any active [VS Code (Web)](./develop-in-vscode.md) sessions hosted on the compute instance.
1. You can update the schedule days and times to meet your needs. You can also add additional schedules. For example, you can create a schedule to start at 9 AM and stop at 6 PM from Monday-Thursday, and a second schedule to start at 9 AM and stop at 4 PM for Friday. You can create a total of four schedules per compute instance.
Note that disabling SSH prevents SSH access from the public internet. But when a
To avoid getting charged for a compute instance that is switched on but inactive, you can configure when to shut down your compute instance due to inactivity. > [!IMPORTANT]
-> The compute can't be idle if you have [prompt flow runtime](./create-manage-runtime.md) in **Running** status on the compute. You need to delete any active runtime before the compute instance can be eligible for idle shutdown. You also can't have any active [VS Code (Web)](./vscode-web.md) sessions hosted on the compute instance.
+> The compute can't be idle if you have [prompt flow runtime](./create-manage-runtime.md) in **Running** status on the compute. You need to delete any active runtime before the compute instance can be eligible for idle shutdown. You also can't have any active [VS Code (Web)](./develop-in-vscode.md) sessions hosted on the compute instance.
The setting can be configured during compute instance creation or for existing compute instances.
ai-studio Develop In Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop-in-vscode.md
+
+ Title: Work with Azure AI projects in VS Code
+
+description: This article provides instructions on how to get started with Azure AI projects in VS Code.
+++
+ - ignite-2023
+ Last updated : 1/10/2024+++++
+# Get started with Azure AI projects in VS Code
++
+Azure AI Studio supports developing in VS Code - Web and Desktop. In each scenario, your VS Code instance is remotely connected to a prebuilt custom container running on a virtual machine, also known as a compute instance. To work in your local environment instead, or to learn more, follow the steps in [Install the Azure AI SDK](sdk-install.md) and [Install the Azure AI CLI](cli-install.md).
+
+## Launch VS Code from Azure AI Studio
+
+1. Go to [Azure AI Studio](https://ai.azure.com).
+
+1. Go to **Build** > **Projects** and select or create the project you want to work with.
+
+1. At the top-right of any page in the **Build** tab, select **Open project in VS Code (Web)** if you want to work in the browser. If you want to work in your local VS Code instance instead, select the dropdown arrow and choose **Open project in VS Code (Desktop)**.
+
+1. Within the dialog that opened following the previous step, select or create the compute instance that you want to use.
+
+1. Once the compute is running, select **Set up**, which configures the container on your compute for you. The compute setup might take a few minutes to complete. After you set up the compute the first time, you can launch it directly in later sessions. You might need to authenticate your compute when prompted.
+
+ > [!WARNING]
+ > Even if you [enable and configure idle shutdown on your compute instance](./create-manage-compute.md#configure-idle-shutdown), any computes that host this custom container for VS Code won't idle shutdown. This is to ensure the compute doesn't shut down unexpectedly while you're working within a container.
+
+1. Once the container is ready, select **Launch**. This launches your previously selected VS Code experience, remotely connected to a custom development environment running on your compute instance.
+
+ If you selected VS Code (Web), a new browser tab connected to *vscode.dev* opens. If you selected VS Code (Desktop), a new local instance of VS Code opens on your local machine.
+
+## The custom container folder structure
+
+Our prebuilt development environments are based on a docker container that has the Azure AI SDK generative packages, the Azure AI CLI, the Prompt flow SDK, and other tools. The environment is configured to run VS Code remotely inside of the container. The container is defined in a similar way to [this Dockerfile](https://github.com/Azure/aistudio-copilot-sample/blob/main/.devcontainer/Dockerfile), and is based on [Microsoft's Python 3.10 Development Container Image](https://mcr.microsoft.com/product/devcontainers/python/about).
+
+Your file explorer is opened to the specific project directory you launched from in AI Studio.
+
+The container is configured with the Azure AI folder hierarchy (`afh` directory), which is designed to orient you within your current development context, and help you work with your code, data and shared files most efficiently. This `afh` directory houses your Azure AI projects, and each project has a dedicated project directory that includes `code`, `data` and `shared` folders.
+
+This table summarizes the folder structure:
+
+| Folder | Description |
+| | |
+| `code` | Use for working with git repositories or local code files.<br/><br/>The `code` folder is a storage location directly on your compute instance and performant for large repositories. It's an ideal location to clone your git repositories, or otherwise bring in or create your code files. |
+| `data` | Use for storing local data files. We recommend you use the `data` folder to store and reference local data in a consistent way.|
+| `shared` | Use for working with a project's shared files and assets such as prompt flows.<br/><br/>For example, `shared\Users\{user-name}\promptflow` is where you find the project's prompt flows. |
+
+> [!IMPORTANT]
+> It's recommended that you work within this project directory. Files, folders, and repos you include in your project directory persist on your host machine (your compute instance). Files stored in the code and data folders will persist even when the compute instance is stopped or restarted, but will be lost if the compute is deleted. However, the shared files are saved in your Azure AI resource's storage account, and therefore aren't lost if the compute instance is deleted.
+
+### The Azure AI SDK
+
+To get started with the AI SDK, we recommend the [aistudio-copilot-sample repo](https://github.com/azure/aistudio-copilot-sample) as a comprehensive starter repository that includes a few different copilot implementations. For the full list of samples, check out the [Azure AI Samples repository](https://github.com/azure-samples/azureai-samples).
+
+1. Open a terminal
+1. Clone a sample repo into your project's `code` folder. You might be prompted to authenticate to GitHub
+
+ ```bash
+ cd code
+ git clone https://github.com/azure/aistudio-copilot-sample
+ ```
+
+1. If you have existing notebooks or code files, you can run `import azure.ai.generative` and use IntelliSense to browse the capabilities included in that package
+
+### The Azure AI CLI
+
+If you prefer to work interactively, the Azure AI CLI has everything you need to build generative AI solutions.
+
+1. Open a terminal to get started
+1. `ai help` guides you through CLI capabilities
+1. `ai init` configures your resources in your development environment
+
+### Working with prompt flows
+
+You can use the Azure AI SDK and Azure AI CLI to create, reference and work with prompt flows.
+
+Prompt flows already created in the Azure AI Studio can be found at `shared\Users\{user-name}\promptflow`. You can also create new flows in your `code` or `shared` folder using the Azure AI CLI and SDK.
+
+- To reference an existing flow using the AI CLI, use `ai flow invoke`.
+- To create a new flow using the AI CLI, use `ai flow new`.
+
+Prompt flow will automatically use the Azure AI connections your project has access to when you use the AI CLI or SDK.
+
+You can also work with the prompt flow extension in VS Code, which is preinstalled in this environment. Within this extension, you can set the connection provider to your Azure AI project. See [consume connections from Azure AI](https://microsoft.github.io/promptflow/cloud/azureai/consume-connections-from-azure-ai.html).
+
+For prompt flow specific capabilities that aren't present in the AI SDK and CLI, you can work directly with the prompt flow CLI or SDK. For more information, see [prompt flow capabilities](https://microsoft.github.io/promptflow/reference/index.html).
+
+## Remarks
+
+If you plan to work across multiple code and data directories, or multiple repositories, you can use the split root file explorer feature in VS Code. To try this feature, follow these steps:
+
+1. Enter *Ctrl+Shift+p* to open the command palette. Search for and select **Workspaces: Add Folder to Workspace**.
+1. Select the repository folder that you want to load. You should see a new section in your file explorer for the folder you opened. If it was a repository, you can now work with source control in VS Code.
+1. If you want to save this configuration for future development sessions, again enter *Ctrl+Shift+p* and select **Workspaces: Save Workspace As**. This action saves a config file to your current folder.
+
+For cross-language compatibility and seamless integration of Azure AI capabilities, explore the Azure AI Hub at [https://aka.ms/azai](https://aka.ms/azai). Discover app templates and SDK samples in your preferred programming language.
+
+## Next steps
+
+- [Get started with the Azure AI CLI](cli-install.md)
+- [Quickstart: Generate product name ideas in the Azure AI Studio playground](../quickstarts/playground-completions.md)
ai-studio Sdk Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/sdk-install.md
You can install the Azure AI SDK locally as described previously, or run it via
### Option 1: Using VS Code (web) in Azure AI Studio
-VS Code (web) in Azure AI Studio creates and runs the development container on a compute instance. To get started with this approach, follow the instructions in [How to work with Azure AI Studio projects in VS Code (Web)](vscode-web.md).
+VS Code (web) in Azure AI Studio creates and runs the development container on a compute instance. To get started with this approach, follow the instructions in [Work with Azure AI projects in VS Code](develop-in-vscode.md).
Our prebuilt development environments are based on a docker container that has the Azure AI Generative SDK, the Azure AI CLI, the prompt flow SDK, and other tools. It's configured to run VS Code remotely inside of the container. The docker container is defined in [this Dockerfile](https://github.com/Azure/aistudio-copilot-sample/blob/main/.devcontainer/Dockerfile), and is based on [Microsoft's Python 3.10 Development Container Image](https://mcr.microsoft.com/en-us/product/devcontainers/python/about).
The Azure AI code samples in GitHub Codespaces help you quickly get started with
## Next steps - [Get started building a sample copilot application](https://github.com/azure/aistudio-copilot-sample)-- [Try the Azure AI CLI from Azure AI Studio in a browser](vscode-web.md)
+- [Try the Azure AI CLI from Azure AI Studio in a browser](develop-in-vscode.md)
- [Azure SDK for Python reference documentation](/python/api/overview/azure/ai)
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kub
description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster. Previously updated : 11/30/2023 Last updated : 01/11/2024 # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
You can request a larger volume for a PVC. Edit the PVC object, and specify a la
> [!NOTE] > A new PV is never created to satisfy the claim. Instead, an existing volume is resized.
+>
+> Shrinking persistent volumes is currently not supported.
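As a hedged sketch of the expansion step described above, you can patch the claim with a larger request; the PVC name `my-azurefile` is an assumption, so substitute the claim you created earlier.

```console
# Request a larger size on the existing claim; the CSI driver expands the underlying file share in place.
kubectl patch pvc my-azurefile --type merge -p '{"spec": {"resources": {"requests": {"storage": "200Gi"}}}}'
```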
In AKS, the built-in `azurefile-csi` storage class already supports expansion, so use the [PVC created earlier with this storage class](#dynamically-create-azure-files-pvs-by-using-the-built-in-storage-classes). The PVC requested a 100 GiB file share. We can confirm that by running:
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
description: Learn about security in Azure Kubernetes Service (AKS), including m
Previously updated : 10/31/2023 Last updated : 01/11/2024
Because of compliance or regulatory requirements, certain workloads may require
* [Confidential Containers][confidential-containers] (preview), also based on Kata Confidential Containers, encrypts container memory and prevents data in memory during computation from being in clear text, readable format, and tampering. It helps isolate your containers from other container groups/pods, as well as VM node OS kernel. Confidential Containers (preview) uses hardware based memory encryption (SEV-SNP). * [Pod Sandboxing][pod-sandboxing] (preview) provides an isolation boundary between the container application and the shared kernel and compute resources (CPU, memory, and network) of the container host.
-## Cluster upgrades
-
-Azure provides upgrade orchestration tools to upgrade of an AKS cluster and components, maintain security and compliance, and access the latest features. This upgrade orchestration includes both the Kubernetes master and agent components.
-
-To start the upgrade process, specify one of the [listed available Kubernetes versions](supported-kubernetes-versions.md). Azure then safely cordons and drains each AKS node and upgrades.
-
-### Cordon and drain
-
-During the upgrade process, AKS nodes are individually cordoned from the cluster to prevent new pods from being scheduled on them. The nodes are then drained and upgraded as follows:
-
-1. A new node is deployed into the node pool.
- * This node runs the latest OS image and patches.
-1. One of the existing nodes is identified for upgrade.
-1. Pods on the identified node are gracefully terminated and scheduled on the other nodes in the node pool.
-1. The emptied node is deleted from the AKS cluster.
-1. Steps 1-4 are repeated until all nodes are successfully replaced as part of the upgrade process.
-
-For more information, see [Upgrade an AKS cluster][aks-upgrade-cluster].
- ## Network security For connectivity and security with on-premises networks, you can deploy your AKS cluster into existing Azure virtual network subnets. These virtual networks connect back to your on-premises network using Azure Site-to-Site VPN or Express Route. Define Kubernetes ingress controllers with private, internal IP addresses to limit services access to the internal network connection.
aks Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cost-analysis.md
az aks create --resource-group <resource_group> --name <name> --location <locati
You can disable cost analysis at any time using `az aks update`. ```azurecli-interactive
-az aks update --name myAKSCluster --resource-group myResourceGroup ΓÇô-disable-cost-analysis
+az aks update --name myAKSCluster --resource-group myResourceGroup --disable-cost-analysis
``` > [!NOTE]
aks Quick Kubernetes Deploy Bicep Extensibility Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md
It takes a few minutes to create the AKS cluster. Wait for the cluster successfu
## Delete the cluster
-If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up unnecessary resources to avoid Azure charges.
### [Azure CLI](#tab/azure-cli)
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
When the application runs, a Kubernetes service exposes the application front en
## Delete the cluster
-If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up unnecessary resources to avoid Azure charges.
### [Azure CLI](#tab/azure-cli)
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure CLI' description: Learn how to quickly create a Kubernetes cluster, deploy an application, and monitor performance in Azure Kubernetes Service (AKS) using Azure CLI. Previously updated : 12/27/2023 Last updated : 01/10/2024 #Customer intent: As a developer or cluster operator, I want to create an AKS cluster and deploy an application so I can see how to run and monitor applications using the managed Kubernetes service in Azure.
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
## Before you begin -- This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].-- You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md).-- This article requires Azure CLI version 2.0.64 or later. If you're using Azure Cloud Shell, the latest version is already installed.-- Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].-- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [`az account`][az-account] command.-- Verify you have the *Microsoft.OperationsManagement* and *Microsoft.OperationalInsights* providers registered on your subscription. These Azure resource providers are required to support [Container insights][azure-monitor-containers]. Check the registration status using the following commands:
+This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
- ```azurecli
- az provider show -n Microsoft.OperationsManagement -o table
- az provider show -n Microsoft.OperationalInsights -o table
- ```
+- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
- If they're not registered, register them using the following commands:
- ```azurecli
- az provider register --namespace Microsoft.OperationsManagement
- az provider register --namespace Microsoft.OperationalInsights
- ```
-
- If you plan to run these commands locally instead of in Azure Cloud Shell, make sure you run them with administrative privileges.
-
-> [!NOTE]
-> The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].
+- This article requires version 2.0.64 or later of the Azure CLI. If you are using Azure Cloud Shell, then the latest version is already installed.
+- Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account](/cli/azure/account) command.
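For example, a minimal sketch of selecting the billing subscription (the subscription ID shown is a placeholder):

```azurecli
az account set --subscription "00000000-0000-0000-0000-000000000000"
```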
## Create a resource group
Create a resource group using the [`az group create`][az-group-create] command.
## Create an AKS cluster
-The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity.
-
-Create an AKS cluster using the [`az aks create`][az-aks-create] command with the `--enable-addons monitoring` and `--enable-msi-auth-for-monitoring` parameters to enable [Azure Monitor Container insights][azure-monitor-containers] with managed identity authentication (preview).
+To create an AKS cluster, use the [`az aks create`][az-aks-create] command. The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity.
```azurecli az aks create \
Create an AKS cluster using the [`az aks create`][az-aks-create] command with th
--name myAKSCluster \ --enable-managed-identity \ --node-count 1 \
- --enable-addons monitoring
- --enable-msi-auth-for-monitoring \
--generate-ssh-keys ```
Create an AKS cluster using the [`az aks create`][az-aks-create] command with th
## Connect to the cluster
-To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
-
-1. Install `kubectl` locally using the [`az aks install-cli`][az-aks-install-cli] command.
-
- ```azurecli
- az aks install-cli
- ```
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. To install `kubectl` locally, use the [`az aks install-cli`][az-aks-install-cli] command.
1. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
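    A minimal sketch of this step and the follow-up node check, using the resource group and cluster names from this quickstart:

    ```azurecli
    # Merge the cluster credentials into your local kubeconfig.
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

    # Verify the connection by listing the cluster nodes.
    kubectl get nodes
    ```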
To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl
The following sample output shows the single node created in the previous steps. Make sure the node status is *Ready*. ```output
- NAME STATUS ROLES AGE VERSION
- aks-nodepool1-31718369-0 Ready agent 6m44s v1.12.8
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-11853318-vmss000000 Ready agent 2m26s v1.27.7
``` ## Deploy the application
To deploy the application, you use a manifest file to create all the objects req
For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
-2. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
+ If you create and save the YAML file locally, then you can upload the manifest file to your default directory in Cloud Shell by selecting the **Upload/Download files** button and selecting the file from your local file system.
+
+1. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
```azurecli kubectl apply -f aks-store-quickstart.yaml
To deploy the application, you use a manifest file to create all the objects req
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
-1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make all pods are `Running` before proceeding.
+1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding.
1. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
When the application runs, a Kubernetes service exposes the application front en
1. Open a web browser to the external IP address of your service to see the Azure Store app in action.
- :::image type="content" source="media/quick-kubernetes-deploy-portal/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-portal/aks-store-application.png":::
+ :::image type="content" source="media/quick-kubernetes-deploy-cli/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-cli/aks-store-application.png":::
## Delete the cluster
-If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up unnecessary resources to avoid Azure charges.
- Remove the resource group, container service, and all related resources using the [`az group delete`][az-group-delete] command.
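A minimal sketch of that cleanup command, assuming the resource group name used earlier in this quickstart:

```azurecli
az group delete --name myResourceGroup --yes --no-wait
```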
To learn more about AKS and walk through a complete code-to-deployment example,
[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli [az-group-create]: /cli/azure/group#az-group-create [az-group-delete]: /cli/azure/group#az-group-delete
-[azure-monitor-containers]: ../../azure-monitor/containers/container-insights-overview.md
[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE [intro-azure-linux]: ../../azure-linux/intro-azure-linux.md
aks Quick Kubernetes Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md
When the application runs, a Kubernetes service exposes the application front en
## Delete the cluster
-If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up unnecessary resources to avoid Azure charges.
1. In the Azure portal, navigate to your AKS cluster resource group. 1. Select **Delete resource group**.
aks Quick Kubernetes Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md
Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure PowerShell' description: Learn how to quickly create a Kubernetes cluster and deploy an application in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 12/27/2023 Last updated : 01/10/2024 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure.
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you:
-* Deploy an AKS cluster using Azure PowerShell.
-* Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.
-
+- Deploy an AKS cluster using Azure PowerShell.
+- Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.
## Before you begin
-* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
-* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-powershell.md).
-
+This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md).
-* If you're running PowerShell locally, install the `Az PowerShell` module and connect to your Azure account using the [`Connect-AzAccount`](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell].
-* The identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
-* If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the
-[`Set-AzContext`](/powershell/module/az.accounts/set-azcontext) cmdlet.
+- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]
+- For ease of use, try the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart).
- ```azurepowershell-interactive
- Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
- ```
+ If you want to use PowerShell locally, then install the [Az PowerShell](/powershell/azure/new-azureps-module-az) module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. Make sure that you run the commands with administrative privileges. For more information, see [Install Azure PowerShell][install-azure-powershell].
-> [!NOTE]
-> If you plan to run the commands locally instead of in Azure Cloud Shell, make sure you run the commands with administrative privileges.
+- Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).
+- If you have more than one Azure subscription, set the subscription that you wish to use for the quickstart by calling the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
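    For example, with a placeholder subscription ID:

    ```azurepowershell
    Set-AzContext -SubscriptionId 00000000-0000-0000-0000-000000000000
    ```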
## Create a resource group
An [Azure resource group][azure-resource-group] is a logical group in which Azur
The following example creates a resource group named *myResourceGroup* in the *eastus* location.
-* Create a resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.
+- Create a resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.
- ```azurepowershell-interactive
+ ```azurepowershell
New-AzResourceGroup -Name myResourceGroup -Location eastus ```
The following example creates a resource group named *myResourceGroup* in the *e
## Create AKS cluster
-The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity.
+To create an AKS cluster, use the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity.
-* Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet with the `-WorkspaceResourceId` parameter to enable [Azure Monitor container insights][azure-monitor-containers].
+```azurepowershell
+New-AzAksCluster -ResourceGroupName myResourceGroup `
+ -Name myAKSCluster `
+ -NodeCount 1 `
+ -EnableManagedIdentity `
+ -GenerateSshKey
+```
- ```azurepowershell-interactive
- New-AzAksCluster -ResourceGroupName myResourceGroup -Name myAKSCluster -NodeCount 1 -GenerateSshKey -WorkspaceResourceId <WORKSPACE_RESOURCE_ID>
- ```
-
- After a few minutes, the command completes and returns information about the cluster.
+After a few minutes, the command completes and returns information about the cluster.
- > [!NOTE]
- > When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](../faq.md#why-are-two-resource-groups-created-with-aks)
+> [!NOTE]
+> When you create an AKS cluster, a second resource group is automatically created to store the AKS resources. For more information, see [Why are two resource groups created with AKS?](../faq.md#why-are-two-resource-groups-created-with-aks)
## Connect to the cluster
-To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell.
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. To install `kubectl` locally, use the `Install-AzAksCliTool` cmdlet.
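If you're working outside Cloud Shell, the local install mentioned above is a single cmdlet call:

```azurepowershell
Install-AzAksCliTool
```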
-1. Install `kubectl` locally using the `Install-AzAksCliTool` cmdlet.
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
- ```azurepowershell-interactive
- Install-AzAksCliTool
- ```
-
-2. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
-
- ```azurepowershell-interactive
+ ```azurepowershell
Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster ```
-3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes.
+1. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes.
- ```azurepowershell-interactive
+ ```azurepowershell
kubectl get nodes ``` The following example output shows the single node created in the previous steps. Make sure the node status is *Ready*. ```output
- NAME STATUS ROLES AGE VERSION
- aks-nodepool1-31718369-0 Ready agent 6m44s v1.15.10
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-11853318-vmss000000 Ready agent 2m26s v1.27.7
``` ## Deploy the application
To deploy the application, you use a manifest file to create all the objects req
:::image type="content" source="media/quick-kubernetes-deploy-powershell/aks-store-architecture.png" alt-text="Screenshot of Azure Store sample architecture." lightbox="media/quick-kubernetes-deploy-powershell/aks-store-architecture.png":::
-* **Store front**: Web application for customers to view products and place orders.
-* **Product service**: Shows product information.
-* **Order service**: Places orders.
-* **Rabbit MQ**: Message queue for an order queue.
+- **Store front**: Web application for customers to view products and place orders.
+- **Product service**: Shows product information.
+- **Order service**: Places orders.
+- **Rabbit MQ**: Message queue for an order queue.
> [!NOTE] > We don't recommend running stateful containers, such as Rabbit MQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure CosmosDB or Azure Service Bus.
To deploy the application, you use a manifest file to create all the objects req
For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
+ If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system.
+ 1. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest. ```console
When the application runs, a Kubernetes service exposes the application front en
1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding.
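    For example, a quick status check:

    ```azurepowershell
    kubectl get pods
    ```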
-2. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
+1. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
```azurecli-interactive kubectl get service store-front --watch
When the application runs, a Kubernetes service exposes the application front en
store-front LoadBalancer 10.0.100.10 <pending> 80:30025/TCP 4h4m ```
-3. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
+1. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
The following example output shows a valid public IP address assigned to the service:
When the application runs, a Kubernetes service exposes the application front en
store-front LoadBalancer 10.0.100.10 20.62.159.19 80:30025/TCP 4h5m ```
-4. Open a web browser to the external IP address of your service to see the Azure Store app in action.
+1. Open a web browser to the external IP address of your service to see the Azure Store app in action.
:::image type="content" source="media/quick-kubernetes-deploy-powershell/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-powershell/aks-store-application.png"::: ## Delete the cluster
-If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up unnecessary resources to avoid Azure charges. Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet.
-* Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet
+```azurepowershell
+Remove-AzResourceGroup -Name myResourceGroup
+```
- ```azurepowershell-interactive
- Remove-AzResourceGroup -Name myResourceGroup
- ```
-
- > [!NOTE]
- > The AKS cluster was created with system-assigned managed identity (default identity option used in this quickstart), the identity is managed by the platform and doesn't require removal.
+> [!NOTE]
+> Because the AKS cluster was created with a system-assigned managed identity (the default identity option used in this quickstart), the identity is managed by the platform and doesn't require removal.
## Next steps
To learn more about AKS and walk through a complete code-to-deployment example,
> [AKS tutorial][aks-tutorial] <!-- LINKS - external -->
-[azure-monitor-containers]: ../../azure-monitor/containers/container-insights-overview.md
[kubectl]: https://kubernetes.io/docs/reference/kubectl/ [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply <!-- LINKS - internal -->
-[kubernetes-concepts]: ../concepts-clusters-workloads.md
[install-azure-powershell]: /powershell/azure/install-az-ps [new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup [new-azakscluster]: /powershell/module/az.aks/new-azakscluster
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
To deploy the application, you use a manifest file to create all the objects req
## Delete the cluster
-If you don't plan on going through the following tutorials, clean up unnecessary resources to avoid Azure charges.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up unnecessary resources to avoid Azure charges.
### [Azure CLI](#tab/azure-cli)
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
The ASP.NET sample application is provided as part of the [.NET Framework Sample
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. Occasionally, the service can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning.
+1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+ 1. Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument. ```console
When the application runs, a Kubernetes service exposes the application front en
:::image type="content" source="media/quick-windows-container-deploy-cli/asp-net-sample-app.png" alt-text="Screenshot of browsing to ASP.NET sample application.":::
- > [!NOTE]
- > If you receive a connection timeout when trying to load the page, you should verify the sample app is ready using the `kubectl get pods --watch` command. Sometimes, the Windows container isn't started by the time your external IP address is available.
- ## Delete resources
-If you don't plan on going through the following tutorials, you should delete your cluster to avoid incurring Azure charges.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], you should delete your cluster to avoid incurring Azure charges.
Delete your resource group, container service, and all related resources using the [az group delete](/cli/azure/group#az_group_delete) command.
aks Quick Windows Container Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-portal.md
You use [kubectl][kubectl], the Kubernetes command-line client, to manage your K
The following sample output shows all the nodes in the cluster. Make sure the status of all nodes is *Ready*: ```output
- NAME STATUS ROLES AGE VERSION
- aks-nodepool1-12345678-vmssfedcba Ready agent 13m v1.16.7
- aksnpwin987654 Ready agent 108s v1.16.7
+ NAME STATUS ROLES AGE VERSION
+ aks-agentpool-41946322-vmss000001 Ready agent 28h v1.27.7
+ aks-agentpool-41946322-vmss000002 Ready agent 28h v1.27.7
+ aks-npwin-41946322-vmss000000 Ready agent 28h v1.27.7
+ aks-userpool-41946322-vmss000001 Ready agent 28h v1.27.7
+ aks-userpool-41946322-vmss000002 Ready agent 28h v1.27.7
``` ### [Azure PowerShell](#tab/azure-powershell)
You use [kubectl][kubectl], the Kubernetes command-line client, to manage your K
The following sample output shows all the nodes in the cluster. Make sure the status of all nodes is *Ready*: ```output
- NAME STATUS ROLES AGE VERSION
- aks-agentpool-41946322-vmss000001 Ready agent 7m51s v1.27.7
- aks-agentpool-41946322-vmss000002 Ready agent 7m5s v1.27.7
- aks-npwin-41946322-vmss000000 Ready agent 7m43s v1.27.7
- aks-userpool-41946322-vmss000001 Ready agent 7m47s v1.27.7
- aks-userpool-41946322-vmss000002 Ready agent 6m57s v1.27.7
+ NAME STATUS ROLES AGE VERSION
+ aks-agentpool-41946322-vmss000001 Ready agent 28h v1.27.7
+ aks-agentpool-41946322-vmss000002 Ready agent 28h v1.27.7
+ aks-npwin-41946322-vmss000000 Ready agent 28h v1.27.7
+ aks-userpool-41946322-vmss000001 Ready agent 28h v1.27.7
+ aks-userpool-41946322-vmss000002 Ready agent 28h v1.27.7
```
The ASP.NET sample application is provided as part of the [.NET Framework Sample
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. Occasionally, the service can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning.
+1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+ 1. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument. ```console
When the application runs, a Kubernetes service exposes the application front en
:::image type="content" source="media/quick-windows-container-deploy-portal/asp-net-sample-app.png" alt-text="Screenshot of browsing to ASP.NET sample application." lightbox="media/quick-windows-container-deploy-portal/asp-net-sample-app.png":::
- > [!NOTE]
- > If you receive a connection timeout when trying to load the page, you should verify the sample app is ready using the `kubectl get pods --watch` command. Sometimes, the Windows container isn't started by the time your external IP address is available.
- ## Delete resources
-If you don't plan on going through the following tutorials, you should delete your cluster to avoid incurring Azure charges.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], you should delete your cluster to avoid incurring Azure charges.
1. In the Azure portal, navigate to your resource group. 1. Select **Delete resource group**.
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
This article assumes a basic understanding of Kubernetes concepts. For more info
- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - For ease of use, try the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Quickstart for Azure Cloud Shell](/azure/cloud-shell/quickstart).
- If you want to use PowerShell locally, then install the [Az PowerShell](/powershell/azure/new-azureps-module-az) module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information, see [Install Azure PowerShell][install-azure-powershell].
+ If you want to use PowerShell locally, then install the [Az PowerShell](/powershell/azure/new-azureps-module-az) module and connect to your Azure account using the [Connect-AzAccount](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. Make sure that you run the commands with administrative privileges. For more information, see [Install Azure PowerShell][install-azure-powershell].
- Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). - If you have more than one Azure subscription, set the subscription that you wish to use for the quickstart by calling the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet.
The ASP.NET sample application is provided as part of the [.NET Framework Sample
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. Occasionally, the service can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning.
+1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+ 1. Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument. ```azurepowershell
When the application runs, a Kubernetes service exposes the application front en
sample LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m ```
-2. See the sample app in action by opening a web browser to the external IP address of your service.
+1. See the sample app in action by opening a web browser to the external IP address of your service.
:::image type="content" source="media/quick-windows-container-deploy-powershell/asp-net-sample-app.png" alt-text="Screenshot of browsing to ASP.NET sample application.":::
- > [!NOTE]
- > If you receive a connection timeout when trying to load the page, you should verify the sample app is ready using the `kubectl get pods --watch` command. Sometimes, the Windows container isn't started by the time your external IP address is available.
- ## Delete resources
-If you don't plan on going through the following tutorials, then delete your cluster to avoid incurring Azure charges.
-
-Delete your resource group, container service, and all related resources using the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to remove the resource group, container service, and all related resources.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], then delete your cluster to avoid incurring Azure charges. Call the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to remove the resource group, container service, and all related resources.
```azurepowershell Remove-AzResourceGroup -Name myResourceGroup
To learn more about AKS, and to walk through a complete code-to-deployment examp
<!-- LINKS - internal --> [install-azure-powershell]: /powershell/azure/install-az-ps [new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup
-[azure-cni-about]: ../concepts-network.md#azure-cni-advanced-networking
[new-azakscluster]: /powershell/module/az.aks/new-azakscluster [import-azakscredential]: /powershell/module/az.aks/import-azakscredential [kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
Previously updated : 05/30/2023 Last updated : 01/10/2024
This article shows you how to create an Azure Kubernetes Service (AKS) cluster w
## Create an AKS cluster with a managed NAT gateway * Create an AKS cluster with a new managed NAT gateway using the [`az aks create`][az-aks-create] command with the `--outbound-type managedNATGateway`, `--nat-gateway-managed-outbound-ip-count`, and `--nat-gateway-idle-timeout` parameters. If you want the NAT gateway to operate out of a specific availability zone, specify the zone using `--zones`.
-* If no zone is specified when creating a managed NAT gateway, than NAT gateway is deployed to "no zone" by default. No zone NAT gateway resources are deployed to a single availability zone for you by Azure. For more information on non-zonal deployment model, see [non-zonal NAT gateway](/azure/nat-gateway/nat-availability-zones#non-zonal).
+* If no zone is specified when creating a managed NAT gateway, then the NAT gateway is deployed to "no zone" by default. When the NAT gateway is placed in **no zone**, Azure places the resource in a zone for you. For more information on the non-zonal deployment model, see [non-zonal NAT gateway](/azure/nat-gateway/nat-availability-zones#non-zonal).
* A managed NAT gateway resource can't be used across multiple availability zones.
- ```azurecli-interactive
+ ```azurecli-interactive
az aks create \ --resource-group myResourceGroup \ --name myNatCluster \
This article shows you how to create an Azure Kubernetes Service (AKS) cluster w
--outbound-type managedNATGateway \ --nat-gateway-managed-outbound-ip-count 2 \ --nat-gateway-idle-timeout 4
- ```
-
-### Update the number of outbound IP addresses
* Update the outbound IP address or idle timeout using the [`az aks update`][az-aks-update] command with the `--nat-gateway-managed-outbound-ip-count` or `--nat-gateway-idle-timeout` parameter.
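    For example, a sketch that scales the cluster created earlier to five outbound IPs (the resource names are assumed from the preceding create command):

    ```azurecli-interactive
    az aks update \
        --resource-group myResourceGroup \
        --name myNatCluster \
        --nat-gateway-managed-outbound-ip-count 5 \
        --nat-gateway-idle-timeout 4
    ```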
This article shows you how to create an Azure Kubernetes Service (AKS) cluster w
This configuration requires bring-your-own networking (via [Kubenet][byo-vnet-kubenet] or [Azure CNI][byo-vnet-azure-cni]) and that the NAT gateway is preconfigured on the subnet. The following commands create the required resources for this scenario.
-> [!IMPORTANT]
-> Zonal configuration for your NAT gateway resource can be done with managed or user-assigned NAT gateway resources.
-> If no value for the outbound IP address is specified, the default value is one.
+ 1. Create a resource group using the [`az group create`][az-group-create] command.
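    For example (the group name and region are placeholders; adjust as needed):

    ```azurecli-interactive
    az group create --name myResourceGroup --location eastus
    ```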
This configuration requires bring-your-own networking (via [Kubenet][byo-vnet-ku
--public-ip-addresses myNatGatewayPip ``` > [!Important]
- > A single NAT gateway resource cannot be used across multiple availability zones. To ensure zone-resiliency, it is recommended to deploy a NAT gateway resource to each availability zone and assign to subnets containing AKS clusters in each zone. For more information on this deployment model, see [NAT gateway for each zone](/azure/nat-gateway/nat-availability-zones#zonal-nat-gateway-resource-for-each-zone-in-a-region-to-create-zone-resiliency).
+ > A single NAT gateway resource can't be used across multiple availability zones. To ensure zone-resiliency, it is recommended to deploy a NAT gateway resource to each availability zone and assign to subnets containing AKS clusters in each zone. For more information on this deployment model, see [NAT gateway for each zone](/azure/nat-gateway/nat-availability-zones#zonal-nat-gateway-resource-for-each-zone-in-a-region-to-create-zone-resiliency).
> If no zone is configured for NAT gateway, the default zone placement is "no zone", in which Azure places NAT gateway into a zone for you. 5. Create a virtual network using the [`az network vnet create`][az-network-vnet-create] command.
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
The following example script copies a custom Tomcat to a local folder, performs
#### Finalize configuration
-Finally, you'll place the driver JARs in the Tomcat classpath and restart your App Service. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/tomcat/lib* directory. (Create this directory if it doesn't already exist.) To upload these files to your App Service instance, perform the following steps:
+Finally, you'll place the driver JARs in the Tomcat classpath and restart your App Service. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/site/lib* directory. In the [Cloud Shell](https://shell.azure.com), run `az webapp deploy --type=lib` for each driver JAR:
-1. In the [Cloud Shell](https://shell.azure.com), install the webapp extension:
-
- ```azurecli-interactive
- az extension add -ΓÇôname webapp
- ```
-
-2. Run the following CLI command to create an SSH tunnel from your local system to App Service:
-
- ```azurecli-interactive
- az webapp remote-connection create --resource-group <resource-group-name> --name <app-name> --port <port-on-local-machine>
- ```
-
-3. Connect to the local tunneling port with your SFTP client and upload the files to the */home/tomcat/lib* folder.
-
-Alternatively, you can use an FTP client to upload the JDBC driver. Follow these [instructions for getting your FTP credentials](deploy-configure-credentials.md).
+```azurecli-interactive
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path <jar-name>.jar --type=lib --target-path <jar-name>.jar
+```
An example xsl file is provided below. The example xsl file adds a new connector
Finally, place the driver JARs in the Tomcat classpath and restart your App Service.
-1. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/tomcat/lib* directory. (Create this directory if it doesn't already exist.) To upload these files to your App Service instance, perform the following steps:
-
- 1. In the [Cloud Shell](https://shell.azure.com), install the webapp extension:
-
- ```azurecli-interactive
- az extension add -ΓÇôname webapp
- ```
-
- 2. Run the following CLI command to create an SSH tunnel from your local system to App Service:
+1. Ensure that the JDBC driver files are available to the Tomcat classloader by placing them in the */home/site/lib* directory. In the [Cloud Shell](https://shell.azure.com), run `az webapp deploy --type=lib` for each driver JAR:
- ```azurecli-interactive
- az webapp remote-connection create --resource-group <resource-group-name> --name <app-name> --port <port-on-local-machine>
- ```
-
- 3. Connect to the local tunneling port with your SFTP client and upload the files to the */home/tomcat/lib* folder.
-
- Alternatively, you can use an FTP client to upload the JDBC driver. Follow these [instructions for getting your FTP credentials](deploy-configure-credentials.md).
+```azurecli-interactive
+az webapp deploy --resource-group <group-name> --name <app-name> --src-path <jar-name>.jar --type=lib --target-path <jar-name>.jar
+```
-2. If you created a server-level data source, restart the App Service Linux application. Tomcat will reset `CATALINA_BASE` to `/home/tomcat` and use the updated configuration.
+If you created a server-level data source, restart the App Service Linux application. Tomcat will reset `CATALINA_BASE` to `/home/tomcat` and use the updated configuration.
### JBoss EAP Data Sources
app-service Deploy Zip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-zip.md
The table below shows the available query parameters, their allowed values, and
| Key | Allowed values | Description | Required | Type | |-|-|-|-|-|
-| `type` | `war`\|`jar`\|`ear`\|`lib`\|`startup`\|`static`\|`zip` | The type of the artifact being deployed, this sets the default target path and informs the web app how the deployment should be handled. <br/> - `type=zip`: Deploy a ZIP package by unzipping the content to `/home/site/wwwroot`. `path` parameter is optional. <br/> - `type=war`: Deploy a WAR package. By default, the WAR package is deployed to `/home/site/wwwroot/app.war`. The target path can be specified with `path`. <br/> - `type=jar`: Deploy a JAR package to `/home/site/wwwroot/app.jar`. The `path` parameter is ignored <br/> - `type=ear`: Deploy an EAR package to `/home/site/wwwroot/app.ear`. The `path` parameter is ignored <br/> - `type=lib`: Deploy a JAR library file. By default, the file is deployed to `/home/site/libs`. The target path can be specified with `path`. <br/> - `type=static`: Deploy a static file (e.g. a script). By default, the file is deployed to `/home/site/wwwroot`. <br/> - `type=startup`: Deploy a script that App Service automatically uses as the startup script for your app. By default, the script is deployed to `D:\home\site\scripts\<name-of-source>` for Windows and `home/site/wwwroot/startup.sh` for Linux. The target path can be specified with `path`. | Yes | String |
+| `type` | `war`\|`jar`\|`ear`\|`lib`\|`startup`\|`static`\|`zip` | The type of the artifact being deployed, this sets the default target path and informs the web app how the deployment should be handled. <br/> - `type=zip`: Deploy a ZIP package by unzipping the content to `/home/site/wwwroot`. `target-path` parameter is optional. <br/> - `type=war`: Deploy a WAR package. By default, the WAR package is deployed to `/home/site/wwwroot/app.war`. The target path can be specified with `target-path`. <br/> - `type=jar`: Deploy a JAR package to `/home/site/wwwroot/app.jar`. The `target-path` parameter is ignored <br/> - `type=ear`: Deploy an EAR package to `/home/site/wwwroot/app.ear`. The `target-path` parameter is ignored <br/> - `type=lib`: Deploy a JAR library file. By default, the file is deployed to `/home/site/libs`. The target path can be specified with `target-path`. <br/> - `type=static`: Deploy a static file (e.g. a script). By default, the file is deployed to `/home/site/wwwroot`. <br/> - `type=startup`: Deploy a script that App Service automatically uses as the startup script for your app. By default, the script is deployed to `D:\home\site\scripts\<name-of-source>` for Windows and `home/site/wwwroot/startup.sh` for Linux. The target path can be specified with `target-path`. | Yes | String |
| `restart` | `true`\|`false` | By default, the API restarts the app following the deployment operation (`restart=true`). To deploy multiple artifacts, prevent restarts on all but the final deployment by setting `restart=false`. | No | Boolean | | `clean` | `true`\|`false` | Specifies whether to clean (delete) the target deployment before deploying the artifact there. | No | Boolean | | `ignorestack` | `true`\|`false` | The publish API uses the `WEBSITE_STACK` environment variable to choose safe defaults depending on your site's language stack. Setting this parameter to `false` disables any language-specific defaults. | No | Boolean |
-| `path` | `"<absolute-path>"` | The absolute path to deploy the artifact to. For example, `"/home/site/deployments/tools/driver.jar"`, `"/home/site/scripts/helper.sh"`. | No | String |
+| `target-path` | `"<absolute-path>"` | The absolute path to deploy the artifact to. For example, `"/home/site/deployments/tools/driver.jar"`, `"/home/site/scripts/helper.sh"`. | No | String |
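As an illustrative sketch only (the Kudu `/api/publish` endpoint and the credential placeholders are assumptions; the query parameters come from the table above), a single static file might be pushed like this:

```console
curl -X POST \
     -u '<username>:<password>' \
     --data-binary @"helper.sh" \
     "https://<app-name>.scm.azurewebsites.net/api/publish?type=static&target-path=/home/site/scripts/helper.sh&restart=false"
```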
## Next steps
app-service Overview Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-name-resolution.md
If you require fine-grained control over name resolution, App Service allows you
|Property name|Windows default value|Linux default value|Allowed values|Description| |-|-|-|-| |dnsRetryAttemptCount|1|5|1-5|Defines the number of attempts to resolve where one means no retries.|
-|dnsMaxCacheTimeout|30|0|0-60|Cache timeout defined in seconds. Setting cache to zero means you've disabled caching.|
+|dnsMaxCacheTimeout|30|0|0-60|DNS results will be cached according to the individual records TTL, but no longer than the defined max cache timeout. Setting cache to zero means you've disabled caching.|
|dnsRetryAttemptTimeout|3|1|1-30|Timeout before retrying or failing. Timeout also defines the time to wait for secondary server results if the primary doesn't respond.| >[!NOTE]
app-service Cli Continuous Deployment Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-vsts.md
Create the following variables containing your Azure DevOps information.
```azurecli gitrepo=<Replace with your Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) repo URL>
-token=<Replace with a Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) personal access token>
+token=<Replace with an Azure DevOps Services (formerly Visual Studio Team Services, or VSTS) personal access token>
``` Configure continuous deployment from Azure DevOps Services (formerly Visual Studio Team Services, or VSTS). The `--git-token` parameter is required only once per Azure account (Azure remembers token).
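The configuration step this excerpt refers to typically looks like the following sketch, where the app and resource group names are placeholders and `$gitrepo`/`$token` come from the variables above:

```azurecli
az webapp deployment source config --name <app-name> --resource-group <resource-group-name> \
    --repo-url $gitrepo --branch master --git-token $token
```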
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Previously updated : 11/18/2023 Last updated : 01/10/2024
You can use different types of logs in Azure to manage and troubleshoot applicat
* **Firewall log**: You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds. > [!NOTE]
-> Logs are available only for resources deployed in the Azure Resource Manager deployment model. You cannot use logs for resources in the classic deployment model. For a better understanding of the two models, see the [Understanding Resource Manager deployment and classic deployment](../azure-resource-manager/management/deployment-models.md) article.
+> Logs are available only for resources deployed in the Azure Resource Manager deployment model. You can't use logs for resources in the classic deployment model. For a better understanding of the two models, see the [Understanding Resource Manager deployment and classic deployment](../azure-resource-manager/management/deployment-models.md) article.
## Storage locations
The access log is generated only if you've enabled it on each Application Gatewa
|originalRequestUriWithArgs| This field contains the original request URL | |requestUri| This field contains the URL after the rewrite operation on Application Gateway | |upstreamSourcePort| The source port used by Application Gateway when initiating a connection to the backend target|
-|originalHost| This field contains the original request host name
+|originalHost| This field contains the original request host name|
+|error_info|The reason for the 4xx and 5xx error. Displays an error code for a failed request. More details in [Error code information](./application-gateway-diagnostics.md#error-code-information). |
+|contentType|The type of content or data that is being processed or delivered by the application gateway|
+
```json { "timeStamp": "2021-10-14T22:17:11+00:00",
The access log is generated only if you've enabled it on each Application Gatewa
"serverResponseLatency": "0.028", "upstreamSourcePort": "21564", "originalHost": "20.110.30.194",
- "host": "20.110.30.194"
+ "host": "20.110.30.194",
+ "error_info":"ERRORINFO_NO_ERROR",
+ "contentType":"application/json"
} } ```
The access log is generated only if you've enabled it on each Application Gatewa
|sentBytes| Size of packet sent, in bytes.| |timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. | |sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.|
-|host| The hostname with which the request has been sent to the backend server. If backend hostname is being overridden, this name reflects that.|
-|originalHost| The hostname with which the request was received by the Application Gateway from the client.|
+|host| The hostname for which the request has been sent to the backend server. If backend hostname is being overridden, this name reflects that.|
+|originalHost| The hostname for which the request was received by the Application Gateway from the client.|
```json {
The access log is generated only if you've enabled it on each Application Gatewa
} } ```
+### Error code information
+If the application gateway can't complete the request, it stores one of the following reason codes in the error_info field of the access log.
+
+|4XX Errors |The 4xx error codes indicate that there was an issue with the client's request, and the server can't fulfill it. |
+|--|--|
+| ERRORINFO_INVALID_METHOD | The client sent a request that isn't RFC compliant. Possible reasons: the client used an HTTP method not supported by the server, misspelled the method, or used an incompatible HTTP protocol version. |
+| ERRORINFO_INVALID_REQUEST | The server can't fulfill the request because of incorrect syntax. |
+| ERRORINFO_INVALID_VERSION | The application gateway received a request with an invalid or unsupported HTTP version. |
+| ERRORINFO_INVALID_09_METHOD | The client sent a request with HTTP protocol version 0.9. |
+| ERRORINFO_INVALID_HOST | The value provided in the "Host" header is missing, improperly formatted, or doesn't match the expected host value (when there is no Basic listener, and none of the hostnames of the Multisite listeners match the host). |
+| ERRORINFO_INVALID_CONTENT_LENGTH | The length of the content specified by the client in the Content-Length header doesn't match the actual length of the content in the request. |
+| ERRORINFO_INVALID_METHOD_TRACE | The client sent the HTTP TRACE method, which isn't supported by the application gateway. |
+| ERRORINFO_CLIENT_CLOSED_REQUEST | The client closed the connection with the application gateway before the idle timeout period elapsed. Check whether the client timeout period is greater than the [idle timeout period](./application-gateway-faq.yml#what-are-the-settings-for-keep-alive-timeout-and-tcp-idle-timeout) for the application gateway. |
+| ERRORINFO_REQUEST_URI_INVALID | Indicates an issue with the Uniform Resource Identifier (URI) provided in the client's request. |
+| ERRORINFO_HTTP_NO_HOST_HEADER | The client sent a request without a Host header. |
+| ERRORINFO_HTTP_TO_HTTPS_PORT | The client sent a plain HTTP request to an HTTPS port. |
+| ERRORINFO_HTTPS_NO_CERT | Indicates that the client didn't send a valid and properly configured TLS certificate during mutual TLS authentication. |
+
+|5XX Errors |Description |
+|--|--|
+| ERRORINFO_UPSTREAM_NO_LIVE | The application gateway can't find any active or reachable backend servers to handle incoming requests. |
+| ERRORINFO_UPSTREAM_CLOSED_CONNECTION | The backend server closed the connection unexpectedly or before the request was fully processed. This can happen when the backend server reaches its limits, crashes, and so on. |
+| ERRORINFO_UPSTREAM_TIMED_OUT | The established TCP connection with the server was closed because the connection took longer than the configured timeout value. |
### Performance log The performance log is generated only if you have enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. The performance log data is generated in 1-minute intervals. It is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data. The following data is logged:
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Title: Azure Arc-enabled Open Service Mesh
-description: Open Service Mesh (OSM) extension on Azure Arc-enabled Kubernetes cluster
+description: Deploy the Open Service Mesh (OSM) extension on Azure Arc-enabled Kubernetes cluster
Previously updated : 10/12/2022 Last updated : 01/11/2024 -- # Azure Arc-enabled Open Service Mesh
OSM runs an Envoy-based control plane on Kubernetes, can be configured with [SMI](https://smi-spec.io/) APIs, and works by injecting an Envoy proxy as a sidecar container next to each instance of your application. [Read more](https://docs.openservicemesh.io/#features) on the service mesh scenarios enabled by Open Service Mesh.
+All components of Azure Arc-enabled OSM are deployed on availability zones, making them zone redundant.
+ ## Installation options and requirements Azure Arc-enabled Open Service Mesh can be deployed through Azure portal, Azure CLI, an ARM template, or a built-in Azure policy.
export RESOURCE_GROUP=<resource-group-name>
If you're using an OpenShift cluster, skip to the [OpenShift installation steps](#install-osm-on-an-openshift-cluster). Create the extension:+ > [!NOTE]
-> If you would like to pin a specific version of OSM, add the `--version x.y.z` flag to the `create` command. Note that this will set the value for `auto-upgrade-minor-version` to false.
+> To pin a specific version of OSM, add the `--version x.y.z` flag to the `create` command. Note that this will set the value for `auto-upgrade-minor-version` to false.
```azurecli-interactive az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm
Now, [install OSM with custom values](#setting-values-during-osm-installation).
[cert-manager](https://cert-manager.io/) is a provider that can be used for issuing signed certificates to OSM without the need for storing private keys in Kubernetes. Refer to OSM's [cert-manager documentation](https://docs.openservicemesh.io/docs/guides/certificates/) and [demo](https://docs.openservicemesh.io/docs/demos/cert-manager_integration/) to learn more.+ > [!NOTE] > Use the commands provided in the OSM GitHub documentation with caution. Ensure that you use the correct namespace in commands or specify with flag `--osm-namespace arc-osm-system`.+ To install OSM with cert-manager as the certificate provider, create or append to your existing JSON settings file the `certificateProvider.kind` value set to cert-manager as shown here. To change from the default cert-manager values specified in OSM documentation, also include and update the subsequent `certmanager.issuer` lines.
To set required values for configuring Contour during OSM installation, append t
} ```
-Now, [install OSM with custom values](#setting-values-during-osm-installation).
- ### Setting values during OSM installation Any values that need to be set during OSM installation need to be saved to a single JSON file and passed in through the Azure CLI
install command.
After you create a JSON file with applicable values as described in the custom installation sections, set the file path as an environment variable:
- ```azurecli-interactive
- export SETTINGS_FILE=<json-file-path>
- ```
+```azurecli-interactive
+export SETTINGS_FILE=<json-file-path>
+```
-Run the `az k8s-extension create` command to create the OSM extension, passing in the settings file using the
+Run the `az k8s-extension create` command to create the OSM extension, passing in the settings file using the `--configuration-settings-file` flag:
-`--configuration-settings-file` flag:
- ```azurecli-interactive
- az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm --configuration-settings-file $SETTINGS_FILE
- ```
+```azurecli-interactive
+az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.openservicemesh --scope cluster --name osm --configuration-settings-file $SETTINGS_FILE
+```
## Install Azure Arc-enabled OSM using ARM template
-After connecting your cluster to Azure Arc, create a JSON file with the following format, making sure to update the \<cluster-name\> and \<osm-arc-version\> values:
+After connecting your cluster to Azure Arc, create a JSON file with the following format, making sure to update the `<cluster-name>` and `<osm-arc-version>` values:
```json {
export TEMPLATE_FILE_NAME=<template-file-path>
export DEPLOYMENT_NAME=<desired-deployment-name> ```
-Run this command to install the OSM extension using the az CLI:
+Run this command to install the OSM extension:
```azurecli-interactive az deployment group create --name $DEPLOYMENT_NAME --resource-group $RESOURCE_GROUP --template-file $TEMPLATE_FILE_NAME
You should now be able to view the OSM resources and use the OSM extension in yo
## Install Azure Arc-enabled OSM using built-in policy
-A built-in policy is available on Azure portal under the category of **Kubernetes** by the name of **Azure Arc-enabled Kubernetes clusters should have the Open Service Mesh extension installed**. This policy can be assigned at the scope of a subscription or a resource group. The default action of this policy is **Deploy if not exists**. However, you can choose to audit the clusters for extension installations by changing the parameters during assignment. You're also prompted to specify the version you wish to install (v1.0.0-1 or higher) as a parameter.
+A built-in policy is available on Azure portal under the **Kubernetes** category: **Azure Arc-enabled Kubernetes clusters should have the Open Service Mesh extension installed**. This policy can be assigned at the scope of a subscription or a resource group.
+
+The default action of this policy is **Deploy if not exists**. However, you can choose to audit the clusters for extension installations by changing the parameters during assignment. You're also prompted to specify the version you wish to install (v1.0.0-1 or higher) as a parameter.
## Validate installation
You should see a JSON output similar to:
} ```
+For more commands that you can use to validate and troubleshoot the deployment of the Open Service Mesh (OSM) extension components on your cluster, see [our troubleshooting guide](extensions-troubleshooting.md#azure-arc-enabled-open-service-mesh).
+ ## OSM controller configuration
-OSM deploys a MeshConfig resource `osm-mesh-config` as a part of its control plane in arc-osm-system namespace. The purpose of this MeshConfig is to provide the mesh owner/operator the ability to update some of the mesh configurations based on their needs. to view the default values, use the following command.
+OSM deploys a MeshConfig resource `osm-mesh-config` as a part of its control plane in `arc-osm-system` namespace. The purpose of this MeshConfig is to provide the mesh owner/operator the ability to update some of the mesh configurations based on their needs. To view the default values, use the following command.
```azurecli-interactive kubectl describe meshconfig osm-mesh-config -n arc-osm-system ```
-The output would show the default values:
+The output shows the default values:
```azurecli-interactive Certificate:
For more information, see the [Config API reference](https://docs.openservicemes
> [!NOTE] > Values in the MeshConfig `osm-mesh-config` are persisted across upgrades.+ Changes to `osm-mesh-config` can be made using the `kubectl patch` command. In the following example, the permissive traffic policy mode is changed to false. ```azurecli-interactive
Alternatively, to edit `osm-mesh-config` in Azure portal, select **Edit configur
## Using Azure Arc-enabled OSM
-To start using OSM capabilities, you need to first onboard the application namespaces to the service mesh. Download the OSM CLI from [OSM GitHub releases page](https://github.com/openservicemesh/osm/releases/). Once the namespaces are added to the mesh, you can configure the SMI policies to achieve the desired OSM capability.
+To start using OSM capabilities, you need to first onboard the application namespaces to the service mesh. Download the OSM CLI from the [OSM GitHub releases page](https://github.com/openservicemesh/osm/releases/). Once the namespaces are added to the mesh, you can configure the SMI policies to achieve the desired OSM capability.
### Onboard namespaces to the service mesh
Add namespaces to the mesh by running the following command:
```azurecli-interactive osm namespace add <namespace_name> ```+ Namespaces can be onboarded from Azure portal as well by selecting **+Add** in the cluster's Open Service Mesh section. [![+Add button located on top of the Open Service Mesh section](media/tutorial-arc-enabled-open-service-mesh/osm-portal-add-namespace.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-portal-add-namespace.jpg#lightbox)
-More information about onboarding services can be found [here](https://docs.openservicemesh.io/docs/guides/app_onboarding/#onboard-services).
+For more information about onboarding services, see the [Open Service Mesh documentation](https://docs.openservicemesh.io/docs/guides/app_onboarding/#onboard-services).
### Configure OSM with Service Mesh Interface (SMI) policies You can start with a [sample application](https://docs.openservicemesh.io/docs/getting_started/install_apps/) or use your test environment to try out SMI policies. > [!NOTE]
-> If you are using a sample applications, ensure that their versions match the version of the OSM extension installed on your cluster. For example, if you are using v1.0.0 of the OSM extension, use the bookstore manifest from release-v1.0 branch of OSM upstream repository.
+> If you use sample applications, ensure that their versions match the version of the OSM extension installed on your cluster. For example, if you are using v1.0.0 of the OSM extension, use the bookstore manifest from release-v1.0 branch of OSM upstream repository.
### Configuring your own Jaeger, Prometheus and Grafana instances
InsightsMetrics
### Navigating the OSM dashboard 1. Access your Arc connected Kubernetes cluster using this [link](https://aka.ms/azmon/osmux).
-2. Go to Azure Monitor and navigate to the Reports tab to access the OSM workbook.
+2. Go to Azure Monitor and navigate to the **Reports** tab to access the OSM workbook.
3. Select the time-range & namespace to scope your services. [![OSM workbook](media/tutorial-arc-enabled-open-service-mesh/osm-workbook.jpg)](media/tutorial-arc-enabled-open-service-mesh/osm-workbook.jpg#lightbox) #### Requests tab -- This tab shows a summary of all the http requests sent via service to service in OSM.
+The **Requests** tab shows a summary of all the http requests sent via service to service in OSM.
+ - You can view all the services by selecting the service in the grid. - You can view total requests, request error rate & P90 latency. - You can drill down to destination and view trends for HTTP error/success code, success rate, pod resource utilization, and latencies at different percentiles. #### Connections tab -- This tab shows a summary of all the connections between your services in Open Service Mesh.
+The **Connections** tab shows a summary of all the connections between your services in Open Service Mesh.
+ - Outbound connections: total number of connections between Source and destination services. - Outbound active connections: last count of active connections between source and destination in selected time range. - Outbound failed connections: total number of failed connections between source and destination service.
When you use the `az k8s-extension` command to delete the OSM extension, the `ar
> [!NOTE] > Use the az k8s-extension CLI to uninstall OSM components managed by Arc. Using the OSM CLI to uninstall is not supported by Arc and can result in undesirable behavior.
-## Troubleshooting
-
-Refer to the [extension troubleshooting guide](extensions-troubleshooting.md#azure-arc-enabled-open-service-mesh) for help with issues.
-
-## Frequently asked questions
-
-### Is the extension of Azure Arc-enabled OSM zone redundant?
-
-Yes, all components of Azure Arc-enabled OSM are deployed on availability zones and are hence zone redundant.
- ## Next steps
-> **Just want to try things out?**
-> Get started quickly with an [Azure Arc Jumpstart](https://aka.ms/arc-jumpstart-osm) scenario using Cluster API.
+- Just want to try things out? Get started quickly with an [Azure Arc Jumpstart](https://aka.ms/arc-jumpstart-osm) scenario using Cluster API.
+- Get [troubleshooting help for Azure Arc-enabled OSM](extensions-troubleshooting.md#azure-arc-enabled-open-service-mesh).
+
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
For example, to upgrade a resource bridge on VMware, run: `az arcappliance upgra
To upgrade a resource bridge on System Center Virtual Machine Manager (SCVMM), run: `az arcappliance upgrade scvmm --config-file c:\contosoARB01-appliance.yaml`
-Or to upgrade a resource bridge on Azure Stack HCI, run: `az arcappliance upgrade hci --config-file c:\contosoARB01-appliance.yaml`
+To upgrade a resource bridge on Azure Stack HCI, transition to version 23H2 and use the built-in upgrade management tool. For more information, see the [Azure Stack HCI, version 23H2 update documentation](/azure-stack/hci/update/whats-the-lifecycle-manager-23h2).
## Private cloud providers
azure-arc Deployment Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deployment-options.md
The following table highlights each method so that you can determine which works
| At scale | [Connect machines with a Configuration Manager custom task sequence](onboard-configuration-manager-custom-task.md) |
| At scale | [Connect Windows machines using Group Policy](onboard-group-policy-powershell.md) |
| At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. |
-| At scale | [Connect your VMware vCenter server to Azure Arc by using the helper script](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md) if you're using VMware vCenter to manage your on-premises estate. |
-| At scale | [Connect your System Center Virtual Machine Manager management server to Azure Arc by using the helper script](../system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md) if you're using SCVMM to manage your on-premises estate. |
+| At scale | [Install the Arc agent on VMware VMs at scale using Arc-enabled VMware vSphere](../vmware-vsphere/enable-guest-management-at-scale.md). Arc-enabled VMware vSphere allows you to [connect your VMware vCenter server to Azure](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md), automatically discover your VMware VMs, and install the Arc agent on them. Requires VMware Tools on VMs. |
+| At scale | [Install the Arc agent on SCVMM VMs at scale using Arc-enabled System Center Virtual Machine Manager](../system-center-virtual-machine-manager/enable-guest-management-at-scale.md). Arc-enabled System Center Virtual Machine Manager allows you to [connect your SCVMM management server to Azure](../system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md), automatically discover your SCVMM VMs, and install the Arc agent on them. |
> [!IMPORTANT]
> The Connected Machine agent cannot be installed on an Azure virtual machine. The install script will warn you and roll back if it detects the server is running in Azure.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
Title: Guide for running C# Azure Functions in an isolated worker process
-description: Learn how to use a .NET isolated worker process to run your C# functions in Azure, which supports non-LTS versions of .NET and .NET Framework apps.
+description: Learn how to use a .NET isolated worker process to run your C# functions in Azure, which lets you run your functions on currently supported versions of .NET and .NET Framework.
Previously updated : 11/02/2023 Last updated : 12/13/2023 - template-concept - devx-track-dotnet
recommendations: false
# Guide for running C# Azure Functions in an isolated worker process
-This article is an introduction to working with .NET Functions isolated worker process, which runs your functions in an isolated worker process in Azure. This allows you to run your .NET class library functions on a version of .NET that is different from the version used by the Functions host process. For information about specific .NET versions supported, see [supported version](#supported-versions).
+This article is an introduction to working with Azure Functions in .NET, using the isolated worker model. This model allows your project to target versions of .NET independently of other runtime components. For information about specific .NET versions supported, see [supported version](#supported-versions).
-Use the following links to get started right away building .NET isolated worker process functions.
+Use the following links to get started right away building .NET isolated worker model functions.
| Getting started | Concepts | Samples |
|--|--|--|
| <ul><li>[Using Visual Studio Code](create-first-function-vs-code-csharp.md?tabs=isolated-process)</li><li>[Using command line tools](create-first-function-cli-csharp.md?tabs=isolated-process)</li><li>[Using Visual Studio](functions-create-your-first-function-visual-studio.md?tabs=isolated-process)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Monitoring](functions-monitoring.md)</li></ul> | <ul><li>[Reference samples](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples)</li></ul> |
-If you still need to run your functions in the same process as the host, see [In-process C# class library functions](functions-dotnet-class-library.md).
+To learn just about deploying an isolated worker model project to Azure, see [Deploy to Azure Functions](#deploy-to-azure-functions).
-For a comprehensive comparison between isolated worker process and in-process .NET Functions, see [Differences between in-process and isolate worker process .NET Azure Functions](dotnet-isolated-in-process-differences.md).
+## Benefits of the isolated worker model
-To learn about migration from the in-process model to the isolated worker model, see [Migrate .NET apps from the in-process model to the isolated worker model][migrate].
+There are two modes in which you can run your .NET class library functions: either [in the same process](functions-dotnet-class-library.md) as the Functions host runtime (_in-process_) or in an isolated worker process. When your .NET functions run in an isolated worker process, you can take advantage of the following benefits:
-## Why .NET Functions isolated worker process?
-
-When it was introduced, Azure Functions only supported a tightly integrated mode for .NET functions. In this _in-process_ mode, your [.NET class library functions](functions-dotnet-class-library.md) run in the same process as the host. This mode provides deep integration between the host process and the functions. For example, when running in the same process .NET class library functions can share binding APIs and types. However, this integration also requires a tight coupling between the host process and the .NET function. For example, .NET functions running in-process are required to run on the same version of .NET as the Functions runtime. This means that your in-process functions can only run on version of .NET with Long Term Support (LTS). To enable you to run on non-LTS version of .NET, you can instead choose to run in an isolated worker process. This process isolation lets you develop functions that use current .NET releases not natively supported by the Functions runtime, including .NET Framework. Both isolated worker process and in-process C# class library functions run on LTS versions. To learn more, see [Supported versions][supported-versions].
-
-Because these functions run in a separate process, there are some [feature and functionality differences](./dotnet-isolated-in-process-differences.md) between .NET isolated function apps and .NET class library function apps.
-
-### Benefits of isolated worker process
-
-When your .NET functions run in an isolated worker process, you can take advantage of the following benefits:
++ **Fewer conflicts:** Because your functions run in a separate process, assemblies used in your app don't conflict with different versions of the same assemblies used by the host process.
++ **Full control of the process:** You control the start-up of the app, which means that you can manage the configurations used and the middleware started.
++ **Standard dependency injection:** Because you have full control of the process, you can use current .NET behaviors for dependency injection and incorporating middleware into your function app.
++ **.NET version flexibility:** Running outside of the host process means that your functions can run on versions of .NET not natively supported by the Functions runtime, including the .NET Framework.
+
+If you have an existing C# function app that runs in-process, you need to migrate your app to take advantage of these benefits. For more information, see [Migrate .NET apps from the in-process model to the isolated worker model][migrate].
-+ Fewer conflicts: because the functions run in a separate process, assemblies used in your app won't conflict with different version of the same assemblies used by the host process.
-+ Full control of the process: you control the start-up of the app and can control the configurations used and the middleware started.
-+ Dependency injection: because you have full control of the process, you can use current .NET behaviors for dependency injection and incorporating middleware into your function app.
+For a comprehensive comparison between the two modes, see [Differences between in-process and isolated worker process .NET Azure Functions](./dotnet-isolated-in-process-differences.md).
[!INCLUDE [functions-dotnet-supported-versions](../../includes/functions-dotnet-supported-versions.md)]
-## .NET isolated worker model project
+## Project structure
A .NET project for Azure Functions using the isolated worker model is basically a .NET console app project that targets a supported .NET runtime. The following are the basic files required in any .NET isolated project:
-+ [host.json](functions-host-json.md) file.
-+ [local.settings.json](functions-develop-local.md#local-settings-file) file.
+ C# project file (.csproj) that defines the project and dependencies.
+ Program.cs file that's the entry point for the app.
-+ Any code files [defining your functions](#bindings).
++ Any code files [defining your functions](#methods-recognized-as-functions).
++ [host.json](functions-host-json.md) file that defines configuration shared by functions in your project.
++ [local.settings.json](functions-develop-local.md#local-settings-file) file that defines environment variables used by your project when run locally on your machine.

For complete examples, see the [.NET 8 sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/FunctionApp) and the [.NET Framework 4.8 sample project](https://github.com/Azure/azure-functions-dotnet-worker/tree/main/samples/NetFxWorker).
-> [!NOTE]
-> To be able to publish a project using the isolated worker model to either a Windows or a Linux function app in Azure, you must set a value of `dotnet-isolated` in the remote [FUNCTIONS_WORKER_RUNTIME](functions-app-settings.md#functions_worker_runtime) application setting. To support [zip deployment](deployment-zip-push.md) and [running from the deployment package](run-functions-from-deployment-package.md) on Linux, you also need to update the `linuxFxVersion` site config setting to `DOTNET-ISOLATED|7.0`. To learn more, see [Manual version updates on Linux](set-runtime-version.md#manual-version-updates-on-linux).
- ## Package references A .NET project for Azure Functions using the isolated worker model uses a unique set of packages, for both core functionality and binding extensions.
The following packages are required to run your .NET functions in an isolated wo
Because .NET isolated worker process functions use different binding types, they require a unique set of binding extension packages.
-You'll find these extension packages under [Microsoft.Azure.Functions.Worker.Extensions](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions).
+You can find these extension packages under [Microsoft.Azure.Functions.Worker.Extensions](https://www.nuget.org/packages?q=Microsoft.Azure.Functions.Worker.Extensions).
## Start-up and configuration
This code requires `using Microsoft.Extensions.DependencyInjection;`.
Before calling `Build()` on the `HostBuilder`, you should: -- Call either `ConfigureFunctionsWebApplication()` if using [ASP.NET Core integration](#aspnet-core-integration) or `ConfigureFunctionsWorkerDefaults()` otherwise. See [HTTP trigger](#http-trigger) for details on these options.
- - If you're writing your application using F#, some trigger and binding extensions require extra configuration here. See the setup documentation for the [Blobs extension][fsharp-blobs], the [Tables extension][fsharp-tables], and the [Cosmos DB extension][fsharp-cosmos] if you plan to use this in your app.
-- Configure any services or app configuration your project requires. See [Configuration] for details.
- - If you are planning to use Application Insights, you need to call `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` in the `ConfigureServices()` delegate. See [Application Insights](#application-insights) for details.
+- Call either `ConfigureFunctionsWebApplication()` if using [ASP.NET Core integration](#aspnet-core-integration) or `ConfigureFunctionsWorkerDefaults()` otherwise. See [HTTP trigger](#http-trigger) for details on these options.
+ If you're writing your application using F#, some trigger and binding extensions require extra configuration. See the setup documentation for the [Blobs extension][fsharp-blobs], the [Tables extension][fsharp-tables], and the [Cosmos DB extension][fsharp-cosmos] when you plan to use these extensions in an F# app.
+- Configure any services or app configuration your project requires. See [Configuration](#configuration) for details.
+ If you're planning to use Application Insights, you need to call `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` in the `ConfigureServices()` delegate. See [Application Insights](#application-insights) for details.
If your project targets .NET Framework 4.8, you also need to add `FunctionsDebugger.Enable();` before creating the HostBuilder. It should be the first line of your `Main()` method. For more information, see [Debugging when targeting .NET Framework](#debugging-when-targeting-net-framework).
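As a minimal sketch of these startup steps, assuming Application Insights is used and the ASP.NET Core integration isn't, a `Program.cs` might look like the following; swap in `ConfigureFunctionsWebApplication()` when you use the ASP.NET Core integration:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Minimal worker startup: register the Functions worker defaults, then add
// Application Insights for the worker process before building the host.
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        services.AddApplicationInsightsTelemetryWorkerService();
        services.ConfigureFunctionsApplicationInsights();
    })
    .Build();

host.Run();
```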
The [ConfigureFunctionsWorkerDefaults] method is used to add the settings requir
Having access to the host builder pipeline means that you can also set any app-specific configurations during initialization. You can call the [ConfigureAppConfiguration] method on [HostBuilder] one or more times to add the configurations required by your function app. To learn more about app configuration, see [Configuration in ASP.NET Core](/aspnet/core/fundamentals/configuration/?view=aspnetcore-5.0&preserve-view=true).
-These configurations apply to your function app running in a separate process. To make changes to the functions host or trigger and binding configuration, you'll still need to use the [host.json file](functions-host-json.md).
+These configurations apply to your function app running in a separate process. To make changes to the functions host or trigger and binding configuration, you still need to use the [host.json file](functions-host-json.md).
### Dependency injection
-Dependency injection is simplified, compared to .NET class libraries. Rather than having to create a startup class to register services, you just have to call [ConfigureServices] on the host builder and use the extension methods on [IServiceCollection] to inject specific services.
+Dependency injection is simplified when compared to the .NET in-process model, which requires you to create a startup class to register services.
+
+For a .NET isolated worker process app, you use the standard .NET pattern: call [ConfigureServices] on the host builder and use the extension methods on [IServiceCollection] to inject specific services.
The following example injects a singleton service dependency:
namespace MyFunctionApp
} ```
-The [`ILogger<T>`][ILogger&lt;T&gt;] in this example was also obtained through dependency injection. It is registered automatically. To learn more about configuration options for logging, see [Logging](#logging).
+The [`ILogger<T>`][ILogger&lt;T&gt;] in this example was also obtained through dependency injection, so it's registered automatically. To learn more about configuration options for logging, see [Logging](#logging).
> [!TIP] > The example used a literal string for the name of the client in both `Program.cs` and the function. Consider instead using a shared constant string defined on the function class. For example, you could add `public const string CopyStorageClientName = nameof(_copyContainerClient);` and then reference `BlobCopier.CopyStorageClientName` in both locations. You could similarly define the configuration section name with the function rather than in `Program.cs`.
The following extension methods on [FunctionContext] make it easier to work with
| **`GetHttpResponseData`** | Gets the `HttpResponseData` instance when called by an HTTP trigger. |
| **`GetInvocationResult`** | Gets an instance of `InvocationResult`, which represents the result of the current function execution. Use the `Value` property to get or set the value as needed. |
| **`GetOutputBindings`** | Gets the output binding entries for the current function execution. Each entry in the result of this method is of type `OutputBindingData`. You can use the `Value` property to get or set the value as needed. |
-| **`BindInputAsync`** | Binds an input binding item for the requested `BindingMetadata` instance. For example, you can use this method when you have a function with a `BlobInput` input binding that needs to be accessed or updated by your middleware. |
+| **`BindInputAsync`** | Binds an input binding item for the requested `BindingMetadata` instance. For example, you can use this method when you have a function with a `BlobInput` input binding that needs to be used by your middleware. |
-The following is an example of a middleware implementation that reads the `HttpRequestData` instance and updates the `HttpResponseData` instance during function execution. This middleware checks for the presence of a specific request header(x-correlationId), and when present uses the header value to stamp a response header. Otherwise, it generates a new GUID value and uses that for stamping the response header.
+This is an example of a middleware implementation that reads the `HttpRequestData` instance and updates the `HttpResponseData` instance during function execution:
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/CustomMiddleware/StampHttpHeaderMiddleware.cs" id="docsnippet_middleware_example_stampheader" :::
-For a more complete example of using custom middleware in your function app, see the [custom middleware reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/CustomMiddleware).
-
-## Cancellation tokens
-
-A function can accept a [CancellationToken](/dotnet/api/system.threading.cancellationtoken) parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn't terminate unexpectedly in a way that leaves data in an inconsistent state.
-
-Cancellation tokens are supported in .NET functions when running in an isolated worker process. The following example raises an exception when a cancellation request has been received:
-
-
-The following example performs clean-up actions if a cancellation request has been received:
--
-## Performance optimizations
-
-This section outlines options you can enable that improve performance around [cold start](./event-driven-scaling.md#cold-start).
-
-In general, your app should use the latest versions of its core dependencies. At a minimum, you should update your project as follows:
--- Upgrade [Microsoft.Azure.Functions.Worker] to version 1.19.0 or later.-- Upgrade [Microsoft.Azure.Functions.Worker.Sdk] to version 1.16.4 or later.-- Add a framework reference to `Microsoft.AspNetCore.App`, unless your app targets .NET Framework.-
-The following example shows this configuration in the context of a project file:
-
-```xml
- <ItemGroup>
- <FrameworkReference Include="Microsoft.AspNetCore.App" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.19.0" />
- <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" />
- </ItemGroup>
-```
-
-### Placeholders
-
-Placeholders are a platform capability that improves cold start for apps targeting .NET 6 or later. The feature requires some opt-in configuration. To enable placeholders:
+This middleware checks for the presence of a specific request header (`x-correlationId`) and, when the header is present, uses its value to stamp a response header. Otherwise, it generates a new GUID value and uses that for stamping the response header. For a more complete example of using custom middleware in your function app, see the [custom middleware reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/CustomMiddleware).
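The general shape of a worker middleware is a class that implements `IFunctionsWorkerMiddleware`. The following is a sketch only, using a hypothetical `CorrelationIdMiddleware` class rather than the sample linked above; middleware is then registered during startup, for example with `ConfigureFunctionsWorkerDefaults(app => app.UseMiddleware<CorrelationIdMiddleware>())`:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Middleware;

// Hypothetical skeleton: the linked reference sample contains the full header logic.
public class CorrelationIdMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        // Inspect the invocation (for example, read a request header) before the function runs.
        await next(context);
        // Modify the result (for example, stamp a response header) after the function runs.
    }
}
```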
-- **Update your project as detailed in the preceding section.**-- Set the `WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED` application setting to "1".-- Ensure that the `netFrameworkVersion` property of the function app matches your project's target framework, which must be .NET 6 or later.-- Ensure that the function app is configured to use a 64-bit process.-
-> [!IMPORTANT]
-> Setting the `WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED` to "1" requires all other aspects of the configuration to be set correctly. Any deviation can cause startup failures.
-
-The following CLI commands will set the application setting, update the `netFrameworkVersion` property, and make the app run as 64-bit. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as "v8.0", "v7.0", or "v6.0", according to your target .NET version.
-
-```azurecli
-az functionapp config appsettings set -g <groupName> -n <appName> --settings 'WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED=1'
-az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
-az functionapp config set -g <groupName> -n <appName> --use-32bit-worker-process false
-```
-
-### Optimized executor
-
-The function executor is a component of the platform that causes invocations to run. An optimized version of this component is enabled by default starting with version 1.16.3 of the SDK. No additional configuration is required.
-
-### ReadyToRun
-
-You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the effect of cold starts when running in a [Consumption plan](consumption-plan.md). ReadyToRun is available in .NET 6 and later versions and requires [version 4.0 or later](functions-versions.md) of the Azure Functions runtime.
-
-ReadyToRun requires you to build the project against the runtime architecture of the hosting app. **If these are not aligned, your app will encounter an error at startup.** Select your runtime identifier from the table below:
-
-|Operating System | App is 32-bit<sup>1</sup> | Runtime identifier |
-|-|-|-|
-| Windows | True | `win-x86` |
-| Windows | False | `win-x64` |
-| Linux | True | N/A (not supported) |
-| Linux | False | `linux-x64` |
-
-<sup>1</sup> Only 64-bit apps are eligible for some other performance optimizations.
-
-To check if your Windows app is 32-bit or 64-bit, you can run the following CLI command, substituting `<group_name>` with the name of your resource group and `<app_name>` with the name of your application. An output of "true" indicates that the app is 32-bit, and "false" indicates 64-bit.
-
-```azurecli
- az functionapp config show -g <group_name> -n <app_name> --query "use32BitWorkerProcess"
-```
+## Methods recognized as functions
-You can change your application to 64-bit with the following command, using the same substitutions:
+A function method is a public method of a public class with a `Function` attribute applied to the method and a trigger attribute applied to an input parameter, as shown in the following example:
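A minimal sketch of such a method, assuming the Storage Queues binding extension and a hypothetical queue named `myqueue-items`:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ProcessQueueMessage
{
    private readonly ILogger<ProcessQueueMessage> _logger;

    public ProcessQueueMessage(ILogger<ProcessQueueMessage> logger)
    {
        _logger = logger;
    }

    // The Function attribute marks the method as a function entry point;
    // the QueueTrigger attribute binds the queue message to the myQueueItem parameter.
    [Function("ProcessQueueMessage")]
    public void Run([QueueTrigger("myqueue-items")] string myQueueItem)
    {
        _logger.LogInformation("Queue message received: {message}", myQueueItem);
    }
}
```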
-```azurecli
-az functionapp config set -g <group_name> -n <app_name> --use-32bit-worker-process false`
-```
-To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following example shows a configuration for publishing to a Windows 64-bit function app.
+The trigger attribute specifies the trigger type and binds input data to a method parameter. The previous example function is triggered by a queue message, and the queue message is passed to the method in the `myQueueItem` parameter.
-```xml
-<PropertyGroup>
- <TargetFramework>net8.0</TargetFramework>
- <AzureFunctionsVersion>v4</AzureFunctionsVersion>
- <RuntimeIdentifier>win-x64</RuntimeIdentifier>
- <PublishReadyToRun>true</PublishReadyToRun>
-</PropertyGroup>
-```
+The `Function` attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, `_`, and `-`, up to 127 characters in length. Project templates often create a method named `Run`, but the method name can be any valid C# method name. The method must be a public member of a public class. It should generally be an instance method so that services can be passed in via [dependency injection](#dependency-injection).
-If you don't want to set the `<RuntimeIdentifier>` as part of the project file, you can also configure this as part of the publish gesture itself. For example, with a Windows 64-bit function app, the .NET CLI command would be:
+## Function parameters
-```dotnetcli
-dotnet publish --runtime win-x64
-```
+Here are some of the parameters that you can include as part of a function method signature:
-In Visual Studio, the "Target Runtime" option in the publish profile should be set to the correct runtime identifier. If it is set to the default value "Portable", ReadyToRun will not be used.
+- [Bindings](#bindings), which are marked as such by decorating the parameters with attributes. The function must contain exactly one trigger parameter.
+- An [execution context object](#execution-context), which provides information about the current invocation.
+- A [cancellation token](#cancellation-tokens), used for graceful shutdown.
-## Methods recognized as functions
+### Execution context
-A function method is a public method of a public class with a `Function` attribute applied to the method and a trigger attribute applied to an input parameter, as shown in the following example:
+.NET isolated passes a [FunctionContext] object to your function methods. This object lets you get an [`ILogger`][ILogger] instance to write to the logs by calling the [GetLogger] method and supplying a `categoryName` string. You can use this context to obtain an [`ILogger`][ILogger] without having to use dependency injection. To learn more, see [Logging](#logging).
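As a sketch, reusing the hypothetical queue trigger from the earlier example, a function can accept the context as a parameter and call `GetLogger` with an arbitrary category name:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ContextLoggingFunction
{
    // The FunctionContext parameter is supplied by the worker at invocation time.
    [Function("ContextLogging")]
    public void Run(
        [QueueTrigger("myqueue-items")] string myQueueItem,
        FunctionContext context)
    {
        // GetLogger returns an ILogger for the category name you pass in.
        ILogger logger = context.GetLogger("ContextLogging");
        logger.LogInformation("Invocation {id} received: {message}",
            context.InvocationId, myQueueItem);
    }
}
```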
+### Cancellation tokens
-The trigger attribute specifies the trigger type and binds input data to a method parameter. The previous example function is triggered by a queue message, and the queue message is passed to the method in the `myQueueItem` parameter.
+A function can accept a [CancellationToken](/dotnet/api/system.threading.cancellationtoken) parameter, which enables the operating system to notify your code when the function is about to be terminated. You can use this notification to make sure the function doesn't terminate unexpectedly in a way that leaves data in an inconsistent state.
-The `Function` attribute marks the method as a function entry point. The name must be unique within a project, start with a letter and only contain letters, numbers, `_`, and `-`, up to 127 characters in length. Project templates often create a method named `Run`, but the method name can be any valid C# method name. The method must be a public member of a public class. It should generally be an instance method so that services can be passed in via [dependency injection](#dependency-injection).
+Cancellation tokens are supported in .NET functions when running in an isolated worker process. The following example raises an exception when a cancellation request is received:
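A sketch of this pattern, assuming a hypothetical queue trigger, might look like this:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;

public class ThrowOnCancellation
{
    [Function("ThrowOnCancellation")]
    public async Task Run(
        [QueueTrigger("myqueue-items")] string myQueueItem,
        CancellationToken cancellationToken)
    {
        // Throws OperationCanceledException as soon as the host requests cancellation.
        cancellationToken.ThrowIfCancellationRequested();

        // Passing the token to awaited calls also cancels in-flight work.
        await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
    }
}
```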
-## Execution context
+
+The following example performs clean-up actions when a cancellation request is received:
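A corresponding clean-up sketch, under the same hypothetical queue-trigger assumptions, catches the cancellation and tidies up before returning:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;

public class CleanUpOnCancellation
{
    [Function("CleanUpOnCancellation")]
    public async Task Run(
        [QueueTrigger("myqueue-items")] string myQueueItem,
        CancellationToken cancellationToken)
    {
        try
        {
            await Task.Delay(TimeSpan.FromSeconds(5), cancellationToken);
        }
        catch (OperationCanceledException)
        {
            // Perform whatever clean-up keeps your data in a consistent state
            // before the worker process shuts down.
        }
    }
}
```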
-.NET isolated passes a [FunctionContext] object to your function methods. This object lets you get an [`ILogger`][ILogger] instance to write to the logs by calling the [GetLogger] method and supplying a `categoryName` string. To learn more, see [Logging](#logging).
## Bindings
-Bindings are defined by using attributes on methods, parameters, and return types. Bindings can provide data as strings, arrays, and serializable types, such as plain old class objects (POCOs). You can also bind to [types from some service SDKs](#sdk-types). For HTTP triggers, see the [HTTP trigger](#http-trigger) section below.
+Bindings are defined by using attributes on methods, parameters, and return types. Bindings can provide data as strings, arrays, and serializable types, such as plain old class objects (POCOs). For some binding extensions, you can also [bind to service-specific types](#sdk-types) defined in service SDKs.
-For a complete set of reference samples for using triggers and bindings with isolated worker process functions, see the [binding extensions reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/Extensions).
+For HTTP triggers, see the [HTTP trigger](#http-trigger) section.
+
+For a complete set of reference samples using triggers and bindings with isolated worker process functions, see the [binding extensions reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/Extensions).
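As a hedged illustration, assuming the Storage Queues and Blobs binding extension packages plus hypothetical queue and blob names, a single method can combine a trigger, an input binding on a parameter, and an output binding on the return value:

```csharp
using Microsoft.Azure.Functions.Worker;

public class CopyToQueue
{
    // QueueTrigger is the required trigger binding, BlobInput supplies extra data,
    // and QueueOutput writes the string returned by the method to another queue.
    [Function("CopyToQueue")]
    [QueueOutput("processed-items")]
    public string Run(
        [QueueTrigger("incoming-items")] string myQueueItem,
        [BlobInput("samples/settings.json")] string settings)
    {
        return $"{myQueueItem} (processed with {settings.Length} bytes of settings)";
    }
}
```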
### Input bindings
The response from an HTTP trigger is always considered an output, so a return va
### SDK types
-For some service-specific binding types, binding data can be provided using types from service SDKs and frameworks. These provide additional capability beyond what a serialized string or plain-old CLR object (POCO) can offer. To use the newer types, your project needs to be updated to use newer versions of core dependencies.
+For some service-specific binding types, binding data can be provided using types from service SDKs and frameworks. These types provide more capability than a serialized string or plain-old CLR object (POCO) can offer. To use the newer types, your project needs to be updated to use newer versions of core dependencies.
| Dependency | Version requirement |
|-|-|
| [Microsoft.Azure.Functions.Worker] | 1.18.0 or later |
| [Microsoft.Azure.Functions.Worker.Sdk] | 1.13.0 or later |
-When testing SDK types locally on your machine, you will also need to use [Azure Functions Core Tools version 4.0.5000 or later](./functions-run-local.md). You can check your current version using the command `func version`.
+When testing SDK types locally on your machine, you also need to use [Azure Functions Core Tools](./functions-run-local.md), version 4.0.5000 or later. You can check your current version using the `func version` command.
-Each trigger and binding extension also has its own minimum version requirement, which is described in the extension reference articles. The following service-specific bindings offer additional SDK types:
+Each trigger and binding extension also has its own minimum version requirement, which is described in the extension reference articles. The following service-specific bindings provide SDK types:
| Service | Trigger | Input binding | Output binding |
|-|-|-|-|
| [Azure Blobs][blob-sdk-types] | **Generally Available** | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Queues][queue-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Service Bus][servicebus-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Event Hubs][eventhub-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Queues][queue-sdk-types] | **Generally Available** | _Input binding doesn't exist_ | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Service Bus][servicebus-sdk-types] | **Generally Available** | _Input binding doesn't exist_ | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Event Hubs][eventhub-sdk-types] | **Generally Available** | _Input binding doesn't exist_ | _SDK types not recommended.<sup>1</sup>_ |
| [Azure Cosmos DB][cosmos-sdk-types] | _SDK types not used<sup>2</sup>_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Tables][tables-sdk-types] | _Trigger does not exist_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
-| [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding does not exist_ | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Tables][tables-sdk-types] | _Trigger doesn't exist_ | **Generally Available** | _SDK types not recommended.<sup>1</sup>_ |
+| [Azure Event Grid][eventgrid-sdk-types] | **Generally Available** | _Input binding doesn't exist_ | _SDK types not recommended.<sup>1</sup>_ |
[blob-sdk-types]: ./functions-bindings-storage-blob.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types
[cosmos-sdk-types]: ./functions-bindings-cosmosdb-v2.md?tabs=isolated-process%2Cextensionv4&pivots=programming-language-csharp#binding-types
Each trigger and binding extension also has its own minimum version requirement,
[eventhub-sdk-types]: ./functions-bindings-event-hubs.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types
[servicebus-sdk-types]: ./functions-bindings-service-bus.md?tabs=isolated-process%2Cextensionv5&pivots=programming-language-csharp#binding-types
-<sup>1</sup> For output scenarios in which you would use an SDK type, you should create and work with SDK clients directly instead of using an output binding. See [Register Azure clients](#register-azure-clients) for an example of how to do this with dependency injection.
+<sup>1</sup> For output scenarios in which you would use an SDK type, you should create and work with SDK clients directly instead of using an output binding. See [Register Azure clients](#register-azure-clients) for a dependency injection example.
<sup>2</sup> The Cosmos DB trigger uses the [Azure Cosmos DB change feed](../cosmos-db/change-feed.md) and exposes change feed items as JSON-serializable types. The absence of SDK types is by-design for this scenario.

> [!NOTE]
> When using [binding expressions](./functions-bindings-expressions-patterns.md) that rely on trigger data, SDK types for the trigger itself cannot be used.
-### HTTP trigger
+## HTTP trigger
[HTTP triggers](./functions-bindings-http-webhook-trigger.md) allow a function to be invoked by an HTTP request. There are two different approaches that can be used:

- An [ASP.NET Core integration model](#aspnet-core-integration) that uses concepts familiar to ASP.NET Core developers
-- A [built-in model](#built-in-http-model) which does not require additional dependencies and uses custom types for HTTP requests and responses
+- A [built-in model](#built-in-http-model), which doesn't require extra dependencies and uses custom types for HTTP requests and responses. This approach is maintained for backward compatibility with previous .NET isolated worker apps.
-#### ASP.NET Core integration
+### ASP.NET Core integration
-This section shows how to work with the underlying HTTP request and response objects using types from ASP.NET Core including [HttpRequest], [HttpResponse], and [IActionResult]. This model is not available to [apps targeting .NET Framework][supported-versions], which should instead leverage the [built-in model](#built-in-http-model).
+This section shows how to work with the underlying HTTP request and response objects using types from ASP.NET Core including [HttpRequest], [HttpResponse], and [IActionResult]. This model isn't available to [apps targeting .NET Framework][supported-versions], which should instead use the [built-in model](#built-in-http-model).
> [!NOTE]
-> Not all features of ASP.NET Core are exposed by this model. Specifically, the ASP.NET Core middleware pipeline and routing capabilities are not available.
+> Not all features of ASP.NET Core are exposed by this model. Specifically, the ASP.NET Core middleware pipeline and routing capabilities are not available. ASP.NET Core integration requires you to use updated packages.
+
+To enable ASP.NET Core integration for HTTP:
-1. Add a reference to the [Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore NuGet package, version 1.0.0 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore/) to your project.
+1. Add a reference in your project to the [Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore/) package, version 1.0.0 or later.
- You must also update your project to use [version 1.11.0 or later of Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/) and [version 1.16.0 or later of Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/).
+1. Update your project to use these specific package versions:
-2. In your `Program.cs` file, update the host builder configuration to use `ConfigureFunctionsWebApplication()` instead of `ConfigureFunctionsWorkerDefaults()`. The following example shows a minimal setup without other customizations:
+ + [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/), version 1.11.0 or later.
+ + [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/), version 1.16.0 or later.
+
+1. In your `Program.cs` file, update the host builder configuration to use `ConfigureFunctionsWebApplication()` instead of `ConfigureFunctionsWorkerDefaults()`. The following example shows a minimal setup without other customizations:
```csharp using Microsoft.Extensions.Hosting;
This section shows how to work with the underlying HTTP request and response obj
host.Run(); ```
-3. You can then update your HTTP-triggered functions to use the ASP.NET Core types. The following example shows `HttpRequest` and an `IActionResult` used for a simple "hello, world" function:
+1. Update any existing HTTP-triggered functions to use the ASP.NET Core types. This example shows the standard `HttpRequest` and an `IActionResult` used for a simple "hello, world" function:
```csharp [Function("HttpFunction")]
This section shows how to work with the underlying HTTP request and response obj
} ```
-#### Built-in HTTP model
+### Built-in HTTP model
-In the built-in model, the system translates the incoming HTTP request message into an [HttpRequestData] object that is passed to the function. This object provides data from the request, including `Headers`, `Cookies`, `Identities`, `URL`, and optionally a message `Body`. This object is a representation of the HTTP request but is not directly connected to the underlying HTTP listener or the received message.
+In the built-in model, the system translates the incoming HTTP request message into an [HttpRequestData] object that is passed to the function. This object provides data from the request, including `Headers`, `Cookies`, `Identities`, `URL`, and optionally a message `Body`. This object is a representation of the HTTP request but isn't directly connected to the underlying HTTP listener or the received message.
Likewise, the function returns an [HttpResponseData] object, which provides data used to create the HTTP response, including message `StatusCode`, `Headers`, and optionally a message `Body`.
The following example demonstrates the use of `HttpRequestData` and `HttpRespons
:::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Http/HttpFunction.cs" id="docsnippet_http_trigger" :::
-
## Logging

In .NET isolated, you can write to logs by using an [`ILogger<T>`][ILogger&lt;T&gt;] or [`ILogger`][ILogger] instance. The logger can be obtained through [dependency injection](#dependency-injection) of an [`ILogger<T>`][ILogger&lt;T&gt;] or of an [ILoggerFactory]:
var host = new HostBuilder()
### Application Insights
-You can configure your isolated process application to emit logs directly [Application Insights](../azure-monitor/app/app-insights-overview.md?tabs=net), giving you control over how those logs are emitted. This replaces the default behavior of [relaying custom logs through the host](./configure-monitoring.md#custom-application-logs). To work with Application Insights directly, you will need to add a reference to [Microsoft.Azure.Functions.Worker.ApplicationInsights, version 1.0.0 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights/). You will also need to reference [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService). Add these packages to your isolated process project:
+You can configure your isolated process application to emit logs directly to [Application Insights](../azure-monitor/app/app-insights-overview.md?tabs=net). This behavior replaces the default behavior of [relaying logs through the host](./configure-monitoring.md#custom-application-logs), and is recommended because it gives you control over how those logs are emitted.
+
+#### Install packages
+
+To write logs directly to Application Insights from your code, add references to these packages in your project:
+
++ [Microsoft.Azure.Functions.Worker.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.ApplicationInsights/), version 1.0.0 or later.
++ [Microsoft.ApplicationInsights.WorkerService](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WorkerService).
+
+You can run the following commands to add these references to your project:
```dotnetcli
dotnet add package Microsoft.ApplicationInsights.WorkerService
dotnet add package Microsoft.Azure.Functions.Worker.ApplicationInsights
```
-You then need to call to `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` during service configuration in your `Program.cs` file:
+#### Configure startup
+
+With the packages installed, you must call `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` during service configuration in your `Program.cs` file, as in this example:
```csharp using Microsoft.Azure.Functions.Worker;
var host = new HostBuilder()
host.Run(); ```
-The call to `ConfigureFunctionsApplicationInsights()` adds an `ITelemetryModule` listening to a Functions-defined `ActivitySource`. This creates dependency telemetry needed to support distributed tracing in Application Insights. To learn more about `AddApplicationInsightsTelemetryWorkerService()` and how to use it, see [Application Insights for Worker Service applications](../azure-monitor/app/worker-service.md).
+The call to `ConfigureFunctionsApplicationInsights()` adds an `ITelemetryModule`, which listens to a Functions-defined `ActivitySource`. This creates the dependency telemetry required to support distributed tracing. To learn more about `AddApplicationInsightsTelemetryWorkerService()` and how to use it, see [Application Insights for Worker Service applications](../azure-monitor/app/worker-service.md).
+
+#### Managing log levels
> [!IMPORTANT]
> The Functions host and the isolated process worker have separate configuration for log levels, etc. Any [Application Insights configuration in host.json](./functions-host-json.md#applicationinsights) will not affect the logging from the worker, and similarly, configuration made in your worker code will not impact logging from the host. You need to apply changes in both places if your scenario requires customization at both layers.
var host = new HostBuilder()
host.Run(); ```
-## Debugging when targeting .NET Framework
+## Performance optimizations
+
+This section outlines options you can enable that improve performance around [cold start](./event-driven-scaling.md#cold-start).
+
+In general, your app should use the latest versions of its core dependencies. At a minimum, you should update your project as follows:
+
+1. Upgrade [Microsoft.Azure.Functions.Worker] to version 1.19.0 or later.
+1. Upgrade [Microsoft.Azure.Functions.Worker.Sdk] to version 1.16.4 or later.
+1. Add a framework reference to `Microsoft.AspNetCore.App`, unless your app targets .NET Framework.
+
+The following snippet shows this configuration in the context of a project file:
+
+```xml
+ <ItemGroup>
+ <FrameworkReference Include="Microsoft.AspNetCore.App" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.19.0" />
+ <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.16.4" />
+ </ItemGroup>
+```
+
+### Placeholders
+
+Placeholders are a platform capability that improves cold start for apps targeting .NET 6 or later. To use this optimization, you must explicitly enable placeholders using these steps:
+
+1. Update your project configuration to use the latest dependency versions, as detailed in the previous section.
+
+1. Set the [`WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED`](./functions-app-settings.md#website_use_placeholder_dotnetisolated) application setting to `1`, which you can do by using this [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-set) command:
+
+ ```azurecli
+ az functionapp config appsettings set -g <groupName> -n <appName> --settings 'WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED=1'
+ ```
+
+ In this example, replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app.
+
+1. Make sure that the [`netFrameworkVersion`](./functions-app-settings.md#netframeworkversion) property of the function app matches your project's target framework, which must be .NET 6 or later. You can do this by using this [az functionapp config set](/cli/azure/functionapp/config#az-functionapp-config-set) command:
+
+ ```azurecli
+ az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
+ ```
+
+ In this example, also replace `<framework>` with the appropriate version string, such as `v8.0`, `v7.0`, or `v6.0`, according to your target .NET version.
+
+1. Make sure that your function app is configured to use a 64-bit process, which you can do by using this [az functionapp config set](/cli/azure/functionapp/config#az-functionapp-config-set) command:
+
+ ```azurecli
+ az functionapp config set -g <groupName> -n <appName> --use-32bit-worker-process false
+ ```
+
+> [!IMPORTANT]
+> When setting the [`WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED`](./functions-app-settings.md#website_use_placeholder_dotnetisolated) application setting to `1`, all other function app configurations must be set correctly. Otherwise, your function app might fail to start.
+
+### Optimized executor
+
+The function executor is a component of the platform that causes invocations to run. An optimized version of this component is enabled by default starting with version 1.16.2 of the SDK. No other configuration is required.
+
+### ReadyToRun
+
+You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the effect of cold starts when running in a [Consumption plan](consumption-plan.md). ReadyToRun is available in .NET 6 and later versions and requires [version 4.0 or later](functions-versions.md) of the Azure Functions runtime.
+
+ReadyToRun requires you to build the project against the runtime architecture of the hosting app. **If these are not aligned, your app will encounter an error at startup.** Select your runtime identifier from this table:
+
+|Operating System | App is 32-bit<sup>1</sup> | Runtime identifier |
+|-|-|-|
+| Windows | True | `win-x86` |
+| Windows | False | `win-x64` |
+| Linux | True | N/A (not supported) |
+| Linux | False | `linux-x64` |
+
+<sup>1</sup> Only 64-bit apps are eligible for some other performance optimizations.
+
+To check if your Windows app is 32-bit or 64-bit, you can run the following CLI command, substituting `<group_name>` with the name of your resource group and `<app_name>` with the name of your application. An output of "true" indicates that the app is 32-bit, and "false" indicates 64-bit.
+
+```azurecli
+ az functionapp config show -g <group_name> -n <app_name> --query "use32BitWorkerProcess"
+```
+
+You can change your application to 64-bit with the following command, using the same substitutions:
+
+```azurecli
+az functionapp config set -g <group_name> -n <app_name> --use-32bit-worker-process false
+```
+
+To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following example shows a configuration for publishing to a Windows 64-bit function app.
+
+```xml
+<PropertyGroup>
+ <TargetFramework>net8.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ <RuntimeIdentifier>win-x64</RuntimeIdentifier>
+ <PublishReadyToRun>true</PublishReadyToRun>
+</PropertyGroup>
+```
+
+If you don't want to set the `<RuntimeIdentifier>` as part of the project file, you can also configure this as part of the publishing gesture itself. For example, with a Windows 64-bit function app, the .NET CLI command would be:
+
+```dotnetcli
+dotnet publish --runtime win-x64
+```
+
+In Visual Studio, the **Target Runtime** option in the publish profile should be set to the correct runtime identifier. When set to the default value of **Portable**, ReadyToRun isn't used.
+
+## Deploy to Azure Functions
+
+When running in Azure, your function code project must run either in a function app or in a Linux container. The function app and other required Azure resources must exist before you deploy your code.
+
+You can also deploy your function app in a Linux container. For more information, see [Working with containers and Azure Functions](functions-how-to-custom-container.md).
+
+### Create Azure resources
+
+You can create your function app and other required resources in Azure using one of these methods:
+
++ [Visual Studio](functions-develop-vs.md#publish-to-azure): Visual Studio can create resources for you during the code publishing process.
++ [Visual Studio Code](functions-develop-vs-code.md#publish-to-azure): Visual Studio Code can connect to your subscription, create the resources needed by your app, and then publish your code.
++ [Azure CLI](create-first-function-cli-csharp.md#create-supporting-azure-resources-for-your-function): You can use the Azure CLI to create the required resources in Azure.
++ [Azure PowerShell](./create-resources-azure-powershell.md#create-a-serverless-function-app-for-c): You can use Azure PowerShell to create the required resources in Azure.
++ [Deployment templates](./functions-infrastructure-as-code.md): You can use ARM templates and Bicep files to automate the deployment of the required resources to Azure. Make sure your template includes any [required settings](#deployment-requirements).
++ [Azure portal](./functions-create-function-app-portal.md): You can create the required resources in the [Azure portal](https://portal.azure.com).
+
+### Publish code project
+
+After creating your function app and other required resources in Azure, you can deploy the code project to Azure using one of these methods:
+
++ [Visual Studio](functions-develop-vs.md#publish-to-azure): Simple manual deployment during development.
++ [Visual Studio Code](functions-develop-vs-code.md?tabs=isolated-process&pivots=programming-language-csharp#republish-project-files): Simple manual deployment during development.
++ [Azure Functions Core Tools](functions-run-local.md?tabs=linuxisolated-process&pivots=programming-language-csharp#project-file-deployment): Deploy project file from the command line.
++ [Continuous deployment](./functions-continuous-deployment.md): Useful for ongoing maintenance, frequently to a [staging slot](./functions-deployment-slots.md).
++ [Deployment templates](./functions-infrastructure-as-code.md#zip-deployment-package): You can use ARM templates or Bicep files to automate package deployments.
+
+For more information, see [Deployment technologies in Azure Functions](functions-deployment-technologies.md).
+
+### Deployment requirements
+
+There are a few requirements for running .NET functions in the isolated worker model in Azure, depending on the operating system:
+
+### [Windows](#tab/windows)
+
++ [FUNCTIONS_WORKER_RUNTIME](functions-app-settings.md#functions_worker_runtime) must be set to a value of `dotnet-isolated`.
++ [netFrameworkVersion](functions-app-settings.md#netframeworkversion) must be set to the desired version.
+
+### [Linux](#tab/linux)
+
++ [FUNCTIONS_WORKER_RUNTIME](functions-app-settings.md#functions_worker_runtime) must be set to a value of `dotnet-isolated`.
++ [`linuxFxVersion`](./functions-app-settings.md#linuxfxversion) must be set to the [correct base image](update-language-versions.md?tabs=azure-cli%2Clinux&pivots=programming-language-csharp#update-the-language-version), like `DOTNET-ISOLATED|8.0`.
+
+
+
+When you create your function app in Azure using the methods in the previous section, these required settings are added for you. When you create these resources [by using ARM templates or Bicep files for automation](functions-infrastructure-as-code.md), you must make sure to set them in the template.
+
+## Debugging
+
+When running locally using Visual Studio or Visual Studio Code, you're able to debug your .NET isolated worker project as normal. However, there are two debugging scenarios that don't work as expected.
+
+### Remote Debugging using Visual Studio
+
+Because your isolated worker process app runs outside the Functions runtime, you need to attach the remote debugger to a separate process. To learn more about debugging using Visual Studio, see [Remote Debugging](functions-develop-vs.md?tabs=isolated-process#remote-debugging).
+
+### Debugging when targeting .NET Framework
If your isolated project targets .NET Framework 4.8, the current preview scope requires manual steps to enable debugging. These steps aren't required if using another target framework.
-Your app should start with a call to `FunctionsDebugger.Enable();` as its first operation. This occurs in the `Main()` method before initializing a HostBuilder. Your `Program.cs` file should look similar to the following:
+Your app should start with a call to `FunctionsDebugger.Enable();` as its first operation. This occurs in the `Main()` method before initializing a HostBuilder. Your `Program.cs` file should look similar to this:
```csharp using System;
In your project directory (or its build output directory), run:
func host start --dotnet-isolated-debug ```
-This will start your worker, and the process will stop with the following message:
+This starts your worker, and the process stops with the following message:
```azurecli Azure Functions .NET Worker (PID: <process id>) initialized in debug mode. Waiting for debugger to attach...
Where `<process id>` is the ID for your worker process. You can now use Visual S
After the debugger is attached, the process execution resumes, and you'll be able to debug.
-## Remote Debugging using Visual Studio
-
-Because your isolated worker process app runs outside the Functions runtime, you need to attach the remote debugger to a separate process. To learn more about debugging using Visual Studio, see [Remote Debugging](functions-develop-vs.md?tabs=isolated-process#remote-debugging).
- ## Preview .NET versions
-Before a generally available release, a .NET version might be released in a "Preview" or "Go-live" state. See the [.NET Official Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) for details on these states.
+Before a generally available release, a .NET version might be released in a _Preview_ or _Go-live_ state. See the [.NET Official Support Policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) for details on these states.
-While it might be possible to target a given release from a local Functions project, function apps hosted in Azure might not have that release available. Azure Functions can only be used with "Preview" or "Go-live" releases noted in this section.
+While it might be possible to target a given release from a local Functions project, function apps hosted in Azure might not have that release available. Azure Functions can only be used with Preview or Go-live releases noted in this section.
-Azure Functions does not currently work with any "Preview" or "Go-live" .NET releases. See [Supported versions][supported-versions] for a list of generally available releases that you can use.
+Azure Functions doesn't currently work with any Preview or Go-live .NET releases. See [Supported versions][supported-versions] for a list of generally available releases that you can use.
### Using a preview .NET SDK
To use Azure Functions with a preview version of .NET, you need to update your project by:
1. Installing the relevant .NET SDK version in your development environment
1. Changing the `TargetFramework` setting in your `.csproj` file
-When deploying to a function app in Azure, you also need to ensure that the framework is made available to the app. To do so on Windows, you can use the following CLI command. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as "v8.0".
+When deploying to a function app in Azure, you also need to ensure that the framework is made available to the app. To do so on Windows, you can use the following CLI command. Replace `<groupName>` with the name of the resource group, and replace `<appName>` with the name of your function app. Replace `<framework>` with the appropriate version string, such as `v8.0`.
```azurecli az functionapp config set -g <groupName> -n <appName> --net-framework-version <framework>
az functionapp config set -g <groupName> -n <appName> --net-framework-version <f
Keep these considerations in mind when using Functions with preview versions of .NET:
-If you author your functions in Visual Studio, you must use [Visual Studio Preview](https://visualstudio.microsoft.com/vs/preview/), which supports building Azure Functions projects with .NET preview SDKs. You should also ensure you have the latest Functions tools and templates. To update these, navigate to `Tools->Options`, select `Azure Functions` under `Projects and Solutions`, and then click the `Check for updates` button, installing updates as prompted.
++ When you author your functions in Visual Studio, you must use [Visual Studio Preview](https://visualstudio.microsoft.com/vs/preview/), which supports building Azure Functions projects with .NET preview SDKs.
+
++ Make sure you have the latest Functions tools and templates. To update your tools:
+
+ 1. Navigate to **Tools** > **Options**, choose **Azure Functions** under **Projects and Solutions**.
+ 1. Select **Check for updates** and install updates as prompted.
+
++ During a preview period, your development environment might have a more recent version of the .NET preview than the hosted service. This can cause your function app to fail when deployed. To address this, you can specify the version of the SDK to use in [`global.json`](/dotnet/core/tools/global-json).
-During the preview period, your development environment might have a more recent version of the .NET preview than the hosted service. This can cause the application to fail when deployed. To address this, you can configure which version of the SDK to use in [`global.json`](/dotnet/core/tools/global-json). First, identify which versions you have installed using `dotnet --list-sdks` and note the version that matches what the service supports. Then you can run `dotnet new globaljson --sdk-version <sdk-version> --force`, substituting `<sdk-version>` for the version you noted in the previous command. For example, `dotnet new globaljson --sdk-version dotnet-sdk-8.0.100-preview.7.23376.3 --force` will cause the system to use the .NET 8 Preview 7 SDK when building your project.
+ 1. Run the `dotnet --list-sdks` command and note the preview version you're currently using during local development.
+ 1. Run the `dotnet new globaljson --sdk-version <SDK_VERSION> --force` command, where `<SDK_VERSION>` is the version you're using locally. For example, `dotnet new globaljson --sdk-version dotnet-sdk-8.0.100-preview.7.23376.3 --force` causes the system to use the .NET 8 Preview 7 SDK when building your project.
-Note that due to just-in-time loading of preview frameworks, function apps running on Windows can experience increased cold start times when compared against earlier GA versions.
+> [!NOTE]
+> Because of the just-in-time loading of preview frameworks, function apps running on Windows can experience increased cold start times when compared against earlier GA versions.
## Next steps
azure-functions Durable Functions Dotnet Isolated Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-dotnet-isolated-overview.md
This article is an overview of Durable Functions in the [.NET isolated worker](.
## Why use Durable Functions in the .NET isolated worker?
-Using this model lets you get all the great benefits that come with the Azure Functions .NET isolated worker process. For more information, see [here](../dotnet-isolated-process-guide.md#why-net-functions-isolated-worker-process). Additionally, this new SDK includes some new [features](#feature-improvements-over-in-process-durable-functions).
+Using this model lets you get all the great benefits that come with the Azure Functions .NET isolated worker process. For more information, see [Benefits of the isolated worker model](../dotnet-isolated-process-guide.md#benefits-of-the-isolated-worker-model). Additionally, this new SDK includes some new [features](#feature-improvements-over-in-process-durable-functions).
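As a minimal sketch of what an orchestration looks like in the isolated worker model, assuming the `Microsoft.Azure.Functions.Worker.Extensions.DurableTask` package and its `TaskOrchestrationContext` type:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.DurableTask;

public static class HelloCities
{
    // Orchestrator: calls the activity for each city and collects the results.
    [Function(nameof(HelloCities))]
    public static async Task<List<string>> RunOrchestrator(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        var outputs = new List<string>
        {
            await context.CallActivityAsync<string>(nameof(SayHello), "Tokyo"),
            await context.CallActivityAsync<string>(nameof(SayHello), "Seattle")
        };
        return outputs;
    }

    // Activity: does the actual work for a single input.
    [Function(nameof(SayHello))]
    public static string SayHello([ActivityTrigger] string city) => $"Hello, {city}!";
}
```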
### Feature improvements over in-process Durable Functions
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Indicates whether to use a specific [cold start](event-driven-scaling.md#cold-st
|||
|WEBSITE_USE_PLACEHOLDER|`1`|
+
+## WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED
+
+Indicates whether to use a specific [cold start](event-driven-scaling.md#cold-start) optimization when running .NET isolated worker process functions on the [Consumption plan](consumption-plan.md). Set to `0` to disable the cold-start optimization on the Consumption plan.
+
+|Key|Sample value|
+|||
+|WEBSITE_USE_PLACEHOLDER_DOTNETISOLATED|`1`|
+ ## WEBSITE\_VNET\_ROUTE\_ALL > [!IMPORTANT]
azure-functions Functions Develop Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs-code.md
If you want to get started right away, complete the [Visual Studio Code quicksta
These prerequisites are only required to [run and debug your functions locally](#run-functions-locally). They aren't required to create or publish projects to Azure Functions.
-+ The [Azure Functions Core Tools](functions-run-local.md), which enables an integrated local debugging experience. When using the Azure Functions extension, the easiest way to install Core Tools is by running the `Azure Functions: Install or Update Azure Functions Core Tools` command from the command pallet.
++ The [Azure Functions Core Tools](functions-run-local.md), which enables an integrated local debugging experience. When you have the Azure Functions extension installed, the easiest way to install or update Core Tools is by running the `Azure Functions: Install or Update Azure Functions Core Tools` command from the command palette.
::: zone pivot="programming-language-csharp"
+ The [C# extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) for Visual Studio Code.
These prerequisites are only required to [run and debug your functions locally](
The Functions extension lets you create a function app project, along with your first function. The following steps show how to create an HTTP-triggered function in a new Functions project. [HTTP trigger](functions-bindings-http-webhook.md) is the simplest function trigger template to demonstrate.
-1. 1. Choose the Azure icon in the Activity bar, then in the **Workspace (local)** area, select the **+** button, choose **Create Function** in the dropdown. When prompted, choose **Create new project**.
+1. Choose the Azure icon in the Activity bar. Then in the **Workspace (local)** area, select the **+** button, and choose **Create Function** from the dropdown. When prompted, choose **Create new project**.
:::image type="content" source="./media/functions-create-first-function-vs-code/create-new-project.png" alt-text="Screenshot of create a new project window.":::
The project template creates a project in your chosen language and installs requ
Depending on your language, these other files are created: ::: zone pivot="programming-language-csharp"
-An HttpExample.cs class library file, the contents of which vary depending on whether your project runs in an [isolated worker process](dotnet-isolated-process-guide.md#net-isolated-worker-model-project) or [in-process](functions-dotnet-class-library.md#functions-class-library-project) with the Functions host.
+An HttpExample.cs class library file, the contents of which vary depending on whether your project runs in an [isolated worker process](dotnet-isolated-process-guide.md#project-structure) or [in-process](functions-dotnet-class-library.md#functions-class-library-project) with the Functions host.
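A sketch of what the isolated worker version of HttpExample.cs might contain, assuming the `HttpRequestData`/`HttpResponseData` types from `Microsoft.Azure.Functions.Worker.Http` (the namespace and response text here are only illustrative):

```csharp
using System.Net;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;

namespace Company.Function
{
    public class HttpExample
    {
        // HTTP-triggered function that responds to GET and POST requests.
        [Function("HttpExample")]
        public HttpResponseData Run(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req)
        {
            var response = req.CreateResponse(HttpStatusCode.OK);
            response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
            response.WriteString("Welcome to Azure Functions!");
            return response;
        }
    }
}
```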
::: zone-end ::: zone pivot="programming-language-java" + A pom.xml file in the root folder that defines the project and deployment parameters, including project dependencies and the [Java version](functions-reference-java.md#java-versions). The pom.xml also contains information about the Azure resources that are created during a deployment.
At this point, you can do one of these tasks:
## Add a function to your project
-You can add a new function to an existing project by using one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
+You can add a new function to an existing project based on one of the predefined Functions trigger templates. To add a new function trigger, select F1 to open the command palette, and then search for and run the command **Azure Functions: Create Function**. Follow the prompts to choose your trigger type and define the required attributes of the trigger. If your trigger requires an access key or connection string to connect to a service, get it ready before you create the function trigger.
::: zone pivot="programming-language-csharp" The results of this action are that a new C# class library (.cs) file is added to your project.
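For example, choosing a Queue storage trigger template produces a class similar to the following sketch. The queue name, connection setting name, and the assumption that the `Microsoft.Azure.Functions.Worker.Extensions.Storage.Queues` package is installed are all illustrative:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

namespace Company.Function
{
    public class QueueExample
    {
        private readonly ILogger<QueueExample> _logger;

        public QueueExample(ILogger<QueueExample> logger)
        {
            _logger = logger;
        }

        // Runs whenever a message arrives on the hypothetical "myqueue-items" queue.
        [Function("QueueExample")]
        public void Run(
            [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string message)
        {
            _logger.LogInformation("Queue message received: {message}", message);
        }
    }
}
```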
The Azure Functions extension lets you run individual functions. You can run fun
For HTTP trigger functions, the extension calls the HTTP endpoint. For other kinds of triggers, it calls administrator APIs to start the function. The message body of the request sent to the function depends on the type of trigger. When a trigger requires test data, you're prompted to enter data in a specific JSON format.
-### Run functions in Azure.
+### Run functions in Azure
To execute a function in Azure from Visual Studio Code:
For more information, see [Local settings file](#local-settings).
#### <a name="debugging-functions-locally"></a>Debug functions locally
-To debug your functions, select F5. If you haven't already downloaded [Core Tools][Azure Functions Core Tools], you're prompted to do so. When Core Tools is installed and running, output is shown in the Terminal. This step is the same as running the `func start` Core Tools command from the Terminal, but with extra build tasks and an attached debugger.
+To debug your functions, select F5. If [Core Tools][Azure Functions Core Tools] isn't available, you're prompted to install it. When Core Tools is installed and running, output is shown in the Terminal. This step is the same as running the `func start` Core Tools command from the Terminal, but with extra build tasks and an attached debugger.
When the project is running, you can use the **Execute Function Now...** feature of the extension to trigger your functions as you would when the project is deployed to Azure. With the project running in debug mode, breakpoints are hit in Visual Studio Code as you would expect.
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
The following considerations apply to Core Tools installations:
+ When upgrading to the latest version of Core Tools, you should use the same method that you used for the original installation to perform the upgrade. For example, if you used an MSI on Windows, uninstall the current MSI and install the latest one. Or if you used npm, rerun the `npm install` command.
-+ Version 2.x and 3.x of Core Tools were used with versions 2.x and 3.x of the Functions runtime, which have reached their end of life (EOL). For more information, see [Azure Functions runtime versions overview](functions-versions.md).
++ Versions 2.x and 3.x of Core Tools were used with versions 2.x and 3.x of the Functions runtime, which have reached their end of support. For more information, see [Azure Functions runtime versions overview](functions-versions.md).
::: zone pivot="programming-language-csharp,programming-language-javascript"
+ Version 1.x of Core Tools is required when using version 1.x of the Functions Runtime, which is still supported. This version of Core Tools can only be run locally on Windows computers. If you're currently running on version 1.x, you should consider [migrating your app to version 4.x](migrate-version-1-version-4.md) today.
::: zone-end
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
zone_pivot_groups: programming-languages-set-functions
| 1.x | GA ([support ends September 14, 2026](https://aka.ms/azure-functions-retirements/hostv1)) | Supported only for C# apps that must use .NET Framework. This version is in maintenance mode, with enhancements provided only in later versions. **Support will end for version 1.x on September 14, 2026.** We highly recommend you [migrate your apps to version 4.x](migrate-version-1-version-4.md?pivots=programming-language-csharp), which supports .NET Framework 4.8, .NET 6, .NET 7, and .NET 8.| > [!IMPORTANT]
-> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support. For more information, see [Retired versions](#retired-versions).
+> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of extended support. For more information, see [Retired versions](#retired-versions).
This article details some of the differences between supported versions, how you can create each version, and how to change the version on which your functions run.
To learn more about extension bundles, see [Extension bundles](functions-binding
[!INCLUDE [functions-runtime-1x-retirement-note](../../includes/functions-runtime-1x-retirement-note.md)]
-These versions of the Functions runtime reached the end of life (EOL) for extended support on December 13, 2022.
+These versions of the Functions runtime reached the end of extended support on December 13, 2022.
| Version | Current support level | Previous support level |
| | | |
-| 3.x | Out-of-support |GA |
+| 3.x | Out-of-support | GA |
| 2.x | Out-of-support | GA | As soon as possible, you should migrate your apps to version 4.x to obtain full support. For a complete set of language-specific migration instructions, see [Migrate apps to Azure Functions version 4.x](migrate-version-3-version-4.md).
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
zone_pivot_groups: programming-languages-set-functions
Azure Functions version 4.x is highly backward compatible with version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. For more information about Functions runtime versions, see [Azure Functions runtime versions overview](./functions-versions.md).
> [!IMPORTANT]
-> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of life (EOL) of extended support. For more information, see [Retired versions](functions-versions.md#retired-versions).
+> As of December 13, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime have reached the end of extended support. For more information, see [Retired versions](functions-versions.md#retired-versions).
This article walks you through the process of safely migrating your function app to run on version 4.x of the Functions runtime. Because project upgrade instructions are language dependent, make sure to choose your development language from the selector at the [top of the article](#top).
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/set-runtime-version.md
Last updated 05/17/2023
# How to target Azure Functions runtime versions
-A function app runs on a specific version of the Azure Functions runtime. There have been four major versions: [4.x, 3.x, 2.x, and 1.x](functions-versions.md). By default, function apps are created in version 4.x of the runtime. This article explains how to configure a function app in Azure to run on the version you choose. For information about how to configure a local development environment for a specific version, see [Code and test Azure Functions locally](functions-run-local.md).
+A function app runs on a specific [version of the Azure Functions runtime](functions-versions.md). By default, function apps are created in version 4.x of the runtime. This article explains how to configure a function app in Azure to run on the version you choose. For information about how to configure a local development environment for a specific version, see [Code and test Azure Functions locally](functions-run-local.md).
The way that you manually target a specific version depends on whether you're running Windows or Linux.
When a new version is publicly available, a prompt in the portal gives you the c
The following table shows the `FUNCTIONS_EXTENSION_VERSION` values for each major version to enable automatic updates:
-| Major version | `FUNCTIONS_EXTENSION_VERSION` value | Additional configuration |
+| Major version<sup>2</sup> | `FUNCTIONS_EXTENSION_VERSION` value | Additional configuration |
| - | -- | - |
| 4.x | `~4` | [On Windows, enable .NET 6](./migrate-version-3-version-4.md#upgrade-your-function-app-in-azure)<sup>1</sup> |
-| 3.x<sup>2</sup>| `~3` | |
-| 2.x<sup>2</sup>| `~2` | |
| 1.x<sup>3</sup>| `~1` | |
-<sup>1</sup> If using a later version with the .NET Isolated worker model, instead enable that version.
-
-<sup>2</sup>Reached the end of life (EOL) for extended support on December 13, 2022. For a detailed support statement about end-of-life versions, see [this migration article](migrate-version-3-version-4.md).
-
-<sup>3</sup>[Support for version 1.x of the Azure Functions runtime ends on September 14, 2026](https://aka.ms/azure-functions-retirements/hostv1). Before that date, [migrate your version 1.x apps to version 4.x](./migrate-version-1-version-4.md) to maintain full support.
+<sup>1</sup> If using a later version with the .NET Isolated worker model, instead enable that version.
+<sup>2</sup> Reached the end of extended support on December 13, 2022. For a detailed support statement about end-of-support versions, see [this migration article](migrate-version-3-version-4.md).
+<sup>3</sup> [Support for version 1.x of the Azure Functions runtime ends on September 14, 2026](https://aka.ms/azure-functions-retirements/hostv1). Before that date, [migrate your version 1.x apps to version 4.x](./migrate-version-1-version-4.md) to maintain full support.
A change to the runtime version causes a function app to restart.
azure-functions Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/supported-languages.md
Title: Supported languages in Azure Functions
-description: Learn which languages are supported for developing your Functions in Azure, the support level of the various language versions, and potential end-of-life dates.
+description: Learn which languages are supported for developing your Functions in Azure, the support level of the various language versions, and potential end-of-support dates.
Last updated 08/27/2023
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (latest)
+### [3.1.0] (January 12, 2024)
+
+#### New features (3.1.0)
+
+- Added a new control, `atlas.control.ScaleControl`, to display a scale bar on the map.
+
+- Introduced functions for accessing, updating, and deleting a feature state.
+
+#### Bug fixes (3.1.0)
+
+- Addressed the issue of layer ordering after a style update, when a user layer is inserted before another user layer.
+
+- **\[BREAKING\]** Aligned the polygon fill pattern behavior with MapLibre. Now, the `fillPattern` option consistently disables the `fillColor` option. When configuring `fillColor` for polygon layers, ensure that `fillPattern` is set to `undefined`.
+ ### [3.0.3] (November 29, 2023) #### New features (3.0.3)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2
+### [2.3.6] (January 12, 2024)
+
+#### New features (2.3.6)
+
+- Added a new control, `atlas.control.ScaleControl`, to display a scale bar on the map.
+
+- Introduced functions for accessing, updating, and deleting a feature state.
+
+#### Bug fixes (2.3.6)
+
+- Addressed the issue of layer ordering after a style update, when a user layer is inserted before another user layer.
+ ### [2.3.5] (November 29, 2023) #### Other changes (2.3.5)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.1.0]: https://www.npmjs.com/package/azure-maps-control/v/3.1.0
[3.0.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.3 [3.0.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.2 [3.0.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.1
Stay up to date on Azure Maps:
[3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3 [3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2 [3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.3.6]: https://www.npmjs.com/package/azure-maps-control/v/2.3.6
[2.3.5]: https://www.npmjs.com/package/azure-maps-control/v/2.3.5 [2.3.4]: https://www.npmjs.com/package/azure-maps-control/v/2.3.4 [2.3.3]: https://www.npmjs.com/package/azure-maps-control/v/2.3.3
azure-maps Release Notes Spatial Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-spatial-module.md
This document contains information about new features and other changes to the Azure Maps Spatial IO Module.
+## [0.1.7]
+
+### New features (0.1.7)
+
+- Introduced a new customization option, `bubbleRadiusFactor`, to enable users to adjust the default multiplier for the bubble radius in a SimpleDataLayer.
+ ## [0.1.6] ### Other changes (0.1.6)
Stay up to date on Azure Maps:
> [Azure Maps Blog] [WmsClient.getFeatureInfoHtml]: /javascript/api/azure-maps-spatial-io/atlas.io.ogc.wfsclient#azure-maps-spatial-io-atlas-io-ogc-wfsclient-getfeatureinfo
+[0.1.7]: https://www.npmjs.com/package/azure-maps-spatial-io/v/0.1.7
[0.1.6]: https://www.npmjs.com/package/azure-maps-spatial-io/v/0.1.6 [0.1.5]: https://www.npmjs.com/package/azure-maps-spatial-io/v/0.1.5 [0.1.4]: https://www.npmjs.com/package/azure-maps-spatial-io/v/0.1.4
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommended to always update to the latest version, or opt in to the
## Version details
| Release Date | Release notes | Windows | Linux |
|:|:|:|:|
-| December 2023 |**Windows** <ul><li>Support new settings that control agent disk size and file location</li><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4|
+| December 2023 |**Windows** <ul><li>Support new settings that control agent disk size</li><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4|
| October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multi-tenant mode</li><li>AMA installer will not install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11| | September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (aka GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None | | August 2023| **Windows** <ul><li>AMA: Allow prefixes in the tag names to handle regression</li><li>Updating package version for AzSecPack 4.28 release</li></ui>|1.19.0| None |
azure-monitor Azure Monitor Agent Mma Removal Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-mma-removal-tool.md
# Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from Log Analytics Agent to Azure Monitor Agent and track the status of the migration in my account.
-# MMA Discovery and Removal Tool
+# MMA Discovery and Removal Tool (Preview)
After you migrate your machines to AMA, you need to remove the MMA agent to avoid duplication of logs. The AzTS MMA Discovery and Removal Utility can centrally remove the MMA extension from Azure Virtual Machines (VMs), Azure Virtual Machine Scale Sets, and Azure Arc Servers across a tenant. The utility works in two steps:
1. Discovery: First, the utility creates an inventory of all machines that have the MMA agent installed. We recommend that no new VMs, Virtual Machine Scale Sets, or Azure Arc Servers with the MMA extension are created while the utility is running.
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data that's sent to you
## Limitations - Custom logs created using the [HTTP Data Collector API](./data-collector-api.md) can't be exported, including text-based logs consumed by Log Analytics agent. Custom logs created using [data collection rules](./logs-ingestion-api-overview.md), including text-based logs, can be exported. -- Data export will gradually support more tables, but is currently limited to tables specified in the [supported tables](#supported-tables) section.
+- Data export will gradually support more tables, but is currently limited to tables specified in the [supported tables](#supported-tables) section. You can include tables that aren't yet supported in rules, but no data will be exported for them until the tables are supported.
- You can define up to 10 enabled rules in your workspace, each of which can include multiple tables. You can create more rules in the workspace in a disabled state.
- Destinations must be in the same region as the Log Analytics workspace.
- The Storage Account must be unique across rules in the workspace.
If you've configured your Storage Account to allow access from selected networks
- Use Premium or Dedicated tiers for higher throughput. ### Create or update a data export rule
-A data export rule defines the destination and tables for which data is exported. You can create 10 rules in the **Enabled** state in your workspace. More rules are allowed in the **Disabled** state. The Storage Account must be unique across rules in the workspace. Multiple rules can use the same Event Hubs namespace when you're sending to separate Event Hubs.
-
-> [!NOTE]
-> - You can include tables that aren't yet supported in rules, but no data will be exported for them until the tables are supported.
-> - Export to a Storage Account: A separate container is created in the Storage Account for each table.
-> - Export to Event Hubs: If an Event Hub name isn't provided, a separate Event Hub is created for each table. The [number of supported Event Hubs in Basic and Standard namespace tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When you're exporting more than 10 tables to these tiers, either split the tables between several export rules to different Event Hubs namespaces or provide an Event Hub name in the rule to export all tables to it.
+A data export rule defines the destination and tables for which data is exported. Rule provisioning takes about 30 minutes before the export operation is initiated. Considerations for data export rules:
+- The Storage Account must be unique across rules in the workspace.
+- Multiple rules can use the same Event Hubs namespace when you're sending to separate Event Hubs.
+- Export to a Storage Account: A separate container is created in the Storage Account for each table.
+- Export to Event Hubs: If an Event Hub name isn't provided, a separate Event Hub is created for each table. The [number of supported Event Hubs in Basic and Standard namespace tiers is 10](../../event-hubs/event-hubs-quotas.md#common-limits-for-all-tiers). When you're exporting more than 10 tables to these tiers, either split the tables between several export rules to different Event Hubs namespaces or provide an Event Hub name in the rule to export all tables to it.
# [Azure portal](#tab/portal)
azure-netapp-files Azure Netapp Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md
na Previously updated : 01/26/2023 Last updated : 01/11/2024 # What is Azure NetApp Files?
-Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides volumes as a service for which you can create NetApp accounts, capacity pools, select service and performance levels, create volumes, and manage data protection. It allows you to create and manage high-performance, highly available, and scalable file shares, using the same protocols and tools that you're familiar with and enterprise applications rely on on-premises. Azure NetApp Files supports SMB and NFS protocols and can be used for various use cases such as file sharing, home directories, databases, high-performance computing and more. Additionally, it also provides built-in availability, data protection and disaster recovery capabilities.
+Azure NetApp Files is an Azure native, first-party, enterprise-class, high-performance file storage service. It provides _Volumes as a service_, with which you can create NetApp accounts, capacity pools, and volumes, select service and performance levels, and manage data protection. It allows you to create and manage high-performance, highly available, and scalable file shares, using the same protocols and tools that you're familiar with and that enterprise applications rely on on-premises.
-## High performance
+Azure NetApp Files' key attributes are:
-Azure NetApp Files is designed to provide high-performance file storage for enterprise workloads. Key features that contribute to the high performance include:
+- Performance, cost optimization, and scale
+- Simplicity and availability
+- Data management and security
-* High throughput:
- Azure NetApp Files supports high throughput for large file transfers and can handle many random read and write operations with high concurrency, over the Azure high-speed network. This functionality helps to ensure that your workloads aren't bottlenecked by VM disk storage performance. Azure NetApp Files supports multiple service levels, such that you can choose the optimal mix of capacity, performance and cost.
-* Low latency:
- Azure NetApp Files is built on top of an all-flash bare-metal fleet, which is optimized for low latency, high throughput, and random IO. This functionality helps to ensure that your workloads experience optimal (low) storage latency.
-* Protocols:
- Azure NetApp Files supports both SMB, NFSv3/NFSv4.1, and dual-protocol volumes, which are the most common protocols used in enterprise environments. This functionality allows you to use the same protocols and tools that you use on-premises, which helps to ensure compatibility and ease of use. It supports NFS `nconnect` and SMB multichannel for increased network performance.
-* Scale:
- Azure NetApp Files can scale up or down to meet the performance and capacity needs of your workloads. You can increase or decrease the size of your volumes as needed, and the service automatically provisions the necessary throughput.
-* Changing of service levels:
- With Azure NetApp Files, you can dynamically and online change your volumesΓÇÖ service levels to tune your capacity and performance needs whenever you need to. This functionality can even be fully automated through APIs.
-* Optimized for workloads:
- Azure NetApp Files is optimized for workloads like HPC, IO-intensive, and database scenarios. It provides high performance, high availability, and scalability for demanding workloads.
+Azure NetApp Files supports SMB, NFS, and dual-protocol volumes and can be used for various use cases such as:
+- file sharing
+- home directories
+- databases
+- high-performance computing and more
-All these features work together to provide a high-performance file storage solution that can handle the demands of enterprise workloads. They help to ensure that your workloads experience optimal (low) storage latency.
+For more information about workload solutions leveraging Azure NetApp Files, see [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md).
-## High availability
+## Performance, cost optimization, and scale
-Azure NetApp Files is designed to provide high availability for your file storage needs. Key features that contribute to the high availability include:
+Azure NetApp Files is designed to provide high-performance file storage for enterprise workloads, along with functionality for cost optimization and scale. Key features that contribute to these capabilities include:
-* Automatic failover:
- Azure NetApp Files supports automatic failover within the bare-metal fleet if there's disruption or maintenance event. This functionality helps to ensure that your data is always available, even in a failure.
-* Multi-protocol access:
- Azure NetApp Files supports both SMB and NFS protocols, helping to ensure that your applications can access your data, regardless of the protocol they use.
-* Self-healing:
- Azure NetApp Files is built on top of a self-healing storage infrastructure, which helps to ensure that your data is always available and recoverable.
-* Support for Availability Zones:
- Volumes can be deployed in an Availability Zones of choice, enabling you to build HA application architectures for increased application availability.
-* Data replication:
- Azure NetApp Files supports data replication between different Azure regions and Availability Zones, which helps to ensure that your data is always available, even in an outage.
-* Azure NetApp Files provides a high [availability SLA](https://azure.microsoft.com/support/legal/sla/netapp/).
+| Functionality | Description | Benefit |
+| - | - | - |
+| In-Azure bare-metal flash performance | Fast and reliable all-flash performance with submillisecond latency | Run performance-intensive workloads in the cloud with on-premises infrastructure-level performance
+| Multi-protocol support | Supports multiple protocols including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1 and simultaneous dual-protocol | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. |
+| Three flexible performance tiers (standard, premium, ultra) | Three performance tiers with dynamic service level change capability based on workload needs, including cool access for cold data | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources.
+| Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
+| 1-TiB minimum capacity pool size | 1-TiB capacity pool is a reduced size storage pool compared to the initial 4 TiB minimum | Save money by starting with a smaller storage footprint and lower entry point, without sacrificing performance or availability. Scale storage based on growth without high upfront costs.
+| 1000-TiB maximum capacity pool | 1000-TiB capacity pool is an increased storage pool compared to the initial 500 TiB maximum | Reduce waste by creating larger, pooled capacity and performance budget and share/distribute across volumes.
+| 100-500 TiB large volumes | Store large volumes of data up to 500 TiB in a single volume | Manage large data sets and high-performance workloads with ease.
+| User and group quotas | Set quotas on storage usage for individual users and groups | Control storage usage and optimize resource allocation.
+| Virtual machine (VM) networked storage performance | Higher VM network throughput compared to disk IO limits enable more-demanding workloads on smaller Azure VMs | Improve application performance at a smaller virtual machine footprint, improving overall efficiency and lowering application license cost.
+| Deep workload readiness | Seamless deployment and migration of any-size workload with well-documented deployment guides | Easily migrate any workload of any size to the platform. Enjoy a seamless, cost-effective deployment and migration experience.
+| Datastores for Azure VMware Solution | Use Azure NetApp Files as a storage solution for VMware workloads in Azure, reducing the need for superfluous compute nodes normally included with Azure VMware Solution expansions | Save money by eliminating the need for unnecessary compute nodes when expanding storage, resulting in significant cost savings.
+| Standard storage with cool access | Use the cool access option of Azure NetApp Files Standard service level to move inactive data transparently from Azure NetApp Files Standard service-level storage (the hot tier) to an Azure storage account (the cool tier) | Save money by transitioning data that resides within Azure NetApp Files volumes (the hot tier) by moving blocks to the lower cost storage (the cool tier). |
-All these features work together to provide a high-availability file storage solution to ensure that your data is always available, recoverable, and accessible to your applications, even in an outage.
+These features work together to provide a high-performance file storage solution for the demands of enterprise workloads. They help to ensure that your workloads experience optimal (low) storage latency at the right cost and scale.
-## Data protection
+## Simplicity and availability
-Azure NetApp Files provides built-in data protection to help ensure the safe storage, availability, and recoverability of your data. Key features include:
+Azure NetApp Files is designed to provide simplicity and high availability for your file storage needs. Key features include:
-* Snapshot copies:
- Azure NetApp Files allows you to create point-in-time snapshots of your volumes, which can be restored or reverted to a previous state. The snapshots are incremental. That is, they only capture the changes made since the last snapshot, at the block level, which helps to drastically reduce storage consumption.
-* Backup and restore:
- Azure NetApp Files provides integrated backup, which allows you to create backups of your volume snapshots to lower-cost Azure storage and restore them if data loss happens.
-* Data replication:
- Azure NetApp Files supports data replication between different Azure regions and Availability Zones, which helps to ensure high availability and disaster recovery. Replication can be done asynchronously, and the service can fail over to a secondary region or zone in an outage.
-* Security:
- Azure NetApp Files provides built-in security features such as RBAC/IAM, Active Directory Domain Services (AD DS), Microsoft Entra Domain Services and LDAP integration, and Azure Policy. This functionality helps to protect data from unauthorized access, breaches, and misconfigurations.
-
-All these features work together to provide a comprehensive data protection solution that helps to ensure that your data is always available, recoverable, and secure.
+| Functionality | Description | Benefit |
+| - | - | - |
+| Volumes as a Service | Provision and manage volumes in minutes with a few clicks like any other Azure service | Enables businesses to quickly and easily provision and manage volumes without the need for dedicated hardware or complex configurations.
+| Native Azure Integration | Integration with the Azure portal, REST, CLI, billing, monitoring, and security | Simplifies management and ensures consistency with other Azure services, while providing a familiar interface and integration with existing tools and workflows.
+| High availability | Azure NetApp Files provides a [high availability SLA](https://azure.microsoft.com/support/legal/sla/netapp/) with automatic failover | Ensures that data is always available and accessible, avoiding downtime and disruption to business operations.
+| Application migration | Migrate applications to Azure without refactoring | Enables businesses to move their workloads to Azure quickly and easily without the need for costly and time-consuming application refactoring or redesign.
+| Cross-region and cross-zone replication | Replicate data between regions or zones | Provide disaster recovery capabilities and ensure data availability and redundancy across different Azure regions or availability zones.
+| Application volume groups | Application volume groups enable you to deploy all application volumes according to best practices in a single one-step and optimized workflow | Simplified multi-volume deployment for applications, ensuring volumes and mount points are optimized and adhere to best practices in a single step, saving time and effort.
+| Programmatic deployment | Automate deployment and management with APIs and SDKs | Enables businesses to integrate Azure NetApp Files with their existing automation and management tools, reducing the need for manual intervention and improving efficiency.
+| Fault-tolerant bare metal | Built on a fault-tolerant bare metal fleet powered by ONTAP | Ensures high performance and reliability by leveraging a robust, fault-tolerant storage platform and powerful data management capabilities provided by ONTAP.
+| Azure native billing | Integrates natively with Azure billing, providing a seamless and easy-to-use billing experience, based on hourly usage | Easily and accurately manage and track the cost of using the service, allowing for seamless budgeting and cost control. Easily track usage and expenses directly from the Azure portal, providing a unified experience for billing and management. |
+
+These features work together to provide a simple-to-use, highly available file storage solution that helps ensure your data is easy to manage and always available, recoverable, and accessible to your applications, even in an outage.
+
+## Data management and security
+
+Azure NetApp Files provides built-in data management and security capabilities to help ensure the secure storage, availability, and manageability of your data. Key features include:
+
+| Functionality | Description | Benefit |
+| - | - | - |
+| Efficient snapshots and backup | Advanced data protection and faster recovery of data by leveraging block-efficient, incremental snapshots and vaulting | Quickly and easily backup data and restore to a previous point in time, minimizing downtime and reducing the risk of data loss.
+| Snapshot restore to a new volume | Instantly restore data from a previously taken snapshot quickly and accurately | Reduce downtime and save time and resources that would otherwise be spent on restoring data from backups.
+| Snapshot revert | Revert volume to the state it was in when a previous snapshot was taken | Easily and quickly recover data (in-place) to a known good state, ensuring business continuity and maintaining productivity.
+| Application-aware snapshots and backup | Ensure application-consistent snapshots with guaranteed recoverability | Automate snapshot creation and deletion processes, reducing manual efforts and potential errors while increasing productivity by allowing teams to focus on other critical tasks.
+| Efficient cloning | Create and access clones in seconds | Save time and reduce costs for test, development, system refresh and analytics.
+| Data-in-transit encryption | Secure data transfers with protocol encryption | Ensure the confidentiality and integrity of data being transmitted, with peace of mind that information is safe and secure.
+| Data-at-rest encryption | Data-at-rest encryption with platform- or customer-managed keys | Prevent unrestrained access to stored data, meet compliance requirements and enhance data security.
+| Azure platform integration and compliance certifications | Compliance with regulatory requirements and Azure platform integration | Adhere to Azure standards and regulatory compliance, ensure audit and governance completion.
+| Azure Identity and Access Management (IAM) | Azure role-based access control (RBAC) service allows you to manage permissions for resources at any level | Simplify access management and improve compliance with Azure-native RBAC, empowering you to easily control user access to configuration management.
+| AD/LDAP authentication, export policies & access control lists (ACLs) | Authenticate and authorize access to data using existing AD/LDAP credentials and allow for the creation of export policies and ACLs to govern data access and usage | Prevent data breaches and ensure compliance with data security regulations, with enhanced granular control over access to data volumes, directories and files. |
+
+These features work together to provide a comprehensive data management solution that helps to ensure that your data is always available, recoverable, and secure.
## Next steps * [Understand the storage hierarchy of Azure NetApp Files](azure-netapp-files-understand-storage-hierarchy.md) * [Quickstart: Set up Azure NetApp Files and create an NFS volume](azure-netapp-files-quickstart-set-up-account-create-volumes.md)
-* [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md)
+* [Understand NAS concepts in Azure NetApp Files](network-attached-storage-concept.md)
+* [Register for NetApp Resource Provider](azure-netapp-files-register.md)
+* [Solution architectures using Azure NetApp Files](azure-netapp-files-solution-architectures.md)
* [Azure NetApp Files videos](azure-netapp-files-videos.md)
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
The following diagram demonstrates how customer-managed keys work with Azure Net
## Considerations
-> [!IMPORTANT]
-> Customer-managed keys for Azure NetApp Files volume encryption is currently in preview. You need to submit a waitlist request for accessing the feature through the **[Customer-managed keys for Azure NetApp Files volume encryption](https://aka.ms/anfcmkpreviewsignup)** page. Customer-managed keys feature is expected to be enabled within a week after you submit the waitlist request. You can check the status of feature registration by using the following command:
->
-> ```azurepowershell-interactive
-> Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAzureKeyVaultEncryption
->
-> FeatureName ProviderName RegistrationState
-> -- --
-> ANFAzureKeyVaultEncryption Microsoft.NetApp Registered
-> ```
* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption.
* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volumes configured using Basic network features. Follow the instructions in [Set the Network Features option](configure-network-features.md#set-the-network-features-option) on the volume creation page.
* For increased security, you can select the **Disable public access** option within the network settings of your key vault. When selecting this option, you must also select **Allow trusted Microsoft services to bypass this firewall** to permit the Azure NetApp Files service to access your encryption key.
-* Automatic Managed System Identity (MSI) certificate renewal isn't currently supported. It is recommended to set up an Azure monitor alert for when the MSI certificate is going to expire.
+* Automatic Managed System Identity (MSI) certificate renewal isn't currently supported. It's recommended you create an Azure monitor alert to notify you when the MSI certificate is set to expire.
* The MSI certificate has a lifetime of 90 days. It becomes eligible for renewal after 46 days. **After 90 days, the certificate is no longer valid and the customer-managed key volumes under the NetApp account will go offline.**
    * To renew, you need to call the NetApp account operation `renewCredentials` if eligible for renewal. If it's not eligible, an error message communicates the date of eligibility.
    * Version 2.42 or later of the Azure CLI supports running the `renewCredentials` operation with the [az netappfiles account command](/cli/azure/netappfiles/account#az-netappfiles-account-renew-credentials). For example:
The following diagram demonstrates how customer-managed keys work with Azure Net
    `az netappfiles account renew-credentials --account-name myaccount --resource-group myresourcegroup`
    * If the account isn't eligible for MSI certificate renewal, an error message communicates the date and time when the account is eligible. It's recommended you run this operation periodically (for example, daily) to prevent the certificate from expiring and from the customer-managed key volume going offline.
-
* Applying Azure network security groups on the private link subnet to Azure Key Vault isn't supported for Azure NetApp Files customer-managed keys. Network security groups don't affect connectivity to Private Link unless `Private endpoint network policy` is enabled on the subnet. It's recommended to keep this option disabled.
* If Azure NetApp Files fails to create a customer-managed key volume, error messages are displayed. Refer to the [Error messages and troubleshooting](#error-messages-and-troubleshooting) section for more information.
* If Azure Key Vault becomes inaccessible, Azure NetApp Files loses its access to the encryption keys and the ability to read or write data to volumes enabled with customer-managed keys. In this situation, create a support ticket to have access manually restored for the affected volumes.
Azure NetApp Files customer-managed keys is supported for the following regions:
## Requirements
-Before creating your first customer-managed key volume, you must have set up:
+Before creating your first customer-managed key volume, you must set up:
* An [Azure Key Vault](../key-vault/general/overview.md), containing at least one key. * The key vault must have soft delete and purge protection enabled. * The key must be of type RSA. * The key vault must have an [Azure Private Endpoint](../private-link/private-endpoint-overview.md). * The private endpoint must reside in a different subnet than the one delegated to Azure NetApp Files. The subnet must be in the same VNet as the one delegated to Azure NetApp.
+* You must register the feature before you can use customer-managed keys.
For more information about Azure Key Vault and Azure Private Endpoint, refer to: * [Quickstart: Create a key vault ](../key-vault/general/quick-create-portal.md)
For more information about Azure Key Vault and Azure Private Endpoint, refer to:
* [Network security groups](../virtual-network/network-security-groups-overview.md) * [Manage network policies for private endpoints](../private-link/disable-private-endpoint-network-policy.md)
+## Register the feature
+
+You must register customer-managed keys before using it for the first time.
+
+1. Register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAzureKeyVaultEncryption
+ ```
+
+2. Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAzureKeyVaultEncryption
+ ```
+You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
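For reference, the equivalent Azure CLI calls look like the following sketch:

```azurecli
# Register the customer-managed keys feature.
az feature register --namespace Microsoft.NetApp --name ANFAzureKeyVaultEncryption

# Check the registration state; wait until it shows Registered.
az feature show --namespace Microsoft.NetApp --name ANFAzureKeyVaultEncryption
```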
+ ## Configure a NetApp account to use customer-managed keys
+### [Portal](#tab/azure-portal)
+ 1. In the Azure portal, under Azure NetApp Files, select **Encryption**. The **Encryption** page enables you to manage encryption settings for your NetApp account. It includes an option to let you set your NetApp account to use your own encryption key, which is stored in [Azure Key Vault](../key-vault/general/basic-concepts.md). This setting provides a system-assigned identity to the NetApp account, and it adds an access policy for the identity with the required key permissions.
* `Microsoft.KeyVault/vaults/keys/decrypt/action`

    The user-assigned identity you select is added to your NetApp account. Due to the customizable nature of role-based access control (RBAC), the Azure portal doesn't configure access to the key vault. See [Provide access to Key Vault keys, certificates, and secrets with Azure role-based access control](../key-vault/general/rbac-guide.md) for details on configuring Azure Key Vault.
-1. After selecting **Save** button, you'll receive a notification communicating the status of the operation. If the operation was not successful, an error message displays. Refer to [error messages and troubleshooting](#error-messages-and-troubleshooting) for assistance in resolving the error.
+1. Select **Save**, then observe the notification communicating the status of the operation. If the operation isn't successful, an error message displays. Refer to [error messages and troubleshooting](#error-messages-and-troubleshooting) for assistance in resolving the error.
+
+### [Azure CLI](#tab/azure-cli)
+
+The process to configure a NetApp account with customer-managed keys in the Azure CLI depends on whether you are using a [system-assigned identity](#use-a-system-assigned-identity) or a [user-assigned identity](#use-a-new-user-assigned-identity).
+
+#### Use a system-assigned identity
+
+1. Update your NetApp account to use a system-assigned identity.
+
+ ```azurecli
+ az netappfiles account update \
+ --name <account_name> \
+ --resource-group <resource_group> \
+ --identity-type SystemAssigned
+ ```
+
+1. To use an access policy, create a variable that includes the principal ID of the account identity, then run `az keyvault set-policy` and assign permissions of "Get", "Encrypt", and "Decrypt".
+
+ ```azurecli
+ netapp_account_principal=$(az netappfiles account show \
+ --name <account_name> \
+ --resource-group <resource_group> \
+ --query identity.principalId \
+ --output tsv)
+
+ az keyvault set-policy \
+ --name <key_vault_name> \
+    --resource-group <resource_group> \
+ --object-id $netapp_account_principal \
+ --key-permissions get encrypt decrypt
+ ```
+
+1. Update the NetApp account with your key vault.
+
+ ```azurecli
+ key_vault_uri=$(az keyvault show \
+    --name <key_vault_name> \
+ --resource-group <resource_group> \
+ --query properties.vaultUri \
+ --output tsv)
+ az netappfiles account update --name <account_name> \
+ --resource-group <resource_group> \
+ --key-source Microsoft.Keyvault \
+ --key-vault-uri $key_vault_uri \
+ --key-name <key>
+ ```
+
+#### Use a new user-assigned identity
+
+1. Create a new user-assigned identity.
+
+ ```azurecli
+ az identity create \
+ --name <identity_name> \
+ --resource-group <resource_group>
+ ```
+
+1. Set an access policy for the key vault.
+ ```azurecli
+ user_assigned_identity_principal=$(az identity show \
+ --name <identity_name> \
+ --resource-group <resource_group> \
+ --query properties.principalId \
+    --output tsv)
+ az keyvault set-policy \
+ --name <key_vault_name> \
+    --resource-group <resource_group> \
+ --object-id $user_assigned_identity_principal \
+ --key-permissions get encrypt decrypt
+ ```
+
+ >[!NOTE]
+ >You can alternately [use role-based access control to grant access to the key vault](#use-role-based-access-control).
+
+1. Assign the user-assigned identity to the NetApp account and update the key vault encryption.
+
+ ```azurecli
+ key_vault_uri=$(az keyvault show \
+    --name <key_vault_name> \
+ --resource-group <resource_group> \
+ --query properties.vaultUri \
+ --output tsv)
+ user_assigned_identity=$(az identity show \
+ --name <identity_name> \
+ --resource-group <resource_group> \
+ --query id \
+    --output tsv)
+ az netappfiles account update --name <account_name> \
+ --resource-group <resource_group> \
+ --identity-type UserAssigned \
+ --key-source Microsoft.Keyvault \
+ --key-vault-uri $key_vault_uri \
+ --key-name <key> \
+    --keyvault-resource-id <key_vault_resource_id> \
+ --user-assigned-identity $user_assigned_identity
+ ```
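As an optional check (an addition to the steps above), you can retrieve the account afterward and inspect the identity and encryption-related properties in the output:

```azurecli
az netappfiles account show \
    --name <account_name> \
    --resource-group <resource_group>
```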
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+The process to configure a NetApp account with customer-managed keys in Azure PowerShell depends on whether you are using a [system-assigned identity](#enable-access-for-system-assigned-identity) or a [user-assigned identity](#enable-access-for-user-assigned-identity).
+
+#### Enable access for system-assigned identity
+
+1. Update your NetApp account to use system-assigned identity.
+
+ ```azurepowershell
+ $netappAccount = Update-AzNetAppFilesAccount -ResourceGroupName <resource_group> -Name <account_name> -AssignIdentity
+ ```
+
+1. To use an access policy, run `Set-AzKeyVaultAccessPolicy` with the key vault name, the principal ID of the account identity, and the permissions "Get", "Encrypt", and "Decrypt".
+
+ ```azurepowershell
+ Set-AzKeyVaultAccessPolicy -VaultName <key_vault_name> -ResourceGroupname <resource_group> -ObjectId $netappAccount.Identity.PrincipalId -PermissionsToKeys get,encrypt,decrypt
+ ```
+
+1. Update your NetApp account with the key vault information.
+
+ ```azurepowershell
+    Update-AzNetAppFilesAccount -ResourceGroupName $netappAccount.ResourceGroupName -AccountName $netappAccount.Name -KeyVaultEncryption -KeyVaultUri <keyVaultUri> -KeyName <keyName>
+ ```
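As an optional check (an addition to the steps above), retrieve the account and inspect the identity and key vault properties in the output:

```azurepowershell
Get-AzNetAppFilesAccount -ResourceGroupName <resource_group> -Name <account_name>
```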
+
+#### Enable access for user-assigned identity
+
+1. Create a new user-assigned identity.
+
+ ```azurepowershell
+    $userId = New-AzUserAssignedIdentity -ResourceGroupName <resource_group> -Name <identity_name>
+ ```
+
+1. Assign the access policy to the key vault.
+
+ ```azurepowershell
+ Set-AzKeyVaultAccessPolicy -VaultName <key_vault_name> `
+ -ResourceGroupname <resource_group> `
+ -ObjectId $userId.PrincipalId `
+ -PermissionsToKeys get,encrypt,decrypt `
+ -BypassObjectIdValidation
+ ```
+
+ >[!NOTE]
+ >You can alternately [use role-based access control to grant access to the key vault](#use-role-based-access-control).
+
+1. Assign the user-assigned identity to the NetApp account and update the key vault encryption.
+
+ ```azurepowershell
+ $netappAccount = Update-AzNetAppFilesAccount -ResourceGroupName <resource_group> `
+ -Name <account_name> `
+ -IdentityType UserAssigned `
+ -UserAssignedIdentityId $userId.Id `
+ -KeyVaultEncryption `
+ -KeyVaultUri <keyVaultUri> `
+ -KeyName <keyName> `
+ -EncryptionUserAssignedIdentity $userId.Id
+ ```
+ ## Use role-based access control
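If the key vault uses the Azure RBAC permission model instead of access policies, the account identity needs a role that covers the get, encrypt, and decrypt key operations. The following Azure CLI sketch is illustrative only: it reuses the `$netapp_account_principal` variable from the CLI steps above and assumes a built-in role such as **Key Vault Crypto User**; verify which role your scenario actually requires.

```azurecli
key_vault_id=$(az keyvault show \
    --name <key_vault_name> \
    --query id \
    --output tsv)

az role assignment create \
    --assignee-object-id $netapp_account_principal \
    --assignee-principal-type ServicePrincipal \
    --role "Key Vault Crypto User" \
    --scope $key_vault_id
```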
azure-netapp-files Create Cross Zone Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-cross-zone-replication.md
Cross-zone replication is currently in preview. You need to register the feature
2. Check the status of the feature registration: > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to`Registered`. Wait until the status is **Registered** before continuing.
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
```azurepowershell-interactive Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFCrossZoneReplication
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
# What's new in Azure NetApp Files

Azure NetApp Files is updated regularly. This article provides a summary of the latest new features and enhancements.
+## January 2024
+
+* [Customer-managed keys](configure-customer-managed-keys.md) is now generally available (GA).
+
+ You still must register the feature before using it for the first time.
## November 2023
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Last updated 09/27/2023
# Configure your Bicep environment
-Bicep supports a configuration file named `bicepconfig.json`. Within this file, you can add values that customize your Bicep development experience. If you don't add this file, Bicep uses default values.
+Bicep supports an optional configuration file named `bicepconfig.json`. Within this file, you can add values that customize your Bicep development experience.
-To customize values, create this file in the directory where you store Bicep files. You can add `bicepconfig.json` files in multiple directories. The configuration file closest to the Bicep file in the directory hierarchy is used.
+To customize configuration, create this file in the same directory as your Bicep files, or in a parent directory. If multiple parent directories contain `bicepconfig.json` files, Bicep uses the configuration from the nearest one. If no configuration file is found, Bicep uses default values.
To configure Bicep extension settings, see [VS Code and Bicep extension](./install.md#visual-studio-code-and-bicep-extension).
The [Bicep linter](linter.md) checks Bicep files for syntax errors and best prac
## Enable experimental features
-You can enable preview features by adding:
+You can enable experimental features by adding the following section to your `bicepconfig.json` file.
+
+Here's an example that enables the `compileTimeImports` and `userDefinedFunctions` features.
```json
{
  "experimentalFeaturesEnabled": {
- "userDefinedTypes": true,
- "extensibility": true
+ "compileTimeImports": true,
+ "userDefinedFunctions": true
  }
}
```
-> [!WARNING]
-> To utilize the experimental features, it's necessary to have the latest version of [Azure CLI](./install.md#azure-cli).
-
-The preceding sample enables 'userDefineTypes' and 'extensibility`. The available experimental features include:
--- **assertions**: Should be enabled in tandem with `testFramework` experimental feature flag for expected functionality. Allows you to author boolean assertions using the `assert` keyword comparing the actual value of a parameter, variable, or resource name to an expected value. Assert statements can only be written directly within the Bicep file whose resources they reference. For more information, see [Bicep Experimental Test Framework](https://github.com/Azure/bicep/issues/11967).-- **compileTimeImports**: Allows you to use symbols defined in another Bicep file. See [Import types, variables and functions](./bicep-import.md#import-types-variables-and-functions-preview).-- **extensibility**: Allows Bicep to use a provider model to deploy non-ARM resources. Currently, we only support a Kubernetes provider. See [Bicep extensibility Kubernetes provider](./bicep-extensibility-kubernetes-provider.md).-- **sourceMapping**: Enables basic source mapping to map an error location returned in the ARM template layer back to the relevant location in the Bicep file.-- **resourceTypedParamsAndOutputs**: Enables the type for a parameter or output to be of type resource to make it easier to pass resource references between modules. This feature is only partially implemented. See [Simplifying resource referencing](https://github.com/azure/bicep/issues/2245).-- **symbolicNameCodegen**: Allows the ARM template layer to use a new schema to represent resources as an object dictionary rather than an array of objects. This feature improves the semantic equivalent of the Bicep and ARM templates, resulting in more reliable code generation. Enabling this feature has no effect on the Bicep layer's functionality.-- **testFramework**: Should be enabled in tandem with `assertions` experimental feature flag for expected functionality. Allows you to author client-side, offline unit-test test blocks that reference Bicep files and mock deployment parameters in a separate `test.bicep` file using the new `test` keyword. Test blocks can be run with the command *bicep test <filepath_to_file_with_test_blocks>* which runs all `assert` statements in the Bicep files referenced by the test blocks. For more information, see [Bicep Experimental Test Framework](https://github.com/Azure/bicep/issues/11967).-- **userDefinedFunctions**: Allows you to define your own custom functions. See [User-defined functions in Bicep](./user-defined-functions.md).
+For information on the current set of experimental features, see [Experimental Features](https://aka.ms/bicep/experimental-features).
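For example, with `userDefinedFunctions` enabled, a Bicep file can declare and call a custom function. This is a minimal sketch with hypothetical names and values:

```bicep
// Requires "userDefinedFunctions": true in bicepconfig.json.
func buildUrl(https bool, hostname string, path string) string => '${https ? 'https' : 'http'}://${hostname}${path}'

output siteUrl string = buildUrl(true, 'contoso.com', 'products')
```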
## Next steps
azure-resource-manager File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/file.md
See [Arrays](./data-types.md#arrays) and [Objects](./data-types.md#objects) for
## Known limitations * No support for the concept of apiProfile, which is used to map a single apiProfile to a set apiVersion for each resource type.
-* No support for user-defined functions.
+* User-defined functions aren't supported at the moment. However, they're available as an experimental feature. For more information, see [User-defined functions in Bicep](./user-defined-functions.md).
* Some Bicep features require a corresponding change to the intermediate language (Azure Resource Manager JSON templates). We announce these features as available when all of the required updates have been deployed to global Azure. If you're using a different environment, such as Azure Stack, there may be a delay in the availability of the feature. The Bicep feature is only available when the intermediate language has also been updated in that environment. ## Next steps
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
param <second-parameter-name> = <second-value>
You can use expressions with the default value. For example:

```bicep
-using 'storageaccount.bicep'
+using 'main.bicep'
param storageName = toLower('MyStorageAccount')
param intValue = 2 + 2
azure-resource-manager User Defined Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md
The valid type expressions include:
type mixedTypeArray = ('fizz' | 42 | {an: 'object'} | null)[]
```
-In addition to be used in the `type` statement, type expressions can also be used in these places for creating user-defined date types:
+In addition to being used in the `type` statement, type expressions can also be used in these places for creating user-defined data types:
- As the type clause of a `param` statement. For example:
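Building on the `mixedTypeArray` type expression shown earlier, a sketch of such a parameter (the parameter name is an assumption) might look like:

```bicep
param mixedTypeArrayParam ('fizz' | 42 | {an: 'object'} | null)[]
```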
azure-sql-edge Deploy Onnx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-onnx.md
description: Learn how to train a model, convert it to ONNX, deploy it to Azure
Previously updated : 09/14/2023 Last updated : 01/10/2024++ keywords: deploy SQL Edge
azure-vmware Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-storage.md
vSAN datastores use data-at-rest encryption by default using keys stored in Azur
## Datastore capacity expansion options
-The existing cluster vSAN storage capacity can be expanded by connecting Azure storage resources such as [Azure NetApp Files volumes as additional datastores](./attach-azure-netapp-files-to-azure-vmware-solution-hosts.md). Virtual machines can be migrated between vSAN and Azure NetApp Files datastores using storage vMotion.
-Azure NetApp Files is available in [Ultra, Premium and Standard performance tiers](../azure-netapp-files/azure-netapp-files-service-levels.md) to allow for adjusting performance and cost to the requirements of the workloads.
+The existing cluster vSAN storage capacity can be expanded by connecting Azure storage resources, including Azure NetApp Files or Azure Elastic SAN. Virtual machines can be migrated between vSAN datastores and other datastores non-disruptively using storage vMotion. Expanding datastore capacity with Azure storage resources lets you increase capacity without scaling the clusters.
+
+### Azure NetApp Files
+
+Azure NetApp Files is an enterprise-class, high-performance, metered file storage service. The service supports demanding enterprise file workloads in the cloud: databases, SAP, and high-performance computing applications, with no code changes.
+
+You can create Network File System (NFS) datastores with Azure NetApp Files volumes and attach them to clusters of your choice. By using NFS datastores backed by Azure NetApp Files, you can expand your storage instead of scaling the clusters. Azure NetApp Files is available in [Ultra, Premium and Standard performance tiers](../azure-netapp-files/azure-netapp-files-service-levels.md) to allow for adjusting performance and cost to the requirements of the workloads.
+
+For more information, see [Attach Azure NetApp Files datastores to Azure VMware Solution hosts](attach-azure-netapp-files-to-azure-vmware-solution-hosts.md).
+
+### Azure Elastic SAN
+
+Azure Elastic storage area network (SAN) is Microsoft's answer to the problem of workload optimization and integration between your large-scale databases and performance-intensive mission-critical applications.
+
+Azure VMware Solution supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters.
+
+For more information, see [Use Azure VMware Solution with Azure Elastic SAN](configure-azure-elastic-san.md).
## Azure storage integration
-You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, and Blob Storage. The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads.
+You can use Azure storage services in workloads running in your private cloud. The Azure storage services include Storage Accounts, Table Storage, Blob Storage, and file storage (Azure Files and Azure NetApp Files). The connection of workloads to Azure storage services doesn't traverse the internet. This connectivity provides more security and enables you to use SLA-based Azure storage services in your private cloud workloads.
## Alerts and monitoring
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
This table provides the list of RAID configuration supported and host requiremen
|RAID-1 (Mirroring) Default setting.| 1 | 3 | |RAID-5 (Erasure Coding) | 1 | 4 | |RAID-1 (Mirroring) | 2 | 5 |+
+## Storage
+
+Azure VMware Solution supports expanding datastore capacity beyond what's included with vSAN by using Azure storage services, so you can add datastore capacity without scaling the clusters. For more information, see [Datastore capacity expansion options](concepts-storage.md#datastore-capacity-expansion-options).
## Networking
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
Previously updated : 01/02/2024 Last updated : 01/12/2024 # Tutorial: Create a serverless real-time chat app with Azure Functions and Azure Web PubSub service
In this tutorial, you learn how to:
## Prerequisites
-# [JavaScript](#tab/javascript)
+# [JavaScript Model v4](#tab/javascript-v4)
- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/) -- [Node.js](https://nodejs.org/en/download/), version 10.x.
+- [Node.js](https://nodejs.org/en/download/), version 18.x or above.
+ > [!NOTE]
+ > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
+- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+
+- The [Azure CLI](/cli/azure) to manage Azure resources.
+
+# [JavaScript Model v3](#tab/javascript-v3)
+
+- A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+
+- [Node.js](https://nodejs.org/en/download/), version 18.x or above.
  > [!NOTE]
  > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).

- [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (v4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
In this tutorial, you learn how to:
1. Make sure you have [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) installed. And then create an empty directory for the project. Run command under this working directory.
- # [JavaScript](#tab/javascript)
+ # [JavaScript Model v4](#tab/javascript-v4)
+
+ ```bash
+ func init --worker-runtime javascript --model V4
+ ```
+
+ # [JavaScript Model v3](#tab/javascript-v3)
```bash
- func init --worker-runtime javascript
+ func init --worker-runtime javascript --model V3
``` # [C# in-process](#tab/csharp-in-process)
In this tutorial, you learn how to:
2. Install `Microsoft.Azure.WebJobs.Extensions.WebPubSub`.
- # [JavaScript](#tab/javascript)
+ # [JavaScript Model v4](#tab/javascript-v4)
+
+ Confirm and update `host.json`'s extensionBundle to version _4.*_ or later to get Web PubSub support.
+
+ ```json
+ {
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[4.*, 5.0.0)"
+ }
+ }
+ ```
+
+ # [JavaScript Model v3](#tab/javascript-v3)
- Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support.
+ Confirm and update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support.
```json {
- "version": "2.0",
"extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[3.3.*, 4.0.0)"
In this tutorial, you learn how to:
```bash
func new -n index -t HttpTrigger
```
+ # [JavaScript Model v4](#tab/javascript-v4)
+
+    - Update `src/functions/index.js` and copy the following code.
+ ```js
+ const { app } = require('@azure/functions');
+ const { readFile } = require('fs/promises');
+
+ app.http('index', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ handler: async (context) => {
+            const content = await readFile('index.html', 'utf8');
+
+ return {
+ status: 200,
+ headers: {
+ 'Content-Type': 'text/html'
+ },
+ body: content,
+ };
+ }
+ });
+ ```
- # [JavaScript](#tab/javascript)
+ # [JavaScript Model v3](#tab/javascript-v3)
- Update `index/function.json` and copy following json codes. ```json
In this tutorial, you learn how to:
> [!NOTE] > In this sample, we use [Microsoft Entra ID](../app-service/configure-authentication-user-identities.md) user identity header `x-ms-client-principal-name` to retrieve `userId`. And this won't work in a local function. You can make it empty or change to other ways to get or generate `userId` when playing in local. For example, let client type a user name and pass it in query like `?user={$username}` when call `negotiate` function to get service connection url. And in the `negotiate` function, set `userId` with value `{query.user}`.
- # [JavaScript](#tab/javascript)
+ # [JavaScript Model v4](#tab/javascript-v4)
+    - Update `src/functions/negotiate.js` and copy the following code.
+ ```js
+ const { app, input } = require('@azure/functions');
+
+ const connection = input.generic({
+ type: 'webPubSubConnection',
+ name: 'connection',
+ userId: '{headers.x-ms-client-principal-name}',
+ hub: 'simplechat'
+ });
+
+ app.http('negotiate', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ extraInputs: [connection],
+ handler: async (request, context) => {
+ return { body: JSON.stringify(context.extraInputs.get('connection')) };
+ },
+ });
+ ```
++
+ # [JavaScript Model v3](#tab/javascript-v3)
- Update `negotiate/function.json` and copy following json codes. ```json
In this tutorial, you learn how to:
func new -n message -t HttpTrigger ```
- # [JavaScript](#tab/javascript)
+ # [JavaScript Model v4](#tab/javascript-v4)
+
+    - Update `src/functions/message.js` and copy the following code.
+ ```js
+ const { app, output, trigger } = require('@azure/functions');
+
+ const wpsMsg = output.generic({
+ type: 'webPubSub',
+ name: 'actions',
+ hub: 'simplechat',
+ });
+
+ const wpsTrigger = trigger.generic({
+ type: 'webPubSubTrigger',
+ name: 'request',
+ hub: 'simplechat',
+ eventName: 'message',
+ eventType: 'user'
+ });
+
+ app.generic('message', {
+ trigger: wpsTrigger,
+ extraOutputs: [wpsMsg],
+ handler: async (request, context) => {
+ context.extraOutputs.set(wpsMsg, [{
+ "actionName": "sendToAll",
+ "data": `[${context.triggerMetadata.connectionContext.userId}] ${request.data}`,
+ "dataType": request.dataType
+ }]);
+
+ return {
+ data: "[SYSTEM] ack.",
+ dataType: "text",
+ };
+ }
+ });
+ ```
+
+ # [JavaScript Model v3](#tab/javascript-v3)
- Update `message/function.json` and copy following json codes. ```json
In this tutorial, you learn how to:
</html> ```
- # [JavaScript](#tab/javascript)
+ # [JavaScript Model v4](#tab/javascript-v4)
+
+ # [JavaScript Model v3](#tab/javascript-v3)
# [C# in-process](#tab/csharp-in-process)
Use the following commands to create these items.
1. Create the function app in Azure:
- # [JavaScript](#tab/javascript)
+ # [JavaScript Model v4](#tab/javascript-v4)
+
+ ```azurecli
+    az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 18 --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
+
+ > [!NOTE]
+ > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
+
+ # [JavaScript Model v3](#tab/javascript-v3)
```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+    az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 18 --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
``` > [!NOTE]
azure-web-pubsub Reference Functions Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-functions-bindings.md
Previously updated : 11/24/2023 Last updated : 01/12/2024 # Azure Web PubSub trigger and bindings for Azure Functions
public static UserEventResponse Run(
} ```
-# [JavaScript](#tab/javascript)
+# [JavaScript Model v4](#tab/javascript-v4)
+
+The following example shows a Web PubSub trigger [JavaScript function](../azure-functions/functions-reference-node.md).
+
+```js
+const { app, trigger } = require('@azure/functions');
+
+const wpsTrigger = trigger.generic({
+ type: 'webPubSubTrigger',
+ name: 'request',
+ hub: '<hub>',
+ eventName: 'message',
+ eventType: 'user'
+});
+
+app.generic('message', {
+ trigger: wpsTrigger,
+ handler: async (request, context) => {
+ context.log('Request from: ', request.connectionContext.userId);
+ context.log('Request message data: ', request.data);
+ context.log('Request message dataType: ', request.dataType);
+ }
+});
+```
+
+The `WebPubSubTrigger` binding also supports return values in synchronous scenarios, for example, the system `Connect` event and user events, where the server can check and deny the client request, or send a message to the requesting client directly. Because JavaScript is weakly typed, the return value is deserialized based on the object keys. `EventErrorResponse` has the highest priority compared to the other objects: if `code` is present in the return value, it's parsed as an `EventErrorResponse` and the client connection is dropped.
+
+```js
+app.generic('message', {
+ trigger: wpsTrigger,
+ handler: async (request, context) => {
+ return {
+ "data": "ack",
+ "dataType" : "text"
+ };
+ }
+});
+```
+
+# [JavaScript Model v3](#tab/javascript-v3)
Define trigger binding in `function.json`.
public static WebPubSubConnection Run(
} ```
-# [JavaScript](#tab/javascript)
+# [JavaScript Model v4](#tab/javascript-v4)
+
+```js
+const { app, input } = require('@azure/functions');
+
+const connection = input.generic({
+ type: 'webPubSubConnection',
+ name: 'connection',
+ userId: '{query.userId}',
+ hub: '<hub>'
+});
+
+app.http('negotiate', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ extraInputs: [connection],
+ handler: async (request, context) => {
+ return { body: JSON.stringify(context.extraInputs.get('connection')) };
+ },
+});
+```
+
+# [JavaScript Model v3](#tab/javascript-v3)
Define input bindings in `function.json`.
public static WebPubSubConnection Run(
# [C#](#tab/csharp) > [!NOTE]
-> Limited to the binding parameter types don't support a way to pass list nor array, the `WebPubSubConnection` is not fully supported with all the parameters server SDK has, especially `roles`, and also includes `groups` and `expiresAfter`. In the case customer needs to add roles or delay build the access token in the function, it's suggested to work with [server SDK for C#](/dotnet/api/overview/azure/messaging.webpubsub-readme?view=azure-dotnet).
+> Because the binding parameter types don't support a way to pass a list or an array, the `WebPubSubConnection` binding doesn't fully support all the parameters the server SDK has, especially `roles`, and also `groups` and `expiresAfter`. In case you need to add roles or delay building the access token in the function, it's suggested to work with the [server SDK for C#](/dotnet/api/overview/azure/messaging.webpubsub-readme).
> ```cs > [FunctionName("WebPubSubConnectionCustomRoles")] > public static async Task<Uri> Run(
public static WebPubSubConnection Run(
> } > ```
-# [JavaScript](#tab/javascript)
+# [JavaScript Model v4](#tab/javascript-v4)
+
+> [!NOTE]
+> Because the binding parameter types don't support a way to pass a list or an array, the `WebPubSubConnection` binding doesn't fully support all the parameters the server SDK has, especially `roles`, and also `groups` and `expiresAfter`. In case you need to add roles or delay building the access token in the function, it's suggested to work with the [server SDK for JavaScript](/javascript/api/overview/azure/web-pubsub).
+> ```js
+> const { app } = require('@azure/functions');
+> const { WebPubSubServiceClient } = require('@azure/web-pubsub');
+> app.http('negotiate', {
+> methods: ['GET', 'POST'],
+> authLevel: 'anonymous',
+> handler: async (request, context) => {
+> const serviceClient = new WebPubSubServiceClient(process.env.WebPubSubConnectionString, "<hub>");
+>     let token = await serviceClient.getAuthenticationToken({ userId: request.query.get('userid'), roles: ["webpubsub.joinLeaveGroup", "webpubsub.sendToGroup"] });
+> return { body: token.url };
+> },
+> });
+> ```
+
+# [JavaScript Model v3](#tab/javascript-v3)
> [!NOTE]
-> Limited to the binding parameter types don't support a way to pass list nor array, the `WebPubSubConnection` is not fully supported with all the parameters server SDK has, especially `roles`, and also includes `groups` and `expiresAfter`. In the case customer needs to add roles or delay build the access token in the function, it's suggested to work with [server SDK for JavaScript](/javascript/api/overview/azure/web-pubsub?view=azure-node-latest).
+> Because the binding parameter types don't support a way to pass a list or an array, the `WebPubSubConnection` binding doesn't fully support all the parameters the server SDK has, especially `roles`, and also `groups` and `expiresAfter`. In case you need to add roles or delay building the access token in the function, it's suggested to work with the [server SDK for JavaScript](/javascript/api/overview/azure/web-pubsub).
> > Define input bindings in `function.json`. >
public static WebPubSubConnection Run(
> > module.exports = async function (context, req) { > const serviceClient = new WebPubSubServiceClient(process.env.WebPubSubConnectionString, "<hub>");
-> token = await serviceClient.getAuthenticationToken({ userId: req.query.userid, roles: ["webpubsub.joinLeaveGroup", "webpubsub.sendToGroup"] });
+> let token = await serviceClient.getAuthenticationToken({ userId: req.query.userid, roles: ["webpubsub.joinLeaveGroup", "webpubsub.sendToGroup"] });
> context.res = { body: token.url }; > context.done(); > };
public static object Run(
} ```
-# [JavaScript](#tab/javascript)
+# [JavaScript Model v4](#tab/javascript-v4)
+
+```js
+const { app, input } = require('@azure/functions');
+
+const wpsContext = input.generic({
+ type: 'webPubSubContext',
+ name: 'wpsContext'
+});
+
+app.http('connect', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ extraInputs: [wpsContext],
+ handler: async (request, context) => {
+ var wpsRequest = context.extraInputs.get('wpsContext');
+
+ return { "userId": wpsRequest.request.connectionContext.userId };
+ }
+});
+```
+
+# [JavaScript Model v3](#tab/javascript-v3)
Define input bindings in `function.json`.
The following table explains the binding configuration properties that you set i
| Uri | Uri | Absolute Uri of the Web PubSub connection, contains `AccessToken` generated base on the request. | | AccessToken | string | Generated `AccessToken` based on request UserId and service information. |
-# [JavaScript](#tab/javascript)
+# [JavaScript Model v4](#tab/javascript-v4)
+
+`WebPubSubConnection` provides the following properties.
+
+| Binding Name | Description |
+|||
+| baseUrl | Web PubSub client connection uri. |
+| url | Absolute Uri of the Web PubSub connection, contains `AccessToken` generated base on the request. |
+| accessToken | Generated `AccessToken` based on request UserId and service information. |
+
+# [JavaScript Model v3](#tab/javascript-v3)
`WebPubSubConnection` provides below properties.
public static async Task RunAsync(
} ```
-# [JavaScript](#tab/javascript)
+# [JavaScript Model v4](#tab/javascript-v4)
+
+```js
+const { app, output } = require('@azure/functions');
+const wpsMsg = output.generic({
+ type: 'webPubSub',
+ name: 'actions',
+ hub: '<hub>',
+});
+
+app.http('message', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ extraOutputs: [wpsMsg],
+ handler: async (request, context) => {
+ context.extraOutputs.set(wpsMsg, [{
+ "actionName": "sendToAll",
+ "data": `Hello world`,
+ "dataType": `text`
+ }]);
+ }
+});
+```
+
+# [JavaScript Model v3](#tab/javascript-v3)
Define bindings in `functions.json`.
In C# language, we provide a few static methods under `WebPubSubAction` to help
| `GrantPermissionAction`|ConnectionId, Permission, TargetName | | `RevokePermissionAction`|ConnectionId, Permission, TargetName |
-# [JavaScript](#tab/javascript)
+# [JavaScript Model v4](#tab/javascript-v4)
+
+In a weakly typed language like JavaScript, **`actionName`** is the key parameter used to resolve the type. The available actions are listed below.
+
+| ActionName | Properties |
+| -- | -- |
+| `sendToAll`|Data, DataType, Excluded |
+| `sendToGroup`|Group, Data, DataType, Excluded |
+| `sendToUser`|UserId, Data, DataType |
+| `sendToConnection`|ConnectionId, Data, DataType |
+| `addUserToGroup`|UserId, Group |
+| `removeUserFromGroup`|UserId, Group |
+| `removeUserFromAllGroups`|UserId |
+| `addConnectionToGroup`|ConnectionId, Group |
+| `removeConnectionFromGroup`|ConnectionId, Group |
+| `closeAllConnections`|Excluded, Reason |
+| `closeClientConnection`|ConnectionId, Reason |
+| `closeGroupConnections`|Group, Excluded, Reason |
+| `grantPermission`|ConnectionId, Permission, TargetName |
+| `revokePermission`|ConnectionId, Permission, TargetName |
+
+> [!IMPORTANT]
+> The message data property in the send message related actions must be a `string` if the data type is set to `json` or `text`, to avoid data conversion ambiguity. Use `JSON.stringify()` to convert a JSON object when needed. This applies to any place using the message property, for example, `UserEventResponse.Data` working with `WebPubSubTrigger`.
+>
+> When data type is set to `binary`, it's allowed to leverage binding naturally supported `dataType` as `binary` configured in the `function.json`, see [Trigger and binding definitions](../azure-functions/functions-triggers-bindings.md?tabs=csharp#trigger-and-binding-definitions) for details.
+
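For instance, to send a JSON payload with the output binding, stringify the object first. A minimal sketch, assuming the `wpsMsg` output binding from the earlier example:

```js
// Stringify JSON payloads before assigning them to the action's data property.
context.extraOutputs.set(wpsMsg, [{
    actionName: "sendToAll",
    data: JSON.stringify({ from: "system", content: "ack" }),
    dataType: "json"
}]);
```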
+# [JavaScript Model v3](#tab/javascript-v3)
In a weakly typed language like JavaScript, **`actionName`** is the key parameter used to resolve the type. The available actions are listed below.

| ActionName | Properties |
| -- | -- |
-| `SendToAll`|Data, DataType, Excluded |
-| `SendToGroup`|Group, Data, DataType, Excluded |
-| `SendToUser`|UserId, Data, DataType |
-| `SendToConnection`|ConnectionId, Data, DataType |
-| `AddUserToGroup`|UserId, Group |
-| `RemoveUserFromGroup`|UserId, Group |
-| `RemoveUserFromAllGroups`|UserId |
-| `AddConnectionToGroup`|ConnectionId, Group |
-| `RemoveConnectionFromGroup`|ConnectionId, Group |
-| `CloseAllConnections`|Excluded, Reason |
-| `CloseClientConnection`|ConnectionId, Reason |
-| `CloseGroupConnections`|Group, Excluded, Reason |
-| `GrantPermission`|ConnectionId, Permission, TargetName |
-| `RevokePermission`|ConnectionId, Permission, TargetName |
+| `sendToAll`|Data, DataType, Excluded |
+| `sendToGroup`|Group, Data, DataType, Excluded |
+| `sendToUser`|UserId, Data, DataType |
+| `sendToConnection`|ConnectionId, Data, DataType |
+| `addUserToGroup`|UserId, Group |
+| `removeUserFromGroup`|UserId, Group |
+| `removeUserFromAllGroups`|UserId |
+| `addConnectionToGroup`|ConnectionId, Group |
+| `removeConnectionFromGroup`|ConnectionId, Group |
+| `closeAllConnections`|Excluded, Reason |
+| `closeClientConnection`|ConnectionId, Reason |
+| `closeGroupConnections`|Group, Excluded, Reason |
+| `grantPermission`|ConnectionId, Permission, TargetName |
+| `revokePermission`|ConnectionId, Permission, TargetName |
> [!IMPORTANT]
> The message data property in the send message related actions must be a `string` if the data type is set to `json` or `text`, to avoid data conversion ambiguity. Use `JSON.stringify()` to convert a JSON object when needed. This applies to any place using the message property, for example, `UserEventResponse.Data` working with `WebPubSubTrigger`.
azure-web-pubsub Socket Io Howto Integrate Apim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socket-io-howto-integrate-apim.md
+
+ Title: Integrate - How to use Web PubSub for Socket.IO with Azure API Management
+description: A how-to guide about how to use Web PubSub for Socket.IO with Azure API Management
+keywords: Socket.IO, Socket.IO on Azure, webapp Socket.IO, Socket.IO integration, APIM
+++++ Last updated : 1/11/2024+
+# How-to: Use Web PubSub for Socket.IO with Azure API Management
+
+Azure API Management service provides a hybrid, multicloud management platform for APIs across all environments. This article shows you how to add real-time capability to your application with Azure API Management and Web PubSub for Socket.IO.
++
+## Limitations
+
+Socket.IO clients support WebSocket and Long Polling. By default, the client connects to the service with Long Polling and then upgrades to WebSocket. However, API Management doesn't yet support different types of APIs (WebSocket or HTTP) with the same path. You must set either `websocket` or `polling` in the client settings.
+
+## Create resources
+
+To follow the step-by-step guide, you need to:
+
+- Follow [Create a Web PubSub for Socket.IO resource](./socketio-quickstart.md#create-a-web-pubsub-for-socketio-resource) to create a Web PubSub for Socket.IO instance.
+- Follow [Quickstart: Use an ARM template to deploy Azure API Management](../api-management/quickstart-arm-template.md) and create an API Management instance.
+
+## Set up API Management
+
+### Configure APIs when client connects with `websocket` transport
+
+This section describes the steps to configure API Management when the Socket.IO clients connect with `websocket` transport.
+
+1. Go to **APIs** tab in the portal for API Management instance, select **Add API** and choose **WebSocket**, **Create** with the following parameters:
+
+ - Display name: `Web PubSub for Socket.IO`
+ - Web service URL: `wss://<your-webpubsubforsocketio-service-url>/clients/socketio/hubs/eio_hub`
+ - API URL suffix: `clients/socketio/hubs/eio_hub`
+
+ The hub name can be changed to meet your application.
+
+1. Select **Create** to create the API. After it's created, switch to the **Settings** tab and uncheck **Subscription required** for quick demo purposes.
+
+### Configure APIs when client connects with `polling` transport
+
+This section describes the steps to configure API Management when the Socket.IO clients connect with `polling` transport.
+
+1. Go to **APIs** tab in the portal for API Management instance, select **Add API** and choose **WebSocket**, **Create** with the following parameters:
+
+ - Display name: `Web PubSub for Socket.IO`
+ - Web service URL: `https://<your-webpubsubforsocketio-service-url>/clients/socketio/hubs/eio_hub`
+ - API URL suffix: `clients/socketio/hubs/eio_hub`
+
+ The hub name can be changed to meet your application.
+
+1. Switch to the **Settings** tab and uncheck **Subscription required** for quick demo purposes.
+
+1. Switch to the **Design** tab, select **Add operation**, and save with the following parameters:
+
+ Add operation for post data
+ - Display name: connect
+ - URL: POST /
+
+ Add operation for get data
+ - Display name: connect get
+     - URL: GET /
+
+## Try Sample
+
+Now, traffic can reach Web PubSub for Socket.IO through API Management. Some configuration is needed in the application. Let's use a chat application as an example.
+
+Clone the GitHub repo https://github.com/Azure/azure-webpubsub and navigate to the `sdk/webpubsub-socketio-extension/examples/chat` folder.
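For example, using Git from the command line:

```bash
git clone https://github.com/Azure/azure-webpubsub.git
cd azure-webpubsub/sdk/webpubsub-socketio-extension/examples/chat
```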
+
+Then make some changes so that the sample works with API Management:
+
+1. Open `public/main.js`, which contains the Socket.IO client-side code.
+
+ Edit the constructor of Socket.IO. You have to select either `websocket` or `polling` as the transport:
+
+ ```javascript
+ const webPubSubEndpoint = "https://<api-management-url>";
+ var socket = io(webPubSubEndpoint, {
+ transports: ["websocket"], // Depends on your transport choice. If you use WebSocket in API Management, set it to "websocket". If choosing Long Polling, set it to "polling"
+ path: "/clients/socketio/hubs/eio_hub", // The path also need to match the settings in API Management
+ });
+ ```
+
+2. On the **Keys** tab of Web PubSub for Socket.IO, copy the **Connection String** and use the following command to run the server:
+
+ ```bash
+ npm install
+ npm run start -- <connection-string>
+ ```
+
+3. According to the output, use a browser to visit the endpoint:
+
+ ```
+ Visit http://localhost:3000
+ ```
+
+4. In the sample, you can chat with other users.
+
+## Next steps
+> [!div class="nextstepaction"]
+> [Check out more Socket.IO samples](https://aka.ms/awps/sio/sample)
azure-web-pubsub Tutorial Serverless Iot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-iot.md
Previously updated : 06/30/2022 Last updated : 01/10/2024 # Tutorial: Visualize IoT device data from IoT Hub using Azure Web PubSub service and Azure Functions
In this tutorial, you learn how to:
## Prerequisites
-# [JavaScript](#tab/javascript)
- * A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
-* [Node.js](https://nodejs.org/en/download/), version 10.x.
+* [Node.js](https://nodejs.org/en/download/), version 18.x or above.
> [!NOTE] > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
In this tutorial, you learn how to:
* The [Azure CLI](/cli/azure) to manage Azure resources. -- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Create an IoT hub
If you already have a Web PubSub instance in your Azure subscription, you can sk
1. Create an empty folder for the project, and then run the following command in the new folder.
- # [JavaScript](#tab/javascript)
+ # [JavaScript Model v4](#tab/javascript-v4)
```bash
- func init --worker-runtime javascript
+ func init --worker-runtime javascript --model V4
```
-
-2. Update `host.json`'s `extensionBundle` to version _3.3.0_ or later to get Web PubSub support.
+ # [JavaScript Model v3](#tab/javascript-v3)
+ ```bash
+ func init --worker-runtime javascript --model V3
+ ```
+
-```json
-{
- "version": "2.0",
- "extensionBundle": {
- "id": "Microsoft.Azure.Functions.ExtensionBundle",
- "version": "[3.3.*, 4.0.0)"
- }
-}
-```
+2. Create an `index` function to read and host a static web page for clients.
-3. Create an `index` function to read and host a static web page for clients.
   ```bash
   func new -n index -t HttpTrigger
   ```
- # [JavaScript](#tab/javascript)
- - Update `index/index.js` with following code, which serves the HTML content as a static site.
- ```js
- var fs = require("fs");
- var path = require("path");
-
- module.exports = function (context, req) {
- let index = path.join(
- context.executionContext.functionDirectory,
- "https://docsupdatetracker.net/index.html"
- );
- fs.readFile(index, "utf8", function (err, data) {
- if (err) {
- console.log(err);
- context.done(err);
- return;
- }
- context.res = {
- status: 200,
- headers: {
- "Content-Type": "text/html",
- },
- body: data,
- };
- context.done();
- });
- };
-
- ```
+
+ # [JavaScript Model v4](#tab/javascript-v4)
+    Update `src/functions/index.js` with the following code, which serves the HTML content as a static site.
+ ```js
+ const { app } = require('@azure/functions');
+ const { readFile } = require('fs/promises');
+
+ app.http('index', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ handler: async (context) => {
+        const content = await readFile('index.html', 'utf8');
+
+ return {
+ status: 200,
+ headers: {
+ 'Content-Type': 'text/html'
+ },
+ body: content,
+ };
+ }
+ });
+ ```
+
+ # [JavaScript Model v3](#tab/javascript-v3)
+    Update `index/index.js` with the following code, which serves the HTML content as a static site.
+ ```js
+ var fs = require("fs");
+ var path = require("path");
+ module.exports = function (context, req) {
+ let index = path.join(
+ context.executionContext.functionDirectory,
+        "/../index.html"
+ );
+ fs.readFile(index, "utf8", function (err, data) {
+ if (err) {
+ console.log(err);
+ context.done(err);
+ return;
+ }
+ context.res = {
+ status: 200,
+ headers: {
+ "Content-Type": "text/html",
+ },
+ body: data,
+ };
+ context.done();
+ });
+ };
+ ```
+
-4. Create an `index.html` file under the same folder as file `index.js`.
+4. Create an `index.html` file under the root folder.
```html <!doctype html>
If you already have a Web PubSub instance in your Azure subscription, you can sk
   ```bash
   func new -n negotiate -t HttpTrigger
   ```
- # [JavaScript](#tab/javascript)
- - Update `negotiate/function.json` to include an input binding [`WebPubSubConnection`](reference-functions-bindings.md#input-binding), with the following json code.
+
+ # [JavaScript Model v4](#tab/javascript-v4)
+    Update `src/functions/negotiate.js` to use the [`WebPubSubConnection`](reference-functions-bindings.md#input-binding) input binding, which contains the generated token.
+ ```js
+ const { app, input } = require('@azure/functions');
+
+ const connection = input.generic({
+ type: 'webPubSubConnection',
+ name: 'connection',
+ hub: '%hubName%'
+ });
+
+ app.http('negotiate', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ extraInputs: [connection],
+ handler: async (request, context) => {
+ return { body: JSON.stringify(context.extraInputs.get('connection')) };
+ },
+ });
+ ```
+
+ # [JavaScript Model v3](#tab/javascript-v3)
+ - Update `negotiate/function.json` to include an input binding [`WebPubSubConnection`](reference-functions-bindings.md#input-binding), with the following json code.
```json { "bindings": [
If you already have a Web PubSub instance in your Azure subscription, you can sk
] } ```
- - Update `negotiate/index.js` to return the `connection` binding that contains the generated token.
+ - Update `negotiate/index.js` to return the `connection` binding that contains the generated token.
```js module.exports = function (context, req, connection) { // Add your own auth logic here
If you already have a Web PubSub instance in your Azure subscription, you can sk
context.done(); }; ```
+
6. Create a `messagehandler` function to generate notifications by using the `"IoT Hub (Event Hub)"` template. ```bash
- func new --template "IoT Hub (Event Hub)" --name messagehandler
+ func new --template "Azure Event Hub trigger" --name messagehandler
```
- # [JavaScript](#tab/javascript)
- - Update _messagehandler/function.json_ to add [Web PubSub output binding](reference-functions-bindings.md#output-binding) with the following json code. We use variable `%hubName%` as the hub name for both IoT eventHubName and Web PubSub hub.
+
+ # [JavaScript Model v4](#tab/javascript-v4)
+    - Update `src/functions/messagehandler.js` to add the [Web PubSub output binding](reference-functions-bindings.md#output-binding) with the following code. We use the variable `%hubName%` as the hub name for both the IoT eventHubName and the Web PubSub hub.
+
+ ```js
+ const { app, output } = require('@azure/functions');
+
+ const wpsAction = output.generic({
+ type: 'webPubSub',
+ name: 'action',
+ hub: '%hubName%'
+ });
+
+ app.eventHub('messagehandler', {
+ connection: 'IOTHUBConnectionString',
+ eventHubName: '%hubName%',
+ cardinality: 'many',
+ extraOutputs: [wpsAction],
+ handler: (messages, context) => {
+ var actions = [];
+ if (Array.isArray(messages)) {
+ context.log(`Event hub function processed ${messages.length} messages`);
+ for (const message of messages) {
+ context.log('Event hub message:', message);
+ actions.push({
+ actionName: "sendToAll",
+ data: JSON.stringify({
+ IotData: message,
+ MessageDate: message.date || new Date().toISOString(),
+ DeviceId: message.deviceId,
+ })});
+ }
+ } else {
+ context.log('Event hub function processed message:', messages);
+ actions.push({
+ actionName: "sendToAll",
+ data: JSON.stringify({
+          IotData: messages,
+          MessageDate: messages.date || new Date().toISOString(),
+          DeviceId: messages.deviceId,
+ })});
+ }
+ context.extraOutputs.set(wpsAction, actions);
+ }
+ });
+ ```
+
+ # [JavaScript Model v3](#tab/javascript-v3)
+ - Update _messagehandler/function.json_ to add [Web PubSub output binding](reference-functions-bindings.md#output-binding) with the following json code. We use variable `%hubName%` as the hub name for both IoT eventHubName and Web PubSub hub.
```json { "bindings": [
If you already have a Web PubSub instance in your Azure subscription, you can sk
] } ```
- - Update `messagehandler/index.js` with the following code. It sends every message from IoT hub to every client connected to Web PubSub service using the Web PubSub output bindings.
+ - Update `messagehandler/index.js` with the following code. It sends every message from IoT hub to every client connected to Web PubSub service using the Web PubSub output bindings.
```js module.exports = function (context, IoTHubMessages) { IoTHubMessages.forEach((message) => {
If you already have a Web PubSub instance in your Azure subscription, you can sk
context.done(); }; ```
+
7. Update the Function settings.
If you already have a Web PubSub instance in your Azure subscription, you can sk
``` > [!NOTE]
- > The `IoT Hub (Event Hub)` function trigger used in the sample has dependency on Azure Storage, but you can use a local storage emulator when the function is running locally. If you get an error such as `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you'll need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md).
+ > The `Azure Event Hub trigger` function trigger used in the sample has dependency on Azure Storage, but you can use a local storage emulator when the function is running locally. If you get an error such as `There was an error performing a read operation on the Blob Storage Secret Repository. Please ensure the 'AzureWebJobsStorage' connection string is valid.`, you'll need to download and enable [Storage Emulator](../storage/common/storage-use-emulator.md).
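If you keep these settings in `local.settings.json` for local development, the file might look like the following sketch. The setting names `IOTHUBConnectionString` and `hubName` come from the function code above, `WebPubSubConnectionString` is the default connection setting name for the Web PubSub bindings, and the placeholder values are assumptions to replace with your own:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "WebPubSubConnectionString": "<web-pubsub-connection-string>",
    "IOTHUBConnectionString": "<iot-hub-event-hub-compatible-connection-string>",
    "hubName": "<hub-name>"
  }
}
```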
8. Run the function locally.
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
Previously updated : 05/05/2023 Last updated : 01/12/2024 # Tutorial: Create a serverless notification app with Azure Functions and Azure Web PubSub service
In this tutorial, you learn how to:
## Prerequisites
-# [JavaScript](#tab/javascript)
+# [JavaScript Model v4](#tab/javascript-v4)
* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
-* [Node.js](https://nodejs.org/en/download/), version 10.x.
+* [Node.js](https://nodejs.org/en/download/), version 18.x or above.
+ > [!NOTE]
+ > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
+
+* [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) (V4 or higher preferred) to run Azure Function apps locally and deploy to Azure.
+
+* The [Azure CLI](/cli/azure) to manage Azure resources.
+
+# [JavaScript Model v3](#tab/javascript-v3)
+
+* A code editor, such as [Visual Studio Code](https://code.visualstudio.com/)
+
+* [Node.js](https://nodejs.org/en/download/), version 18.x or above.
> [!NOTE] > For more information about the supported versions of Node.js, see [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages).
In this tutorial, you learn how to:
1. Make sure you have [Azure Functions Core Tools](https://github.com/Azure/azure-functions-core-tools#installing) installed. Now, create an empty directory for the project. Run command under this working directory. Use one of the given options below.
- # [JavaScript](#tab/javascript)
+ # [JavaScript Model v4](#tab/javascript-v4)
+ ```bash
+ func init --worker-runtime javascript --model V4
+ ```
+
+ # [JavaScript Model v3](#tab/javascript-v3)
```bash
- func init --worker-runtime javascript
+ func init --worker-runtime javascript --model V3
``` # [C# in-process](#tab/csharp-in-process)
In this tutorial, you learn how to:
``` 2. Follow the steps to install `Microsoft.Azure.WebJobs.Extensions.WebPubSub`.+
+ # [JavaScript Model v4](#tab/javascript-v4)
+    Confirm or update `host.json`'s extensionBundle to version _4.*_ or later to get Web PubSub support. To update `host.json`, open the file in an editor and replace the existing extensionBundle version with _4.*_ or later.
+ ```json
+ {
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[4.*, 5.0.0)"
+ }
+ }
+ ```
- # [JavaScript](#tab/javascript)
- Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support. For updating the `host.json`, open the file in editor, and then replace the existing version extensionBundle to version _3.3.0_ or later.
+ # [JavaScript Model v3](#tab/javascript-v3)
+    Confirm or update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support. To update `host.json`, open the file in an editor and replace the existing extensionBundle version with _3.3.0_ or later.
```json {
- "version": "2.0",
"extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[3.3.*, 4.0.0)"
In this tutorial, you learn how to:
dotnet add package Microsoft.Azure.Functions.Worker.Extensions.WebPubSub --prerelease ``` - # [Python](#tab/python) Update `host.json`'s extensionBundle to version _3.3.0_ or later to get Web PubSub support. For updating the `host.json`, open the file in editor, and then replace the existing version extensionBundle to version _3.3.0_ or later. ```json {
- "version": "2.0",
"extensionBundle": { "id": "Microsoft.Azure.Functions.ExtensionBundle", "version": "[3.3.*, 4.0.0)"
In this tutorial, you learn how to:
```bash func new -n index -t HttpTrigger ```
- # [JavaScript](#tab/javascript)
- - Create a folder index and make a new file fuction.json inside the folder. Update `index/function.json` and copy following json codes.
+ # [JavaScript Model v4](#tab/javascript-v4)
+   - Update `src/functions/index.js` and copy the following code.
+ ```js
+ const { app } = require('@azure/functions');
+ const { readFile } = require('fs/promises');
+
+ app.http('index', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ handler: async (context) => {
+        // fs/promises returns a promise, so await it directly; no callback is needed.
+        let content;
+        try {
+          content = await readFile('index.html', 'utf8');
+        } catch (err) {
+          context.error(err);
+          return { status: 500 };
+        }
+
+ return {
+ status: 200,
+ headers: {
+ 'Content-Type': 'text/html'
+ },
+ body: content,
+ };
+ }
+ });
+ ```
+
+ # [JavaScript Model v3](#tab/javascript-v3)
+   - Update `index/function.json` and copy the following JSON code.
```json { "bindings": [
In this tutorial, you learn how to:
] } ```
- - In the index folder, create a new file index.js. Update `index/index.js` and copy following codes.
+   - Update `index/index.js` and copy the following code.
```js var fs = require('fs'); var path = require('path');
In this tutorial, you learn how to:
```bash func new -n negotiate -t HttpTrigger ```
- # [JavaScript](#tab/javascript)
- - Update `negotiate/function.json` and copy following json codes.
+ # [JavaScript Model v4](#tab/javascript-v4)
+   - Update `src/functions/negotiate.js` and copy the following code.
+ ```js
+ const { app, input } = require('@azure/functions');
+
+ const connection = input.generic({
+ type: 'webPubSubConnection',
+ name: 'connection',
+ hub: 'notification'
+ });
+
+ app.http('negotiate', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ extraInputs: [connection],
+ handler: async (request, context) => {
+ return { body: JSON.stringify(context.extraInputs.get('connection')) };
+ },
+ });
+ ```
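   With the function host running locally, you can sanity-check the endpoint with a request such as the one below; the port and route assume the default local Core Tools setup, and the response should contain the Web PubSub client connection details produced by the input binding:

   ```bash
   curl http://localhost:7071/api/negotiate
   ```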
+
+ # [JavaScript Model v3](#tab/javascript-v3)
+   - Update `negotiate/function.json` and copy the following JSON code.
```json { "bindings": [
In this tutorial, you learn how to:
] } ```
- - Create a folder negotiate and update `negotiate/index.js` and copy following codes.
+   - Create a folder `negotiate` and update `negotiate/index.js` with the following code.
```js module.exports = function (context, req, connection) { context.res = { body: connection };
In this tutorial, you learn how to:
```bash func new -n notification -t TimerTrigger ```
- # [JavaScript](#tab/javascript)
- - Create a folder notification and update `notification/function.json` and copy following json codes.
+ # [JavaScript Model v4](#tab/javascript-v4)
+   - Update `src/functions/notification.js` and copy the following code.
+ ```js
+ const { app, output } = require('@azure/functions');
+
+ const wpsAction = output.generic({
+ type: 'webPubSub',
+ name: 'action',
+ hub: 'notification'
+ });
+
+ app.timer('notification', {
+ schedule: "*/10 * * * * *",
+ extraOutputs: [wpsAction],
+ handler: (myTimer, context) => {
+ context.extraOutputs.set(wpsAction, {
+ actionName: 'sendToAll',
+ data: `[DateTime: ${new Date()}] Temperature: ${getValue(22, 1)}\xB0C, Humidity: ${getValue(40, 2)}%`,
+ dataType: 'text',
+ });
+ },
+ });
+
+ function getValue(baseNum, floatNum) {
+ return (baseNum + 2 * floatNum * (Math.random() - 0.5)).toFixed(3);
+ }
+ ```
+
+ # [JavaScript Model v3](#tab/javascript-v3)
+   - Update `notification/function.json` and copy the following JSON code.
```json { "bindings": [
In this tutorial, you learn how to:
] } ```
- - Create a folder notification and update `notification/index.js` and copy following codes.
+   - Update `notification/index.js` and copy the following code.
```js module.exports = function (context, myTimer) { context.bindings.actions = {
In this tutorial, you learn how to:
</body> </html> ```
-
- # [JavaScript](#tab/javascript)
+
+ # [JavaScript Model v4](#tab/javascript-v4)
+
+ # [JavaScript Model v3](#tab/javascript-v3)
# [C# in-process](#tab/csharp-in-process) Since C# project compiles files to a different output folder, you need to update your `*.csproj` to make the content page go with it.
Use the following commands to create these items.
1. Create the function app in Azure:
- # [JavaScript](#tab/javascript)
+ # [JavaScript Model v4](#tab/javascript-v4)
+
+ ```azurecli
+   az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 18 --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
+ ```
+ > [!NOTE]
+ > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
+
+ # [JavaScript Model v3](#tab/javascript-v3)
```azurecli
- az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 14 --functions-version 4 --name <FUNCIONAPP_NAME> --storage-account <STORAGE_NAME>
+   az functionapp create --resource-group WebPubSubFunction --consumption-plan-location <REGION> --runtime node --runtime-version 18 --functions-version 4 --name <FUNCTIONAPP_NAME> --storage-account <STORAGE_NAME>
``` > [!NOTE] > Check [Azure Functions runtime versions documentation](../azure-functions/functions-versions.md#languages) to set `--runtime-version` parameter to supported value.
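After the function app resource exists in Azure, publishing the local project is typically done with Core Tools from the project folder; the app name below is a placeholder:

```bash
func azure functionapp publish <FUNCTIONAPP_NAME>
```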
backup Backup Mabs Whats New Mabs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-whats-new-mabs.md
Title: What's new in Microsoft Azure Backup Server
description: Microsoft Azure Backup Server gives you enhanced backup capabilities for protecting VMs, files and folders, workloads, and more. Previously updated : 12/04/2023 Last updated : 01/09/2024
The following table lists the included features in MABS V4:
| Parallel online backup jobs - limit enhancement | MABS V4 supports increasing the maximum parallel online backup jobs from *eight* to a configurable limit based on your hardware and network limitations through a registry key for faster online backups. [Learn more](backup-azure-microsoft-azure-backup.md). | | Faster Item Level Recoveries | MABS V4 moves away from File Catalog for online backup of file/folder workloads. File Catalog was necessary to restore individual files and folders from online recovery points, but increased backup time by uploading file metadata. <br><br> MABS V4 uses an *iSCSI mount* to provide faster individual file restores and reduces backup time, because file metadata doesn't need to be uploaded. |
+## What's new in MABS V3 UR2 Hotfix?
+
+This update contains the following enhancement to improve the backup time. For more information on the enhancements and the installation, see the [KB article](https://help.microsoft.com/support/5031799).
+
+**Removed File Catalog dependency for online backup of file/folder workloads**: This update removes the dependency of MABS V3 on File Catalog (the list of files in a recovery point maintained in the cloud), which was needed to restore individual files and folders from the online recovery points. This Hotfix allows MABS V3 UR2 to use a modern *iSCSI mount* method to provide individual file restoration.
+
+**Advantages**:
+
+- Reduces the backup time by up to *15%* because file catalog metadata (list of files in a recovery point) isn't generated during the backup operation.
+++
+>[!Note]
+>We recommend that you update your MABS V3 installation to Hotfix for Update Rollup 2 to benefit from the enhancement. Ensure that you also update your MARS Agent to the latest version (2.0.9262.0 or higher).
+ ## WhatΓÇÖs new in MABS v3 UR2? Microsoft Azure Backup Server (MABS) version 3 UR2 supports the following new features/feature updates.
bastion Quickstart Developer Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-developer-sku.md
description: Learn how to deploy Bastion using the Developer SKU.
Previously updated : 12/04/2023 Last updated : 01/11/2024
The Bastion Developer SKU is a new [lower-cost](https://azure.microsoft.com/pric
When you deploy Bastion using the Developer SKU, the deployment requirements are different than when you deploy using other SKUs. Typically when you create a bastion host, a host is deployed to the AzureBastionSubnet in your virtual network. The Bastion host is dedicated for your use. When using the Developer SKU, a bastion host isn't deployed to your virtual network and you don't need an AzureBastionSubnet. However, the Developer SKU bastion host isn't a dedicated resource and is, instead, part of a shared pool.
-Because the Developer SKU bastion resource isn't dedicated, the features for the Developer SKU are limited. See the Bastion configuration settings [SKU](configuration-settings.md) section for features by SKU. For more information about pricing, see the [Pricing](https://azure.microsoft.com/pricing/details/azure-bastion/) page. You can always upgrade the Developer SKU to a higher SKU if you need more features. See [Upgrade a SKU](upgrade-sku.md).
+Because the Developer SKU bastion resource isn't dedicated, the features for the Developer SKU are limited. See the Bastion configuration settings [SKU](configuration-settings.md) section for features by SKU. You can always upgrade the Developer SKU to a higher SKU if you need more features. See [Upgrade a SKU](upgrade-sku.md).
## <a name="prereq"></a>Prerequisites
Because the Developer SKU bastion resource isn't dedicated, the features for the
* If you need example values, see the [Example values](#values) section. * If you already have a virtual network, make sure it's selected on the Networking tab when you create your VM. * If you don't have a virtual network, you can create one at the same time you create your VM.
+ * If you have a virtual network, make sure you have the rights to write to it.
* **Required VM roles:**
bastion Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/troubleshoot.md
Title: 'Troubleshoot Azure Bastion' description: Learn how to troubleshoot Azure Bastion. -+ Previously updated : 05/08/2023- Last updated : 01/11/2024+ # Troubleshoot Azure Bastion
The key's randomart image is:
**A:** You can troubleshoot your connectivity issues by navigating to the **Connection Troubleshoot** tab (in the **Monitoring** section) of your Azure Bastion resource in the Azure portal. Network Watcher Connection Troubleshoot provides the capability to check a direct TCP connection from a virtual machine (VM) to a VM, fully qualified domain name (FQDN), URI, or IPv4 address. To start, choose a source to start the connection from, and the destination you wish to connect to and select "Check". For more information, see [Connection Troubleshoot](../network-watcher/network-watcher-connectivity-overview.md).
+If just-in-time (JIT) is enabled, you might need to add additional role assignments to connect to Bastion. Add the following permissions to the user, and then try reconnecting to Bastion. For more information, see [Enable just-in-time access on VMs](../defender-for-cloud/just-in-time-access-usage.md).
+
+| Setting | Description|
+|||
+|Microsoft.Security/locations/jitNetworkAccessPolicies/read|Gets the just-in-time network access policies|
+|Microsoft.Security/locations/jitNetworkAccessPolicies/write | Creates a new just-in-time network access policy or updates an existing one |
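If you prefer to grant these actions through a custom role rather than a built-in one, a minimal sketch with the Azure CLI could look like the following; the role name and assignable scope are placeholders, not values from this article:

```azurecli
az role definition create --role-definition '{
  "Name": "JIT Network Access Policy Operator (example)",
  "Description": "Read and write just-in-time network access policies.",
  "Actions": [
    "Microsoft.Security/locations/jitNetworkAccessPolicies/read",
    "Microsoft.Security/locations/jitNetworkAccessPolicies/write"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```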
+ ## <a name="filetransfer"></a>File transfer issues
The key's randomart image is:
**Q:** When I try to connect using Azure Bastion, I can't connect to the target VM, and I get a black screen in the Azure portal.
-**A:** This happens when there's either a network connectivity issue between your web browser and Azure Bastion (your client Internet firewall may be blocking WebSockets traffic or similar), or between the Azure Bastion and your target VM. Most cases include an NSG applied either to AzureBastionSubnet, or on your target VM subnet that is blocking the RDP/SSH traffic in your virtual network. Allow WebSockets traffic on your client internet firewall, and check the NSGs on your target VM subnet. See [Unable to connect to virtual machine](#connectivity) to learn how to use **Connection Troubleshoot** to troubleshoot your connectivity issues.
+**A:** This happens when there's either a network connectivity issue between your web browser and Azure Bastion (your client Internet firewall might be blocking WebSockets traffic or similar), or between the Azure Bastion and your target VM. Most cases include an NSG applied either to AzureBastionSubnet, or on your target VM subnet that is blocking the RDP/SSH traffic in your virtual network. Allow WebSockets traffic on your client internet firewall, and check the NSGs on your target VM subnet. See [Unable to connect to virtual machine](#connectivity) to learn how to use **Connection Troubleshoot** to troubleshoot your connectivity issues.
## Next steps
communication-services Network Traversal Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/network-traversal-logs.md
Azure Communication Services offers logging capabilities that you can use to monitor and debug your Communication Services solution. These capabilities can be configured through the Azure portal. + > [!IMPORTANT] > The following refers to logs enabled through [Azure Monitor](../../../../azure-monitor/overview.md) (see also [FAQ](../../../../azure-monitor/overview.md#frequently-asked-questions)). To enable these logs for your Communications Services, see: [Enable logging in Diagnostic Settings](../enable-logging.md)
communication-services Turn Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/turn-metrics.md
Azure Communication Services currently provides metrics for all Communication Se
- Investigate abnormalities in your metric values. - Understand your API traffic by using the metrics data that Chat requests emit. + ## Where to find metrics Primitives in Communication Services emit metrics for API requests. To find these metrics, see the **Metrics** tab under your Communication Services resource. You can also create permanent dashboards by using the workbooks tab under your Communication Services resource.
communication-services Network Traversal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/network-traversal.md
Real-time Relays solve the problem of NAT (Network Address Translation) traversa
* STUN (Session Traversal Utilities for NAT) offers a protocol to allow devices to exchange external IPs on the internet. If the clients can see each other, there is typically no need for a relay through a TURN service since the connection can be made peer-to-peer. A STUN server's job is to respond to request for a device's external IP. * TURN (Traversal Using Relays around NAT) is an extension of the STUN protocol that also relays the data between two endpoints through a mutually visible server. + ## Azure Communication Services Network Traversal Overview WebRTC(Web Real-Time Technologies) allow web browsers to stream audio, video, and data between devices without needing to have a gateway in the middle. Some of the common use cases here are voice, video, broadcasting, and screen sharing. To connect two endpoints on the internet, their external IP address is required. External IP is typically not available for devices sitting behind a corporate firewall. The protocols like STUN (Session Traversal Utilities for NAT) and TURN (Traversal Using Relays around NAT) are used to help the endpoints communicate.
communication-services Relay Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/relay-token.md
This quickstart shows how to retrieve a network relay token to access Azure Communication Services TURN servers. + ## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free)
confidential-computing Concept Skr Attestation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/concept-skr-attestation.md
Title: Secure Key Release with Azure Key Vault and Azure Confidential Computing description: Concept guide on what SKR is and its usage with Azure Confidential Computing Offerings-+ Last updated 8/22/2023-+ # Secure Key Release feature with AKV and Azure Confidential Computing (ACC)
confidential-computing Confidential Containers Enclaves https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-containers-enclaves.md
Title: Confidential containers with Intel SGX enclaves on Azure description: Learn about unmodified container support with confidential containers on Intel SGX through OSS and partner solutions -+ Last updated 7/15/2022-+
confidential-computing Confidential Nodes Aks Addon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-addon.md
Title: Azure Kubernetes Service plugin for confidential VMs description: How to use the Intel SGX device plugin and Intel SGX quote helper daemon sets for confidential VMs with Azure Kubernetes Service.-+ Last updated 11/01/2021-+
confidential-computing Confidential Nodes Aks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-nodes-aks-overview.md
Title: Confidential computing application enclave nodes on Azure Kubernetes Service (AKS) description: Intel SGX based confidential computing VM nodes with application enclave support -+ Last updated 07/15/2022-+
confidential-computing Enclave Aware Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/enclave-aware-containers.md
Title: Enclave aware containers on Azure description: enclave ready application containers support on Azure Kubernetes Service (AKS)-+ Last updated 9/22/2020-+
confidential-computing Skr Flow Confidential Containers Azure Container Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-flow-confidential-containers-azure-container-instance.md
Title: Secure Key Release with Azure Key Vault and Confidential Containers on Azure Container Instance description: Learn how to build an application that securely gets the key from AKV to an attested Azure Container Instances confidential container environment-+ Last updated 3/9/2023-+ # Secure Key Release with Confidential containers on Azure Container Instance (ACI)
confidential-computing Skr Policy Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/skr-policy-examples.md
Title: Secure Key Release Policy with Azure Key Vault and Azure Confidential Computing description: Examples of AKV SKR policies across offered Azure Confidential Computing Trusted Execution Environments-+ Last updated 3/5/2023-+ # Secure Key Release Policy (SKR) Examples for Confidential Computing (ACC)
connectors Connectors Azure Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-application-insights.md
ms.suite: integration Previously updated : 03/07/2023 Last updated : 01/10/2024 tags: connectors # As a developer, I want to get telemetry from an Application Insights resource to use with my workflow in Azure Logic Apps.
connectors Connectors Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-monitor-logs.md
ms.suite: integration Previously updated : 03/06/2023 Last updated : 01/10/2024 tags: connectors # As a developer, I want to get log data from my Log Analytics workspace or telemetry from my Application Insights resource to use with my workflow in Azure Logic Apps.
connectors Connectors Create Api Azureblobstorage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-azureblobstorage.md
ms.suite: integration Previously updated : 06/27/2023 Last updated : 01/10/2024 tags: connectors
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md
ms.suite: integration Previously updated : 08/23/2023 Last updated : 01/10/2024 tags: connectors
connectors Connectors Create Api Office365 Outlook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-office365-outlook.md
ms.suite: integration Previously updated : 08/23/2023 Last updated : 01/10/2024 tags: connectors
connectors Connectors Create Api Sqlazure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-sqlazure.md
ms.suite: integration Previously updated : 07/24/2023 Last updated : 01/10/2024 tags: connectors ## As a developer, I want to access my SQL database from my logic app workflow.
connectors Connectors Native Recurrence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-recurrence.md
ms.suite: integration
Previously updated : 10/24/2023 Last updated : 01/10/2024 # Schedule and run recurring workflows with the Recurrence trigger in Azure Logic Apps
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
ms.suite: integration ms.reviewers: estfan, azla Previously updated : 07/31/2023 Last updated : 01/10/2024 tags: connectors
connectors Connectors Sftp Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-sftp-ssh.md
ms.suite: integration Previously updated : 08/01/2023 Last updated : 01/10/2024 # Connect to an SFTP file server from workflows in Azure Logic Apps
connectors Enable Stateful Affinity Built In Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/enable-stateful-affinity-built-in-connectors.md
ms.suite: integration
Previously updated : 06/13/2023 Last updated : 01/10/2024 # Enable stateful mode for stateless built-in connectors in Azure Logic Apps
connectors File System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/file-system.md
ms.suite: integration Previously updated : 08/17/2023 Last updated : 01/10/2024 # Connect to on-premises file systems from workflows in Azure Logic Apps
connectors Integrate Ims Apps Ibm Mainframe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/integrate-ims-apps-ibm-mainframe.md
Previously updated : 10/30/2023 Last updated : 01/10/2024 # Integrate IMS programs on IBM mainframes with Standard workflows in Azure Logic Apps
connectors Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/introduction.md
ms.suite: integration Previously updated : 03/02/2023 Last updated : 01/10/2024 # As a developer, I want to learn how connectors help me access data, events, and resources in other apps, services, systems, and platforms from my workflow in Azure Logic Apps.
cosmos-db Emulator Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/emulator-release-notes.md
Last updated 09/11/2023
The Azure Cosmos DB emulator is updated at a regular cadence with release notes provided in this article. > [!div class="nextstepaction"]
-> [Download latest version (``2.14.16``)](https://aka.ms/cosmosdb-emulator)
+> [Download latest version (``2.14.12``)](https://aka.ms/cosmosdb-emulator)
## Supported versions Only the most recent version of the Azure Cosmos DB emulator is actively supported.
-## Latest version ``2.14.16``
+## Latest version ``2.14.12``
-> *Released January 8, 2024*
+> *Released March 20, 2023*
-- This release fixes an issue which was causing emulator to bind with `loopback` instead of `public interface` even after passing /AllowNetworkAccess command line option.
+- This release fixes an issue impacting Gremlin and Table endpoint API types. Prior to this fix a client application fails with a 500 status code when trying to connect to the public emulator's endpoint.
## Previous releases > [!WARNING] > Previous versions of the emulator are not supported by the product group.
-### ``2.14.12`` (March 20, 2023)
--- This release fixes an issue impacting Gremlin and Table endpoint API types. Prior to this fix a client application fails with a 500 status code when trying to connect to the public emulator's endpoint.- ### ``2.14.11`` (January 27, 2023) - This release updates the Azure Cosmos DB Emulator background services to match the latest online functionality of the Azure Cosmos DB.
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
Install-Module @parameters
Use [`az extension add`](/cli/azure/extension#az-extension-add) to install the [cosmosdb-preview](https://github.com/azure/azure-cli-extensions/tree/main/src/cosmosdb-preview) Azure CLI extension.

```azurecli-interactive
az extension add \
    --name cosmosdb-preview
```

#### [API for NoSQL](#tab/nosql/azure-powershell)
az cosmosdb sql container merge \
--name '<cosmos-container-name>' ```
+For **shared throughput databases**, start the merge by using `az cosmosdb sql database merge`.
+++++
+```azurecli
+az cosmosdb sql database merge \
+ --account-name '<cosmos-account-name>' \
+ --name '<cosmos-database-name>' \
+ --resource-group '<resource-group-name>'
+```
++
+```http
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/sqlDatabases/{databaseName}/partitionMerge?api-version=2023-11-15-preview
+```
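If you want to invoke this management endpoint directly, `az rest` handles Azure Resource Manager authentication for you; the identifiers below are placeholders, and the preview API might also expect a request body, so check the REST reference for the exact payload:

```azurecli
az rest --method post \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.DocumentDB/databaseAccounts/<account-name>/sqlDatabases/<database-name>/partitionMerge?api-version=2023-11-15-preview"
```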
+ #### [API for MongoDB](#tab/mongodb/azure-powershell) Use `Invoke-AzCosmosDBMongoDBCollectionMerge` with the `-WhatIf` parameter to preview the merge without actually performing the operation.
Invoke-AzCosmosDBMongoDBCollectionMerge @parameters
#### [API for MongoDB](#tab/mongodb/azure-cli)
-Start the merge by using [`az cosmosdb mongodb collection merge`](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-merge).
+For **shared-throughput databases**, start the merge by using [`az cosmosdb mongodb database merge`](/cli/azure/cosmosdb/mongodb/database?view=azure-cli-latest).
+++
+```azurecli
+az cosmosdb mongodb database merge \
+ --account-name '<cosmos-account-name>' \
+ --name '<cosmos-database-name>' \
+ --resource-group '<resource-group-name>'
+```
++
+```http
+POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DocumentDB/databaseAccounts/{accountName}/mongodbDatabases/{databaseName}/partitionMerge?api-version=2023-11-15-preview
+```
+
+For **provisioned containers**, start the merge by using [`az cosmosdb mongodb collection merge`](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-merge).
```azurecli-interactive az cosmosdb mongodb collection merge \
az cosmosdb mongodb collection merge \
--account-name '<cosmos-account-name>' \ --database-name '<cosmos-database-name>' \ --name '<cosmos-collection-name>'+ ``` +++ ### Monitor merge operations
To enroll in the preview, your Azure Cosmos DB account must meet all the followi
- Your Azure Cosmos DB account uses API for NoSQL or MongoDB with version >=3.6. - Your Azure Cosmos DB account is using provisioned throughput (manual or autoscale). Merge doesn't apply to serverless accounts.
- - Currently, merge isn't supported for shared throughput databases. You may enroll an account that has both shared throughput databases and containers with dedicated throughput (manual or autoscale).
- - However, only the containers with dedicated throughput are able to be merged.
- Your Azure Cosmos DB account is a single-write region account (merge isn't currently supported for multi-region write accounts). - Your Azure Cosmos DB account doesn't use any of the following features: - [Point-in-time restore](continuous-backup-restore-introduction.md)
If you enroll in the preview, the following connectors fail.
- Learn more about [using Azure CLI with Azure Cosmos DB.](/cli/azure/azure-cli-reference-for-cosmos-db) - Learn more about [using Azure PowerShell with Azure Cosmos DB.](/powershell/module/az.cosmosdb/) - Learn more about [partitioning in Azure Cosmos DB.](partitioning-overview.md)+
cost-management-billing Direct Ea Administration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-administration.md
Title: EA Billing administration on the Azure portal
description: This article explains the common tasks that an enterprise administrator accomplishes in the Azure portal. Previously updated : 11/07/2023 Last updated : 01/11/2024
After the account owner receives an account ownership email, they need to confir
After account ownership is confirmed, you can create subscriptions and purchase resources with the subscriptions.
+>[!NOTE]
+>The confirmation process can take up to 24 hours.
+ ### To activate an enrollment account with a .onmicrosoft.com account If you're a new EA account owner with a .onmicrosoft.com account, you might not have a forwarding email address by default. In that situation, you might not receive the activation email. If this situation applies to you, use the following steps to activate your account ownership.
data-factory Monitor Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-ssis.md
Previously updated : 10/20/2023 Last updated : 1/11/2024 # Monitor SSIS operations with Azure Monitor
+> [!NOTE]
+> You can only monitor SSIS operations with Azure Monitor in Azure Data Factory, not in Azure Synapse Pipelines.
+ To lift & shift your SSIS workloads, you can [provision SSIS IR in ADF](./tutorial-deploy-ssis-packages-azure.md) that supports: - Running packages deployed into SSIS catalog (SSISDB) hosted by Azure SQL Database server/Managed Instance (Project Deployment Model)
Once provisioned, you can [check SSIS IR operational status using Azure PowerShe
Now with [Azure Monitor](../azure-monitor/data-platform.md) integration, you can query, analyze, and visually present all metrics and logs generated from SSIS IR operations and SSIS package executions on Azure portal. Additionally, you can also raise alerts on them. ++ ## Configure diagnostic settings and workspace for SSIS operations To send all metrics and logs generated from SSIS IR operations and SSIS package executions to Azure Monitor, you need to [configure diagnostics settings and workspace for your ADF](monitor-configure-diagnostics.md).
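As a rough sketch, routing a data factory's diagnostic logs to a Log Analytics workspace with the Azure CLI might look like the following; the resource IDs are placeholders and the log category shown is only an illustrative SSIS-related category, so confirm the exact category names in the linked configuration article:

```azurecli
az monitor diagnostic-settings create \
  --name ssis-to-log-analytics \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DataFactory/factories/<data-factory-name>" \
  --workspace "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>" \
  --logs '[{"category": "SSISIntegrationRuntimeLogs", "enabled": true}]'
```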
defender-for-cloud Cross Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/cross-tenant-management.md
Title: Cross-tenant management description: Learn how to set up cross-tenant management to manage the security posture of multiple tenants in Defender for Cloud using Azure Lighthouse. - Last updated 11/09/2021- # Cross-tenant management in Defender for Cloud
-Cross-tenant management enables you to view and manage the security posture of multiple tenants in Defender for Cloud by leveraging [Azure Lighthouse](../lighthouse/overview.md). Manage multiple tenants efficiently, from a single view, without having to sign in to each tenant's directory.
+Cross-tenant management enables you to view and manage the security posture of multiple tenants in Defender for Cloud by using [Azure Lighthouse](../lighthouse/overview.md). Manage multiple tenants efficiently, from a single view, without having to sign in to each tenant's directory.
- Service providers can manage the security posture of resources, for multiple customers, from within their own tenant.
Cross-tenant management enables you to view and manage the security posture of m
[Azure delegated resource management](../lighthouse/concepts/architecture.md) is one of the key components of Azure Lighthouse. Set up cross-tenant management by delegating access to resources of managed tenants to your own tenant using these instructions from Azure Lighthouse's documentation: [Onboard a customer to Azure Lighthouse](../lighthouse/how-to/onboard-customer.md).
+## How cross-tenant management works in Defender for Cloud
-## How does cross-tenant management work in Defender for Cloud
-
-You are able to review and manage subscriptions across multiple tenants in the same way that you manage multiple subscriptions in a single tenant.
+You're able to review and manage subscriptions across multiple tenants in the same way that you manage multiple subscriptions in a single tenant.
-From the top menu bar, click the filter icon, and select the subscriptions, from each tenant's directory, you'd like to view.
+From the top menu bar, select the filter icon, and select the subscriptions, from each tenant's directory, you'd like to view.
![Filter tenants.](./media/cross-tenant-management/cross-tenant-filter.png)
The views and actions are basically the same. Here are some examples:
- **Manage Alerts**: Detect [alerts](alerts-overview.md) throughout the different tenants. Take action on resources that are out of compliance with actionable [remediation steps](managing-and-responding-alerts.md). - **Manage advanced cloud defense features and more**: Manage the various threat protection services, such as [just-in-time (JIT) VM access](just-in-time-access-usage.md), [Adaptive network hardening](adaptive-network-hardening.md), [adaptive application controls](adaptive-application-controls.md), and more.
-
+ ## Next steps
-This article explains how cross-tenant management works in Defender for Cloud. To discover how Azure Lighthouse can simplify cross-tenant management within an enterprise which uses multiple Microsoft Entra tenants, see [Azure Lighthouse in enterprise scenarios](../lighthouse/concepts/enterprise.md).
+
+This article explains how cross-tenant management works in Defender for Cloud. To discover how Azure Lighthouse can simplify cross-tenant management within an enterprise that uses multiple Microsoft Entra tenants, see [Azure Lighthouse in enterprise scenarios](../lighthouse/concepts/enterprise.md).
defender-for-cloud Protect Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/protect-network-resources.md
For a full list of the recommendations for Networking, see [Networking recommend
This article addresses recommendations that apply to your Azure resources from a network security perspective. Networking recommendations center around next generation firewalls, Network Security Groups, JIT VM access, overly permissive inbound traffic rules, and more. For a list of networking recommendations and remediation actions, see [Managing security recommendations in Microsoft Defender for Cloud](review-security-recommendations.md).
-The **Networking** features of Defender for Cloud include:
+The **Networking** features of Defender for Cloud include:
- Network map (requires [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)) - [Adaptive network hardening](adaptive-network-hardening.md) (requires [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)) - Networking security recommendations
-
+ ## View your networking resources and their recommendations From the [asset inventory page](asset-inventory.md), use the resource type filter to select the networking resources that you want to investigate: :::image type="content" source="./media/protect-network-resources/network-filters-inventory.png" alt-text="Asset inventory network resource types." lightbox="./media/protect-network-resources/network-filters-inventory.png"::: - ## Network map The interactive network map provides a graphical view with security overlays giving you recommendations and insights for hardening your network resources. Using the map you can see the network topology of your Azure workloads, connections between your virtual machines and subnets, and the capability to drill down from the map into specific resources and the recommendations for those resources.
To open the Network map:
:::image type="content" source="media/protect-network-resources/workload-protection-network-map.png" alt-text="Screenshot showing selection of network map from workload protections." lightbox="media/protect-network-resources/workload-protection-network-map.png"::: 1. Select the **Layers** menu choose **Topology**.
-
+ The default view of the topology map displays: - Currently selected subscriptions - The map is optimized for the subscriptions you selected in the portal. If you modify your selection, the map is regenerated with the new selections.
The default view of the topology map displays:
## Understanding the network map
-The network map can show you your Azure resources in a **Topology** view and a **Traffic** view.
+The network map can show you your Azure resources in a **Topology** view and a **Traffic** view.
### The topology view In the **Topology** view of the networking map, you can view the following insights about your networking resources: - In the inner circle, you can see all the VNets within your selected subscriptions, the next circle is all the subnets, the outer circle is all the virtual machines.-- The lines connecting the resources in the map let you know which resources are associated with each other, and how your Azure network is structured.
+- The lines connecting the resources in the map let you know which resources are associated with each other, and how your Azure network is structured.
- Use the severity indicators to quickly get an overview of which resources have open recommendations from Defender for Cloud.-- You can click any of the resources to drill down into them and view the details of that resource and its recommendations directly, and in the context of the Network map.
+- You can select any of the resources to drill down into them and view the details of that resource and its recommendations directly, and in the context of the Network map.
- If there are too many resources being displayed on the map, Microsoft Defender for Cloud uses its proprietary algorithm to 'smart cluster' your resources, highlighting the ones that are in the most critical state, and have the most high severity recommendations. Because the map is interactive and dynamic, every node is clickable, and the view can change based on the filters: 1. You can modify what you see on the network map by using the filters at the top. You can focus the map based on:
- - **Security health**: You can filter the map based on Severity (High, Medium, Low) of your Azure resources.
+ - **Security health**: You can filter the map based on Severity (High, Medium, Low) of your Azure resources.
- **Recommendations**: You can select which resources are displayed based on which recommendations are active on those resources. For example, you can view only resources for which Defender for Cloud recommends you enable Network Security Groups. - **Network zones**: By default, the map displays only Internet facing resources, you can select internal VMs as well.
-
-2. You can click **Reset** in top left corner at any time to return the map to its default state.
+
+1. You can select **Reset** in the top left corner at any time to return the map to its default state.
To drill down into a resource:
-1. When you select a specific resource on the map, the right pane opens and gives you general information about the resource, connected security solutions if there are any, and the recommendations relevant to the resource. It's the same type of behavior for each type of resource you select.
+1. When you select a specific resource on the map, the right pane opens and gives you general information about the resource, connected security solutions if there are any, and the recommendations relevant to the resource. It's the same type of behavior for each type of resource you select.
2. When you hover over a node in the map, you can view general information about the resource, including subscription, resource type, and resource group.
-3. Use the link to zoom into the tool tip and refocus the map on that specific node.
+3. Use the link to zoom into the tool tip and refocus the map on that specific node.
4. To refocus the map away from a specific node, zoom out. ### The Traffic view
The **Traffic** view provides you with a map of all the possible traffic between
### Uncover unwanted connections
-The strength of this view is in its ability to show you these allowed connections together with the vulnerabilities that exist, so you can use this cross-section of data to perform the necessary hardening on your resources.
+The strength of this view is in its ability to show you these allowed connections together with the vulnerabilities that exist, so you can use this cross-section of data to perform the necessary hardening on your resources.
For example, you might detect two machines that you werenΓÇÖt aware could communicate, enabling you to better isolate the workloads and subnets.
For example, you might detect two machines that you werenΓÇÖt aware could commun
To drill down into a resource:
-1. When you select a specific resource on the map, the right pane opens and gives you general information about the resource, connected security solutions if there are any, and the recommendations relevant to the resource. It's the same type of behavior for each type of resource you select.
-2. Click **Traffic** to see the list of possible outbound and inbound traffic on the resource - this is a comprehensive list of who can communicate with the resource and who it can communicate with, and through which protocols and ports. For example, when you select a VM, all the VMs it can communicate with are shown, and when you select a subnet, all the subnets which it can communicate with are shown.
+1. When you select a specific resource on the map, the right pane opens and gives you general information about the resource, connected security solutions if there are any, and the recommendations relevant to the resource. It's the same type of behavior for each type of resource you select.
+2. Select **Traffic** to see the list of possible outbound and inbound traffic on the resource - this is a comprehensive list of who can communicate with the resource and who it can communicate with, and through which protocols and ports. For example, when you select a VM, all the VMs it can communicate with are shown, and when you select a subnet, all the subnets which it can communicate with are shown.
-**This data is based on analysis of the Network Security Groups as well as advanced machine learning algorithms that analyze multiple rules to understand their crossovers and interactions.**
+**This data is based on analysis of the Network Security Groups as well as advanced machine learning algorithms that analyze multiple rules to understand their crossovers and interactions.**
[![Networking traffic map.](./media/protect-network-resources/network-map-traffic.png)](./media/protect-network-resources/network-map-traffic.png#lightbox) - ## Next steps To learn more about recommendations that apply to other Azure resource types, see the following:
defender-for-iot Back Up Restore Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/back-up-restore-sensor.md
We recommend saving your OT sensor backup files on your internal network. To do
1. Create a shared folder on the external SMB server, and make sure that you have the folder's path and the credentials required to access the SMB server. 1. Sign into your OT sensor via SSH using the [*admin*](roles-on-premises.md#access-per-privileged-user) user.-
- If you're using a sensor version earlier than 23.2.0, use the [*cyberx_host*](roles-on-premises.md#legacy-users) user instead. Skip the next step for running `system shell` and jump directly to creating a directory for your backup files.
+ > [!NOTE]
+ > If you're using a sensor version earlier than 23.2.0, use the [*cyberx_host*](roles-on-premises.md#legacy-users) user instead. Skip the next step for running `system shell` and jump directly to creating a directory for your backup files.
1. Access the host by running the `system shell` command. Enter the admin user's password when prompted and press **ENTER**.
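   The article leaves the exact commands for this step to you, but a typical sequence for creating a backup directory and mounting the SMB share from the host shell looks roughly like this; the directory path, server, share, and credentials are placeholders:

   ```bash
   sudo mkdir -p /opt/sensor-backups
   sudo mount -t cifs //<smb-server>/<backup-share> /opt/sensor-backups \
     -o username=<smb-user>,password=<smb-password>,vers=3.0
   ```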
dev-box How To Configure Network Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-network-connections.md
Title: Configure network connections
-description: Learn how to manage network connections for a dev center in Microsoft Dev Box. Use network connections to connect to virtual network or enable connecting to on-premises resources from a dev box.
+description: Learn how to manage network connections for a dev center in Microsoft Dev Box. Connect to a virtual network or enable connecting to on-premises resources.
Previously updated : 04/25/2023 Last updated : 12/20/2023 #Customer intent: As a platform engineer, I want to be able to manage network connections so that I can enable dev boxes to connect to my existing networks and deploy them in the desired region.
You can choose to deploy dev boxes to a Microsoft-hosted network associated with
You need to add at least one network connection to a dev center in Microsoft Dev Box.
-When you're planning network connectivity for your dev boxes, you must:
+## Prerequisites
+
+- Sufficient permissions to enable creating and configuring network connections.
+- At least one virtual network and subnet available for your dev boxes.
+
+When you're planning network connectivity for your dev boxes, consider the following points:
-- Ensure that you have sufficient permissions to create and configure network connections.-- Ensure that you have at least one virtual network and subnet available for your dev boxes. - Identify the region or location that's closest to your dev box users. Deploying dev boxes into a region that's close to users gives them a better experience. - Determine whether dev boxes should connect to your existing networks by using Microsoft Entra join or Microsoft Entra hybrid join.
-## Permissions
+### Verify your permissions
-To manage a network connection, you need the following permissions:
+To manage a network connection, confirm that you have the following permissions:
-|Action|Permissions required|
-|--|--|
-|Create and configure a virtual network and subnet|Network Contributor permissions on an existing virtual network (Owner or Contributor), or permission to create a new virtual network and subnet.|
-|Create or delete a network connection|Owner or Contributor permissions on an Azure subscription or on a specific resource group, which includes permission to create a resource group.|
-|Add or remove a network connection |Write permission on the dev center.|
+| Action | Role | Permissions required |
+||||
+| _Create and configure a virtual network and subnet_ | **Network Contributor** (**Owner** or **Contributor**) | Permissions on an existing virtual network or permission to create a new virtual network and subnet |
+| _Create or delete a network connection_ | **Owner** or **Contributor** | Permissions on an Azure subscription or on a specific resource group, which includes permission to create a resource group |
+| _Add or remove a network connection_ | **Contributor** | Permission to perform **Write** actions on the dev center |
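For example, granting a user the Network Contributor role on an existing virtual network with the Azure CLI might look like this; the identifiers are placeholders:

```azurecli
az role assignment create \
  --assignee "<user-or-group-object-id>" \
  --role "Network Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"
```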
## Create a virtual network and subnet
To create a network connection, you need an existing virtual network and subnet.
1. On the **Create virtual network** pane, on the **Basics** tab, enter the following values: | Setting | Value |
- | - | -- |
+ |||
| **Subscription** | Select your subscription. |
- | **Resource group** | Select an existing resource group. Or create a new one by selecting **Create new**, entering **rg-name**, and then selecting **OK**. |
- | **Name** | Enter *VNet-name*. |
+ | **Resource group** | Select an existing resource group, or create a new one by selecting **Create new**, entering a name, and then selecting **OK**. |
+ | **Name** | Enter a name for the virtual network. |
| **Region** | Select the region for the virtual network and dev boxes. |
- :::image type="content" source="./media/how-to-manage-network-connection/example-basics-tab.png" alt-text="Screenshot of the Basics tab on the pane for creating a virtual network in the Azure portal." border="true":::
+ :::image type="content" source="./media/how-to-manage-network-connection/example-basics-tab.png" alt-text="Screenshot of the Basics tab on the pane for creating a virtual network in the Azure portal." lightbox="./media/how-to-manage-network-connection/example-basics-tab.png":::
- > [!Important]
- > The region that you select for the virtual network is the where the dev boxes will be deployed.
+ > [!IMPORTANT]
+   > The region you select for the virtual network is where Azure deploys the dev boxes.
1. On the **IP Addresses** tab, accept the default settings.
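   If you prefer the Azure CLI over the portal, an equivalent virtual network and subnet can be created along these lines; the names and address ranges are illustrative only:

   ```azurecli
   az network vnet create \
     --resource-group <resource-group> \
     --name <vnet-name> \
     --location <region> \
     --address-prefix 10.4.0.0/16 \
     --subnet-name <subnet-name> \
     --subnet-prefix 10.4.1.0/24
   ```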
If your organization routes egress traffic through a firewall, you need to open
The following sections show you how to create and configure a network connection in Microsoft Dev Box.
-### Types of Active Directory join
-
-Microsoft Dev Box requires a configured and working Active Directory join, which defines how dev boxes join your domain and access resources. There are two choices:
+### Review types of Active Directory join
-- **Microsoft Entra join**: If your organization uses Microsoft Entra ID, you can use a Microsoft Entra join (sometimes called a native Microsoft Entra join). Dev box users sign in to Microsoft Entra joined dev boxes by using their Microsoft Entra account and access resources based on the permissions assigned to that account. Microsoft Entra join enables access to cloud-based and on-premises apps and resources.
+Microsoft Dev Box requires a configured and working Active Directory join, which defines how dev boxes join your domain and access resources. There are two choices: Microsoft Entra join and Microsoft Entra hybrid join.
- For more information, see [Plan your Microsoft Entra join deployment](../active-directory/devices/device-join-plan.md).
-- **Microsoft Entra hybrid join**: If your organization has an on-premises Active Directory implementation, you can still benefit from some of the functionality in Microsoft Entra ID by using Microsoft Entra hybrid joined dev boxes. These dev boxes are joined to your on-premises Active Directory instance and registered with Microsoft Entra ID.
+- **Microsoft Entra join**. If your organization uses Microsoft Entra ID, you can use a Microsoft Entra join (sometimes called a _native_ Microsoft Entra join). Dev box users sign in to Microsoft Entra joined dev boxes by using their Microsoft Entra account. They access resources based on the permissions assigned to that account. Microsoft Entra join enables access to cloud-based and on-premises apps and resources. For more information, see [Plan your Microsoft Entra join deployment](../active-directory/devices/device-join-plan.md).
- Microsoft Entra hybrid joined dev boxes require network line of sight to your on-premises domain controllers periodically. Without this connection, devices become unusable.
-
- For more information, see [Plan your Microsoft Entra hybrid join deployment](../active-directory/devices/hybrid-join-plan.md).
+- **Microsoft Entra hybrid join**. If your organization has an on-premises Active Directory implementation, you can still benefit from some of the functionality in Microsoft Entra ID by using Microsoft Entra hybrid joined dev boxes. These dev boxes are joined to your on-premises Active Directory instance and registered with Microsoft Entra ID. Microsoft Entra hybrid joined dev boxes require network line of sight to your on-premises domain controllers periodically. Without this connection, devices become unusable. For more information, see [Plan your Microsoft Entra hybrid join deployment](../active-directory/devices/hybrid-join-plan.md).
### Create a network connection-
-Follow the steps on the relevant tab to create your network connection.
- <a name='azure-ad-join'></a>
-#### [**Microsoft Entra join**](#tab/AzureADJoin/)
+# [**Microsoft Entra join**](#tab/AzureADJoin/)
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box, enter **network connections**. In the list of results, select **Network connections**.
+1. In the search box, enter **network connections**. In the list of results, select **Network Connections**.
1. On the **Network Connections** page, select **Create**.
- :::image type="content" source="./media/how-to-manage-network-connection/network-connections-empty.png" alt-text="Screenshot that shows the Create button on the page for network connections.":::
+ :::image type="content" source="./media/how-to-manage-network-connection/network-connections-empty.png" alt-text="Screenshot that shows the Create button on the page for network connections." lightbox="./media/how-to-manage-network-connection/network-connections-empty.png":::
1. On the **Create a network connection** pane, on the **Basics** tab, enter the following values:
- |Name|Value|
- |-|-|
- |**Domain join type**|Select **Microsoft Entra join**.|
- |**Subscription**|Select the subscription in which you want to create the network connection.|
- |**ResourceGroup**|Select an existing resource group, or select **Create new** and then enter a name for the new resource group.|
- |**Name**|Enter a descriptive name for the network connection.|
- |**Virtual network**|Select the virtual network that you want the network connection to use.|
- |**Subnet**|Select the subnet that you want the network connection to use.|
+ | Setting | Value |
+ |||
+ | **Domain join type** | Select **Microsoft Entra join**. |
+ | **Subscription** | Select the subscription in which you want to create the network connection. |
+ | **Resource group** | Select an existing resource group, or select **Create new** and then enter a name for the new resource group. |
+ | **Name** | Enter a descriptive name for the network connection. |
+ | **Virtual network** | Select the virtual network that you want the network connection to use. |
+ | **Subnet** | Select the subnet that you want the network connection to use. |
- :::image type="content" source="./media/how-to-manage-network-connection/create-native-network-connection-full-blank.png" alt-text="Screenshot that shows the Basics tab on the pane for creating a network connection, with the option for Microsoft Entra join selected.":::
+ :::image type="content" source="./media/how-to-manage-network-connection/create-native-network-connection-full-blank.png" alt-text="Screenshot that shows the Basics tab on the pane for creating a network connection, with the option for Microsoft Entra join selected." lightbox="./media/how-to-manage-network-connection/create-native-network-connection-full-blank.png":::
1. Select **Review + Create**. 1. On the **Review** tab, select **Create**.
-1. When the deployment is complete, select **Go to resource**. Confirm that the connection appears on the **Network connections** page.
+1. When the deployment completes, select **Go to resource**. Confirm the connection appears on the **Network Connections** page.
<a name='hybrid-azure-ad-join'></a>
-#### [**Microsoft Entra hybrid join**](#tab/HybridAzureADJoin/)
+# [**Microsoft Entra hybrid join**](#tab/HybridAzureADJoin/)
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box, enter **network connections**. In the list of results, select **Network connections**.
+1. In the search box, enter **network connections**. In the list of results, select **Network Connections**.
1. On the **Network Connections** page, select **Create**.
- :::image type="content" source="./media/how-to-manage-network-connection/network-connections-empty.png" alt-text="Screenshot that shows the Create button on the page that lists network connections.":::
+ :::image type="content" source="./media/how-to-manage-network-connection/network-connections-empty.png" alt-text="Screenshot that shows the Create button on the page that lists network connections." lightbox="./media/how-to-manage-network-connection/network-connections-empty.png":::
1. On the **Create a network connection** pane, on the **Basics** tab, enter the following values:
- |Name|Value|
- |-|-|
- |**Domain join type**|Select **Microsoft Entra hybrid join**.|
- |**Subscription**|Select the subscription in which you want to create the network connection.|
- |**ResourceGroup**|Select an existing resource group, or select **Create new** and then enter a name for the new resource group.|
- |**Name**|Enter a descriptive name for the network connection.|
- |**Virtual network**|Select the virtual network that you want the network connection to use.|
- |**Subnet**|Select the subnet that you want the network connection to use.|
- |**AD DNS domain name**| Enter the DNS name of the Active Directory domain that you want to use for connecting and provisioning Cloud PCs. For example: `corp.contoso.com`. |
- |**Organizational unit**| Enter the organizational unit (OU). An OU is a container within an Active Directory domain that can hold users, groups, and computers. |
- |**AD username UPN**| Enter the username, in user principal name (UPN) format, that you want to use for connecting Cloud PCs to your Active Directory domain. For example: `svcDomainJoin@corp.contoso.com`. This service account must have permission to join computers to the domain and the target OU (if one is set). |
- |**AD domain password**| Enter the password for the user. |
-
- :::image type="content" source="./media/how-to-manage-network-connection/create-hybrid-network-connection-full-blank.png" alt-text="Screenshot that shows the Basics tab on the pane for creating a network connection, with the option for Microsoft Entra hybrid join selected.":::
+ | Setting | Value |
+ |||
+ | **Domain join type** | Select **Microsoft Entra hybrid join**. |
+ | **Subscription** | Select the subscription in which you want to create the network connection. |
+ | **Resource group** | Select an existing resource group, or select **Create new** and then enter a name for the new resource group. |
+ | **Name** | Enter a descriptive name for the network connection. |
+ | **Virtual network** | Select the virtual network that you want the network connection to use. |
+ | **Subnet** | Select the subnet that you want the network connection to use. |
+ | **AD DNS domain name** | Enter the DNS name of the Active Directory domain that you want to use for connecting and provisioning Cloud PCs. For example: `corp.contoso.com`. |
+ | **Organizational unit** | Enter the organizational unit (OU). An OU is a container within an Active Directory domain that can hold users, groups, and computers. |
+ | **AD username UPN** | Enter the username, in user principal name (UPN) format, that you want to use for connecting Cloud PCs to your Active Directory domain. For example: `svcDomainJoin@corp.contoso.com`. This service account must have permission to join computers to the domain and the target OU (if one is set). |
+ | **AD domain password** | Enter the password for the user. |
+
+ :::image type="content" source="./media/how-to-manage-network-connection/create-hybrid-network-connection-full-blank.png" alt-text="Screenshot that shows the Basics tab on the pane for creating a network connection, with the option for Microsoft Entra hybrid join selected." lightbox="./media/how-to-manage-network-connection/create-hybrid-network-connection-full-blank.png":::
1. Select **Review + Create**.

1. On the **Review** tab, select **Create**.
-1. When the deployment is complete, select **Go to resource**. Confirm that the connection appears on the **Network connections** page.
+1. When the deployment completes, select **Go to resource**. Confirm the connection appears on the **Network connections** page.
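
For the hybrid join case, the same CLI command accepts the Active Directory values that the portal fields above collect. This is a hedged sketch with placeholder values; the `--domain-*` and `--organization-unit` parameter names are assumptions to verify against your `devcenter` extension version.

```azurecli
# Sketch: create a Microsoft Entra hybrid join network connection (placeholder values throughout).
# The domain and organizational unit parameter names are assumptions; check --help before use.
az devcenter admin network-connection create \
    --name "my-hybrid-network-connection" \
    --resource-group "my-resource-group" \
    --location "eastus" \
    --domain-join-type "HybridAzureADJoin" \
    --subnet-id "/subscriptions/<subscription-id>/resourceGroups/<vnet-rg>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>" \
    --domain-name "corp.contoso.com" \
    --domain-username "svcDomainJoin@corp.contoso.com" \
    --domain-password "<service-account-password>" \
    --organization-unit "OU=CloudPCs,DC=corp,DC=contoso,DC=com"
```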
->[!NOTE]
+> [!NOTE]
> Microsoft Dev Box automatically creates a resource group for each network connection, which holds the network interface cards (NICs) that use the virtual network assigned to the network connection. The resource group has a fixed name based on the name and region of the network connection. You can't change the name of the resource group, or specify an existing resource group.

## Attach a network connection to a dev center
You need to attach a network connection to a dev center before you can use it in
1. On the **Add network connection** pane, select the network connection that you created earlier, and then select **Add**.
- :::image type="content" source="./media/how-to-manage-network-connection/add-network-connection.png" alt-text="Screenshot that shows the pane for adding a network connection.":::
+ :::image type="content" source="./media/how-to-manage-network-connection/add-network-connection.png" alt-text="Screenshot that shows the pane for adding a network connection." lightbox="./media/how-to-manage-network-connection/add-network-connection.png":::
After you attach a network connection, the Azure portal runs several health checks on the network. You can view the status of the checks on the resource overview page.

:::image type="content" source="./media/how-to-manage-network-connection/network-connection-grid-populated.png" alt-text="Screenshot that shows the status of a network connection.":::
-You can add network connections that pass all health checks to a dev center and use them to create dev box pools. Dev boxes within dev box pools are created and domain joined in the location of the virtual network that's assigned to the network connection.
+You can add network connections that pass all health checks to a dev center and use them to create dev box pools. Dev boxes within dev box pools are created and domain joined in the location of the virtual network assigned to the network connection.
To resolve any errors, see [Troubleshoot Azure network connections](/windows-365/enterprise/troubleshoot-azure-network-connection).
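
You can also attach an existing network connection to a dev center from the command line. The sketch below assumes the `devcenter` Azure CLI extension is installed; the attached-network name, dev center name, and network connection resource ID are placeholders.

```azurecli
# Sketch: attach a network connection to a dev center (placeholder names and IDs).
az devcenter admin attached-network create \
    --attached-network-connection-name "my-network-connection" \
    --dev-center-name "my-dev-center" \
    --resource-group "my-resource-group" \
    --network-connection-id "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.DevCenter/networkConnections/my-network-connection"
```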
You can remove a network connection from a dev center if you no longer want to u
:::image type="content" source="./media/how-to-manage-network-connection/remove-network-connection.png" alt-text="Screenshot that shows the Remove button on the network connection page.":::
-1. Read the warning message, and then select **OK**.
+1. Review the warning message, and then select **OK**.
The network connection is no longer available for use in the dev center.
dev-box How To Configure Stop Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-stop-schedule.md
Title: Set a dev box auto-stop schedule
-description: Learn how to configure an auto-stop schedule to automatically shut down dev boxes in a pool at a specified time.
+description: Learn how to configure an auto-stop schedule to automatically shut down dev boxes in a pool at a specified time and save on costs.
Previously updated : 04/25/2023 Last updated : 01/10/2024

# Auto-stop your Dev Boxes on schedule
-To save on costs, you can enable an Auto-stop schedule on a dev box pool. Microsoft Dev Box will attempt to shut down all dev boxes in that pool at the time specified in the schedule. You can configure one stop time in one timezone for each pool.
+
+To save on costs, you can enable an auto-stop schedule on a dev box pool. Microsoft Dev Box attempts to shut down all dev boxes in the pool at the time specified in the schedule. You can configure one stop time in one timezone for each pool.
## Permissions

To manage a dev box schedule, you need the following permissions:
-|Action|Permission required|
-|--|--|
-|Configure a schedule|Owner, Contributor, or DevCenter Project Admin.|
+| Action | Permission required |
+|||
+| _Configure a schedule_ | Owner, Contributor, or DevCenter Project Admin. |
## Manage an auto-stop schedule in the Azure portal
-You can enable, modify, and disable auto-stop schedules using the Azure portal.
+You can enable, modify, and disable auto-stop schedules by using the Azure portal.
### Create an auto-stop schedule
-You can create an auto-stop schedule while creating a new dev box pool, or by modifying an already existing dev box pool. The following steps show you how to use the Azure portal to create and configure an auto-stop schedule.
+
+You can create an auto-stop schedule while configuring a new dev box pool, or by modifying an already existing dev box pool. The following steps show you how to use the Azure portal to create and configure an auto-stop schedule.
### Add an auto-stop schedule to an existing pool

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box, type *Projects* and then select **Projects** from the list.
+1. In the search box, enter **projects**. In the list of results, select **Projects**.
- :::image type="content" source="./media/how-to-manage-stop-schedule/discover-projects.png" alt-text="Screenshot showing a search for projects from the Azure portal search box.":::
+ :::image type="content" source="./media/how-to-manage-stop-schedule/discover-projects.png" alt-text="Screenshot showing a search for projects from the Azure portal search box." lightbox="./media/how-to-manage-stop-schedule/discover-projects.png":::
-1. Open the project associated with the pool you want to edit.
+1. Open the project associated with the pool that you want to edit, and then select **Dev box pools**.
- :::image type="content" source="./media/how-to-manage-stop-schedule/projects-grid.png" alt-text="Screenshot of the list of existing projects.":::
+ :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-pool-grid-populated.png" alt-text="Screenshot of the list of existing dev box pools for the project." lightbox="./media/how-to-manage-stop-schedule/dev-box-pool-grid-populated.png":::
-1. Select the pool you wish to modify, and then select edit. You might need to scroll to locate edit.
+1. Determine the pool that you want to modify and scroll right. Open the more options (**...**) menu for the pool and select **Edit**.
- :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-edit-pool.png" alt-text="Screenshot of the edit dev box pool button.":::
+ :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-edit-pool.png" alt-text="Screenshot of the more options menu for a dev box pool and the Edit option selected." lightbox="./media/how-to-manage-stop-schedule/dev-box-edit-pool.png":::
-1. In **Enable Auto-stop**, select **Yes**.
+1. In the **Edit dev box pool** pane, configure the following settings in the **Auto-stop** section:
- |Name|Value|
- |-|-|
- |**Enable Auto-stop**|Select **Yes** to enable an Auto-stop schedule after the pool has been created.|
- |**Stop time**| Select a time to shutdown all the dev boxes in the pool. All Dev Boxes in this pool shutdown at this time every day.|
- |**Time zone**| Select the time zone that the stop time is in.|
+ | Setting | Value |
+ |||
+ | **Enable Auto-stop** | Select **Yes** to enable an auto-stop schedule after the pool is created. |
+ | **Stop time** | Select a time to shut down all the dev boxes in the pool. All dev boxes in this pool shut down at this time every day. |
+ | **Time zone** | Select the time zone that the stop time is in. |
- :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-save-pool.png" alt-text="Screenshot of the edit dev box pool page showing the Auto-stop options.":::
+ :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-enable-stop.png" alt-text="Screenshot of the edit dev box pool page showing the Auto-stop options and Yes selected." lightbox="./media/how-to-manage-stop-schedule/dev-box-enable-stop.png":::
1. Select **Save**.
-### Add an Auto-stop schedule as you create a pool
+### Add an auto-stop schedule when you create a pool
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box, type *Projects* and then select **Projects** from the list.
-
- :::image type="content" source="./media/how-to-manage-stop-schedule/discover-projects.png" alt-text="Screenshot showing a search for projects from the Azure portal search box.":::
+1. In the search box, enter **projects**. In the list of results, select **Projects**.
-1. Open the project with which you want to associate the new dev box pool.
-
- :::image type="content" source="./media/how-to-manage-stop-schedule/projects-grid.png" alt-text="Screenshot of the list of existing projects.":::
-
-1. Select **Dev box pools** and then select **+ Create**.
+1. Open the project for which you want to create a pool, select **Dev box pools**, and then select **Create**.
- :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-pool-grid-empty.png" alt-text="Screenshot of the list of dev box pools within a project. The list is empty.":::
+ :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-pool-grid-empty.png" alt-text="Screenshot of the list of dev box pools within a project. The list is empty. The Create option is selected." lightbox="./media/how-to-manage-stop-schedule/dev-box-pool-grid-empty.png":::
-1. On the **Create a dev box pool** page, enter the following values:
+1. On the **Create a dev box pool** pane, enter the following values:
- |Name|Value|
- |-|-|
- |**Name**|Enter a name for the pool. The pool name is visible to developers to select when they're creating dev boxes, and must be unique within a project.|
- |**Dev box definition**|Select an existing dev box definition. The definition determines the base image and size for the dev boxes created within this pool.|
- |**Network connection**|Select an existing network connection. The network connection determines the region of the dev boxes created within this pool.|
- |**Dev Box Creator Privileges**|Select Local Administrator or Standard User.|
- |**Enable Auto-stop**|Yes is the default. Select No to disable an Auto-stop schedule. You can configure an Auto-stop schedule after the pool has been created.|
- |**Stop time**| Select a time to shutdown all the dev boxes in the pool. All Dev Boxes in this pool shutdown at this time every day.|
- |**Time zone**| Select the time zone that the stop time is in.|
- |**Licensing**| Select this check box to confirm that your organization has Azure Hybrid Benefit licenses that you want to apply to the dev boxes in this pool. |
+ | Setting | Value |
+ |||
+ | **Name** | Enter a name for the pool. The pool name is visible to developers to select when they're creating dev boxes. The name must be unique within a project. |
+ | **Dev box definition** | Select an existing dev box definition. The definition determines the base image and size for the dev boxes that are created in this pool. |
+ | **Network connection** | 1. Select **Deploy to a Microsoft hosted network**. <br>2. Select your desired deployment region for the dev boxes. Choose a region close to your expected dev box users for the optimal user experience. |
+ | **Dev box Creator Privileges** | Select **Local Administrator** or **Standard User**. |
+ | **Enable Auto-stop** | **Yes** is the default. Select **No** to disable an auto-stop schedule. You can configure an auto-stop schedule after the pool is created. |
+ | **Stop time** | Select a time to shut down all the dev boxes in the pool. All dev boxes in this pool shut down at this time every day. |
+ | **Time zone** | Select the time zone for the stop time. |
+ | **Licensing** | Select this checkbox to confirm that your organization has Azure Hybrid Benefit licenses that you want to apply to the dev boxes in this pool. |
+ :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-pool-create.png" alt-text="Screenshot of the Create dev box pool dialog." lightbox="./media/how-to-manage-stop-schedule/dev-box-pool-create.png":::
- :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-pool-create.png" alt-text="Screenshot of the Create dev box pool dialog.":::
-
-1. Select **Add**.
+1. Select **Create**.
-1. Verify that the new dev box pool appears in the list. You may need to refresh the screen.
-
+1. Verify that the new dev box pool appears in the list. You might need to refresh the screen.
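
If you create pools from a script rather than the portal, the `devcenter` Azure CLI extension exposes a pool command. The following sketch uses placeholder names and deliberately omits the auto-stop settings, because the schedule is configured with the separate `az devcenter admin schedule create` command covered later in this article; verify parameter names with `--help` for your extension version.

```azurecli
# Sketch: create a dev box pool (placeholder names; auto-stop is configured separately).
az devcenter admin pool create \
    --pool-name "my-dev-box-pool" \
    --project "my-dev-box-project" \
    --resource-group "my-resource-group" \
    --devbox-definition-name "my-dev-box-definition" \
    --network-connection-name "my-network-connection" \
    --local-administrator "Enabled" \
    --location "eastus"
```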
### Delete an auto-stop schedule
-To delete an auto-stop schedule, first navigate to your pool:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-1. In the search box, type *Projects* and then select **Projects** from the list.
+Follow these steps to delete an auto-stop schedule for your pool:
- :::image type="content" source="./media/how-to-manage-stop-schedule/discover-projects.png" alt-text="Screenshot showing a search for projects from the Azure portal search box.":::
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Open the project associated with the pool you want to edit.
-
- :::image type="content" source="./media/how-to-manage-stop-schedule/projects-grid.png" alt-text="Screenshot of the list of existing projects.":::
+1. In the search box, enter **projects**. In the list of results, select **Projects**.
-1. Select the pool you wish to modify, and then select edit. You might need to scroll to locate edit.
+1. Open the project associated with the pool that you want to modify, and then select **Dev box pools**.
- :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-edit-pool.png" alt-text="Screenshot of the edit dev box pool button.":::
+1. Determine the pool that you want to modify and scroll right. Open the more options (**...**) menu for the pool and select **Edit**.
-1. In **Enable Auto-stop**, select **No**.
+1. In the **Edit dev box pool** pane, in the **Auto-stop** section, toggle the **Enable Auto-stop** setting to **No**.
- :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-disable-stop.png" alt-text="Screenshot of the edit dev box pool page showing Auto-stop disabled.":::
+ :::image type="content" source="./media/how-to-manage-stop-schedule/dev-box-disable-stop.png" alt-text="Screenshot of the edit dev box pool page showing the Auto-stop options and No selected." lightbox="./media/how-to-manage-stop-schedule/dev-box-disable-stop.png":::
-1. Select **Save**. Dev boxes in this pool won't automatically shut down.
+1. Select **Save**.
+
+After you change the setting, dev boxes in this pool don't automatically shut down.
-## Manage an auto-stop schedule at the CLI
+## Manage an auto-stop schedule with the Azure CLI
-You can also manage auto-stop schedules using Azure CLI.
+You can also manage auto-stop schedules by using the Azure CLI.
### Create an auto-stop schedule
-```az devcenter admin schedule create -n default --pool {poolName} --project {projectName} --time 23:15 --time-zone "America/Los_Angeles" --schedule-type stopdevbox --frequency daily --state enabled```
+The following Azure CLI command creates an auto-stop schedule:
-|Parameter|Description|
-|--|--|
-|poolName|Name of your pool|
-|project|Name of your Project|
-|time| Local time when Dev Boxes should be shut down|
-|time-zone|Standard timezone string to determine local time|
+```azurecli
+az devcenter admin schedule create --pool-name {poolName} --project {projectName} --resource-group {resourceGroupName} --time {hh:mm} --time-zone {"timeZone"} --state Enabled
+```
+
+| Parameter | Value |
+|||
+| `pool-name` | Name of your dev box pool. |
+| `project` | Name of your dev box project. |
+| `resource-group` | Name of the resource group for your dev box pool. |
+| `time` | Local time when dev boxes should be shut down, such as `23:15` for 11:15 PM. |
+| `time-zone` | Standard timezone string to determine the local time, such as `"America/Los_Angeles"`. |
+| `state` | Indicates whether the schedule is in use. The options include `Enabled` or `Disabled`. |
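
For example, the following command (with placeholder pool, project, and resource group names) creates a schedule that shuts down every dev box in the pool at 11:15 PM Pacific Time each day:

```azurecli
# Example with placeholder resource names; substitute your own pool, project, and resource group.
az devcenter admin schedule create \
    --pool-name "my-dev-box-pool" \
    --project "my-dev-box-project" \
    --resource-group "my-resource-group" \
    --time "23:15" \
    --time-zone "America/Los_Angeles" \
    --state Enabled
```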
### Delete an auto-stop schedule
-```az devcenter admin schedule delete -n default --pool {poolName} --project {projectName}```
+Enter the following command in the Azure CLI to delete an auto-stop schedule:
+
+```azurecli
+az devcenter admin schedule delete --pool-name {poolName} --project-name {projectName}
+```
-|Parameter|Description|
-|--|--|
-|poolName|Name of your pool|
-|project|Name of your Project|
+| Parameter | Value |
+|||
+| `pool-name` | Name of your dev box pool. |
+| `project-name` | Name of your dev box project. |
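
For example, with placeholder names:

```azurecli
# Example with placeholder names; substitute your own pool and project.
az devcenter admin schedule delete \
    --pool-name "my-dev-box-pool" \
    --project-name "my-dev-box-project"
```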
-## Next steps
+## Related content
- [Manage a dev box definition](./how-to-manage-dev-box-definitions.md)
-- [Manage a dev box using the developer portal](./how-to-create-dev-boxes-developer-portal.md)
+- [Manage a dev box by using the developer portal](./how-to-create-dev-boxes-developer-portal.md)
dev-box How To Determine Your Quota Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-determine-your-quota-usage.md
Previously updated : 08/21/2023 Last updated : 01/09/2024

# Determine resource usage and quota for Microsoft Dev Box
-To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a quota.
+To ensure that resources are available for customers, Microsoft Dev Box has a limit on the number of each type of resource that can be used in a subscription. This limit is called a _quota_.
-Keeping track of how your quota of VM cores is being used across your subscriptions can be difficult. You may want to know what your current usage is, how much you have left, and in what regions you have capacity. To help you understand where and how you're using your quota, Azure provides the Usage + Quotas page.
+Keeping track of how your quota of virtual machine cores is being used across your subscriptions can be difficult. You might want to know what your current usage is, how much is remaining, and in what regions you have capacity. To help you understand where and how you're using your quota, Azure provides the **Usage + Quotas** page in the Azure portal.
## Determine your Dev Box usage and quota by subscription
-1. In the [Azure portal](https://portal.azure.com), go to the subscription you want to examine.
+1. Sign in to the [Azure portal](https://portal.azure.com), and go to the subscription you want to examine.
-1. On the Subscription page, under Settings, select **Usage + quotas**.
+1. On the **Subscription** page, under **Settings**, select **Usage + quotas**.
:::image type="content" source="media/how-to-determine-your-quota-usage/subscription-overview.png" alt-text="Screenshot showing the Subscription overview left menu, with Usage and quotas highlighted." lightbox="media/how-to-determine-your-quota-usage/subscription-overview.png":::
-1. To view Usage + quotas information about Microsoft Dev Box, select **Dev Box**.
+1. To view usage and quota information about Microsoft Dev Box, select the **Provider** filter, and then select **Dev Box** in the dropdown list.
- :::image type="content" source="media/how-to-determine-your-quota-usage/select-dev-box.png" alt-text="Screenshot showing the Usage and quotas page, with Dev Box highlighted." lightbox="media/how-to-determine-your-quota-usage/select-dev-box.png":::
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-dev-box.png" alt-text="Screenshot showing the Usage and quotas page, with Dev Box highlighted in the Provider filter dropdown list." lightbox="media/how-to-determine-your-quota-usage/select-dev-box.png":::
1. In this example, you can see the **Quota name**, the **Region**, the **Subscription** the quota is assigned to, the **Current Usage**, and whether or not the limit is **Adjustable**.

   :::image type="content" source="media/how-to-determine-your-quota-usage/example-subscription.png" alt-text="Screenshot showing the Usage and quotas page, with column headings highlighted." lightbox="media/how-to-determine-your-quota-usage/example-subscription.png":::
-1. You can also see that the usage is grouped by level; regular, low, and no usage.
+1. Notice that Azure groups the usage by level: **Regular**, **Low**, and **No usage**:
- :::image type="content" source="media/how-to-determine-your-quota-usage/example-subscription-groups.png" alt-text="Screenshot showing the Usage and quotas page, with VM size groups highlighted." lightbox="media/how-to-determine-your-quota-usage/example-subscription-groups.png" :::
+ :::image type="content" source="media/how-to-determine-your-quota-usage/example-subscription-groups.png" alt-text="Screenshot showing the Usage and quotas page, with virtual machine size groups highlighted." lightbox="media/how-to-determine-your-quota-usage/example-subscription-groups.png" :::
1. To view quota and usage information for specific regions, select the **Region:** filter, select the regions to display, and then select **Apply**.
- :::image type="content" source="media/how-to-determine-your-quota-usage/select-regions.png" lightbox="media/how-to-determine-your-quota-usage/select-regions.png" alt-text="Screenshot showing the Usage and quotas page, with Regions drop down highlighted.":::
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-regions.png" alt-text="Screenshot showing the Usage and quotas page, with the Regions dropdown list highlighted." lightbox="media/how-to-determine-your-quota-usage/select-regions.png":::
1. To view only the items that are using part of your quota, select the **Usage:** filter, and then select **Only items with usage**.
- :::image type="content" source="media/how-to-determine-your-quota-usage/select-items-with-usage.png" lightbox="media/how-to-determine-your-quota-usage/select-items-with-usage.png" alt-text="Screenshot showing the Usage and quotas page, with Usage drop down and Only show items with usage option highlighted.":::
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-items-with-usage.png" alt-text="Screenshot showing the Usage and quotas page, with the Usage dropdown list and Only show items with usage option highlighted." lightbox="media/how-to-determine-your-quota-usage/select-items-with-usage.png" :::
1. To view items that are using above a certain amount of your quota, select the **Usage:** filter, and then select **Select custom usage**.
- :::image type="content" source="media/how-to-determine-your-quota-usage/select-custom-usage-before.png" alt-text="Screenshot showing the Usage and quotas page, with Usage drop down and Select custom usage option highlighted." lightbox="media/how-to-determine-your-quota-usage/select-custom-usage-before.png" :::
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-custom-usage-before.png" alt-text="Screenshot showing the Usage and quotas page, with the Usage dropdown list and Select custom usage option highlighted." lightbox="media/how-to-determine-your-quota-usage/select-custom-usage-before.png" :::
1. You can then set a custom usage threshold, so only the items using above the specified percentage of the quota are displayed.
- :::image type="content" source="media/how-to-determine-your-quota-usage/select-custom-usage.png" alt-text="Screenshot showing the Usage and quotas page, with Select custom usage option and configuration settings highlighted." lightbox="media/how-to-determine-your-quota-usage/select-custom-usage.png":::
+ :::image type="content" source="media/how-to-determine-your-quota-usage/select-custom-usage.png" alt-text="Screenshot showing the Usage and quotas page, with Select custom usage option and configuration settings highlighted." lightbox="media/how-to-determine-your-quota-usage/select-custom-usage.png":::
1. Select **Apply**.
- Each subscription has its own Usage + quotas page, which covers all the various services in the subscription, not just Microsoft Dev Box.
+Each subscription has its own **Usage + quotas** page that covers all the various services in the subscription and not just Microsoft Dev Box.
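
If you want to check usage from a script instead of the portal, the Azure Quota CLI extension can list usage against quota for a provider and location scope. Treat the following as a rough sketch: it assumes the `quota` extension is installed and that Dev Box cores are surfaced under the `Microsoft.DevCenter` provider scope shown here, which you should verify for your subscription.

```azurecli
# Sketch only: the provider namespace and region in the scope are assumptions to verify.
az extension add --name quota
az quota usage list \
    --scope "/subscriptions/<subscription-id>/providers/Microsoft.DevCenter/locations/westus3"
```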
## Related content

-- Check the default quota for each resource type by subscription type: [Microsoft Dev Box limits](../azure-resource-manager/management/azure-subscription-service-limits.md#microsoft-dev-box-limits).
-- To learn how to request a quota increase, see [Request a quota limit increase](./how-to-request-quota-increase.md).
+- Check the default quota for each resource type by subscription type with [Microsoft Dev Box limits](../azure-resource-manager/management/azure-subscription-service-limits.md#microsoft-dev-box-limits)
+- Learn how to [request a quota limit increase](./how-to-request-quota-increase.md)
dev-box How To Manage Dev Box Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-projects.md
Previously updated : 04/25/2023 Last updated : 01/08/2024 #Customer intent: As a platform engineer, I want to be able to manage dev box projects so that I can provide appropriate dev boxes to my users. -->
To learn how to add a user to the Project Admin role, refer to [Provide access t
[!INCLUDE [permissions note](./includes/note-permission-to-create-dev-box.md)]

## Permissions

To manage a dev box project, you need the following permissions:
-|Action|Permission required|
-|--|--|
-|Create or delete dev box project|Owner, Contributor, or Write permissions on the dev center in which you want to create the project. |
-|Update a dev box project|Owner, Contributor, or Write permissions on the project.|
-|Create, delete, and update dev box pools in the project|Owner, Contributor, or DevCenter Project Admin.|
-|Manage a dev box within the project|Owner, Contributor, or DevCenter Project Admin.|
-|Add a dev box user to the project|Owner permissions on the project.|
+| Action | Permission required |
+|||
+| _Create or delete dev box project_ | Owner, Contributor, or Write permissions on the dev center in which you want to create the project. |
+| _Update a dev box project_ | Owner, Contributor, or Write permissions on the project. |
+| _Create, delete, and update dev box pools in the project_ | Owner, Contributor, or DevCenter Project Admin. |
+| _Manage a dev box within the project_ | Owner, Contributor, or DevCenter Project Admin. |
+| _Add a dev box user to the project_ | Owner permissions on the project. |
## Create a Microsoft Dev Box project

The following steps show you how to create and configure a Microsoft Dev Box project.
-1. In the [Azure portal](https://portal.azure.com), in the search box, type *Projects* and then select **Projects** from the list.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box, enter **projects**. In the list of results, select **Projects**.
-1. On the Projects page, select **+Create**.
+1. On the **Projects** page, select **Create**.
-1. On the **Create a project** page, on the **Basics** tab, enter the following values:
+1. On the **Create a project** pane, on the **Basics** tab, enter the following values:
+
+ | Setting | Value |
+ |||
+ | **Subscription** | Select the subscription in which you want to create the project. |
+ | **Resource group** | Select an existing resource group, or select **Create new** and then enter a name for the new resource group. |
+ | **Dev center** | Select the dev center that you want to associate with this project. All the settings at the dev center level apply to the project. |
+ | **Name** | Enter a name for the project. |
+ | **Description** | Enter a brief description of the project. |
+
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/dev-box-project-create.png" alt-text="Screenshot of the Create a dev box project Basics tab." lightbox="./media/how-to-manage-dev-box-projects/dev-box-project-create.png":::
+
+1. On the **Dev box management** tab, ensure **No** is selected.
- |Name|Value|
- |-|-|
- |**Subscription**|Select the subscription in which you want to create the project.|
- |**Resource group**|Select an existing resource group or select **Create new**, and enter a name for the resource group.|
- |**Dev center**|Select the dev center to which you want to associate this project. All the dev center level settings are applied to the project.|
- |**Name**|Enter a name for your project. |
- |**Description**|Enter a brief description of the project. |
+ You can select **Yes** to limit the number of dev boxes per developer, and specify the maximum number of dev boxes a developer can create. The default, **No**, means developers can create an unlimited number of dev boxes.
- :::image type="content" source="./media/how-to-manage-dev-box-projects/dev-box-project-create.png" alt-text="Screenshot of the Create a dev box project basics tab.":::
+ To learn more about dev box limits, see [Tutorial: Control costs by setting dev box limits on a project](./tutorial-dev-box-limits.md).
-1. [Optional] On the **Tags** tab, enter a name and value pair that you want to assign.
+1. (Optional) On the **Tags** tab, enter a name/value pair that you want to assign.
1. Select **Review + Create**.
The following steps show you how to create and configure a Microsoft Dev Box pro
1. Confirm that the project is created successfully by checking the notifications. Select **Go to resource**.
-1. Verify that you see the **Project** page.
+1. Verify that the project appears on the **Projects** page.
+
+As you create a project, you might see this informational message about catalogs:
++
+Because you're not configuring Deployment Environments, you can safely ignore this message.
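
The same project can be created from the command line with the `devcenter` Azure CLI extension. This is a sketch with placeholder names; the `--max-dev-boxes-per-user` flag maps to the optional dev box limit described above and is an assumption to confirm against your extension version (omit it to leave developers unlimited).

```azurecli
# Sketch: create a dev box project (placeholder names and IDs).
az devcenter admin project create \
    --name "my-dev-box-project" \
    --resource-group "my-resource-group" \
    --description "Dev boxes for the Contoso web team" \
    --dev-center-id "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.DevCenter/devcenters/<dev-center-name>" \
    --max-dev-boxes-per-user 3
```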
## Delete a Microsoft Dev Box project
You can delete a Microsoft Dev Box project when you're no longer using it. Delet
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box, type *Projects* and then select **Projects** from the list.
+1. In the search box, enter **projects**. In the list of results, select **Projects**.
-1. Open the project you want to delete.
+1. Open the dev box project that you want to delete.
-1. Select the dev box project you want to delete and then select **Delete**.
+1. Select **Delete**.
- :::image type="content" source="./media/how-to-manage-dev-box-projects/delete-project.png" alt-text="Screenshot of the list of existing dev box pools, with the one to be deleted selected.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/delete-project.png" alt-text="Screenshot of the overview page for a dev box project and the Delete option highlighted." lightbox="./media/how-to-manage-dev-box-projects/delete-project.png":::
-1. In the confirmation message, select **Confirm**.
-
- :::image type="content" source="./media/how-to-manage-dev-box-projects/confirm-delete-project.png" alt-text="Screenshot of the Delete dev box pool confirmation message.":::
+1. In the confirmation message, select **OK**.
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/confirm-delete-project.png" alt-text="Screenshot of the Delete dev box project confirmation message.":::
## Provide access to a Microsoft Dev Box project
-Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage and delete their own dev boxes. You must have sufficient permissions to a project before you can add users to it.
+Before users can create dev boxes based on the dev box pools in a project, you must provide access for them through a role assignment. The Dev Box User role enables dev box users to create, manage, and delete their own dev boxes. You must have sufficient permissions to a project before you can add users to it.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the search box, type *Projects* and then select **Projects** from the list.
+1. In the search box, enter **projects**. In the list of results, select **Projects**.
-1. Select the project you want to provide your team members access to.
-
- :::image type="content" source="./media/how-to-manage-dev-box-projects/projects-grid.png" alt-text="Screenshot of the list of existing projects.":::
+1. Open the dev box project that you want to provide your team members access to.
-1. Select **Access Control (IAM)** from the left menu.
+1. On the left menu, select **Access Control (IAM)**, and then select **Add** > **Add role assignment**.
-1. Select **Add** > **Add role assignment**.
+ :::image type="content" source="./media/how-to-manage-dev-box-projects/project-permissions.png" alt-text="Screenshot that shows the page for project access control." lightbox="./media/how-to-manage-dev-box-projects/project-permissions.png":::
1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
- | Setting | Value |
- | | |
- | **Role** | Select **DevCenter Dev Box User**. |
- | **Assign access to** | Select **User, group, or service principal**. |
- | **Members** | Select the users or groups you want to have access to the project. |
+ | Setting | Value |
+ |||
+ | **Role** | Select **DevCenter Dev Box User**. |
+ | **Assign access to** | Select **User, group, or service principal**. |
+ | **Members** | Select the users or groups you want to have access to the project. |
- :::image type="content" source="media/how-to-manage-dev-box-projects/add-role-assignment-user.png" alt-text="Screenshot that shows the Add role assignment pane.":::
+ :::image type="content" source="media/how-to-manage-dev-box-projects/add-role-assignment-user.png" alt-text="Screenshot that shows the Add role assignment pane." lightbox="media/how-to-manage-dev-box-projects/add-role-assignment-user.png":::
The user is now able to view the project and all the pools within it. They can create dev boxes from any of the pools and manage those dev boxes from the [developer portal](https://aka.ms/devbox-portal). To assign administrative access to a project, select the DevCenter Project Admin role. For more information on how to add a user to the Project Admin role, see [Provide access to projects for project admins](how-to-project-admin.md).
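
The same role assignment can be made with the Azure CLI. In this sketch, the user principal name and the project resource ID are placeholders; the role name comes from the table above.

```azurecli
# Grant the DevCenter Dev Box User role on a project (placeholder assignee and scope).
az role assignment create \
    --role "DevCenter Dev Box User" \
    --assignee "user@contoso.com" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.DevCenter/projects/<project-name>"
```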
-## Next steps
+## Related content
- [Manage dev box pools](./how-to-manage-dev-box-pools.md)
-- [2. Create a dev box definition](quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
-- [Configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
+- [Create a dev box definition](quickstart-configure-dev-box-service.md#create-a-dev-box-definition)
+- [Configure Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md)
dev-box How To Skip Delay Stop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-skip-delay-stop.md
Title: Skip or delay an auto-stop scheduled shutdown
-description: Learn how to delay the scheduled shutdown of your dev box, or skip the shutdown entirely.
+description: Learn how to delay the scheduled shutdown of your dev box, or skip the shutdown entirely, to manage your work and resources more effectively.
Previously updated : 06/16/2023 Last updated : 01/10/2024
A platform engineer or project admin can schedule a time for dev boxes in a pool
You can delay the shutdown or skip it altogether. This flexibility allows you to manage your work and resources more effectively, ensuring that your projects remain uninterrupted when necessary. -
-## Skip scheduled shutdown from the dev box
+## Change scheduled shutdown from the dev box
If your dev box is in a pool with a stop schedule, you receive a notification about 30 minutes before the scheduled shutdown, giving you time to save your work or make necessary adjustments.

### Delay the shutdown
-1. In the pop-up notification, select a time to delay the shutdown for.
+1. In the pop-up notification, select a time to delay the shutdown for, such as **Delay 1 Hour**.
- :::image type="content" source="media/how-to-skip-delay-stop/dev-box-toast-time.png" alt-text="Screenshot showing the shutdown notification.":::
+ :::image type="content" source="media/how-to-skip-delay-stop/dev-box-toast-time.png" alt-text="Screenshot showing the shutdown notification and delay options in the dropdown list." lightbox="media/how-to-skip-delay-stop/dev-box-toast-time.png" :::
-1. Select **Delay**
+1. Select **Delay**.
- :::image type="content" source="media/how-to-skip-delay-stop/dev-box-toast-delay.png" alt-text="Screenshot showing the shutdown notification with Delay highlighted.":::
+ :::image type="content" source="media/how-to-skip-delay-stop/dev-box-toast-delay.png" alt-text="Screenshot showing the shutdown notification and the Delay button highlighted." lightbox="media/how-to-skip-delay-stop/dev-box-toast-delay.png" :::
### Skip the shutdown
-To skip the shutdown, select **Skip** in the notification. The dev box doesn't shut down until the next scheduled shutdown time.
+You can also choose to skip the next scheduled shutdown altogether. In the pop-up notification, select **Skip**.
- :::image type="content" source="media/how-to-skip-delay-stop/dev-box-toast-skip.png" alt-text="Screenshot showing the shutdown notification with Skip highlighted.":::
-## Skip scheduled shutdown from the developer portal
+The dev box doesn't shut down until the next scheduled shutdown time.
-In the developer portal, you can see the scheduled shutdown time on the dev box tile, and delay or skip the shutdown from the more options menu.
+## Change scheduled shutdown from the developer portal
-Shutdown time is shown on dev box tile:
+In the developer portal, you can see the scheduled shutdown time on the dev box tile. You can delay or skip the shutdown from the more options (**...**) menu.
### Delay the shutdown

1. Locate your dev box.
-1. On the more options menu, select **Delay scheduled shutdown**.
- :::image type="content" source="media/how-to-skip-delay-stop/dev-portal-menu.png" alt-text="Screenshot showing the dev box tile, more options menu, with Delay scheduled shutdown highlighted.":::
+1. On the more options (**...**) menu, select **Delay scheduled shutdown**.
+
+ :::image type="content" source="media/how-to-skip-delay-stop/dev-portal-menu.png" alt-text="Screenshot showing the dev box tile and the more options menu with the Delay scheduled shutdown option highlighted." lightbox="media/how-to-skip-delay-stop/dev-portal-menu.png":::
+
+1. In the **Delay shutdown until** dropdown list, select the time that you want to delay the shutdown until. You can delay the shutdown by up to 8 hours from the scheduled time.
-1. You can delay the shutdown by up to 8 hours from the scheduled time. From **Delay shutdown until**, select the time you want to delay the shutdown until, and then select **Delay**.
+ :::image type="content" source="media/how-to-skip-delay-stop/delay-options.png" alt-text="Screenshot showing how to delay the scheduled shutdown until 7:30 PM." lightbox="media/how-to-skip-delay-stop/delay-options.png":::
- :::image type="content" source="media/how-to-skip-delay-stop/delay-options.png" alt-text="Screenshot showing the options available for delaying the scheduled shutdown.":::
+1. Select **Delay**.
### Skip the shutdown

1. Locate your dev box.
-1. On the more options menu, select **Delay scheduled shutdown**.
-1. On the **Delay shutdown until** list, select the last available option, which specifies the time 8 hours after the scheduled shutdown time, and then select **Delay**. In this example, the last option is **6:30 pm tomorrow (skip)**.
- :::image type="content" source="media/how-to-skip-delay-stop/skip-shutdown.png" alt-text="Screenshot showing the final shutdown option is to skip shutdown until the next scheduled time.":::
+1. On the more options (**...**) menu, select **Delay scheduled shutdown**.
+
+1. In the **Delay shutdown until** dropdown list, select the last available option, which specifies the time 8 hours after the scheduled shutdown time. In this example, the last option is **6:30 pm tomorrow (skip)**.
+
+ :::image type="content" source="media/how-to-skip-delay-stop/skip-shutdown.png" alt-text="Screenshot showing the final shutdown option that skips shutdown until the next scheduled time." lightbox="media/how-to-skip-delay-stop/skip-shutdown.png":::
+
+1. Select **Delay**.
-## Next steps
+## Related content
-- [Manage a dev box using the developer portal](./how-to-create-dev-boxes-developer-portal.md)-- [Auto-stop your Dev Boxes on schedule](how-to-configure-stop-schedule.md)
+- [Manage a dev box by using the developer portal](./how-to-create-dev-boxes-developer-portal.md)
+- [Auto-stop your dev boxes on schedule](how-to-configure-stop-schedule.md)
dev-box How To Troubleshoot Repair Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-troubleshoot-repair-dev-box.md
Title: Troubleshoot and Repair Dev Box RDP Connectivity Issues
+ Title: Troubleshoot and repair Dev Box RDP connectivity issues
description: Having problems connecting to your dev box remotely? Learn how to troubleshoot and resolve connectivity issues to your dev box with developer portal tools. Previously updated : 09/25/2023 Last updated : 01/10/2024 #CustomerIntent: As a dev box user, I want to be able to troubleshoot and repair connectivity issues with my dev box so that I don't lose development time.

# Troubleshoot and resolve dev box remote desktop connectivity issues
-In this article, you learn how to troubleshoot and resolve remote desktop connectivity (RDC) issues with your dev box. Since RDC issues to your dev box can be time consuming to resolve manually, use the *Troubleshoot & repair* tool in the developer portal to diagnose and repair some common dev box connectivity issues.
+In this article, you learn how to troubleshoot and resolve remote desktop connectivity (RDC) issues with your dev box. Because RDC issues to your dev box can be time consuming to resolve manually, use the **Troubleshoot & repair** tool in the developer portal to diagnose and repair some common dev box connectivity issues.
-When you run the *Troubleshoot & repair* tool, your dev box and its backend services in the Azure infrastructure are scanned for issues. If an issue is detected, *Troubleshoot & repair* fixes the issue so you can connect to your dev box.
+When you run the **Troubleshoot & repair** tool, your dev box and its back-end services in the Azure infrastructure are scanned for issues. If an issue is detected, the troubleshoot and repair process fixes the issue so you can connect to your dev box.
## Prerequisites

-- Access to the developer portal.
-- The dev box you want to troubleshoot must be running.
+- Access to the Microsoft developer portal.
+- The dev box that you want to troubleshoot must be running.
-## Run Troubleshoot and repair
+## Run Troubleshoot & repair
-If you're unable to connect to your dev box using an RDP client, use the *Troubleshoot & repair* tool.
+If you're unable to connect to your dev box by using an RDP client, use the **Troubleshoot & repair** tool.
-The *Troubleshoot & repair* process takes between 10 to 40 minutes to complete. During this time, you can't use your dev box. The tool scans a list of critical components that relate to RDP connectivity, including but not limited to:
+The troubleshoot and repair process completes on average in 20 minutes, but can take up to 40 minutes. During this time, you can't use your dev box. The tool scans a list of critical components that relate to RDP connectivity, including but not limited to:
- Domain join check
- SxS stack listener readiness
- URL accessibility check
-- VM power status check
+- Virtual machine power status check
- Azure resource availability check
-- VM extension check
+- Virtual machine extension check
- Windows Guest OS readiness

> [!WARNING]
-> Running *Troubleshoot & repair* may effectively restart your Dev Box. Any unsaved data on your Dev Box will be lost.
+> Running the troubleshoot and repair process might effectively restart your dev box. Any unsaved data on your dev box will be lost.
-To run *Troubleshoot & repair* on your dev box, follow these steps:
+To run the **Troubleshoot & repair** tool on your dev box, follow these steps:
1. Sign in to the [developer portal](https://aka.ms/devbox-portal). 1. Check that the dev box you want to troubleshoot is running.
- :::image type="content" source="media/how-to-troubleshoot-repair-dev-box/dev-box-running-tile.png" alt-text="Screenshot showing the dev box tile with the status Running.":::
+ :::image type="content" source="media/how-to-troubleshoot-repair-dev-box/dev-box-running-tile.png" alt-text="Screenshot showing the dev box tile with the status Running." lightbox="media/how-to-troubleshoot-repair-dev-box/dev-box-running-tile.png":::
1. If the dev box isn't running, start it, and check whether you can connect to it with RDP.
-1. If your dev box is running and you still can't connect to it with RDP, on the Actions menu, select **Troubleshoot & repair**.
+1. If your dev box is running and you still can't connect to it with RDP, on the more actions (**...**) menu, select **Troubleshoot & repair**.
- :::image type="content" source="media/how-to-troubleshoot-repair-dev-box/dev-box-actions-troubleshoot-repair.png" alt-text="Screenshot showing the Troubleshoot and repair option for a dev box.":::
+ :::image type="content" source="media/how-to-troubleshoot-repair-dev-box/dev-box-actions-troubleshoot-repair.png" alt-text="Screenshot showing the Troubleshoot and repair option for a dev box on the more actions menu." lightbox="media/how-to-troubleshoot-repair-dev-box/dev-box-actions-troubleshoot-repair.png" :::
-1. In the Troubleshoot and repair connectivity message box, select *Yes, I want to troubleshoot this dev box*, and then select **Troubleshoot**.
+1. In the **Troubleshoot & repair** connectivity message box, select **Yes, I want to troubleshoot this dev box**, and then select **Troubleshoot**.
- :::image type="content" source="media/how-to-troubleshoot-repair-dev-box/dev-box-troubleshooting-confirm.png" alt-text="Screenshot showing the Troubleshoot and repair connectivity confirmation message with Yes, I want to troubleshoot this dev box highlighted.":::
+ :::image type="content" source="media/how-to-troubleshoot-repair-dev-box/dev-box-troubleshoot-confirm.png" alt-text="Screenshot showing the Troubleshoot and repair connectivity confirmation message with the Yes option highlighted." lightbox="media/how-to-troubleshoot-repair-dev-box/dev-box-troubleshoot-confirm.png" :::
- While waiting for the process to complete, you can leave your dev portal as is, or close it and come back. The process continues in the background.
+ While you wait for the process to complete, you can leave your developer portal session open, or close it and reopen it later. The troubleshoot and repair process continues in the background.
1. After the RDP connectivity issue is resolved, you can connect to dev box again through [a browser](quickstart-create-dev-box.md#connect-to-a-dev-box), or [a Remote Desktop client](/azure/dev-box/tutorial-connect-to-dev-box-with-remote-desktop-app?tabs=windows).
-## Troubleshoot & repair results
+## View Troubleshoot & repair results
-When the *Troubleshoot & repair* process finishes, it lists the results of the checks it ran:
+When the troubleshoot and repair process finishes, the tool lists the results of the completed checks:
-|Outcome |Description |
-|||
-|An issue was resolved. |An issue was detected and fixed. You can try to connect to Dev Box again. |
-|No issue detected. |None of the checks discovered an issue with the Dev Box. |
-|An issue was detected but could not be fixed automatically. |There is an issue with Dev Box, but this action couldn't fix it. You can select **view details** about the issue was and how to fix it manually. |
+| Check result | Description |
+|||
+| **An issue was resolved.** | An issue was detected and fixed. You can try to connect to the dev box again. |
+| **No issue detected.** | None of the checks discovered an issue with the dev box. |
+| **An issue was detected but could not be fixed automatically.** | There's an issue with the dev box that the Troubleshoot & repair process couldn't resolve. You can select **View details** for the issue and explore options to fix the issue manually. |
## Related content
+- [Tutorial: Use a Remote Desktop client to connect to a dev box](tutorial-connect-to-dev-box-with-remote-desktop-app.md)
event-grid Event Schema Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-blob-storage.md
Title: Azure Blob Storage as Event Grid source description: Describes the properties that are provided for blob storage events with Azure Event Grid Previously updated : 12/02/2022 Last updated : 01/10/2024

# Azure Blob Storage as an Event Grid source
If the blob storage account uses SFTP to create or overwrite a blob, then the da
* The `data.api` key is set to the string `SftpCreate` or `SftpCommit`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `contentType` key is set to `application/octet-stream`.
If the blob storage account uses SFTP to delete a blob, then the data looks simi
* The `data.api` key is set to the string `SftpRemove`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `contentType` key is set to `application/octet-stream`.
If the blob storage account uses SFTP to rename a blob, then the data looks simi
* The `data.api` key is set to the string `SftpRename`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
If the blob storage account uses SFTP to create a directory, then the data looks
* The `data.api` key is set to the string `SftpMakeDir`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
If the blob storage account uses SFTP to rename a directory, then the data looks
* The `data.api` key is set to the string `SftpRename`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
If the blob storage account uses SFTP to delete a directory, then the data looks
* The `data.api` key is set to the string `SftpRemoveDir`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
If the blob storage account uses SFTP to create or overwrite a blob, then the da
* The `data.api` key is set to the string `SftpCreate` or `SftpCommit`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `contentType` key is set to `application/octet-stream`.
If the blob storage account uses SFTP to delete a blob, then the data looks simi
* The `data.api` key is set to the string `SftpRemove`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `contentType` key is set to `application/octet-stream`.
If the blob storage account uses SFTP to rename a blob, then the data looks simi
* The `data.api` key is set to the string `SftpRename`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
If the blob storage account uses SFTP to create a directory, then the data looks
* The `data.api` key is set to the string `SftpMakeDir`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
If the blob storage account uses SFTP to rename a directory, then the data looks
* The `data.api` key is set to the string `SftpRename`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
If the blob storage account uses SFTP to delete a directory, then the data looks
* The `data.api` key is set to the string `SftpRemoveDir`.
-* The `clientRequestId` key is not included.
+* The `clientRequestId` key isn't included.
* The `identity` key is included in the data set. This corresponds to the local user used for SFTP authentication.
These events are triggered when the actions defined by a policy are performed.
|Event name |Description| |-|--|
- | [Microsoft.Storage.BlobInventoryPolicyCompleted](#microsoftstorageblobinventorypolicycompleted-event) |Triggered when the inventory run completes for a rule that is defined an inventory policy. This event also occurs if the inventory run fails with a user error before it starts to run. For example, an invalid policy, or an error that occurs when a destination container is not present will trigger the event. |
+ | [Microsoft.Storage.BlobInventoryPolicyCompleted](#microsoftstorageblobinventorypolicycompleted-event) |Triggered when the inventory run completes for a rule that is defined in an inventory policy. This event also occurs if the inventory run fails with a user error before it starts to run. For example, an invalid policy or an error that occurs when a destination container isn't present triggers the event. |
| [Microsoft.Storage.LifecyclePolicyCompleted](#microsoftstoragelifecyclepolicycompleted-event) |Triggered when the actions defined by a lifecycle management policy are performed. | ### Example events
event-grid Handler Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-webhooks.md
Title: Webhooks as event handlers for Azure Event Grid events description: Describes how you can use webhooks as event handlers for Azure Event Grid events. Azure Automation runbooks and logic apps are supported as event handlers via webhooks. Previously updated : 11/17/2022 Last updated : 01/10/2024 # Webhooks, Automation runbooks, Logic Apps as event handlers for Azure Event Grid events
event-grid Receive Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/receive-events.md
Title: Receive events from Azure Event Grid to an HTTP endpoint
-description: Describes how to validate an HTTP endpoint, then receive and deserialize Events from Azure Event Grid
+description: Describes how to validate an HTTP endpoint, then receive and deserialize Events from Azure Event Grid.
Previously updated : 11/14/2022 Last updated : 01/10/2024 ms.devlang: csharp, javascript
This article describes how to [validate an HTTP endpoint](webhook-event-delivery.md) to receive events from an Event Subscription and then receive and deserialize events. This article uses an Azure Function for demonstration purposes, however the same concepts apply regardless of where the application is hosted. > [!NOTE]
-> It is recommended that you use an [Event Grid Trigger](../azure-functions/functions-bindings-event-grid.md) when triggering an Azure Function with Event Grid. It provides an easier and quicker integration between Event Grid and Azure Functions. However, please note that Azure Functions' Event Grid trigger does not support the scenario where the hosted code needs to control the HTTP status code returned to Event Grid. Given this limitation, your code running on an Azure Function would not be able to return a 5XX error to initiate an event delivery retry by Event Grid, for example.
+> We recommend that you use an [Event Grid Trigger](../azure-functions/functions-bindings-event-grid.md) when triggering an Azure Function with Event Grid. It provides an easier and quicker integration between Event Grid and Azure Functions. However, note that Azure Functions' Event Grid trigger does not support the scenario where the hosted code needs to control the HTTP status code returned to Event Grid. Given this limitation, your code running on an Azure Function would not be able to return a 5XX error to initiate an event delivery retry by Event Grid, for example.
## Prerequisites
SDKs for other languages are available via the [Publish SDKs](./sdk-overview.md#
The first thing you want to do is handle `Microsoft.EventGrid.SubscriptionValidationEvent` events. Every time someone subscribes to an event, Event Grid sends a validation event to the endpoint with a `validationCode` in the data payload. The endpoint is required to echo this back in the response body to [prove the endpoint is valid and owned by you](webhook-event-delivery.md). If you're using an [Event Grid Trigger](../azure-functions/functions-bindings-event-grid.md) rather than a WebHook triggered Function, endpoint validation is handled for you. If you use a third-party API service (like [Zapier](https://zapier.com/) or [IFTTT](https://ifttt.com/)), you might not be able to programmatically echo the validation code. For those services, you can manually validate the subscription by using a validation URL that is sent in the subscription validation event. Copy that URL in the `validationUrl` property and send a GET request either through a REST client or your web browser.
-In C#, the `ParseMany()` method is used to deserialize a `BinaryData` instance containing 1 or more events into an array of `EventGridEvent`. If you knew ahead of time that you are deserializing only a single event, you could use the `Parse` method instead.
+In C#, the `ParseMany()` method is used to deserialize a `BinaryData` instance containing one or more events into an array of `EventGridEvent`. If you knew ahead of time that you are deserializing only a single event, you could use the `Parse` method instead.
To programmatically echo the validation code, use the following code.
You should see the blob URL output in the function log:
2022-11-14T22:40:46.346 [Information] Executed 'Function1' (Succeeded, Id=8429137d-9245-438c-8206-f9e85ef5dd61, Duration=387ms) ```
-You can also test by creating a Blob storage account or General Purpose V2 (GPv2) Storage account, [adding an event subscription](../storage/blobs/storage-blob-event-quickstart.md), and setting the endpoint to the function URL:
+You can also test by creating a Blob storage account or General Purpose V2 Storage account, [adding an event subscription](../storage/blobs/storage-blob-event-quickstart.md), and setting the endpoint to the function URL:
![Function URL](./media/receive-events/function-url.png)
event-hubs Azure Event Hubs Kafka Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/azure-event-hubs-kafka-overview.md
Standalone and without ksqlDB, Kafka Streams has fewer capabilities than many al
- [Apache Storm](event-hubs-storm-getstarted-receive.md) - [Apache Spark](event-hubs-kafka-spark-tutorial.md) - [Apache Flink](event-hubs-kafka-flink-tutorial.md)
+- [Apache Flink on HDInsight on AKS](/azure/hdinsight-aks/flink/flink-overview)
- [Akka Streams](event-hubs-kafka-akka-streams-tutorial.md) The listed services and frameworks can generally acquire event streams and reference data directly from a diverse set of sources through adapters. Kafka Streams can only acquire data from Apache Kafka and your analytics projects are therefore locked into Apache Kafka. To use data from other sources, you're required to first import data into Apache Kafka with the Kafka Connect framework.
governance Migrate From Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/migrate-from-azure-automation.md
configuration to a MOF file and create a machine configuration package.
Some modules might have compatibility issues with machine configuration. The most common problems are related to .NET framework vs .NET core. Detailed technical information is available on
-the page, [Differences between Windows PowerShell 5.1 and PowerShell 7.x][07].
+the page, [Differences between Windows PowerShell 5.1 and PowerShell 7.x](https://learn.microsoft.com/powershell/scripting/whats-new/differences-from-windows-powershell?view=powershell-7.4).
One option to resolve compatibility issues is to run commands in Windows PowerShell from within a module that's imported in PowerShell 7, by running `powershell.exe`. You can review a sample module
hdinsight Hdinsight Custom Ambari Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-custom-ambari-db.md
The custom Ambari DB has the following other requirements:
- The name of the database cannot contain hyphens or spaces - You must have an existing Azure SQL DB server and database. - The database that you provide for Ambari setup must be empty. There should be no tables in the default dbo schema.-- The user used to connect to the database should have SELECT, CREATE TABLE, and INSERT permissions on the database.
+- The user used to connect to the database should have **SELECT, CREATE TABLE, INSERT, UPDATE, DELETE, ALTER ON SCHEMA and REFERENCES ON SCHEMA** permissions on the database.
+```sql
+GRANT CREATE TABLE TO newuser;
+GRANT INSERT TO newuser;
+GRANT SELECT TO newuser;
+GRANT UPDATE TO newuser;
+GRANT DELETE TO newuser;
+GRANT ALTER ON SCHEMA::dbo TO newuser;
+GRANT REFERENCES ON SCHEMA::dbo TO newuser;
+```
+ - Turn on the option to [Allow access to Azure services](/azure/azure-sql/database/vnet-service-endpoint-rule-overview#azure-portal-steps) on the server where you host Ambari. - Management IP addresses from HDInsight service need to be allowed in the firewall rule. See [HDInsight management IP addresses](hdinsight-management-ip-addresses.md) for a list of the IP addresses that must be added to the server-level firewall rule.
iot-hub-device-update Device Update Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-security.md
JSON Web Signature is a widely used [proposed IETF standard](https://tools.ietf.
Every Device Update device must contain a set of root keys. These keys are the root of trust for all of Device Update's signatures. Any signature must be chained up through one of these root keys to be considered legitimate.
-The set of root keys will change over time as it is proper to periodically rotate signing keys for security purposes. As a result, the Device Update agent software will need to be updated with the latest set of root keys at intervals specified by the Device Update team.
+The set of root keys will change over time as it is proper to periodically rotate signing keys for security purposes. As a result, the Device Update agent software will need to be updated with the latest set of root keys at intervals specified by the Device Update team. **The next planned root key rotation will occur in May 2025**.
Starting with version 1.1.0 of the Device Update agent, the agent will automatically check for any changes to root keys each time a deployment of an update to that device occurs. Possible changes:
iot-operations Concept Iot Operations In Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/concept-iot-operations-in-layered-network.md
Title: How does Azure IoT Operations work in layered network?
-#
+ description: Use the Layered Network Management service to enable Azure IoT Operations in industrial network environment. + Last updated 11/29/2023
iot-operations Howto Connect Arc Enabled Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-connect-arc-enabled-servers.md
+
+ Title: Connect Azure Arc-enabled servers from an isolated network
+
+description: Connect a node host to Azure Arc-enabled servers from an isolated network
++++ Last updated : 01/09/2024+
+#CustomerIntent: As an operator, I want to configure Layered Network Management so that I have secure isolated devices.
++
+# Connect Azure Arc-enabled servers from an isolated network
++
+This walkthrough is an example of connecting your host machine to Azure Arc as an [Arc-enabled server](/azure/azure-arc/servers) from an isolated network environment. For example, level 3 of the *Purdue Network*. You connect the host machine to Azure IoT Layered Network Management at the parent level as a proxy to reach Azure endpoints for the service. You can integrate these steps into your procedure to set up your cluster for Azure IoT Operations. Don't use this guidance independently. For more information, see [Configure Layered Network Management service to enable Azure IoT Operations in an isolated network](howto-configure-aks-edge-essentials-layered-network.md).
+
+> [!IMPORTANT]
+> **Arc-enabled servers** aren't a requirement for the Azure IoT Operations experiences. Evaluate your own design and enable this service only if it suits your needs. Before proceeding with these steps, you should also get familiar with **Arc-enabled servers** by trying the service with a machine that has direct internet access.
+
+> [!NOTE]
+> Layered Network Management for Arc-enabled servers does not support Azure VMs.
+
+## Configuration for Layered Network Management
+
+To support the Arc-enabled servers, the level 4 Layered Network Management instance needs to include the following endpoints in the allowlist when applying the custom resource:
+
+```
+ - destinationUrl: "gbl.his.arc.azure.com"
+ destinationType: external
+ - destinationUrl: "gbl.his.arc.azure.us"
+ destinationType: external
+ - destinationUrl: "gbl.his.arc.azure.cn"
+ destinationType: external
+ - destinationUrl: "packages.microsoft.com"
+ destinationType: external
+ - destinationUrl: "aka.ms"
+ destinationType: external
+ - destinationUrl: "ppa.launchpadcontent.net"
+ destinationType: external
+ - destinationUrl: "mirror.enzu.com"
+ destinationType: external
+ - destinationUrl: "*.guestconfiguration.azure.com"
+ destinationType: external
+ - destinationUrl: "agentserviceapi.guestconfiguration.azure.com"
+ destinationType: external
+ - destinationUrl: "pas.windows.net"
+ destinationType: external
+ - destinationUrl: "download.microsoft.com"
+ destinationType: external
+```
+> [!NOTE]
+> If you plan to onboard an Ubuntu machine, you also need to add the domain name of the *software and update* repository of your choice to the allowlist. For more information about selecting the repository, see [Configure Ubuntu host machine](#configure-ubuntu-host-machine).
+
+If you want to enable optional features, add the appropriate endpoint from the following table. For more information, see [Connected Machine agent network requirements](/azure/azure-arc/servers/network-requirements).
+
+| Endpoint | Optional feature |
+|||
+| *.waconazure.com | Windows Admin Center connectivity |
+| san-af-[REGION]-prod.azurewebsites.net | SQL Server enabled by Azure Arc. The Azure extension for SQL Server uploads inventory and billing information to the data processing service. |
+| telemetry.[REGION].arcdataservices.com | For Arc SQL Server. Sends service telemetry and performance monitoring to Azure. |
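For example, to allow the optional Windows Admin Center connectivity listed in the preceding table, you could append an entry to the allowlist by using the same format shown earlier. This is a minimal sketch; add only the endpoints for the optional features that you actually enable.

```
 - destinationUrl: "*.waconazure.com"
   destinationType: external
```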
+
+## Configure Windows host machine
+
+If you prepared your host machine and cluster to be Arc-enabled by following the [Configure level 3 cluster](howto-configure-l3-cluster-layered-network.md) article, you can proceed to onboard your Arc-enabled server. Otherwise at a minimum, point the host machine to your custom DNS. For more information, see [Configure the DNS server](howto-configure-layered-network.md#configure-the-dns-server).
+
+After properly setting up the parent level Layered Network Management instance and custom DNS, see [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](/azure/azure-arc/servers/learn/quick-enable-hybrid-vm).
+- After downloading the `OnboardingScript.ps1`, open it in a text editor. Find the following section and add the `--use-device-code` parameter at the end of the command (the edited command is shown after this list).
+ ```
+ # Run connect command
+ & "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "$env:RESOURCE_GROUP" --tenant-id "$env:TENANT_ID" --location "$env:LOCATION" --subscription-id "$env:SUBSCRIPTION_ID" --cloud "$env:CLOUD" --correlation-id "$env:CORRELATION_ID";
+ ```
+- Proceed with the steps in [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](/azure/azure-arc/servers/learn/quick-enable-hybrid-vm) to complete the onboarding.
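For reference, after you add the parameter, the connect command in `OnboardingScript.ps1` might look like the following sketch. Only `--use-device-code` is appended; every other value stays as the script generated it.

```
# Run connect command (device code authentication added)
& "$env:ProgramW6432\AzureConnectedMachineAgent\azcmagent.exe" connect --resource-group "$env:RESOURCE_GROUP" --tenant-id "$env:TENANT_ID" --location "$env:LOCATION" --subscription-id "$env:SUBSCRIPTION_ID" --cloud "$env:CLOUD" --correlation-id "$env:CORRELATION_ID" --use-device-code;
```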
+
+## Configure Ubuntu host machine
+
+If you prepared your host machine and cluster to be Arc-enabled by following the [Configure level 3 cluster](howto-configure-l3-cluster-layered-network.md) article, you can proceed to onboard your Arc-enabled server. Otherwise at a minimum, point the host machine to your custom DNS. For more information, see [Configure the DNS server](howto-configure-layered-network.md#configure-the-dns-server).
+
+Before starting the onboarding process, you need to assign an *https* address for the Ubuntu OS *software and update* repository.
+1. Visit https://launchpad.net/ubuntu/+archivemirrors and identify a repository that is close to your location and supports the *https* protocol.
+1. Modify `/etc/apt/sources.list` to replace the address with the URL from the previous step (see the example entry after these steps).
+1. Add the domain name of this repository to the Layered Network Management allowlist.
+1. Run `apt update` to confirm the update is pulling packages from the new repository with *https* protocol.
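As an illustration only, a repository entry in `/etc/apt/sources.list` might change as follows. The mirror domain and the release name (`jammy`) are placeholders; substitute the *https* mirror you selected and your Ubuntu release.

```
# Before: default http repository
deb http://archive.ubuntu.com/ubuntu jammy main restricted universe multiverse
# After: https mirror that is also added to the Layered Network Management allowlist
deb https://<https-mirror-domain>/ubuntu jammy main restricted universe multiverse
```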
+
+See [Quickstart: Connect hybrid machines with Azure Arc-enabled servers](/azure/azure-arc/servers/learn/quick-enable-hybrid-vm) to complete the Arc-enabled Servers onboarding.
iot-operations Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/troubleshoot/troubleshoot.md
- ignite-2023 Previously updated : 09/20/2023 Last updated : 01/09/2024 # Troubleshoot Azure IoT Operations
kubectl rollout restart statefulset aio-dp-runner-worker -n azure-iot-operations
kubectl rollout restart statefulset aio-dp-reader-worker -n azure-iot-operations ```
+## Troubleshoot Layered Network Management
+
+The troubleshooting guidance in this section is specific to Azure IoT Operations when using the Azure IoT Layered Network Management service. For more information, see [How does Azure IoT Operations work in layered network?](../manage-layered-network/concept-iot-operations-in-layered-network.md).
+
+### Can't install Layered Network Management on the parent level
+
+Layered Network Management operator install fails or you can't apply the custom resource for a Layered Network Management instance.
+
+1. Verify the regions are supported for public preview. Public preview supports eight regions. For more information, see [Quickstart: Deploy Azure IoT Operations](../get-started/quickstart-deploy.md#connect-a-kubernetes-cluster-to-azure-arc).
+1. If there are any other errors in installing Layered Network Management Arc extensions, follow the guidance included with the error. Try uninstalling and installing the extension.
+1. Verify that the Layered Network Management operator is in the *Running and Ready* state (see the example check after this list).
+1. If applying the custom resource with `kubectl apply -f cr.yaml` fails, the output of this command lists the reason for the error, for example, a CRD version mismatch or a wrong entry in the CRD.
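A minimal way to check the operator state, assuming the extension runs in the `azure-iot-operations` namespace used elsewhere in this article, is to list the pods and confirm that the Layered Network Management operator pod reports `Running` and `READY`:

```bash
kubectl get pods -n azure-iot-operations
```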
+
+### Can't Arc-enable the cluster through the parent level Layered Network Management
+
+If you repeatedly remove and onboard a cluster with the same machine, you might get an error while Arc-enabling the cluster on nested layers. For example, the error message might look like:
+
+```Output
+Error: We found an issue with outbound network connectivity from the cluster to the endpoints required for onboarding.
+Please ensure to meet the following network requirements 'https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli#meet-network-requirements'
+If your cluster is behind an outbound proxy server, please ensure that you have passed proxy parameters during the onboarding of your cluster.
+```
+
+1. Run the following command:
+
+ ```bash
+ sudo systemctl restart systemd-networkd
+ ```
+
+1. Reboot the host machine.
+
+#### Other types of Arc-enablement failures
+
+1. Add the `--debug` parameter when running the `connectedk8s` command (see the example after this list).
+1. Capture and investigate a network packet trace. For more information, see [capture Layered Network Management packet trace](#capture-layered-network-management-packet-trace).
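For example, a sketch of rerunning the Arc onboarding command with verbose output; the cluster and resource group names are placeholders:

```bash
az connectedk8s connect --name <cluster-name> --resource-group <resource-group-name> --debug
```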
+
+### Can't install IoT Operations on the isolated cluster
+
+You can't install IoT Operations components on nested layers. For example, Layered Network Management on level 4 is running but can't install IoT Operations on level 3.
+
+1. Verify that the nodes can access the Layered Network Management service running on the parent level. For example, run `ping <IP-ADDRESS-L4-LNM>` from the node.
+1. Verify that DNS queries are being resolved to the Layered Network Management service running on the parent level by using the following command:
+
+ ```bash
+ nslookup management.azure.com
+ ```
+
+ DNS should respond with the IP address of the Layered Network Management service (see the example output after this list).
+
+1. If the domain is being resolved correctly, verify the domain is added to the allowlist. For more information, see [Check the allowlist of Layered Network Management](#check-the-allowlist-of-layered-network-management).
+1. Capture and investigate a network packet trace. For more information, see [capture Layered Network Management packet trace](#capture-layered-network-management-packet-trace).
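For example, a healthy response to the `nslookup` check in the earlier step might look like the following output. The addresses are placeholders; the important detail is that `management.azure.com` resolves to the parent-level Layered Network Management service address instead of a public Azure address.

```output
Server:   <custom-DNS-server-IP>
Address:  <custom-DNS-server-IP>#53

Name:     management.azure.com
Address:  <parent-level-LNM-service-IP>
```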
+
+### A pod fails when installing IoT Operations on an isolated cluster
+
+When you install the IoT Operations components to a cluster, the installation starts and proceeds. However, initialization of one or a few of the components (pods) fails.
+
+1. Identify the failed pod:
+
+ ```bash
+ kubectl get pods -n azure-iot-operations
+ ```
+
+1. Get details about the pod:
+
+ ```bash
+ kubectl describe pod [POD NAME] -n azure-iot-operations
+ ```
+
+1. Check the container image-related information. If the image download fails, check whether the domain name of the download path is on the allowlist. For example:
+
+ ```output
+ Warning Failed 3m14s kubelet Failed to pull image "…
+ ```
+
+### Check the allowlist of Layered Network Management
+
+Layered Network Management blocks traffic if the destination domain isn't on the allowlist.
+
+1. Run the following command to list the config maps.
+ ```bash
+ kubectl get cm -n azure-iot-operations
+ ```
+1. The output should look like the following example:
+ ```
+ NAME DATA AGE
+ aio-lnm-level4-config 1 50s
+ aio-lnm-level4-client-config 1 50s
+ ```
+1. The *xxx-client-config* contains the allowlist. Run:
+ ```bash
+ kubectl get cm aio-lnm-level4-client-config -o yaml
+ ```
+1. All the allowed domains are listed in the output.
+
+### Capture Layered Network Management packet trace
+
+In some cases, you might suspect that the Layered Network Management instance at the parent level isn't forwarding network traffic to a particular endpoint, and that the missing connection is causing an issue for the service running on your node. It's possible that the service you enabled is trying to connect to a new endpoint after an update, or that you're trying to install a new Arc extension or service that requires a connection to endpoints that aren't on the default allowlist. Usually, the error message includes information about the connection failure. However, if there's no clear information about the missing endpoint, you can capture the network traffic on the child node for detailed debugging.
+
+#### Windows host
+
+1. Install Wireshark network traffic analyzer on the host.
+1. Run Wireshark and start capturing.
+1. Reproduce the installation or connection failure.
+1. Stop capturing.
+
+#### Linux host
+
+1. Run the following command to start capturing:
+
+ ```bash
+ sudo tcpdump -W 5 -C 10 -i any -w AIO-deploy -Z root
+ ```
+
+1. Reproduce the installation or connection failure.
+1. Stop capturing.
+
+#### Analyze the packet trace
+
+Use Wireshark to open the trace file. Look for connection failures or connections that never received a response.
+
+1. Filter the packets with the *ip.addr == [IP address]* filter, where the IP address is the address of your custom DNS service (see the example filter after this list).
+1. Review the DNS queries and responses, and check whether there's a domain name that isn't on the Layered Network Management allowlist.
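For example, a display filter like the following sketch narrows the trace to DNS traffic to and from your custom DNS service, where `192.168.1.2` is a placeholder for that service's address:

```
ip.addr == 192.168.1.2 && dns
```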
logic-apps Sap Generate Schemas For Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/connectors/sap-generate-schemas-for-artifacts.md
Previously updated : 07/10/2023 Last updated : 01/10/2024 # Generate schemas for SAP artifacts in Azure Logic Apps
logic-apps Create Integration Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/enterprise-integration/create-integration-account.md
Previously updated : 08/29/2023 Last updated : 01/10/2024 # Create and manage integration accounts for B2B workflows in Azure Logic Apps with the Enterprise Integration Pack
logic-apps Export From Consumption To Standard Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-consumption-to-standard-logic-app.md
ms.suite: integration Previously updated : 07/21/2023 Last updated : 01/10/2024 #Customer intent: As a developer, I want to export one or more Consumption workflows to a Standard workflow.
logic-apps Logic Apps Batch Process Send Receive Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-batch-process-send-receive-messages.md
ms.suite: integration Previously updated : 07/07/2023 Last updated : 01/10/2024 # Send, receive, and batch process messages in Azure Logic Apps
logic-apps Logic Apps Create Variables Store Values https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-create-variables-store-values.md
ms.suite: integration Previously updated : 06/29/2023 Last updated : 01/10/2024 # Create variables to store and manage values in Azure Logic Apps
logic-apps Logic Apps Enterprise Integration As2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-as2.md
Previously updated : 08/15/2023 Last updated : 01/10/2024 # Exchange AS2 messages using workflows in Azure Logic Apps
This how-to guide shows how to add the AS2 encoding and decoding actions to an e
## Connector technical reference
-The AS2 connector has different versions, based on [logic app type and host environment](logic-apps-overview.md#resource-environment-differences).
+The **AS2** connector has different versions, based on [logic app type and host environment](logic-apps-overview.md#resource-environment-differences).
| Logic app | Environment | Connector version | |--|-|-|
-| **Consumption** | Multi-tenant Azure Logic Apps | **AS2 (v2)** and **AS2** managed connectors (Standard class). The **AS2 (v2)** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [AS2 managed connector reference](/connectors/as2/) <br>- [AS2 (v2) managed connector operations](#as2-v2-operations) <br>- [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
-| **Consumption** | Integration service environment (ISE) | **AS2 (v2)** and **AS2** managed connectors (Standard class) and **AS2** ISE version, which has different message limits than the Standard class. The **AS2 (v2)** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [AS2 managed connector reference](/connectors/as2/) <br>- [AS2 (v2) managed connector operations](#as2-v2-operations) <br>- [ISE message limits](logic-apps-limits-and-config.md#message-size-limits) <br>- [Managed connectors in Azure Logic Apps](../connectors/managed.md) |
-| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | **AS2 (v2)** built-in connector and **AS2** managed connector. The built-in version differs in the following ways: <br><br>- The built-in version provides only actions, but you can use any trigger that works for your scenario. <br><br>- The built-in version can directly access Azure virtual networks. You don't need an on-premises data gateway.<br><br>For more information, review the following documentation: <br><br>- [AS2 managed connector reference](/connectors/as2/) <br>- [AS2 (v2) built-in connector operations](#as2-v2-operations) <br>- [Built-in connectors in Azure Logic Apps](../connectors/built-in.md) |
+| **Consumption** | multitenant Azure Logic Apps | **AS2 (v2)** and **AS2** managed connectors (Standard class). The **AS2 (v2)** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [AS2 managed connector reference](/connectors/as2/) <br>- [AS2 (v2) managed connector operations](#as2-v2-operations) <br>- [AS2 message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) |
+| **Consumption** | Integration service environment (ISE) | **AS2 (v2)** and **AS2** managed connectors (Standard class) and **AS2** ISE version, which has different message limits than the Standard class. The **AS2 (v2)** connector provides only actions, but you can use any trigger that works for your scenario. For more information, review the following documentation: <br><br>- [AS2 managed connector reference](/connectors/as2/) <br>- [AS2 (v2) managed connector operations](#as2-v2-operations) <br>- [AS2 message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | **AS2 (v2)** built-in connector and **AS2** managed connector. The built-in version differs in the following ways: <br><br>- The built-in version provides only actions, but you can use any trigger that works for your scenario. <br><br>- The built-in version can directly access Azure virtual networks. You don't need an on-premises data gateway.<br><br>For more information, review the following documentation: <br><br>- [AS2 managed connector reference](/connectors/as2/) <br>- [AS2 (v2) built-in connector operations](#as2-v2-operations) <br>- [AS2 message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) |
<a name="as-v2-operations"></a>
The **AS2 (v2)** connector has no triggers. The following table describes the ac
| Action | Description | |--|-|
-| [**AS2 Encode** action](#encode) | Provides encryption, digital signing, and acknowledgments through Message Disposition Notifications (MDN), which help support non-repudiation. For example, this action applies AS2/HTTP headers and performs the following tasks when configured: <br><br>- Sign outgoing messages. <br>- Encrypt outgoing messages. <br>- Compress the message. <br>- Transmit the file name in the MIME header. |
-| [**AS2 Decode** action](#decode) | Provide decryption, digital signing, and acknowledgments through Message Disposition Notifications (MDN). For example, this action performs the following tasks when configured: <br><br>- Process AS2/HTTP headers. <br>- Reconcile received MDNs with the original outbound messages. <br>- Update and correlate records in the non-repudiation database. <br>- Write records for AS2 status reporting. <br>- Output payload contents as base64-encoded. <br>- Determine whether MDNs are required. Based on the AS2 agreement, determine whether MDNs should be synchronous or asynchronous. <br>- Generate synchronous or asynchronous MDNs based on the AS2 agreement. <br>- Set the correlation tokens and properties on MDNs. <br>- Verify the signature. <br>- Decrypt the messages. <br>- Decompress the message. <br>- Check and disallow message ID duplicates. |
+| [**AS2 Encode** action](#encode) | Provides encryption, digital signing, and acknowledgments through Message Disposition Notifications (MDN), which help support nonrepudiation. For example, this action applies AS2/HTTP headers and performs the following tasks when configured: <br><br>- Sign outgoing messages. <br>- Encrypt outgoing messages. <br>- Compress the message. <br>- Transmit the file name in the MIME header. |
+| [**AS2 Decode** action](#decode) | Provide decryption, digital signing, and acknowledgments through Message Disposition Notifications (MDN). For example, this action performs the following tasks when configured: <br><br>- Process AS2/HTTP headers. <br>- Reconcile received MDNs with the original outbound messages. <br>- Update and correlate records in the nonrepudiation database. <br>- Write records for AS2 status reporting. <br>- Output payload contents as base64-encoded. <br>- Determine whether MDNs are required. Based on the AS2 agreement, determine whether MDNs should be synchronous or asynchronous. <br>- Generate synchronous or asynchronous MDNs based on the AS2 agreement. <br>- Set the correlation tokens and properties on MDNs. <br>- Verify the signature. <br>- Decrypt the messages. <br>- Decompress the message. <br>- Check and disallow message ID duplicates. |
## Prerequisites
The **AS2 (v2)** connector has no triggers. The following table describes the ac
* The logic app resource and workflow where you want to use the AS2 operations.
-* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) to define and store artifacts for use in enterprise integration and B2B workflows.
+* An [integration account resource](./enterprise-integration/create-integration-account.md) to define and store artifacts for use in enterprise integration and B2B workflows.
* Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
The **AS2 (v2)** connector has no triggers. The following table describes the ac
| Logic app workflow | Link required? | |--|-|
- | Consumption | - **AS2 (v2)** connector: Connection required, but no link required <br>- **AS2** connector: [Link required](logic-apps-enterprise-integration-create-integration-account.md?tabs=consumption#link-account), but no connection required |
- | Standard | - **AS2 (v2)** connector: [Link required](logic-apps-enterprise-integration-create-integration-account.md?tabs=standard#link-account), but no connection required <br>- **AS2** connector: Connection required, but no link required |
+ | Consumption | - **AS2 (v2)** connector: Connection required, but no link required <br>- **AS2** connector: [Link required](./enterprise-integration/create-integration-account.md?tabs=consumption#link-account), but no connection required |
+ | Standard | - **AS2 (v2)** connector: [Link required](./enterprise-integration/create-integration-account.md?tabs=standard#link-account), but no connection required <br>- **AS2** connector: Connection required, but no link required |
* If you use [Azure Key Vault](../key-vault/general/overview.md) for certificate management, check that your vault keys permit the **Encrypt** and **Decrypt** operations. Otherwise, the encoding and decoding actions fail.
logic-apps Logic Apps Enterprise Integration Edifact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-edifact.md
Previously updated : 12/12/2023 Last updated : 01/10/2024 # Exchange EDIFACT messages using workflows in Azure Logic Apps To send and receive EDIFACT messages in workflows that you create using Azure Logic Apps, use the **EDIFACT** connector, which provides operations that support and manage EDIFACT communication.
-This how-to guide shows how to add the EDIFACT encoding and decoding actions to an existing logic app workflow. The **EDIFACT** connector doesn't include any triggers, so you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
+This guide shows how to add the EDIFACT encoding and decoding actions to an existing logic app workflow. Because no **EDIFACT** trigger is available, you can use any trigger to start your workflow. The examples in this guide use the [Request trigger](../connectors/connectors-native-reqres.md).
## Connector technical reference
-The **EDIFACT** connector has one version across workflows in [multi-tenant Azure Logic Apps, single-tenant Azure Logic Apps, and the integration service environment (ISE)](logic-apps-overview.md#resource-environment-differences). For technical information about the **EDIFACT** connector, see the following documentation:
+The **EDIFACT** connector has different versions, based on [logic app type and host environment](logic-apps-overview.md#resource-environment-differences).
-* [Connector reference page](/connectors/edifact/), which describes the triggers, actions, and limits as documented by the connector's Swagger file
+| Logic app | Environment | Connector version |
+|--|-|-|
+| **Consumption** | multitenant Azure Logic Apps | **EDIFACT** managed connector (Standard class). The **EDIFACT** connector provides only actions, but you can use any trigger that works for your scenario. For more information, see the following documentation: <br><br>- [EDIFACT managed connector reference](/connectors/edifact/) <br>- [EDIFACT message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) |
+| **Consumption** | Integration service environment (ISE) | **EDIFACT** managed connector (Standard class) and **EDIFACT** ISE version, which has different message limits than the Standard class. The **EDIFACT** connector provides only actions, but you can use any trigger that works for your scenario. For more information, see the following documentation: <br><br>- [EDIFACT managed connector reference](/connectors/edifact/) <br>- [EDIFACT message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | **EDIFACT** built-in connector (preview) and **EDIFACT** managed connector. The built-in version differs in the following ways: <br><br>- The built-in version provides only actions, but you can use any trigger that works for your scenario. <br><br>- The built-in version can directly access Azure virtual networks. You don't need an on-premises data gateway.<br><br>For more information, see the following documentation: <br><br>- [EDIFACT managed connector reference](/connectors/edifact/) <br>- [EDIFACT built-in connector operations](#edifact-built-in-operations) <br>- [EDIFACT message limits](logic-apps-limits-and-config.md#b2b-protocol-limits) |
-* [B2B protocol limits for message sizes](logic-apps-limits-and-config.md#b2b-protocol-limits)
+<a name="edifact-built-in-operations"></a>
- For example, in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), this connector's ISE version uses the [B2B message limits for ISE](logic-apps-limits-and-config.md#b2b-protocol-limits).
+### EDIFACT built-in operations (Standard workflows only - Preview)
-The following sections provide more information about the tasks that you can complete using the EDIFACT encoding and decoding actions.
+The preview **EDIFACT** built-in connector has the following actions, which are similar to their counterpart **EDIFACT** managed connector actions, except where noted in [Limitations and known issues](#limitations-known-issues).
-### Encode to EDIFACT message action
+* [**EDIFACT Encode** action](#encode)
+* [**EDIFACT Decode** action](#decode)
-This action performs the following tasks:
+<a name="limitations-known-issues"></a>
-* Resolve the agreement by matching the sender qualifier & identifier and receiver qualifier and identifier.
-
-* Serialize the Electronic Data Interchange (EDI), which converts XML-encoded messages into EDI transaction sets in the interchange.
-
-* Apply transaction set header and trailer segments.
-
-* Generate an interchange control number, a group control number, and a transaction set control number for each outgoing interchange.
-
-* Replace separators in the payload data.
-
-* Validate EDI and partner-specific properties, such as the schema for transaction-set data elements against the message schema, transaction-set data elements, and extended validation on transaction-set data elements.
-
-* Generate an XML document for each transaction set.
-
-* Request a technical acknowledgment, functional acknowledgment, or both, if configured.
-
- * As a technical acknowledgment, the CONTRL message indicates the receipt for an interchange.
-
- * As a functional acknowledgment, the CONTRL message indicates the acceptance or rejection for the received interchange, group, or message, including a list of errors or unsupported functionality.
-
-### Decode EDIFACT message action
-
-This action performs the following tasks:
-
-* Validate the envelope against the trading partner agreement.
-
-* Resolve the agreement by matching the sender qualifier and identifier along with the receiver qualifier and identifier.
-
-* Split an interchange into multiple transaction sets when the interchange has more than one transaction, based on the agreement's **Receive Settings**.
-
-* Disassemble the interchange.
-
-* Validate Electronic Data Interchange (EDI) and partner-specific properties, such as the interchange envelope structure, the envelope schema against the control schema, the schema for the transaction-set data elements against the message schema, and extended validation on transaction-set data elements.
-
-* Verify that the interchange, group, and transaction set control numbers aren't duplicates, if configured, for example:
-
- * Check the interchange control number against previously received interchanges.
-
- * Check the group control number against other group control numbers in the interchange.
+### Limitations and known issues
- * Check the transaction set control number against other transaction set control numbers in that group.
-
-* Split the interchange into transaction sets, or preserve the entire interchange, for example:
+* Preview **EDIFACT** built-in connector
- * Split Interchange as transaction sets - suspend transaction sets on error.
-
- The decoding action splits the interchange into transaction sets and parses each transaction set. The action outputs only those transaction sets that fail validation to `badMessages`, and outputs the remaining transactions sets to `goodMessages`.
-
- * Split Interchange as transaction sets - suspend interchange on error.
-
- The decoding action splits the interchange into transaction sets and parses each transaction set. If one or more transaction sets in the interchange fail validation, the action outputs all the transaction sets in that interchange to `badMessages`.
+ * This capability is in preview and is subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
- * Preserve Interchange - suspend transaction sets on error.
+ * This connector's actions currently support payloads up to at least 100 MB.
- The decoding action preserves the interchange and processes the entire batched interchange. The action outputs only those transaction sets that fail validation to `badMessages`, and outputs the remaining transactions sets to `goodMessages`.
+ * The preview **EDIFACT Decode** action currently doesn't include the following capabilities:
- * Preserve Interchange - suspend interchange on error.
+ * Check for duplicate interchange, group, and transaction set control numbers, if configured.
- The decoding action preserves the interchange and processes the entire batched interchange. If one or more transaction sets in the interchange fail validation, the action outputs all the transaction sets in that interchange to `badMessages`.
+ * Preserve the entire interchange.
-* Generate a technical acknowledgment, functional acknowledgment, or both, if configured.
+ Otherwise, the preview **EDIFACT Encode** and **EDIFACT Decode** built-in connector actions have capabilities similar to their counterpart **EDIFACT** managed connector actions.
- * A technical acknowledgment or the CONTRL ACK, which reports the results from a syntactical check on the complete received interchange.
+ * This connector's actions currently don't support interchanges with multiple transactions or batched messages.
- * A functional acknowledgment that acknowledges the acceptance or rejection for the received interchange or group.
+ * This connector's actions don't currently emit EDI-specific tracking.
## Prerequisites * An Azure account and subscription. If you don't have a subscription yet, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An [integration account resource](logic-apps-enterprise-integration-create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
+* An [integration account resource](./enterprise-integration/create-integration-account.md) where you define and store artifacts, such as trading partners, agreements, certificates, and so on, for use in your enterprise integration and B2B workflows. This resource has to meet the following requirements:
* Both your integration account and logic app resource must exist in the same Azure subscription and Azure region.
This action performs the following tasks:
For more information, see the following documentation:
- * [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
+ * [Create an example Consumption logic app workflow in multitenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
* [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
This action performs the following tasks:
## Encode EDIFACT messages
+The **EDIFACT** managed connector action named **Encode to EDIFACT message** and the **EDIFACT** built-in connector action named **EDIFACT Encode** perform the following tasks, except where noted in [Limitations and known issues](#limitations-known-issues):
+
+* Resolve the agreement by matching the sender qualifier & identifier and receiver qualifier and identifier.
+
+* Serialize the Electronic Data Interchange (EDI), which converts XML-encoded messages into EDI transaction sets in the interchange.
+
+* Apply transaction set header and trailer segments.
+
+* Generate an interchange control number, a group control number, and a transaction set control number for each outgoing interchange.
+
+* Replace separators in the payload data.
+
+* Validate EDI and partner-specific properties, such as the schema for transaction-set data elements against the message schema, transaction-set data elements, and extended validation on transaction-set data elements.
+
+* Generate an XML document for each transaction set.
+
+* Request a technical acknowledgment, functional acknowledgment, or both, if configured.
+
+ * As a technical acknowledgment, the CONTRL message indicates the receipt for an interchange.
+
+ * As a functional acknowledgment, the CONTRL message indicates the acceptance or rejection for the received interchange, group, or message, including a list of errors or unsupported functionality.
+ ### [Consumption](#tab/consumption) 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
This action performs the following tasks:
> You also have to specify the **XML message to encode**, which can be the output > from the trigger or a preceding action.
-1. When prompted, provide the following connection information for your integration account:
+1. Provide the following connection information for your integration account:
| Property | Required | Description | |-|-|-|
This action performs the following tasks:
For example:
- ![Screenshot showing the "Encode to EDIFACT message by agreement name" connection pane.](./media/logic-apps-enterprise-integration-edifact/create-edifact-encode-connection-consumption.png)
+ ![Screenshot shows Azure portal, Consumption workflow, and connection box for action named Encode to EDIFACT message by agreement name.](./media/logic-apps-enterprise-integration-edifact/create-edifact-encode-connection-consumption.png)
1. When you're done, select **Create**.
-1. In the EDIFACT action information box, provide the following property values:
+1. In the EDIFACT action, provide the following property values:
| Property | Required | Description | |-|-|-| | **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use. | | **XML message to encode** | Yes | The business identifier for the message sender as specified by your EDIFACT agreement |
- | Other parameters | No | This operation includes the following other parameters: <p>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
+ | Other parameters | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br><br>For more information, see [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
+
+ For example, the XML message payload to encode can be the **Body** content output from the Request trigger:
- For example, the XML message payload can be the **Body** content output from the Request trigger:
+ ![Screenshot shows Consumption workflow, action named Encode to EDIFACT message by agreement name, and message encoding properties.](./media/logic-apps-enterprise-integration-edifact/encode-edifact-message-agreement-consumption.png)
- ![Screenshot showing the "Encode to EDIFACT message by agreement name" operation with the message encoding properties.](./media/logic-apps-enterprise-integration-edifact/encode-edifact-message-agreement-consumption.png)
+1. Save your workflow.
### [Standard](#tab/standard)
+#### EDIFACT built-in connector (preview)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **EDIFACT Encode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. In the EDIFACT action pane, provide the required property values:
+
+ | Property | Required | Description |
+ |-|-|-|
+ | **Message To Encode** | Yes | The XML message content that you want to encode. Specifically, the business identifier for the message sender as specified by your EDIFACT agreement. |
+ | **Advanced parameters** | No | From the list, add any other properties that you want to use. This operation includes the following other parameters: <br><br>- **Sender Identity Receiver Qualifier** <br>- **Sender Identity Receiver Identifier** <br>- **Receiver Identity Receiver Qualifier** <br>- **Receiver Identity Receiver Identifier** <br>- **Name of EDIFACT agreement** |
+
+ For example, the XML message payload to encode can be the **Body** content output from the Request trigger:
+
+ ![Screenshot shows Standard workflow, action named EDIFACT Encode, and message encoding properties.](./media/logic-apps-enterprise-integration-edifact/edifact-encode-standard.png)
+
+1. Save your workflow.
+
+#### EDIFACT managed connector
+ 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. 1. In the designer, [follow these general steps to add the **EDIFACT** action named **Encode to EDIFACT message by agreement name** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
This action performs the following tasks:
> You also have to specify the **XML message to encode**, which can be the output > from the trigger or a preceding action.
-1. When prompted, provide the following connection information for your integration account:
+1. Provide the following connection information for your integration account:
| Property | Required | Description | |-|-|-| | **Connection name** | Yes | A name for the connection |
- | **Integration account** | Yes | From the list of available integration accounts, select the account to use. |
+ | **Integration Account ID** | Yes | The resource ID for your integration account, which has the following format: <br><br>**`/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Logic/integrationAccounts/<integration-account-name>`** <br><br>For example: <br>`/subscriptions/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX/resourceGroups/integrationAccount-RG/providers/Microsoft.Logic/integrationAccounts/myIntegrationAccount` <br><br>To find this resource ID, follow these steps: <br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, select **Overview**. <br>3. On the **Overview** page, select **JSON View**. <br>4. From the **Resource ID** property, copy the value. |
+ | **Integration Account SAS URL** | Yes | The request endpoint URL that uses shared access signature (SAS) authentication to provide access to your integration account. This callback URL has the following format: <br><br>**`https://<request-endpoint-URI>sp=<permissions>sv=<SAS-version>sig=<signature>`** <br><br>For example: <br>`https://prod-04.west-us.logic-azure.com:443/integrationAccounts/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX?api-version=2015-08-1-preview&sp=XXXXXXXXX&sv=1.0&sig=ZZZZZZZZZZZZZZZZZZZZZZZZZZZ` <br><br>To find this URL, follow these steps: <br>1. In the Azure portal, open your integration account. <br>2. On the integration account menu, under **Settings**, select **Callback URL**. <br>3. From the **Generated Callback URL** property, copy the value. |
+ | **Size of Control Number Block** | No | The block size of control numbers to reserve from an agreement for high throughput scenarios |
For example:
- ![Screenshot showing the "Encode to EDIFACT message by parameter name" connection pane.](./media/logic-apps-enterprise-integration-edifact/create-edifact-encode-connection-standard.png)
+ ![Screenshot shows Standard workflow and connection pane for action named Encode to EDIFACT message by agreement name.](./media/logic-apps-enterprise-integration-edifact/create-edifact-encode-connection-standard.png)
1. When you're done, select **Create**.
-1. In the EDIFACT action information box, provide the following property values:
+1. In the EDIFACT action pane, provide the following property values:
| Property | Required | Description | |-|-|-| | **Name of EDIFACT agreement** | Yes | The EDIFACT agreement to use. | | **XML message to encode** | Yes | The business identifier for the message sender as specified by your EDIFACT agreement |
- | Other parameters | No | This operation includes the following other parameters: <p>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
+ | Other parameters | No | This operation includes the following other parameters: <br><br>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br><br>For more information, see [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
+
+ For example, the XML message payload to encode can be the **Body** content output from the Request trigger:
- For example, the message payload is the **Body** content output from the Request trigger:
+ ![Screenshot shows Standard workflow, action named Encode to EDIFACT message by agreement name, and message encoding properties.](./media/logic-apps-enterprise-integration-edifact/encode-edifact-message-agreement-standard.png)
- ![Screenshot showing the "Encode to EDIFACT message by parameter name" operation with the message encoding properties.](./media/logic-apps-enterprise-integration-edifact/encode-edifact-message-agreement-standard.png)
+1. Save your workflow.
This action performs the following tasks:
## Decode EDIFACT messages
+The **EDIFACT** managed connector action named **Decode EDIFACT message** and the **EDIFACT** built-in connector action named **EDIFACT Decode** perform the following tasks, except where noted in [Limitations and known issues](#limitations-known-issues):
+
+* Validate the envelope against the trading partner agreement.
+
+* Resolve the agreement by matching the sender qualifier and identifier along with the receiver qualifier and identifier.
+
+* Split an interchange into multiple transaction sets when the interchange has more than one transaction, based on the agreement's **Receive Settings**.
+
+* Disassemble the interchange.
+
+* Validate Electronic Data Interchange (EDI) and partner-specific properties, such as the interchange envelope structure, the envelope schema against the control schema, the schema for the transaction-set data elements against the message schema, and extended validation on transaction-set data elements.
+
+* Verify that the interchange, group, and transaction set control numbers aren't duplicates (managed connector only), if configured, for example:
+
+ * Check the interchange control number against previously received interchanges.
+
+ * Check the group control number against other group control numbers in the interchange.
+
+ * Check the transaction set control number against other transaction set control numbers in that group.
+
+* Split the interchange into transaction sets, or preserve the entire interchange (managed connector only), for example:
+
+ * Split Interchange as transaction sets - suspend transaction sets on error.
+
+    The decoding action splits the interchange into transaction sets and parses each transaction set. The action outputs only those transaction sets that fail validation to `badMessages`, and outputs the remaining transaction sets to `goodMessages`.
+
+ * Split Interchange as transaction sets - suspend interchange on error.
+
+ The decoding action splits the interchange into transaction sets and parses each transaction set. If one or more transaction sets in the interchange fail validation, the action outputs all the transaction sets in that interchange to `badMessages`.
+
+ * Preserve Interchange - suspend transaction sets on error.
+
+    The decoding action preserves the interchange and processes the entire batched interchange. The action outputs only those transaction sets that fail validation to `badMessages`, and outputs the remaining transaction sets to `goodMessages`.
+
+ * Preserve Interchange - suspend interchange on error.
+
+ The decoding action preserves the interchange and processes the entire batched interchange. If one or more transaction sets in the interchange fail validation, the action outputs all the transaction sets in that interchange to `badMessages`.
+
+* Generate a technical acknowledgment, functional acknowledgment, or both, if configured.
+
+ * A technical acknowledgment or the CONTRL ACK, which reports the results from a syntactical check on the complete received interchange.
+
+ * A functional acknowledgment that acknowledges the acceptance or rejection for the received interchange or group.
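+
+For orientation, the envelope structure and control numbers that these checks operate on come from the interchange's service segments. The following fragment is a hypothetical, heavily simplified interchange (all identifiers, counts, and control numbers are placeholders) that uses the separators most commonly seen in EDIFACT: `+` as the data element separator, `:` as the component separator, and `'` as the segment terminator:
+
+```text
+UNB+UNOA:2+SENDER01:ZZ+RECEIVER01:ZZ+240110:1200+INT0001'
+UNH+MSG0001+ORDERS:D:96A:UN'
+...transaction set data segments...
+UNT+52+MSG0001'
+UNZ+1+INT0001'
+```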
+ ### [Consumption](#tab/consumption) 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. 1. In the designer, [follow these general steps to add the **EDIFACT** action named **Decode EDIFACT message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. When prompted, provide the following connection information for your integration account:
+1. Provide the following connection information for your integration account:
| Property | Required | Description | |-|-|-|
This action performs the following tasks:
For example:
- ![Screenshot showing the "Decode EDIFACT message" connection pane.](./media/logic-apps-enterprise-integration-edifact/create-edifact-decode-connection-consumption.png)
+ ![Screenshot shows Consumption workflow designer and connection pane for the action named Decode EDIFACT message.](./media/logic-apps-enterprise-integration-edifact/create-edifact-decode-connection-consumption.png)
1. When you're done, select **Create**.
-1. In the EDIFACT action information box, provide the following property values:
+1. In the EDIFACT action, provide the following property values:
   | Property | Required | Description |
   |-|-|-|
   | **EDIFACT flat file message to decode** | Yes | The XML flat file message to decode. |
- | Other parameters | No | This operation includes the following other parameters: <p>- **Component separator** <br>- **Data element separator** <br>- **Release indicator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br>- **Payload character set** <br>- **Segment terminator suffix** <br>- **Preserve Interchange** <br>- **Suspend Interchange On Error** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
+ | Other parameters | No | This operation includes the following other parameters: <br><br>- **Component separator** <br>- **Data element separator** <br>- **Release indicator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br>- **Payload character set** <br>- **Preserve Interchange** <br>- **Suspend Interchange On Error** <br><br>For more information, see [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
For example, the XML message payload to decode can be the **Body** content output from the Request trigger:
- ![Screenshot showing the "Decode EDIFACT message" operation with the message decoding properties.](./media/logic-apps-enterprise-integration-edifact/decode-edifact-message-consumption.png)
+ ![Screenshot shows Consumption workflow, action named Decode EDIFACT message, and message decoding properties.](./media/logic-apps-enterprise-integration-edifact/decode-edifact-message-consumption.png)
### [Standard](#tab/standard)
+#### EDIFACT built-in connector (preview)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
+
+1. In the designer, [follow these general steps to add the **EDIFACT** action named **EDIFACT Decode** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
+
+1. In the EDIFACT action, provide the following property values:
+
+ | Property | Required | Description |
+ |-|-|-|
+ | **Message To Decode** | Yes | The XML flat file message to decode. |
+    | Other parameters | No | This operation includes the following other parameters: <br><br>- **Component Separator** <br>- **Data Element Separator** <br>- **Release Indicator** <br>- **Repetition Separator** <br>- **Segment Terminator** <br>- **Segment Terminator Suffix** <br>- **Decimal Indicator** <br>- **Payload Character Set** <br>- **Preserve Interchange** <br>- **Suspend Interchange On Error** <br><br>For more information, see [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
+
+ For example, the XML message payload to decode can be the **Body** content output from the Request trigger:
+
+ ![Screenshot shows Standard workflow, action named EDIFACT Decode, and message decoding properties.](./media/logic-apps-enterprise-integration-edifact/edifact-decode-standard.png)
+
+1. Save your workflow.
+
+#### EDIFACT managed connector
+ 1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer. 1. In the designer, [follow these general steps to add the **EDIFACT** action named **Decode EDIFACT message** to your workflow](create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. When prompted, provide the following connection information for your integration account:
+1. Provide the following connection information for your integration account:
| Property | Required | Description | |-|-|-|
This action performs the following tasks:
For example:
- ![Screenshot showing the "Decode EDIFACT message" connection pane.](./media/logic-apps-enterprise-integration-edifact/create-edifact-decode-connection-standard.png)
+ ![Screenshot shows Standard workflow and connection pane for the action named Decode EDIFACT message.](./media/logic-apps-enterprise-integration-edifact/create-edifact-decode-connection-standard.png)
1. When you're done, select **Create**.
-1. In the EDIFACT action information box, provide the following property values:
+1. In the EDIFACT action pane, provide the following property values:
   | Property | Required | Description |
   |-|-|-|
   | **EDIFACT flat file message to decode** | Yes | The XML flat file message to decode. |
- | Other parameters | No | This operation includes the following other parameters: <p>- **Data element separator** <br>- **Release indicator** <br>- **Component separator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <p>For more information, review [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
+ | Other parameters | No | This operation includes the following other parameters: <br><br>- **Component separator** <br>- **Data element separator** <br>- **Release indicator** <br>- **Repetition separator** <br>- **Segment terminator** <br>- **Segment terminator suffix** <br>- **Decimal indicator** <br>- **Payload character set** <br>- **Preserve Interchange** <br>- **Suspend Interchange On Error** <br><br>For more information, see [EDIFACT message settings](logic-apps-enterprise-integration-edifact-message-settings.md). |
- For example, the message payload is the **Body** content output from the Request trigger:
+ For example, the XML message payload to decode can be the **Body** content output from the Request trigger:
![Screenshot showing the "Decode EDIFACT message" operation with the message decoding properties.](./media/logic-apps-enterprise-integration-edifact/decode-edifact-message-standard.png)
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
For Azure Logic Apps to receive incoming communication through your firewall, yo
| Southeast Asia | 52.163.93.214, 52.187.65.81, 52.187.65.155, 104.215.181.6, 20.195.49.246, 20.198.130.155, 23.98.121.180 | | Switzerland North | 51.103.128.52, 51.103.132.236, 51.103.134.138, 51.103.136.209, 20.203.230.170, 20.203.227.226 | | Switzerland West | 51.107.225.180, 51.107.225.167, 51.107.225.163, 51.107.239.66, 51.107.235.139,51.107.227.18 |
-| UAE Central | 20.45.75.193, 20.45.64.29, 20.45.64.87, 20.45.71.213 |
+| UAE Central | 20.45.75.193, 20.45.64.29, 20.45.64.87, 20.45.71.213, 40.126.212.77, 40.126.209.97 |
| UAE North | 20.46.42.220, 40.123.224.227, 40.123.224.143, 20.46.46.173, 20.74.255.147, 20.74.255.37 | | UK South | 51.140.79.109, 51.140.78.71, 51.140.84.39, 51.140.155.81, 20.108.102.180, 20.90.204.232, 20.108.148.173, 20.254.10.157 | | UK West | 51.141.48.98, 51.141.51.145, 51.141.53.164, 51.141.119.150, 51.104.62.166, 51.141.123.161 |
This section lists the outbound IP addresses that Azure Logic Apps requires in y
| Southeast Asia | 13.76.133.155, 52.163.228.93, 52.163.230.166, 13.76.4.194, 13.67.110.109, 13.67.91.135, 13.76.5.96, 13.67.107.128, 20.195.49.240, 20.195.49.29, 20.198.130.152, 20.198.128.124, 23.98.121.179, 23.98.121.115 | | Switzerland North | 51.103.137.79, 51.103.135.51, 51.103.139.122, 51.103.134.69, 51.103.138.96, 51.103.138.28, 51.103.136.37, 51.103.136.210, 20.203.230.58, 20.203.229.127, 20.203.224.37, 20.203.225.242 | | Switzerland West | 51.107.239.66, 51.107.231.86, 51.107.239.112, 51.107.239.123, 51.107.225.190, 51.107.225.179, 51.107.225.186, 51.107.225.151, 51.107.239.83, 51.107.232.61, 51.107.234.254, 51.107.226.253, 20.199.193.249 |
-| UAE Central | 20.45.75.200, 20.45.72.72, 20.45.75.236, 20.45.79.239, 20.45.67.170, 20.45.72.54, 20.45.67.134, 20.45.67.135 |
+| UAE Central | 20.45.75.200, 20.45.72.72, 20.45.75.236, 20.45.79.239, 20.45.67.170, 20.45.72.54, 20.45.67.134, 20.45.67.135, 40.126.210.93, 40.126.209.151, 40.126.208.156, 40.126.214.92 |
| UAE North | 40.123.230.45, 40.123.231.179, 40.123.231.186, 40.119.166.152, 40.123.228.182, 40.123.217.165, 40.123.216.73, 40.123.212.104, 20.74.255.28, 20.74.250.247, 20.216.16.75, 20.74.251.30 | | UK South | 51.140.74.14, 51.140.73.85, 51.140.78.44, 51.140.137.190, 51.140.153.135, 51.140.28.225, 51.140.142.28, 51.140.158.24, 20.108.102.142, 20.108.102.123, 20.90.204.228, 20.90.204.188, 20.108.146.132, 20.90.223.4, 20.26.15.70, 20.26.13.151 | | UK West | 51.141.54.185, 51.141.45.238, 51.141.47.136, 51.141.114.77, 51.141.112.112, 51.141.113.36, 51.141.118.119, 51.141.119.63, 51.104.58.40, 51.104.57.160, 51.141.121.72, 51.141.121.220 |
logic-apps Logic Apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-overview.md
ms.suite: integration
Previously updated : 05/24/2023 Last updated : 01/10/2024 # What is Azure Logic Apps?
logic-apps Monitor Workflows Collect Diagnostic Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-workflows-collect-diagnostic-data.md
Previously updated : 06/26/2023 Last updated : 01/10/2024 # As a developer, I want to collect and send diagnostics data for my logic app workflows to specific destinations, such as a Log Analytics workspace, storage account, or event hub, for further review.
machine-learning Concept Endpoints Online Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online-auth.md
A _user identity_ is a Microsoft Entra ID that you can use to create an endpoint
An _endpoint identity_ is a Microsoft Entra ID that runs the user container in deployments. In other words, if the identity is associated with the endpoint and used for the user container for the deployment, then it's called an endpoint identity. The endpoint identity would also need proper permissions for the user container to interact with resources as needed. For example, the endpoint identity would need the proper permissions to pull images from the Azure Container Registry or to interact with other Azure services.
+In general, the user identity and endpoint identity would have separate permission requirements. For more information on managing identities and permissions, see [How to authenticate clients for online endpoints](how-to-authenticate-online-endpoint.md). For more information on the special case of automatically adding extra permission for secrets, see [Additional permissions for user identity](#additional-permissions-for-user-identity-when-enforcing-access-to-default-secret-stores).
++ ## Limitation Microsoft Entra ID authentication (`aad_token`) is supported for managed online endpoints __only__. For Kubernetes online endpoints, you can use either a key or an Azure Machine Learning token (`aml_token`). + ## Permissions needed for user identity When you sign in to your Azure tenant with your Microsoft account (for example, using `az login`), you complete the user authentication step (commonly known as _authn_) and your identity as a user is determined. Now, say you want to create an online endpoint under a workspace, you'll need the proper permission to do so. This is where authorization (commonly known as _authz_) comes in.
For control plane operations, your user identity needs to have a proper Azure ro
> [!NOTE] > You can fetch your Microsoft Entra token (`aad_token`) directly from Microsoft Entra ID once you're signed in, and you don't need extra Azure RBAC permission on the workspace.
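+For reference, one common way to fetch this token is through the Azure CLI. The following sketch assumes you're already signed in and uses `https://ml.azure.com` as the resource, which is the Azure Machine Learning data plane audience that the token is later verified against:
+
+```azurecli
+az account get-access-token --resource https://ml.azure.com --query accessToken --output tsv
+```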
+#### Additional permissions for user identity when enforcing access to default secret stores
+
+If you intend to use the [secret injection](concept-secret-injection.md) feature and, while creating your endpoints, you set the flag to enforce access to the default secret stores, your _user identity_ needs to have the permission to read secrets from workspace connections.
+
+When the endpoint is created with a system-assigned identity (SAI) _and_ the flag is set to enforce access to the default secret stores, your user identity needs to have permissions to read secrets from workspace connections when creating the endpoint and creating the deployment(s) under the endpoint. This restriction ensures that only a _user identity_ with the permission to read secrets can grant the endpoint identity the permission to read secrets.
+
+- If a user identity doesn't have the permissions to read secrets from workspace connections, but it tries to create the _endpoint_ with an SAI and the endpoint's flag set to enforce access to the default secret stores, the endpoint creation is rejected.
+
+- Similarly, if a user identity doesn't have the permissions to read secrets from workspace connections, but tries to create a _deployment_ under the endpoint with an SAI and the endpoint's flag set to enforce access to the default secret stores, the deployment creation is rejected.
+
+When (1) the endpoint is created with a UAI, _or_ (2) the flag is _not_ set to enforce access to the default secret stores even if the endpoint uses an SAI, your user identity doesn't need to have permissions to read secrets from workspace connections. In this case, the endpoint identity won't be automatically granted the permission to read secrets, but you can still manually grant the endpoint identity this permission by assigning proper roles if needed. Regardless of whether the role assignment was done automatically or manually, the secret retrieval and injection will still be triggered if you mapped the environment variables with secret references in the deployment definition, and it will use the endpoint identity to do so.
+
+For more information on managing authorization to an Azure Machine Learning workspace, see [Manage access to Azure Machine Learning](how-to-assign-roles.md).
+
+For more information on secret injection, see [Secret injection in online endpoints](concept-secret-injection.md).
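+
+As a concrete sketch (the identity ID and scope are placeholders), an administrator could grant a user identity this permission by assigning the built-in role at the workspace scope:
+
+```azurecli
+az role assignment create --assignee <user-identity-id> --role "Azure Machine Learning Workspace Connection Secrets Reader" --scope /subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>
+```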
+ ### Data plane operations
An online deployment runs your user container with the _endpoint identity_, that
### Automatic role assignment for endpoint identity
-Online endpoints require Azure Container Registry (ACR) pull permission on the ACR associated with the workspace. They also require Storage Blob Data Reader permission on the default datastore of the workspace. By default, these permissions are automatically granted to the endpoint identity if the endpoint identity is a system-assigned identity.
+If the endpoint identity is a system-assigned identity, some roles are assigned to the endpoint identity for convenience.
+
+Role | Description | Condition for the automatic role assignment
+-- | -- | --
+`AcrPull` | Allows the endpoint identity to pull images from the Azure Container Registry (ACR) associated with the workspace. | The endpoint identity is a system-assigned identity (SAI).
+`Storage Blob Data Reader` | Allows the endpoint identity to read blobs from the default datastore of the workspace. | The endpoint identity is a system-assigned identity (SAI).
+`AzureML Metrics Writer (preview)` | Allows the endpoint identity to write metrics to the workspace. | The endpoint identity is a system-assigned identity (SAI).
+`Azure Machine Learning Workspace Connection Secrets Reader` <sup>1</sup> | Allows the endpoint identity to read secrets from workspace connections. | The endpoint identity is a system-assigned identity (SAI). The endpoint is created with a flag to enforce access to the default secret stores. The _user identity_ that creates the endpoint has the same permission to read secrets from workspace connections. <sup>2</sup>
-Also, when creating an endpoint, if you set the flag to enforce access to the default secret stores, the endpoint identity is automatically granted the permission to read secrets from workspace connections.
+<sup>1</sup> For more information on the `Azure Machine Learning Workspace Connection Secrets Reader` role, see [Assign permissions to the identity](how-to-authenticate-online-endpoint.md#assign-permissions-to-the-identity).
-There's no automatic role assignment if the endpoint identity is a user-assigned identity.
+<sup>2</sup> Even if the endpoint identity is an SAI, there's no automatic assignment of this role if the enforce flag isn't set or if the user identity doesn't have the permission. For more information, see [How to deploy online endpoint with secret injection](how-to-deploy-online-endpoint-with-secret-injection.md#create-an-endpoint).
-In more detail:
-- If you use a system-assigned identity (SAI) for the endpoint, roles with fundamental permissions (such as Azure Container Registry pull permission, and Storage Blob Data Reader) are automatically assigned to the endpoint identity. Also, you can set a flag on the endpoint to allow its SAI have the permission to read secrets from workspace connections. To have this permission, the `Azure Machine Learning Workspace Connection Secret Reader` role would be automatically assigned to the endpoint identity. For this role to be automatically assigned to the endpoint identity, the following conditions must be met:
- - Your _user identity_, that is, the identity that creates the endpoint, has the permissions to read secrets from workspace connections when creating the endpoint.
- - The endpoint uses an SAI.
- - The endpoint is defined with a flag to enforce access to default secret stores (workspace connections under the current workspace) when creating the endpoint.
-- If your endpoint uses a UAI, or it uses the Key Vault as the secret store with an SAI. In these cases, you need to manually assign to the endpoint identity the role with the proper permissions to read secrets from the Key Vault.
+If the endpoint identity is a user-assigned identity, there's no automatic role assignment. In this case, you need to manually assign roles to the endpoint identity as needed.
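+
+For example, the following sketch (with placeholder IDs and resource scopes) grants a user-assigned endpoint identity the two fundamental permissions from the preceding table:
+
+```azurecli
+# Allow the endpoint identity to pull images from the workspace's Azure Container Registry
+az role assignment create --assignee <endpoint-identity-principal-id> --role "AcrPull" --scope <container-registry-resource-id>
+
+# Allow the endpoint identity to read blobs from the workspace's default datastore
+az role assignment create --assignee <endpoint-identity-principal-id> --role "Storage Blob Data Reader" --scope <storage-account-resource-id>
+```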
## Choosing the permissions and scope for authorization
machine-learning Concept Endpoints Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online.md
reviewer: msakande Previously updated : 09/13/2023 Last updated : 10/24/2023
-#Customer intent: As an MLOps administrator, I want to understand what a managed endpoint is and why I need it.
+#Customer intent: As an ML pro, I want to understand what an online endpoint is and why I need it.
# Online endpoints and deployments for real-time inference
Monitoring for Azure Machine Learning endpoints is possible via integration with
For more information on monitoring, see [Monitor online endpoints](how-to-monitor-online-endpoints.md).
+### Secret injection in online deployments (preview)
+
+Secret injection in the context of an online deployment is a process of retrieving secrets (such as API keys) from secret stores, and injecting them into your user container that runs inside an online deployment. Secrets will eventually be accessible via environment variables, thereby providing a secure way for them to be consumed by the inference server that runs your scoring script or by the inferencing stack that you bring with a BYOC (bring your own container) deployment approach.
+
+There are two ways to inject secrets. You can inject secrets yourself, using managed identities, or you can use the secret injection feature. To learn more about the ways to inject secrets, see [Secret injection in online endpoints (preview)](concept-secret-injection.md).
+ ## Next steps
machine-learning Concept Secret Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secret-injection.md
+
+ Title: What is secret injection in online endpoints (preview)?
+
+description: Learn about secret injection as it applies to online endpoints in Azure Machine Learning.
+++++++
+reviewer: msakande
+ Last updated : 01/10/2024+
+#CustomerIntent: As an ML Pro, I want to retrieve and inject secrets into the deployment environment easily so that deployments I create can consume the secrets in a secured manner.
++
+# Secret injection in online endpoints (preview)
++
+Secret injection in the context of an online endpoint is a process of retrieving secrets (such as API keys) from secret stores, and injecting them into your user container that runs inside an online deployment. Secrets are eventually accessed securely via environment variables, which are used by the inference server that runs your scoring script or by the inferencing stack that you bring with a BYOC (bring your own container) deployment approach.
++
+## Problem statement
+
+When you create an online deployment, you might want to use secrets from within the deployment to access external services. Some of these external services include Microsoft Azure OpenAI service, Azure AI Services, and Azure AI Content Safety.
+
+To use the secrets, you have to find a way to securely pass them to your user container that runs inside the deployment. We don't recommend that you include secrets as part of the deployment definition, since this practice would expose the secrets in the deployment definition.
+
+A better approach is to store the secrets in secret stores and then retrieve them securely from within the deployment. However, this approach poses its own challenge: how the deployment should authenticate itself to the secret stores to retrieve secrets. Because the online deployment runs your user container using the _endpoint identity_, which is a [managed identity](/entr), you can assign roles to that identity to control the endpoint identity's permissions and allow the endpoint to retrieve secrets from the secret stores.
+Using this approach requires you to do the following tasks:
+
+- Assign the right roles to the endpoint identity so that it can read secrets from the secret stores.
+- Implement the scoring logic for the deployment so that it uses the endpoint's managed identity to retrieve the secrets from the secret stores.
+
+While this approach of using a managed identity is a secure way to retrieve and inject secrets, [secret injection via the secret injection feature](#secret-injection-via-the-secret-injection-feature) further simplifies the process of retrieving secrets for [workspace connections](prompt-flow/concept-connections.md) and [key vaults](../key-vault/general/overview.md).
++
+## Managed identity associated with the endpoint
++
+An online deployment runs your user container with the managed identity associated with the endpoint. This managed identity, called the _endpoint identity_, is a [Microsoft Entra ID](/entr). Therefore, you can assign Azure roles to the identity to control permissions that are required to perform operations. The endpoint identity can be either a system-assigned identity (SAI) or a user-assigned identity (UAI). You can decide which of these kinds of identities to use when you create the endpoint.
+
+- For a _system-assigned identity_, the identity is created automatically when you create the endpoint, and roles with fundamental permissions (such as the Azure Container Registry pull permission and the Storage Blob Data Reader role) are automatically assigned.
+- For a _user-assigned identity_, you need to create the identity first, and then associate it with the endpoint when you create the endpoint. You're also responsible for assigning proper roles to the UAI as needed.
+
+For more information on using managed identities of an endpoint, see [How to access resources from endpoints with managed identities](how-to-access-resources-from-endpoints-managed-identities.md), and the example for [using managed identities to interact with external services](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/managed/managed-identities).
++
+## Role assignment to the endpoint identity
+
+The following roles are required by the secret stores:
+
+- For __secrets stored in workspace connections under your workspace__: `Workspace Connections` provides a [List Secrets API (preview)](/rest/api/azureml/2023-08-01-preview/workspace-connections/list-secrets) that requires the identity that calls the API to have `Azure Machine Learning Workspace Connection Secrets Reader` role (or equivalent) assigned to the identity.
+- For __secrets stored in an external Microsoft Azure Key Vault__: Key Vault provides a [Get Secret Versions API](/rest/api/keyvault/secrets/get-secret-versions/get-secret-versions) that requires the identity that calls the API to have `Key Vault Secrets User` role (or equivalent) assigned to the identity.
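+
+A minimal sketch of the corresponding role assignments (IDs and scopes are placeholders) looks like the following:
+
+```azurecli
+# Read secrets from workspace connections
+az role assignment create --assignee <endpoint-identity-principal-id> --role "Azure Machine Learning Workspace Connection Secrets Reader" --scope /subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.MachineLearningServices/workspaces/<workspace-name>
+
+# Read secrets from an external Azure Key Vault
+az role assignment create --assignee <endpoint-identity-principal-id> --role "Key Vault Secrets User" --scope /subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.KeyVault/vaults/<key-vault-name>
+```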
++
+## Implementation of secret injection
+
+Once secrets (such as API keys) are retrieved from secret stores, there are two ways to inject them into a user container that runs inside the online deployment:
+
+- Inject secrets yourself, using managed identities.
+- Inject secrets, using the secret injection feature.
+
+Both of these approaches involve two steps:
+
+1. First, retrieve secrets from the secret stores, using the endpoint identity.
+1. Second, inject the secrets into your user container.
+
+### Secret injection via the use of managed identities
+
+In your deployment definition, you need to use the endpoint identity to call the APIs from secret stores. You can implement this logic either in your scoring script or in shell scripts that you run in your BYOC container. To implement secret injection via the use of managed identities, see the [example for using managed identities to interact with external services](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/managed/managed-identities).
+
+### Secret injection via the secret injection feature
+
+To use the secret injection feature, in your deployment definition, map the secrets (that you want to refer to) from workspace connections or the Key Vault onto the environment variables. This approach doesn't require you to write any code in your scoring script or in shell scripts that you run in your BYOC container. To map the secrets from workspace connections or the Key Vault onto the environment variables, the following conditions must be met:
+
+- During endpoint creation, if an online endpoint was defined to enforce access to default secret stores (workspace connections under the current workspace), your user identity that creates the deployment under the endpoint should have the permissions to read secrets from workspace connections.
+- The endpoint identity that the deployment uses should have permissions to read secrets from either workspace connections or the Key Vault, as referenced in the deployment definition.
+
+> [!NOTE]
+> - If the endpoint was successfully created with an SAI and the flag set to enforce access to default secret stores, then the endpoint would automatically have the permission for workspace connections.
+> - In the case where the endpoint used a UAI, or the flag to enforce access to default secret stores wasn't set, then the endpoint identity might not have the permission for workspace connections. In such a situation, you need to manually assign the role for the workspace connections to the endpoint identity.
+> - The endpoint identity won't automatically receive permission for the external Key Vault. If you're using the Key Vault as a secret store, you'll need to manually assign the role for the Key Vault to the endpoint identity.
+
+For more information on using secret injection, see [Deploy machine learning models to online endpoints with secret injection (preview)](how-to-deploy-online-endpoint-with-secret-injection.md).
++
+## Related content
+
+- [Deploy machine learning models to online endpoints with secret injection (preview)](how-to-deploy-online-endpoint-with-secret-injection.md)
+- [Authentication for managed online endpoints](concept-endpoints-online-auth.md)
+- [Online endpoints](concept-endpoints-online.md)
machine-learning How To Authenticate Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-authenticate-online-endpoint.md
In this section, you assign permissions to the user identity that you use for in
### Use a built-in role
-The `AzureML Data Scientist` [built-in role](../role-based-access-control/built-in-roles.md#azureml-data-scientist) uses wildcards to include the following _control plane_ RBAC actions:
+The `AzureML Data Scientist` [built-in role](../role-based-access-control/built-in-roles.md#azureml-data-scientist) can be used to manage and use endpoints and deployments. It uses wildcards to include the following _control plane_ RBAC actions:
- `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/write` - `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/delete` - `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/read`
The `AzureML Data Scientist` [built-in role](../role-based-access-control/built-
and to include the following _data plane_ RBAC action: - `Microsoft.MachineLearningServices/workspaces/onlineEndpoints/score/action`
-If you use this built-in role, there's no action needed at this step.
+Optionally, the `Azure Machine Learning Workspace Connection Secrets Reader` built-in role can be used to access secrets from workspace connections. It includes the following _control plane_ RBAC actions:
+- `Microsoft.MachineLearningServices/workspaces/connections/listsecrets/action`
+- `Microsoft.MachineLearningServices/workspaces/metadata/secrets/read`
+
+If you use these built-in roles, there's no action needed at this step.
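+
+To check which of these roles an identity already holds at the workspace scope, you can list its existing role assignments (placeholders shown):
+
+```azurecli
+az role assignment list --assignee <identityId> --scope /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.MachineLearningServices/workspaces/<workspaceName> --output table
+```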
### (Optional) Create a custom role
You can skip this step if you're using built-in roles or other pre-made custom r
az role assignment create --assignee <identityId> --role "AzureML Data Scientist" --scope /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.MachineLearningServices/workspaces/<workspaceName> ```
+1. Optionally, if you're using the `Azure Machine Learning Workspace Connection Secrets Reader` built-in role, use the following code to assign the role to your user identity.
+
+ ```bash
+ az role assignment create --assignee <identityId> --role "Azure Machine Learning Workspace Connection Secrets Reader" --scope /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.MachineLearningServices/workspaces/<workspaceName>
+ ```
+ 1. If you're using a custom role, use the following code to assign the role to your user identity. ```bash
After you retrieve the Microsoft Entra token, you can verify that the token is f
``` - ## Create an endpoint The following example creates the endpoint with a system-assigned identity (SAI) as the endpoint identity. The SAI is the default identity type of the managed identity for endpoints. Some basic roles are automatically assigned for the SAI. For more information on role assignment for a system-assigned identity, see [Automatic role assignment for endpoint identity](concept-endpoints-online-auth.md#automatic-role-assignment-for-endpoint-identity).
You can find the scoring URI on the __Details__ tab of the endpoint's page.
## Get the key or token for data plane operations - A key or token can be used for data plane operations, even though the process of getting the key or token is a control plane operation. In other words, you use a control plane token to get the key or token that you later use to perform your data plane operations. Getting the _key_ or _Azure Machine Learning token_ requires that the correct role is assigned to the user identity that is requesting it, as described in [authorization for control plane operations](concept-endpoints-online-auth.md#control-plane-operations).
Microsoft Entra token isn't exposed in the studio.
### Verify the resource endpoint and client ID for the Microsoft Entra token - After getting the Entra token, you can verify that the token is for the right Azure resource endpoint `ml.azure.com` and the right client ID by decoding the token via [jwt.ms](https://jwt.ms/), which will return a json response with the following information: ```json
machine-learning How To Deploy Online Endpoint With Secret Injection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoint-with-secret-injection.md
+
+ Title: Access secrets from online deployment using secret injection (preview)
+
+description: Learn to use secret injection with online endpoint and deployment to access secrets like API keys.
++++++
+reviewer: msakande
Last updated : 01/10/2024++++
+# Access secrets from online deployment using secret injection (preview)
++
+In this article, you learn to use secret injection with an online endpoint and deployment to access secrets from a secret store.
+
+You'll learn to:
+
+> [!div class="checklist"]
+> * Set up your user identity and its permissions
+> * Create workspace connections and/or key vaults to use as secret stores
+> * Create the endpoint and deployment by using the secret injection feature
++
+## Prerequisites
+
+- To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+- Install and configure the [Azure Machine Learning CLI (v2) extension](how-to-configure-cli.md) or the [Azure Machine Learning Python SDK (v2)](https://aka.ms/sdk-v2-install).
+
+- An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your Azure Machine Learning extension as stated previously.
+
+- An Azure Machine Learning workspace. You'll have a workspace if you configured your Azure Machine Learning extension as stated previously.
+
+- Any trained machine learning model ready for scoring and deployment.
+
+## Choose a secret store
+
+You can choose to store your secrets (such as API keys) using either:
+
+- __Workspace connections under the workspace__: If you use this kind of secret store, you can later grant permission to the endpoint identity (at endpoint creation time) to read secrets from workspace connections automatically, provided certain conditions are met. For more information, see the system-assigned identity tab from the [Create an endpoint](#create-an-endpoint) section.
+- __Key vaults__ that aren't necessarily under the workspace: If you use this kind of secret store, the endpoint identity won't be granted permission to read secrets from the key vaults automatically. Therefore, if you want to use a managed key vault service such as Microsoft Azure Key Vault as a secret store, you must assign a proper role later.
+
+#### Use workspace connection as a secret store
+
+You can create workspace connections to use in your deployment. For example, you can create a connection to Microsoft Azure OpenAI Service by using [Workspace Connections - Create REST API](/rest/api/azureml/2023-08-01-preview/workspace-connections/create).
+
+Alternatively, you can create a custom connection by using Azure Machine Learning studio (see [How to create a custom connection for prompt flow](./prompt-flow/tools-reference/python-tool.md#create-a-custom-connection)) or Azure AI Studio (see [How to create a custom connection in AI Studio](../ai-studio/how-to/connections-add.md?tabs=custom#create-a-new-connection)).
+
+1. Create an Azure OpenAI connection:
+
+ ```REST
+ PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroupName}}/providers/Microsoft.MachineLearningServices/workspaces/{{workspaceName}}/connections/{{connectionName}}?api-version=2023-08-01-preview
+ Authorization: Bearer {{token}}
+ Content-Type: application/json
+
+ {
+ "properties": {
+ "authType": "ApiKey",
+ "category": "AzureOpenAI",
+ "credentials": {
+ "key": "<key>",
+          "endpoint": "https://<name>.openai.azure.com/"
+ },
+ "expiryTime": null,
+ "target": "https://<name>.openai.azure.com/",
+ "isSharedToAll": false,
+ "sharedUserList": [],
+ "metadata": {
+ "ApiType": "Azure"
+ }
+ }
+ }
+ ```
+
+1. Alternatively, you can create a custom connection:
+
+ ```REST
+ PUT https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroupName}}/providers/Microsoft.MachineLearningServices/workspaces/{{workspaceName}}/connections/{{connectionName}}?api-version=2023-08-01-preview
+ Authorization: Bearer {{token}}
+ Content-Type: application/json
+
+ {
+ "properties": {
+ "authType": "CustomKeys",
+ "category": "CustomKeys",
+ "credentials": {
+ "keys": {
+ "OPENAI_API_KEY": "<key>",
+ "SPEECH_API_KEY": "<key>"
+ }
+ },
+ "expiryTime": null,
+ "target": "_",
+ "isSharedToAll": false,
+ "sharedUserList": [],
+ "metadata": {
+ "OPENAI_API_BASE": "<oai endpoint>",
+ "OPENAI_API_VERSION": "<oai version>",
+ "OPENAI_API_TYPE": "azure",
+          "SPEECH_REGION": "eastus"
+ }
+ }
+ }
+ ```
+
+1. Verify that the user identity can read the secrets from the workspace connection, by using [Workspace Connections - List Secrets REST API (preview)](/rest/api/azureml/2023-08-01-preview/workspace-connections/list-secrets).
+
+ ```REST
+ POST https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resourceGroupName}}/providers/Microsoft.MachineLearningServices/workspaces/{{workspaceName}}/connections/{{connectionName}}/listsecrets?api-version=2023-08-01-preview
+ Authorization: Bearer {{token}}
+ ```
+
+> [!NOTE]
+> The previous code snippets use a token in the `Authorization` header when making REST API calls. You can get the token by running `az account get-access-token`. For more information on getting a token, see [Get an access token](how-to-authenticate-online-endpoint.md#get-the-microsoft-entra-token-for-control-plane-operations).
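+
+For instance, here's a sketch of capturing that token into a shell variable and calling the List Secrets API shown earlier; by default, `az account get-access-token` returns an Azure Resource Manager token, which is what these management-plane REST calls expect:
+
+```azurecli
+TOKEN=$(az account get-access-token --query accessToken --output tsv)
+
+curl -X POST "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.MachineLearningServices/workspaces/<workspaceName>/connections/<connectionName>/listsecrets?api-version=2023-08-01-preview" \
+  -H "Authorization: Bearer $TOKEN"
+```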
+
+#### (Optional) Use Azure Key Vault as a secret store
+
+Create the key vault and set a secret to use in your deployment. For more information on creating the key vault, see [Set and retrieve a secret from Azure Key Vault using Azure CLI](../key-vault/secrets/quick-create-cli.md). Also,
+- [az keyvault CLI](/cli/azure/keyvault#az-keyvault-create) and [Set Secret REST API](/rest/api/keyvault/secrets/set-secret/set-secret) show how to set a secret.
+- [az keyvault secret show CLI](/cli/azure/keyvault/secret#az-keyvault-secret-show) and [Get Secret Versions REST API](/rest/api/keyvault/secrets/get-secret-versions/get-secret-versions) show how to retrieve a secret version.
+
+1. Create an Azure Key Vault:
+
+ ```azurecli
+ az keyvault create --name mykeyvault --resource-group myrg --location eastus
+ ```
+
+1. Create a secret:
+
+ ```azurecli
+ az keyvault secret set --vault-name mykeyvault --name secret1 --value <value>
+ ```
+
+    This command returns the secret that it creates, including the new secret version. You can check the `id` property of the response to get the secret version; the `id` value looks like `https://mykeyvault.vault.azure.net/secrets/<secret_name>/<secret_version>`.
+
+1. Verify that the user identity can read the secret from the key vault:
+
+ ```azurecli
+ az keyvault secret show --vault-name mykeyvault --name secret1 --version <secret_version>
+ ```
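+
+    If you only need the full secret identifier (the version is its last path segment), a `--query` filter narrows the output to that single value, for example:
+
+    ```azurecli
+    az keyvault secret show --vault-name mykeyvault --name secret1 --query id --output tsv
+    ```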
+
+> [!IMPORTANT]
+> If you use the key vault as a secret store for secret injection, you must configure the key vault's permission model as Azure role-based access control (RBAC). For more information, see [Azure RBAC vs. access policy for Key Vault](/azure/key-vault/general/rbac-access-policy).
+
+## Choose a user identity
+
+Choose the user identity that you'll use to create the online endpoint and online deployment. This user identity can be a user account, a service principal account, or a managed identity in Microsoft Entra ID. To set up the user identity, follow the steps in [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md).
+
+#### (Optional) Assign a role to the user identity
+
+- If you want the endpoint's system-assigned identity (SAI) to be automatically granted permission to read secrets from workspace connections, your user identity __must__ have the `Azure Machine Learning Workspace Connection Secrets Reader` role (or higher) on the scope of the workspace.
+ - An admin that has the `Microsoft.Authorization/roleAssignments/write` permission can run a CLI command to assign the role to the _user identity_:
+
+ ```azurecli
+ az role assignment create --assignee <UserIdentityID> --role "Azure Machine Learning Workspace Connection Secrets Reader" --scope /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.MachineLearningServices/workspaces/<workspaceName>
+ ```
+
+ > [!NOTE]
+ > The endpoint's system-assigned identity (SAI) won't be automatically granted permission for reading secrets from key vaults. Hence, the user identity doesn't need to be assigned a role for the Key Vault.
+
+- If you want to use a user-assigned identity (UAI) for the endpoint, you __don't need__ to assign the role to your user identity. Instead, if you intend to use the secret injection feature, you must assign the role to the endpoint's UAI manually.
+ - An admin that has the `Microsoft.Authorization/roleAssignments/write` permission can run the following commands to assign the role to the _endpoint identity_:
+
+ __For workspace connections__:
+
+ ```azurecli
+ az role assignment create --assignee <EndpointIdentityID> --role "Azure Machine Learning Workspace Connection Secrets Reader" --scope /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.MachineLearningServices/workspaces/<workspaceName>
+ ```
+
+ __For key vaults__:
+
+ ```azurecli
+ az role assignment create --assignee <EndpointIdentityID> --role "Key Vault Secrets User" --scope /subscriptions/<subscriptionId>/resourcegroups/<resourceGroupName>/providers/Microsoft.KeyVault/vaults/<vaultName>
+ ```
+
+- Verify that an identity (either a user identity or endpoint identity) has the role assigned, by going to the resource in the Azure portal. For example, in the Azure Machine Learning workspace or the Key Vault:
+ 1. Select the __Access control (IAM)__ tab.
+ 1. Select the __Check access__ button and find the identity.
+ 1. Verify that the right role shows up under the __Current role assignments__ tab.
+
+## Create an endpoint
+
+### [System-assigned identity](#tab/sai)
+
+If you're using a system-assigned identity (SAI) as the endpoint identity, specify whether you want to enforce access to default secret stores (namely, workspace connections under the workspace) to the endpoint identity.
+
+1. Create an `endpoint.yaml` file:
+
+ ```YAML
+ $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
+ name: my-endpoint
+ auth_mode: key
+ properties:
+ enforce_access_to_default_secret_stores: enabled # default: disabled
+ ```
+
+1. Create the endpoint, using the `endpoint.yaml` file:
+
+ ```azurecli
+ az ml online-endpoint create -f endpoint.yaml
+ ```
+
+If you don't specify the `identity` property in the endpoint definition, the endpoint will use an SAI by default.
+
+If the following conditions are met, the endpoint identity will automatically be granted the `Azure Machine Learning Workspace Connection Secrets Reader` role (or higher) on the scope of the workspace:
+
+- The user identity that creates the endpoint has the permission to read secrets from workspace connections (`Microsoft.MachineLearningServices/workspaces/connections/listsecrets/action`).
+- The endpoint uses an SAI.
+- The endpoint is defined with a flag to enforce access to default secret stores (workspace connections under the current workspace) when creating the endpoint.
+
+The endpoint identity won't automatically be granted a role to read secrets from the Key Vault. If you want to use the Key Vault as a secret store, you need to manually assign a proper role such as `Key Vault Secrets User` to the _endpoint identity_ on the scope of the Key Vault. For more information on roles, see [Azure built-in roles for Key Vault data plane operations](../key-vault/general/rbac-guide.md#azure-built-in-roles-for-key-vault-data-plane-operations).
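+
+To assign such a role, you need the principal ID of the endpoint's system-assigned identity. One way to look it up is a sketch like the following; the exact property path in the output can vary by `ml` extension version:
+
+```azurecli
+az ml online-endpoint show --name my-endpoint --query identity.principal_id --output tsv
+```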
+
+### [User-assigned identity](#tab/uai)
+
+If you're using a user-assigned identity (UAI) as the endpoint identity, you're not allowed to specify the `enforce_access_to_default_secret_stores` flag.
+
+1. Create an `endpoint.yaml` file:
+
+ ```YAML
+ $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
+ name: my-endpoint
+ auth_mode: key
+ identity:
+ type: user_assigned
+      user_assigned_identities: /subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/myrg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/my-identity
+ ```
+
+1. Create the endpoint, using the `endpoint.yaml` file:
+
+ ```azurecli
+ az ml online-endpoint create -f endpoint.yaml
+ ```
+
+When using a UAI, you must manually assign any required roles to the endpoint identity, as described in [(Optional) Assign a role to the user identity](#optional-assign-a-role-to-the-user-identity).
+++
+## Create a deployment
+
+1. Author a scoring script or Dockerfile and the related scripts so that the deployment can consume the secrets via environment variables.
+
+ - There's no need for you to call the secret retrieval APIs for the workspace connections or key vaults. The environment variables are populated with the secrets when the user container in the deployment initiates.
+
+ - The value that gets injected into an environment variable can be one of the three types:
+ - The whole [List Secrets API (preview)](/rest/api/azureml/workspace-connections/list-secrets) response. You'll need to understand the API response structure, parse it, and use it in your user container.
+ - Individual secret or metadata from the workspace connection. You can use it without understanding the workspace connection API response structure.
+ - Individual secret version from the Key Vault. You can use it without understanding the Key Vault API response structure.
+
+1. Initiate the creation of the deployment, using either the scoring script (if you use a custom model) or a Dockerfile (if you take the BYOC approach to deployment). Specify the environment variables that your code expects to find within the user container.
+
+ If the values that are mapped to the environment variables follow certain patterns, the endpoint identity will be used to perform secret retrieval and injection.
+
+ | Pattern | Behavior |
+ | -- | -- |
+ | `${{azureml://connections/<connection_name>}}` | The whole [List Secrets API (preview)](/rest/api/azureml/workspace-connections/list-secrets) response is injected into the environment variable. |
+ | `${{azureml://connections/<connection_name>/credentials/<credential_name>}}` | The value of the credential is injected into the environment variable. |
+ | `${{azureml://connections/<connection_name>/metadata/<metadata_name>}}` | The value of the metadata is injected into the environment variable. |
+ | `${{azureml://connections/<connection_name>/target}}` | The value of the target (where applicable) is injected into the environment variable. |
+ | `${{keyvault:https://<keyvault_name>.vault.azure.net/secrets/<secret_name>/<secret_version>}}` | The value of the secret version is injected into the environment variable. |
+
+ For example:
+
+ 1. Create `deployment.yaml`:
+
+ ```YAML
+ $schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
+ name: blue
+ endpoint_name: my-endpoint
+ #…
+ environment_variables:
+ AOAI_CONNECTION: ${{azureml://connections/aoai_connection}}
+ LANGCHAIN_CONNECTION: ${{azureml://connections/multi_connection_langchain}}
+
+ OPENAI_KEY: ${{azureml://connections/multi_connection_langchain/credentials/OPENAI_API_KEY}}
+ OPENAI_VERSION: ${{azureml://connections/multi_connection_langchain/metadata/OPENAI_API_VERSION}}
+
+ USER_SECRET_KV1_KEY: ${{keyvault:https://mykeyvault.vault.azure.net/secrets/secret1/secretversion1}}
+ ```
+
+ 1. Create the deployment:
+
+ ```azurecli
+ az ml online-deployment create -f deployment.yaml
+ ```
++
+If the `enforce_access_to_default_secret_stores` flag was set for the endpoint, the user identity's permission to read secrets from workspace connections will be checked both at endpoint creation and deployment creation time. If the user identity doesn't have the permission, the creation will fail.
+
+At deployment creation time, if any environment variable is mapped to a value that follows the patterns in the previous table, secret retrieval and injection will be performed with the endpoint identity (either an SAI or a UAI). If the endpoint identity doesn't have the permission to read secrets from designated secret stores (either workspace connections or key vaults), the deployment creation will fail. Also, if the specified secret reference doesn't exist in the secret stores, the deployment creation will fail.
+
+For more information on errors that can occur during deployment of Azure Machine Learning online endpoints, see [Secret Injection Errors](how-to-troubleshoot-online-endpoints.md#error-secretsinjectionerror).
++
+## Consume the secrets
+
+You can consume the secrets by retrieving them from the environment variables within the user container running in your deployments.
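+
+For example, a scoring script might read an injected value in its `init()` function. The following minimal sketch assumes the deployment mapped an environment variable named `OPENAI_KEY`, as in the earlier `deployment.yaml` example:
+
+```python
+import os
+
+openai_key = None
+
+def init():
+    global openai_key
+    # The secret was injected into the container's environment before startup,
+    # so it's read like any other environment variable. Never log its value.
+    openai_key = os.environ["OPENAI_KEY"]
+
+def run(raw_data):
+    # Use openai_key here to call the external service.
+    return {"secret_configured": openai_key is not None}
+```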
++
+## Related content
+
+- [Secret injection in online endpoints (preview)](concept-secret-injection.md)
+- [How to authenticate clients for online endpoint](how-to-authenticate-online-endpoint.md)
+- [Deploy and score a model using an online endpoint](how-to-deploy-online-endpoints.md)
+- [Use a custom container to deploy a model using an online endpoint](how-to-deploy-custom-container.md)
machine-learning How To Identity Based Service Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md
Previously updated : 09/23/2022 Last updated : 01/05/2024
To access the workspace ACR, create machine learning compute cluster with system
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
-```azurecli-interaction
+```azurecli-interactive
az ml compute create --name cpu-cluster --type <cluster name> --identity-type systemassigned ```
machine-learning How To Manage Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md
To request an exception from the Azure Machine Learning product team, use the st
| Total connections active at endpoint level for all deployments | 500 <sup>5</sup> | Yes | Managed online endpoint | | Total bandwidth at endpoint level for all deployments | 5 MBPS <sup>5</sup> | Yes | Managed online endpoint |
-> [!NOTE]
-> 1. This is a regional limit. For example, if current limit on number of endpoint is 100, you can create 100 endpoints in the East US region, 100 endpoints in the West US region, and 100 endpoints in each of the other supported regions in a single subscription. Same principle applies to all the other limits.
-> 2. Single dashes like, `my-endpoint-name`, are accepted in endpoint and deployment names.
-> 3. Endpoints and deployments can be of different types, but limits apply to the sum of all types. For example, the sum of managed online endpoints, Kubernetes online endpoint and batch endpoint under each subscription can't exceed 100 per region by default. Similarly, the sum of managed online deployments, Kubernetes online deployments and batch deployments under each subscription can't exceed 500 per region by default.
-> 4. We reserve 20% extra compute resources for performing upgrades. For example, if you request 10 instances in a deployment, you must have a quota for 12. Otherwise, you receive an error. There are some VM SKUs that are exempt from extra quota. See [virtual machine quota allocation for deployment](how-to-deploy-online-endpoints.md#virtual-machine-quota-allocation-for-deployment) for more.
-> 5. Requests per second, connections, bandwidth etc are related. If you request for increase for any of these limits, ensure estimating/calculating other related limites together.
+
+<sup>1</sup> This is a regional limit. For example, if the current limit on the number of endpoints is 100, you can create 100 endpoints in the East US region, 100 endpoints in the West US region, and 100 endpoints in each of the other supported regions in a single subscription. The same principle applies to all the other limits.
+
+<sup>2</sup> Single dashes, as in `my-endpoint-name`, are accepted in endpoint and deployment names.
+
+<sup>3</sup> Endpoints and deployments can be of different types, but limits apply to the sum of all types. For example, the sum of managed online endpoints, Kubernetes online endpoints, and batch endpoints under each subscription can't exceed 100 per region by default. Similarly, the sum of managed online deployments, Kubernetes online deployments, and batch deployments under each subscription can't exceed 500 per region by default.
+
+<sup>4</sup> We reserve 20% extra compute resources for performing upgrades. For example, if you request 10 instances in a deployment, you must have a quota for 12. Otherwise, you receive an error. Some VM SKUs are exempt from the extra quota. For more information, see [virtual machine quota allocation for deployment](how-to-deploy-online-endpoints.md#virtual-machine-quota-allocation-for-deployment).
+
+<sup>5</sup> Requests per second, connections, bandwidth, and similar limits are related. If you request an increase for any of these limits, make sure that you estimate or calculate the other related limits together.
### Azure Machine Learning pipelines [Azure Machine Learning pipelines](concept-ml-pipelines.md) have the following limits.
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
Local deployment is deploying a model to a local Docker environment. Local deplo
Local deployment supports creation, update, and deletion of a local endpoint. It also allows you to invoke and get logs from the endpoint.
-## [Azure CLI](#tab/cli)
+### [Azure CLI](#tab/cli)
To use local deployment, add `--local` to the appropriate CLI command:
az ml online-deployment create --endpoint-name <endpoint-name> -n <deployment-name> -f <spec_file.yaml> --local ```
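For context, a minimal end-to-end local flow might look like the following hedged sketch. The YAML file names and the sample request file are placeholders, and `--local` keeps every step in the local Docker environment.

```azurecli
# Create the endpoint and deployment locally (YAML file names are placeholders).
az ml online-endpoint create --name <endpoint-name> -f endpoint.yml --local
az ml online-deployment create --endpoint-name <endpoint-name> -n <deployment-name> -f deployment.yml --local

# Invoke the local endpoint with a sample request, then pull its logs.
az ml online-endpoint invoke --name <endpoint-name> --request-file sample-request.json --local
az ml online-deployment get-logs --endpoint-name <endpoint-name> -n <deployment-name> --local
```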
-## [Python SDK](#tab/python)
+### [Python SDK](#tab/python)
To use local deployment, add `local=True` parameter in the command:
ml_client.begin_create_or_update(online_deployment, local=True)
* `ml_client` is an instance of the `MLClient` class, and `online_deployment` is an instance of either the `ManagedOnlineDeployment` class or the `KubernetesOnlineDeployment` class.
-## [Studio](#tab/studio)
+### [Studio](#tab/studio)
The studio doesn't support local endpoints/deployments. See the Azure CLI or Python tabs for steps to perform deployment locally.
There are two types of containers that you can get the logs from:
- Inference server: Logs include the console log (from [the inference server](how-to-inference-server-http.md)) which contains the output of print/logging functions from your scoring script (`score.py` code). - Storage initializer: Logs contain information on whether code and model data were successfully downloaded to the container. The container runs before the inference server container starts to run.
-# [Azure CLI](#tab/cli)
+### [Azure CLI](#tab/cli)
To see log output from a container, use the following CLI command:
You can also get logs from the storage initializer container by passing `--con
Add `--help` and/or `--debug` to commands to see more information.
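For reference, a hedged sketch of pulling logs from each container type; the endpoint and deployment names are placeholders.

```azurecli
# Logs from the inference server container (the default).
az ml online-deployment get-logs --endpoint-name <endpoint-name> --name <deployment-name> --lines 100

# Logs from the storage initializer container.
az ml online-deployment get-logs --endpoint-name <endpoint-name> --name <deployment-name> \
  --container storage-initializer
```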
-# [Python SDK](#tab/python)
+### [Python SDK](#tab/python)
To see log output from container, use the `get_logs` method as follows:
ml_client.online_deployments.get_logs(
) ```
-# [Studio](#tab/studio)
+### [Studio](#tab/studio)
To see log output from a container, use the **Endpoints** in the studio:
There are two supported tracing headers:
The following list is of common deployment errors that are reported as part of the deployment operation status: * [ImageBuildFailure](#error-imagebuildfailure)
+ * [Azure Container Registry (ACR) authorization failure](#container-registry-authorization-failure)
+ * [Image build compute not set in a private workspace with VNet](#image-build-compute-not-set-in-a-private-workspace-with-vnet)
+ * [Generic or unknown failure](#generic-image-build-failure)
* [OutOfQuota](#error-outofquota)
+ * [CPU](#cpu-quota)
+ * [Cluster](#cluster-quota)
+ * [Disk](#disk-quota)
+ * [Memory](#memory-quota)
+ * [Role assignments](#role-assignment-quota)
+ * [Endpoints](#endpoint-quota)
+ * [Region-wide VM capacity](#region-wide-vm-capacity)
+ * [Other](#other-quota)
* [BadArgument](#error-badargument)
+ * Common to both managed online endpoint and Kubernetes online endpoint
+ * [Subscription doesn't exist](#subscription-does-not-exist)
+ * [Startup task failed due to authorization error](#authorization-error)
+ * [Startup task failed due to incorrect role assignments on resource](#authorization-error)
+ * [Invalid template function specification](#invalid-template-function-specification)
+ * [Unable to download user container image](#unable-to-download-user-container-image)
+ * [Unable to download user model](#unable-to-download-user-model)
+ * Errors limited to Kubernetes online endpoint
+ * [Resource request was greater than limits](#resource-requests-greater-than-limits)
+ * [azureml-fe for kubernetes online endpoint isn't ready](#azureml-fe-not-ready)
* [ResourceNotReady](#error-resourcenotready) * [ResourceNotFound](#error-resourcenotfound)
+ * [Azure Resource Manager can't find a required resource](#resource-manager-cannot-find-a-resource)
+ * [Azure Container Registry is private or otherwise inaccessible](#container-registry-authorization-error)
* [OperationCanceled](#error-operationcanceled)
+ * [Operation was canceled by another operation that has a higher priority](#operation-canceled-by-another-higher-priority-operation)
+ * [Operation was canceled due to a previous operation waiting for lock confirmation](#operation-canceled-waiting-for-lock-confirmation)
+* [SecretsInjectionError](#error-secretsinjectionerror)
+* [InternalServerError](#error-internalservererror)
If you're creating or updating a Kubernetes online deployment, you can see [Common errors specific to Kubernetes deployments](#common-errors-specific-to-kubernetes-deployments).
If your container couldn't start, it means scoring couldn't happen. It might be
To get the exact reason for an error, run:
-#### [Azure CLI](#tab/cli)
+##### [Azure CLI](#tab/cli)
```azurecli az ml online-deployment get-logs -e <endpoint-name> -n <deployment-name> -l 100 ```
-#### [Python SDK](#tab/python)
+##### [Python SDK](#tab/python)
```python ml_client.online_deployments.get_logs(
ml_client.online_deployments.get_logs(
) ```
-#### [Studio](#tab/studio)
+##### [Studio](#tab/studio)
Use the **Endpoints** in the studio:
It's possible that the user's model can't be found. Check [container logs](#get-
Make sure whether you have registered the model to the same workspace as the deployment. To show details for a model in a workspace:
-#### [Azure CLI](#tab/cli)
+##### [Azure CLI](#tab/cli)
```azurecli az ml model show --name <model-name> --version <version> ```
-#### [Python SDK](#tab/python)
+##### [Python SDK](#tab/python)
```python ml_client.models.get(name="<model-name>", version=<version>) ```
-#### [Studio](#tab/studio)
+##### [Studio](#tab/studio)
See the **Models** page in the studio:
You can also check if the blobs are present in the workspace storage account.
- If the blob is present, you can use this command to obtain the logs from the storage initializer:
- #### [Azure CLI](#tab/cli)
+ ##### [Azure CLI](#tab/cli)
 ```azurecli az ml online-deployment get-logs --endpoint-name <endpoint-name> --name <deployment-name> --container storage-initializer ```
- #### [Python SDK](#tab/python)
+ ##### [Python SDK](#tab/python)
```python ml_client.online_deployments.get_logs(
You can also check if the blobs are present in the workspace storage account.
) ```
- #### [Studio](#tab/studio)
+ ##### [Studio](#tab/studio)
You can't see logs from the storage initializer in the studio. Use the Azure CLI or Python SDK (see each tab for details).
Azure operations have a brief waiting period after being submitted during which
Retrying the operation after waiting several seconds up to a minute might allow it to be performed without cancellation.
+### ERROR: SecretsInjectionError
+
+Secret retrieval and injection during online deployment creation uses the identity associated with the online endpoint to retrieve secrets from the workspace connections and/or key vaults. This error happens when:
+
+- The endpoint identity doesn't have the Azure RBAC permission to read the secrets from the workspace connections and/or key vaults, even though the deployment definition specified the secrets as references (mapped to environment variables). Remember that role assignment changes can take time to take effect.
+- The format of the secret references is invalid, or the specified secrets don't exist in the workspace connections and/or key vaults.
+
+For more information, see [Secret injection in online endpoints (preview)](concept-secret-injection.md) and [Access secrets from online deployment using secret injection (preview)](how-to-deploy-online-endpoint-with-secret-injection.md).
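As an illustration of the first cause, a hedged sketch that grants the endpoint's system-assigned identity read access to secrets in an Azure RBAC-enabled key vault. The resource names and IDs are placeholders; workspace connection secrets need a comparable role assignment on the workspace (see the secret injection articles for the exact role).

```azurecli
# Get the principal ID of the endpoint's system-assigned identity (names are placeholders).
ENDPOINT_PRINCIPAL_ID=$(az ml online-endpoint show --name <endpoint-name> \
  --resource-group <resource-group> --workspace-name <workspace-name> \
  --query identity.principal_id -o tsv)

# Grant read access to secrets in an RBAC-enabled key vault.
az role assignment create --assignee-object-id "$ENDPOINT_PRINCIPAL_ID" \
  --assignee-principal-type ServicePrincipal \
  --role "Key Vault Secrets User" \
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<key-vault-name>
```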
+ ### ERROR: InternalServerError Although we do our best to provide a stable and reliable service, sometimes things don't go according to plan. If you get this error, it means that something isn't right on our side, and we need to fix it. Submit a [customer support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) with all related information and we can address the issue.
managed-grafana Known Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/known-limitations.md
The following quotas apply to the Essential (preview) and Standard plans.
| Limit | Description | Essential | Standard | |--|--|--|--|
-| Alert rules | Maximum number of alert rules that can be created | Not supported | 100 per instance |
+| Alert rules | Maximum number of alert rules that can be created | Not supported | 500 per instance |
| Dashboards | Maximum number of dashboards that can be created | 20 per instance | Unlimited | | Data sources | Maximum number of datasources that can be created | 5 per instance | Unlimited | | API keys | Maximum number of API keys that can be created | 2 per instance | 100 per instance |
openshift Azure Redhat Openshift Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/azure-redhat-openshift-release-notes.md
Azure Red Hat OpenShift receives improvements on an ongoing basis. To stay up to
## Version 4.13 - December 2023
-We're pleased to announce the launch of OpenShift 4.13 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.13](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html). Version 4.11 will be outside of support after January 15th, 2024. Existing clusters version 4.11 and below should be upgraded before then.
+We're pleased to announce the launch of OpenShift 4.13 for Azure Red Hat OpenShift. This release enables [OpenShift Container Platform 4.13](https://docs.openshift.com/container-platform/4.13/release_notes/ocp-4-13-release-notes.html). Version 4.11 will be outside of support after January 26, 2024. Existing clusters version 4.11 and below should be upgraded before then.
## Update - September 2023
openshift Support Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-lifecycle.md
See the following guide for the [past Red Hat OpenShift Container Platform (upst
|4.8|July 2021| Sept 15 2021|4.10 GA| |4.9|November 2021| February 1 2022|4.11 GA| |4.10|March 2022| June 21 2022|4.12 GA|
-|4.11|August 2022| March 2 2023|4.13 GA|
+|4.11|August 2022| March 2 2023|January 26 2024|
|4.12|January 2023| August 19 2023|October 19 2024| |4.13|May 2023| December 15 2023|February 15 2025|
operator-insights Business Continuity Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/business-continuity-disaster-recovery.md
+
+ Title: Business Continuity and Disaster recovery (BCDR) for Azure Operator Insights
+description: This article helps you understand BCDR concepts for Azure Operator Insights.
+++++ Last updated : 11/27/2023++
+# Business continuity and disaster recovery
+
+Disasters can be hardware failures, natural disasters, or software failures. The process of preparing for and recovering from a disaster is called disaster recovery (DR). This article discusses recommended practices to achieve business continuity and disaster recovery (BCDR) for Azure Operator Insights.
+
+BCDR strategies include availability zone redundancy and user-managed recovery.
+
+## Control plane
+
+The Azure Operator Insights control plane is resilient both to software errors and failure of an Availability Zone. The ability to create and manage Data Products isn't affected by these failure modes.
+
+The control plane isn't regionally redundant. During an outage in an Azure region, you can't create new Data Products in that region or access/manage existing ones. Once the region recovers from the outage, you can access and manage existing Data Products again.
+
+## Data plane
+
+Data Products are resilient to software or hardware failures. For example, if a software bug causes the service to crash, or a hardware failure causes the compute resources for enrichment queries to be lost, the service recovers automatically. The only impact is a slight delay before newly ingested data becomes available in the Data Product's storage endpoint and in the KQL consumption URL.
+
+### Zone redundancy
+
+Data Products don't support zone redundancy. When an availability zone fails, the Data Product's ingestion, blob/DFS and KQL/SQL APIs are all unavailable, and dashboards don't work. Transformation of already-ingested data is paused. No previously ingested data is lost. Processing resumes when the availability zone recovers.
+
+What happens to data that was generated during the availability zone outage depends on the behavior of the ingestion agent:
+
+* If the ingestion agent buffers data and resends it when the availability zone recovers, data isn't lost. Azure Operator Insights might take some time to work through its transformation backlog.
+* Otherwise, data is lost.
+
+### Disaster recovery
+
+Azure Operator Insights has no innate region redundancy. Regional outages affect Data Products in the same way as [availability zone failures](#zone-redundancy). We have recommendations and features to support customers that want to be able to handle failure of an entire Azure region.
+
+#### User-managed redundancy
+
+For maximal redundancy, you can deploy Data Products in an active-active mode. Deploy a second Data Product in a backup Azure region of your choice, and configure your ingestion agents to fork data to both Data Products simultaneously. The backup data product is unaffected by the failure of the primary region. During a regional outage, look at dashboards that use the backup Data Product as the data source. This architecture doubles the cost of the solution.
+
+Alternatively, you could use an active-passive mode. Deploy a second Data Product in a backup Azure region, and configure your ingestion agents to send data to the primary Data Product. During a regional outage, reconfigure your ingestion agents to send data to the backup Data Product instead. This architecture gives full access to data created during the outage (starting from the time when you reconfigure the ingestion agents), but during the outage you don't have access to data ingested before that time. This architecture requires a small infrastructure charge for the second Data Product, but no additional data processing charges.
operator-insights Consumption Plane Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/consumption-plane-configure-permissions.md
+
+ Title: Manage permissions for Azure Operator Insights consumption plane
+description: This article helps you configure consumption URI permissions for Azure Operator Insights.
+++++ Last updated : 1/06/2024++
+# Manage permissions to the consumption URL
+
+Azure Operator Insights enables you to control access to the consumption URL of each Data Product based on email addresses or distribution lists. Use the following steps to configure read-only access to the consumption URL.
+
+Azure Operator Insights currently supports a single role that gives Read access to all tables and columns on the consumption URL.
+
+## Add user access
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to your Azure Operator Insights Data Product resource.
+1. In the left-hand menu under **Security**, select **Permissions**.
+1. Select **Add Reader** to add a new user.
+1. Type in the user's email address or distribution list and select **Add Reader(s)**.
+1. Wait for about 30 seconds, then refresh the page to view your changes.
+
+## Remove user access
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to your Azure Operator Insights Data Product resource.
+1. In the left-hand menu under **Security**, select **Permissions**.
+1. Select the **Delete** symbol next to the user who you want to remove.
+ > [!NOTE]
+ > There is no confirmation dialog box, so be careful when deleting users.
+1. Wait for about 30 seconds, then refresh the page to view your changes.
operator-insights Data Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/data-query.md
Last updated 10/22/2023
-#CustomerIntent: As a consumer of the Data Product, I want to query data that has been collected so that I can visualise the data and gain customised insights.
+#CustomerIntent: As a consumer of the Data Product, I want to query data that has been collected so that I can visualize the data and gain customized insights.
# Query data in the Data Product
The Azure Operator Insights Data Product stores enriched and processed data, whi
## Prerequisites
-A deployed Data Product, see [Create an Azure Operator Insights Data Product](data-product-create.md).
+- A deployed Data Product: see [Create an Azure Operator Insights Data Product](data-product-create.md).
+- The `Reader` role for the data for this Data Product, because access to the data is controlled by role-based access control (RBAC).
+ - To check your access, sign in to the [Azure portal](https://portal.azure.com), go to the Data Product resource and open the **Permissions** pane. You must have the `Reader` role.
+ - If you don't have this role, ask an owner of the resource to give you `Reader` permissions by following [Manage permissions to the consumption URL](consumption-plane-configure-permissions.md).
-## Get access to the ADX cluster
+## Add the consumption URL in Azure Data Explorer
-Access to the data is controlled by role-based access control (RBAC).
-
-1. In the Azure portal, select the Data Product resource and open the Permissions pane. You must have the `Reader` role. If you do not, contact an owner of the resource to grant you `Reader` permissions.
-1. In the Overview pane, copy the Consumption URL.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Go to your Azure Operator Insights Data Product resource.
+1. In the **Overview** pane, copy the Consumption URL.
1. Open the [Azure Data Explorer web UI](https://dataexplorer.azure.com/) and select **Add** > **Connection**. 1. Paste your Consumption URL in the connection box and select **Add**.
postgresql How To Manage Virtual Network Private Endpoint Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-private-endpoint-portal.md
Title: Manage virtual networks - Azure portal with Private Link- Azure Database for PostgreSQL - Flexible Server
-description: Create and manage virtual networks for Azure Database with Private Link for PostgreSQL - Flexible Server using the Azure portal
+ Title: Manage virtual networks - Azure portal with Private Link - Azure Database for PostgreSQL - Flexible Server
+description: Learn how to create a PostgreSQL server with public access by using the Azure portal, and how to add private networking to the server based on Azure Private Link.
Last updated 10/23/2023
-# Create and manage virtual networks with Private Link for Azure Database for PostgreSQL - Flexible Server using the Azure portal
+# Create and manage virtual networks with Private Link for Azure Database for PostgreSQL - Flexible Server by using the Azure portal
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server. The two options are:
+Azure Database for PostgreSQL - Flexible Server supports two types of mutually exclusive network connectivity methods to connect to your flexible server:
-* Public access (allowed IP addresses). That method can be further secured by using [Private Link](./concepts-networking-private-link.md) based networking with Azure Database for PostgreSQL - Flexible Server in Preview.
-* Private access (VNet Integration)
+* Public access through allowed IP addresses. You can further secure that method by using [Azure Private Link](./concepts-networking-private-link.md)-based networking with Azure Database for PostgreSQL - Flexible Server. The feature is in preview.
+* Private access through virtual network integration.
-In this article, we'll focus on creation of PostgreSQL server with **Public access (allowed IP addresses)** using Azure portal and securing it **adding private networking to the server based on [Private Link](./concepts-networking-private-link.md) technology**. **[Azure Private Link](../../private-link/private-link-overview.md)** enables you to access Azure PaaS Services, such as [Azure Database for PostgreSQL - Flexible Server](./concepts-networking-private-link.md) , and Azure hosted customer-owned/partner services over a **Private Endpoint** in your virtual network. **Traffic between your virtual network and the service traverses over the Microsoft backbone network, eliminating exposure from the public Internet**.
+This article focuses on creating a PostgreSQL server with public access (allowed IP addresses) by using the Azure portal. You can then help secure the server by adding private networking based on Private Link technology.
-> [!NOTE]
-> Azure Database for PostgreSQL - Flexible Server supports Private Link based networking in Preview.
+You can use [Private Link](../../private-link/private-link-overview.md) to access the following services over a private endpoint in your virtual network:
+
+* Azure platform as a service (PaaS) services, such as Azure Database for PostgreSQL - Flexible Server
+* Customer-owned or partner services that are hosted in Azure
+
+Traffic between your virtual network and a service traverses the Microsoft backbone network, which eliminates exposure to the public internet.
## Prerequisites
-To add a flexible server to the virtual network using Private Link, you need:
-- A [Virtual Network](../../virtual-network/quick-create-portal.md#create-a-virtual-network). The virtual network and subnet should be in the same region and subscription as your flexible server. The virtual network shouldn't have any resource lock set at the virtual network or subnet level, as locks might interfere with operations on the network and DNS. Make sure to remove any lock (**Delete** or **Read only**) from your virtual network and all subnets before adding server to a virtual network, and you can set it back after server creation.-- Register [**PostgreSQL Private Endpoint capability** preview feature in your subscription](../../azure-resource-manager/management/preview-features.md).
+To add a flexible server to a virtual network by using Private Link, you need:
+
+* A [virtual network](../../virtual-network/quick-create-portal.md#create-a-virtual-network). The virtual network and subnet should be in the same region and subscription as your flexible server.
-## Create an Azure Database for PostgreSQL - Flexible Server with Private Endpoint
+ Be sure to remove any locks (**Delete** or **Read only**) from your virtual network and all subnets before you add a server to the virtual network, because locks might interfere with operations on the network and DNS. You can reset the locks after server creation.
+* Registration of the [PostgreSQL private endpoint preview feature in your subscription](../../azure-resource-manager/management/preview-features.md).
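A hedged CLI sketch of the preview-feature registration step; the feature flag name is a placeholder to look up on the preview features page, not an exact value.

```azurecli
# Register the preview feature flag for the PostgreSQL private endpoint capability
# (<feature-name> is a placeholder; look up the exact flag for your subscription).
az feature register --namespace Microsoft.DBforPostgreSQL --name <feature-name>

# Check the registration state, then re-register the resource provider once it reports "Registered".
az feature show --namespace Microsoft.DBforPostgreSQL --name <feature-name> --query properties.state
az provider register --namespace Microsoft.DBforPostgreSQL
```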
+
+## Create an Azure Database for PostgreSQL - Flexible Server instance with a private endpoint
To create an Azure Database for PostgreSQL server, take the following steps:
-1. Select Create a resource **(+)** in the upper-left corner of the portal.
+1. In the upper-left corner of the Azure portal, select **Create a resource** (the plus sign).
-2. Select **Databases > Azure Database for PostgreSQL**.
+2. Select **Databases** > **Azure Database for PostgreSQL**.
3. Select the **Flexible server** deployment option.
-4. Fill out the Basics form with the pertinent information. tHis includes Azure subscription, resource group, Azure region location, server name, server administrative credentials.
-
-| **Setting** | **Value**|
-|||
-|Subscription| Select your **Azure subscription**|
-|Resource group| Select your **Azure resource group**|
-|Server name| Enter **unique server name**|
-|Admin username |Enter an **administrator name** of your choosing|
-|Password|Enter a **password** of your choosing. The password must be at least eight characters long and meet the defined requirements|
-|Location|Select an **Azure region** where you want to want your PostgreSQL Server to reside, example West Europe|
-|Version|Select the **database version** of the PostgreSQL server that is required|
-|Compute + Storage|Select the **pricing tier** that is needed for the server based on the workload|
-
-5. Select **Next:Networking**
-6. Choose **"Public access (allowed IP addresses) and Private endpoint"** checkbox checked as Connectivity method.
-7. Select **"Add Private Endpoint"** in Private Endpoint section
- :::image type="content" source="./media/how-to-manage-virtual-network-private-endpoint-portal/private-endpoint-selection.png" alt-text="Screenshot of Add Private Endpoint button in Private Endpoint Section in Networking blade of Azure Portal" :::
-8. In Create Private Endpoint Screen enter following:
-
-| **Setting** | **Value**|
-|||
-|Subscription| Select your **subscription**|
-|Resource group| Select **resource group** you picked previously|
-|Location|Select an **Azure region where you created your VNET**, example West Europe|
-|Name|Name of Private Endpoint|
-|Target subresource|**postgresqlServer**|
-|NETWORKING|
-|Virtual Network| Enter **VNET name** for Azure virtual network created previously |
-|Subnet|Enter **Subnet name** for Azure Subnet you created previously|
-|PRIVATE DNS INTEGRATION]
-|Integrate with Private DNS Zone| **Yes**|
-|Private DNS Zone| Pick **(New)privatelink.postgresql.database.azure.com**. This creates new private DNS zone.|
+4. Fill out the **Basics** form with the following information:
+
+ |Setting |Value|
+ |||
+ |**Subscription**| Select your Azure subscription.|
+ |**Resource group**| Select your Azure resource group.|
+ |**Server name**| Enter a unique server name.|
+ |**Admin username** |Enter an administrator name of your choosing.|
+ |**Password**|Enter a password of your choosing. The password must have at least eight characters and meet the defined requirements.|
 |**Location**|Select an Azure region where you want your PostgreSQL server to reside.|
+ |**Version**|Select the required database version of the PostgreSQL server.|
+ |**Compute + Storage**|Select the pricing tier that you need for the server, based on the workload.|
+
+5. Select **Next: Networking**.
+
+6. For **Connectivity method**, select the **Public access (allowed IP addresses) and private endpoint** checkbox.
+
+7. In the **Private Endpoint (preview)** section, select **Add private endpoint**.
+
+ :::image type="content" source="./media/how-to-manage-virtual-network-private-endpoint-portal/private-endpoint-selection.png" alt-text="Screenshot of the button for adding a private endpoint button on the Networking pane in the Azure portal." :::
+8. On the **Create Private Endpoint** pane, enter the following values:
+
+ |Setting|Value|
+ |||
+ |**Subscription**| Select your subscription.|
+ |**Resource group**| Select the resource group that you chose previously.|
+ |**Location**|Select an Azure region where you created your virtual network.|
+ |**Name**|Enter a name for the private endpoint.|
+ |**Target subresource**|Select **postgresqlServer**.|
+ |**NETWORKING**|
+ |**Virtual Network**| Enter a name for the Azure virtual network that you created previously. |
+ |**Subnet**|Enter the name of the Azure subnet that you created previously.|
+ |**PRIVATE DNS INTEGRATION**|
+ |**Integrate with Private DNS Zone**| Select **Yes**.|
+ |**Private DNS Zone**| Select **(New)privatelink.postgresql.database.azure.com**. This setting creates a new private DNS zone.|
9. Select **OK**.
-10. Select **Review + create**. You're taken to the **Review + create** page where Azure validates your configuration.
-11. Networking section of the **Review + Create** page will list your Private Endpoint information.
-12. When you see the Validation passed message, select **Create**.
-
-### Approval Process for Private Endpoint
-
-With separation of duties, common in many enterprises today, creation of cloud networking infrastructure, such as Azure Private Link services, are done by network administrator, whereas database servers are commonly created and managed by database administrator (DBA).
-Once the network administrator creates the private endpoint (PE), the PostgreSQL database administrator (DBA) can manage the **Private Endpoint Connection (PEC)** to Azure Database for PostgreSQL.
-1. Navigate to the Azure Database for PostgreSQL - Flexible Server resource in the Azure portal.
- - Select **Networking** in the left pane.
- - Shows a list of all **Private Endpoint Connections (PECs)**.
- - Corresponding **Private Endpoint (PE)** created.
- - Select an individual **PEC** from the list by selecting it.
- - The PostgreSQL server admin can choose to **approve** or **reject a PEC** and optionally add a short text response.
- - After approval or rejection, the list will reflect the appropriate state along with the response text.
+
+10. Select **Review + create**.
+
+11. On the **Review + create** tab, Azure validates your configuration. The **Networking** section lists information about your private endpoint.
+
+ When you see the message that your configuration passed validation, select **Create**.
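If you prefer scripting over the portal, a hedged sketch of creating an equivalent private endpoint with the Azure CLI; all names are placeholders, and the `postgresqlServer` group ID matches the target subresource shown in the preceding table.

```azurecli
# Look up the flexible server's resource ID (names are placeholders).
SERVER_ID=$(az postgres flexible-server show --resource-group <resource-group> \
  --name <server-name> --query id -o tsv)

# Create a private endpoint in your virtual network that targets the postgresqlServer subresource.
az network private-endpoint create --resource-group <resource-group> \
  --name <private-endpoint-name> \
  --vnet-name <vnet-name> --subnet <subnet-name> \
  --private-connection-resource-id "$SERVER_ID" \
  --group-id postgresqlServer \
  --connection-name <connection-name>
```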
+
+### Approval process for a private endpoint
+
+A separation of duties is common in many enterprises today:
+
+* A network administrator creates the cloud networking infrastructure, such as Azure Private Link services.
+* A database administrator (DBA) creates and manages database servers.
+
+After a network administrator creates a private endpoint, the PostgreSQL DBA can manage the private endpoint connection to Azure Database for PostgreSQL. The DBA uses the following approval process for a private endpoint connection:
+
+1. In the Azure portal, go to the Azure Database for PostgreSQL - Flexible Server resource.
+
+1. On the left pane, select **Networking**.
+
+1. A list of all private endpoint connections appears, along with corresponding private endpoints. Select a private endpoint connection from the list.
+
+1. Select **Approve** or **Reject**, and optionally add a short text response.
+
+ After approval or rejection, the list reflects the appropriate state, along with the response text.
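The same approval can be scripted. A hedged sketch using the generic private endpoint connection commands, with placeholder names and IDs.

```azurecli
# List private endpoint connections on the flexible server (names are placeholders).
az network private-endpoint-connection list \
  --id $(az postgres flexible-server show --resource-group <resource-group> --name <server-name> --query id -o tsv)

# Approve a specific connection by its resource ID (use 'reject' to decline it).
az network private-endpoint-connection approve --id <private-endpoint-connection-id> \
  --description "Approved by the DBA"
```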
## Next steps-- Learn more about [networking in Azure Database for PostgreSQL - Flexible Server using Private Link](./concepts-networking-private-link.md).-- Understand more about [Azure Database for PostgreSQL - Flexible Server virtual network using VNET Integration](./concepts-networking-private.md).+
+* Learn more about [networking in Azure Database for PostgreSQL - Flexible Server with Private Link](./concepts-networking-private-link.md).
+* Understand more about [virtual network integration in Azure Database for PostgreSQL - Flexible Server](./concepts-networking-private.md).
postgresql How To Server Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-cli.md
az postgres flexible-server server-logs download --resource-group <myresourcegro
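A hedged sketch of the full list-and-download flow for server log files; the resource names and the log file name are placeholders.

```azurecli
# List the server log files that are available for download (names are placeholders).
az postgres flexible-server server-logs list --resource-group <resource-group> \
  --server-name <server-name> --output table

# Download a specific log file by name.
az postgres flexible-server server-logs download --resource-group <resource-group> \
  --server-name <server-name> --name <log-file-name>
```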
## Next steps - To enable and disable Server logs from portal, you can refer to the [article.](./how-to-server-logs-portal.md)-- Learn more about [Logging](./concepts-logging.md)
+- Learn more about [Logging](./concepts-logging.md)
postgresql How To Server Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-portal.md
Previously updated : 1/2/2024 Last updated : 1/10/2024 # Enable, list and download server logs for Azure Database for PostgreSQL - Flexible Server
To download server logs, perform the following steps.
:::image type="content" source="./media/how-to-server-logs-portal/5-how-to-server-log.png" alt-text="Screenshot showing server Logs - Disable.":::
-3. Select Save.
+3. Select Save.
+
+## Next steps
+- To enable and disable server logs from the CLI, see [this article](./how-to-server-logs-cli.md).
+- Learn more about [Logging](./concepts-logging.md)
postgresql Howto Connect To Data Factory Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-connect-to-data-factory-private-endpoint.md
Title: Connect to Azure Data Factory privately networked pipeline with Azure Database for PostgreSQL - Flexible Server using Azure Private Link
-description: This article describes how to connect to Azure Data Factory privately networked pipeline with Azure Database for PostgreSQL - Flexible Server using Azure Private Link
+ Title: Connect to an Azure Data Factory privately networked pipeline with Azure Database for PostgreSQL - Flexible Server by using Azure Private Link
+description: This article describes how to connect Azure Database for PostgreSQL - Flexible Server to an Azure-hosted Data Factory pipeline via Private Link.
-# Connect to Azure Data Factory privately networked pipeline with Azure Database for PostgreSQL - Flexible Server using Azure Private Link
+# Connect to an Azure Data Factory privately networked pipeline with Azure Database for PostgreSQL - Flexible Server by using Azure Private Link
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-In this quickstart, you connect your Azure Database for PostgreSQL - Flexible Server to Azure hosted Data Factory pipeline via Private Link.
+In this article, you connect Azure Database for PostgreSQL - Flexible Server to an Azure Data Factory pipeline via Azure Private Link.
-## Azure hosted Data Factory with private (VNET) networking
+[Azure Data Factory](../../data-factory/introduction.md) is a fully managed, serverless solution to ingest and transform data. An Azure [integration runtime](../../data-factory/concepts-integration-runtime.md#azure-integration-runtime) supports connecting to data stores and compute services with publicly accessible endpoints. When you enable a managed virtual network, an integration runtime supports connecting to data stores by using the Azure Private Link service in a private network environment.
-**[Azure Data Factory](../../data-factory/introduction.md)** is a fully managed, easy-to-use, serverless data integration, and transformation solution to ingest and transform all your data. Data Factory offers three types of Integration Runtime (IR), and you should choose the type that best serves your data integration capabilities and network environment requirements. The three types of IR are:
+Data Factory offers three types of integration runtimes:
- Azure - Self-hosted-- Azure-SSIS
+- Azure-SQL Server Integration Services (Azure-SSIS)
-**[Azure Integration Runtime](../../data-factory/concepts-integration-runtime.md#azure-integration-runtime)** supports connecting to data stores and computes services with public accessible endpoints. Enabling Managed Virtual Network, Azure Integration Runtime supports connecting to data stores using private link service in private network environment. [Azure PostgreSQL - Flexible Server provides for private link connectivity in preview](../flexible-server/concepts-networking-private-link.md).
+Choose the type that best serves your data integration capabilities and network environment requirements.
+
+Azure Database for PostgreSQL - Flexible Server provides Private Link connectivity in preview. For more information, see [this article](../flexible-server/concepts-networking-private-link.md).
## Prerequisites -- An Azure Database for PostgreSQL - Flexible Server [privately networked via Azure Private Link](../flexible-server/concepts-networking-private-link.md).-- An Azure integration runtime within a [data factory managed virtual network](../../data-factory/data-factory-private-link.md)
+- An Azure Database for PostgreSQL - Flexible Server instance that's [privately networked via Azure Private Link](../flexible-server/concepts-networking-private-link.md)
+- An Azure integration runtime within a [Data Factory managed virtual network](../../data-factory/data-factory-private-link.md)
+
+## Create a private endpoint in Data Factory
-## Create Private Endpoint in Azure Data Factory
+The Azure Database for PostgreSQL connector currently supports *public connectivity only*. When you use this connector in Azure Data Factory, you might get an error when you try to connect to a privately networked instance of Azure Database for PostgreSQL - Flexible Server.
-Unfortunately at this time using Azure Database for PostgreSQL connector in ADF you might get an error when trying to connect to privately networked Azure Database for PostgreSQL - Flexible Server, as connector supports **public connectivity only**.
-To work around this limitation, we can use Azure CLI to create a private endpoint first and then use the Data Factory user interface with Azure Database for PostgreSQL connector to create connection between privately networked Azure Database for PostgreSQL - Flexible Server and Azure Data Factory in managed virtual network.
-Example below creates private endpoint in Azure data factory, you substitute with your own values placeholders for *subscription_id, resource_group_name, azure_data_factory_name,endpoint_name,flexible_server_name*:
+To work around this limitation, you can use the Azure CLI to create a private endpoint first. Then you can use the Data Factory user interface with the Azure Database for PostgreSQL connector to create a connection between privately networked Azure Database for PostgreSQL - Flexible Server and Azure Data Factory in a managed virtual network.
+
+The following example creates a private endpoint in Azure Data Factory. Substitute the placeholders *subscription_id*, *resource_group_name*, *azure_data_factory_name*, *endpoint_name*, and *flexible_server_name* with your own values.
```azurecli az resource create --id /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.DataFactory/factories/<azure_data_factory_name>/managedVirtualNetworks/default/managedPrivateEndpoints/<endpoint_name> --properties '
az resource create --id /subscriptions/<subscription_id>/resourceGroups/<resourc
``` > [!NOTE]
-> Alternative command to create private endpoint in data factory using Azure CLI is [az datafactory managed-private-endpoint create](/cli/azure/datafactory/managed-private-endpoint)
+> An alternative command to create a private endpoint in Data Factory by using the Azure CLI is [az datafactory managed-private-endpoint create](/cli/azure/datafactory/managed-private-endpoint).
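A hedged sketch of that alternative command; all values are placeholders, and `default` is assumed to be the name of the data factory's managed virtual network.

```azurecli
# Create a managed private endpoint in the data factory's managed virtual network
# (placeholders throughout; parameter names are as documented for the datafactory extension).
az datafactory managed-private-endpoint create \
  --factory-name <azure_data_factory_name> \
  --resource-group <resource_group_name> \
  --managed-virtual-network-name default \
  --name <endpoint_name> \
  --group-id postgresqlServer \
  --private-link-resource-id /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.DBforPostgreSQL/flexibleServers/<flexible_server_name>
```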
+
+After you successfully run the preceding command, you can view the private endpoint in the Azure portal by going to **Data Factory** > **Managed private endpoints**. The following screenshot shows an example.
-After above command is successfully executed you should ne able to view private endpoint in Managed Private Endpoints blade in Data Factory Azure portal interface, as shown in the following example:
+## Approve a private endpoint
-## Approve Private Endpoint
+After you provision a private endpoint, you can approve it by following the **Manage approvals in Azure portal** link in the endpoint details. It takes several minutes for Data Factory to discover that the private endpoint is approved.
-Once the private endpoint is provisioned, we can follow the "Manage approvals In Azure portal" link in the Private Endpoint details screen to approve the private endpoint. It takes several minutes for ADF to discover that it's now approved.
+## Add a networked server data source in Data Factory
-## Add PostgreSQL Flexible Server networked server data source in data factory.
+After you provision and approve a private endpoint, you can create a connection to Azure Database for PostgreSQL - Flexible Server by using a Data Factory connector.
-When both provisioning succeeded and the endpoint are approved, we can finally create connection to PGFlex using "Azure Database for PostgreSQL" ADF connector.
+In the previous steps, when you selected the server for which you created the private endpoint, the private endpoint was also selected automatically.
-1. After following previous steps, when selecting the server for which we created the private endpoint, the private endpoint gets selected automatically as well.
+1. Select a database, enter a username and password, and select **SSL** as the encryption method. The following screenshot shows an example.
-1. Next, select database, enter username/password and be sure to select "SSL" as encryption method, as shown in the following example:
:::image type="content" source="./media/howto-connect-to-data-factory-private-endpoint/data-factory-data-source-connection.png" alt-text="Example screenshot of connection properties." lightbox="./media/howto-connect-to-data-factory-private-endpoint/data-factory-data-source-connection.png":::
-1. Select test connection. You should see "Connection Successful" message next to test connection button.
+1. Select **Test connection**. A "Connection Successful" message should appear.
-## Next step
+## Next step
> [!div class="nextstepaction"]
-> [Networking with private link in Azure Database for PostgreSQL - Flexible Server](concepts-networking-private-link.md)
+> [Networking with Private Link in Azure Database for PostgreSQL - Flexible Server](concepts-networking-private-link.md)
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Cassandra | privatelink.cassandra.cosmos.azure.com | cassandra.cosmos.azure.com | >| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Gremlin | privatelink.gremlin.cosmos.azure.com | gremlin.cosmos.azure.com | >| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Table | privatelink.table.cosmos.azure.com | table.cosmos.azure.com |
+>| Azure Cosmos DB (Microsoft.DocumentDB/databaseAccounts) | Analytical | privatelink.analytics.cosmos.azure.com | analytics.cosmos.azure.com |
>| Azure Cosmos DB (Microsoft.DBforPostgreSQL/serverGroupsv2) | coordinator | privatelink.postgres.cosmos.azure.com | postgres.cosmos.azure.com | >| Azure Database for PostgreSQL - Single server (Microsoft.DBforPostgreSQL/servers) | postgresqlServer | privatelink.postgres.database.azure.com | postgres.database.azure.com | >| Azure Database for MySQL - Single Server (Microsoft.DBforMySQL/servers) | mysqlServer | privatelink.mysql.database.azure.com | mysql.database.azure.com |
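As an example of how these recommended zone names are used, a hedged sketch that creates the analytical zone and links it to a virtual network; the resource group, virtual network, and link names are placeholders.

```azurecli
# Create the recommended private DNS zone for the Cosmos DB analytical subresource (names are placeholders).
az network private-dns zone create --resource-group <resource-group> \
  --name "privatelink.analytics.cosmos.azure.com"

# Link the zone to the virtual network that hosts the private endpoint.
az network private-dns link vnet create --resource-group <resource-group> \
  --zone-name "privatelink.analytics.cosmos.azure.com" \
  --name <link-name> --virtual-network <vnet-name> --registration-enabled false
```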
search Cognitive Search Aml Skill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-aml-skill.md
- ignite-2023-+ Last updated 12/01/2022
search Cognitive Search Attach Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-attach-cognitive-services.md
- ignite-2023 Previously updated : 05/31/2023 Last updated : 01/11/2024 # Attach an Azure AI multi-service resource to a skillset in Azure AI Search When configuring an optional [AI enrichment pipeline](cognitive-search-concept-intro.md) in Azure AI Search, you can enrich a limited number of documents free of charge. For larger and more frequent workloads, you should attach a billable [**Azure AI multi-service resource**](../ai-services/multi-service-resource.md?pivots=azportal).
-A multi-service resource references a subset of "Azure AI services" as the offering, rather than individual services, with access granted through a single API key. This key is specified in a [**skillset**](/rest/api/searchservice/create-skillset) and allows Microsoft to charge you for using these
+A multi-service resource references a set of Azure AI services as the offering, rather than individual services, with access granted through a single API key. This key is specified in a [**skillset**](/rest/api/searchservice/create-skillset) and allows Microsoft to charge you for using these
+ [Azure AI Vision](../ai-services/computer-vision/overview.md) for image analysis and optical character recognition (OCR) + [Azure AI Language](../ai-services/language-service/overview.md) for language detection, entity recognition, sentiment analysis, and key phrase extraction
SearchIndexerSkillset skillset = CreateOrUpdateDemoSkillSet(indexerClient, skill
## Remove the key
-Enrichments are a billable feature. If you no longer need to call Azure AI services, follow these instructions to remove the multi-region key and prevent use of the external resource. Without the key, the skillset reverts to the default allocation of 20 free transactions per indexer, per day. Execution of billable skills stops at 20 transactions and a "Time Out" message appears in indexer execution history when the allocation is used up.
+Enrichments are billable operations. If you no longer need to call Azure AI services, follow these instructions to remove the multi-region key and prevent use of the external resource. Without the key, the skillset reverts to the default allocation of 20 free transactions per indexer, per day. Execution of billable skills stops at 20 transactions and a "Time Out" message appears in indexer execution history when the allocation is used up.
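For reference, the key lives in the skillset definition's `cognitiveServices` property. A hedged REST sketch follows, with placeholder service, skillset, and key values and a single illustrative OCR skill.

```console
curl -i -X PUT "https://<search-service-name>.search.windows.net/skillsets/<skillset-name>?api-version=2023-11-01" \
-H "Content-Type: application/json" \
-H "api-key: <search-admin-key>" \
-d \
'
{
  "name": "<skillset-name>",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
      "context": "/document/normalized_images/*",
      "inputs": [ { "name": "image", "source": "/document/normalized_images/*" } ],
      "outputs": [ { "name": "text", "targetName": "text" } ]
    }
  ],
  "cognitiveServices": {
    "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
    "key": "<azure-ai-multi-service-key>"
  }
}
'
```

To remove the key, send the same request with `"cognitiveServices": { "@odata.type": "#Microsoft.Azure.Search.DefaultCognitiveServices" }` (or omit the section), which reverts the skillset to the free daily allocation.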
### [**Azure portal**](#tab/portal-remove)
Enrichments are a billable feature. If you no longer need to call Azure AI servi
Key-based billing applies when API calls to Azure AI services resources exceed 20 API calls per indexer, per day.
-The key is used for billing, but not for enrichment operations' connections. For connections, a search service [connects over the internal network](search-security-overview.md#internal-traffic) to an Azure AI services resource that's colocated in the [same physical region](https://azure.microsoft.com/global-infrastructure/services/?products=search). Most regions that offer Azure AI Search also offer other Azure AI services such as Language. If you attempt AI enrichment in a region that doesn't have both services, you'll see this message: "Provided key isn't a valid CognitiveServices type key for the region of your search service."
+The key is used for billing, but not for enrichment operations' connections. For connections, a search service [connects over the internal network](search-security-overview.md#internal-traffic) to an Azure AI services resource that's located in the [same physical region](https://azure.microsoft.com/global-infrastructure/services/?products=search). Most regions that offer Azure AI Search also offer other Azure AI services such as Language. If you attempt AI enrichment in a region that doesn't have both services, you'll see this message: "Provided key isn't a valid CognitiveServices type key for the region of your search service."
Currently, billing for [built-in skills](cognitive-search-predefined-skills.md) requires a public connection from Azure AI Search to another Azure AI service. Disabling public network access breaks billing. If disabling public networks is a requirement, you can configure a [Custom Web API skill](cognitive-search-custom-skill-interface.md) implemented with an [Azure Function](cognitive-search-create-custom-skill-example.md) that supports [private endpoints](../azure-functions/functions-create-vnet.md) and add the [Azure AI services resource to the same VNET](../ai-services/cognitive-services-virtual-networks.md). In this way, you can call Azure AI services resource directly from the custom skill using private endpoints.
Some enrichments are always free:
+ Utility skills that don't call Azure AI services (namely, [Conditional](cognitive-search-skill-conditional.md), [Document Extraction](cognitive-search-skill-document-extraction.md), [Shaper](cognitive-search-skill-shaper.md), [Text Merge](cognitive-search-skill-textmerger.md), and [Text Split skills](cognitive-search-skill-textsplit.md)) aren't billable.
-+ Text extraction from PDF documents and other application files is nonbillable. Text extraction occurs during the [document cracking](search-indexer-overview.md#document-cracking) phase and isn't technically an enrichment, but it occurs during AI enrichment and is thus noted here.
++ Text extraction from PDF documents and other application files is nonbillable. Text extraction, which occurs during the [document cracking](search-indexer-overview.md#document-cracking) phase, isn't an AI enrichment, but it occurs during AI enrichment and is thus noted here. ## Billable enrichments
search Cognitive Search Concept Image Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-image-scenarios.md
Previously updated : 08/29/2022 Last updated : 01/10/2024 - devx-track-csharp - ignite-2023
Image processing is indexer-driven, which means that the raw inputs must be in a
+ Image analysis supports JPEG, PNG, GIF, and BMP + OCR supports JPEG, PNG, BMP, and TIF
-Images are either standalone binary files or embedded in documents (PDF, RTF, and Microsoft application files). A maximum of 1000 images will be extracted from a given document. If there are more than 1000 images in a document, the first 1000 will be extracted and a warning will be generated.
+Images are either standalone binary files or embedded in documents (PDF, RTF, and Microsoft application files). A maximum of 1000 images can be extracted from a given document. If there are more than 1000 images in a document, the first 1000 are extracted and then a warning is generated.
Azure Blob Storage is the most frequently used storage for image processing in Azure AI Search. There are three main tasks related to retrieving images from a blob container:
Azure Blob Storage is the most frequently used storage for image processing in A
## Configure indexers for image processing
-Extracting images from the source content files is the first step of indexer processing. Extracted images are queued for image processing. Extracted text is queued for text processing, if applicable.
+After the source files are set up, enable image normalization by setting the `imageAction` parameter in indexer configuration. Image normalization helps make images more uniform for downstream processing. Image normalization includes the following operations:
-Image processing requires image normalization to make images more uniform for downstream processing. This second step occurs automatically and is internal to indexer processing. As a developer, you enable image normalization by setting the `"imageAction"` parameter in indexer configuration.
-
-Image normalization includes the following operations:
-
-+ Large images are resized to a maximum height and width to make them uniform and consumable during skillset processing.
++ Large images are resized to a maximum height and width to make them uniform. + For images that have metadata on orientation, image rotation is adjusted for vertical loading.
-Metadata adjustments are captured in a complex type created for each image. You cannot turn off image normalization. Skills that iterate over images, such as OCR and image analysis, expect normalized images.
+Metadata adjustments are captured in a complex type created for each image. You can't opt out of the image normalization requirement. Skills that iterate over images, such as OCR and image analysis, expect normalized images.
1. [Create or Update an indexer](/rest/api/searchservice/create-indexer) to set the configuration properties:
Metadata adjustments are captured in a complex type created for each image. You
} ```
-1. Set `"dataToExtract`" to `"contentAndMetadata"` (required).
+1. Set `dataToExtract` to `contentAndMetadata` (required).
-1. Verify that the `"parsingMode`" is set to default (required).
+1. Verify that the `parsingMode` is set to default (required).
This parameter determines the granularity of search documents created in the index. The default mode sets up a one-to-one correspondence so that one blob results in one search document. If documents are large, or if skills require smaller chunks of text, you can add Text Split skill that subdivides a document into paging for processing purposes. But for search scenarios, one blob per document is required if enrichment includes image processing.
-1. Set `"imageAction"` to enable the *normalized_images* node in an enrichment tree (required):
+1. Set `imageAction` to enable the *normalized_images* node in an enrichment tree (required):
- + `"generateNormalizedImages"` to generate an array of normalized images as part of document cracking.
+ + `generateNormalizedImages` to generate an array of normalized images as part of document cracking.
- + `"generateNormalizedImagePerPage"` (applies to PDF only) to generate an array of normalized images where each page in the PDF is rendered to one output image. For non-PDF files, the behavior of this parameter is similar as if you had set "generateNormalizedImages". However, note that setting "generateNormalizedImagePerPage" can make indexing operation less performant by design (especially for big documents) since several images would have to be generated.
+ + `generateNormalizedImagePerPage` (applies to PDF only) to generate an array of normalized images where each page in the PDF is rendered to one output image. For non-PDF files, the behavior of this parameter is the same as if you had set `generateNormalizedImages`. However, setting `generateNormalizedImagePerPage` can make the indexing operation less performant by design (especially for large documents), because several images have to be generated.
1. Optionally, adjust the width or height of the generated normalized images:
- + `"normalizedImageMaxWidth"` (in pixels). Default is 2000. Maximum value is 10000.
+ + `normalizedImageMaxWidth` (in pixels). Default is 2000. Maximum value is 10000.
- + `"normalizedImageMaxHeight"` (in pixels). Default is 2000. Maximum value is 10000.
+ + `normalizedImageMaxHeight` (in pixels). Default is 2000. Maximum value is 10000.
The default of 2000 pixels for the normalized images maximum width and height is based on the maximum sizes supported by the [OCR skill](cognitive-search-skill-ocr.md) and the [image analysis skill](cognitive-search-skill-image-analysis.md). The [OCR skill](cognitive-search-skill-ocr.md) supports a maximum width and height of 4200 for non-English languages, and 10000 for English. If you increase the maximum limits, processing could fail on larger images depending on your skillset definition and the language of the documents.
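Putting these settings together, a hedged sketch of an indexer definition with the `parameters.configuration` block; the service, indexer, data source, index, and skillset names and the API version are placeholders, and the values shown are the ones discussed above.

```console
curl -i -X PUT "https://<search-service-name>.search.windows.net/indexers/<indexer-name>?api-version=2023-11-01" \
-H "Content-Type: application/json" \
-H "api-key: <search-admin-key>" \
-d \
'
{
  "name": "<indexer-name>",
  "dataSourceName": "<data-source-name>",
  "targetIndexName": "<index-name>",
  "skillsetName": "<skillset-name>",
  "parameters": {
    "configuration": {
      "dataToExtract": "contentAndMetadata",
      "parsingMode": "default",
      "imageAction": "generateNormalizedImages",
      "normalizedImageMaxWidth": 2000,
      "normalizedImageMaxHeight": 2000
    }
  }
}
'
```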
Metadata adjustments are captured in a complex type created for each image. You
### About normalized images
-When "imageAction" is set to a value other than "none", the new *normalized_images* field will contain an array of images. Each image is a complex type that has the following members:
+When `imageAction` is set to a value other than "none", the new *normalized_images* field contains an array of images. Each image is a complex type that has the following members:
| Image member | Description | |--|--|
When "imageAction" is set to a value other than "none", the new *normalized_imag
| originalWidth | The original width of the image before normalization. | | originalHeight | The original height of the image before normalization. | | rotationFromOriginal | Counter-clockwise rotation in degrees that occurred to create the normalized image. A value between 0 degrees and 360 degrees. This step reads the metadata from the image that is generated by a camera or scanner. Usually a multiple of 90 degrees. |
-| contentOffset | The character offset within the content field where the image was extracted from. This field is only applicable for files with embedded images. Note that the *contentOffset* for images extracted from PDF documents will always be at the end of the text on the page it was extracted from in the document. This means images will be after all the text on that page, regardless of the original location of the image in the page. |
-| pageNumber | If the image was extracted or rendered from a PDF, this field contains the page number in the PDF it was extracted or rendered from, starting from 1. If the image was not from a PDF, this field will be 0. |
+| contentOffset | The character offset within the content field where the image was extracted from. This field is only applicable for files with embedded images. The *contentOffset* for images extracted from PDF documents is always at the end of the text on the page it was extracted from in the document. This means images appear after all text on that page, regardless of the original location of the image in the page. |
+| pageNumber | If the image was extracted or rendered from a PDF, this field contains the page number in the PDF it was extracted or rendered from, starting from 1. If the image isn't from a PDF, this field is 0. |
Sample value of *normalized_images*:
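The following is a minimal illustrative value, assuming a single embedded image. The `data`, `width`, and `height` members aren't listed in the excerpted table above, and all values are placeholders rather than output from a real enrichment:
```json
"normalized_images": [
  {
    "data": "<base64-encoded image bytes>",
    "width": 2000,
    "height": 1125,
    "originalWidth": 3024,
    "originalHeight": 1701,
    "rotationFromOriginal": 0,
    "contentOffset": 2000,
    "pageNumber": 1
  }
]
```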
Once the basic framework of your skillset is created and Azure AI services is co
As noted, images are extracted during document cracking and then normalized as a preliminary step. The normalized images are the inputs to any image processing skill, and are always represented in an enriched document tree in one of two ways:
-+ `"/document/normalized_images/*"` is for documents that are processed whole.
++ `/document/normalized_images/*` is for documents that are processed whole.
-+ `"/document/normalized_images/*/pages"` is for documents that are processed in chunks (pages).
++ `/document/normalized_images/*/pages` is for documents that are processed in chunks (pages).
Whether you're using OCR and image analysis in the same, inputs have virtually t
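As a hedged sketch of the first case, an OCR skill that processes whole-document images might bind its input to the `/document/normalized_images/*` path like this (the language code and orientation setting are illustrative):
```json
{
  "@odata.type": "#Microsoft.Skills.Vision.OcrSkill",
  "context": "/document/normalized_images/*",
  "defaultLanguageCode": "en",
  "detectOrientation": true,
  "inputs": [
    { "name": "image", "source": "/document/normalized_images/*" }
  ],
  "outputs": [
    { "name": "text", "targetName": "text" },
    { "name": "layoutText", "targetName": "layoutText" }
  ]
}
```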
## Map outputs to search fields
-Azure AI Search is a full text search and knowledge mining solution, so Image Analysis and OCR skill output is always text. Output text is represented as nodes in an internal enriched document tree, and each node must be mapped to fields in a search index or projections in a knowledge store to make the content available in your app.
+In a skillset, Image Analysis and OCR skill output is always text. Output text is represented as nodes in an internal enriched document tree, and each node must be mapped to fields in a search index, or to projections in a knowledge store, to make the content available in your app.
-1. In the skillset, review the "outputs" section of each skill to determine which nodes exist in the enriched document:
+1. In the skillset, review the `outputs` section of each skill to determine which nodes exist in the enriched document:
```json {
Azure AI Search is a full text search and knowledge mining solution, so Image An
1. [Create or update a search index](/rest/api/searchservice/create-index) to add fields to accept the skill outputs.
- In the fields collection below, "content" is blob content. "Metadata_storage_name" contains the name of the file (make sure it is "retrievable"). "Metadata_storage_path" is the unique path of the blob and is the default document key. "Merged_content" is output from Text Merge (useful when images are embedded).
+ In the following fields collection example, "content" is blob content. "Metadata_storage_name" contains the name of the file (make sure it is "retrievable"). "Metadata_storage_path" is the unique path of the blob and is the default document key. "Merged_content" is output from Text Merge (useful when images are embedded).
"Text" and "layoutText" are OCR skill outputs and must be a string collection in order to the capture all of the OCR-generated output for the entire document.
Azure AI Search is a full text search and knowledge mining solution, so Image An
1. [Update the indexer](/rest/api/searchservice/update-indexer) to map skillset output (nodes in an enrichment tree) to index fields.
- Enriched documents are internal. To "externalize" the nodes, an output field mapping specifies the data structure that receives node content. This is the data that will be accessed by your app. Below is an example of a "text" node (OCR output) mapped to a "text" field in a search index.
+ Enriched documents are internal. To externalize the nodes in an enriched document tree, set up an output field mapping that specifies which index field receives node content. Enriched data is accessed by your app through an index field. The following example shows a "text" node (OCR output) in an enriched document that's mapped to a "text" field in a search index.
```json "outputFieldMappings": [
POST /indexes/[index name]/docs/search?api-version=[api-version]
} ```
-OCR recognizes text in image files. This means that OCR fields ("text" and "layoutText") will be empty if source documents are pure text or pure imagery. Similarly, image analysis fields ("imageCaption" and "imageTags") will be empty if source document inputs are strictly text. Indexer execution will emit warnings if imaging inputs are empty. Such warnings are to be expected when nodes are unpopulated in the enriched document. Recall that blob indexing lets you include or exclude file types if you want to work with content types in isolation. You can use these setting to reduce noise during indexer runs.
+OCR recognizes text in image files. This means that OCR fields ("text" and "layoutText") are empty if source documents are pure text or pure imagery. Similarly, image analysis fields ("imageCaption" and "imageTags") are empty if source document inputs are strictly text. Indexer execution emits warnings if imaging inputs are empty. Such warnings are to be expected when nodes are unpopulated in the enriched document. Recall that blob indexing lets you include or exclude file types if you want to work with content types in isolation. You can use these settings to reduce noise during indexer runs.
-An alternate query for checking results might include the "content" and "merged_content" fields. Notice that those fields will include content for any blob file, even those where there was no image processing performed.
+An alternate query for checking results might include the "content" and "merged_content" fields. Notice that those fields include content for any blob file, even those where there was no image processing performed.
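For example, a hedged sketch of a query body for `POST /indexes/[index name]/docs/search` that returns those fields alongside the file name (field names assume the index described earlier):
```json
{
  "search": "*",
  "select": "metadata_storage_name, content, merged_content",
  "count": true
}
```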
### About skill outputs
Skill outputs include "text" (OCR), "layoutText" (OCR), "merged_content", "captions" (image analysis), "tags" (image analysis):
-+ "text" stores OCR-generated output. This node should be mapped to field of type `Collection(Edm.String)`. There is one "text" field per search document consisting of comma-delimited strings for documents that contain multiple images. The following illustration shows OCR output for three documents. First is a document containing a file with no images. Second is a document (image file) containing one word, "Microsoft". Third is a document containing multiple images, some without any text (`"",`).
++ "text" stores OCR-generated output. This node should be mapped to field of type `Collection(Edm.String)`. There's one "text" field per search document consisting of comma-delimited strings for documents that contain multiple images. The following illustration shows OCR output for three documents. First is a document containing a file with no images. Second is a document (image file) containing one word, "Microsoft". Third is a document containing multiple images, some without any text (`"",`). ```json "value": [
Skill outputs include "text" (OCR), "layoutText" (OCR), "merged_content", "capti
] ```
-+ "layoutText" stores OCR-generated information about text location on the page, described in terms of bounding boxes and coordinates of the normalized image. This node should be mapped to field of type `Collection(Edm.String)`. There is one "layoutText" field per search document consisting of comma-delimited strings.
++ "layoutText" stores OCR-generated information about text location on the page, described in terms of bounding boxes and coordinates of the normalized image. This node should be mapped to field of type `Collection(Edm.String)`. There's one "layoutText" field per search document consisting of comma-delimited strings.
-+ "merged_content" stores the output of a Text Merge skill, and it should be one large field of type `Edm.String` that contains raw text from the source document, with embedded "text" in place of an image. If files are text-only, then OCR and image analysis have nothing to do, and "merged_content" will be the same as "content" (a blob property that contains the content of the blob).
++ "merged_content" stores the output of a Text Merge skill, and it should be one large field of type `Edm.String` that contains raw text from the source document, with embedded "text" in place of an image. If files are text-only, then OCR and image analysis have nothing to do, and "merged_content" is the same as "content" (a blob property that contains the content of the blob). + "imageCaption" captures a description of an image as individuals tags and a longer text description.
Image analysis output is illustrated in the JSON below (search result). The skil
+ "imageCaption" output is an array of descriptions, one per image, denoted by "tags" consisting of single words and longer phrases that describe the image. Notice the tags consisting of "a flock of seagulls are swimming in the water", or "a close up of a bird".
-+ "imageTags" output is an array of single tags, listed in the order of creation. Notice that tags will repeat. There is no aggregation or grouping.
++ "imageTags" output is an array of single tags, listed in the order of creation. Notice that tags repeat. There's no aggregation or grouping. ```json "imageCaption": [
Image analysis output is illustrated in the JSON below (search result). The skil
## Scenario: Embedded images in PDFs
-When the images you want to process are embedded in other files, such as PDF or DOCX, the enrichment pipeline will extract just the images and then pass them to OCR or image analysis for processing. Separation of image from text content occurs during the document cracking phase, and once the images are separated, they remain separate unless you explicitly merge the processed output back into the source text.
+When the images you want to process are embedded in other files, such as PDF or DOCX, the enrichment pipeline extracts just the images and then passes them to OCR or image analysis for processing. Image extraction occurs during the document cracking phase, and once the images are separated, they remain separate unless you explicitly merge the processed output back into the source text.
-[**Text Merge**](cognitive-search-skill-textmerger.md) is used to put image processing output back into the document. Although Text Merge is not a hard requirement, it's frequently invoked so that image output (OCR text, OCR layoutText, image tags, image captions) can be reintroduced into the document. Depending on the skill, the image output replaces an embedded binary image with an in-place text equivalent. Image Analysis output can be merged at image location. OCR output always appears at the end of each page.
+[**Text Merge**](cognitive-search-skill-textmerger.md) is used to put image processing output back into the document. Although Text Merge isn't a hard requirement, it's frequently invoked so that image output (OCR text, OCR layoutText, image tags, image captions) can be reintroduced into the document. Depending on the skill, the image output replaces an embedded binary image with an in-place text equivalent. Image Analysis output can be merged at image location. OCR output always appears at the end of each page.
The following workflow outlines the process of image extraction, analysis, merging, and how to extend the pipeline to push image-processed output into other text-based skills such as Entity Recognition or Text Translation.
The following workflow outlines the process of image extraction, analysis, mergi
1. Image enrichments execute, using `"/document/normalized_images"` as input.
-1. Image outputs are passed into enriched documents, with each output as a separate node. Outputs vary by skill (text and layoutText for OCR, tags and captions for Image Analysis).
+1. Image outputs are passed into the enriched document tree, with each output as a separate node. Outputs vary by skill (text and layoutText for OCR, tags and captions for Image Analysis).
1. Optional, but recommended if you want search documents to include both text and image-origin text together: [Text Merge](cognitive-search-skill-textmerger.md) runs, combining the text representation of those images with the raw text extracted from the file. Text chunks are consolidated into a single large string, where the text is inserted first in the string, followed by the OCR text output or image tags and captions.
The following example skillset creates a `"merged_text"` field containing the or
} ```
-Now that you have a merged_text field, you could map it as a searchable field in your indexer definition. All of the content of your files, including the text of the images, will be searchable.
+Now that you have a merged_text field, you can map it as a searchable field in your indexer definition. All of the content of your files, including the text of the images, will be searchable.
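A minimal sketch of that mapping, assuming the skillset writes the merged output to a `/document/merged_text` node and the index has a searchable `merged_text` field of type `Edm.String`:
```json
"outputFieldMappings": [
  {
    "sourceFieldName": "/document/merged_text",
    "targetFieldName": "merged_text"
  }
]
```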
## Scenario: Visualize bounding boxes
Another common scenario is visualizing search results layout information. For example, you might want to highlight where a piece of text was found in an image as part of your search results.
-Since the OCR step is performed on the normalized images, the layout coordinates are in the normalized image space. When displaying the normalized image, the presence of coordinates is generally not a problem, but in some situations you might want to display the original image. In this case, convert each of coordinate points in the layout to the original image coordinate system.
+Since the OCR step is performed on the normalized images, the layout coordinates are in the normalized image space, but if you need to display the original image, convert coordinate points in the layout to the original image coordinate system.
-As a helper, if you need to transform normalized coordinates to the original coordinate space, you could use the following algorithm:
+The following algorithm illustrates the pattern:
```csharp /// <summary>
The following skillset takes the normalized image (obtained during document crac
### Custom skill example
-The custom skill itself is external to the skillset. In this case, it is Python code that first loops through the batch of request records in the custom skill format, then converts the base64-encoded string to an image.
+The custom skill itself is external to the skillset. In this case, it's Python code that first loops through the batch of request records in the custom skill format, then converts the base64-encoded string to an image.
```python # deserialize the request, for each item in the batch
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-defining-skillset.md
Title: Create a skillset
-description: A skillset defines data extraction, natural language processing, and image analysis steps. A skillset is attached to indexer. It's used to enrich and extract information from source data for use in Azure AI Search.
+description: A skillset defines steps for content extraction, natural language processing, and image analysis. A skillset is attached to an indexer. It's used to enrich and extract information from source data for indexing in Azure AI Search.
- ignite-2023 Previously updated : 07/14/2022 Last updated : 01/10/2024 # Create a skillset in Azure AI Search ![indexer stages](media/cognitive-search-defining-skillset/indexer-stages-skillset.png "indexer stages")
-A skillset defines the operations that extract and enrich data to make it searchable. It executes after text and images are extracted, and after [field mappings](search-indexer-field-mappings.md) are processed.
+A skillset defines operations that generate textual content and structure from documents that contain images or unstructured text. Examples are OCR for images, entity recognition for undifferentiated text, and text translation. A skillset executes after text and images are extracted from an external data source, and after [field mappings](search-indexer-field-mappings.md) are processed.
-This article explains how to create a skillset with the [Create Skillset (REST API)](/rest/api/searchservice/create-skillset), but the same concepts and steps apply to other programming languages.
+This article explains how to create a skillset using [REST APIs](/rest/api/searchservice/create-skillset), but the same concepts and steps apply to other programming languages.
Rules for skillset definition include:
-+ A skillset must have a unique name within the skillset collection. When you define a skillset, you're creating a top-level resource that can be used by any indexer.
-+ A skillset must contain at least one skill. A typical skillset has three to five. The maximum is 30.
++ A unique name within the skillset collection. A skillset is a top-level resource that can be used by any indexer.
++ At least one skill. Three to five skills are typical. The maximum is 30.
+ A skillset can repeat skills of the same type (for example, multiple Shaper skills).
+ A skillset supports chained operations, looping, and branching.
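The following minimal skillset sketch illustrates these rules with a single skill; the skillset name, skill choice, and key value are placeholders:
```json
{
  "name": "my-minimal-skillset",
  "description": "One skill that extracts organization names from blob content",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Text.V3.EntityRecognitionSkill",
      "context": "/document",
      "categories": [ "Organization" ],
      "inputs": [
        { "name": "text", "source": "/document/content" }
      ],
      "outputs": [
        { "name": "organizations", "targetName": "organizations" }
      ]
    }
  ],
  "cognitiveServices": {
    "@odata.type": "#Microsoft.Azure.Search.CognitiveServicesByKey",
    "key": "<your-azure-ai-services-key>"
  }
}
```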
-Indexers drive skillset execution. You'll need an [indexer](search-howto-create-indexers.md), [data source](search-data-sources-gallery.md), and [index](search-what-is-an-index.md) before you can test your skillset.
+Indexers drive skillset execution. You need an [indexer](search-howto-create-indexers.md), [data source](search-data-sources-gallery.md), and [index](search-what-is-an-index.md) before you can test your skillset.
> [!TIP]
> Enable [enrichment caching](cognitive-search-incremental-indexing-conceptual.md) to reuse the content you've already processed and lower the cost of development.
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
- ignite-2023 Previously updated : 10/19/2022 Last updated : 01/10/2024 # Debug an Azure AI Search skillset in Azure portal
A debug session is a cached indexer and skillset execution, scoped to a single d
+ An Azure Storage account, used to save session state.
-+ A **Storage Blob Data Contributor** role assignment in Azure Storage.
++ A **Storage Blob Data Contributor** role assignment in Azure Storage if you're using managed identities.
-+ If the Azure Storage account is behind a firewall, configure it to [allow Search service access](search-indexer-howto-access-ip-restricted.md).
++ If the Azure Storage account is behind a firewall, configure it to [allow search service access](search-indexer-howto-access-ip-restricted.md).
## Limitations
-A Debug Session works with all generally available [indexer data sources](search-data-sources-gallery.md) and most preview data sources. The following list notes the exceptions:
+Debug sessions work with all generally available [indexer data sources](search-data-sources-gallery.md) and most preview data sources. The following list notes the exceptions:
+ Azure Cosmos DB for MongoDB is currently not supported.
+ For Azure Cosmos DB for NoSQL, if a row fails during indexing and there's no corresponding metadata, the debug session might not pick the correct row.
-+ For the SQL API of Azure Cosmos DB, if a partitioned collection was previously non-partitioned, a Debug Session won't find the document.
++ For the SQL API of Azure Cosmos DB, if a partitioned collection was previously non-partitioned, the debug session won't find the document.
-+ Debug sessions doesn't currently support connections using a managed identity or private endpoints to custom skills.
++ For custom skills, you can't use a *user-assigned managed identity* to connect over a private endpoint in a debug session, but a system managed identity is supported. For more information, see [Connect a search service to other Azure resources using a managed identity](search-howto-managed-identities-data-sources.md).
## Create a debug session
1. Sign in to the [Azure portal](https://portal.azure.com) and find your search service.
-1. In the **Overview** page of your search service, select the **Debug Sessions** tab.
+1. In the left navigation page, select **Debug sessions**.
-1. Select **+ New Debug Session**.
+1. In the action bar at the top, select **Add debug session**.
:::image type="content" source="media/cognitive-search-debug/new-debug-session.png" alt-text="Screenshot of the debug sessions commands in the portal page." border="true":::
A Debug Session works with all generally available [indexer data sources](search
1. In **Storage connection**, find a general-purpose storage account for caching the debug session. You'll be prompted to select and optionally create a blob container in Blob Storage or Azure Data Lake Storage Gen2. You can reuse the same container for all subsequent debug sessions you create. A helpful container name might be "cognitive-search-debug-sessions".
+1. In **Managed identity authentication**, choose **None** if the connection to Azure Storage doesn't use a managed identity. Otherwise, choose the managed identity to which you've granted **Storage Blob Data Contributor** permissions.
+ 1. In **Indexer template**, select the indexer that drives the skillset you want to debug. Copies of both the indexer and skillset are used to initialize the session.
-1. In **Document to debug**, choose the first document in the index or select a specific document. If you select a specific document, depending on the data source, you'll be asked for a URI or a row ID.
+1. In **Document to debug**, choose the first document in the index or select a specific document. If you select a specific document, depending on the data source, you're asked for a URI or a row ID.
- If your specific document is a blob, you'll be asked for the blob URI. You can find the URL in the blob property page in the portal.
+ If your specific document is a blob, provide the blob URI. You can find the URI in the blob property page in the portal.
:::image type="content" source="media/cognitive-search-debug/copy-blob-url.png" alt-text="Screenshot of the URI property in blob storage." border="true"::: 1. Optionally, in **Indexer settings**, specify any indexer execution settings used to create the session. The settings should mimic the settings used by the actual indexer. Any indexer options that you specify in a debug session have no effect on the indexer itself.
-1. Your configuration should look similar to this screenshot. Select **Save Session** to get started.
+1. Your configuration should look similar to this screenshot. Select **Save session** to get started.
:::image type="content" source="media/cognitive-search-debug/debug-session-new.png" alt-text="Screenshot of a debug session page." border="true":::
A debug session can be canceled while it's executing using the **Cancel** button
A debug session is expected to take longer to execute than the indexer because it goes through extra processing.
## Start with errors and warnings
Indexer execution history in the portal gives you the full error and warning list for all documents. In a debug session, the errors and warnings are limited to one document. You work through this list, make your changes, and then return to the list to verify whether issues are resolved.
search Knowledge Store Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-concept-intro.md
- ignite-2023 Previously updated : 01/31/2023 Last updated : 01/10/2024 # Knowledge store in Azure AI Search
-Knowledge store is a data sink created by an [Azure AI Search enrichment pipeline](cognitive-search-concept-intro.md) that stores AI-enriched content in tables and blob containers in Azure Storage for independent analysis or downstream processing in non-search scenarios like knowledge mining.
+Knowledge store is secondary storage for [AI-enriched content created by a skillset](cognitive-search-concept-intro.md) in Azure AI Search. In Azure AI Search, an indexing job always sends output to a search index, but if you attach a skillset to an indexer, you can optionally also send AI-enriched output to a container or table in Azure Storage. A knowledge store can be used for independent analysis or downstream processing in non-search scenarios like knowledge mining.
-If you've used cognitive skills in the past, you already know that enriched content is created by *skillsets*. Skillsets move a document through a sequence of enrichments that invoke atomic transformations, such as recognizing entities or translating text.
-
-Output is always a search index, but it can also be projections in a knowledge store. The two outputs, search index and knowledge store, are mutually exclusive products of the same pipeline. They are derived from the same inputs, but their content is structured, stored, and used in different applications.
+The two outputs of indexing, a search index and knowledge store, are mutually exclusive products of the same pipeline. They're derived from the same inputs and contain the same data, but their content is structured, stored, and used in different applications.
:::image type="content" source="media/knowledge-store-concept-intro/knowledge-store-concept-intro.svg" alt-text="Pipeline with skillset" border="false"::: Physically, a knowledge store is [Azure Storage](../storage/common/storage-account-overview.md), either Azure Table Storage, Azure Blob Storage, or both. Any tool or process that can connect to Azure Storage can consume the contents of a knowledge store.
-Viewed through Azure portal, a knowledge store looks like any other collection of tables, objects, or files. The following screenshot shows a knowledge store composed of three tables. You can adopt a naming convention, such as a "kstore" prefix, to keep your content together.
+When viewed through Azure portal, a knowledge store looks like any other collection of tables, objects, or files. The following screenshot shows a knowledge store composed of three tables. You can adopt a naming convention, such as a `kstore` prefix, to keep your content together.
:::image type="content" source="media/knowledge-store-concept-intro/kstore-in-storage-explorer.png" alt-text="Skills read and write from enrichment tree" border="true":::
Viewed through Azure portal, a knowledge store looks like any other collection o
The primary benefits of a knowledge store are two-fold: flexible access to content, and the ability to shape data.
-Unlike a search index that can only be accessed through queries in Azure AI Search, a knowledge store can be accessed by any tool, app, or process that supports connections to Azure Storage. This flexibility opens up new scenarios for consuming the analyzed and enriched content produced by an enrichment pipeline.
+Unlike a search index that can only be accessed through queries in Azure AI Search, a knowledge store is accessible to any tool, app, or process that supports connections to Azure Storage. This flexibility opens up new scenarios for consuming the analyzed and enriched content produced by an enrichment pipeline.
The same skillset that enriches data can also be used to shape data. Some tools like Power BI work better with tables, whereas a data science workload might require a complex data structure in a blob format. Adding a [Shaper skill](cognitive-search-skill-shaper.md) to a skillset gives you control over the shape of your data. You can then pass these shapes to projections, either tables or blobs, to create physical data structures that align with the data's intended use.
REST API version `2020-06-30` can be used to create a knowledge store through ad
Within the skillset:
-+ Specify the projections that you want built in Azure Storage (tables, objects, files)
++ Specify the projections that you want built into Azure Storage (tables, objects, files)
+ Include a Shaper skill in your skillset to determine the schema and contents of the projection
+ Assign the named shape to a projection
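A hedged sketch of how these pieces fit into the skillset's `knowledgeStore` property; the connection string, table name, and the `/document/tableprojection` shape produced by a Shaper skill are placeholders:
```json
"knowledgeStore": {
  "storageConnectionString": "<storage-account-connection-string>",
  "projections": [
    {
      "tables": [
        {
          "tableName": "kstoreDocuments",
          "generatedKeyName": "documentId",
          "source": "/document/tableprojection"
        }
      ],
      "objects": [],
      "files": []
    }
  ]
}
```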
search Resource Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-tools.md
- ignite-2023 Previously updated : 01/18/2023 Last updated : 01/10/2024 # Productivity tools - Azure AI Search
Productivity tools are built by engineers at Microsoft, but aren't part of the A
| Tool name | Description | Source code |
|--|--|--|
-| [Back up and Restore readme](https://github.com/liamc) | Download a populated search index to your local device and then upload the index and its content to a new search service. | [https://github.com/liamca/azure-search-backup-restore](https://github.com/liamca/azure-search-backup-restore) |
-| [Knowledge Mining Accelerator readme](https://github.com/Azure-Samples/azure-search-knowledge-mining/blob/main/README.md) | Code and docs to jump start a knowledge store using your data. | [https://github.com/Azure-Samples/azure-search-knowledge-mining](https://github.com/Azure-Samples/azure-search-knowledge-mining) |
-| [Performance testing readme](https://github.com/Azure-Samples/azure-search-performance-testing/blob/main/README.md) | This solution helps you load test Azure AI Search. It uses Apache JMeter as an open source load and performance testing tool and Terraform to dynamically provision and destroy the required infrastructure on Azure. | [https://github.com/Azure-Samples/azure-search-performance-testing](https://github.com/Azure-Samples/azure-search-performance-testing) |
+| [Back up and Restore](https://github.com/liamc) | Download the retrievable fields of an index to your local device and then upload the index and its content to a new search service. | [https://github.com/liamca/azure-search-backup-restore](https://github.com/liamca/azure-search-backup-restore) |
+| [Chat with your data solution accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator/README.md) | Code and docs to create an interactive search solution in production environments. | [https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) |
+| [Knowledge Mining Accelerator](https://github.com/Azure-Samples/azure-search-knowledge-mining/blob/main/README.md) | Code and docs to jump start a knowledge store using your data. | [https://github.com/Azure-Samples/azure-search-knowledge-mining](https://github.com/Azure-Samples/azure-search-knowledge-mining) |
+| [Performance testing solution](https://github.com/Azure-Samples/azure-search-performance-testing/blob/main/README.md) | This solution helps you load test Azure AI Search. It uses Apache JMeter as an open source load and performance testing tool and Terraform to dynamically provision and destroy the required infrastructure on Azure. | [https://github.com/Azure-Samples/azure-search-performance-testing](https://github.com/Azure-Samples/azure-search-performance-testing) |
| [Visual Studio Code extension](https://github.com/microsoft/vscode-azurecognitivesearch) | Although the extension is no longer available in the Visual Studio Code Marketplace, the code is open sourced at `https://github.com/microsoft/vscode-azurecognitivesearch`. You can clone and modify the tool for your own use. |
search Resource Training https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-training.md
- ignite-2023 Previously updated : 09/20/2022 Last updated : 01/10/2024 # Training - Azure AI Search
search Samples Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/samples-rest.md
- ignite-2023 Previously updated : 01/04/2023 Last updated : 01/11/2024 # REST samples for Azure AI Search Learn about the REST API samples that demonstrate the functionality and workflow of an Azure AI Search solution. These samples use the [**Search REST APIs**](/rest/api/searchservice).
-REST is the definitive programming interface for Azure AI Search, and all operations that can be invoked programmatically are available first in REST, and then in SDKs. For this reason, most examples in the documentation leverage the REST APIs to demonstrate or explain important concepts.
+REST is the definitive programming interface for Azure AI Search, and all operations that can be invoked programmatically are available first in REST, and then in SDKs. For this reason, most examples in the documentation use the REST APIs to demonstrate or explain important concepts.
-REST samples are usually developed and tested on Postman, but you can use any client that supports HTTP calls, including the [Postman app](https://www.postman.com/downloads/). [This quickstart](search-get-started-rest.md) explains how to formulate the HTTP request from end-to-end.
+REST samples are usually developed and tested on the [Postman app](https://www.postman.com/downloads/), but you can use any client that supports HTTP calls. [Here's a quickstart](search-get-started-rest.md) that explains how to formulate the HTTP request from end-to-end in Postman.
## Doc samples
Code samples from the Azure AI Search team demonstrate features and workflows. Many of these samples are referenced in tutorials, quickstarts, and how-to articles. You can find these samples in [**Azure-Samples/azure-search-postman-samples**](https://github.com/Azure-Samples/azure-search-postman-samples) on GitHub.
-| Samples | Article |
+| Samples | Description |
|---|---|
-| [Quickstart](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart) | Source code for [Quickstart: Create a search index using REST APIs](search-get-started-rest.md). This article covers the basic workflow for creating, loading, and querying a search index using sample data. |
-| [Tutorial](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Tutorial) | Source code for [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md). This article shows you how to create a skillset that iterates over Azure blobs to extract information and infer structure.|
-| [Debug-sessions](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Debug-sessions) | Source code for [Tutorial: Diagnose, repair, and commit changes to your skillset](cognitive-search-tutorial-debug-sessions.md). This article shows you how to use a skillset debug session in the Azure portal. REST is used to create the objects used during debug.|
-| [custom-analyzers](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/custom-analyzers) | Source code for [Tutorial: Create a custom analyzer for phone numbers](tutorial-create-custom-analyzer.md). This article explains how to use analyzers to preserve patterns and special characters in searchable content.|
-| [knowledge-store](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/knowledge-store) | Source code for [Create a knowledge store using REST and Postman](knowledge-store-create-rest.md). This article explains the necessary steps for populating a knowledge store used for knowledge mining workflows. |
-| [projections](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/projections) | Source code for [Define projections in a knowledge store](knowledge-store-projections-examples.md). This article explains how to specify the physical data structures in a knowledge store.|
+| [Quickstart](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart) | Source code for [Quickstart: Create a search index using REST APIs](search-get-started-rest.md). This sample covers the basic workflow for creating, loading, and querying a search index using sample data. |
+| [Quickstart-vectors](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Quickstart-vector) | Source code for [Quickstart: Vector search using REST APIs](search-get-started-vector.md). This sample covers the basic workflow for indexing and querying vector data. |
+| [Tutorial](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Tutorial) | Source code for [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md). This sample shows you how to create a skillset that iterates over Azure blobs to extract information and infer structure.|
+| [Debug-sessions](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/Debug-sessions) | Source code for [Tutorial: Diagnose, repair, and commit changes to your skillset](cognitive-search-tutorial-debug-sessions.md). This sample shows you how to use a skillset debug session in the Azure portal. REST is used to create the objects used during debug.|
+| [custom-analyzers](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/custom-analyzers) | Source code for [Tutorial: Create a custom analyzer for phone numbers](tutorial-create-custom-analyzer.md). This sample explains how to use analyzers to preserve patterns and special characters in searchable content.|
+| [knowledge-store](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/knowledge-store) | Source code for [Create a knowledge store using REST and Postman](knowledge-store-create-rest.md). This sample explains the necessary steps for populating a knowledge store used for knowledge mining workflows. |
+| [semantic ranker](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/semantic-search) | Source code for [Define projections in a knowledge store](knowledge-store-projections-examples.md). This sample creates a basic search index with a semantic configuration, loads data into the index, and then creates a semantic query.|
+| [projections](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/projections) | Source code for [Define projections in a knowledge store](knowledge-store-projections-examples.md). This sample explains how to specify the physical data structures in a knowledge store.|
| [index-encrypted-blobs](https://github.com/Azure-Samples/azure-search-postman-samples/commit/f5ebb141f1ff98f571ab84ac59dcd6fd06a46718) | Source code for [How to index encrypted blobs using blob indexers and skillsets](search-howto-index-encrypted-blobs.md). This article shows how to index documents in Azure Blob Storage that have been previously encrypted using Azure Key Vault. |
> [!TIP]
Code samples from the Azure AI Search team demonstrate features and workflows. M
## Other samples
-The following samples are also published by the Azure AI Search team, but are not referenced in documentation. Associated readme files provide usage instructions.
+The following samples are also published by the Azure AI Search team, but aren't referenced in documentation. Associated readme files provide usage instructions.
| Samples | Description |
|---|---|
search Search Api Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-versions.md
- devx-track-python - ignite-2023 Previously updated : 03/22/2023 Last updated : 01/10/2024 # API versions in Azure AI Search
Some API versions are discontinued and will be rejected by a search service:
+ **2014-07-31-Preview**
+ **2014-10-20-Preview**
-All SDKs are based on REST API versions. If a REST version is discontinued, any SDK that's based on it is also discontinued. All Azure AI Search .NET SDKs older than [**3.0.0-rc**](https://www.nuget.org/packages/Microsoft.Azure.Search/3.0.0-rc) are now discontinued.
+All SDKs are based on REST API versions. If a REST version is discontinued, SDK packages based on that version are also discontinued. All Azure AI Search .NET SDKs older than [**3.0.0-rc**](https://www.nuget.org/packages/Microsoft.Azure.Search/3.0.0-rc) are now obsolete.
-Support for the above-listed versions was discontinued on October 15, 2020. If you have code that uses a discontinued version, you can [migrate existing code](search-api-migration.md) to a newer [REST API version](/rest/api/searchservice/) or to a newer Azure SDK.
+Support for the above-listed versions ended on October 15, 2020. If you have code that uses a discontinued version, you can [migrate existing code](search-api-migration.md) to a newer [REST API version](/rest/api/searchservice/) or to a newer Azure SDK.
## REST APIs
The following table provides links to more recent SDK versions.
| SDK version | Status | Description |
|---|---|---|
-| [Java azure-search-documents 11](/java/api/overview/azure/search-documents-readme) | Active | New client library from Azure Java SDK, released July 2020. Targets the Search REST api-version=2019-05-06. |
-| [Java Management Client 1.35.0](/java/api/overview/azure/search/management) | Active | Targets the Management REST api-version=2015-08-19. |
+| [Java azure-search-documents 11](/java/api/overview/azure/search-documents-readme) | Active | Use the `azure-search-documents` client library for data plane operations. |
+| [Java Management Client 1.35.0](/java/api/overview/azure/search/management) | Active | Use the `azure-mgmt-search` client library for control plane operations. |
## Azure SDK for JavaScript
| SDK version | Status | Description |
|---|---|---|
-| [JavaScript @azure/search-documents 11.0](/javascript/api/overview/azure/search-documents-readme) | Active | New client library from Azure JavaScript & TypesScript SDK, released July 2020. Targets the Search REST api-version=2016-09-01. |
-| [JavaScript @azure/arm-search](https://www.npmjs.com/package/@azure/arm-search) | Active | Targets the Management REST api-version=2015-08-19. |
+| [JavaScript @azure/search-documents 11.0](/javascript/api/overview/azure/search-documents-readme) | Active | Use the `@azure/search-documents` client library for data plane operations. |
+| [JavaScript @azure/arm-search](https://www.npmjs.com/package/@azure/arm-search) | Active | Use the `@azure/arm-search` client library for control plane operations. |
## Azure SDK for Python
| SDK version | Status | Description |
|---|---|---|
-| [Python azure-search-documents 11.0](/python/api/azure-search-documents) | Active | New client library from Azure Python SDK, released July 2020. Targets the Search REST api-version=2019-05-06. |
-| [Python azure-mgmt-search 8.0](https://pypi.org/project/azure-mgmt-search/) | Active | Targets the Management REST api-version=2015-08-19. |
+| [Python azure-search-documents 11.0](/python/api/azure-search-documents) | Active | Use the `azure-search-documents` client library for data plane operations. |
+| [Python azure-mgmt-search 8.0](https://pypi.org/project/azure-mgmt-search/) | Active | Use the `azure-mgmt-search` client library for control plane operations. |
## All Azure SDKs
search Search Capacity Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-capacity-planning.md
Title: Estimate capacity for query and index workloads
-description: Adjust partition and replica computer resources in Azure AI Search, where each resource is priced in billable search units.
+description: Learn how capacity is structured and used in Azure AI Search, and how to estimate the resources needed for indexing and query workloads.
- ignite-2023 Previously updated : 03/15/2023 Last updated : 01/10/2024 # Estimate and manage capacity of a search service
-Before you [create a search service](search-create-service-portal.md) and lock in a specific [pricing tier](search-sku-tier.md), take a few minutes to understand how capacity works and how you might adjust replicas and partitions to accommodate workload fluctuation.
- In Azure AI Search, capacity is based on *replicas* and *partitions* that can be scaled to your workload. Replicas are copies of the search engine.
-Partitions are units of storage. Each new search service starts with one each, but you can adjust each unit independently to accommodate fluctuating workloads. Adding either unit is [billable](search-sku-manage-costs.md#billable-events).
+Partitions are units of storage. Each new search service starts with one each, but you can add or remove replicas and partitions independently to accommodate fluctuating workloads. Adding capacity increases the [cost of running a search service](search-sku-manage-costs.md#billable-events).
-The physical characteristics of replicas and partitions, such as processing speed and disk IO, vary by [service tier](search-sku-tier.md). If you provisioned on Standard, replicas and partitions will be faster and larger than those of Basic.
+The physical characteristics of replicas and partitions, such as processing speed and disk IO, vary by [service tier](search-sku-tier.md). On a standard search service, the replicas and partitions are faster and larger than those of a basic service.
Changing capacity isn't instantaneous. It can take up to an hour to commission or decommission partitions, especially on services with large amounts of data.
Capacity is expressed in *search units* that can be allocated in combinations of
| Concept | Definition|
|-|--|
|*Search unit* | A single increment of total available capacity (36 units). It's also the billing unit for an Azure AI Search service. A minimum of one unit is required to run the service.|
-|*Replica* | Instances of the search service, used primarily to load balance query operations. Each replica hosts one copy of an index. If you allocate three replicas, you'll have three copies of an index available for servicing query requests.|
+|*Replica* | Instances of the search service, used primarily to load balance query operations. Each replica hosts one copy of an index. If you allocate three replicas, you have three copies of an index available for servicing query requests.|
|*Partition* | Physical storage and I/O for read/write operations (for example, when rebuilding or refreshing an index). Each partition has a slice of the total index. If you allocate three partitions, your index is divided into thirds. |
|*Shard* | A chunk of an index. Azure AI Search divides each index into shards to make the process of adding partitions faster (by moving shards to new search units).|
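As an illustration only, replica and partition counts are set through the service definition in the Management REST API (Microsoft.Search/searchServices); the following body is a hedged sketch, with the region, SKU, and counts chosen arbitrarily:
```json
{
  "location": "<region>",
  "sku": { "name": "standard" },
  "properties": {
    "replicaCount": 3,
    "partitionCount": 1
  }
}
```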
In Azure AI Search, shard management is an implementation detail and nonconfigur
+ Ranking anomalies: Search scores are computed at the shard level first, and then aggregated up into a single result set. Depending on the characteristics of shard content, matches from one shard might be ranked higher than matches in another one. If you notice counterintuitive rankings in search results, it's most likely due to the effects of sharding, especially if indexes are small. You can avoid these ranking anomalies by choosing to [compute scores globally across the entire index](index-similarity-and-scoring.md#scoring-statistics-and-sticky-sessions), but doing so will incur a performance penalty.
-+ Autocomplete anomalies: Autocomplete queries, where matches are made on the first several characters of a partially entered term, accept a fuzzy parameter that forgives small deviations in spelling. For autocomplete, fuzzy matching is constrained to terms within the current shard. For example, if a shard contains "Microsoft" and a partial term of "micor" is entered, the search engine will match on "Microsoft" in that shard, but not in other shards that hold the remaining parts of the index.
++ Autocomplete anomalies: Autocomplete queries, where matches are made on the first several characters of a partially entered term, accept a fuzzy parameter that forgives small deviations in spelling. For autocomplete, fuzzy matching is constrained to terms within the current shard. For example, if a shard contains "Microsoft" and a partial term of "micro" is entered, the search engine will match on "Microsoft" in that shard, but not in other shards that hold the remaining parts of the index.
-## Approaching estimation
+## Estimation targets
-Capacity and the costs of running the service go hand in hand. Tiers impose limits on two levels: content (a count of indexes on a service, for example) and storage. It's important to consider both because whichever limit you reach first is the effective limit.
+Capacity planning must include object limits (for example, the maximum number of indexes allowed on a service) and storage limits. The service tier determines [object and storage limits](search-limits-quotas-capacity.md). Whichever limit is reached first is the effective limit.
Counts of indexes and other objects are typically dictated by business and engineering requirements. For example, you might have multiple versions of the same index for active development, testing, and production.
Storage needs are determined by the size of the indexes you expect to build. The
For full text search, the primary data structure is an [inverted index](https://en.wikipedia.org/wiki/Inverted_index) structure, which has different characteristics than source data. For an inverted index, size and complexity are determined by content, not necessarily by the amount of data that you feed into it. A large data source with high redundancy could result in a smaller index than a smaller dataset that contains highly variable content. So it's rarely possible to infer index size based on the size of the original dataset.
-Attributes on the index, such as enabling filters and sorting, will impact storage requirements. The use of suggesters also has storage implications. For more information, see [Attributes and index size](search-what-is-an-index.md#index-size).
+Attributes on the index, such as enabling filters and sorting, affect storage requirements. The use of suggesters also has storage implications. For more information, see [Attributes and index size](search-what-is-an-index.md#index-size).
> [!NOTE]
> Even though estimating future needs for indexes and storage can feel like guesswork, it's worth doing. If a tier's capacity turns out to be too low, you'll need to provision a new service at a higher tier and then [reload your indexes](search-howto-reindex.md). There's no in-place upgrade of a service from one tier to another.
One approach for estimating capacity is to start with the Free tier. Remember th
+ [Create a free service](search-create-service-portal.md).
+ Prepare a small, representative dataset.
+ Create an index and load your data. If the dataset can be hosted in an Azure data source supported by indexers, you can use the [Import data wizard in the portal](search-get-started-portal.md) to both create and load the index. Otherwise, you could use [REST and Postman](search-get-started-rest.md) to create the index and push the data. The push model requires data to be in the form of JSON documents, where fields in the document correspond to fields in the index.
-+ Collect information about the index, such as size. Features and attributes have an impact on storage. For example, adding suggesters (search-as-you-type queries) will increase storage requirements.
++ Collect information about the index, such as size. Features and attributes affect storage. For example, adding suggesters (search-as-you-type queries) will increase storage requirements.
Using the same data set, you might try creating multiple versions of an index, with different attributes on each field, to see how storage requirements vary. For more information, see ["Storage implications" in Create a basic index](search-what-is-an-index.md#index-size).
Dedicated resources can accommodate larger sampling and processing times for mor
1. [Monitor storage, service limits, query volume, and latency](monitor-azure-cognitive-search.md) in the portal. The portal shows you queries per second, throttled queries, and search latency. All of these values can help you decide if you selected the right tier.
-1. Add replicas if you need high availability or if you experience slow query performance.
+1. Add replicas for high availability or to mitigate slow query performance.
There are no guidelines on how many replicas are needed to accommodate query loads. Query performance depends on the complexity of the query and competing workloads. Although adding replicas clearly results in better performance, the result isn't strictly linear: adding three replicas doesn't guarantee triple throughput. For guidance in estimating QPS for your solution, see [Analyze performance](search-performance-analysis.md) and [Monitor queries](search-monitor-queries.md).
Dedicated resources can accommodate larger sampling and processing times for mor
**Query volume considerations**
-Queries per second (QPS) is an important metric during performance tuning, but it's generally only a tier consideration if you expect high query volume at the outset.
+Queries per second (QPS) is an important metric during performance tuning, but for capacity planning, it becomes a consideration only if you expect high query volume at the outset.
The Standard tiers can provide a balance of replicas and partitions. You can increase query turnaround by adding replicas for load balancing or add partitions for parallel processing. You can then tune for performance after the service is provisioned.
search Search Data Sources Terms Of Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-terms-of-use.md
- ignite-2023 Previously updated : 09/07/2022 Last updated : 01/10/2024 # Terms of Use: Partner data sources
search Search Howto Index Json Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-json-blobs.md
Title: Search over JSON blobs
-description: Crawl Azure JSON blobs for text content using the Azure AI Search Blob indexer. Indexers automate data ingestion for selected data sources like Azure Blob Storage.
+description: Extract searchable text from JSON blobs using the Blob indexer in Azure AI Search. Indexers provide indexing automation for supported data sources like Azure Blob Storage.
- ignite-2023 Previously updated : 03/22/2023 Last updated : 01/11/2024 + # Index JSON blobs and files in Azure AI Search **Applies to**: [Blob indexers](search-howto-indexing-azure-blob-storage.md), [File indexers](search-file-storage-integration.md)
-This article shows you how to set JSON-specific properties for blobs or files that consist of JSON documents. JSON blobs in Azure Blob Storage or Azure File Storage commonly assume any of these forms:
+For blob indexing in Azure AI Search, this article shows you how to set properties for blobs or files consisting of JSON documents. JSON files in Azure Blob Storage or Azure File Storage commonly assume any of these forms:
+ A single JSON document
+ A JSON document containing an array of well-formed JSON elements
+ A JSON document containing multiple entities, separated by a newline
-The blob indexer provides a "parsingMode" parameter to optimize the output of the search document based on the structure. Parsing modes consist of the following options:
+The blob indexer provides a `parsingMode` parameter to optimize the output of the search document based on JSON structure. Parsing modes consist of the following options:
| parsingMode | JSON document | Description |
|--|-|--|
By default, blob indexers parse JSON blobs as a single chunk of text, one search
The blob indexer parses the JSON document into a single search document, loading an index by matching "text", "datePublished", and "tags" from the source against identically named and typed target index fields. Given an index with "text", "datePublished", and "tags" fields, the blob indexer can infer the correct mapping without a field mapping present in the request.
-Although the default behavior is one search document per JSON blob, setting the 'json' parsing mode changes the internal field mappings for content, promoting fields inside `content` to actual fields in the search index. An example indexer definition for the **`json`** parsing mode might look like this:
+Although the default behavior is one search document per JSON blob, setting the **`json`** parsing mode changes the internal field mappings for content, promoting fields inside `content` to actual fields in the search index. An example indexer definition for the **`json`** parsing mode might look like this:
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Alternatively, you can use the JSON array option. This option is useful when blo
] ```
-The "parameters" property on the indexer contains parsing mode values. For a JSON array, the indexer definition should look similar to the following example.
+The `parameters` property on the indexer contains parsing mode values. For a JSON array, the indexer definition should look similar to the following example.
```http POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
The data set consists of eight blobs, each containing a JSON array of entities,
### Parsing nested JSON arrays
-For JSON arrays having nested elements, you can specify a "documentRoot" to indicate a multi-level structure. For example, if your blobs look like this:
+For JSON arrays having nested elements, you can specify a `documentRoot` to indicate a multi-level structure. For example, if your blobs look like this:
```http {
api-key: [admin key]
## Map JSON fields to search fields
-Field mappings are used to associate a source field with a destination field in situations where the field names and types are not identical. But field mappings can also be used to match parts of a JSON document and "lift" them into top-level fields of the search document.
+Field mappings associate a source field with a destination field in situations where the field names and types aren't identical. But field mappings can also be used to match parts of a JSON document and "lift" them into top-level fields of the search document.
The following example illustrates this scenario. For more information about field mappings in general, see [field mappings](search-indexer-field-mappings.md).
search Search Howto Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-reindex.md
Title: Drop and rebuild an index
-description: Add new elements, update existing elements or documents, or delete obsolete documents in a full rebuild or partial indexing to refresh an Azure AI Search index.
+description: Re-index a search index to add or update the schema or delete obsolete documents using a full rebuild or partial indexing.
- ignite-2023 Previously updated : 02/07/2023 Last updated : 01/11/2024 # Drop and rebuild an index in Azure AI Search
-This article explains how to drop and rebuild an Azure AI Search index. It explains the circumstances under which rebuilds are required, and provides recommendations for mitigating the impact of rebuilds on ongoing query requests. If you have to rebuild frequently, we recommend using [index aliases](search-how-to-alias.md) to make it easier to swap which index your application is pointing to.
+This article explains how to drop and rebuild an Azure AI Search index. It explains the circumstances under which rebuilds are required, and provides recommendations for mitigating the effects of rebuilds on ongoing query requests. If you have to rebuild frequently, we recommend using [index aliases](search-how-to-alias.md) to make it easier to swap which index your application is pointing to.
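As a rough sketch of how an alias can absorb a rebuild, the following request creates an alias over a hypothetical index. The service name, alias name, index name, and preview API version are illustrative assumptions, not values from this article:

```http
POST https://[service name].search.windows.net/aliases?api-version=2023-10-01-Preview
Content-Type: application/json
api-key: [admin key]

{
  "name": "hotel-search-alias",
  "indexes": [ "hotels-v1" ]
}
```

After the rebuilt index is ready, updating the alias to list the new index lets applications keep querying the alias name without code changes.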
-During active development, it's common to drop and rebuild indexes when you're iterating over index design. Most developers work with a small representative sample of their data to facilitate this process.
+During active development, it's common to drop and rebuild indexes when you're iterating over index design. Most developers work with a small representative sample of their data so that reindexing goes faster.
## Modifications requiring a rebuild
-The following table lists the modifications that require an index rebuild.
+The following table lists the modifications that require an index drop and rebuild.
| Action | Description | |--|-|
-| Delete a field | To physically remove all traces of a field, you have to rebuild the index. When an immediate rebuild isn't practical, you can modify application code to disable access to the "deleted" field or use the [$select query parameter](search-query-odata-select.md) to choose which fields are represented in the result set. Physically, the field definition and contents remain in the index until the next rebuild, when you apply a schema that omits the field in question. |
-| Change a field definition | Revising a field name, data type, or specific [index attributes](/rest/api/searchservice/create-index) (searchable, filterable, sortable, facetable) requires a full rebuild. |
-| Assign an analyzer to a field | [Analyzers](search-analyzers.md) are defined in an index and then assigned to fields. You can add a new analyzer definition to an index at any time, but you can only *assign* an analyzer when the field is created. This is true for both the **analyzer** and **indexAnalyzer** properties. The **searchAnalyzer** property is an exception (you can assign this property to an existing field). |
+| Delete a field | To physically remove all traces of a field, you have to rebuild the index. When an immediate rebuild isn't practical, you can modify application code to redirect access away from an obsolete field or use the [searchFields](search-query-create.md#example-of-a-full-text-query-request) and [select](search-query-odata-select.md) query parameters to choose which fields are searched and returned. Physically, the field definition and contents remain in the index until the next rebuild, when you apply a schema that omits the field in question. |
+| Change a field definition | Revisions to a field name, data type, or specific [index attributes](/rest/api/searchservice/create-index) (searchable, filterable, sortable, facetable) require a full rebuild. |
+| Assign an analyzer to a field | [Analyzers](search-analyzers.md) are defined in an index, assigned to fields, and then invoked during indexing to inform how tokens are created. You can add a new analyzer definition to an index at any time, but you can only *assign* an analyzer when the field is created. This is true for both the **analyzer** and **indexAnalyzer** properties. The **searchAnalyzer** property is an exception (you can assign this property to an existing field). |
| Update or delete an analyzer definition in an index | You can't delete or change an existing analyzer configuration (analyzer, tokenizer, token filter, or char filter) in the index unless you rebuild the entire index. |
-| Add a field to a suggester | If a field already exists and you want to add it to a [Suggesters](index-add-suggesters.md) construct, you must rebuild the index. |
-| Switch tiers | In-place upgrades aren't supported. If you require more capacity, you must create a new service and rebuild your indexes from scratch. To help automate this process, you can use the **index-backup-restore** sample code in this [Azure AI Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-utilities). This app will back up your index to a series of JSON files, and then recreate the index in a search service you specify.|
+| Add a field to a suggester | If a field already exists and you want to add it to a [Suggesters](index-add-suggesters.md) construct, rebuild the index. |
+| Switch tiers | In-place upgrades aren't supported. If you require more capacity, create a new service and rebuild your indexes from scratch. To help automate this process, you can use the **index-backup-restore** sample code in this [Azure AI Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-utilities). This app backs up your index to a series of JSON files, and then recreates the index in a search service you specify.|
## Modifications with no rebuild requirement
During development, the index schema changes frequently. You can plan for it by
For applications already in production, we recommend creating a new index that runs side by side an existing index to avoid query downtime. Your application code provides redirection to the new index.
+1. Check for space. Search services are subject to a [maximum number of indexes](search-limits-quotas-capacity.md), varying by service tier. Make sure you have room for a second index.
+ 1. Determine whether a rebuild is required. If you're just adding fields, or changing some part of the index that is unrelated to fields, you might be able to simply [update the definition](/rest/api/searchservice/update-index) without deleting, recreating, and fully reloading it. 1. [Get an index definition](/rest/api/searchservice/get-index) in case you need it for future reference.
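When a rebuild isn't required and you're just adding fields, the change is an update to the index definition. Here's a minimal sketch, assuming a hypothetical `hotels` index with a `hotelId` key; an update must resend the full field collection, with the new field appended:

```http
PUT https://[service name].search.windows.net/indexes/hotels?api-version=2023-11-01
Content-Type: application/json
api-key: [admin key]

{
  "name": "hotels",
  "fields": [
    { "name": "hotelId", "type": "Edm.String", "key": true },
    { "name": "description", "type": "Edm.String", "searchable": true }
  ]
}
```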
search Search Indexer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-overview.md
An *indexer* in Azure AI Search is a crawler that extracts textual data from clo
Indexers also drive [skillset execution and AI enrichment](cognitive-search-concept-intro.md), where you can configure skills to integrate extra processing of content en route to an index. A few examples are OCR over image files, text split skill for data chunking, text translation for multiple languages.
-Indexers target[supported data sources](#supported-data-sources). An indexer configuration specifies a data source (origin) and a search index (destination). Several sources, such as Azure Blob Storage, have more configuration properties specific to that content type.
+Indexers target [supported data sources](#supported-data-sources). An indexer configuration specifies a data source (origin) and a search index (destination). Several sources, such as Azure Blob Storage, have more configuration properties specific to that content type.
You can run indexers on demand or on a recurring data refresh schedule that runs as often as every five minutes. More frequent updates require a ['push model'](search-what-is-data-import.md) that simultaneously updates data in both Azure AI Search and your external data source.
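As an illustrative sketch, a schedule is an ISO 8601 interval on the indexer definition. The indexer, data source, and index names below are hypothetical:

```http
POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
Content-Type: application/json
api-key: [admin key]

{
  "name": "hotel-blob-indexer",
  "dataSourceName": "hotel-blob-datasource",
  "targetIndexName": "hotels",
  "schedule": {
    "interval": "PT2H",
    "startTime": "2024-01-01T00:00:00Z"
  }
}
```

Setting `interval` to `PT5M` would run the indexer at the five-minute minimum mentioned above.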
search Search Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-limits-quotas-capacity.md
Azure AI Search is rolling out increased vector index size limits worldwide for
The following regions **do not** support increased limits: -- Germany West Central-- Jio India West-- Qatar Central++ Germany West Central++ West India++ Qatar Central | Tier | Storage quota (GB) | Vector quota per partition (GB) | Approx. floats per partition (assuming 15% overhead) | | -- | | | - |
Maximum running times exist to provide balance and stability to the service as a
<sup>4</sup> Maximum of 30 skills per skillset.
-<sup>5</sup> Regarding the 2 or 24 hour maximum duration for indexers: a 2-hour maximum is the most common and it's what you should plan for. The 24-hour limit is from an older indexer implementation. If you have unscheduled indexers that run continuously for 24 hours, it's because those indexers couldn't be migrated to the newer runtime behavior. For extra large data sets, indexers can be made to run longer than maximum limits if you put them on a [2-hour run time schedule](search-howto-schedule-indexers.md). When the first 2-hour interval is complete, the indexer picks up where it left off to start the next 2-hour interval.
+<sup>5</sup> Regarding the 2 or 24 hour maximum duration for indexers: a 2-hour maximum is the most common and it's what you should plan for. The 24-hour limit is from an older indexer implementation. If you have unscheduled indexers that run continuously for 24 hours, it's because those indexers couldn't be migrated to the newer infrastructure. As a general rule, for indexing jobs that can't finish within two hours, put the indexer on a [2-hour schedule](search-howto-schedule-indexers.md). When the first 2-hour interval is complete, the indexer picks up where it left off when starting the next 2-hour interval.
<sup>6</sup> Skillset execution, and image analysis in particular, are computationally intensive and consume disproportionate amounts of available processing power. Running time for these workloads has been shortened to give other jobs in the queue more opportunity to run.
search Search Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage.md
Title: Service administration in the portal
+ Title: Portal administration
-description: Manage an Azure AI Search service, a hosted cloud search service on Microsoft Azure, using the Azure portal.
+description: Manage an Azure AI Search resource using the Azure portal.
- ignite-2023 Previously updated : 01/12/2023 Last updated : 01/12/2024 + # Service administration for Azure AI Search in the Azure portal > [!div class="op_single_selector"]
Last updated 01/12/2023
> * [Portal](search-manage.md) > * [Python](https://pypi.python.org/pypi/azure-mgmt-search/0.1.0)>
-Azure AI Search is a fully managed, cloud-based search service used for building a rich search experience into custom apps. This article covers the administration tasks that you can perform in the [Azure portal](https://portal.azure.com) for a search service that you've already created.
+This article covers the Azure AI Search administration tasks that you can perform in the [Azure portal](https://portal.azure.com).
-Depending on your permission level, the portal covers virtually all aspects of search service operations, including:
+Depending on your permission level, the portal provides coverage of most search service operations, including:
* [Service administration](#management-tasks) * Content management
Each search service is managed as a standalone resource. The following image sho
## Overview (home) page
-The overview page is the "home" page of each service. In the following screenshot, the areas on the screen enclosed in red boxes indicate tasks, tools, and tiles that you might use often, especially if you're new to the service.
+The overview page is the "home" page of each service. In the following screenshot, the red boxes indicate tasks, tools, and tiles that you might use often, especially if you're new to the service.
:::image type="content" source="media/search-manage/search-portal-overview-page.png" alt-text="Portal pages for a search service" border="true"::: | Area | Description | ||-|
-| 1 | The **Essentials** section lists service properties, such as the service endpoint, service tier, and replica and partition counts. |
-| 2 | A command bar at the top of the page includes [Import data](search-get-started-portal.md) and [Search explorer](search-explorer.md), used for prototyping and exploration. |
-| 3 | Tabbed pages in the center provide quick access to usage statistics, service health metrics, and access to all of the existing indexes, indexers, data sources, and skillsets.|
-| 4 | Navigation links to other pages. |
+| 1 | A command bar at the top of the page includes [Import data wizard](search-get-started-portal.md) and [Search explorer](search-explorer.md), used for prototyping and exploration. |
+| 2 | The **Essentials** section lists service properties, such as the service endpoint, service tier, and replica and partition counts. |
+| 3 | Tabbed pages in the center provide quick access to usage statistics and service health metrics.|
+| 4 | Navigation links to existing indexes, indexers, data sources, and skillsets. |
### Read-only service properties
Several aspects of a search service are determined when the service is provision
* Service location <sup>1</sup> * Service tier <sup>2</sup>
-<sup>1</sup> Although there are ARM and bicep templates for service deployment, moving content is a manual job.
+<sup>1</sup> Although there are ARM and bicep templates for service deployment, moving content is a manual effort.
<sup>2</sup> Switching a tier requires creating a new service or filing a support ticket to request a tier upgrade.
search Search Normalizers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-normalizers.md
- ignite-2023 Previously updated : 07/14/2022 Last updated : 01/11/2024 # Text normalization for case-insensitive filtering, faceting and sorting
Last updated 07/14/2022
> [!IMPORTANT] > This feature is in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). The [preview REST API](/rest/api/searchservice/index-preview) supports this feature.
-In Azure AI Search, a *normalizer* is a component that pre-processes text for keyword matching over fields marked as "filterable", "facetable", or "sortable". In contrast with full text "searchable" fields that are paired with [text analyzers](search-analyzers.md), content that's created for filter-facet-sort operations doesn't undergo analysis or tokenization. Omission of text analysis can produce unexpected results when casing and character differences show up.
+In Azure AI Search, a *normalizer* is a component that pre-processes text for keyword matching over fields marked as "filterable", "facetable", or "sortable". In contrast with full text "searchable" fields that are paired with [text analyzers](search-analyzers.md), content that's created for filter-facet-sort operations doesn't undergo analysis or tokenization. Omission of text analysis can produce unexpected results when casing and character differences show up, which is why you need a normalizer to homogenize variations in your content.
By applying a normalizer, you can achieve light text transformations that improve results:
Searching and retrieving documents from a search index requires matching the que
Because non-tokenized content is also not analyzed, small differences in the content are evaluated as distinctly different values. Consider the following examples:
-+ `$filter=City eq 'Las Vegas'` will only return documents that contain the exact text "Las Vegas" and exclude documents with "LAS VEGAS" and "las vegas", which is inadequate when the use-case requires all documents regardless of the casing.
++ `$filter=City eq 'Las Vegas'` will only return documents that contain the exact text `"Las Vegas"` and exclude documents with `"LAS VEGAS"` and `"las vegas"`, which is inadequate when the use-case requires all documents regardless of the casing.
-+ `search=*&facet=City,count:5` will return "Las Vegas", "LAS VEGAS" and "las vegas" as distinct values despite being the same city.
++ `search=*&facet=City,count:5` will return `"Las Vegas"`, `"LAS VEGAS"` and `"las vegas"` as distinct values despite being the same city.
-+ `search=usa&$orderby=City` will return the cities in lexicographical order: "Las Vegas", "Seattle", "las vegas", even if the intent is to order the same cities together irrespective of the case.
++ `search=usa&$orderby=City` will return the cities in lexicographical order: `"Las Vegas"`, `"Seattle"`, `"las vegas"`, even if the intent is to order the same cities together irrespective of the case.
-A normalizer, which is invoked during indexing and query execution, adds light transformations that smooth out minor differences in text for filter, facet, and sort scenarios. In the previous examples, the variants of "Las Vegas" would be processed according to the normalizer you select (for example, all text is lower-cased) for more uniform results.
+A normalizer, which is invoked during indexing and query execution, adds light transformations that smooth out minor differences in text for filter, facet, and sort scenarios. In the previous examples, the variants of `"Las Vegas"` would be processed according to the normalizer you select (for example, all text is lower-cased) for more uniform results.
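To make that concrete, here's a hedged example that assumes a hypothetical `hotels` index where `City` is filterable and assigned the predefined `lowercase` normalizer. Because normalization applies to both the indexed values and the filter value, every casing variant of the city matches:

```http
POST https://[service name].search.windows.net/indexes/hotels/docs/search?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [query key]

{
  "search": "*",
  "filter": "City eq 'Las Vegas'"
}
```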
## How to specify a normalizer
-Normalizers are specified in an index definition, on a per-field basis, on text fields (`Edm.String` and `Collection(Edm.String)`) that have at least one of "filterable", "sortable", or "facetable" properties set to true. Setting a normalizer is optional and it's null by default. We recommend evaluating predefined normalizers before configuring a custom one.
+Normalizers are specified in an index definition, on a per-field basis, on text fields (`Edm.String` and `Collection(Edm.String)`) that have at least one of "filterable", "sortable", or "facetable" properties set to true. Setting a normalizer is optional and is null by default. We recommend evaluating predefined normalizers before configuring a custom one.
Normalizers can only be specified when you add a new field to the index, so if possible, try to assess the normalization needs upfront and assign normalizers in the initial stages of development when dropping and recreating indexes is routine.
Normalizers can only be specified when you add a new field to the index, so if p
``` > [!NOTE]
-> To change the normalizer of an existing field, you'll have to rebuild the index entirely (you cannot rebuild individual fields).
+> To change the normalizer of an existing field, [rebuild the index](search-howto-reindex.md) entirely (you cannot rebuild individual fields).
A good workaround for production indexes, where rebuilding indexes is costly, is to create a new field identical to the old one but with the new normalizer, and use it in place of the old one. Use [Update Index](/rest/api/searchservice/update-index) to incorporate the new field and [mergeOrUpload](/rest/api/searchservice/addupdate-or-delete-documents) to populate it, as sketched below. Later, as part of planned index servicing, you can clean up the index to remove obsolete fields.
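The following sketch shows that workaround under assumed names: a hypothetical `CityNormalized` field is added next to the existing `City` field. The `normalizer` property can only be set on new fields and requires a preview API version:

```http
PUT https://[service name].search.windows.net/indexes/hotels?api-version=2020-06-30-Preview
Content-Type: application/json
api-key: [admin key]

{
  "name": "hotels",
  "fields": [
    { "name": "hotelId", "type": "Edm.String", "key": true },
    { "name": "City", "type": "Edm.String", "filterable": true, "facetable": true },
    { "name": "CityNormalized", "type": "Edm.String", "filterable": true, "facetable": true, "normalizer": "lowercase" }
  ]
}
```

Afterward, a `mergeOrUpload` of the existing documents populates the new field.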
Azure AI Search provides built-in normalizers for common use-cases along with th
|standard| Lowercases the text followed by asciifolding.| |lowercase| Transforms characters to lowercase.| |uppercase| Transforms characters to uppercase.|
-|asciifolding| Transforms characters that aren't in the Basic Latin Unicode block to their ASCII equivalent, if one exists. For example, changing à to a.|
+|asciifolding| Transforms characters that aren't in the Basic Latin Unicode block to their ASCII equivalent, if one exists. For example, changing `à` to `a`.|
|elision| Removes elision from beginning of the tokens.| ### Supported char filters
search Search Query Understand Collection Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-understand-collection-filters.md
Title: Understanding OData collection filters
+ Title: OData collection filters
description: Learn the mechanics of how OData collection filters work in Azure AI Search queries, including limitations and behaviors unique to collections.
- ignite-2023 Previously updated : 01/30/2023 Last updated : 01/11/2024
-# Understanding OData collection filters in Azure AI Search
-To [filter](query-odata-filter-orderby-syntax.md) on collection fields in Azure AI Search, you can use the [`any` and `all` operators](search-query-odata-collection-operators.md) together with **lambda expressions**. Lambda expressions are Boolean expressions that refer to a **range variable**. The `any` and `all` operators are analogous to a `for` loop in most programming languages, with the range variable taking the role of loop variable, and the lambda expression as the body of the loop. The range variable takes on the "current" value of the collection during iteration of the loop.
+# Understand how OData collection filters work in Azure AI Search
-At least that's how it works conceptually. In reality, Azure AI Search implements filters in a very different way to how `for` loops work. Ideally, this difference would be invisible to you, but in certain situations it isn't. The end result is that there are rules you have to follow when writing lambda expressions.
+This article provides background for developers who are writing advanced filters with complex lambda expressions. The article explains why the rules for collection filters exist by exploring how Azure AI Search executes these filters.
+
+When you build a [filter](query-odata-filter-orderby-syntax.md) on collection fields in Azure AI Search, you can use the [`any` and `all` operators](search-query-odata-collection-operators.md) together with **lambda expressions**. Lambda expressions are Boolean expressions that refer to a **range variable**. In filters that use a lambda expression, the `any` and `all` operators are analogous to a `for` loop in most programming languages, with the range variable taking the role of loop variable, and the lambda expression as the body of the loop. The range variable takes on the "current" value of the collection during iteration of the loop.
-This article explains why the rules for collection filters exist by exploring how Azure AI Search executes these filters. If you're writing advanced filters with complex lambda expressions, you may find this article helpful in building your understanding of what's possible in filters and why.
+At least that's how it works conceptually. In reality, Azure AI Search implements filters very differently from how `for` loops work. Ideally, this difference would be invisible to you, but in certain situations it isn't. The end result is that there are rules you have to follow when writing lambda expressions.
-For information on what the rules for collection filters are, including examples, see [Troubleshooting OData collection filters in Azure AI Search](search-query-troubleshoot-collection-filters.md).
+> [!NOTE]
+> For information on what the rules for collection filters are, including examples, see [Troubleshooting OData collection filters in Azure AI Search](search-query-troubleshoot-collection-filters.md).
## Why collection filters are limited
-There are three underlying reasons why not all filter features are supported for all types of collections:
+There are three underlying reasons why filter features aren't fully supported for all types of collections:
1. Only certain operators are supported for certain data types. For example, it doesn't make sense to compare the Boolean values `true` and `false` using `lt`, `gt`, and so on.
-1. Azure AI Search doesn't support **correlated search** on fields of type `Collection(Edm.ComplexType)`.
+1. Azure AI Search doesn't support *correlated search* on fields of type `Collection(Edm.ComplexType)`.
1. Azure AI Search uses inverted indexes to execute filters over all types of data, including collections. The first reason is just a consequence of how the OData language and EDM type system are defined. The last two are explained in more detail in the rest of this article. ## Correlated versus uncorrelated search
-When applying multiple filter criteria over a collection of complex objects, the criteria are **correlated** since they apply to *each object in the collection*. For example, the following filter will return hotels that have at least one deluxe room with a rate less than 100:
+When you apply multiple filter criteria over a collection of complex objects, the criteria are correlated because they apply to *each object in the collection*. For example, the following filter returns hotels that have at least one deluxe room with a rate less than 100:
```odata-filter-expr Rooms/any(room: room/Type eq 'Deluxe Room' and room/BaseRate lt 100)
However, for full-text search, there's no way to refer to a specific range varia
Rooms/Type:deluxe AND Rooms/Description:"city view" ```
-you may get hotels back where one room is deluxe, and a different room mentions "city view" in the description. For example, the document below with `Id` of `1` would match the query:
+you might get hotels back where one room is deluxe, and a different room mentions "city view" in the description. For example, the document below with `Id` of `1` would match the query:
```json {
So unlike the filter above, which basically says "match documents where a room h
## Inverted indexes and collections
-You may have noticed that there are far fewer restrictions on lambda expressions over complex collections than there are for simple collections like `Collection(Edm.Int32)`, `Collection(Edm.GeographyPoint)`, and so on. This is because Azure AI Search stores complex collections as actual collections of sub-documents, while simple collections aren't stored as collections at all.
+You might have noticed that there are far fewer restrictions on lambda expressions over complex collections than there are for simple collections like `Collection(Edm.Int32)`, `Collection(Edm.GeographyPoint)`, and so on. This is because Azure AI Search stores complex collections as actual collections of subdocuments, while simple collections aren't stored as collections at all.
For example, consider a filterable string collection field like `seasons` in an index for an online retailer. Some documents uploaded to this index might look like this:
The values of the `seasons` field are stored in a structure called an **inverted
This data structure is designed to answer one question with great speed: In which documents does a given term appear? Answering this question works more like a plain equality check than a loop over a collection. In fact, this is why for string collections, Azure AI Search only allows `eq` as a comparison operator inside a lambda expression for `any`.
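As a hedged sketch against a hypothetical `products` index that contains the filterable `seasons` collection from the preceding example, an equality check inside `any` maps directly onto an inverted index lookup:

```http
POST https://[service name].search.windows.net/indexes/products/docs/search?api-version=2023-11-01
Content-Type: application/json
api-key: [query key]

{
  "search": "*",
  "filter": "seasons/any(s: s eq 'winter')"
}
```

A range comparison such as `s lt 'winter'` inside the same lambda would be rejected, because it can't be answered by a term lookup.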
-Building up from equality, next we'll look at how it's possible to combine multiple equality checks on the same range variable with `or`. It works thanks to algebra and [the distributive property of quantifiers](https://en.wikipedia.org/wiki/Existential_quantification#Negation). This expression:
+Next, we look at how it's possible to combine multiple equality checks on the same range variable with `or`. It works thanks to algebra and [the distributive property of quantifiers](https://en.wikipedia.org/wiki/Existential_quantification#Negation). This expression:
```odata-filter-expr seasons/any(s: s eq 'winter' or s eq 'fall')
In summary, here are the rules of thumb for what's allowed in a lambda expressio
- Inside `any`, *positive checks* are always allowed, like equality, range comparisons, `geo.intersects`, or `geo.distance` compared with `lt` or `le` (think of "closeness" as being like equality when it comes to checking distance). - Inside `any`, `or` is always allowed. You can use `and` only for data types that can express range checks, and only if you use ORs of ANDs (DNF).-- Inside `all`, the rules are reversed -- only *negative checks* are allowed, you can use `and` always, and you can use `or` only for range checks expressed as ANDs of ORs (CNF).
+- Inside `all`, the rules are reversed. Only *negative checks* are allowed, you can use `and` always, and you can use `or` only for range checks expressed as ANDs of ORs (CNF).
In practice, these are the types of filters you're most likely to use anyway. It's still helpful to understand the boundaries of what's possible though.
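As one more hedged illustration against that same hypothetical `products` index, a filter over `all` uses negative checks, which the rules above allow:

```http
POST https://[service name].search.windows.net/indexes/products/docs/search?api-version=2023-11-01
Content-Type: application/json
api-key: [query key]

{
  "search": "*",
  "filter": "seasons/all(s: s ne 'winter')"
}
```

This returns only documents whose `seasons` collection never mentions `winter`.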
search Search Security Trimming For Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-trimming-for-azure-search.md
- ignite-2023 Previously updated : 03/24/2023 Last updated : 01/10/2024 # Security filters for trimming results in Azure AI Search
A better solution is using the `search.in` function for security filters, as des
## Prerequisites
-* The field containing group or user identity must be a string with the "filterable" attribute. It should be a collection. It shouldn't allow nulls.
+* The field containing group or user identity must be a filterable string collection (`Collection(Edm.String)`). It shouldn't allow nulls.
* Other fields in the same document should provide the content that's accessible to that group or user. In the following JSON documents, the "security_id" fields contain identities used in a security filter, and the name, salary, and marital status will be included if the identity of the caller matches the "security_id" of the document.
In the search index, within the field collection, you need one field that contai
1. Indexes require a document key. The "file_id" field satisfies that requirement. Indexes should also contain searchable content. The "file_name" and "file_description" fields represent that in this example. ```https
- POST https://[search service].search.windows.net/indexes/securedfiles/docs/index?api-version=2020-06-30
+ POST https://[search service].search.windows.net/indexes/securedfiles/docs/index?api-version=2023-11-01
{ "name": "securedfiles", "fields": [
In the search index, within the field collection, you need one field that contai
## Push data into your index using the REST API
-Issue an HTTP POST request to your index's URL endpoint. The body of the HTTP request is a JSON object containing the documents to be indexed:
+Send an HTTP POST request to the docs collection of your index's URL endpoint (see [Documents - Index](/rest/api/searchservice/documents/)). The body of the HTTP request is a JSON rendering of the documents to be indexed:
```http
-POST https://[search service].search.windows.net/indexes/securedfiles/docs/index?api-version=2020-06-30
+POST https://[search service].search.windows.net/indexes/securedfiles/docs/index?api-version=2023-11-01
``` In the request body, specify the content of your documents:
If you need to update an existing document with the list of groups, you can use
} ```
-For more information on uploading documents, see [Add, Update, or Delete Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents).
- ## Apply the security filter in the query In order to trim documents based on `group_ids` access, you should issue a search query with a `group_ids/any(g:search.in(g, 'group_id1, group_id2,...'))` filter, where 'group_id1, group_id2,...' are the groups to which the search request issuer belongs. This filter matches all documents for which the `group_ids` field contains one of the given identifiers.
-For full details on searching documents using Azure AI Search, you can read [Search Documents](/rest/api/searchservice/search-documents).
+For full details on searching documents using Azure AI Search, you can read [Search Documents](/rest/api/searchservice/documents/search-post).
This sample shows how to set up a query using a POST request.
You should get the documents back where `group_ids` contains either "group_id1"
## Next steps
-This article described a pattern for filtering results based on user identity and the `search.in()` function. You can use this function to pass in principal identifiers for the requesting user to match against principal identifiers associated with each target document. When a search request is handled, the `search.in` function filters out search results for which none of the user's principals have read access. The principal identifiers can represent things like security groups, roles, or even the user's own identity.
+This article describes a pattern for filtering results based on user identity and the `search.in()` function. You can use this function to pass in principal identifiers for the requesting user to match against principal identifiers associated with each target document. When a search request is handled, the `search.in` function filters out search results for which none of the user's principals have read access. The principal identifiers can represent things like security groups, roles, or even the user's own identity.
For an alternative pattern based on Microsoft Entra ID, or to revisit other security features, see the following links.
search Service Create Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-create-private-endpoint.md
- ignite-2023 Previously updated : 09/12/2022 Last updated : 01/10/2024 # Create a Private Endpoint for a secure connection to Azure AI Search
-In this article, you'll learn how to secure an Azure AI Search service so that it can't be accessed over the internet:
+In this article, learn how to secure an Azure AI Search service so that it can't be accessed over a public internet connection:
+ [Create an Azure virtual network](#create-the-virtual-network) (or use an existing one)
-+ [Create a search service to use a private endpoint](#create-a-search-service-with-a-private-endpoint)
++ [Configure a search service to use a private endpoint](#create-a-search-service-with-a-private-endpoint) + [Create an Azure virtual machine in the same virtual network](#create-a-virtual-machine)
-+ [Connect to search using a browser session on the virtual machine](#connect-to-the-vm)
++ [Test using a browser session on the virtual machine](#connect-to-the-vm) Private endpoints are provided by [Azure Private Link](../private-link/private-link-overview.md), as a separate billable service. For more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/).
-You can create a private endpoint in the Azure portal, as described in this article. Alternatively, you can use the [Management REST API version](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or [Azure CLI](/cli/azure/search).
+You can create a private endpoint for a search service in the Azure portal, as described in this article. Alternatively, you can use the [Management REST API version](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or [Azure CLI](/cli/azure/search).
> [!NOTE] > Once a search service has a private endpoint, portal access to that service must be initiated from a browser session on a virtual machine inside the virtual network. See [this step](#portal-access-private-search-service) for details.
sentinel Create Codeless Connector Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-codeless-connector-legacy.md
Use one of the following methods:
- **Azure portal**: In your Microsoft Sentinel data connector page, select **Disconnect**. -- **API**: Use the [DISCONNECT](/rest/api/securityinsights/preview/data-connectors/disconnect) API to send a PUT call with an empty body to the following URL:
+- **API**: Use the *DISCONNECT* API to send a PUT call with an empty body to the following URL:
```http https://management.azure.com/subscriptions/{{SUB}}/resourceGroups/{{RG}}/providers/Microsoft.OperationalInsights/workspaces/{{WS-NAME}}/providers/Microsoft.SecurityInsights/dataConnectors/{{Connector_Id}}/disconnect?api-version=2021-03-01-preview
sentinel Workspace Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/workspace-manager.md
Common reasons for failure include:
### API references - [Workspace Manager Assignment Jobs](/rest/api/securityinsights/preview/workspace-manager-assignment-jobs) - [Workspace Manager Assignments](/rest/api/securityinsights/preview/workspace-manager-assignments)-- [Workspace Manager Configurations](/rest/api/securityinsights/preview/workspace-manager-configurations)
+- *Workspace Manager Configurations*
- [Workspace Manager Groups](/rest/api/securityinsights/preview/workspace-manager-groups) - [Workspace Manager Members](/rest/api/securityinsights/preview/workspace-manager-members)
service-fabric Service Fabric Cluster Creation Setup Azure Ad Via Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-setup-azure-ad-via-portal.md
Enter the following information for an admin user, and then select **Apply**:
Enter the following information for a read-only user, and then select **Apply**: - **Display name**: Enter **ReadOnly**. - **Allowed member types**: Select **Users/Groups**.-- **Value**: Enter **ReadOnly**.
+- **Value**: Enter **User**.
- **Description**: Enter **ReadOnly roles have limited query access**. ![Screenshot of selections for creating a read-only user role in the portal.](media/service-fabric-cluster-creation-setup-azure-ad-via-portal/portal-cluster-roles-readonly.png)
site-recovery Failover Failback Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/failover-failback-overview.md
# About on-premises disaster recovery failover/failback - Classic
-This article provides an overview of failover and failback during disaster recovery of on-premises machines to Azure with [Azure Site Recovery](site-recovery-overview.md) - Classic.
+[Azure Site Recovery](site-recovery-overview.md) contributes to your business continuity and disaster recovery (BCDR) strategy by keeping your business apps up and running during planned and unplanned outages. Site Recovery manages and orchestrates disaster recovery of on-premises machines and Azure virtual machines (VMs). Disaster recovery includes replication, failover, and recovery of various workloads.
-For information about failover and failback in Azure Site Recovery Modernized release, [see this article](failover-failback-overview-modernized.md).
+> [!IMPORTANT]
+> This article provides an overview of failover and failback during disaster recovery of on-premises machines to Azure with [Azure Site Recovery](site-recovery-overview.md) - Classic.
+><br>
+> For information about failover and failback in Azure Site Recovery Modernized release, [see this article](failover-failback-overview-modernized.md).
## Recovery stages
site-recovery Hyper V Vmm Network Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-vmm-network-mapping.md
Title: About Hyper-V (with VMM) network mapping with Site Recovery
description: Describes how to set up network mapping for disaster recovery of Hyper-V VMs (managed in VMM clouds) to Azure, with Azure Site Recovery. Previously updated : 11/14/2019 Last updated : 01/10/2024
Network mapping works as follows:
- If the target network has multiple subnets, and one of those subnets has the same name as the subnet on which the source virtual machine is located, then the replica virtual machine connects to that target subnet after failover. - If there's no target subnet with a matching name, the virtual machine connects to the first subnet in the network.
-## Prepare network mapping for replication to a secondary site
-
-When you're replicating to a secondary site, network mapping maps between VM networks on a source VMM server, and VM networks on a target VMM server. Mapping does the following:
-- **Network connection**: Connects VMs to appropriate networks after failover. The replica VM will be connected to the target network that's mapped to the source network.- **Optimal VM placement**: Optimally places the replica VMs on Hyper-V host servers. Replica VMs are placed on hosts that can access the mapped VM networks.- **No network mapping**: If you don't configure network mapping, replica VMs won't be connected to any VM networks after failover.-
-Network mapping works as follows:
-- Network mapping can be configured between VM networks on two VMM servers, or on a single VMM server if two sites are managed by the same server.- When mapping is configured correctly and replication is enabled, a VM at the primary location will be connected to a network, and its replica at the target location will be connected to its mapped network.- When you select a target VM network during network mapping in Site Recovery, the VMM source clouds that use the source VM network will be displayed, along with the available target VM networks on the target clouds that are used for protection.- If the target network has multiple subnets and one of those subnets has the same name as the subnet on which the source virtual machine is located, then the replica VM will be connected to that target subnet after failover. If there's no target subnet with a matching name, the VM will be connected to the first subnet in the network.- ## Example Here's an example to illustrate this mechanism. Let's take an organization with two locations in New York and Chicago.
site-recovery Site Recovery Backup Interoperability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-backup-interoperability.md
Previously updated : 12/15/2023 Last updated : 01/10/2024 # Support for using Site Recovery with Azure Backup
+> [!NOTE]
+> Running the MARS agent for both Azure Site Recovery and Azure Backup on the same Hyper-V host isn't supported.
+ This article summarizes support for using the [Site Recovery service](site-recovery-overview.md) together with the [Azure Backup service](../backup/backup-overview.md). **Action** | **Site Recovery support** | **Details**
This article summarizes support for using the [Site Recovery service](site-recov
**Disk restore** | No current support | If you restore a backed up disk, you need to disable and re-enable replication for the VM again. **VM restore** | No current support | If you restore a VM or group of VMs, you need to disable and re-enable replication for the VM.
-Please note that the above table is applicable across all supported Azure Site Recovery scenarios.
+> [!IMPORTANT]
+> The above table is applicable across all supported Azure Site Recovery scenarios.
site-recovery Site Recovery Workload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-workload.md
Title: About disaster recovery for on-premises apps with Azure Site Recovery
description: Describes the workloads that can be protected using disaster recovery with the Azure Site Recovery service. Previously updated : 03/18/2020 Last updated : 01/10/2024
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-application-configuration-service.md
This article shows you how to use Application Configuration Service for VMware T
[Application Configuration Service for VMware Tanzu](https://docs.vmware.com/en/Application-Configuration-Service-for-VMware-Tanzu/2.0/acs/GUID-overview.html) is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native `ConfigMap` resources that are populated from properties defined in one or more Git repositories.
-With Application Configuration Service for Tanzu, you have a central place to manage external properties for applications across all environments. To understand the differences from Spring Cloud Config Server in Basic/Standard, see the [Use Application Configuration Service for external configuration](./how-to-migrate-standard-tier-to-enterprise-tier.md#use-application-configuration-service-for-external-configuration) section of [Migrate an Azure Spring Apps Basic or Standard plan instance to the Enterprise plan](./how-to-migrate-standard-tier-to-enterprise-tier.md).
+With Application Configuration Service, you have a central place to manage external properties for applications across all environments. To understand the differences from Spring Cloud Config Server in the Basic and Standard plans, see the [Use Application Configuration Service for external configuration](./how-to-migrate-standard-tier-to-enterprise-tier.md#use-application-configuration-service-for-external-configuration) section of [Migrate an Azure Spring Apps Basic or Standard plan instance to the Enterprise plan](./how-to-migrate-standard-tier-to-enterprise-tier.md).
Application Configuration Service is offered in two versions: Gen1 and Gen2. The Gen1 version mainly serves existing customers for backward compatibility purposes, and is supported only until April 30, 2024. New service instances should use Gen2. The Gen2 version uses [flux](https://fluxcd.io/) as the backend to communicate with Git repositories, and provides better performance compared to Gen1.
+The following table shows the subcomponent relationships:
+
+| Application Configuration Service generation | Subcomponents |
+| -- | |
+| Gen1 | `application-configuration-service` |
+| Gen2 | `application-configuration-service` <br/> `flux-source-controller` |
+ The following table shows some benchmark data for your reference. However, the Git repository size is a key factor with significant impact on the performance data. We recommend that you store only the necessary configuration files in the Git repository in order to keep it small. | Application Configuration Service generation | Duration to refresh under 100 patterns | Duration to refresh under 250 patterns | Duration to refresh under 500 patterns |
You can choose the version of Application Configuration Service when you create
## Prerequisites -- An already provisioned Azure Spring Apps Enterprise plan instance with Application Configuration Service for Tanzu enabled. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md).-
- > [!NOTE]
- > To use Application Configuration Service for Tanzu, you must enable it when you provision your Azure Spring Apps service instance. You can't enable it after you provision the instance.
+- An already provisioned Azure Spring Apps Enterprise plan instance with Application Configuration Service enabled. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise plan](quickstart-deploy-apps-enterprise.md).
-## Manage Application Configuration Service for Tanzu settings
+## Manage Application Configuration Service settings
-Application Configuration Service for Tanzu supports Azure DevOps, GitHub, GitLab, and Bitbucket for storing your configuration files.
+Application Configuration Service supports Azure DevOps, GitHub, GitLab, and Bitbucket for storing your configuration files.
To manage the service settings, open the **Settings** section and add a new entry under the **Repositories** section. The following table describes the properties for each entry.
Configuration is pulled from Git backends using what you define in a pattern. A
### Authentication
-The following screenshot shows the three types of repository authentication supported by Application Configuration Service for Tanzu.
+The following screenshot shows the three types of repository authentication supported by Application Configuration Service.
The following list describes the three authentication types:
The following list describes the three authentication types:
| Property | Required? | Description | |-|-|-| | `Private key` | Yes | The private key that identifies the Git user. Passphrase-encrypted private keys aren't supported. |
- | `Host key` | No for Gen1 <br> Yes for Gen2 | The host key of the Git server. If you've connected to the server via Git on the command line, the host key is in your *.ssh/known_hosts* file. Don't include the algorithm prefix, because it's specified in `Host key algorithm`. |
+ | `Host key` | No for Gen1 <br> Yes for Gen2 | The host key of the Git server. If you connect to the server via Git on the command line, the host key is in your *.ssh/known_hosts* file. Don't include the algorithm prefix, because it's specified in `Host key algorithm`. |
| `Host key algorithm` | No for Gen1 <br> Yes for Gen2 | The algorithm for `hostKey`: one of `ssh-dss`, `ssh-rsa`, `ecdsa-sha2-nistp256`, `ecdsa-sha2-nistp384`, and `ecdsa-sha2-nistp521`. (Required if supplying `Host key`). | | `Strict host key checking` | No | Optional value that indicates whether the backend should be ignored if it encounters an error when using the provided `Host key`. Valid values are `true` and `false`. The default value is `true`. |
Gen2 requires more configuration properties than Gen1 when using SSH authenticat
| Property | Description | |-|-|
-| `Host key` | The host key of the Git server. If you've connected to the server via Git on the command line, the host key is in your *.ssh/known_hosts* file. Don't include the algorithm prefix, because it's specified in `Host key algorithm`. |
+| `Host key` | The host key of the Git server. If you connect to the server via Git on the command line, the host key is in your *.ssh/known_hosts* file. Don't include the algorithm prefix, because it's specified in `Host key algorithm`. |
| `Host key algorithm` | The algorithm for `hostKey`: one of `ssh-dss`, `ssh-rsa`, `ecdsa-sha2-nistp256`, `ecdsa-sha2-nistp384`, or `ecdsa-sha2-nistp521`. | Use the following steps to upgrade from Gen1 to Gen2: 1. In the Azure portal, navigate to the Application Configuration Service page for your Azure Spring Apps service instance.
-1. Select the **Settings** section, and then select **Gen 2** in the **Generation** dropdown menu.
+1. Select the **Settings** section and then select **Gen 2** in the **Generation** dropdown menu.
- :::image type="content" source="media/how-to-enterprise-application-configuration-service/config-server-upgrade-gen2.png" alt-text="Screenshot of the Azure portal showing the Application Configuration Service page with the Settings tab showing and the Generation menu open." lightbox="media/how-to-enterprise-application-configuration-service/config-server-upgrade-gen2.png":::
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-server-upgrade-gen2.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page with the Settings tab showing and the Generation menu open." lightbox="media/how-to-enterprise-application-configuration-service/configuration-server-upgrade-gen2.png":::
1. Select **Validate** to validate access to the target URI. After validation completes successfully, select **Apply** to update the configuration settings.
- :::image type="content" source="media/how-to-enterprise-application-configuration-service/config-server-upgrade-gen2-settings.png" alt-text="Screenshot of the Azure portal showing the Application Configuration Service page with the Settings tab showing and the Validate button highlighted." lightbox="media/how-to-enterprise-application-configuration-service/config-server-upgrade-gen2-settings.png":::
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-server-upgrade-gen2-settings.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page with the Settings tab showing and the Validate button highlighted." lightbox="media/how-to-enterprise-application-configuration-service/configuration-server-upgrade-gen2-settings.png":::
## Polyglot support
-The Application Configuration Service for Tanzu works seamlessly with Spring Boot applications. The properties generated by the service are imported as external configurations by Spring Boot and injected into the beans. You don't need to write extra code. You can consume the values by using the `@Value` annotation, accessed through Spring's Environment abstraction, or you can bind them to structured objects by using the `@ConfigurationProperties` annotation.
+The Application Configuration Service works seamlessly with Spring Boot applications. The properties generated by the service are imported as external configurations by Spring Boot and injected into the beans. You don't need to write extra code. You can consume the values by using the `@Value` annotation, accessed through Spring's Environment abstraction, or you can bind them to structured objects by using the `@ConfigurationProperties` annotation.
The Application Configuration Service also supports polyglot apps like .NET, Go, and Python. To access the config files that you specify to load during polyglot app deployment, read the file path from an environment variable with a name such as `AZURE_SPRING_APPS_CONFIG_FILE_PATH`. All your intended config files are available under that path. To access the property values in the config files, use the existing file read/write libraries for your app.
The Application Configuration Service also supports polyglot apps like dotNET, G
Use the following steps to refresh your Java Spring Boot application configuration after you update the configuration file in the Git repository.
-1. Load the configuration to Application Configuration Service for Tanzu.
+1. Load the configuration to Application Configuration Service.
Azure Spring Apps manages the refresh frequency, which is set to 60 seconds.
A Spring application holds the properties as the beans of the Spring Application
curl -X POST http://{app-endpoint}/actuator/refresh ```
-## Configure Application Configuration Service for Tanzu settings
+## Configure Application Configuration Service settings
### [Azure portal](#tab/Portal)
-Use the following steps to configure Application Configuration Service for Tanzu:
+Use the following steps to configure Application Configuration Service:
1. Select **Application Configuration Service**.
-1. Select **Overview** to view the running state and resources allocated to Application Configuration Service for Tanzu.
+1. Select **Overview** to view the running state and resources allocated to Application Configuration Service.
- :::image type="content" source="media/how-to-enterprise-application-configuration-service/config-service-overview.png" alt-text="Screenshot of the Azure portal showing the Application Configuration Service page with Overview tab highlighted." lightbox="media/how-to-enterprise-application-configuration-service/config-service-overview.png":::
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-service-overview.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page with Overview tab highlighted." lightbox="media/how-to-enterprise-application-configuration-service/configuration-service-overview.png":::
1. Select **Settings** and add a new entry in the **Repositories** section with the Git backend information. 1. Select **Validate** to validate access to the target URI. After validation completes successfully, select **Apply** to update the configuration settings.
- :::image type="content" source="media/how-to-enterprise-application-configuration-service/config-service-settings-validate.png" alt-text="Screenshot of the Azure portal showing the Application Configuration Service page with the Settings tab and Validate button highlighted." lightbox="media/how-to-enterprise-application-configuration-service/config-service-settings-validate.png":::
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-service-settings-validate.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page with the Settings tab and Validate button highlighted." lightbox="media/how-to-enterprise-application-configuration-service/configuration-service-settings-validate.png":::
### [Azure CLI](#tab/Azure-CLI)
-Use the following command to configure Application Configuration Service for Tanzu:
+Use the following command to configure Application Configuration Service:
```azurecli az spring application-configuration-service git repo add \
You need to upload the certificate to Azure Spring Apps first. For more informat
Use the following steps to configure the TLS certificate:
-1. Navigate to your service resource, and then select **Application Configuration Service**.
+1. Navigate to your service resource and then select **Application Configuration Service**.
1. Select **Settings** and add or update a new entry in the **Repositories** section with the Git backend information.
- :::image type="content" source="media/how-to-enterprise-application-configuration-service/ca-certificate.png" alt-text="Screenshot of the Azure portal showing the Application Configuration Service page with the Settings tab showing." lightbox="media/how-to-enterprise-application-configuration-service/ca-certificate.png":::
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/ca-certificate.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page with the Settings tab showing." lightbox="media/how-to-enterprise-application-configuration-service/ca-certificate.png":::
### [Azure CLI](#tab/Azure-CLI)
-Use the following Azure CLI commands to configure the TLS certificate:
+Use the following command to configure the TLS certificate:
```azurecli az spring application-configuration-service git repo add \
az spring application-configuration-service git repo add \
-## Use Application Configuration Service for Tanzu with applications using the portal
+## Use Application Configuration Service with applications
-## Use Application Configuration Service for Tanzu with applications
-
-When you use Application Configuration Service for Tanzu with a Git back end and use the centralized configurations, you must bind the app to Application Configuration Service for Tanzu.
+When you use Application Configuration Service with a Git back end to maintain centralized configurations, you must bind the app to Application Configuration Service.
### [Azure portal](#tab/Portal)
-Use the following steps to use Application Configuration Service for Tanzu with applications:
+Use the following steps to use Application Configuration Service with applications:
1. Open the **App binding** tab. 1. Select **Bind app** and choose one app in the dropdown. Select **Apply** to bind.
- :::image type="content" source="media/how-to-enterprise-application-configuration-service/config-service-app-bind-dropdown.png" alt-text="Screenshot of the Azure portal showing the Application Configuration Service page with the App binding tab highlighted." lightbox="media/how-to-enterprise-application-configuration-service/config-service-app-bind-dropdown.png":::
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-service-app-bind-dropdown.png" alt-text="Screenshot of the Azure portal that shows the Application Configuration Service page with the App binding tab highlighted." lightbox="media/how-to-enterprise-application-configuration-service/configuration-service-app-bind-dropdown.png":::
> [!NOTE] > When you change the bind/unbind status, you must restart or redeploy the app for the binding to take effect.
Use the following steps to use Application Configuration Service for Tanzu with
1. Select the target app to configure patterns for from the `name` column.
-1. In the navigation pane, select **Configuration**, and then select **General settings**.
+1. In the navigation pane, select **Configuration** and then select **General settings**.
1. In the **Config file patterns** dropdown, choose one or more patterns from the list. For more information, see the [Pattern](./how-to-enterprise-application-configuration-service.md#pattern) section.
- :::image type="content" source="media/how-to-enterprise-application-configuration-service/config-service-pattern.png" alt-text="Screenshot of the Azure portal showing the App Configuration page with the General settings tab and api-gateway options highlighted." lightbox="media/how-to-enterprise-application-configuration-service/config-service-pattern.png":::
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/configuration-service-pattern.png" alt-text="Screenshot of the Azure portal that shows the App Configuration page with the General settings tab and api-gateway options highlighted." lightbox="media/how-to-enterprise-application-configuration-service/configuration-service-pattern.png":::
1. Select **Save** ### [Azure CLI](#tab/Azure-CLI)
-Use the following command to use Application Configuration Service for Tanzu with applications:
+Use the following command to use Application Configuration Service with applications:
```azurecli az spring application-configuration-service bind --app <app-name>
az spring app deploy \
## Enable/disable Application Configuration Service after service creation
-You can enable and disable Application Configuration Service after service creation using the Azure portal or Azure CLI. Before disabling Application Configuration Service, you're required to unbind all of your apps from it.
+You can enable and disable Application Configuration Service after service creation using the Azure portal or the Azure CLI. Before disabling Application Configuration Service, you're required to unbind all of your apps from it.
### [Azure portal](#tab/Portal) Use the following steps to enable or disable Application Configuration Service:
-1. Navigate to your service resource, and then select **Application Configuration Service**.
+1. Navigate to your service resource and then select **Application Configuration Service**.
1. Select **Manage**.
-1. Select or unselect **Enable Application Configuration Service**, and then select **Save**.
+1. Select or unselect **Enable Application Configuration Service** and then select **Save**.
1. You can now view the state of Application Configuration Service on the **Application Configuration Service** page. ### [Azure CLI](#tab/Azure-CLI)
az spring application-configuration-service delete \
+## Check logs
+
+The following sections show you how to view application logs by using either the Azure CLI or the Azure portal.
+
+### Use real-time log streaming
+
+You can stream logs in real time with the Azure CLI. For more information, see [Stream Azure Spring Apps managed component logs in real time](./how-to-managed-component-log-streaming.md). The following examples show how you can use Azure CLI commands to continuously stream new logs for `application-configuration-service` and `flux-source-controller` subcomponents.
+
+Use the following command to stream logs for `application-configuration-service`:
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name application-configuration-service \
+ --all-instances \
+ --follow
+```
+
+Use the following command to stream logs for `flux-source-controller`:
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name flux-source-controller \
+ --all-instances \
+ --follow
+```
+
+### Use Log Analytics
+
+The following sections show you how to turn on and view System Logs using Log Analytics.
+
+#### Diagnostic settings for Log Analytics
+
+You must turn on System Logs and send the logs to your Log Analytics workspace before you query the logs for Application Configuration Service. To enable System Logs in the Azure portal, use the following steps (a command-line sketch follows these steps):
+
+1. Open your Azure Spring Apps instance.
+
+1. In the navigation pane, select **Diagnostics settings**.
+
+1. Select **Add diagnostic setting** or select **Edit setting** for an existing setting.
+
+1. In the **Logs** section, select the **System Logs** category.
+
+1. In the **Destination details** section, select **Send to Log Analytics workspace** and then select your workspace.
+
+1. Select **Save** to update the setting.
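+
+If you prefer the command line, the following sketch shows one way to create an equivalent diagnostic setting with the Azure CLI. The setting name, the `SystemLogs` category name, and the resource ID placeholders are assumptions; adjust them to match your environment.
+
+```azurecli
+# Hedged sketch: send System Logs to a Log Analytics workspace.
+# The category name "SystemLogs" and the placeholder IDs are assumptions.
+az monitor diagnostic-settings create \
+    --name send-system-logs \
+    --resource <Azure-Spring-Apps-resource-id> \
+    --workspace <Log-Analytics-workspace-resource-id> \
+    --logs '[{"category":"SystemLogs","enabled":true}]'
+```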
+
+#### Check logs in Log Analytics
+
+To check the logs of `application-configuration-service` and `flux-source-controller` using the Azure portal, use the following steps:
+
+1. Make sure you turned on **System Logs**. For more information, see the [Diagnostic settings for Log Analytics](#diagnostic-settings-for-log-analytics) section.
+
+1. Open your Azure Spring Apps instance.
+
+1. In the navigation menu, select **Logs** and then select **Overview**.
+
+1. Use the following sample queries in the query edit pane. Adjust the time range as needed and then select **Run** to search for logs. (A command-line sketch for running these queries follows this procedure.)
+
+ - To view the logs for `application-configuration-service`, use the following query:
+
+ ```kusto
+ AppPlatformSystemLogs
+ | where LogType in ("ApplicationConfigurationService")
+ | project TimeGenerated , ServiceName , LogType, Log , _ResourceId
+ | limit 100
+ ```
+
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/query-logs-application-configuration-service.png" alt-text="Screenshot of the Azure portal that shows the query result of logs for application-configuration-service." lightbox="media/how-to-enterprise-application-configuration-service/query-logs-application-configuration-service.png":::
+
+ - To view the logs for `flux-source-controller`, use the following query:
+
+ ```kusto
+ AppPlatformSystemLogs
+ | where LogType in ("Flux")
+ | project TimeGenerated , ServiceName , LogType, Log , _ResourceId
+ | limit 100
+ ```
+
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/query-logs-flux-source-controller.png" alt-text="Screenshot of the Azure portal that shows the query result of logs for flux-source-controller." lightbox="media/how-to-enterprise-application-configuration-service/query-logs-flux-source-controller.png":::
+
+> [!NOTE]
+> There could be a delay of a few minutes before the logs are available in Log Analytics.
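+
+If you prefer to run these queries from the command line, the following sketch shows one way to do it with the Azure CLI. The workspace ID placeholder (the workspace's customer ID GUID) is an assumption; substitute your own value.
+
+```azurecli
+# Hedged sketch: run the same Kusto query against the Log Analytics workspace from the CLI.
+az monitor log-analytics query \
+    --workspace <Log-Analytics-workspace-ID> \
+    --analytics-query "AppPlatformSystemLogs | where LogType == 'ApplicationConfigurationService' | project TimeGenerated, ServiceName, LogType, Log, _ResourceId | limit 100"
+```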
+ ## Next steps - [Azure Spring Apps](index.yml)
spring-apps How To Log Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-log-streaming.md
**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
-This article describes how to enable log streaming in Azure CLI to get real-time application console logs for troubleshooting. You can also use diagnostics settings to analyze diagnostics data in Azure Spring Apps. For more information, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
+This article describes how to enable log streaming in the Azure CLI to get real-time application console logs for troubleshooting. You can also use diagnostics settings to analyze diagnostics data in Azure Spring Apps. For more information, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
+
+For streaming logs of managed components in Azure Spring Apps, see [Stream Azure Spring Apps managed component logs in real time](./how-to-managed-component-log-streaming.md).
## Prerequisites -- [Azure CLI](/cli/azure/install-azure-cli) with the Azure Spring Apps extension, minimum version 1.0.0. You can install the extension by using the following command: `az extension add --name spring`
+- [Azure CLI](/cli/azure/install-azure-cli) with the Azure Spring Apps extension, version 1.0.0 or higher. You can install the extension by using the following command: `az extension add --name spring`
- An instance of Azure Spring Apps with a running application. For more information, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md).
-## Use Azure CLI to produce tail logs
+## Use the Azure CLI to produce tail logs
-This section provides examples of using Azure CLI to produce tail logs. To avoid repeatedly specifying your resource group and service instance name, use the following commands to set your default resource group name and cluster name:
+This section provides examples of using the Azure CLI to produce tail logs. To avoid repeatedly specifying your resource group and service instance name, use the following commands to set your default resource group name and cluster name:
```azurecli az config set defaults.group=<service-group-name>
If an app named `auth-service` has only one instance, you can view the log of th
az spring app logs --name <application-name> ```
-This command returns logs similar to the following examples, where `auth-service` is the application name.
+The command returns logs similar to the following examples, where `auth-service` is the application name.
```output ...
First, run the following command to get the app instance names:
az spring app show --name auth-service --query properties.activeDeployment.properties.instances --output table ```
-This command produces results similar to the following output:
+The command produces results similar to the following output:
```output Name Status DiscoveryStatus
You can also get details of app instances from the Azure portal. After selecting
### Continuously stream new logs
-By default, `az spring app logs` prints only existing logs streamed to the app console, and then exits. If you want to stream new logs, add the `-f/--follow` argument:
+By default, `az spring app logs` prints only existing logs streamed to the app console and then exits. If you want to stream new logs, add the `-f/--follow` argument, as shown in the following example:
```azurecli az spring app logs --name auth-service --follow
Azure Spring Apps also enables you to access real-time app logs from a public ne
### [Azure portal](#tab/azure-portal)
-Use the following steps to enable a log streaming endpoint on the public network.
+Use the following steps to enable a log streaming endpoint on the public network:
-1. Select the Azure Spring Apps service instance deployed in your virtual network, and then open the **Networking** tab in the navigation menu.
+1. Select the Azure Spring Apps service instance deployed in your virtual network and then select **Networking** in the navigation menu.
-1. Select the **Vnet injection** page.
+1. Select the **Vnet injection** tab.
-1. Switch the status of **Dataplane resources on public network** to **enable** to enable a log streaming endpoint on the public network. This process will take a few minutes.
+1. Switch the status of **Dataplane resources on public network** to **enable** to enable a log streaming endpoint on the public network. This process takes a few minutes.
- :::image type="content" source="media/how-to-log-streaming/dataplane-public-endpoint.png" alt-text="Screenshot of enabling a log stream public endpoint on the Vnet Injection page." lightbox="media/how-to-log-streaming/dataplane-public-endpoint.png":::
+ :::image type="content" source="media/how-to-log-streaming/dataplane-public-endpoint.png" alt-text="Screenshot of the Azure portal that shows the Networking page with the Vnet injection tab selected and the Troubleshooting section highlighted." lightbox="media/how-to-log-streaming/dataplane-public-endpoint.png":::
#### [Azure CLI](#tab/azure-CLI)
-Use the following command to enable the log stream public endpoint.
+Use the following command to enable the log stream public endpoint:
```azurecli az spring update \
az spring update \
-After you've enabled the log stream public endpoint, you can access the app log from a public network as you would access a normal instance.
+After you enable the log stream public endpoint, you can access the app log from a public network just like you would access a normal instance.
## Secure traffic to the log streaming public endpoint
Log streaming uses the same key as the test endpoint described in [Set up a stag
To ensure the security of your applications when you expose a public endpoint for them, secure the endpoint by filtering network traffic to your service with a network security group. For more information, see [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md). A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol. > [!NOTE]
-> If you can't access app logs in the virtual network injection instance from the internet after you've enabled a log stream public endpoint, check your network security group to see whether you've allowed such inbound traffic.
+> If you can't access app logs in the virtual network injection instance from the internet after you enable a log stream public endpoint, check your network security group to see whether you allowed such inbound traffic.
The following table shows an example of a basic rule that we recommend. You can use commands like `nslookup` with the endpoint `<service-name>.private.azuremicroservices.io` to get the target IP address of a service.
The following table shows an example of a basic rule that we recommend. You can
- [Quickstart: Monitoring Azure Spring Apps apps with logs, metrics, and tracing](./quickstart-logs-metrics-tracing.md) - [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md)
+- [Stream Azure Spring Apps managed component logs in real time](./how-to-managed-component-log-streaming.md)
spring-apps How To Managed Component Log Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-managed-component-log-streaming.md
+
+ Title: Stream Azure Spring Apps managed component logs in real time
+description: Learn how to use log streaming to view managed component logs in real time.
+ Last updated: 01/10/2024
+# Stream Azure Spring Apps managed component logs in real time
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard ✔️ Enterprise
+
+This article describes how to use the Azure CLI to get real-time logs of managed components for troubleshooting. You can also use diagnostics settings to analyze diagnostics data in Azure Spring Apps. For more information, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
+
+For streaming logs of applications in Azure Spring Apps, see [Stream Azure Spring Apps application console logs in real time](./how-to-log-streaming.md).
+
+## Prerequisites
+
+- [Azure CLI](/cli/azure/install-azure-cli) with the Azure Spring Apps extension, version 1.19.0 or higher. You can install the extension by using the following command: `az extension add --name spring`.
+
+## Supported managed components
+
+The following table lists the managed components that are currently supported, along with their subcomponents:
+
+| Managed component | Subcomponents |
+|--|-|
+| Application Configuration Service | `application-configuration-service` <br/> `flux-source-controller` (Supported in ACS Gen2 version) |
+| Spring Cloud Gateway | `spring-cloud-gateway` <br/> `spring-cloud-gateway-operator` |
+
+You can use the following command to list all subcomponents:
+
+```azurecli
+az spring component list \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name>
+```
+
+## Assign an Azure role
+
+To stream logs of managed components, you must have the relevant Azure roles assigned to you. The following table lists the required roles and the operations for which these roles are granted permissions:
+
+| Managed component | Required role | Operations |
+|--|||
+| Application Configuration Service | Azure Spring Apps Application Configuration Service Log Reader Role | `Microsoft.AppPlatform/Spring/ApplicationConfigurationService/logstream/action` |
+| Spring Cloud Gateway | Azure Spring Apps Spring Cloud Gateway Log Reader Role | `Microsoft.AppPlatform/Spring/SpringCloudGateway/logstream/action` |
+
+### [Azure portal](#tab/azure-Portal)
+
+Use the following steps to assign an Azure role using the Azure portal:
+
+1. Open the [Azure portal](https://portal.azure.com).
+
+1. Open your Azure Spring Apps service instance.
+
+1. In the navigation pane, select **Access Control (IAM)**.
+
+1. On the **Access Control (IAM)** page, select **Add** and then select **Add role assignment**.
+
+ :::image type="content" source="media/how-to-managed-component-log-streaming/add-role-assignment.png" alt-text="Screenshot of the Azure portal that shows the Access Control (IAM) page for an Azure Spring Apps instance with the Add role assignment option highlighted." lightbox="media/how-to-managed-component-log-streaming/add-role-assignment.png":::
+
+1. On the **Add role assignment** page, in the **Name** list, search for and select the target role and then select **Next**.
+
+ :::image type="content" source="media/how-to-managed-component-log-streaming/application-configuration-service-log-reader-role.png" alt-text="Screenshot of the Azure portal that shows the Add role assignment page for an Azure Spring Apps instance with the Azure Spring Apps Application Configuration Service Log Reader Role name highlighted." lightbox="media/how-to-managed-component-log-streaming/application-configuration-service-log-reader-role.png":::
+
+1. Select **Members** and then search for and select your username.
+
+1. Select **Review + assign**.
+
+### [Azure CLI](#tab/azure-CLI)
+
+Use the following command to assign an Azure role:
+
+```azurecli
+az role assignment create \
+    --role "<Log-reader-role-for-managed-component>" \
+    --scope "<service-instance-resource-id>" \
+    --assignee "<your-identity>"
+```
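+
+For example, to grant yourself permission to stream Application Configuration Service logs, the assignment might look like the following sketch. The role name comes from the table earlier in this section; the scope and assignee values are placeholders.
+
+```azurecli
+# Hedged sketch: assign the Application Configuration Service log reader role at the service instance scope.
+az role assignment create \
+    --role "Azure Spring Apps Application Configuration Service Log Reader Role" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.AppPlatform/Spring/<Azure-Spring-Apps-instance-name>" \
+    --assignee "<your-user-object-id>"
+```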
+++
+## List all instances in a component
+
+Use the following command to list all instances in a component:
+
+```azurecli
+az spring component instance list \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --component <component-name>
+```
+
+For example, to list all instances for `flux-source-controller` in the ACS Gen2 version, use the following command:
+
+```azurecli
+az spring component instance list \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --component flux-source-controller
+```
+
+## View tail logs
+
+This section provides examples of using the Azure CLI to produce tail logs.
+
+### View tail logs for a specific instance
+
+To view the tail logs for a specific instance, use the `az spring component logs` command with the `-i/--instance` argument, as shown in the next section.
+
+#### View tail logs for an instance of application-configuration-service
+
+Use the following command to view the tail logs for `application-configuration-service`:
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name application-configuration-service \
+ --instance <instance-name>
+```
+
+For ACS Gen2, the command returns logs similar to the following example:
+
+```output
+...
+2023-12-18T07:09:54.020Z INFO 16715 [main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8090 (https)
+2023-12-18T07:09:54.116Z INFO 16715 [main] org.apache.juli.logging.DirectJDKLog : Starting service [Tomcat]
+2023-12-18T07:09:54.117Z INFO 16715 [main] org.apache.juli.logging.DirectJDKLog : Starting Servlet engine: [Apache Tomcat/10.1.12]
+2023-12-18T07:09:54.522Z INFO 16715 [main] org.apache.juli.logging.DirectJDKLog : Initializing Spring embedded WebApplicationContext
+2023-12-18T07:09:54.524Z INFO 16715 [main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 14100 ms
+2023-12-18T07:09:56.920Z INFO 16715 [main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8090 (https) with context path ''
+2023-12-18T07:09:57.528Z INFO 16715 [main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8081 (http)
+2023-12-18T07:09:57.529Z INFO 16715 [main] org.apache.juli.logging.DirectJDKLog : Starting service [Tomcat]
+2023-12-18T07:09:57.529Z INFO 16715 [main] org.apache.juli.logging.DirectJDKLog : Starting Servlet engine: [Apache Tomcat/10.1.12]
+2023-12-18T07:09:57.629Z INFO 16715 [main] org.apache.juli.logging.DirectJDKLog : Initializing Spring embedded WebApplicationContext
+2023-12-18T07:09:57.629Z INFO 16715 [main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 603 ms
+2023-12-18T07:09:57.824Z INFO 16715 [main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8081 (http) with context path ''
+2023-12-18T07:09:58.127Z INFO 16715 [main] o.springframework.boot.StartupInfoLogger : Started ReconcilerApplication in 21.005 seconds (process running for 22.875)
+...
+```
+
+#### View tail logs for an instance of flux-source-controller
+
+Use the following command to view the tail logs for `flux-source-controller`:
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name flux-source-controller \
+ --instance <instance-name>
+```
+
+The command returns logs similar to the following example:
+
+```output
+...
+{"level":"info","ts":"2023-12-18T07:07:54.615Z","logger":"controller-runtime.metrics","msg":"Metrics server is starting to listen","addr":":8080"}
+{"level":"info","ts":"2023-12-18T07:07:54.615Z","logger":"setup","msg":"starting manager"}
+{"level":"info","ts":"2023-12-18T07:07:54.615Z","msg":"Starting server","path":"/metrics","kind":"metrics","addr":"[::]:8080"}
+{"level":"info","ts":"2023-12-18T07:07:54.615Z","msg":"Starting server","kind":"health probe","addr":"[::]:9440"}
+{"level":"info","ts":"2023-12-18T07:07:54.817Z","logger":"runtime","msg":"attempting to acquire leader lease flux-system/source-controller-leader-election...\n"}
+{"level":"info","ts":"2023-12-18T07:07:54.830Z","logger":"runtime","msg":"successfully acquired lease flux-system/source-controller-leader-election\n"}
+...
+```
+
+#### View tail logs for an instance of spring-cloud-gateway
+
+Use the following command to view the tail logs for `spring-cloud-gateway`:
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name spring-cloud-gateway \
+ --instance <instance-name>
+```
+
+The command returns logs similar to the following example:
+
+```output
+...
+2023-12-11T14:13:40.310Z INFO 1 [ main] i.p.s.c.g.s.SsoDeactivatedConfiguration : SSO is deactivated, setting up default security filters
+2023-12-11T14:13:40.506Z INFO 1 [ main] .h.HazelcastReactiveSessionConfiguration : Configuring Hazelcast as a session management storage
+2023-12-11T14:13:51.008Z INFO 1 [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8443
+2023-12-11T14:13:51.810Z INFO 1 [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 7 endpoint(s) beneath base path '/actuator'
+2023-12-11T14:13:52.410Z INFO 1 [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8090
+2023-12-11T14:13:52.907Z INFO 1 [ main] i.p.s.c.g.r.h.HazelcastRateLimitsRemover : Removing Hazelcast map 'GLOBAL_RATE_LIMIT' with rate limit information
+2023-12-11T14:13:52.912Z INFO 1 [ main] i.p.s.cloud.gateway.GatewayApplication : Started GatewayApplication in 36.084 seconds (process running for 38.651)
+...
+```
+
+#### View tail logs for an instance of spring-cloud-gateway-operator
+
+Use the following command to view the tail logs for `spring-cloud-gateway-operator`:
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name spring-cloud-gateway-operator \
+ --instance <instance-name>
+```
+
+The command returns logs similar to the following example:
+
+```output
+...
+2023-12-01T08:37:05.080Z INFO 1 [ main] c.v.t.s.OperatorApplication : Starting OperatorApplication v2.0.6 using Java 17.0.7 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)
+2023-12-01T08:37:05.157Z INFO 1 [ main] c.v.t.s.OperatorApplication : No active profile set, falling back to 1 default profile: "default"
+2023-12-01T08:37:14.379Z INFO 1 [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 1 endpoint(s) beneath base path '/actuator'
+2023-12-01T08:37:15.274Z INFO 1 [ main] o.s.b.web.embedded.netty.NettyWebServer : Netty started on port 8080
+2023-12-01T08:37:15.366Z INFO 1 [ main] c.v.t.s.OperatorApplication : Started OperatorApplication in 11.489 seconds (process running for 12.467)
+...
+```
+++
+### View tail logs for all instances in one command
+
+To view the tail logs for all instances, use the `--all-instances` argument, as shown in the following command. The instance name is the prefix of each log line. When there are multiple instances, logs are printed in batches for each instance, so the logs of one instance aren't interleaved with the logs of another instance.
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <component-name> \
+ --all-instances
+```
+
+## Stream new logs continuously
+
+By default, `az spring component logs` prints only existing logs streamed to the console and then exits. If you want to stream new logs, add the `-f/--follow` argument.
+
+When you use the `-f/--follow` option to tail instant logs, the Azure Spring Apps log streaming service sends heartbeat logs to the client every minute unless the component is writing logs constantly. Heartbeat log messages use the following format: `2023-12-18 09:12:17.745: No log from server`.
+
+### Stream logs for a specific instance
+
+Use the following command to stream logs for a specific instance:
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <component-name> \
+ --instance <instance-name> \
+ --follow
+```
+
+### Stream logs for all instances
+
+Use the following command to stream logs for all instances:
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name <component-name> \
+ --all-instances \
+ --follow
+```
+
+When you stream logs for multiple instances in a component, the logs of one instance interleave with logs of others.
+
+## Stream logs in a virtual network injection instance
+
+For an Azure Spring Apps instance deployed in a custom virtual network, you can access log streaming by default from a private network. For more information, see [Deploy Azure Spring Apps in a virtual network](./how-to-deploy-in-azure-virtual-network.md).
+
+Azure Spring Apps also enables you to access real-time managed component logs from a public network.
+
+> [!NOTE]
+> Enabling the log streaming endpoint on the public network adds a public inbound IP to your virtual network. Be sure to use caution if this is a concern for you.
+
+### [Azure portal](#tab/azure-Portal)
+
+Use the following steps to enable a log streaming endpoint on the public network:
+
+1. Select the Azure Spring Apps service instance deployed in your virtual network and then select **Networking** in the navigation menu.
+
+1. Select the **Vnet injection** tab.
+
+1. Switch the status of **Dataplane resources on public network** to **Enable** to enable a log streaming endpoint on the public network. This process takes a few minutes.
+
+ :::image type="content" source="media/how-to-managed-component-log-streaming/dataplane-public-endpoint.png" alt-text="Screenshot of the Azure portal that shows the Networking page with the Vnet injection tab selected and the Troubleshooting section highlighted." lightbox="media/how-to-log-streaming/dataplane-public-endpoint.png":::
+
+#### [Azure CLI](#tab/azure-CLI)
+
+Use the following command to enable the log stream public endpoint:
+
+```azurecli
+az spring update \
+ --resource-group <resource-group-name> \
+ --service <service-instance-name> \
+ --enable-dataplane-public-endpoint true
+```
+++
+After you enable the log stream public endpoint, you can access the managed component logs from a public network just like you would access a normal instance.
+
+## Secure traffic to the log streaming public endpoint
+
+Log streaming for managed components uses Azure RBAC to authenticate the connections to the components. As a result, only users who have the proper roles can access the logs.
+
+To ensure the security of your managed components when you expose a public endpoint for them, secure the endpoint by filtering network traffic to your service with a network security group. For more information, see [Tutorial: Filter network traffic with a network security group using the Azure portal](../virtual-network/tutorial-filter-network-traffic.md). A network security group contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify source and destination, port, and protocol.
+
+> [!NOTE]
+> If you can't access managed component logs in the virtual network injection instance from the internet after you enable a log stream public endpoint, check your network security group to see whether you allowed such inbound traffic.
+
+The following table shows an example of a basic rule that we recommend. You can use commands like `nslookup` with the endpoint `<service-name>.private.azuremicroservices.io` to get the target IP address of a service. A hedged Azure CLI sketch for creating such a rule follows the table.
+
+| Priority | Name | Port | Protocol | Source | Destination | Action |
+|-|--||-|-|--|--|
+| 100 | Rule name | 80 | TCP | Internet | Service IP address | Allow |
+| 110 | Rule name | 443 | TCP | Internet | Service IP address | Allow |
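+
+As a sketch of how you might create such a rule with the Azure CLI, the following command allows inbound HTTPS traffic from the internet to the service IP address. The network security group name, rule name, and IP address are placeholders, and this example covers port 443 only; add a similar rule for port 80 if you need it.
+
+```azurecli
+# Hedged sketch: allow inbound HTTPS traffic from the internet to the service IP address.
+az network nsg rule create \
+    --resource-group <resource-group-name> \
+    --nsg-name <network-security-group-name> \
+    --name allow-log-streaming-https \
+    --priority 110 \
+    --direction Inbound \
+    --access Allow \
+    --protocol Tcp \
+    --source-address-prefixes Internet \
+    --destination-address-prefixes <service-IP-address> \
+    --destination-port-ranges 443
+```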
+
+## Next steps
+
+- [Troubleshoot VMware Spring Cloud Gateway](./how-to-troubleshoot-enterprise-spring-cloud-gateway.md)
+- [Use Application Configuration Service](./how-to-enterprise-application-configuration-service.md)
+- [Stream Azure Spring Apps application console logs in real time](./how-to-log-streaming.md)
spring-apps How To Troubleshoot Enterprise Spring Cloud Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-troubleshoot-enterprise-spring-cloud-gateway.md
For more information on each supported metric, see the [Gateway](./concept-metri
## Check Gateway logs
-There are two components that make up the Spring Cloud Gateway for VMware Tanzu: the Gateway itself and the Gateway operator. You can infer from the name that the Gateway operator is for managing the Gateway, while the Gateway itself fulfills the features. The logs of both components are available. The following sections describe how to check these logs.
+Spring Cloud Gateway is composed of the following subcomponents:
-### Diagnostic settings for Log Analytics
+- `spring-cloud-gateway-operator` is for managing the Gateway.
+- `spring-cloud-gateway` provides the Gateway features.
+
+The logs of both subcomponents are available. The following sections describe how to check these logs.
+
+### Use real-time log streaming
+
+You can stream logs in real time with the Azure CLI. For more information, see [Stream Azure Spring Apps managed component logs in real time](./how-to-managed-component-log-streaming.md). The following examples show how you can use Azure CLI commands to continuously stream new logs for `spring-cloud-gateway` and `spring-cloud-gateway-operator` subcomponents.
+
+Use the following command to stream logs for `spring-cloud-gateway`:
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name spring-cloud-gateway \
+ --all-instances \
+ --follow
+```
+
+Use the following command to stream logs for `spring-cloud-gateway-operator`:
+
+```azurecli
+az spring component logs \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --name spring-cloud-gateway-operator \
+ --all-instances \
+ --follow
+```
+
+### Use Log Analytics
+
+The following sections show you how to view System Logs using Log Analytics.
+
+#### Diagnostic settings for Log Analytics
You must turn on System Logs and send them to your Log Analytics workspace before you query the logs for VMware Spring Cloud Gateway. To enable System Logs in the Azure portal, use the following steps: 1. Open your Azure Spring Apps instance.
-1. Select **Diagnostics settings** in the navigation pane.
+
+1. In the navigation menu, select **Diagnostics settings**.
+ 1. Select **Add diagnostic setting** or select **Edit setting** for an existing setting.+ 1. In the **Logs** section, select the **System Logs** category.+ 1. In the **Destination details** section, select **Send to Log Analytics workspace** and then select your workspace.+ 1. Select **Save** to update the setting.
-### Check logs in Log Analytics
+#### Check logs in Log Analytics
+
+To check the logs of `spring-cloud-gateway` and `spring-cloud-gateway-operator` using the Azure portal, use the following steps:
-To check the logs by using the Azure portal, use the following steps:
+1. Make sure you turned on **System Logs**. For more information, see the [Diagnostic settings for Log Analytics](#diagnostic-settings-for-log-analytics) section.
-1. Make sure you turned on System Logs. For more information, see the [Diagnostic settings for Log Analytics](#diagnostic-settings-for-log-analytics) section.
1. Open your Azure Spring Apps instance.
-1. Select **Logs** in the navigation pane, and then select **Overview**.
-1. Use one of the following sample queries in the query edit pane. Adjust the time range, then select **Run** to search for logs.
- - Query logs for Gateway
+1. Select **Logs** in the navigation pane and then select **Overview**.
+
+1. Use the following sample queries in the query edit pane. Adjust the time range as needed and then select **Run** to search for logs.
- ```Kusto
- AppPlatformSystemLogs
+ - To view the logs for `spring-cloud-gateway`, use the following query:
+
+ ```kusto
+ AppPlatformSystemLogs
| where LogType in ("SpringCloudGateway")
- | project TimeGenerated , ServiceName , LogType, Log , _ResourceId
+ | project TimeGenerated , ServiceName , LogType, Log , _ResourceId
| limit 100 ```
- - Query logs for Gateway Operator
+ :::image type="content" source="media/how-to-troubleshoot-enterprise-spring-cloud-gateway/query-logs-spring-cloud-gateway.png" alt-text="Screenshot of the Azure portal that shows the query result of logs for VMware Spring Cloud Gateway." lightbox="media/how-to-troubleshoot-enterprise-spring-cloud-gateway/query-logs-spring-cloud-gateway.png":::
- ```Kusto
+ - To view the logs for `spring-cloud-gateway-operator`, use the following query:
+
+ ```kusto
AppPlatformSystemLogs | where LogType in ("SpringCloudGatewayOperator") | project TimeGenerated , ServiceName , LogType, Log , _ResourceId | limit 100 ```
-The following screenshot shows an example of the query results\:
-
+ :::image type="content" source="media/how-to-troubleshoot-enterprise-spring-cloud-gateway/query-logs-spring-cloud-gateway-operator.png" alt-text="Screenshot of the Azure portal that shows the query result of logs for VMware Spring Cloud Gateway operator." lightbox="media/how-to-troubleshoot-enterprise-spring-cloud-gateway/query-logs-spring-cloud-gateway-operator.png":::
> [!NOTE]
-> There might be a 3-5 minutes delay before the logs are available in Log Analytics.
+> There could be a delay of a few minutes before the logs are available in Log Analytics.
### Adjust log levels
This section describes how to adjust the log levels for VMware Spring Cloud Gate
Use the following steps to adjust the log levels:
-1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** in the navigation pane, and then select **Configuration**.
+1. In your Azure Spring Apps instance, select **Spring Cloud Gateway** in the navigation pane and then select **Configuration**.
1. In the **Properties** sections, fill in the key/value pair `logging.level.org.springframework.cloud.gateway=DEBUG`. 1. Select **Save** to save your changes. 1. After the change is successful, you can find more detailed logs for troubleshooting, such as information about how requests are routed.
For some errors, a restart might help solve the issue. For more information, see
## Next steps - [How to Configure Spring Cloud Gateway](./how-to-configure-enterprise-spring-cloud-gateway.md)
+- [Stream Azure Spring Apps managed component logs in real time](./how-to-managed-component-log-streaming.md)
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-auth-abac-examples.md
Previously updated : 05/09/2023 Last updated : 01/11/2024 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions. # Example Azure role assignment conditions for Blob Storage
-This article list some examples of role assignment conditions for controlling access to Azure Blob Storage.
+This article lists some examples of role assignment conditions for controlling access to Azure Blob Storage.
[!INCLUDE [storage-abac-preview](../../../includes/storage-abac-preview.md)]
This section includes examples involving blob index tags.
This condition allows users to read blobs with a [blob index tag](storage-blob-index-how-to.md) key of Project and a value of Cascade. Attempts to access blobs without this key-value tag won't be allowed.
-> [!IMPORTANT]
-> For this condition to be effective for a security principal, you must add it to all role assignments for them that include the following actions.
+For this condition to be effective for a security principal, you must add it to all of that principal's role assignments that include the following actions:
> [!div class="mx-tableFixed"] > | Action | Notes |
This condition allows users to read blobs with a [blob index tag](storage-blob-i
![Diagram of condition showing read access to blobs with a blob index tag.](./media/storage-auth-abac-examples/blob-index-tags-read.png)
-```
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
- )
- OR
- (
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
- )
-)
-```
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
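+
+Although the examples in this article use the portal and Azure PowerShell, the Azure CLI can also attach a condition when it creates a role assignment. The following is a minimal sketch: the role, scope, and assignee values are placeholders, and the condition string is the same expression shown in the code editor tab.
+
+```azurecli
+# Hedged sketch: create a role assignment that carries an ABAC condition.
+# Paste the condition expression from the code editor tab as the --condition value.
+az role assignment create \
+    --role "Storage Blob Data Reader" \
+    --assignee "<user-or-group-object-id>" \
+    --scope "<storage-account-resource-id>" \
+    --condition "<condition-expression>" \
+    --condition-version "2.0"
+```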
-#### Azure portal
+# [Portal: Visual editor](#tab/portal-visual-editor)
-Here are the settings to add this condition using the Azure portal.
+Here are the settings to add this condition using the Azure portal visual editor.
> [!div class="mx-tableFixed"] > | Condition #1 | Setting |
Here are the settings to add this condition using the Azure portal.
:::image type="content" source="./media/storage-auth-abac-examples/blob-index-tags-read-portal.png" alt-text="Screenshot of condition editor in Azure portal showing read access to blobs with a blob index tag." lightbox="./media/storage-auth-abac-examples/blob-index-tags-read-portal.png":::
-#### Azure PowerShell
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
+ )
+)
+```
+
+After entering your code, switch back to the visual editor to validate it.
+
+# [PowerShell](#tab/azure-powershell)
Here's how to add this condition using Azure PowerShell.
$bearerCtx = New-AzStorageContext -StorageAccountName $storageAccountName
Get-AzStorageBlob -Container <containerName> -Blob <blobName> -Context $bearerCtx ``` ++ ### Example: New blobs must include a blob index tag This condition requires that any new blobs include a [blob index tag](storage-blob-index-how-to.md) key of Project and a value of Cascade.
-There are two actions that allow you to create new blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
+There are two actions that allow you to create new blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions:
> [!div class="mx-tableFixed"] > | Action | Notes |
There are two actions that allow you to create new blobs, so you must target bot
![Diagram of condition showing new blobs must include a blob index tag.](./media/storage-auth-abac-examples/blob-index-tags-new-blobs.png)
-```
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
- AND
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
- )
- OR
- (
- @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
- )
-)
-```
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
-#### Azure portal
+# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
Here are the settings to add this condition using the Azure portal.
:::image type="content" source="./media/storage-auth-abac-examples/blob-index-tags-new-blobs-portal.png" alt-text="Screenshot of condition editor in Azure portal showing new blobs must include a blob index tag." lightbox="./media/storage-auth-abac-examples/blob-index-tags-new-blobs-portal.png":::
-#### Azure PowerShell
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] StringEquals 'Cascade'
+ )
+)
+```
+
+After entering your code, switch back to the visual editor to validate it.
+
+# [PowerShell](#tab/azure-powershell)
Here's how to add this condition using Azure PowerShell.
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example2 -Blo
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example2 -Blob "Example2.txt" -Tag $grantedTag -Context $bearerCtx ``` ++ ### Example: Existing blobs must have blob index tag keys This condition requires that any existing blobs be tagged with at least one of the allowed [blob index tag](storage-blob-index-how-to.md) keys: Project or Program. This condition is useful for adding governance to existing blobs.
-There are two actions that allow you to update tags on existing blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions.
+There are two actions that allow you to update tags on existing blobs, so you must target both. You must add this condition to any role assignments that include one of the following actions:
> [!div class="mx-tableFixed"] > | Action | Notes |
There are two actions that allow you to update tags on existing blobs, so you mu
![Diagram of condition showing existing blobs must have blob index tag keys.](./media/storage-auth-abac-examples/blob-index-tags-keys.png)
-```
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
- AND
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})
- )
- OR
- (
- @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAllOfAnyValues:StringEquals {'Project', 'Program'}
- )
-)
-```
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
-#### Azure portal
+# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
Here are the settings to add this condition using the Azure portal.
:::image type="content" source="./media/storage-auth-abac-examples/blob-index-tags-keys-portal.png" alt-text="Screenshot of condition editor in Azure portal showing existing blobs must have blob index tag keys." lightbox="./media/storage-auth-abac-examples/blob-index-tags-keys-portal.png":::
-#### Azure PowerShell
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAllOfAnyValues:StringEquals {'Project', 'Program'}
+ )
+)
+```
+
+After entering your code, switch back to the visual editor to validate it.
+
+# [PowerShell](#tab/azure-powershell)
Here's how to add this condition using Azure PowerShell.
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example3 -Blo
$content = Set-AzStorageBlobContent -File $localSrcFile -Container example3 -Blob "Example3.txt" -Tag $grantedTag -Context $bearerCtx ``` ++ ### Example: Existing blobs must have a blob index tag key and values This condition requires that any existing blobs have a [blob index tag](storage-blob-index-how-to.md) key of Project and values of Cascade, Baker, or Skagit. This condition is useful for adding governance to existing blobs.
There are two actions that allow you to update tags on existing blobs, so you mu
![Diagram of condition showing existing blobs must have a blob index tag key and values.](./media/storage-auth-abac-examples/blob-index-tags-key-values.png)
-```
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
- AND
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})
- )
- OR
- (
- @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAnyOfAnyValues:StringEquals {'Project'}
- AND
- @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] ForAllOfAnyValues:StringEquals {'Cascade', 'Baker', 'Skagit'}
- )
-)
-```
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
-#### Azure portal
+# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
Here are the settings to add this condition using the Azure portal.
:::image type="content" source="./media/storage-auth-abac-examples/blob-index-tags-key-values-portal.png" alt-text="Screenshot of condition editor in Azure portal showing existing blobs must have a blob index tag key and values." lightbox="./media/storage-auth-abac-examples/blob-index-tags-key-values-portal.png":::
-#### Azure PowerShell
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags/write'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags&$keys$&] ForAnyOfAnyValues:StringEquals {'Project'}
+ AND
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] ForAllOfAnyValues:StringEquals {'Cascade', 'Baker', 'Skagit'}
+ )
+)
+```
+
+After entering your code, switch back to the visual editor to validate it.
+
+# [PowerShell](#tab/azure-powershell)
Here's how to add this condition using Azure PowerShell.
Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $grantedTag2
Set-AzStorageBlobTag -Container example4 -Blob "Example4.txt" -Tag $grantedTag3 -Context $bearerCtx ``` ++ ## Blob container names or paths This section includes examples showing how to restrict access to objects based on container name or blob path.
Suboperations aren't used in this condition because the suboperation is needed o
![Diagram of condition showing read, write, or delete blobs in named containers.](./media/storage-auth-abac-examples/containers-read-write-delete.png)
-Storage Blob Data Owner
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors in the Azure portal to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Delete a blob](storage-auth-abac-attributes.md#delete-a-blob)<br/>[Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
++
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+**Storage Blob Data Owner**
``` (
Storage Blob Data Owner
) ```
-Storage Blob Data Contributor
+**Storage Blob Data Contributor**
``` (
Storage Blob Data Contributor
) ```
-#### Azure portal
-
-Here are the settings to add this condition using the Azure portal.
-
-> [!div class="mx-tableFixed"]
-> | Condition #1 | Setting |
-> | | |
-> | Actions | [Delete a blob](storage-auth-abac-attributes.md#delete-a-blob)<br/>[Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
-> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
-> | Value | {containerName} |
-
+After entering your code, switch back to the visual editor to validate it.
-#### Azure PowerShell
+# [PowerShell](#tab/azure-powershell)
Here's how to add this condition using Azure PowerShell.
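The cmdlet listing here is also truncated. As a hedged sketch only, the assignment could be created with `New-AzRoleAssignment`. The condition body below is reconstructed from the visual editor settings above (the blob actions plus a container name `StringEquals` check), not copied from the published article, so validate it in the visual editor before relying on it; all angle-bracket values are placeholders.

```azurepowershell
# Hypothetical sketch: create a Storage Blob Data Contributor assignment limited to one container.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
$condition = @'
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
  AND
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
  AND
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
  AND
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
 )
)
'@

New-AzRoleAssignment -ObjectId "<user-object-id>" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope $scope `
    -Condition $condition `
    -ConditionVersion "2.0"
```

The truncated `Get-AzStorageBlobContent` and `Remove-AzStorageBlob` commands that follow are the article's verification steps.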
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "Example5
$content = Remove-AzStorageBlob -Container $grantedContainer -Blob "Example5.txt" -Context $bearerCtx ``` ++ ### Example: Read blobs in named containers with a path This condition allows read access to storage containers named blobs-example-container with a blob path of readonly/*. This condition is useful for sharing specific parts of storage containers for read access with other users in the subscription.
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-read.png)
-Storage Blob Data Owner
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringLike](../../role-based-access-control/conditions-format.md#stringlike) |
+> | Value | {pathString} |
++
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+**Storage Blob Data Owner**
``` (
Storage Blob Data Owner
) ```
-Storage Blob Data Reader, Storage Blob Data Contributor
+**Storage Blob Data Reader**, **Storage Blob Data Contributor**
``` (
Storage Blob Data Reader, Storage Blob Data Contributor
) ```
-#### Azure portal
-
-Here are the settings to add this condition using the Azure portal.
-
-> [!div class="mx-tableFixed"]
-> | Condition #1 | Setting |
-> | | |
-> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
-> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
-> | Value | {containerName} |
-> | **Expression 2** | |
-> | Operator | And |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
-> | Operator | [StringLike](../../role-based-access-control/conditions-format.md#stringlike) |
-> | Value | {pathString} |
-
+After entering your code, switch back to the visual editor to validate it.
-#### Azure PowerShell
+# [PowerShell](#tab/azure-powershell)
Here's how to add this condition using Azure PowerShell.
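The remainder of the listing is truncated here. As an unofficial sketch, an existing **Storage Blob Data Reader** assignment could be updated with this condition as shown below; the condition body is reconstructed from the visual editor table above (container name `StringEquals` plus blob path `StringLike 'readonly/*'`), and the angle-bracket values are placeholders.

```azurepowershell
# Hypothetical sketch: limit an existing Storage Blob Data Reader assignment to readonly/* blobs.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
$condition = @'
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
  AND
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'readonly/*'
 )
)
'@

$ra = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName "Storage Blob Data Reader" -ObjectId "<user-object-id>"
$ra.Condition = $condition
$ra.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $ra -PassThru
```

The truncated `Get-AzStorageBlobContent` commands that follow are the article's verification steps: a read outside `readonly/` is denied, while a read of `readonly/Example6.txt` succeeds.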
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "Ungrante
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "readonly/Example6.txt" -Context $bearerCtx ``` ++ ### Example: Read or list blobs in named containers with a path This condition allows read access and also list access to storage containers named blobs-example-container with a blob path of readonly/*. Condition #1 applies to read actions excluding list blobs. Condition #2 applies to list blobs. This condition is useful for sharing specific parts of storage containers for read or list access with other users in the subscription.
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read and list access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-read.png)
-Storage Blob Data Owner
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
-```
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
- AND
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
- )
- OR
- (
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
- AND
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringStartsWith 'readonly/'
- )
-)
-AND
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!NOTE]
+> The Azure portal uses prefix='' to list blobs from the container's root directory. After the condition is added with the list blobs operation using prefix StringStartsWith 'readonly/', targeted users won't be able to list blobs from the container's root directory in the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringStartsWith](../../role-based-access-control/conditions-format.md#stringstartswith) |
+> | Value | {pathString} |
+
+> [!div class="mx-tableFixed"]
+> | Condition #2 | Setting |
+> | | |
+> | Actions | [List blobs](storage-auth-abac-attributes.md#list-blobs)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | Request |
+> | Attribute | [Blob prefix](storage-auth-abac-attributes.md#blob-prefix) |
+> | Operator | [StringStartsWith](../../role-based-access-control/conditions-format.md#stringstartswith) |
+> | Value | {pathString} |
+
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+**Storage Blob Data Owner**
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
+ AND
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringStartsWith 'readonly/'
+ )
+)
+AND
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND SubOperationMatches{'Blob.List'})
AND !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/runAsSuperUser/action'}) )
AND
) ```
-Storage Blob Data Reader, Storage Blob Data Contributor
+**Storage Blob Data Reader**, **Storage Blob Data Contributor**
``` (
AND
) ```
-#### Azure portal
-
-Here are the settings to add this condition using the Azure portal.
+After entering your code, switch back to the visual editor to validate it.
-> [!NOTE]
-> The Azure portal uses prefix='' to list blobs from container's root directory. After the condition is added with the list blobs operation using prefix StringStartsWith 'readonly/', targeted users won't be able to list blobs from container's root directory in the Azure portal.
+# [PowerShell](#tab/azure-powershell)
-> [!div class="mx-tableFixed"]
-> | Condition #1 | Setting |
-> | | |
-> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
-> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
-> | Value | {containerName} |
-> | **Expression 2** | |
-> | Operator | And |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
-> | Operator | [StringStartsWith](../../role-based-access-control/conditions-format.md#stringstartswith) |
-> | Value | {pathString} |
+Currently no example provided.
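As a hypothetical check only (the article itself provides no PowerShell sample for this condition), you can exercise the list half of the condition after it's assigned. The account and container names below are placeholders; because the list-blobs expression compares the request's blob prefix, listing is expected to succeed only when a prefix under `readonly/` is supplied.

```azurepowershell
# Hypothetical verification; sign in first with Connect-AzAccount.
$bearerCtx = New-AzStorageContext -StorageAccountName "<storage-account-name>" -UseConnectedAccount

# Expected to succeed: the request prefix satisfies StringStartsWith 'readonly/'.
Get-AzStorageBlob -Container "blobs-example-container" -Prefix "readonly/" -Context $bearerCtx

# Expected to be denied: without a prefix, the list request doesn't satisfy the condition.
Get-AzStorageBlob -Container "blobs-example-container" -Context $bearerCtx
```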
-> [!div class="mx-tableFixed"]
-> | Condition #2 | Setting |
-> | | |
-> | Actions | [List blobs](storage-auth-abac-attributes.md#list-blobs)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
-> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
-> | Value | {containerName} |
-> | **Expression 2** | |
-> | Operator | And |
-> | Attribute source | Request |
-> | Attribute | [Blob prefix](storage-auth-abac-attributes.md#blob-prefix) |
-> | Operator | [StringStartsWith](../../role-based-access-control/conditions-format.md#stringstartswith) |
-> | Value | {pathString} |
+ ### Example: Write blobs in named containers with a path
You must add this condition to any role assignments that include the following a
![Diagram of condition showing write access to blobs in named containers with a path.](./media/storage-auth-abac-examples/containers-path-write.png)
-Storage Blob Data Owner
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | {containerName} |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
+> | Operator | [StringLike](../../role-based-access-control/conditions-format.md#stringlike) |
+> | Value | {pathString} |
++
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+**Storage Blob Data Owner**
``` (
Storage Blob Data Owner
) ```
-Storage Blob Data Contributor
+**Storage Blob Data Contributor**
``` (
Storage Blob Data Contributor
) ```
-#### Azure portal
-
-Here are the settings to add this condition using the Azure portal.
-
-> [!div class="mx-tableFixed"]
-> | Condition #1 | Setting |
-> | | |
-> | Actions | [Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Container name](storage-auth-abac-attributes.md#container-name) |
-> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
-> | Value | {containerName} |
-> | **Expression 2** | |
-> | Operator | And |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Blob path](storage-auth-abac-attributes.md#blob-path) |
-> | Operator | [StringLike](../../role-based-access-control/conditions-format.md#stringlike) |
-> | Value | {pathString} |
-
+After entering your code, switch back to the visual editor to validate it.
-#### Azure PowerShell
+# [PowerShell](#tab/azure-powershell)
Here's how to add this condition using Azure PowerShell.
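The listing is truncated here as well. As an unofficial sketch, the condition could be applied when creating the assignment; the condition body is reconstructed from the visual editor settings above (write actions plus container name and a blob path `StringLike` check), and the `uploads/contoso/*` path and angle-bracket values are placeholders drawn from the verification step below.

```azurepowershell
# Hypothetical sketch: allow writes only under uploads/contoso/ in one container.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
$condition = @'
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'})
  AND
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'
  AND
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'uploads/contoso/*'
 )
)
'@

New-AzRoleAssignment -ObjectId "<user-object-id>" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope $scope `
    -Condition $condition `
    -ConditionVersion "2.0"
```

The truncated `Set-AzStorageBlobContent` commands that follow are the article's verification steps: an upload to `uploads/contoso/` is expected to succeed, while uploads elsewhere are denied.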
$content = Set-AzStorageBlobContent -Container $grantedContainer -Blob "Example7
$content = Set-AzStorageBlobContent -Container $grantedContainer -Blob "uploads/contoso/Example7.txt" -Context $bearerCtx -File $localSrcFile ``` ++ ### Example: Read blobs with a blob index tag and a path This condition allows a user to read blobs with a [blob index tag](storage-blob-index-how-to.md) key of Program, a value of Alpine, and a blob path of logs*. The blob path of logs* also includes the blob name.
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to blobs with a blob index tag and a path.](./media/storage-auth-abac-examples/blob-index-tags-path-read.png)
-```
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
- )
- OR
- (
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<$key_case_sensitive$>] StringEquals 'Alpine'
- )
-)
-AND
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
- )
- OR
- (
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'
- )
-)
-```
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
-#### Azure portal
+# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
Here are the settings to add this condition using the Azure portal.
:::image type="content" source="./media/storage-auth-abac-examples/blob-index-tags-path-read-condition-2-portal.png" alt-text="Screenshot of condition 2 editor in Azure portal showing read access to blobs with a blob index tag and a path." lightbox="./media/storage-auth-abac-examples/blob-index-tags-path-read-condition-2-portal.png":::
-#### Azure PowerShell
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<$key_case_sensitive$>] StringEquals 'Alpine'
+ )
+)
+AND
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'
+ )
+)
+```
+
+After entering your code, switch back to the visual editor to validate it.
+
+# [PowerShell](#tab/azure-powershell)
Here's how to add this condition using Azure PowerShell.
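The rest of the listing is truncated. As a rough sketch (placeholders in angle brackets), an existing **Storage Blob Data Reader** assignment could be updated using the condition string shown in the code editor tab above.

```azurepowershell
# Hypothetical sketch; the condition string is the one shown in the code editor tab above.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
$condition = @'
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Program<$key_case_sensitive$>] StringEquals 'Alpine'
 )
)
AND
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:path] StringLike 'logs*'
 )
)
'@

$ra = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName "Storage Blob Data Reader" -ObjectId "<user-object-id>"
$ra.Condition = $condition
$ra.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $ra -PassThru
```

The truncated `Get-AzStorageBlobContent` commands that follow are the article's verification steps for the `logs` path and the `Program=Alpine` tag.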
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logsAlpi
$content = Get-AzStorageBlobContent -Container $grantedContainer -Blob "logs/AlpineFile.txt" -Context $bearerCtx ``` ++ ## Blob versions or blob snapshots This section includes examples showing how to restrict access to objects based on the blob version or snapshot.
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to current blob version only.](./media/storage-auth-abac-examples/current-version-read-only.png)
-Storage Blob Data Owner
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Is Current Version](storage-auth-abac-attributes.md#is-current-version) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
+
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+**Storage Blob Data Owner**
``` (
Storage Blob Data Owner
) ```
-Storage Blob Data Reader, Storage Blob Data Contributor
+**Storage Blob Data Reader**, **Storage Blob Data Contributor**
``` (
Storage Blob Data Reader, Storage Blob Data Contributor
) ```
-#### Azure portal
+After entering your code, switch back to the visual editor to validate it.
-Here are the settings to add this condition using the Azure portal.
+# [PowerShell](#tab/azure-powershell)
-> [!div class="mx-tableFixed"]
-> | Condition #1 | Setting |
-> | | |
-> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Is Current Version](storage-auth-abac-attributes.md#is-current-version) |
-> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
-> | Value | True |
+Currently no example provided.
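As an unofficial sketch only (no PowerShell sample is provided in the article), a new assignment with this condition could look like the following. The condition body is reconstructed from the visual editor settings above and the `isCurrentVersion` attribute used elsewhere in this article; angle-bracket values are placeholders.

```azurepowershell
# Hypothetical sketch: allow reads of current blob versions only.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
$condition = @'
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
 )
)
'@

New-AzRoleAssignment -ObjectId "<user-object-id>" `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope $scope `
    -Condition $condition `
    -ConditionVersion "2.0"
```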
++ ### Example: Read current blob versions and a specific blob version
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to a specific blob version.](./media/storage-auth-abac-examples/version-id-specific-blob-read.png)
-```
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
- )
- OR
- (
- @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'
- OR
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
- )
-)
-```
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
-#### Azure portal
+# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
Here are the settings to add this condition using the Azure portal.
> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) | > | Value | True |
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeEquals '2022-06-01T23:38:32.8883645Z'
+ OR
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:isCurrentVersion] BoolEquals true
+ )
+)
+```
+
+After entering your code, switch back to the visual editor to validate it.
+
+# [PowerShell](#tab/azure-powershell)
+
+Currently no example provided.
+++ ### Example: Delete old blob versions This condition allows a user to delete versions of a blob that are older than 06/01/2022 to perform cleanup. The [Version ID](storage-auth-abac-attributes.md#version-id) attribute is available only for storage accounts where hierarchical namespace isn't enabled.
You must add this condition to any role assignments that include the following a
> | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete` | | > | `Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action` | |
-![Diagram of condition showing delete access to old blob versions.](./media/storage-auth-abac-examples/version-id-blob-delete.png)
+![Diagram of condition showing delete access to old blob versions.](./media/storage-auth-abac-examples/version-id-blob-delete.png)
+
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Delete a blob](storage-auth-abac-attributes.md#delete-a-blob)<br/>[Delete a version of a blob](storage-auth-abac-attributes.md#delete-a-version-of-a-blob) |
+> | Attribute source | Request |
+> | Attribute | [Version ID](storage-auth-abac-attributes.md#version-id) |
+> | Operator | [DateTimeLessThan](../../role-based-access-control/conditions-format.md#datetime-comparison-operators) |
+> | Value | &lt;blobVersionId&gt; |
+
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
``` (
You must add this condition to any role assignments that include the following a
) ```
-#### Azure portal
+After entering your code, switch back to the visual editor to validate it.
-Here are the settings to add this condition using the Azure portal.
+# [PowerShell](#tab/azure-powershell)
-> [!div class="mx-tableFixed"]
-> | Condition #1 | Setting |
-> | | |
-> | Actions | [Delete a blob](storage-auth-abac-attributes.md#delete-a-blob)<br/>[Delete a version of a blob](storage-auth-abac-attributes.md#delete-a-version-of-a-blob) |
-> | Attribute source | Request |
-> | Attribute | [Version ID](storage-auth-abac-attributes.md#version-id) |
-> | Operator | [DateTimeLessThan](../../role-based-access-control/conditions-format.md#datetime-comparison-operators) |
-> | Value | &lt;blobVersionId&gt; |
+Currently no example provided.
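As an unofficial sketch only (no PowerShell sample is provided in the article), an existing **Storage Blob Data Contributor** assignment could be updated with this condition. The condition body is reconstructed from the visual editor settings above (delete actions plus a `versionId` `DateTimeLessThan` check), and the cutoff timestamp and angle-bracket values are placeholders.

```azurepowershell
# Hypothetical sketch: allow deletion only of blob versions older than the cutoff date.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
$condition = @'
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete'})
  AND
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/deleteBlobVersion/action'})
 )
 OR
 (
  @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs:versionId] DateTimeLessThan '2022-06-01T00:00:00.0Z'
 )
)
'@

$ra = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName "Storage Blob Data Contributor" -ObjectId "<user-object-id>"
$ra.Condition = $condition
$ra.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $ra -PassThru
```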
++ ### Example: Read current blob versions and any blob snapshots
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to current blob versions and any blob snapshots.](./media/storage-auth-abac-examples/version-id-snapshot-blob-read.png)
-Storage Blob Data Owner
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | Request |
+> | Attribute | [Snapshot](storage-auth-abac-attributes.md#snapshot) |
+> | Exists | [Checked](../../role-based-access-control/conditions-format.md#exists) |
+> | **Expression 2** | |
+> | Operator | Or |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Is Current Version](storage-auth-abac-attributes.md#is-current-version) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
+
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+**Storage Blob Data Owner**
``` (
Storage Blob Data Owner
) ```
-Storage Blob Data Reader, Storage Blob Data Contributor
+**Storage Blob Data Reader**, **Storage Blob Data Contributor**
``` (
Storage Blob Data Reader, Storage Blob Data Contributor
) ```
-#### Azure portal
+After entering your code, switch back to the visual editor to validate it.
-Here are the settings to add this condition using the Azure portal.
+# [PowerShell](#tab/azure-powershell)
-> [!div class="mx-tableFixed"]
-> | Condition #1 | Setting |
-> | | |
-> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
-> | Attribute source | Request |
-> | Attribute | [Snapshot](storage-auth-abac-attributes.md#snapshot) |
-> | Exists | [Checked](../../role-based-access-control/conditions-format.md#exists) |
-> | **Expression 2** | |
-> | Operator | Or |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Is Current Version](storage-auth-abac-attributes.md#is-current-version) |
-> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
-> | Value | True |
+Currently no example provided.
++ ## Hierarchical namespace
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to storage accounts with hierarchical namespace enabled.](./media/storage-auth-abac-examples/hierarchical-namespace-accounts-read.png)
-Storage Blob Data Owner
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Is hierarchical namespace enabled](storage-auth-abac-attributes.md#is-hierarchical-namespace-enabled) |
+> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
+> | Value | True |
+
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+**Storage Blob Data Owner**
``` (
Storage Blob Data Owner
) ```
-Storage Blob Data Reader, Storage Blob Data Contributor
+**Storage Blob Data Reader**, **Storage Blob Data Contributor**
``` (
Storage Blob Data Reader, Storage Blob Data Contributor
) ```
-#### Azure portal
+After entering your code, switch back to the visual editor to validate it.
-Here are the settings to add this condition using the Azure portal.
+# [PowerShell](#tab/azure-powershell)
-> [!div class="mx-tableFixed"]
-> | Condition #1 | Setting |
-> | | |
-> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[All data operations for accounts with hierarchical namespace enabled](storage-auth-abac-attributes.md#all-data-operations-for-accounts-with-hierarchical-namespace-enabled) (if applicable) |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Is hierarchical namespace enabled](storage-auth-abac-attributes.md#is-hierarchical-namespace-enabled) |
-> | Operator | [BoolEquals](../../role-based-access-control/conditions-format.md#boolean-comparison-operators) |
-> | Value | True |
+Currently no example provided.
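As an unofficial sketch only (no PowerShell sample is provided in the article, and the account-level `isHnsEnabled` attribute path below is an assumption), a new assignment with this condition could look like the following; angle-bracket values are placeholders.

```azurepowershell
# Hypothetical sketch: allow blob reads only on accounts with a hierarchical namespace.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
$condition = @'
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts:isHnsEnabled] BoolEquals true
 )
)
'@

New-AzRoleAssignment -ObjectId "<user-object-id>" `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope $scope `
    -Condition $condition `
    -ConditionVersion "2.0"
```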
++ ## Encryption scope
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read access to blobs with encryption scope validScope1 or validScope2.](./media/storage-auth-abac-examples/encryption-scope-read-blobs.png)
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Encryption scope name](storage-auth-abac-attributes.md#encryption-scope-name) |
+> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
+> | Value | &lt;scopeName&gt; |
+
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+ ``` ( (
You must add this condition to any role assignments that include the following a
) ```
-#### Azure portal
+After entering your code, switch back to the visual editor to validate it.
-Here are the settings to add this condition using the Azure portal.
+# [PowerShell](#tab/azure-powershell)
-> [!div class="mx-tableFixed"]
-> | Condition #1 | Setting |
-> | | |
-> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob) |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Encryption scope name](storage-auth-abac-attributes.md#encryption-scope-name) |
-> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
-> | Value | &lt;scopeName&gt; |
+Currently no example provided.
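As an unofficial sketch only (no PowerShell sample is provided in the article, and the encryption scope attribute path below is an assumption), an existing **Storage Blob Data Reader** assignment could be updated as follows. The scope names come from the example's diagram; angle-bracket values are placeholders.

```azurepowershell
# Hypothetical sketch: allow reads only of blobs encrypted with validScope1 or validScope2.
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
$condition = @'
(
 (
  !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
 )
 OR
 (
  @Resource[Microsoft.Storage/storageAccounts/encryptionScopes:name] ForAnyOfAnyValues:StringEquals {'validScope1', 'validScope2'}
 )
)
'@

$ra = Get-AzRoleAssignment -Scope $scope -RoleDefinitionName "Storage Blob Data Reader" -ObjectId "<user-object-id>"
$ra.Condition = $condition
$ra.ConditionVersion = "2.0"
Set-AzRoleAssignment -InputObject $ra -PassThru
```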
++ ### Example: Read or write blobs in named storage account with specific encryption scope
You must add this condition to any role assignments that include the following a
![Diagram of condition showing read or write access to blobs in sampleaccount storage account with encryption scope ScopeCustomKey1.](./media/storage-auth-abac-examples/encryption-scope-account-name-read-wite-blobs.png)
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
+
+# [Portal: Visual editor](#tab/portal-visual-editor)
+
+Here are the settings to add this condition using the Azure portal.
+
+> [!div class="mx-tableFixed"]
+> | Condition #1 | Setting |
+> | | |
+> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data) |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Account name](storage-auth-abac-attributes.md#account-name) |
+> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
+> | Value | &lt;accountName&gt; |
+> | **Expression 2** | |
+> | Operator | And |
+> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
+> | Attribute | [Encryption scope name](storage-auth-abac-attributes.md#encryption-scope-name) |
+> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
+> | Value | &lt;scopeName&gt; |
+
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+ ``` ( (
You must add this condition to any role assignments that include the following a
) ```
-#### Azure portal
+After entering your code, switch back to the visual editor to validate it.
-Here are the settings to add this condition using the Azure portal.
+# [PowerShell](#tab/azure-powershell)
-> [!div class="mx-tableFixed"]
-> | Condition #1 | Setting |
-> | | |
-> | Actions | [Read a blob](storage-auth-abac-attributes.md#read-a-blob)<br/>[Write to a blob](storage-auth-abac-attributes.md#write-to-a-blob)<br/>[Create a blob or snapshot, or append data](storage-auth-abac-attributes.md#create-a-blob-or-snapshot-or-append-data) |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Account name](storage-auth-abac-attributes.md#account-name) |
-> | Operator | [StringEquals](../../role-based-access-control/conditions-format.md#stringequals) |
-> | Value | &lt;accountName&gt; |
-> | **Expression 2** | |
-> | Operator | And |
-> | Attribute source | [Resource](../../role-based-access-control/conditions-format.md#resource-attributes) |
-> | Attribute | [Encryption scope name](storage-auth-abac-attributes.md#encryption-scope-name) |
-> | Operator | [ForAnyOfAnyValues:StringEquals](../../role-based-access-control/conditions-format.md#foranyofanyvalues) |
-> | Value | &lt;scopeName&gt; |
+Currently no example provided.
++ ## Principal attributes
For more information, see [Allow read access to blobs based on tags and custom s
![Diagram of condition showing read or write access to blobs based on blob index tags and custom security attributes.](./media/storage-auth-abac-examples/principal-blob-index-tags-read-write.png)
-```
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
- )
- OR
- (
- @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
- )
-)
-AND
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
- AND
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
- )
- OR
- (
- @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
- )
-)
-```
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
-#### Azure portal
+# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
Here are the settings to add this condition using the Azure portal.
> | Attribute | [Blob index tags [Values in key]](storage-auth-abac-attributes.md#blob-index-tags-values-in-key) | > | Key | &lt;key&gt; |
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
+ )
+)
+AND
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ AND
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action'} AND SubOperationMatches{'Blob.Write.WithTagHeaders'})
+ )
+ OR
+ (
+ @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project] StringEquals @Request[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>]
+ )
+)
+```
+
+After entering your code, switch back to the visual editor to validate it.
+
+# [PowerShell](#tab/azure-powershell)
+
+Currently no example provided.
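As a hypothetical way to exercise the condition after it's assigned (no PowerShell sample is provided in the article; the account name, container name, file, and tag value below are placeholders, and the `-Tag` parameter requires a recent Az.Storage module), both the upload and the read should succeed only when the blob's `Project` index tag equals the caller's `Engineering_Project` custom security attribute value.

```azurepowershell
# Hypothetical test; sign in first with Connect-AzAccount.
$bearerCtx = New-AzStorageContext -StorageAccountName "<storage-account-name>" -UseConnectedAccount

# Upload with a tag header: expected to be allowed only when Project matches the caller's
# Engineering_Project custom security attribute value.
Set-AzStorageBlobContent -File ".\Example.txt" -Container "<container-name>" -Blob "Example.txt" `
    -Tag @{ Project = "Baker" } -Context $bearerCtx

# Read: expected to be allowed only when the existing blob's Project tag matches the caller's attribute value.
Get-AzStorageBlobContent -Container "<container-name>" -Blob "Example.txt" -Context $bearerCtx
```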
+++ ### Example: Read blobs based on blob index tags and multi-value custom security attributes This condition allows read access to blobs if the user has a [custom security attribute](../../active-directory/fundamentals/custom-security-attributes-overview.md) with any values that matches the [blob index tag](storage-blob-index-how-to.md).
For more information, see [Allow read access to blobs based on tags and custom s
![Diagram of condition showing read access to blobs based on blob index tags and multi-value custom security attributes.](./media/storage-auth-abac-examples/principal-blob-index-tags-multi-value-read.png)
-```
-(
- (
- !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
- )
- OR
- (
- @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] ForAnyOfAnyValues:StringEquals @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project]
- )
-)
-```
+The condition can be added to a role assignment using either the Azure portal or Azure PowerShell. The portal has two tools for building ABAC conditions: the visual editor and the code editor. You can switch between the two editors to see your conditions in different views. Switch between the **Visual editor** and **Code editor** tabs below to view the examples for your preferred portal editor.
-#### Azure portal
+# [Portal: Visual editor](#tab/portal-visual-editor)
Here are the settings to add this condition using the Azure portal.
Here are the settings to add this condition using the Azure portal.
> | Attribute source | [Principal](../../role-based-access-control/conditions-format.md#principal-attributes) | > | Attribute | &lt;attributeset&gt;_&lt;key&gt; |
+# [Portal: Code editor](#tab/portal-code-editor)
+
+To add the condition using the code editor, copy the condition code sample below and paste it into the code editor.
+
+```
+(
+ (
+ !(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'} AND NOT SubOperationMatches{'Blob.List'})
+ )
+ OR
+ (
+ @Resource[Microsoft.Storage/storageAccounts/blobServices/containers/blobs/tags:Project<$key_case_sensitive$>] ForAnyOfAnyValues:StringEquals @Principal[Microsoft.Directory/CustomSecurityAttributes/Id:Engineering_Project]
+ )
+)
+```
+
+After entering your code, switch back to the visual editor to validate it.
+
+# [PowerShell](#tab/azure-powershell)
+
+Currently no example provided.
+++ ## Environment attributes This section includes examples showing how to restrict access to objects based on the network environment or the current date and time.
After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
-Here's how to add this condition for the Storage Blob Data Reader role using Azure PowerShell.
+Here's how to add this condition for the **Storage Blob Data Reader** role using Azure PowerShell.
```azurepowershell $subId = "<your subscription id>"
After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
-Here's how to add this condition for the Storage Blob Data Contributor role using Azure PowerShell.
+Here's how to add this condition for the **Storage Blob Data Contributor** role using Azure PowerShell.
```azurepowershell $subId = "<your subscription id>"
After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
-Here's how to add this condition for the Storage Blob Data Reader role using Azure PowerShell.
+Here's how to add this condition for the **Storage Blob Data Reader** role using Azure PowerShell.
```azurepowershell $subId = "<your subscription id>"
After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
-Here's how to add this condition for the Storage Blob Data Contributor role using Azure PowerShell.
+Here's how to add this condition for the **Storage Blob Data Contributor** role using Azure PowerShell.
```azurepowershell $subId = "<your subscription id>"
After entering your code, switch back to the visual editor to validate it.
# [PowerShell](#tab/azure-powershell)
-Here's how to add this condition for the Storage Blob Data Reader role using Azure PowerShell.
+Here's how to add this condition for the **Storage Blob Data Reader** role using Azure PowerShell.
```azurepowershell $subId = "<your subscription id>"
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
description: Determine the level of support for each storage account feature giv
Previously updated : 01/09/2023 Last updated : 11/28/2023 # Blob Storage feature support in Azure Storage accounts
-Feature support is impacted by the type of account that you create and the settings that you enable on that account. You can use the tables in this article to assess feature support based on these factors. The items that appear in these tables will change over time as support continues to expand.
+Feature support is impacted by the type of account that you create and the settings that you enable on that account. You can use the tables in this article to assess feature support based on these factors. The items that appear in these tables will change over time as support continues to expand.
## How to use these tables
The following table describes whether a feature is supported in a standard gener
| [Blobfuse](storage-how-to-mount-container-linux.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
-| [Customer-managed planned failover (preview)](../common/storage-disaster-recovery-guidance.md#customer-managed-planned-failover-preview) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
-| [Customer-managed failover](../common/storage-disaster-recovery-guidance.md#customer-managed-failover) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Customer-managed keys with key vault in the same tenant](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Customer-managed keys with key vault in a different tenant (cross-tenant)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | | [Customer-provided keys](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
The following table describes whether a feature is supported in a standard gener
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; | | [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
-| [Storage Analytics metrics (classic)](../common/storage-metrics-migration.md?toc=/azure/storage/blobs/toc.json)<sup>3</sup> | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
<sup>1</sup> Requests that clients make by using NFS 3.0 or SFTP can't be authorized by using Microsoft Entra security. <sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
-<sup>3</sup> Storage Analytics metrics is retired. See [Transition to metrics in Azure Monitor](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json).
- ## Premium block blob accounts The following table describes whether a feature is supported in a premium block blob account when you enable a hierarchical namespace (HNS), NFS 3.0 protocol, or SFTP.
The following table describes whether a feature is supported in a premium block
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; | | [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24;| &#x2705; |
-| [Storage Analytics metrics (classic)](../common/storage-metrics-migration.md?toc=/azure/storage/blobs/toc.json)<sup>3</sup> | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
<sup>1</sup> Requests that clients make by using NFS 3.0 or SFTP can't be authorized by using Microsoft Entra security. <sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
-<sup>3</sup> Storage Analytics metrics is retired. See [Transition to metrics in Azure Monitor](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json).
- ## See also - [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md)
storage Manage Storage Analytics Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/manage-storage-analytics-metrics.md
- Title: Enable and manage Azure Storage Analytics metrics (classic)
-description: Learn how to enable, edit, and view Azure Storage Analytics metrics.
--- Previously updated : 10/03/2022------
-# Enable and manage Azure Storage Analytics metrics (classic)
-
-[Azure Storage Analytics](storage-analytics.md) provides metrics for all storage services for blobs, queues, and tables. You can use the [Azure portal](https://portal.azure.com) to configure which metrics are recorded for your account, and configure charts that provide visual representations of your metrics data. This article shows you how to enable and manage metrics. To learn how to enable logs, see [Enable and manage Azure Storage Analytics logs (classic)](manage-storage-analytics-logs.md).
-
-We recommend you review [Azure Monitor for Storage](./storage-insights-overview.md?toc=/azure/azure-monitor/toc.json). It is a feature of Azure Monitor that offers comprehensive monitoring of your Azure Storage accounts by delivering a unified view of your Azure Storage services performance, capacity, and availability. It does not require you to enable or configure anything, and you can immediately view these metrics from the pre-defined interactive charts and other visualizations included.
-
-> [!NOTE]
-> There are costs associated with examining monitoring data in the Azure portal. For more information, see [Storage Analytics](storage-analytics.md).
->
-> Premium performance block blob storage accounts don't support Storage Analytics metrics. If you want to view metrics with premium performance block blob storage accounts, consider using [Azure Storage Metrics in Azure Monitor](../blobs/monitor-blob-storage.md).
->
-> For an in-depth guide on using Storage Analytics and other tools to identify, diagnose, and troubleshoot Azure Storage-related issues, see [Monitor, diagnose, and troubleshoot Microsoft Azure Storage](storage-monitoring-diagnosing-troubleshooting.md).
->
-
-<a id="Enable-metrics"></a>
-
-## Enable metrics
-
-### [Portal](#tab/azure-portal)
-
-1. In the [Azure portal](https://portal.azure.com), select **Storage accounts**, then the storage account name to open the account dashboard.
-
-2. Select **Diagnostic settings (classic)** in the **Monitoring (classic)** section of the menu blade.
-
- ![Screenshot that highlights the Diagnostic settings (classic) option under the Monitoring (Classic) section.](./media/manage-storage-analytics-metrics/storage-enable-metrics-00.png)
-
-3. Select the **type** of metrics data for each **service** you wish to monitor, and the **retention policy** for the data. You can also disable monitoring by setting **Status** to **Off**.
-
- > [!div class="mx-imgBorder"]
- > ![Configure logging in the Azure portal.](./media/manage-storage-analytics-logs/enable-diagnostics.png)
-
    To set the data retention policy, move the **Retention (days)** slider or enter the number of days of data to retain, from 1 to 365. The default for new storage accounts is seven days. If you do not want to set a retention policy, leave the **Delete data** checkbox unchecked. If there is no retention policy, it is up to you to delete the metrics data.
-
- > [!WARNING]
    > Metrics are stored as data in your account. Metric data can accumulate in your account over time, which can increase the cost of storage. If you need metric data for only a small period of time, you can reduce your costs by modifying the data retention policy. Stale metrics data (data older than your retention policy) is deleted by the system. We recommend setting a retention policy based on how long you want to retain the metrics data for your account. See [Billing on storage metrics](storage-analytics-metrics.md#billing-on-storage-metrics) for more information.
- >
-
-4. When you finish the monitoring configuration, select **Save**.
-
-A default set of metrics is displayed in charts on the **Overview** blade, as well as the **Metrics (classic)** blade.
-Once you've enabled metrics for a service, it may take up to an hour for data to appear in its charts. You can select **Edit** on any metric chart to configure which metrics are displayed in the chart.
-
-You can disable metrics collection and logging by setting **Status** to **Off**.
-
-> [!NOTE]
-> Azure Storage uses [table storage](storage-introduction.md#table-storage) to store the metrics for your storage account in tables within that same account. For more information, see [How metrics are stored](storage-analytics-metrics.md#how-metrics-are-stored).
-
-### [PowerShell](#tab/azure-powershell)
-
-1. Open a Windows PowerShell command window.
-
-2. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
-
- ```powershell
- Connect-AzAccount
- ```
-
-3. If your identity is associated with more than one subscription, then set your active subscription.
-
- ```powershell
- $context = Get-AzSubscription -SubscriptionId <subscription-id>
- Set-AzContext $context
- ```
-
- Replace the `<subscription-id>` placeholder value with the ID of your subscription.
-
-4. Get the storage account context that defines the storage account you want to use.
-
- ```powershell
- $storageAccount = Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -AccountName "<storage-account-name>"
- $ctx = $storageAccount.Context
- ```
-
- - Replace the `<resource-group-name>` placeholder value with the name of your resource group.
-
- - Replace the `<storage-account-name>` placeholder value with the name of your storage account.
-
-5. You can use PowerShell on your local machine to configure storage metrics in your storage account. Use the Azure PowerShell cmdlet **Set-AzStorageServiceMetricsProperty** to change the current settings.
-
- The following command switches on minute metrics for the blob service in your storage account with the retention period set to five days.
-
- ```powershell
- Set-AzStorageServiceMetricsProperty -MetricsType Minute -ServiceType Blob -MetricsLevel ServiceAndApi -RetentionDays 5 -Context $ctx
- ```
-
- This cmdlet uses the following parameters:
-
- - **ServiceType**: Possible values are **Blob**, **Queue**, **Table**, and **File**.
- - **MetricsType**: Possible values are **Hour** and **Minute**.
- - **MetricsLevel**: Possible values are:
- - **None**: Turns off monitoring.
- - **Service**: Collects metrics such as ingress and egress, availability, latency, and success percentages, which are aggregated for the blob, queue, table, and file services.
- - **ServiceAndApi**: In addition to the service metrics, collects the same set of metrics for each storage operation in the Azure Storage service API.
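-
- For example, a minimal sketch (reusing the `$ctx` context from the earlier step) that turns minute metrics back off for the blob service by setting the level to **None**:
-
- ```powershell
- # Turn off minute metrics collection for the Blob service.
- Set-AzStorageServiceMetricsProperty -MetricsType Minute -ServiceType Blob -MetricsLevel None -Context $ctx
- ```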
-
- The following command retrieves the current hourly metrics level and retention days for the blob service in your storage account:
-
- ```powershell
- Get-AzStorageServiceMetricsProperty -MetricsType Hour -ServiceType Blob -Context $ctx
- ```
-
- For information about how to configure the Azure PowerShell cmdlets to work with your Azure subscription and how to select the default storage account to use, see [Install and configure Azure PowerShell](/powershell/azure/).
-
-### [.NET](#tab/dotnet)
--
-For more information about using a .NET language to configure storage metrics, see [Azure Storage client libraries for .NET](/dotnet/api/overview/azure/storage).
-
-For general information about configuring storage metrics by using the REST API, see [Enabling and configuring Storage Analytics](/rest/api/storageservices/Enabling-and-Configuring-Storage-Analytics).
---
-<a id="view-metrics"></a>
-
-## View metrics in a chart
-
-After you configure Storage Analytics metrics to monitor your storage account, Storage Analytics records the metrics in a set of well-known tables in your storage account. You can configure charts to view hourly metrics in the [Azure portal](https://portal.azure.com).
-
-Use the following procedure to choose which storage metrics to view in a metrics chart.
-
-1. Start by displaying a storage metric chart in the Azure portal. You can find charts on the **storage account blade** and in the **Metrics (classic)** blade.
-
-   This example uses the following chart, which appears on the **storage account blade**:
-
- ![Chart selection in Azure portal](./media/manage-storage-analytics-metrics/stg-customize-chart-00.png)
-
-2. Click anywhere within the chart to edit the chart.
-
-3. Next, select the **Time Range** of the metrics to display in the chart, and the **service** (blob, queue, table, file) whose metrics you wish to display. Here, the past week's metrics are selected to display for the blob service:
-
- ![Time range and service selection in the Edit Chart blade](./media/manage-storage-analytics-metrics/storage-edit-metric-time-range.png)
-
-4. Select the individual **metrics** you'd like displayed in the chart, then click **OK**.
-
- ![Individual metric selection in Edit Chart blade](./media/manage-storage-analytics-metrics/storage-edit-metric-selections.png)
-
-Your chart settings do not affect the collection, aggregation, or storage of monitoring data in the storage account.
-
-#### Metrics availability in charts
-
-The list of available metrics changes based on which service you've chosen in the drop-down, and the unit type of the chart you're editing. For example, you can select percentage metrics like *PercentNetworkError* and *PercentThrottlingError* only if you're editing a chart that displays units in percentage:
-
-![Request error percentage chart in the Azure portal](./media/manage-storage-analytics-metrics/stg-customize-chart-04.png)
-
-#### Metrics resolution
-
-The metrics you select in **Diagnostics** determine the resolution of the metrics that are available for your account:
-
-- **Aggregate** monitoring provides metrics such as ingress/egress, availability, latency, and success percentages. These metrics are aggregated from the blob, table, file, and queue services.
-- **Per API** provides finer resolution, with metrics available for individual storage operations, in addition to the service-level aggregates.
-
-## Download metrics to archive or analyze locally
-
-If you want to download the metrics for long-term storage or to analyze them locally, you must use a tool or write some code to read the tables. The tables don't appear if you list all the tables in your storage account, but you can access them directly by name. Many storage-browsing tools are aware of these tables and enable you to view them directly. For a list of available tools, see [Azure Storage client tools](./storage-explorers.md).
-
-|Metrics|Table names|Notes|
-|-|-|-|
-|Hourly metrics|$MetricsHourPrimaryTransactionsBlob<br /><br /> $MetricsHourPrimaryTransactionsTable<br /><br /> $MetricsHourPrimaryTransactionsQueue<br /><br /> $MetricsHourPrimaryTransactionsFile|In versions prior to August 15, 2013, these tables were known as:<br /><br /> $MetricsTransactionsBlob<br /><br /> $MetricsTransactionsTable<br /><br /> $MetricsTransactionsQueue<br /><br /> Metrics for the file service are available beginning with version April 5, 2015.|
-|Minute metrics|$MetricsMinutePrimaryTransactionsBlob<br /><br /> $MetricsMinutePrimaryTransactionsTable<br /><br /> $MetricsMinutePrimaryTransactionsQueue<br /><br /> $MetricsMinutePrimaryTransactionsFile|Can only be enabled by using PowerShell or programmatically.<br /><br /> Metrics for the file service are available beginning with version April 5, 2015.|
-|Capacity|$MetricsCapacityBlob|Blob service only.|
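-
-For example, here's a rough PowerShell sketch of reading the hourly Blob transaction table directly by name. It assumes the Az.Storage and AzTable modules and reuses a storage account context in `$ctx`; if the `$Metrics` tables can't be retrieved by name in your environment, use one of the storage-browsing tools listed earlier instead.
-
-```powershell
-# Get a reference to the hourly Blob transaction metrics table by its well-known name.
-$metricsTable = Get-AzStorageTable -Name '$MetricsHourPrimaryTransactionsBlob' -Context $ctx
-
-# Read the entries for roughly the last two hours (PartitionKey format: yyyyMMdd'T'HHmm)
-# and keep only user transactions (row keys that start with "user;").
-$start = (Get-Date).ToUniversalTime().AddHours(-2).ToString("yyyyMMdd'T'HHmm")
-Get-AzTableRow -Table $metricsTable.CloudTable -CustomFilter "PartitionKey ge '$start'" |
-    Where-Object { $_.RowKey -like 'user;*' } |
-    Select-Object RowKey, TotalRequests, TotalBillableRequests, Availability
-```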
-
-For full details of the schemas for these tables, see [Storage Analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema). The following sample rows show only a subset of the columns available, but they illustrate some important features of the way storage metrics saves these metrics:
-
-|PartitionKey|RowKey|Timestamp|TotalRequests|TotalBillableRequests|TotalIngress|TotalEgress|Availability|AverageE2ELatency|AverageServerLatency|PercentSuccess|
-|-|-|-|-|-|-|-|-|-|-|-|
-|20140522T1100|user;All|2014-05-22T11:01:16.7650250Z|7|7|4003|46801|100|104.4286|6.857143|100|
-|20140522T1100|user;QueryEntities|2014-05-22T11:01:16.7640250Z|5|5|2694|45951|100|143.8|7.8|100|
-|20140522T1100|user;QueryEntity|2014-05-22T11:01:16.7650250Z|1|1|538|633|100|3|3|100|
-|20140522T1100|user;UpdateEntity|2014-05-22T11:01:16.7650250Z|1|1|771|217|100|9|6|100|
-
-In this example of minute metrics data, the partition key uses the time at minute resolution. The row key identifies the type of information that's stored in the row. The information is composed of the access type and the request type:
-- The access type is either **user** or **system**, where **user** refers to all user requests to the storage service and **system** refers to requests made by Storage Analytics.
-- The request type is either **all**, in which case it's a summary line, or it identifies the specific API such as **QueryEntity** or **UpdateEntity**.
-
-This sample data shows all the records for a single minute (starting at 11:00AM), so the number of **QueryEntities** requests plus the number of **QueryEntity** requests plus the number of **UpdateEntity** requests adds up to seven. This total is shown in the **user;All** row. Similarly, you can derive the average end-to-end latency of 104.4286 on the **user;All** row by calculating ((143.8 * 5) + 3 + 9)/7.
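-
-You can check that calculation with a quick bit of PowerShell arithmetic:
-
-```powershell
-# Weighted average of the per-API end-to-end latencies, weighted by request count.
-$averageE2ELatency = ((143.8 * 5) + 3 + 9) / 7
-[math]::Round($averageE2ELatency, 4)   # 104.4286
-```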
-
-## View metrics data programmatically
-
-The following listing shows sample C# code that accesses the minute metrics for a range of minutes and displays the results in a console window. The code sample uses the Azure Storage client library version 4.x or later, which includes the **CloudAnalyticsClient** class that simplifies accessing the metrics tables in storage.
-
-> [!NOTE]
-> The **CloudAnalyticsClient** class is not included in the Azure Blob storage client library v12 for .NET. On **August 31, 2023** Storage Analytics metrics, also referred to as *classic metrics* will be retired. For more information, see the [official announcement](https://azure.microsoft.com/updates/azure-storage-classic-metrics-will-be-retired-on-31-august-2023/). If you use classic metrics, we recommend that you transition to metrics in Azure Monitor prior to that date.
-
-```csharp
-private static void PrintMinuteMetrics(CloudAnalyticsClient analyticsClient, DateTimeOffset startDateTime, DateTimeOffset endDateTime)
-{
- // Convert the dates to the format used in the PartitionKey.
- var start = startDateTime.ToUniversalTime().ToString("yyyyMMdd'T'HHmm");
- var end = endDateTime.ToUniversalTime().ToString("yyyyMMdd'T'HHmm");
-
- var services = Enum.GetValues(typeof(StorageService));
- foreach (StorageService service in services)
- {
- Console.WriteLine("Minute Metrics for Service {0} from {1} to {2} UTC", service, start, end);
- var metricsQuery = analyticsClient.CreateMinuteMetricsQuery(service, StorageLocation.Primary);
- var t = analyticsClient.GetMinuteMetricsTable(service);
- var opContext = new OperationContext();
- var query =
- from entity in metricsQuery
- // Note, you can't filter using the entity properties Time, AccessType, or TransactionType
- // because they are calculated fields in the MetricsEntity class.
- // The PartitionKey identifies the DataTime of the metrics.
- where entity.PartitionKey.CompareTo(start) >= 0 && entity.PartitionKey.CompareTo(end) <= 0
- select entity;
-
- // Filter on "user" transactions after fetching the metrics from Azure Table storage.
- // (StartsWith is not supported using LINQ with Azure Table storage.)
- var results = query.ToList().Where(m => m.RowKey.StartsWith("user"));
- var resultString = results.Aggregate(new StringBuilder(), (builder, metrics) => builder.AppendLine(MetricsString(metrics, opContext))).ToString();
- Console.WriteLine(resultString);
- }
-}
-
-private static string MetricsString(MetricsEntity entity, OperationContext opContext)
-{
- var entityProperties = entity.WriteEntity(opContext);
- var entityString =
- string.Format("Time: {0}, ", entity.Time) +
- string.Format("AccessType: {0}, ", entity.AccessType) +
- string.Format("TransactionType: {0}, ", entity.TransactionType) +
- string.Join(",", entityProperties.Select(e => new KeyValuePair<string, string>(e.Key.ToString(), e.Value.PropertyAsObject.ToString())));
- return entityString;
-}
-```
-
-<a id="add-metrics-to-dashboard"></a>
-
-## Add metrics charts to the portal dashboard
-
-You can add Azure Storage metrics charts for any of your storage accounts to your portal dashboard.
-
-1. Select **Edit dashboard** while viewing your dashboard in the [Azure portal](https://portal.azure.com).
-1. In the **Tile Gallery**, select **Find tiles by** > **Type**.
-1. Select **Type** > **Storage accounts**.
-1. In **Resources**, select the storage account whose metrics you wish to add to the dashboard.
-1. Select **Categories** > **Monitoring**.
-1. Drag-and-drop the chart tile onto your dashboard for the metric you'd like displayed. Repeat for all metrics you'd like displayed on the dashboard. In the following image, the "Blobs - Total requests" chart is highlighted as an example, but all the charts are available for placement on your dashboard.
-
- ![Tile gallery in Azure portal](./media/manage-storage-analytics-metrics/storage-customize-dashboard.png)
-1. Select **Done customizing** near the top of the dashboard when you're done adding charts.
-
-Once you've added charts to your dashboard, you can further customize them as described in Customize metrics charts.
-
-## Next steps
-- To learn more about Storage Analytics, see [Storage Analytics](storage-analytics.md).
-- [Configure Storage Analytics logs](manage-storage-analytics-logs.md).
-- To learn more about the metrics schema, see [Storage Analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema).
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
Previously updated : 01/04/2024 Last updated : 01/11/2024
# Change how a storage account is replicated
-Azure Storage always stores multiple copies of your data to protect it in the face of both planned and unplanned events. These events include transient hardware failures, network or power outages, and massive natural disasters. Data redundancy ensures that your storage account meets the [Service-Level Agreement (SLA) for Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/), even in the face of failures.
+Azure Storage always stores multiple copies of your data so that it's protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets the [Service-Level Agreement (SLA) for Azure Storage](https://azure.microsoft.com/support/legal/sla/storage/) even in the face of failures.
-This article describes the process of changing replication setting(s) for an existing storage account.
+In this article, you'll learn how to change the replication setting(s) for an existing storage account.
## Options for changing the replication type
Four aspects of the redundancy configuration of a storage account determine how
- **Geo-redundancy** - replication within a single "local" region or between a primary and a secondary region (LRS vs. GRS)
- **Read access (RA)** - read access to the secondary region when geo-redundancy is used (GRS vs. RA-GRS)
-For a detailed overview of all of the redundancy options, see [Azure Storage redundancy](storage-redundancy.md).
+For an overview of all of the redundancy options, see [Azure Storage redundancy](storage-redundancy.md).
-You can change redundancy configurations when necessary, though some configurations are subject to [limitations](#limitations-for-changing-replication-types) and [downtime requirements](#downtime-requirements). To ensure that the limitations and requirements don't affect your timeframe and uptime requirements, always review these limitations and requirements before making any changes.
+You can change how your storage account is replicated from any redundancy configuration to any other, with some limitations. Before making any changes, review those [limitations](#limitations-for-changing-replication-types) and the [downtime requirements](#downtime-requirements) to make sure your plan fits your time frame and uptime requirements.
There are three ways to change the replication settings:

-- [Add or remove geo-replication or read access](#change-the-replication-setting-using-the-portal-powershell-or-the-cli) to the secondary region.
-- [Add or remove zone-redundancy](#perform-a-conversion) by performing a conversion.
-- [Perform a manual migration](#manual-migration) in scenarios where the first two options aren't supported, or to ensure the change is completed within a specific timeframe.
+- [Use the Azure portal, Azure PowerShell, or the Azure CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli) to add or remove geo-replication or read access to the secondary region.
+- [Perform a conversion](#perform-a-conversion) to add or remove zone-redundancy.
+- [Perform a manual migration](#manual-migration) in scenarios where the first two options aren't supported, or to ensure the change completes within a specific time.
-Geo-redundancy and read-access can be changed at the same time. However, any change that also involves zone-redundancy requires a conversion and must be performed separately using a two-step process. These two steps can be performed in any order.
+If you want to change both zone-redundancy and either geo-replication or read-access, a two-step process is required. Geo-redundancy and read-access can be changed at the same time, but the zone-redundancy conversion must be performed separately. These steps can be performed in any order.
### Replication change table
-The following table provides an overview of how to switch between replication types.
+The following table provides an overview of how to switch from each type of replication to another.
> [!NOTE]
-> Manual migration is an option for any scenario in which you want to change the replication setting within the [limitations for changing replication types](#limitations-for-changing-replication-types). The manual migration option is excluded from the following table for simplification.
+> Manual migration is an option for any scenario in which you want to change the replication setting within the [limitations for changing replication types](#limitations-for-changing-replication-types). The manual migration option has been omitted from the provided table to simplify it.
| Switching | …to LRS | …to GRS/RA-GRS <sup>6</sup> | …to ZRS | …to GZRS/RA-GZRS <sup>2,6</sup> |
|--|--|--|--|--|
The following table provides an overview of how to switch between replication ty
<sup>1</sup> [Adding geo-redundancy incurs a one-time egress charge](#costs-associated-with-changing-how-data-is-replicated).<br /> <sup>2</sup> If your storage account contains blobs in the archive tier, review the [access tier limitations](#access-tier) before changing the redundancy type to geo- or zone-redundant.<br />
-<sup>3</sup> The type of conversion supported depends on the storage account type. For more information, see the [storage account table](#storage-account-type).<br />
-<sup>4</sup> Conversion to ZRS or GZRS for an LRS account resulting from a failover isn't supported. For more information, see [Failover and failback](#failover-and-failback).<br />
-<sup>5</sup> Converting from LRS to ZRS [isn't supported if the NFSv3 protocol support is enabled for Azure Blob Storage or if the storage account contains Azure Files NFSv4.1 shares](#protocol-support). <br />
-<sup>6</sup> Even though enabling geo-redundancy appears to occur instantaneously, failover to the secondary region can't be initiated until data synchronization between the two regions is complete.<br />
+<sup>3</sup> The type of conversion supported depends on the storage account type. See [the storage account table](#storage-account-type) for more details.<br />
+<sup>4</sup> Conversion to ZRS or GZRS for an LRS account resulting from a failover isn't supported. For more details, see [Failover and failback](#failover-and-failback).<br />
+<sup>5</sup> Converting from LRS to ZRS is [not supported if the NFSv3 protocol support is enabled for Azure Blob Storage or if the storage account contains Azure Files NFSv4.1 shares](#protocol-support). <br />
+<sup>6</sup> Even though enabling geo-redundancy appears to occur instantaneously, failover to the secondary region cannot be initiated until data synchronization between the two regions has completed.<br />
## Change the replication setting
Depending on your scenario from the [replication change table](#replication-chan
### Change the replication setting using the portal, PowerShell, or the CLI
-In most cases you can use the Azure portal, PowerShell, or the Azure CLI to change the geo-redundant or read access (RA) replication setting for a storage account.
+In most cases you can use the Azure portal, PowerShell, or the Azure CLI to change the geo-redundant or read access (RA) replication setting for a storage account. If you are initiating a zone redundancy conversion, you can change the setting from within the Azure portal, but not from PowerShell or the Azure CLI.
Changing how your storage account is replicated in the Azure portal doesn't result in downtime for your applications, including changes that require a conversion.
To change the redundancy option for your storage account in the Azure portal, fo
1. Update the **Redundancy** setting.
1. Select **Save**.
- :::image type="content" source="media/redundancy-migration/change-replication-option-sml.png" alt-text="Screenshot showing how to change replication option in portal." lightbox="media/redundancy-migration/change-replication-option.png":::
+ :::image type="content" source="media/redundancy-migration/change-replication-option.png" alt-text="Screenshot showing how to change replication option in portal." lightbox="media/redundancy-migration/change-replication-option.png":::
# [PowerShell](#tab/powershell)
-You can use Azure PowerShell to change the redundancy options for your storage account.
-
-To change between locally redundant and geo-redundant storage, call the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) cmdlet and specify the `-SkuName` parameter.
+To change the redundancy option for your storage account with PowerShell, call the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) command and specify the `-SkuName` parameter:
```powershell Set-AzStorageAccount -ResourceGroupName <resource_group> `
Set-AzStorageAccount -ResourceGroupName <resource_group> `
    -SkuName <sku>
```
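+
+For example, a minimal sketch (the resource group and account names are illustrative) that switches an account to read-access geo-redundant storage:
+
+```powershell
+# Switch the account to read-access geo-redundant storage (RA-GRS).
+Set-AzStorageAccount -ResourceGroupName "storage-rg" `
+    -Name "mystorageaccount" `
+    -SkuName "Standard_RAGRS"
+```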
-You can also add or remove zone redundancy to your storage account. To change between locally redundant and zone-redundant storage with PowerShell, call the [Start-AzStorageAccountMigration](/powershell/module/az.storage/start-azstorageaccountmigration) command and specify the `-TargetSku` parameter:
-
-```powershell
-Start-AzStorageAccountMigration
- -AccountName <String>
- -ResourceGroupName <String>
- -TargetSku <String>
- -AsJob
-```
-
-To track the current migration status of the conversion initiated on your storage account, call the [Get-AzStorageAccountMigration](/powershell/module/az.storage/get-azstorageaccountmigration) cmdlet:
-
-```powershell
-Get-AzStorageAccountMigration
- -AccountName <String>
- -ResourceGroupName <String>
-```
- # [Azure CLI](#tab/azure-cli)
-You can use the Azure CLI to change the redundancy options for your storage account.
-
-To change between locally redundant and geo-redundant storage, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command and specify the `--sku` parameter:
+To change the redundancy option for your storage account with Azure CLI, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command and specify the `--sku` parameter:
```azurecli-interactive az storage account update \
- --name <storage-account> \
+    --name <storage-account> \
    --resource-group <resource_group> \
    --sku <sku>
```
-You can also add or remove zone redundancy to your storage account. To change between locally redundant and zone-redundant storage with Azure CLI, call the [az storage account migration start](/cli/azure/storage/account/migration#az-storage-account-migration-start) command and specify the `--sku` parameter:
-
-```azurecli-interactive
-az storage account migration start \
- --account-name <string> \
- -g <string> \
- --sku <string> \
- --no-wait
-```
-
-To track the current migration status of the conversion initiated on your storage account with Azure CLI, use the [az storage account migration show](/cli/azure/storage/account/migration#az-storage-account-migration-show) command:
-
-```azurecli-interactive
-az storage account migration show
- --account-name <string>
- -g <string>
- -n "default"
-```
- ### Perform a conversion A redundancy "conversion" is the process of changing the zone-redundancy aspect of a storage account.
-During a conversion, [there's no data loss or application downtime required](#downtime-requirements).
+During a conversion, [there is no data loss or application downtime required](#downtime-requirements).
There are two ways to initiate a conversion:
Customer-initiated conversion adds a new option for customers to start a convers
> > There is no SLA for completion of a customer-initiated conversion. >
-> For more information about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).
+> For more details about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).
Customer-initiated conversion is only available from the Azure portal, not from PowerShell or the Azure CLI. To initiate the conversion, perform the same steps used for changing other replication settings in the Azure portal as described in [Change the replication setting using the portal, PowerShell, or the CLI](#change-the-replication-setting-using-the-portal-powershell-or-the-cli).
-Customer-initiated conversion isn't available in all regions. For more information, see the [region limitations](#region) article.
+Customer-initiated conversion is not available in all regions. See the [region limitations](#region) for more details.
##### Monitoring customer-initiated conversion progress The status of your customer-initiated conversion is displayed on the **Redundancy** page of the storage account:
-As the conversion request is evaluated and processed, the status should progress through the list shown in the following table:
+As the conversion request is evaluated and processed, the status should progress through the list shown in the table below:
| Status | Explanation |
|--|--|
| Submitted for conversion | The conversion request was successfully submitted for processing. |
-| In Progress<sup>1</sup> | The actual conversion is in progress. |
-| Completed<br>**- or -**</br>Failed<sup>2</sup> | The conversion is completed successfully.<br>**- or -**</br>The conversion failed. |
+| In Progress<sup>1</sup> | The actual conversion has begun. |
+| Completed<br>**- or -**</br>Failed<sup>2</sup> | The conversion has successfully completed.<br>**- or -**</br>The conversion failed. |
-<sup>1</sup> Once initiated, the conversion could take up to 72 hours to actually **begin**. If the conversion doesn't enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. For more information about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).<br />
+<sup>1</sup> Once initiated, the conversion could take up to 72 hours to actually **begin**. If the conversion does not enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. For more details about the timing of a customer-initiated conversion, see [Timing and frequency](#timing-and-frequency).<br />
<sup>2</sup> If the conversion fails, submit a support request to Microsoft to determine the reason for the failure.<br /> > [!NOTE]
Follow these steps to request a conversion from Microsoft:
    - **Problem type**: Choose **Data Migration**.
    - **Problem subtype**: Choose **Migrate to ZRS, GZRS, or RA-GZRS**.
- :::image type="content" source="media/redundancy-migration/request-live-migration-problem-desc-portal-sml.png" alt-text="Screenshot showing how to request a conversion - Problem description tab." lightbox="media/redundancy-migration/request-live-migration-problem-desc-portal.png":::
+ :::image type="content" source="media/redundancy-migration/request-live-migration-problem-desc-portal.png" alt-text="Screenshot showing how to request a conversion - Problem description tab." lightbox="media/redundancy-migration/request-live-migration-problem-desc-portal.png":::
1. Select **Next**. The **Recommended solution** tab might be displayed briefly before it switches to the **Solutions** page. On the **Solutions** page, you can check the eligibility of your storage account(s) for conversion:

    - **Target replication type**: (choose the desired option from the drop-down)
    - **Storage accounts from**: (enter a single storage account name or a list of accounts separated by semicolons)
    - Select **Submit**.
- :::image type="content" source="media/redundancy-migration/request-live-migration-solutions-portal-sml.png" alt-text="Screenshot showing how to check the eligibility of your storage account(s) for conversion - Solutions page." lightbox="media/redundancy-migration/request-live-migration-solutions-portal.png":::
+ :::image type="content" source="media/redundancy-migration/request-live-migration-solutions-portal.png" alt-text="Screenshot showing how to check the eligibility of your storage account(s) for conversion - Solutions page." lightbox="media/redundancy-migration/request-live-migration-solutions-portal.png":::
-1. Take the appropriate action if the results indicate your storage account isn't eligible for conversion. Otherwise, select **Return to support request**.
+1. Take the appropriate action if the results indicate your storage account is not eligible for conversion. If it is eligible, select **Return to support request**.
1. Select **Next**. If you have more than one storage account to migrate, on the **Details** tab, specify the name for each account, separated by a semicolon.
- :::image type="content" source="media/redundancy-migration/request-live-migration-details-portal-sml.png" alt-text="Screenshot showing how to request a conversion - Additional details tab." lightbox="media/redundancy-migration/request-live-migration-details-portal.png":::
+ :::image type="content" source="media/redundancy-migration/request-live-migration-details-portal.png" alt-text="Screenshot showing how to request a conversion - Additional details tab." lightbox="media/redundancy-migration/request-live-migration-details-portal.png":::
-1. Provide the required information on the **Additional details** tab, then select **Review + create** to review and submit your support ticket. An Azure support agent reviews your case and contacts you to provide assistance.
+1. Fill out the additional required information on the **Additional details** tab, then select **Review + create** to review and submit your support ticket. A support person will contact you to provide any assistance you may need.
### Manual migration
-A manual migration provides more flexibility and control than a conversion. You can use this option if you need your data moved by a certain date, or if conversion [isn't supported for your scenario](#limitations-for-changing-replication-types). Manual migration is also useful when moving a storage account to another region. For more detail, see [Move an Azure Storage account to another region](storage-account-move.md).
+A manual migration provides more flexibility and control than a conversion. You can use this option if you need your data moved by a certain date, or if conversion is [not supported for your scenario](#limitations-for-changing-replication-types). Manual migration is also useful when moving a storage account to another region. See [Move an Azure Storage account to another region](storage-account-move.md) for more details.
You must perform a manual migration if:

- You want to migrate your storage account to a different region.
- Your storage account is a block blob account.
-- Your storage account includes data in the archive tier and rehydrating the data isn't desired.
+- Your storage account includes data in the archive tier and rehydrating the data is not desired.
> [!IMPORTANT] > A manual migration can result in application downtime. If your application requires high availability, Microsoft also provides a [conversion](#perform-a-conversion) option. A conversion is an in-place migration with no downtime.
Limitations apply to some replication change scenarios depending on:
### Region
-Make sure the region where your storage account is located supports all of the desired replication settings. For example, if you're converting your account to zone-redundant (ZRS, GZRS, or RA-GZRS), make sure your storage account is in a region that supports it. See the lists of supported regions for [Zone-redundant storage](storage-redundancy.md#zone-redundant-storage) and [Geo-zone-redundant storage](storage-redundancy.md#geo-zone-redundant-storage).
+Make sure the region where your storage account is located supports all of the desired replication settings. For example, if you are converting your account to zone-redundant (ZRS, GZRS, or RA-GZRS), make sure your storage account is in a region that supports it. See the lists of supported regions for [Zone-redundant storage](storage-redundancy.md#zone-redundant-storage) and [Geo-zone-redundant storage](storage-redundancy.md#geo-zone-redundant-storage).
> [!IMPORTANT] > [Customer-initiated conversion](#customer-initiated-conversion) from LRS to ZRS is available in all public regions that support ZRS except for the following: > > - (Europe) Italy North
-> - (Europe) UK South
> - (Europe) Poland Central > - (Europe) West Europe
+> - (Europe) UK South
> - (Middle East) Israel Central > - (North America) Canada Central > - (North America) East US
Make sure the region where your storage account is located supports all of the d
### Feature conflicts
-Some storage account features aren't compatible with other features or operations. For example, the ability to fail over to the secondary region is the key feature of geo-redundancy, but other features aren't compatible with failover. For more information about features and services not supported with failover, see [Unsupported features and services](storage-disaster-recovery-guidance.md#unsupported-features-and-services). The conversion of an account to GRS, GZRS, or RA-GZRS might be blocked if a conflicting feature is enabled, or it might be necessary to disable the feature later before initiating a failover.
+Some storage account features are not compatible with other features or operations. For example, the ability to failover to the secondary region is the key feature of geo-redundancy, but other features are not compatible with failover. For more information about features and services not supported with failover, see [Unsupported features and services](storage-disaster-recovery-guidance.md#unsupported-features-and-services). Converting an account to GRS, GZRS, or RA-GZRS might be blocked if a conflicting feature is enabled, or it might be necessary to disable the feature later before initiating a failover.
### Storage account type When planning to change your replication settings, consider the following limitations related to the storage account type.
-Some storage account types only support certain redundancy configurations, which affect whether they can be converted or migrated and, if so, how. For more information on Azure storage account types and the supported redundancy options, see [the storage account overview](storage-account-overview.md#types-of-storage-accounts).
+Some storage account types only support certain redundancy configurations, which affects whether they can be converted or migrated and, if so, how. For more details on Azure storage account types and the supported redundancy options, see [the storage account overview](storage-account-overview.md#types-of-storage-accounts).
The following table provides an overview of redundancy options available for storage account types and whether conversion and manual migration are supported:
The following table provides an overview of redundancy options available for sto
| Standard general purpose v1 | &#x2705; | | <sup>3</sup> | | &#x2705; |
| ZRS Classic<sup>4</sup><br /><sub>(available in standard general purpose v1 accounts)</sub> | &#x2705; | | | |
-<sup>1</sup> Conversion for premium file shares is only available by [opening a support request](#support-requested-conversion); [Customer-initiated conversion](#customer-initiated-conversion) isn't currently supported.<br />
-<sup>2</sup> Managed disks are available for LRS and ZRS, though ZRS disks have some [limitations](../../virtual-machines/disks-redundancy.md#limitations). If an LRS disk is regional (no zone specified), it can be converted by [changing the SKU](../../virtual-machines/disks-convert-types.md). If an LRS disk is zonal, then it can only be manually migrated by following the process in [Migrate your managed disks](../../reliability/migrate-vm.md#migrate-your-managed-disks). You can store snapshots and images for standard SSD managed disks on standard HDD storage and [choose between LRS and ZRS options](https://azure.microsoft.com/pricing/details/managed-disks/). For information about integration with availability sets, see [Introduction to Azure managed disks](../../virtual-machines/managed-disks-overview.md#integration-with-availability-sets).<br />
-<sup>3</sup> If your storage account is v1, you need to upgrade it to v2 before performing a conversion. To learn how to upgrade your v1 account, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md).<br />
-<sup>4</sup> ZRS Classic storage accounts are deprecated. For information about converting ZRS Classic accounts, see [Converting ZRS Classic accounts](#converting-zrs-classic-accounts).<br />
+<sup>1</sup> Conversion for premium file shares is only available by [opening a support request](#support-requested-conversion); [Customer-initiated conversion](#customer-initiated-conversion) is not currently supported.<br />
+<sup>2</sup> Managed disks are available for LRS and ZRS, though ZRS disks have some [limitations](../../virtual-machines/disks-redundancy.md#limitations). If a LRS disk is regional (no zone specified) it may be converted by [changing the SKU](../../virtual-machines/disks-convert-types.md). If a LRS disk is zonal, then it can only be manually migrated by following the process in [Migrate your managed disks](../../reliability/migrate-vm.md#migrate-your-managed-disks). You can store snapshots and images for standard SSD managed disks on standard HDD storage and [choose between LRS and ZRS options](https://azure.microsoft.com/pricing/details/managed-disks/). For information about integration with availability sets, see [Introduction to Azure managed disks](../../virtual-machines/managed-disks-overview.md#integration-with-availability-sets).<br />
+<sup>3</sup> If your storage account is v1, you'll need to upgrade it to v2 before performing a conversion. To learn how to upgrade your v1 account, see [Upgrade to a general-purpose v2 storage account](storage-account-upgrade.md).<br />
+<sup>4</sup> ZRS Classic storage accounts have been deprecated. For information about converting ZRS Classic accounts, see [Converting ZRS Classic accounts](#converting-zrs-classic-accounts).<br />
#### Converting ZRS Classic accounts
The following table provides an overview of redundancy options available for sto
ZRS Classic was available only for **block blobs** in general-purpose V1 (GPv1) storage accounts. For more information about storage accounts, see [Azure storage account overview](storage-account-overview.md).
-ZRS Classic accounts asynchronously replicated data across data centers within one to two regions. Replicated data wasn't available unless Microsoft initiated a failover to the secondary. A ZRS Classic account can't be converted to or from LRS, GRS, or RA-GRS. ZRS Classic accounts also don't support metrics or logging.
+ZRS Classic accounts asynchronously replicated data across data centers within one to two regions. Replicated data was not available unless Microsoft initiated a failover to the secondary. A ZRS Classic account can't be converted to or from LRS, GRS, or RA-GRS. ZRS Classic accounts also don't support metrics or logging.
To change ZRS Classic to another replication type, use one of the following methods:
az storage account update -g <resource_group> -n <storage_account> --set kind=St
To manually migrate your ZRS Classic account data to another type of replication, follow the steps to [perform a manual migration](#manual-migration).
-If you want to migrate your data into a zone-redundant storage account located in a region different from the source account, you must perform a manual migration. For more information, see [Move an Azure Storage account to another region](storage-account-move.md).
+If you want to migrate your data into a zone-redundant storage account located in a region different from the source account, you must perform a manual migration. For more details, see [Move an Azure Storage account to another region](storage-account-move.md).
### Access tier
-Make sure the desired redundancy option supports the access tiers currently used in the storage account. For example, ZRS, GZRS and RA-GZRS storage accounts don't support the archive tier. For more information, see [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md). To convert an LRS, GRS or RA-GRS account to one that supports zone-redundancy, first move the archived blobs to a storage account that supports blobs in the archive tier. Then convert the source account to ZRS, GZRS and RA-GZRS.
+Make sure the desired redundancy option supports the access tiers currently used in the storage account. For example, ZRS, GZRS and RA-GZRS storage accounts do not support the archive tier. See [Hot, Cool, and Archive access tiers for blob data](../blobs/access-tiers-overview.md) for more details. To convert an LRS, GRS or RA-GRS account to one that supports zone-redundancy, first move the archived blobs to a storage account that supports blobs in the archive tier. Then convert the source account to ZRS, GZRS and RA-GZRS.
-An LRS storage account containing blobs in the archive tier can be switched to GRS or RA-GRS after rehydrating all archived blobs to the Hot or Cool tier. You can also perform a [manual migration](#manual-migration).
+To switch an LRS storage account that contains blobs in the archive tier to GRS or RA-GRS, you must first rehydrate all archived blobs to the Hot or Cool tier or perform a [manual migration](#manual-migration).
> [!TIP] > Microsoft recommends that you avoid changing the redundancy configuration for a storage account that contains archived blobs if at all possible, because rehydration operations can be costly and time-consuming. But if you must change it, a [manual migration](#manual-migration) can save you the expense of rehydration. ### Protocol support
-You can't convert storage accounts to zone-redundancy (ZRS, GZRS or RA-GZRS) if either of the following cases are true:
+Converting your storage account to zone-redundancy (ZRS, GZRS or RA-GZRS) is not supported if either of the following is true:
- NFSv3 protocol support is enabled for Azure Blob Storage - The storage account contains Azure Files NFSv4.1 shares ### Failover and failback
-After an account failover to the secondary region, it's possible to initiate a failback from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). [Initiate the failover](storage-initiate-account-failover.md#initiate-the-failover).
+After an account failover to the secondary region, it's possible to initiate a failback from the new primary back to the new secondary with PowerShell or Azure CLI (version 2.30.0 or later). For more information, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
-If you performed a customer-managed account failover to recover from an outage for your GRS or RA-GRS account, the account becomes locally redundant (LRS) in the new primary region after the failover. Conversion to ZRS or GZRS for an LRS account resulting from a failover isn't supported, even for so-called failback operations. For example, if you perform an account failover from RA-GRS to LRS in the secondary region, and then configure it again as RA-GRS, it remains LRS in the new secondary region (the original primary). If you then perform another account failover to failback to the original primary region, it remains LRS again in the original primary. In this case, you can't perform a conversion to ZRS, GZRS or RA-GZRS in the primary region. Instead, perform a manual migration to add zone-redundancy.
+If you performed an account failover for your GRS or RA-GRS account, the account is locally redundant (LRS) in the new primary region after the failover. Conversion to ZRS or GZRS for an LRS account resulting from a failover is not supported. This is true even in the case of so-called failback operations. For example, if you perform an account failover from RA-GRS to LRS in the secondary region, and then configure it again as RA-GRS, it will be LRS in the new secondary region (the original primary). If you then perform another account failover to failback to the original primary region, it will be LRS again in the original primary. In this case, you can't perform a conversion to ZRS, GZRS or RA-GZRS in the primary region. Instead, you'll need to perform a manual migration to add zone-redundancy.
## Downtime requirements
-During a [conversion](#perform-a-conversion), you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the migration process and no data is lost during a conversion. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
+During a [conversion](#perform-a-conversion), you can access data in your storage account with no loss of durability or availability. [The Azure Storage SLA](https://azure.microsoft.com/support/legal/sla/storage/) is maintained during the migration process and there is no data loss associated with a conversion. Service endpoints, access keys, shared access signatures, and other account options remain unchanged after the migration.
If you choose to perform a manual migration, downtime is required but you have more control over the timing of the migration process. ## Timing and frequency
-If you initiate a zone-redundancy [conversion](#customer-initiated-conversion) from the Azure portal, the conversion process could take up to 72 hours to actually **begin**. It could take longer to start if you [request a conversion by opening a support request](#support-requested-conversion). If a customer-initiated conversion doesn't enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. To monitor the progress of a customer-initiated conversion, see [Monitoring customer-initiated conversion progress](#monitoring-customer-initiated-conversion-progress).
+If you initiate a zone-redundancy [conversion](#customer-initiated-conversion) from the Azure portal, the conversion process could take up to 72 hours to actually **begin**. It could take longer to start if you [request a conversion by opening a support request](#support-requested-conversion). If a customer-initiated conversion does not enter the "In Progress" status within 96 hours of initiating the request, submit a support request to Microsoft to determine why. To monitor the progress of a customer-initiated conversion, see [Monitoring customer-initiated conversion progress](#monitoring-customer-initiated-conversion-progress).
> [!IMPORTANT] > There is no SLA for completion of a conversion. If you need more control over when a conversion begins and finishes, consider a [Manual migration](#manual-migration). Generally, the more data you have in your account, the longer it takes to replicate that data to other zones or regions.
After a zone-redundancy conversion, you must wait at least 72 hours before chang
## Costs associated with changing how data is replicated
-Azure Storage redundancy offerings include LRS, ZRS, GRS, RA-GRS, GZRS, and RA-GZRS, ordered by price where LRS is the least expensive and RA-GZRS is the most expensive.
+Ordering from the least to the most expensive, Azure Storage redundancy offerings include LRS, ZRS, GRS, RA-GRS, GZRS, and RA-GZRS.
-The costs associated with changing how data is replicated in your storage account depend on which [aspects of your redundancy configuration](#options-for-changing-the-replication-type) you change. A combination of data storage and egress bandwidth pricing determines the cost of making a change. For details on pricing, see [Azure Storage Pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).
+The costs associated with changing how data is replicated in your storage account depend on which [aspects of your redundancy configuration](#options-for-changing-the-replication-type) you change. A combination of data storage and egress bandwidth pricing determine the cost of making a change. For details on pricing, see [Azure Storage Pricing page](https://azure.microsoft.com/pricing/details/storage/blobs/).
-If you add zone-redundancy in the primary region, there's no initial cost associated with making that conversion, but the ongoing data storage cost is higher due to the increased replication and storage space required.
+If you add zone-redundancy in the primary region, there is no initial cost associated with making that conversion, but the ongoing data storage cost will be higher due to the additional replication and storage space required.
-Geo-redundancy incurs an egress bandwidth charge at the time of the change because your entire storage account is being replicated to the secondary region. All subsequent writes to the primary region also incur egress bandwidth charges to replicate the write to the secondary region.
+If you add geo-redundancy, you will incur an egress bandwidth charge at the time of the change because your entire storage account is being replicated to the secondary region. All subsequent writes to the primary region also incur egress bandwidth charges to replicate the write to the secondary region.
-If you remove geo-redundancy (change from GRS to LRS), there's no cost for making the change, but your replicated data is deleted from the secondary location.
+If you remove geo-redundancy (change from GRS to LRS), there is no cost for making the change, but your replicated data is deleted from the secondary location.
> [!IMPORTANT] > If you remove read access to the secondary region (RA) (change from RA-GRS to GRS or LRS), that account is billed as RA-GRS for an additional 30 days beyond the date that it was converted.
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
Previously updated : 01/09/2023 Last updated : 08/17/2023
To estimate the cost of storing and accessing blob data in a general-purpose v2
To decide on the best access tier for your needs, it can be helpful to determine your blob data capacity, and how that data is being used. This can be best done by looking at the monitoring metrics for your account.
-To estimate the cost of storing and accessing blob data in a general-purpose v2 storage account in a particular tier, evaluate your existing usage pattern or approximate your expected usage pattern. In general, you want to know:
+### Monitoring existing storage accounts
-- Your Blob storage consumption, in gigabytes, including:
- - How much data is being stored in the storage account?
- - How does the data volume change on a monthly basis; does new data constantly replace old data?
+To monitor your existing storage accounts and gather this data, you can make use of Azure Storage Analytics, which performs logging and provides metrics data for a storage account. Storage Analytics can store metrics that include aggregated transaction statistics and capacity data about requests to the storage service for GPv1, GPv2, and Blob storage account types. This data is stored in well-known tables in the same storage account.
-- The primary access pattern for your Blob storage data, including:
- - How much data is being read from and written to the storage account?
- - How many read operations versus write operations occur on the data in the storage account?
+For more information, see [About Storage Analytics Metrics](../blobs/monitor-blob-storage.md) and [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema).
-To decide on the best access tier for your needs, it can be helpful to determine your blob data capacity, and how that data is being used. This can be best done by looking at the monitoring metrics for your account.
+> [!NOTE]
+> Blob storage accounts expose the Table service endpoint only for storing and accessing the metrics data for that account.
-### Monitoring existing storage accounts
+To monitor the storage consumption for Blob storage, you need to enable the capacity metrics.
+With this enabled, capacity data is recorded daily for a storage account's Blob service and recorded as a table entry that is written to the *$MetricsCapacityBlob* table within the same storage account.
-To monitor your existing storage accounts and gather this data, you can make use of storage metrics in Azure Monitor. Azure Monitor stores metrics that include aggregated transaction statistics and capacity data about requests to the storage service. Azure Storage sends metric data to the Azure Monitor back end. Azure Monitor provides a unified monitoring experience that includes data from the Azure portal as well as data that is ingested. For more information, see any of these articles:
+To monitor data access patterns for Blob storage, you need to enable the hourly, per-API transaction metrics. With hourly transaction metrics enabled, per-API transactions are aggregated every hour and recorded as a table entry that is written to the *$MetricsHourPrimaryTransactionsBlob* table within the same storage account. The *$MetricsHourSecondaryTransactionsBlob* table records the transactions to the secondary endpoint when using RA-GRS storage accounts.
-- [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
-- [Monitoring Azure Files](../files/storage-files-monitoring.md)
-- [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
-- [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
+> [!NOTE]
+> If you have a general-purpose storage account in which you have stored page blobs and virtual machine disks, or queues, files, or tables, alongside block and append blob data, this estimation process isn't applicable. The capacity data doesn't differentiate block blobs from other types, and doesn't give capacity data for other data types. If you use these types, an alternative methodology is to look at the quantities on your most recent bill.
-In order to estimate the data access costs for Blob storage accounts, you need to break down the transactions into two groups.
+To get a good approximation of your data consumption and access pattern, we recommend you choose a retention period for the metrics that is representative of your regular usage and extrapolate. One option is to retain the metrics data for seven days and collect the data every week, for analysis at the end of the month. Another option is to retain the metrics data for the last 30 days and collect and analyze the data at the end of the 30-day period.
+
+For details on enabling, collecting, and viewing metrics data, see [Storage analytics metrics](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json).
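For illustration only (this sketch isn't part of the linked article), the classic metrics settings can also be adjusted from the Azure CLI. The account name and key below are placeholders, and the `az storage metrics update` command and its flags are assumed to be available in your CLI version:

```console
# Illustrative sketch: enable hourly, per-API transaction metrics for the Blob
# service with a 7-day retention policy (placeholder account name and key).
az storage metrics update \
    --services b \
    --hour true \
    --api true \
    --retention 7 \
    --account-name mystorageaccount \
    --account-key "<account-key>"
```

Once metrics are enabled, capacity entries accumulate daily and transaction entries hourly, so a 7-day or 30-day retention window lines up with the collection cadence described above.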
-- The amount of data retrieved from the storage account can be estimated by looking at the sum of the *'Ingress'* metric for primarily the *'GetBlob'* and *'CopyBlob'* operations.
+> [!NOTE]
+> Storing, accessing, and downloading analytics data is also charged just like regular user data.
+
+### Utilizing usage metrics to estimate costs
+
+#### Capacity costs
+
+The latest entry in the capacity metrics table *$MetricsCapacityBlob* with the row key *'data'* shows the storage capacity consumed by user data. The latest entry in the capacity metrics table *$MetricsCapacityBlob* with the row key *'analytics'* shows the storage capacity consumed by the analytics logs.
+
+This total capacity consumed by both user data and analytics logs (if enabled) can then be used to estimate the cost of storing data in the storage account. The same method can also be used for estimating storage costs in GPv1 storage accounts.
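As a rough, non-authoritative sketch of how the *$MetricsCapacityBlob* table can be read, the entries with row key *'data'* can be pulled with the Azure CLI table commands; the account name and key are placeholders, and the `az storage entity query` command is assumed to be available in your CLI version:

```console
# Illustrative sketch: list the daily user-data capacity entries; the most recent
# PartitionKey (a date/time stamp) carries the latest Capacity value in bytes.
az storage entity query \
    --table-name '$MetricsCapacityBlob' \
    --filter "RowKey eq 'data'" \
    --select PartitionKey Capacity ContainerCount ObjectCount \
    --account-name mystorageaccount \
    --account-key "<account-key>"
```

Dividing the latest *Capacity* value by 1024^3 gives the consumption in GB to multiply by the per-GB rate for the chosen access tier.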
+
+#### Transaction costs
+
+The sum of *'TotalBillableRequests'*, across all entries for an API in the transaction metrics table indicates the total number of transactions for that particular API. *For example*, the total number of *'GetBlob'* transactions in a given period can be calculated by the sum of total billable requests for all entries with the row key *'user;GetBlob'*.
+
+In order to estimate transaction costs for Blob storage accounts, you need to break down the transactions into three groups since they're priced differently.
+
+- Write transactions such as *'PutBlob'*, *'PutBlock'*, *'PutBlockList'*, *'AppendBlock'*, *'ListBlobs'*, *'ListContainers'*, *'CreateContainer'*, *'SnapshotBlob'*, and *'CopyBlob'*.
+- Delete transactions such as *'DeleteBlob'* and *'DeleteContainer'*.
+- All other transactions.
+
+In order to estimate transaction costs for GPv1 storage accounts, you need to aggregate all transactions irrespective of the operation/API.
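The per-API aggregation described above can be sampled with a query like the following sketch (placeholder account name and key; the exact output shape depends on your CLI version), after which the *TotalBillableRequests* column is summed per pricing group:

```console
# Illustrative sketch: pull the hourly 'GetBlob' entries so TotalBillableRequests
# can be summed for the billing period being estimated.
az storage entity query \
    --table-name '$MetricsHourPrimaryTransactionsBlob' \
    --filter "RowKey eq 'user;GetBlob'" \
    --select PartitionKey TotalBillableRequests \
    --account-name mystorageaccount \
    --account-key "<account-key>"
```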
+
+#### Data access and geo-replication data transfer costs
+
+While storage analytics doesn't provide the amount of data read from and written to a storage account, it can be roughly estimated by looking at the transaction metrics table. The sum of *'TotalIngress'* across all entries for an API in the transaction metrics table indicates the total amount of ingress data in bytes for that particular API. Similarly the sum of *'TotalEgress'* indicates the total amount of egress data, in bytes.
+
+In order to estimate the data access costs for Blob storage accounts, you need to break down the transactions into two groups.
-- The amount of data written to the storage account can be estimated by looking at the sum of *'Egress'* metrics for primarily the *'PutBlob'*, *'PutBlock'*, *'CopyBlob'* and *'AppendBlock'* operations.
+- The amount of data retrieved from the storage account can be estimated by looking at the sum of *'TotalEgress'* for primarily the *'GetBlob'* and *'CopyBlob'* operations.
-To determine the price of each operation against the blob storage service, see [Map each REST operation to a price](../blobs/map-rest-apis-transaction-categories.md).
+- The amount of data written to the storage account can be estimated by looking at the sum of *'TotalIngress'* for primarily the *'PutBlob'*, *'PutBlock'*, *'CopyBlob'* and *'AppendBlock'* operations.
The cost of geo-replication data transfer for Blob storage accounts can also be calculated by using the estimate for the amount of data written when using a GRS or RA-GRS storage account.
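A similar hedged sketch can approximate data read and written by selecting the ingress and egress columns for the dominant operations (again with placeholder credentials):

```console
# Illustrative sketch: TotalEgress for GetBlob approximates data read;
# TotalIngress for PutBlob approximates data written.
az storage entity query \
    --table-name '$MetricsHourPrimaryTransactionsBlob' \
    --filter "RowKey eq 'user;GetBlob' or RowKey eq 'user;PutBlob'" \
    --select RowKey PartitionKey TotalIngress TotalEgress \
    --account-name mystorageaccount \
    --account-key "<account-key>"
```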
storage Storage Analytics Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-analytics-metrics.md
- Title: "Azure Storage Analytics metrics (classic)"
-description: Learn how to use Storage Analytics metrics in Azure Storage. Learn about transaction and capacity metrics, how metrics are stored, enabling metrics, and more.
- Previously updated : 01/02/2024
-# Azure Storage Analytics metrics (classic)
-
-On **January 9, 2024** Storage Analytics metrics, also referred to as *classic metrics* will be retired. If you use classic metrics, make sure to transition to metrics in Azure Monitor prior to that date. This article helps you make the transition.
-
-Azure Storage uses the Storage Analytics solution to store metrics that include aggregated transaction statistics and capacity data about requests to a storage service. Transactions are reported at the API operation level and at the storage service level. Capacity is reported at the storage service level. Metrics data can be used to:
-- Analyze storage service usage.
-- Diagnose issues with requests made against the storage service.
-- Improve the performance of applications that use a service.
-
- Storage Analytics metrics are enabled by default for new storage accounts. You can configure metrics in the [Azure portal](https://portal.azure.com/), by using PowerShell, or by using the Azure CLI. For step-by-step guidance, see [Enable and manage Azure Storage Analytic metrics (classic)](./manage-storage-analytics-logs.md). You can also enable Storage Analytics programmatically via the REST API or the client library. Use the Set Service Properties operations to enable Storage Analytics for each service.
-
-> [!NOTE]
-> Storage Analytics metrics are available for Azure Blob storage, Azure Queue storage, Azure Table storage, and Azure Files.
-> Storage Analytics metrics are now classic metrics. We recommend that you use [storage metrics in Azure Monitor](../blobs/monitor-blob-storage.md) instead of Storage Analytics metrics.
-
-## Transaction metrics
-
- A robust set of data is recorded at hourly or minute intervals for each storage service and requested API operation, which includes ingress and egress, availability, errors, and categorized request percentages. For a complete list of the transaction details, see [Storage Analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema).
-
- Transaction data is recorded at the service level and the API operation level. At the service level, statistics that summarize all requested API operations are written to a table entity every hour, even if no requests were made to the service. At the API operation level, statistics are only written to an entity if the operation was requested within that hour.
-
- For example, if you perform a **GetBlob** operation on your blob service, Storage Analytics Metrics logs the request and includes it in the aggregated data for the blob service and the **GetBlob** operation. If no **GetBlob** operation is requested during the hour, an entity isn't written to *$MetricsTransactionsBlob* for that operation.
-
- Transaction metrics are recorded for user requests and requests made by Storage Analytics itself. For example, requests by Storage Analytics to write logs and table entities are recorded.
-
-## Capacity metrics
-
-> [!NOTE]
-> Currently, capacity metrics are available only for the blob service.
-
- Capacity data is recorded daily for a storage account's blob service, and two table entities are written. One entity provides statistics for user data, and the other provides statistics about the `$logs` blob container used by Storage Analytics. The *$MetricsCapacityBlob* table includes the following statistics:
-
-- **Capacity**: The amount of storage used by the storage account's blob service, in bytes.
-- **ContainerCount**: The number of blob containers in the storage account's blob service.
-- **ObjectCount**: The number of committed and uncommitted block or page blobs in the storage account's blob service.
-
- For more information about capacity metrics, see [Storage Analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema).
-
-## How metrics are stored
-
- All metrics data for each of the storage services is stored in three tables reserved for that service. One table is for transaction information, one table is for minute transaction information, and another table is for capacity information. Transaction and minute transaction information consists of request and response data. Capacity information consists of storage usage data. Hour metrics, minute metrics, and capacity for a storage account's blob service is accessed in tables that are named as described in the following table.
-
-|Metrics level|Table names|Supported for versions|
-|-|--|-|
-|Hourly metrics, primary location|- $MetricsTransactionsBlob<br />- $MetricsTransactionsTable<br />- $MetricsTransactionsQueue|Versions prior to August 15, 2013, only. While these names are still supported, we recommend that you switch to using the tables that follow.|
-|Hourly metrics, primary location|- $MetricsHourPrimaryTransactionsBlob<br />- $MetricsHourPrimaryTransactionsTable<br />- $MetricsHourPrimaryTransactionsQueue<br />- $MetricsHourPrimaryTransactionsFile|All versions. Support for file service metrics is available only in version April 5, 2015, and later.|
-|Minute metrics, primary location|- $MetricsMinutePrimaryTransactionsBlob<br />- $MetricsMinutePrimaryTransactionsTable<br />- $MetricsMinutePrimaryTransactionsQueue<br />- $MetricsMinutePrimaryTransactionsFile|All versions. Support for file service metrics is available only in version April 5, 2015, and later.|
-|Hourly metrics, secondary location|- $MetricsHourSecondaryTransactionsBlob<br />- $MetricsHourSecondaryTransactionsTable<br />- $MetricsHourSecondaryTransactionsQueue|All versions. Read-access geo-redundant replication must be enabled.|
-|Minute metrics, secondary location|- $MetricsMinuteSecondaryTransactionsBlob<br />- $MetricsMinuteSecondaryTransactionsTable<br />- $MetricsMinuteSecondaryTransactionsQueue|All versions. Read-access geo-redundant replication must be enabled.|
-|Capacity (blob service only)|$MetricsCapacityBlob|All versions.|
-
- These tables are automatically created when Storage Analytics is enabled for a storage service endpoint. They're accessed via the namespace of the storage account, for example, `https://<accountname>.table.core.windows.net/Tables("$MetricsTransactionsBlob")`. The metrics tables don't appear in a listing operation and must be accessed directly via the table name.
-
-## Metrics alerts
-
-Consider setting up alerts in the [Azure portal](https://portal.azure.com) so you'll be automatically notified of important changes in the behavior of your storage services. For step-by-step guidance, see [Create metrics alerts](./manage-storage-analytics-logs.md).
-
-If you use a Storage Explorer tool to download this metrics data in a delimited format, you can use Microsoft Excel to analyze the data. For a list of available Storage Explorer tools, see [Azure Storage client tools](./storage-explorers.md).
-
-> [!IMPORTANT]
-> There might be a delay between a storage event and when the corresponding hourly or minute metrics data is recorded. In the case of minute metrics, several minutes of data might be written at once. This issue can lead to transactions from earlier minutes being aggregated into the transaction for the current minute. When this issue happens, the alert service might not have all available metrics data for the configured alert interval, which might lead to alerts firing unexpectedly.
->
-
-## Billing on storage metrics
-
-Write requests to create table entities for metrics are charged at the standard rates applicable to all Azure Storage operations.
-
-Read requests of metrics data by a client are also billable at standard rates.
-
-The capacity used by the metrics tables is also billable. Use the following information to estimate the amount of capacity used for storing metrics data:
-
-- If each hour a service utilizes every API in every service, approximately 148 KB of data is stored every hour in the metrics transaction tables if you enabled a service-level and API-level summary.
-- If within each hour a service utilizes every API in the service, approximately 12 KB of data is stored every hour in the metrics transaction tables if you enabled only a service-level summary.
-- The capacity table for blobs has two rows added each day provided you opted in for logs. This scenario implies that every day the size of this table increases by up to approximately 300 bytes.
-
-## Next steps
-
-- [Storage Analytics metrics table schema](/rest/api/storageservices/storage-analytics-metrics-table-schema)
-- [Storage Analytics logged operations and status messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages)
-- [Storage Analytics logging](storage-analytics-logging.md)
storage Storage Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-analytics.md
Title: Use Azure Storage analytics to collect log data
+ Title: Use Azure Storage analytics to collect logs and metrics data
description: Storage Analytics enables you to track metrics data for all storage services, and to collect logs for Blob, Queue, and Table storage. Previously updated : 01/09/2023 Last updated : 03/03/2017
# Storage Analytics
-Azure Storage Analytics performs logging for a storage account. You can use this data to trace requests, analyze usage trends, and diagnose issues with your storage account.
-
-> [!NOTE]
-> Storage Analytics supports only logs. Storage Analytics metrics are retired. See [Transition to metrics in Azure Monitor](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json). While Storage Analytics logs are still supported, we recommend that you use Azure Storage logs in Azure Monitor instead of Storage Analytics logs. To learn more, see any of the following articles:
->
-> - [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
-> - [Monitoring Azure Files](../files/storage-files-monitoring.md)
-> - [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
-> - [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
+Azure Storage Analytics performs logging and provides metrics data for a storage account. You can use this data to trace requests, analyze usage trends, and diagnose issues with your storage account.
To use Storage Analytics, you must enable it individually for each service you want to monitor. You can enable it from the [Azure portal](https://portal.azure.com). For details, see [Monitor a storage account in the Azure portal](./manage-storage-analytics-logs.md). You can also enable Storage Analytics programmatically via the REST API or the client library. Use the [Set Blob Service Properties](/rest/api/storageservices/set-blob-service-properties), [Set Queue Service Properties](/rest/api/storageservices/set-queue-service-properties), [Set Table Service Properties](/rest/api/storageservices/set-table-service-properties), and [Set File Service Properties](/rest/api/storageservices/Get-File-Service-Properties) operations to enable Storage Analytics for each service.
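Alongside the portal and REST options mentioned above, logging can be switched on from the Azure CLI. The following is a minimal sketch with placeholder credentials, assuming the `az storage logging update` command is available in your CLI version:

```console
# Illustrative sketch: log read, write, and delete requests for the Blob service
# and keep the logs for 7 days.
az storage logging update \
    --services b \
    --log rwd \
    --retention 7 \
    --account-name mystorageaccount \
    --account-key "<account-key>"
```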
-The aggregated log data is stored in a well-known blob, which may be accessed using the Blob service and Table service APIs.
+The aggregated data is stored in a well-known blob (for logging) and in well-known tables (for metrics), which may be accessed using the Blob service and Table service APIs.
Storage Analytics has a 20 TB limit on the amount of stored data that is independent of the total limit for your storage account. For more information about storage account limits, see [Scalability and performance targets for standard storage accounts](scalability-targets-standard-account.md).
For an in-depth guide on using Storage Analytics and other tools to identify, di
## Billing for Storage Analytics
-The amount of storage used by logs data is billable. You're also billed for requests to create blobs for logging.
+All metrics data is written by the services of a storage account. As a result, each write operation performed by Storage Analytics is billable. Additionally, the amount of storage used by metrics data is also billable.
+
+The following actions performed by Storage Analytics are billable:
+
+- Requests to create blobs for logging.
+- Requests to create table entities for metrics.
-If you have configured a data retention policy, you can reduce the spending by deleting old log data. For more information about retention policies, see [Setting a Storage Analytics Data Retention Policy](/rest/api/storageservices/Setting-a-Storage-Analytics-Data-Retention-Policy).
+If you have configured a data retention policy, you can reduce the spending by deleting old logging and metrics data. For more information about retention policies, see [Setting a Storage Analytics Data Retention Policy](/rest/api/storageservices/Setting-a-Storage-Analytics-Data-Retention-Policy).
### Understanding billable requests
-Every request made to an account's storage service is either billable or non-billable. Storage Analytics logs each individual request made to a service, including a status message that indicates how the request was handled. See [Understanding Azure Storage Billing - Bandwidth, Transactions, and Capacity](/archive/blogs/windowsazurestorage/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity).
+Every request made to an account's storage service is either billable or non-billable. Storage Analytics logs each individual request made to a service, including a status message that indicates how the request was handled. Similarly, Storage Analytics stores metrics for both a service and the API operations of that service, including the percentages and count of certain status messages. Together, these features can help you analyze your billable requests, make improvements on your application, and diagnose issues with requests to your services. For more information about billing, see [Understanding Azure Storage Billing - Bandwidth, Transactions, and Capacity](/archive/blogs/windowsazurestorage/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity).
-When looking at Storage Analytics data, you can use the tables in the [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) topic to determine what requests are billable. Then you can compare your log data to the status messages to see if you were charged for a particular request. You can also use the tables in the previous topic to investigate availability for a storage service or individual API operation.
+When looking at Storage Analytics data, you can use the tables in the [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) topic to determine what requests are billable. Then you can compare your logs and metrics data to the status messages to see if you were charged for a particular request. You can also use the tables in the previous topic to investigate availability for a storage service or individual API operation.
## Next steps

- [Monitor a storage account in the Azure portal](./manage-storage-analytics-logs.md)
+- [Storage Analytics Metrics](storage-analytics-metrics.md)
- [Storage Analytics Logging](storage-analytics-logging.md)
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Previously updated : 01/04/2024 Last updated : 01/11/2024
# Azure storage disaster recovery planning and failover
-Microsoft strives to ensure that Azure services are always available. However, unplanned service outages might occasionally occur. Key components of a good disaster recovery plan include strategies for:
+Microsoft strives to ensure that Azure services are always available. However, unplanned service outages may occur. Key components of a good disaster recovery plan include strategies for:
- [Data protection](../blobs/data-protection-overview.md)
- [Backup and restore](../../backup/index.yml)
Microsoft strives to ensure that Azure services are always available. However, u
- [Failover](#plan-for-storage-account-failover)
- [Designing applications for high availability](#design-for-high-availability)
-This article describes the options available for globally redundant storage accounts, and provides recommendations for developing highly available applications and testing your disaster recovery plan.
+This article focuses on failover for globally redundant storage accounts (GRS, GZRS, and RA-GZRS), and how to design your applications to be highly available if there's an outage and subsequent failover.
## Choose the right redundancy option
-Azure Storage maintains multiple copies of your storage account to ensure that availability and durability targets are met, even in the face of failures. The way in which data is replicated provides differing levels of protection. Each option offers its own benefits, so the option you choose depends upon the degree of resiliency your applications require.
+Azure Storage maintains multiple copies of your storage account to ensure durability and high availability. Which redundancy option you choose for your account depends on the degree of resiliency you need for your applications.
-Locally redundant storage (LRS), the lowest-cost redundancy option, automatically stores and replicates three copies of your storage account within a single datacenter. Although LRS protects your data against server rack and drive failures, it doesn't account for disasters such as fire or flooding within a datacenter. In the face of such disasters, all replicas of a storage account configured to use LRS might be lost or unrecoverable.
+With locally redundant storage (LRS), three copies of your storage account are automatically stored and replicated within a single datacenter. With zone-redundant storage (ZRS), a copy is stored and replicated in each of three separate availability zones within the same region. For more information about availability zones, see [Azure availability zones](../../availability-zones/az-overview.md).
-By comparison, zone-redundant storage (ZRS) retains a copy of a storage account and replicates it in each of three separate availability zones within the same region. For more information about availability zones, see [Azure availability zones](../../availability-zones/az-overview.md).
-
-Recovery of a single copy of a storage account occurs automatically with both LRS and ZRS.
+Recovery of a single copy of a storage account occurs automatically with LRS and ZRS.
### Globally redundant storage and failover
-Geo-redundant storage (GRS), geo-zone-redundant storage (GZRS), and read-access geo-zone-redundant storage (RA-GZRS) are examples of globally redundant storage options.
-When configured to use globally redundant storage (GRS, GZRS, and RA-GZRS), Azure copies your data asynchronously to a secondary geographic region located hundreds of miles away. This level of redundancy allows you to recover your data if there's an outage throughout the entire primary region.
-
-Unlike LRS and ZRS, globally redundant storage also allows for failover to a secondary region if there's an outage in the primary region. During the failover process, DNS entries for your storage account service endpoints are automatically updated such that the secondary region's endpoints become the new primary endpoints. Once the failover is complete, clients can begin writing to the new primary endpoints.
+With globally redundant storage (GRS, GZRS, and RA-GZRS), Azure copies your data asynchronously to a secondary geographic region at least hundreds of miles away. This allows you to recover your data if there's an outage in the primary region. A feature that distinguishes globally redundant storage from LRS and ZRS is the ability to fail over to the secondary region if there's an outage in the primary region. The process of failing over updates the DNS entries for your storage account service endpoints such that the endpoints for the secondary region become the new primary endpoints for your storage account. Once the failover is complete, clients can begin writing to the new primary endpoints.
-Read-access geo-redundant storage (RA-GRS) and read-access geo-zone-redundant storage (RA-GZRS) also provide geo-redundant storage, but offer the added benefit of read access to the secondary endpoint. These options are ideal for applications designed for high availability business-critical applications. If the primary endpoint experiences an outage, applications configured for read access to the secondary region can continue to operate. Microsoft recommends RA-GZRS for maximum availability and durability of your storage accounts.
+RA-GRS and RA-GZRS redundancy configurations provide geo-redundant storage with the added benefit of read access to the secondary endpoint if there is an outage in the primary region. If an outage occurs in the primary endpoint, applications configured for read access to the secondary region and designed for high availability can continue to read from the secondary endpoint. Microsoft recommends RA-GZRS for maximum availability and durability of your storage accounts.
-For more information about redundancy for Azure Storage, see [Azure Storage redundancy](storage-redundancy.md).
+For more information about redundancy in Azure Storage, see [Azure Storage redundancy](storage-redundancy.md).
## Plan for storage account failover
-Azure Storage accounts support three types of failover:
+Azure Storage accounts support two types of failover:
-- [**Customer-managed planned failover (preview)**](#customer-managed-planned-failover-preview) - Customers can manage storage account failover to test their disaster recovery plan.
- [**Customer-managed failover**](#customer-managed-failover) - Customers can manage storage account failover if there's an unexpected service outage.
-- [**Microsoft-managed failover**](#microsoft-managed-failover) - Potentially initiated by Microsoft due to a severe disaster in the primary region. <sup>1,2</sup>
-
-<sup>1</sup> Microsoft-managed failover can't be initiated for individual storage accounts, subscriptions, or tenants. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).<br/>
-<sup>2</sup> Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which would only be used in extreme circumstances.
-
-Each type of failover has a unique set of use cases, corresponding expectations for data loss, and support for accounts with a hierarchical namespace enabled (Azure Data Lake Storage Gen2). This table summarizes those aspects of each type of failover:
-
-| Type | Failover Scope | Use case | Expected data loss | HNS supported |
-|-|--|-|||
-| Customer-managed planned failover | Storage account | The storage service endpoints for the primary and secondary regions are available, and you want to perform disaster recovery testing. <br></br> The storage service endpoints for the primary region are available, but a networking or compute outage in the primary region is preventing your workloads from functioning properly. | [No](#anticipate-data-loss-and-inconsistencies) | [Yes <br> *(In preview)*](#azure-data-lake-storage-gen2) |
-| Customer-managed failover | Storage account | The storage service endpoints for the primary region become unavailable, but the secondary region is available. <br></br> You received an Azure Advisory in which Microsoft advises you to perform a failover operation of storage accounts potentially affected by an outage. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes <br> *(In preview)*](#azure-data-lake-storage-gen2) |
-| Microsoft-managed | Entire region or scale unit | The primary region becomes unavailable due to a significant disaster, but the secondary region is available. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#azure-data-lake-storage-gen2) |
-
-### Customer-managed planned failover (preview)
+- [**Microsoft-managed failover**](#microsoft-managed-failover) - Potentially initiated by Microsoft only in the case of a severe disaster in the primary region. <sup>1,2</sup>
-> [!IMPORTANT]
-> Customer-managed planned failover is currently in PREVIEW.
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To opt in to the preview, see [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md) and specify `AllowSoftFailover` as the feature name. The provider name for this preview feature is **Microsoft.Storage**.
+<sup>1</sup>Microsoft-managed failover can't be initiated for individual storage accounts, subscriptions, or tenants. For more details see [Microsoft-managed failover](#microsoft-managed-failover). <br/>
+<sup>2</sup> Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which would only be used in extreme circumstances. <br/>
-To test your disaster recovery plan, you can perform a planned failover of your storage account from the primary to the secondary region. During the failover process, the original secondary region becomes the new primary and the original primary becomes the new secondary. After the failover is complete, users can proceed to access data in the new primary region and administrators can validate their disaster recovery plan. The storage account must be available in both the primary and secondary regions to perform a planned failover.
+Each type of failover has a unique set of use cases, corresponding expectations for data loss, and support for accounts with a hierarchical namespace enabled (Azure Data Lake Storage Gen2). This table summarizes those aspects of each type of failover:
-You can also use this type of failover during a partial networking or compute outage in your primary region. This type of outage occurs, for example, when an outage in your primary region prevents your workloads from functioning properly, but leaves your storage service endpoints available.
-
-During customer-managed planned failover and failback, data loss isn't expected as long as the primary and secondary regions are available throughout the entire process. See [Anticipate data loss and inconsistencies](#anticipate-data-loss-and-inconsistencies).
-
-To thoroughly understand the effect of this type of failover on your users and applications, it's helpful to know what happens during every step of the failover and failback process. For details about how the process works, see [How failover for disaster recovery testing (preview) works](storage-failover-customer-managed-planned.md).
+| Type | Failover Scope | Use case | Expected data loss | HNS supported |
+||--|-|||
+| Customer-managed | Storage account | The storage service endpoints for the primary region become unavailable, but the secondary region is available. <br></br> You received an Azure Advisory in which Microsoft advises you to perform a failover operation of storage accounts potentially affected by an outage. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes ](#azure-data-lake-storage-gen2)*[(In preview)](#azure-data-lake-storage-gen2)* |
+| Microsoft-managed | Entire region or scale unit | The primary region becomes completely unavailable due to a significant disaster, but the secondary region is available. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#azure-data-lake-storage-gen2) |
### Customer-managed failover
-Although the two types of customer-managed failover work in a similar manner, there are primarily two ways in which they differ:
-
-- The management of the redundancy configurations within the primary and secondary regions (LRS or ZRS).
-- The status of the geo-redundancy configuration at each stage of the failover and failback process.
-
-The following table compares the redundancy state of a storage account after a failover of each type:
-
-| Result of failover on... | Customer-managed planned failover | Customer-managed failover |
-|--|-|-|
-| ...the secondary region | The secondary region becomes the new primary | The secondary region becomes the new primary |
-| ...the original primary region | The original primary region becomes the new secondary |The copy of the data in the original primary region is deleted |
-| ...the account redundancy configuration | The storage account is converted to GRS | The storage account is converted to LRS |
-| ...the geo-redundancy configuration | Geo-redundancy is retained | Geo-redundancy is lost |
-
-The following table summarizes the resulting redundancy configuration at every stage of the failover and failback process for each type of failover:
-
-| Original <br> configuration | After <br> failover | After re-enabling <br> geo redundancy | After <br> failback | After re-enabling <br> geo redundancy |
-||||||
-| **Customer-managed planned failover** | | | | |
-| GRS | GRS | n/a <sup>2</sup> | GRS | n/a <sup>2</sup> |
-| GZRS | GRS | n/a <sup>2</sup> | GZRS | n/a <sup>2</sup> |
-| **Customer-managed failover** | | | | |
-| GRS | LRS | GRS <sup>1</sup> | LRS | GRS <sup>1</sup> |
-| GZRS | LRS | GRS <sup>1</sup> | ZRS | GZRS <sup>1</sup> |
-
-<sup>1</sup> Geo-redundancy is lost during a failover to recover from an outage and must be manually reconfigured.<br>
-<sup>2</sup> Geo-redundancy is retained during a failover for disaster recovery testing and doesn't need to be manually reconfigured.
- If the data endpoints for the storage services in your storage account become unavailable in the primary region, you can fail over to the secondary region. After the failover is complete, the secondary region becomes the new primary and users can proceed to access data in the new primary region.
-To understand the effect of this type of failover on your users and applications, it's helpful to know what happens during every step of the failover and failback process. For details about how the process works, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
+To fully understand the impact that customer-managed account failover would have on your users and applications, it is helpful to know what happens during every step of the failover and failback process. For details about how the process works, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
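For orientation only, a customer-managed failover can be initiated from the Azure CLI once you've accepted the expected data loss; `mystorageaccount` and `myresourcegroup` are placeholders, and the command may prompt for confirmation depending on your CLI version. This is a sketch, not a substitute for the linked how-it-works article:

```console
# Illustrative sketch: initiate a customer-managed failover to the secondary region.
az storage account failover \
    --name mystorageaccount \
    --resource-group myresourcegroup
```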
### Microsoft-managed failover
-In extreme circumstances such as major disasters, Microsoft **may** initiate a regional failover. Regional failovers are uncommon, and only take place when the original primary region is deemed unrecoverable within a reasonable amount of time. During these events, no action on your part is required. If your storage account is configured for RA-GRS or RA-GZRS, your applications can read from the secondary region during a Microsoft-managed failover. However, you don't have write access to your storage account until the failover process is complete.
+In extreme circumstances where the original primary region is deemed unrecoverable within a reasonable amount of time due to a major disaster, Microsoft **may** initiate a regional failover. In this case, no action on your part is required. Until the Microsoft-managed failover has completed, you won't have write access to your storage account. Your applications can read from the secondary region if your storage account is configured for RA-GRS or RA-GZRS.
> [!IMPORTANT]
> Your disaster recovery plan should be based on customer-managed failover. **Do not** rely on Microsoft-managed failover, which might only be used in extreme circumstances.
-> A Microsoft-managed failover would be initiated for an entire physical unit, such as a region, datacenter or scale unit. It can't be initiated for individual storage accounts, subscriptions, or tenants. If you need the ability to selectively failover your individual storage accounts, use [customer-managed planned failover](#customer-managed-planned-failover-preview).
-
+> A Microsoft-managed failover would be initiated for an entire physical unit, such as a region or scale unit. It can't be initiated for individual storage accounts, subscriptions, or tenants. For the ability to selectively failover your individual storage accounts, use [customer-managed account failover](#customer-managed-failover).
### Anticipate data loss and inconsistencies

> [!CAUTION]
-> Storage account failover usually involves some amount data loss, and could also potentially introduce file and data inconsistencies. In your disaster recovery plan, it's important to consider the impact that an account failover would have on your data before initiating one.
+> Storage account failover usually involves some data loss, and potentially file and data inconsistencies. In your disaster recovery plan, it's important to consider the impact that an account failover would have on your data before initiating one.
-Because data is written asynchronously from the primary region to the secondary region, there's always a delay before a write to the primary region is copied to the secondary. If the primary region becomes unavailable, it's possible that the most recent writes might not yet be copied to the secondary.
+Because data is written asynchronously from the primary region to the secondary region, there's always a delay before a write to the primary region is copied to the secondary. If the primary region becomes unavailable, the most recent writes may not yet have been copied to the secondary.
-When a failover occurs, all data in the primary region is lost as the secondary region becomes the new primary. All data already copied to the secondary region is maintained when the failover happens. However, any data written to the primary that doesn't yet exist within the secondary region is lost permanently.
+When a failover occurs, all data in the primary region is lost as the secondary region becomes the new primary. All data already copied to the secondary is maintained when the failover happens. However, any data written to the primary that hasn't also been copied to the secondary region is lost permanently.
The new primary region is configured to be locally redundant (LRS) after the failover.
You also might experience file or data inconsistencies if your storage accounts
#### Last sync time
-The **Last Sync Time** property indicates the most recent time that data from the primary region was also written to the secondary region. For accounts that have a hierarchical namespace, the same **Last Sync Time** property also applies to the metadata managed by the hierarchical namespace, including ACLs. All data and metadata written prior to the last sync time is available on the secondary. By contrast, data and metadata written after the last sync time might not yet be copied to the secondary and could potentially be lost. During an outage, use this property to estimate the amount of data loss you might incur by initiating an account failover.
+The **Last Sync Time** property indicates the most recent time that data from the primary region is guaranteed to have been written to the secondary region. For accounts that have a hierarchical namespace, the same **Last Sync Time** property also applies to the metadata managed by the hierarchical namespace, including ACLs. All data and metadata written prior to the last sync time is available on the secondary, while data and metadata written after the last sync time may not have been written to the secondary, and may be lost. Use this property if there's an outage to estimate the amount of data loss you may incur by initiating an account failover.
-As a best practice, design your application so that you can use **Last Sync Time** to evaluate expected data loss. For example, logging all write operations allows you to compare the times of your last write operation to the last sync time. This method enables you to determine which writes aren't yet synced to the secondary and are in danger of being lost.
+As a best practice, design your application so that you can use the last sync time to evaluate expected data loss. For example, if you're logging all write operations, then you can compare the time of your last write operations to the last sync time to determine which writes haven't been synced to the secondary.
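As a minimal sketch of the practice described above (placeholder names; the `geoReplicationStats` expansion is assumed to be available for your account's redundancy configuration), the Last Sync Time can be read with:

```console
# Illustrative sketch: read the Last Sync Time to estimate the window of writes
# that may not yet be copied to the secondary region.
az storage account show \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --expand geoReplicationStats \
    --query "geoReplicationStats.lastSyncTime" \
    --output tsv
```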
For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).

#### File consistency for Azure Data Lake Storage Gen2
-Replication for storage accounts with a [hierarchical namespace enabled (Azure Data Lake Storage Gen2)](../blobs/data-lake-storage-introduction.md) occurs at the file level. Because replication occurs at this level, an outage in the primary region might prevent some of the files within a container or directory from successfully replicating to the secondary region. Consistency for all files within a container or directory after a storage account failover isn't guaranteed.
+Replication for storage accounts with a [hierarchical namespace enabled (Azure Data Lake Storage Gen2)](../blobs/data-lake-storage-introduction.md) occurs at the file level. This means if an outage in the primary region occurs, it is possible that only some of the files in a container or directory might have successfully replicated to the secondary region. Consistency for all files in a container or directory after a storage account failover is not guaranteed.
#### Change feed and blob data inconsistencies
-Geo-redundant failover of storage accounts with [change feed](../blobs/storage-blob-change-feed.md) enabled could result in inconsistencies between the change feed logs and the blob data and/or metadata. Such inconsistencies can result from the asynchronous nature of change log updates and data replication between the primary and secondary regions. To avoid inconsistencies, ensure that all log records are flushed to the log files, and that all storage data is replicated from the primary to the secondary region.
+Storage account failover of geo-redundant storage accounts with [change feed](../blobs/storage-blob-change-feed.md) enabled may result in inconsistencies between the change feed logs and the blob data and/or metadata. Such inconsistencies can result from the asynchronous nature of both updates to the change logs and the replication of blob data from the primary to the secondary region. The only situation in which inconsistencies would not be expected is when all of the current log records have been successfully flushed to the log files, and all of the storage data has been successfully replicated from the primary to the secondary region.
-For more information about change feed, see [How the change feed works](../blobs/storage-blob-change-feed.md#how-the-change-feed-works).
+For information about how change feed works see [How the change feed works](../blobs/storage-blob-change-feed.md#how-the-change-feed-works).
-Keep in mind that other storage account features also require the change feed to be enabled. These features include [operational backup of Azure Blob Storage](../../backup/blob-backup-support-matrix.md#limitations), [Object replication](../blobs/object-replication-overview.md) and [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md).
+Keep in mind that other storage account features require the change feed to be enabled such as [operational backup of Azure Blob Storage](../../backup/blob-backup-support-matrix.md#limitations), [Object replication](../blobs/object-replication-overview.md) and [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md).
#### Point-in-time restore inconsistencies
-Customer-managed failover is supported for general-purpose v2 standard tier storage accounts that include block blobs. However, performing a customer-managed failover on a storage account resets the earliest possible restore point for the account. Data for [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md) is only consistent up to the failover completion time. As a result, you can only restore block blobs to a point in time no earlier than the failover completion time. You can check the failover completion time in the redundancy tab of your storage account in the Azure portal.
+Customer-managed failover is supported for general-purpose v2 standard tier storage accounts that include block blobs. However, performing a customer-managed failover on a storage account resets the earliest possible restore point for the account. Data for [Point-in-time restore for block blobs](../blobs/point-in-time-restore-overview.md) is only consistent up to the failover completion time. As a result, you can only restore block blobs to a point in time no earlier than the failover completion time. You can check the failover completion time in the redundancy tab of your storage account in the Azure Portal.
+
+For example, suppose you have set the retention period to 30 days. If more than 30 days have elapsed since the failover, then you can restore to any point within that 30 days. However, if fewer than 30 days have elapsed since the failover, then you can't restore to a point prior to the failover, regardless of the retention period. For example, if it's been 10 days since the failover, then the earliest possible restore point is 10 days in the past, not 30 days in the past.
### The time and cost of failing over
-The time it takes for a customer-initiated failover to complete after being initiated can vary, although it typically takes less than one hour.
+The time it takes for failover to complete after being initiated can vary, although it typically takes less than one hour.
-A customer-managed planned failover doesn't lose its geo-redundancy after a failover and subsequent failback. However, a customer-managed failover to recover from an outage does lose its geo-redundancy after a failover (and failback). In that type of failover, your storage account is automatically converted to locally redundant storage (LRS) in the new primary region during a failover, and the storage account in the original primary region is deleted.
+A customer-managed failover loses its geo-redundancy after a failover (and failback). Your storage account is automatically converted to locally redundant storage (LRS) in the new primary region during a failover, and the storage account in the original primary region is deleted.
-You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account, but re-replicating data to the new secondary region incurs a charge. Additionally, all archived blobs need to be rehydrated to an online tier before the account can be reconfigured for geo-redundancy. This rehydration also incurs an extra charge. For more information about pricing, see:
+You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account, but note that converting from LRS to GRS or RA-GRS incurs an additional cost. The cost is due to the network egress charges to re-replicate the data to the new secondary region. Also, all archived blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy, which will incur a cost. For more information about pricing, see:
- [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/)
- [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/)
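As a hedged sketch of the re-enablement step (placeholder names; the SKU value assumes you want read access to the new secondary region):

```console
# Illustrative sketch: convert the account from LRS back to RA-GRS after failover.
# Re-replication and any archive rehydration are billed as described above.
az storage account update \
    --name mystorageaccount \
    --resource-group myresourcegroup \
    --sku Standard_RAGRS
```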
-After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. The amount of time it takes for replication to complete depends on several factors. These factors include:
+After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. Replication time depends on many factors, which include:
- The number and size of the objects in the storage account. Replicating many small objects can take longer than replicating fewer and larger objects.
- The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live traffic takes priority over geo replication.
All geo-redundant offerings support Microsoft-managed failover. In addition, som
| Type of failover | GRS/RA-GRS | GZRS/RA-GZRS |
||||
-| **Customer-managed failover** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
-| **Customer-managed planned failover** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
-| **Microsoft-managed failover** | All account types | General-purpose v2 accounts |
+| **Customer-managed failover** | General-purpose v2 accounts</br> General-purpose v1 accounts</br> Legacy Blob Storage accounts | General-purpose v2 accounts |
+| **Microsoft-managed failover** | All account types | General-purpose v2 accounts |
#### Classic storage accounts

> [!IMPORTANT]
-> Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as the *classic* model, isn't supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region can't currently be in a failed state.
+> Customer-managed account failover is only supported for storage accounts deployed using the Azure Resource Manager (ARM) deployment model. The Azure Service Manager (ASM) deployment model, also known as *classic*, isn't supported. To make classic storage accounts eligible for customer-managed account failover, they must first be [migrated to the ARM model](classic-account-migration-overview.md). Your storage account must be accessible to perform the upgrade, so the primary region can't currently be in a failed state.
>
-> During a disaster that affects the primary region, Microsoft will manage the failover for classic storage accounts. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
+> If there's a disaster that affects the primary region, Microsoft will manage the failover for classic storage accounts. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
#### Azure Data Lake Storage Gen2
All geo-redundant offerings support Microsoft-managed failover. In addition, som
>
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
-> During a significant disaster that affects the primary region, Microsoft will manage the failover for accounts with a hierarchical namespace. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
+> If there's a significant disaster that affects the primary region, Microsoft will manage the failover for accounts with a hierarchical namespace. For more information, see [Microsoft-managed failover](#microsoft-managed-failover).
### Unsupported features and services

The following features and services aren't supported for account failover:
-- Azure File Sync doesn't support storage account failover. Storage accounts containing Azure file shares and being used as cloud endpoints in Azure File Sync shouldn't be failed over. Doing so causes sync to stop working and can also result in the unexpected data loss of any newly tiered files.
+- Azure File Sync doesn't support storage account failover. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync shouldn't be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files.
- A storage account containing premium block blobs can't be failed over. Storage accounts that support premium block blobs don't currently support geo-redundancy.
- Customer-managed failover isn't supported for either the source or the destination account in an [object replication policy](../blobs/object-replication-overview.md).
-- To failover an account with SSH File Transfer Protocol (SFTP) enabled, you must first [disable SFTP for the account](../blobs/secure-file-transfer-protocol-support-how-to.md#disable-sftp-support). You can [re-enable sftp](../blobs/secure-file-transfer-protocol-support-how-to.md#enable-sftp-support) if you want to resume using it after the failover is complete.
+- To failover an account with SSH File Transfer Protocol (SFTP) enabled, you must first [disable SFTP for the account](../blobs/secure-file-transfer-protocol-support-how-to.md#disable-sftp-support). If you want to resume using SFTP after the failover is complete, simply [re-enable it](../blobs/secure-file-transfer-protocol-support-how-to.md#enable-sftp-support).
- Network File System (NFS) 3.0 (NFSv3) isn't supported for storage account failover. You can't create a storage account configured for global-redundancy with NFSv3 enabled.
-### Failover isn't for account migration
+### Failover is not for account migration
-Storage account failover is a temporary solution to a service outage and shouldn't be used as part of your data migration strategy. For information about how to migrate your storage accounts, see [Azure Storage migration overview](storage-migration-overview.md).
+Storage account failover shouldn't be used as part of your data migration strategy. Failover is a temporary solution to a service outage. For information about how to migrate your storage accounts, see [Azure Storage migration overview](storage-migration-overview.md).
### Storage accounts containing archived blobs
-Storage accounts containing archived blobs support account failover. However, after a [customer-managed failover](#customer-managed-failover) is complete, all archived blobs must be rehydrated to an online tier before the account can be configured for geo-redundancy.
+Storage accounts containing archived blobs support account failover. However, after a [customer-managed failover](#customer-managed-failover) is complete, all archived blobs need to be rehydrated to an online tier before the account can be configured for geo-redundancy.
### Storage resource provider
-Microsoft provides two REST APIs for working with Azure Storage resources. These APIs form the basis for all actions you can perform against Azure Storage. The Azure Storage REST API enables you to work with data in your storage account, including blob, queue, file, and table data. The Azure Storage resource provider REST API enables you to manage the storage account and related resources.
+Microsoft provides two REST APIs for working with Azure Storage resources. These APIs form the basis of all actions you can perform against Azure Storage. The Azure Storage REST API enables you to work with data in your storage account, including blob, queue, file, and table data. The Azure Storage resource provider REST API enables you to manage the storage account and related resources.
+
+After a failover is complete, clients can again read and write Azure Storage data in the new primary region. However, the Azure Storage resource provider does not fail over, so resource management operations must still take place in the primary region. If the primary region is unavailable, you will not be able to perform management operations on the storage account.
-As part of an account failover, the Azure Storage resource provider also fails over. As a result, resource management operations can occur in the new primary region after the failover is complete. The [Location](/dotnet/api/microsoft.azure.management.storage.models.trackedresource.location) property returns the new primary location.
+Because the Azure Storage resource provider does not fail over, the [Location](/dotnet/api/microsoft.azure.management.storage.models.trackedresource.location) property will return the original primary location after the failover is complete.
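For example, the following minimal PowerShell sketch (placeholder resource names; assumes the Az.Storage module) retrieves the account and displays its location and status properties so you can compare the management-plane view with the data-plane regions after a failover:

```powershell
# Sign in first with Connect-AzAccount; the names below are placeholders.
$rgName = "<resource-group-name>"
$saName = "<storage-account-name>"

$account = Get-AzStorageAccount -ResourceGroupName $rgName -Name $saName

# Location comes from the resource provider, which doesn't fail over;
# the primary/secondary status properties describe the data-plane regions.
$account | Select-Object Location, PrimaryLocation, SecondaryLocation, StatusOfPrimary, StatusOfSecondary
```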
### Azure virtual machines
-Azure virtual machines (VMs) don't fail over as part of a storage account failover. Any VMs that failed over to a secondary region in response to an outage need to be recreated after the failover completes. Keep in mind that account failover can potentially result in data loss, including data stored in a temporary disk when the VM is shut down. Microsoft recommends following the [high availability](../../virtual-machines/availability.md) and [disaster recovery](../../virtual-machines/backup-recovery.md) guidance specific to virtual machines in Azure.
+Azure virtual machines (VMs) don't fail over as part of an account failover. If the primary region becomes unavailable, and you fail over to the secondary region, then you will need to recreate any VMs after the failover. Also, there's a potential data loss associated with the account failover. Microsoft recommends following the [high availability](../../virtual-machines/availability.md) and [disaster recovery](../../virtual-machines/backup-recovery.md) guidance specific to virtual machines in Azure.
+
+Keep in mind that any data stored in a temporary disk is lost when the VM is shut down.
### Azure unmanaged disks
-Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running in Azure, any unmanaged disks attached to the VM are leased. An account failover can't proceed when there's a lease on a blob. Before a failover can be initiated on an account containing unmanaged disks attached to Azure VMs, the disks must be shut down. For this reason, Microsoft's recommended best practices include converting any unmanaged disks to managed disks.
+As a best practice, Microsoft recommends converting unmanaged disks to managed disks. However, if you need to fail over an account that contains unmanaged disks attached to Azure VMs, you will need to shut down the VM before initiating the failover.
-To perform a failover on an account containing unmanaged disks, follow these steps:
+Unmanaged disks are stored as page blobs in Azure Storage. When a VM is running in Azure, any unmanaged disks attached to the VM are leased. An account failover can't proceed when there's a lease on a blob. To perform the failover, follow these steps:
-1. Before you begin, note the names of any unmanaged disks, their logical unit numbers (LUN), and the VM to which they're attached. Doing so will make it easier to reattach the disks after the failover.
-1. Shut down the VM.
-1. Delete the VM, but retain the VHD files for the unmanaged disks. Note the time at which you deleted the VM.
-1. Wait until the **Last Sync Time** updates, and ensure that it's later than the time at which you deleted the VM. This step ensures that the secondary endpoint is fully updated with the VHD files when the failover occurs, and that the VM functions properly in the new primary region.
-1. Initiate the account failover.
-1. Wait until the account failover is complete and the secondary region becomes the new primary region.
-1. Create a VM in the new primary region and reattach the VHDs.
-1. Start the new VM.
+1. Before you begin, note the names of any unmanaged disks, their logical unit numbers (LUN), and the VM to which they are attached. Doing so will make it easier to reattach the disks after the failover.
+2. Shut down the VM.
+3. Delete the VM, but retain the VHD files for the unmanaged disks. Note the time at which you deleted the VM.
+4. Wait until the **Last Sync Time** has been updated and is later than the time at which you deleted the VM. This step is important, because if the secondary endpoint hasn't been fully updated with the VHD files when the failover occurs, the VM may not function properly in the new primary region. See the sketch after these steps for one way to check this.
+5. Initiate the account failover.
+6. Wait until the account failover is complete and the secondary region has become the new primary region.
+7. Create a VM in the new primary region and reattach the VHDs.
+8. Start the new VM.
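As a rough illustration of steps 4 and 5, the following PowerShell sketch (hypothetical names and deletion time; assumes the Az.Storage module) compares the **Last Sync Time** with the time the VM was deleted before initiating the failover:

```powershell
$rgName = "<resource-group-name>"
$saName = "<storage-account-name>"
$vmDeletedAtUtc = [datetime]::Parse("2024-01-15T10:30:00Z").ToUniversalTime()  # hypothetical VM deletion time

# Geo-replication stats are only returned when explicitly requested.
$stats = (Get-AzStorageAccount -ResourceGroupName $rgName -Name $saName -IncludeGeoReplicationStats).GeoReplicationStats

if ($stats.LastSyncTime -gt $vmDeletedAtUtc) {
    # The secondary has replicated past the VM deletion, so the VHD files are present on the secondary.
    Invoke-AzStorageAccountFailover -ResourceGroupName $rgName -Name $saName
}
else {
    Write-Output "Last Sync Time ($($stats.LastSyncTime)) is not yet later than the VM deletion time; wait and check again."
}
```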
Keep in mind that any data stored in a temporary disk is lost when the VM is shut down.

### Copying data as an alternative to failover
-As previously mentioned, you can maintain high availability by configuring applications to use a storage account configured for read access to a secondary region. However, if you prefer not to fail over during an outage within the primary region, you can manually copy your data as an alternative. Tools such as [AzCopy](./storage-use-azcopy-v10.md) and [Azure PowerShell](/powershell/module/az.storage/) enable you to copy data from your storage account in the affected region to another storage account in an unaffected region. After the copy operation is complete, you can reconfigure your applications to use the storage account in the unaffected region for both read and write availability.
+If your storage account is configured for read access to the secondary region, then you can design your application to read from the secondary endpoint. If you prefer not to fail over if there's an outage in the primary region, you can use tools such as [AzCopy](./storage-use-azcopy-v10.md) or [Azure PowerShell](/powershell/module/az.storage/) to copy data from your storage account in the secondary region to another storage account in an unaffected region. You can then point your applications to that storage account for both read and write availability.
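As one possible illustration (hypothetical account, key, and container names), the following Azure PowerShell sketch starts a server-side copy of every blob in a container to an account in an unaffected region; for large datasets, or to read specifically from the secondary endpoint, AzCopy is generally the better fit:

```powershell
# Hypothetical account, key, and container names.
$srcCtx  = New-AzStorageContext -StorageAccountName "<source-account>" -StorageAccountKey "<source-key>"
$destCtx = New-AzStorageContext -StorageAccountName "<destination-account>" -StorageAccountKey "<destination-key>"
$container = "<container-name>"

# Make sure the destination container exists, then copy each blob server-side.
New-AzStorageContainer -Name $container -Context $destCtx -ErrorAction SilentlyContinue
Get-AzStorageBlob -Container $container -Context $srcCtx | ForEach-Object {
    Start-AzStorageBlobCopy -SrcContainer $container -SrcBlob $_.Name `
        -DestContainer $container -DestBlob $_.Name `
        -Context $srcCtx -DestContext $destCtx
}
```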
## Design for high availability
-It's important to design your application for high availability from the start. Refer to these Azure resources for guidance when designing your application and planning for disaster recovery:
+It's important to design your application for high availability from the start. Refer to these Azure resources for guidance in designing your application and planning for disaster recovery:
- [Designing resilient applications for Azure](/azure/architecture/framework/resiliency/app-design): An overview of the key concepts for architecting highly available applications in Azure.
- [Resiliency checklist](/azure/architecture/checklist/resiliency-per-service): A checklist for verifying that your application implements the best design practices for high availability.
- [Use geo-redundancy to design highly available applications](geo-redundant-design.md): Design guidance for building applications to take advantage of geo-redundant storage.
- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md): A tutorial that shows how to build a highly available application that automatically switches between endpoints as failures and recoveries are simulated.
-Refer to these best practices to maintain high availability for your Azure Storage data:
+Keep in mind these best practices for maintaining high availability for your Azure Storage data:
-- **Disks:** Use [Azure Backup](https://azure.microsoft.com/services/backup/) to back up the VM disks used by your Azure virtual machines. Also consider using [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) to protect your VMs from a regional disaster.
+- **Disks:** Use [Azure Backup](https://azure.microsoft.com/services/backup/) to back up the VM disks used by your Azure virtual machines. Also consider using [Azure Site Recovery](https://azure.microsoft.com/services/site-recovery/) to protect your VMs if there's a regional disaster.
- **Block blobs:** Turn on [soft delete](../blobs/soft-delete-blob-overview.md) to protect against object-level deletions and overwrites, or copy block blobs to another storage account in a different region using [AzCopy](./storage-use-azcopy-v10.md), [Azure PowerShell](/powershell/module/az.storage/), or the [Azure Data Movement library](storage-use-data-movement-library.md).
- **Files:** Use [Azure Backup](../../backup/azure-file-share-backup-overview.md) to back up your file shares. Also enable [soft delete](../files/storage-files-prevent-file-share-deletion.md) to protect against accidental file share deletions. For geo-redundancy when GRS isn't available, use [AzCopy](./storage-use-azcopy-v10.md) or [Azure PowerShell](/powershell/module/az.storage/) to copy your files to another storage account in a different region.
- **Tables:** Use [AzCopy](./storage-use-azcopy-v10.md) to export table data to another storage account in a different region.

## Track outages
-Customers can subscribe to the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) to track the health and status of Azure Storage and other Azure services.
+Customers may subscribe to the [Azure Service Health Dashboard](https://azure.microsoft.com/status/) to track the health and status of Azure Storage and other Azure services.
Microsoft also recommends that you design your application to prepare for the possibility of write failures. Your application should expose write failures in a way that alerts you to the possibility of an outage in the primary region.
Microsoft also recommends that you design your application to prepare for the po
- [Use geo-redundancy to design highly available applications](geo-redundant-design.md)
- [Tutorial: Build a highly available application with Blob storage](../blobs/storage-create-geo-redundant-storage.md)
- [Azure Storage redundancy](storage-redundancy.md)
-- [How customer-managed storage account failover to recover from an outage works](storage-failover-customer-managed-unplanned.md)
-- [How failover for disaster recovery testing (preview) works](storage-failover-customer-managed-planned.md)
+- [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md)
+
storage Storage Failover Customer Managed Planned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-planned.md
- Title: How customer-managed planned failover works-
-description: Azure Storage supports account failover of geo-redundant storage accounts for disaster recovery testing and planning. Learn what happens to your storage account and storage services during a customer-managed planned failover (preview) to the secondary region to perform disaster recovery testing and planning.
----- Previously updated : 12/12/2023-----
-# How customer-managed planned failover works (preview)
-
-Customer-managed storage account planned failover enables you to fail over your entire geo-redundant storage account to the secondary region to do disaster recovery testing. During failover, the original secondary region becomes the new primary and all storage service endpoints for blobs, tables, queues and files are redirected to the new primary region. After testing is complete, you can perform another failover operation to *fail back* to the original primary region. A *failback* is an operation that restores a storage account to its original regional configuration.
-
-This article describes what happens during a customer-managed planned storage account failover and failback at every stage of the process. To understand how a failover due to an unexpected storage endpoint outage works, see [How customer-managed storage account failover to recover from an outage works](storage-failover-customer-managed-unplanned.md).
-
-> [!IMPORTANT]
-> Customer-managed planned failover is currently in PREVIEW.
->
-> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
->
-> To opt in to the preview, see [Set up preview features in Azure subscription](../../azure-resource-manager/management/preview-features.md) and specify `AllowSoftFailover` as the feature name.
-
-## Redundancy management during failover and failback
-
-> [!TIP]
-> To understand the varying redundancy states during the storage account failover and failback process in detail, see [Azure Storage redundancy](storage-redundancy.md) for definitions of each.
-
-Azure storage provides a wide variety of redundancy options to help protect your data.
-
-Locally redundant storage (LRS) automatically maintains three copies of your storage account within a single datacenter. LRS is the least expensive replication option, but isn't recommended for applications requiring high availability or durability. Zone-redundant storage (ZRS) replicates your storage account synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location, and your data is still accessible for both read and write operations if a zone becomes unavailable.
-
-Geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) redundancy options can be used to ensure that your data is highly durable. GRS and RA-GRS use LRS to replicate your data three times locally within both the primary and secondary regions. Configuring your account for read access (RA) allows your data to be read from the secondary region, as long as the region's storage service endpoints are available.
-
-Geo-zone-redundant storage (GZRS) and read-access geo-zone-redundant storage (RA-GZRS) use ZRS replication within the primary region, and LRS within the secondary. As with RA-GRS, configuring RA allows you to read data from the secondary region as long as the storage service endpoints to that region are available.
-
-During the planned failover process, the storage service endpoints to the primary region become read-only and any remaining updates are allowed to finish replicating to the secondary region. Afterward, storage service endpoint DNS entries are switched. Your storage account's secondary endpoints become the new primary endpoints, and the original primary endpoints become the new secondary. Data replication within each region remains unchanged even though the primary and secondary regions are switched. Replication within the new primary is always configured to use LRS, and replication within the original primary remains the same, whether LRS or ZRS.
-
-Azure stores the original redundancy configuration of your storage account in the account's metadata, allowing you to eventually fail back when you're ready.
-
-After failover, the new redundancy configuration of your storage account temporarily becomes GRS. The way in which data is replicated within the primary region at a given point in time determines the zone-redundancy configuration of the storage account. Replication within the new primary is always configured to use LRS, so the account is temporarily nonzonal. Azure immediately begins copying data from the new primary region to the new secondary. If your storage account's original secondary region was configured for RA, access is configured for the new secondary region during failover and failback.
-
-The failback process is essentially the same as the failover process except Azure stores the original redundancy configuration of your storage account and restores it to its original state upon failback. So, if your storage account was originally configured as GZRS, the storage account will be GZRS after failback.
-
-> [!NOTE]
-> Unlike [customer-managed failover](storage-failover-customer-managed-unplanned.md), during planned failover, replication from the primary to secondary region is allowed to finish before the DNS entries for the endpoints are changed to the new secondary. Because of this, data loss is not expected during failover or failback as long as both the primary and secondary regions are available throughout the process.
-
-## How to initiate a failover
-
-To learn how to initiate a failover, see [Initiate a storage account failover](storage-initiate-account-failover.md).
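For reference, initiating a planned failover from PowerShell looks roughly like the following sketch (placeholder names; assumes a version of the Az.Storage module that supports the `FailoverType` parameter, as described in the how-to article linked above):

```powershell
$rgName = "<resource-group-name>"
$saName = "<storage-account-name>"

# "Planned" performs the preview planned failover for DR testing;
# running the same command again after testing fails the account back.
Invoke-AzStorageAccountFailover -ResourceGroupName $rgName -Name $saName -FailoverType Planned
```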
-
-## The failover and failback process
-
-The following diagrams show what happens during a customer-managed planned failover and failback of a storage account.
-
-## [GRS/RA-GRS](#tab/grs-ra-grs)
-
-Under normal circumstances, a client writes data to a storage account in the primary region via storage service endpoints (1). The data is then copied asynchronously from the primary region to the secondary region (2). The following image shows the normal state of a storage account configured as GRS:
--
-### The failover process (GRS/RA-GRS)
-
-Begin disaster recovery testing by initiating a failover of your storage account to the secondary region. The following steps describe the failover process, and the subsequent image provides illustration:
-
-1. The original primary region becomes read only.
-1. Replication of all data from the primary region to the secondary region completes.
-1. DNS entries for storage service endpoints in the secondary region are promoted and become the new primary endpoints for your storage account.
-
-The failover typically takes about an hour.
--
-After the failover is complete, the original primary region becomes the new secondary (1) and the original secondary region becomes the new primary (2). The URIs for the storage service endpoints for blobs, tables, queues, and files remain the same but their DNS entries are changed to point to the new primary region (3). Users can resume writing data to the storage account in the new primary region and the data is then copied asynchronously to the new secondary (4) as shown in the following image:
--
-While in the failover state, perform your disaster recovery testing.
-
-### The failback process (GRS/RA-GRS)
-
-After testing is complete, perform another failover to failback to the original primary region. During the failover process, as shown in the following image:
-
-1. The original primary region becomes read only.
-1. All data finishes replicating from the current primary region to the current secondary region.
-1. The DNS entries for the storage service endpoints are changed to point back to the region that was the primary before the initial failover was performed.
-
-The failback typically takes about an hour.
--
-After the failback is complete, the storage account is restored to its original redundancy configuration. Users can resume writing data to the storage account in the original primary region (1) while replication to the original secondary (2) continues as before the failover:
--
-## [GZRS/RA-GZRS](#tab/gzrs-ra-gzrs)
-
-Under normal circumstances, a client writes data to a storage account in the primary region via storage service endpoints (1). The data is then copied asynchronously from the primary region to the secondary region (2). The following image shows the normal state of a storage account configured as GZRS:
--
-### The failover process (GZRS/RA-GZRS)
-
-Begin disaster recovery testing by initiating a failover of your storage account to the secondary region. The following steps describe the failover process, and the subsequent image provides illustration:
-
-1. The current primary region becomes read only.
-1. All data finishes replicating from the primary region to the secondary region.
-1. Storage service endpoint DNS entries are switched. Your storage account's endpoints in the secondary region become your new primary endpoints.
-
-The failover typically takes about an hour.
--
-After the failover is complete, the original primary region becomes the new secondary (1) and the original secondary region becomes the new primary (2). The URIs for the storage service endpoints for blobs, tables, queues, and files remain the same but are pointing to the new primary region (3). Users can resume writing data to the storage account in the new primary region and the data is then copied asynchronously to the new secondary (4) as shown in the following image:
--
-While in the failover state, perform your disaster recovery testing.
-
-### The failback process (GZRS/RA-GZRS)
-
-When testing is complete, perform another failover to fail back to the original primary region. The following image illustrates the steps involved in the failover process.
-
-1. The current primary region becomes read only.
-1. All data finishes replicating from the current primary region to the current secondary region.
-1. The DNS entries for the storage service endpoints are changed to point back to the region that was the primary before the initial failover was performed.
-
-The failback typically takes about an hour.
--
-After the failback is complete, the storage account is restored to its original redundancy configuration. Users can resume writing data to the storage account in the original primary region (1) while replication to the original secondary (2) continues as before the failover:
----
-## See also
-- [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md)
-- [Initiate an account failover](storage-initiate-account-failover.md)
-- [How customer-managed failover works](storage-failover-customer-managed-unplanned.md)
storage Storage Failover Customer Managed Unplanned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-unplanned.md
 Title: How Azure Storage account customer-managed failover to recover from an outage in the primary region works
+ Title: How Azure Storage account customer-managed failover works
description: Azure Storage supports account failover for geo-redundant storage accounts to recover from a service endpoint outage. Learn what happens to your storage account and storage services during a customer-managed failover to the secondary region if the primary endpoint becomes unavailable.
Previously updated : 09/24/2023 Last updated : 09/22/2023
-# How customer-managed failover works
+# How customer-managed storage account failover works
Customer-managed failover of Azure Storage accounts enables you to fail over your entire geo-redundant storage account to the secondary region if the storage service endpoints for the primary region become unavailable. During failover, the original secondary region becomes the new primary and all storage service endpoints for blobs, tables, queues and files are redirected to the new primary region. After the storage service endpoint outage has been resolved, you can perform another failover operation to *fail back* to the original primary region.
When a storage account is configured for GRS or RA-GRS redundancy, data is repli
During the customer-managed failover process, the DNS entries for the storage service endpoints are changed such that those for the secondary region become the new primary endpoints for your storage account. After failover, the copy of your storage account in the original primary region is deleted and your storage account continues to be replicated three times locally within the original secondary region (the new primary). At that point, your storage account becomes locally redundant (LRS).
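One way to observe this redirection (a hypothetical check, not part of the failover procedure) is to resolve the blob endpoint before and after the failover; on Windows, the DnsClient module's `Resolve-DnsName` cmdlet shows the CNAME records, which generally reflect the region currently serving the primary endpoint:

```powershell
# Hypothetical account name; the endpoint URI itself doesn't change during failover,
# but the DNS records behind it do.
Resolve-DnsName "<storage-account-name>.blob.core.windows.net" -Type CNAME
```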
-The original and current redundancy configurations are stored in the properties of the storage account. This functionality allows you to eventually return to your original configuration when you fail back.
+The original and current redundancy configurations are stored in the properties of the storage account to allow you to eventually return to your original configuration when you fail back.
To regain geo-redundancy after a failover, you will need to reconfigure your account as GRS. (GZRS is not an option post-failover since the new primary will be LRS after the failover). After the account is reconfigured for geo-redundancy, Azure immediately begins copying data from the new primary region to the new secondary. If you configure your storage account for read access (RA) to the secondary region, that access will be available but it may take some time for replication from the primary to make the secondary current.
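A minimal sketch of that reconfiguration with PowerShell (placeholder names; `Standard_GRS` works equally well if you don't need read access to the secondary):

```powershell
# After failover the account is LRS in the new primary region.
# Changing the SKU back to a geo-redundant option starts replication to a new secondary.
Set-AzStorageAccount -ResourceGroupName "<resource-group-name>" `
    -Name "<storage-account-name>" `
    -SkuName Standard_RAGRS
```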
To regain geo-redundancy after a failover, you will need to reconfigure your acc
> > **To avoid a major data loss**, check the value of the [**Last Sync Time**](last-sync-time-get.md) property before failing back. Compare the last sync time to the last times that data was written to the new primary to evaluate potential data loss.
-The failback process is essentially the same as the failover process except Azure restores the replication configuration to its original state before it was failed over (the replication configuration, not the data). So, if your storage account was originally configured as GZRS, the primary region after failback becomes ZRS.
+The failback process is essentially the same as the failover process, except that Azure restores the replication configuration to its original state before it was failed over (the replication configuration, not the data). So, if your storage account was originally configured as GZRS, the primary region after failback becomes ZRS.
After failback, you can configure your storage account to be geo-redundant again. If the original primary region was configured for LRS, you can configure it to be GRS or RA-GRS. If the original primary was configured as ZRS, you can configure it to be GZRS or RA-GZRS. For additional options, see [Change how a storage account is replicated](redundancy-migration.md).
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
Title: Initiate a storage account failover
-description: Learn how to initiate an account failover if the primary endpoint for your storage account becomes unavailable. The failover updates the secondary region to become the primary region for your storage account.
+description: Learn how to initiate an account failover in the event that the primary endpoint for your storage account becomes unavailable. The failover updates the secondary region to become the primary region for your storage account.
Previously updated : 09/25/2023 Last updated : 09/15/2023 # Initiate a storage account failover
-Azure Storage supports customer-initiated account failover for geo-redundant storage accounts. With account failover, you can initiate the failover process for your storage account if the primary storage service endpoints become unavailable, or to perform disaster recovery testing. The failover updates the DNS entries for the storage service endpoints such that the endpoints for the secondary region become the new primary endpoints for your storage account. Once the failover is complete, clients can begin writing to the new primary endpoints.
+If the primary endpoint for your geo-redundant storage account becomes unavailable for any reason, you can initiate an account failover. An account failover updates the secondary endpoint to become the primary endpoint for your storage account. Once the failover is complete, clients can begin writing to the new primary region. Forced failover enables you to maintain high availability for your applications.
-This article shows how to initiate an account failover for your storage account using the Azure portal, PowerShell, or the Azure CLI.
+This article shows how to initiate an account failover for your storage account using the Azure portal, PowerShell, or Azure CLI. To learn more about account failover, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
> [!WARNING]
> An account failover typically results in some data loss. To understand the implications of an account failover and to prepare for data loss, review [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
-To learn more about account failover, see [Azure storage disaster recovery planning and failover](storage-disaster-recovery-guidance.md).
- [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)] ## Prerequisites
-Before failing over your storage account, review these important articles covered in the [Plan for storage account failover](storage-disaster-recovery-guidance.md#plan-for-storage-account-failover).
--- **Potential data loss**: When you fail over your storage account in response to an unexpected outage in the primary region, some data loss is expected.
+Before you can perform an account failover on your storage account, make sure that:
-> [!WARNING]
-> It is very important to understand the expectations for loss of data with certain types of failover, and to plan for it. For details on the implications of an account failover and to how to prepare for data loss, see [Anticipate data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
-- **Geo-redundancy**: Before you can perform an account failover on your storage account, make sure it's configured for geo-redundancy and that the initial synchronization from the primary to the secondary region is complete. For more information about Azure storage redundancy options, see [Azure Storage redundancy](storage-redundancy.md). If your account isn't configured for geo-redundancy, you can change it. For more information, see [Change how a storage account is replicated](redundancy-migration.md).
-- **Understand the different types of account failover**: There are three types of storage account failover. To learn the use cases for each and how they function differently, see [Plan for storage account failover](storage-disaster-recovery-guidance.md#plan-for-storage-account-failover). This article focuses on how to initiate a *customer-managed failover* to recover from the service endpoints being unavailable in the primary region, or a *customer-managed* ***planned*** *failover* (preview) used primarily to perform disaster recovery testing.
-- **Plan for unsupported features and services**: Review [Unsupported features and services](storage-disaster-recovery-guidance.md#unsupported-features-and-services) and take the appropriate action before initiating a failover.
-- **Supported storage account types**: Ensure the type of your storage account supports customer-initiated failover. See [Supported storage account types](storage-disaster-recovery-guidance.md#supported-storage-account-types).
-- **Set your expectations for timing and cost**: The time it takes to fail over after you initiate it can vary, but it typically takes less than one hour. A customer-managed failover associated with an outage in the primary region loses its geo-redundancy configuration after a failover (and failback). Reconfiguring GRS typically incurs extra time and cost. For more information, see [The time and cost of failing over](storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
+> [!div class="checklist"]
+> - Your storage account is configured for geo-replication (GRS, GZRS, RA-GRS, or RA-GZRS). For more information about Azure Storage redundancy, see [Azure Storage redundancy](storage-redundancy.md). A quick way to verify this is shown in the sketch after this checklist.
+> - The type of your storage account supports customer-initiated failover. See [Supported storage account types](storage-disaster-recovery-guidance.md#supported-storage-account-types).
+> - Your storage account doesn't have any features or services enabled that are not supported for account failover. See [Unsupported features and services](storage-disaster-recovery-guidance.md#unsupported-features-and-services) for a detailed list.
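As a quick, hedged sketch (hypothetical names), you can confirm the first prerequisite by inspecting the account's SKU name with PowerShell:

```powershell
# The SKU name encodes the redundancy configuration (for example Standard_RAGRS or Standard_GZRS).
$sku = (Get-AzStorageAccount -ResourceGroupName "<resource-group-name>" -Name "<storage-account-name>").Sku.Name

if ($sku -notmatch "GRS|GZRS") {
    Write-Output "Account is $sku; convert it to a geo-redundant configuration before initiating a failover."
}
```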
## Initiate the failover
-You can initiate either type of customer-managed failover using the Azure portal, PowerShell, or the Azure CLI.
+You can initiate an account failover from the Azure portal, PowerShell, or the Azure CLI.
[!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
You can initiate either type of customer-managed failover using the Azure portal
To initiate an account failover from the Azure portal, follow these steps: 1. Navigate to your storage account.
-1. Under **Data management**, select **Redundancy**. The following image shows the geo-redundancy configuration and failover status of a storage account.
-
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-redundancy.png" alt-text="Screenshot showing redundancy and failover status." lightbox="media/storage-initiate-account-failover/portal-failover-redundancy.png":::
-
- If your storage account is configured with a hierarchical namespace enabled, the following message is displayed:
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-hns-not-supported.png" alt-text="Screenshot showing that failover isn't supported for hierarchical namespace." lightbox="media/storage-initiate-account-failover/portal-failover-hns-not-supported.png":::
-
-1. Verify that your storage account is configured for geo-redundant storage (GRS, RA-GRS, GZRS or RA-GZRS). If it's not, then select the desired redundancy configuration under **Redundancy** and select **Save** to change it. After changing the geo-redundancy configuration, it will take several minutes for your data to synchronize from the primary to the secondary region. You cannot initiate a failover until the synchronization is complete. You might see the following message on the **Redundancy** page until all of your data is replicated:
-
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-repl-in-progress.png" alt-text="Screenshot showing message indicating synchronization is still in progress." lightbox="media/storage-initiate-account-failover/portal-failover-repl-in-progress.png":::
-
-1. Select **Prepare for failover**. You will be presented with a page similar to the image that follows where you can select the type of failover to perform:
-
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare.png" lightbox="media/storage-initiate-account-failover/portal-failover-prepare.png" alt-text="Screenshot showing the prepare for failover window.":::
-
- > [!NOTE]
- > If your storage account is configured with a hierarchical namespace enabled, the `Failover` option will be grayed out.
-1. Select the type of failover to prepare for. The confirmation page varies depending on the type of failover you select.
+1. Under **Settings**, select **Geo-replication**. The following image shows the geo-replication and failover status of a storage account.
- **If you select `Failover`**:
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare.png" alt-text="Screenshot showing geo-replication and failover status":::
- You will see a warning about potential data loss and information about needing to manually reconfigure geo-redundancy after the failover:
+1. Verify that your storage account is configured for geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS). If it's not, then select **Configuration** under **Settings** to update your account to be geo-redundant.
+1. The **Last Sync Time** property indicates how far the secondary is behind from the primary. **Last Sync Time** provides an estimate of the extent of data loss that you will experience after the failover is completed. For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).
+1. Select **Prepare for failover**.
+1. Review the confirmation dialog. When you are ready, enter **Yes** to confirm and initiate the failover.
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare-failover.png" alt-text="Screenshot showing the failover option selected on the Prepare for failover window." lightbox="media/storage-initiate-account-failover/portal-failover-prepare-failover.png":::
-
- For more information about potential data loss and what happens to your account redundancy configuration during failover, see:
-
- > [Anticipate data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies)
- >
- > [Plan for storage account failover](storage-disaster-recovery-guidance.md#plan-for-storage-account-failover)
- The **Last Sync Time** property indicates the last time the secondary was synchronized with the primary. The difference between **Last Sync Time** and the current time provides an estimate of the extent of data loss that you will experience after the failover is completed. For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).
-
- **If you select `Planned failover`** (preview):
-
- You will see the **Last Sync Time** value, but notice in the image that follows that the failover will not occur until after all of the remaining data is synchronized to the secondary region.
-
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-prepare-failover-planned.png" alt-text="Screenshot showing the planned failover option selected on the prepare for failover window." lightbox="media/storage-initiate-account-failover/portal-failover-prepare-failover-planned.png":::
-
- As a result, data loss is not expected during the failover. Since the redundancy configuration within each region does not change during a planned failover or failback, there is no need to manually reconfigure geo-redundancy after a failover.
-
-1. Review the **Prepare for failover** page. When you are ready, type **yes** and select **Failover** to confirm and initiate the failover process.
-
- You will see a message indicating the failover is in progress:
-
- :::image type="content" source="media/storage-initiate-account-failover/portal-failover-in-progress.png" alt-text="Screenshot showing the failover in-progress message." lightbox="media/storage-initiate-account-failover/portal-failover-in-progress-redundancy.png":::
+ :::image type="content" source="media/storage-initiate-account-failover/portal-failover-confirm.png" alt-text="Screenshot showing confirmation dialog for an account failover":::
## [PowerShell](#tab/azure-powershell)
-To get the current redundancy and failover information for your storage account, and then initiate a failover, follow these steps:
-
-> [!div class="checklist"]
-> - [Install the Azure Storage preview module for PowerShell](#install-the-azure-storage-preview-module-for-powershell)
-> - [Get the current status of the storage account with PowerShell](#get-the-current-status-of-the-storage-account-with-powershell)
-> - [Initiate a failover of the storage account with PowerShell](#initiate-a-failover-of-the-storage-account-with-powershell)
-### Install the Azure Storage preview module for PowerShell
-
-To use PowerShell to initiate and monitor a **planned** customer-managed account failover (preview) in addition to a customer-initiated failover, install the [Az.Storage 5.2.2-preview module](https://www.powershellgallery.com/packages/Az.Storage/5.2.2-preview). Earlier versions of the module support customer-managed failover (unplanned), but not planned failover. The preview version supports the new `FailoverType` parameter which allows you to specify either `planned` or `unplanned`.
-
-#### Installing and running the preview module on PowerShell 5.1
-
-Microsoft recommends you install and use the latest version of PowerShell, but if you are installing the preview module on Windows PowerShell 5.1, and you get the following error, you will need to [update PowerShellGet to the latest version](/powershell/gallery/powershellget/update-powershell-51) before installing the Az.Storage 5.2.2 preview module:
-
-```Sample
-PS C:\Windows\system32> Install-Module -Name Az.Storage -RequiredVersion 5.2.2-preview -AllowPrerelease
-Install-Module : Cannot process argument transformation on parameter 'RequiredVersion'. Cannot convert value "5.2.2-preview" to type "System.Version". Error: "Input string was not in a correct format."
-At line:1 char:50
-+ ... nstall-Module -Name Az.Storage -RequiredVersion 5.2.2-preview -AllowP ...
-+ ~~~~~~~~~~~~~
- + CategoryInfo : InvalidData: (:) [Install-Module], ParameterBindingArgumentTransformationException
- + FullyQualifiedErrorId : ParameterArgumentTransformationError,Install-Module
-```
-
-To install the latest version of PowerShellGet and the Az.Storage preview module, perform the following steps:
-
-1. Run the following command to update PowerShellGet:
-
- ```powershell
- Install-Module PowerShellGet -Repository PSGallery -Force
- ```
-
-1. Close and reopen PowerShell
-1. Install the Az.Storage preview module using the following command:
-
- ```powershell
- Install-Module -Name Az.Storage -RequiredVersion 5.2.2-preview -AllowPrerelease
- ```
-
-1. Determine whether you already have a higher version of the Az.Storage module installed by running the command:
-
- ```powershell
- Get-InstalledModule Az.Storage -AllVersions
- ```
-
-If a higher version such as 5.3.0 or 5.4.0 is also installed, you will need to explicitly import the preview version before using it.
-
-1. Close and reopen PowerShell again
-1. Before running any other commands, import the preview version of the module using the following command:
-
- ```powershell
- Import-Module Az.Storage -RequiredVersion 5.2.2
- ```
-
-1. Verify that the `FailoverType` parameter is supported by running the following command:
-
- ```powershell
- Get-Help Invoke-AzStorageAccountFailover -Parameter FailoverType
- ```
-
-For more information about installing Azure PowerShell, see [Install the Azure Az PowerShell module](/powershell/azure/install-az-ps).
-
-### Get the current status of the storage account with PowerShell
-
-Check the status of the storage account before failing over. Examine properties that can affect failing over such as:
-- The primary and secondary regions and their status
-- The storage kind and access tier
-- The current failover status
-- The last sync time
-- The storage account SKU conversion status
-
-```powershell
- # Log in first with Connect-AzAccount
- Connect-AzAccount
- # Specify the resource group name and storage account name
- $rgName = "<your resource group name>"
- $saName = "<your storage account name>"
- # Get the storage account information
- Get-AzStorageAccount `
- -Name $saName `
- -ResourceGroupName $rgName `
- -IncludeGeoReplicationStats
-```
+To use PowerShell to initiate an account failover, install the [Az.Storage](https://www.powershellgallery.com/packages/Az.Storage) module, version 2.0.0 or later. For more information about installing Azure PowerShell, see [Install the Azure Az PowerShell module](/powershell/azure/install-azure-powershell).
-To refine the list of properties in the display to the most relevant set, consider replacing the Get-AzStorageAccount command in the example above with the following command:
+To initiate an account failover from PowerShell, call the following command:
```powershell
-Get-AzStorageAccount `
- -Name $saName `
- -ResourceGroupName $rgName `
- -IncludeGeoReplicationStats `
- | Select-Object Location,PrimaryLocation,SecondaryLocation,StatusOfPrimary,StatusOfSecondary,@{E={$_.Kind};L="AccountType"},AccessTier,LastGeoFailoverTime,FailoverInProgress,StorageAccountSkuConversionStatus,GeoReplicationStats `
- -ExpandProperty Sku `
- | Select-Object Location,PrimaryLocation,SecondaryLocation,StatusOfPrimary,StatusOfSecondary,AccountType,AccessTier,@{E={$_.Name};L="RedundancyType"},LastGeoFailoverTime,FailoverInProgress,StorageAccountSkuConversionStatus `
- -ExpandProperty GeoReplicationStats `
- | fl
-```
-
-### Initiate a failover of the storage account with PowerShell
-
-```powershell
-Invoke-AzStorageAccountFailover `
- -ResourceGroupName $rgName `
- -Name $saName `
- -FailoverType <planned|unplanned> # Specify "planned" or "unplanned" failover (without the quotes)
-
+Invoke-AzStorageAccountFailover -ResourceGroupName <resource-group-name> -Name <account-name>
``` ## [Azure CLI](#tab/azure-cli)
-To get the current redundancy and failover information for your storage account, and then initiate a failover, follow these steps:
-
-> [!div class="checklist"]
-> - [Install the Azure Storage preview extension for Azure CLI](#install-the-azure-storage-preview-extension-for-azure-cli)
-> - [Get the current status of the storage account with Azure CLI](#get-the-current-status-of-the-storage-account-with-azure-cli)
-> - [Initiate a failover of the storage account with Azure CLI](#initiate-a-failover-of-the-storage-account-with-azure-cli)
-
-### Install the Azure Storage preview extension for Azure CLI
-
-1. Install the latest version of the Azure CLI. For more information, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-1. Install the Azure CLI storage preview extension using the following command:
-
- ```azurecli
- az extension add -n storage-preview
- ```
-
- > [!IMPORTANT]
- > The Azure CLI storage preview extension adds support for features or arguments that are currently in PREVIEW.
- >
- > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-### Get the current status of the storage account with Azure CLI
-
-Run the following command to get the current geo-replication information for the storage account. Replace the placeholder values in angle brackets (**\<\>**) with your own values:
-
-```azurecli
-az storage account show \
- --resource-group <resource-group-name> \
- --name <storage-account-name> \
- --expand geoReplicationStats
-```
-
-For more information about the `storage account show` command, run:
-
-```azurecli
-az storage account show --help
-```
-
-### Initiate a failover of the storage account with Azure CLI
-
-Run the following command to initiate a failover of the storage account. Replace the placeholder values in angle brackets (**\<\>**) with your own values:
+To use Azure CLI to initiate an account failover, call the following commands:
-```azurecli
-az storage account failover \
- --resource-group <resource-group-name> \
- --name <storage-account-name> \
- --failover-type <planned|unplanned>
-```
-
-For more information about the `storage account failover` command, run:
-
-```azurecli
-az storage account failover --help
+```azurecli-interactive
+az storage account show --name accountName --expand geoReplicationStats
+az storage account failover --name accountName
```
-## Monitor the failover
-
-You can monitor the status of the failover using the Azure portal, PowerShell, or the Azure CLI.
-
-## [Portal](#tab/azure-portal)
-
-The status of the failover is shown in the Azure portal in **Notifications**, in the activity log, and on the **Redundancy** page of the storage account.
-
-### Notifications
-
-To check the status of the failover, select the notification icon (bell) on the far right of the Azure portal global page header:
--
-### Activity log
+## Important implications of account failover
-To view the detailed status of a failover, select the **More events in the activity log** link in the notification, or go to the **Activity log** page of the storage account:
+When you initiate an account failover for your storage account, the DNS records for the secondary endpoint are updated so that the secondary endpoint becomes the primary endpoint. Make sure that you understand the potential impact to your storage account before you initiate a failover.
+To estimate the extent of likely data loss before you initiate a failover, check the **Last Sync Time** property. For more information about checking the **Last Sync Time** property, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).
-### Redundancy page
+The time it takes to fail over after initiation can vary, but it typically takes less than one hour.
-Messages on the redundancy page of the storage account will show if the failover is still in progress:
+After the failover, your storage account type is automatically converted to locally redundant storage (LRS) in the new primary region. You can re-enable geo-redundant storage (GRS) or read-access geo-redundant storage (RA-GRS) for the account. Note that converting from LRS to GRS or RA-GRS incurs an additional cost. The cost is due to the network egress charges to re-replicate the data to the new secondary region. For additional information, see [Bandwidth Pricing Details](https://azure.microsoft.com/pricing/details/bandwidth/).
+After you re-enable GRS for your storage account, Microsoft begins replicating the data in your account to the new secondary region. Replication time depends on many factors, which include:
-If the failover is nearing completion, the redundancy page might show the original secondary region as the new primary, but still display a message indicating the failover is in progress:
--
-When the failover is complete, the redundancy page will show the last failover time and the location of the new primary region. If a planned failover was done, the new secondary region will also be displayed. The following image shows the new storage account status after a failover resulting from an outage of the endpoints for the original primary (unplanned):
--
-## [PowerShell](#tab/azure-powershell)
-
-You can use Azure PowerShell to get the current redundancy and failover information for your storage account. To check the status of the storage account failover, see [Get the current status of the storage account with PowerShell](#get-the-current-status-of-the-storage-account-with-powershell).
-
-## [Azure CLI](#tab/azure-cli)
-
-You can use the Azure CLI to get the current redundancy and failover information for your storage account. To check the status of the storage account failover, see [Get the current status of the storage account with Azure CLI](#get-the-current-status-of-the-storage-account-with-azure-cli).
--
+- The number and size of the objects in the storage account. Many small objects can take longer than fewer and larger objects.
+- The available resources for background replication, such as CPU, memory, disk, and WAN capacity. Live traffic takes priority over geo replication.
+- If using Blob storage, the number of snapshots per blob.
+- If using Table storage, the [data partitioning strategy](/rest/api/storageservices/designing-a-scalable-partitioning-strategy-for-azure-table-storage). The replication process can't scale beyond the number of partition keys that you use.
-## See also
+## Next steps
- [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md) - [Check the Last Sync Time property for a storage account](last-sync-time-get.md)
storage Storage Metrics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-metrics-migration.md
description: Learn how to transition from Storage Analytics metrics (classic met
Previously updated : 01/09/2024 Last updated : 01/03/2024
# Transition to metrics in Azure Monitor
-On **January 9, 2024** Storage Analytics metrics, also referred to as *classic metrics* retired. If you used classic metrics, this article helps you transition to metrics in Azure Monitor.
+On **January 9, 2024**, Storage Analytics metrics, also referred to as *classic metrics*, will be retired. If you use classic metrics, make sure to transition to metrics in Azure Monitor prior to that date. This article helps you make the transition.
## Steps to complete the transition
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Previously updated : 01/05/2024 Last updated : 09/06/2023
# Azure Storage redundancy
-Azure Storage always stores multiple copies of your data to protect it from planned and unplanned events. Examples of these events include transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures.
+Azure Storage always stores multiple copies of your data so that it's protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Redundancy ensures that your storage account meets its availability and durability targets even in the face of failures.
When deciding which redundancy option is best for your scenario, consider the tradeoffs between lower costs and higher availability. The factors that help determine which redundancy option you should choose include:
When deciding which redundancy option is best for your scenario, consider the tr
The services that comprise Azure Storage are managed through a common Azure resource called a *storage account*. The storage account represents a shared pool of storage that can be used to deploy storage resources such as blob containers (Blob Storage), file shares (Azure Files), tables (Table Storage), or queues (Queue Storage). For more information about Azure Storage accounts, see [Storage account overview](storage-account-overview.md).
-The redundancy setting for a storage account is shared for all storage services exposed by that account. All storage resources deployed in the same storage account have the same redundancy setting. Consider isolating different types of resources in separate storage accounts if they have different redundancy requirements.
+The redundancy setting for a storage account is shared for all storage services exposed by that account. All storage resources deployed in the same storage account have the same redundancy setting. You may want to isolate different types of resources in separate storage accounts if they have different redundancy requirements.
## Redundancy in the primary region
Data in an Azure Storage account is always replicated three times in the primary
Locally redundant storage (LRS) replicates your storage account three times within a single data center in the primary region. LRS provides at least 99.999999999% (11 nines) durability of objects over a given year.
-LRS is the lowest-cost redundancy option and offers the least durability compared to other options. LRS protects your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS might be lost or unrecoverable. To mitigate this risk, Microsoft recommends using [zone-redundant storage](#zone-redundant-storage) (ZRS), [geo-redundant storage](#geo-redundant-storage) (GRS), or [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS).
+LRS is the lowest-cost redundancy option and offers the least durability compared to other options. LRS protects your data against server rack and drive failures. However, if a disaster such as fire or flooding occurs within the data center, all replicas of a storage account using LRS may be lost or unrecoverable. To mitigate this risk, Microsoft recommends using [zone-redundant storage](#zone-redundant-storage) (ZRS), [geo-redundant storage](#geo-redundant-storage) (GRS), or [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS).
A write request to a storage account that is using LRS happens synchronously. The write operation returns successfully only after the data is written to all three replicas.
The following diagram shows how your data is replicated within a single data cen
LRS is a good choice for the following scenarios:

-- If your application stores data that can be easily reconstructed if data loss occurs, consider choosing LRS.
-- If your application is restricted to replicating data only within a country or region due to data governance requirements, consider choosing LRS. In some cases, the paired regions across which the data is geo-replicated might be within another country or region. For more information on paired regions, see [Azure regions](https://azure.microsoft.com/regions/).
-- If your scenario is using Azure unmanaged disks, consider using LRS. While it's possible to create a storage account for Azure unmanaged disks that uses GRS, it isn't recommended due to potential issues with consistency over asynchronous geo-replication.
+- If your application stores data that can be easily reconstructed if data loss occurs, you may opt for LRS.
+- If your application is restricted to replicating data only within a country or region due to data governance requirements, you may opt for LRS. In some cases, the paired regions across which the data is geo-replicated may be in another country or region. For more information on paired regions, see [Azure regions](https://azure.microsoft.com/regions/).
+- If your scenario is using Azure unmanaged disks, you may opt for LRS. While it's possible to create a storage account for Azure unmanaged disks that uses GRS, it isn't recommended due to potential issues with consistency over asynchronous geo-replication.
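As a minimal sketch (hypothetical resource group, account name, and region; assumes the Az.Storage module), an LRS general-purpose v2 account can be created with Azure PowerShell like this:

```azurepowershell
# Hypothetical names; replace with your own resource group, account name, and region.
$parameters = @{
    ResourceGroupName = 'myResourceGroup'
    Name              = 'mystorageaccount'
    Location          = 'eastus'
    SkuName           = 'Standard_LRS'   # locally redundant storage
    Kind              = 'StorageV2'
}
New-AzStorageAccount @parameters
```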
### Zone-redundant storage Zone-redundant storage (ZRS) replicates your storage account synchronously across three Azure availability zones in the primary region. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS offers durability for storage resources of at least 99.9999999999% (12 9's) over a given year.
-With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS repointing. These updates could affect your application if you access data before the updates are complete. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
+With ZRS, your data is still accessible for both read and write operations even if a zone becomes unavailable. If a zone becomes unavailable, Azure undertakes networking updates, such as DNS repointing. These updates may affect your application if you access data before the updates have completed. When designing applications for ZRS, follow practices for transient fault handling, including implementing retry policies with exponential back-off.
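As an illustration of transient fault handling (not taken from the article), a simple retry loop with exponential back-off might look like the following in PowerShell; `Get-MyBlobContent` is a placeholder for whatever read operation your application performs:

```PowerShell
# Illustrative only; Get-MyBlobContent stands in for your application's storage read.
$maxAttempts  = 5
$delaySeconds = 2

for ($attempt = 1; $attempt -le $maxAttempts; $attempt++) {
    try {
        Get-MyBlobContent
        break                      # success, stop retrying
    }
    catch {
        if ($attempt -eq $maxAttempts) { throw }
        Start-Sleep -Seconds $delaySeconds
        $delaySeconds *= 2         # back off exponentially
    }
}
```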
A write request to a storage account that is using ZRS happens synchronously. The write operation returns successfully only after the data is written to all replicas across the three availability zones. If an availability zone is temporarily unavailable, the operation returns successfully after the data is written to all available zones.
The following diagram shows how your data is replicated across availability zone
:::image type="content" source="media/storage-redundancy/zone-redundant-storage.png" alt-text="Diagram showing how data is replicated in the primary region with ZRS":::
-ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself might not fully protect your data against a regional disaster where multiple zones are permanently affected. [Geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS) uses ZRS in the primary region and also geo-replicates your data to a secondary region. GZRS is available in many regions, and is recommended for protection against regional disasters.
+ZRS provides excellent performance, low latency, and resiliency for your data if it becomes temporarily unavailable. However, ZRS by itself may not protect your data against a regional disaster where multiple zones are permanently affected. For protection against regional disasters, Microsoft recommends using [geo-zone-redundant storage](#geo-zone-redundant-storage) (GZRS), which uses ZRS in the primary region and also geo-replicates your data to a secondary region.
The archive tier for Blob Storage isn't currently supported for ZRS, GZRS, or RA-GZRS accounts. Unmanaged disks don't support ZRS or GZRS.
For more information about which regions support ZRS, see [Azure regions with av
ZRS is supported for all Azure Storage services through standard general-purpose v2 storage accounts, including: -- Azure Blob storage (hot and cool block blobs and append blobs, nondisk page blobs)
+- Azure Blob storage (hot and cool block blobs and append blobs, non-disk page blobs)
- Azure Files (all standard tiers: transaction optimized, hot, and cool) - Azure Table storage - Azure Queue storage
For a list of regions that support zone-redundant storage (ZRS) for managed disk
## Redundancy in a secondary region
-Redundancy options can help provide high durability for your applications. In many regions, you can copy the data within your storage account to a secondary region located hundreds of miles away from the primary region. Copying your storage account to a secondary region ensures that your data remains durable during a complete regional outage or a disaster in which the primary region isn't recoverable.
+For applications requiring high durability, you can choose to additionally copy the data in your storage account to a secondary region that is hundreds of miles away from the primary region. If your storage account is copied to a secondary region, then your data is durable even in the case of a complete regional outage or a disaster in which the primary region isn't recoverable.
When you create a storage account, you select the primary region for the account. The paired secondary region is determined based on the primary region, and can't be changed. For more information about regions supported by Azure, see [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/).
Azure Storage offers two options for copying your data to a secondary region:
With GRS or GZRS, the data in the secondary region isn't available for read or write access unless there's a failover to the primary region. For read access to the secondary region, configure your storage account to use read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). For more information, see [Read access to data in the secondary region](#read-access-to-data-in-the-secondary-region).
-If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the failover completes, the secondary region becomes the primary region, and you can again read and write data. For more information on disaster recovery and to learn how to fail over to the secondary region, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
+If the primary region becomes unavailable, you can choose to fail over to the secondary region. After the failover has completed, the secondary region becomes the primary region, and you can again read and write data. For more information on disaster recovery and to learn how to fail over to the secondary region, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
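As a sketch (hypothetical names), a customer-managed failover can be initiated with Azure PowerShell:

```azurepowershell
# Hypothetical names; the failover can take some time to complete.
Invoke-AzStorageAccountFailover -ResourceGroupName 'myResourceGroup' -Name 'mystorageaccount'
```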
> [!IMPORTANT] > Because data is replicated to the secondary region asynchronously, a failure that affects the primary region may result in data loss if the primary region cannot be recovered. The interval between the most recent writes to the primary region and the last write to the secondary region is known as the recovery point objective (RPO). The RPO indicates the point in time to which data can be recovered. The Azure Storage platform typically has an RPO of less than 15 minutes, although there's currently no SLA on how long it takes to replicate data to the secondary region.
The following diagram shows how your data is replicated with GRS or RA-GRS:
### Geo-zone-redundant storage
-Geo-zone-redundant storage (GZRS) combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three [Azure availability zones](../../availability-zones/az-overview.md) in the primary region. In addition, it's also replicated to a secondary geographic region for protection from regional disasters. Microsoft recommends using GZRS for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery.
+Geo-zone-redundant storage (GZRS) combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three [Azure availability zones](../../availability-zones/az-overview.md) in the primary region and is also replicated to a secondary geographic region for protection from regional disasters. Microsoft recommends using GZRS for applications requiring maximum consistency, durability, and availability, excellent performance, and resilience for disaster recovery.
-With a GZRS storage account, you can continue to read and write data if an availability zone becomes unavailable or is unrecoverable. Additionally, your data also remains durable during a complete regional outage or a disaster in which the primary region isn't recoverable. GZRS is designed to provide at least 99.99999999999999% (16 9's) durability of objects over a given year.
+With a GZRS storage account, you can continue to read and write data if an availability zone becomes unavailable or is unrecoverable. Additionally, your data is also durable in the case of a complete regional outage or a disaster in which the primary region isn't recoverable. GZRS is designed to provide at least 99.99999999999999% (16 9's) durability of objects over a given year.
The following diagram shows how your data is replicated with GZRS or RA-GZRS: :::image type="content" source="media/storage-redundancy/geo-zone-redundant-storage.png" alt-text="Diagram showing how data is replicated with GZRS or RA-GZRS":::
-Only standard general-purpose v2 storage accounts support GZRS. All Azure Storage services support GZRS, including:
+Only standard general-purpose v2 storage accounts support GZRS. GZRS is supported by all of the Azure Storage services, including:
-- Azure Blob storage (hot and cool block blobs, nondisk page blobs)
+- Azure Blob storage (hot and cool block blobs, non-disk page blobs)
- Azure Files (all standard tiers: transaction optimized, hot, and cool) - Azure Table storage - Azure Queue storage
For a list of regions that support geo-zone-redundant storage (GZRS), see [Azure
## Read access to data in the secondary region
-Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. With an account configured for GRS or GZRS, data in the secondary region isn't directly accessible to users or applications when an outage occurs in the primary region, unless a failover occurs. The failover process updates the DNS entry provided by Azure Storage so that the storage service endpoints in the secondary region become the new primary endpoints for your storage account. During the failover process, your data is inaccessible. After the failover is complete, you can read and write data to the new primary region. For more information, see [How customer-managed storage account failover to recover from an outage works](storage-failover-customer-managed-unplanned.md).
+Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages. With an account configured for GRS or GZRS, data in the secondary region is not directly accessible to users or applications, unless a failover occurs. The failover process updates the DNS entry provided by Azure Storage so that the secondary endpoint becomes the new primary endpoint for your storage account. During the failover process, your data is inaccessible. After the failover is complete, you can read and write data to the new primary region. For more information, see [How customer-managed storage account failover works](storage-failover-customer-managed-unplanned.md).
If your applications require high availability, then you can configure your storage account for read access to the secondary region. When you enable read access to the secondary region, then your data is always available to be read from the secondary, including in a situation where the primary region becomes unavailable. Read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS) configurations permit read access to the secondary region.
If your applications require high availability, then you can configure your stor
If your storage account is configured for read access to the secondary region, then you can design your applications to seamlessly shift to reading data from the secondary region if the primary region becomes unavailable for any reason.
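As a sketch (hypothetical names), an existing account can be switched to RA-GRS with Azure PowerShell; use `Standard_RAGZRS` instead for read-access geo-zone-redundant storage:

```azurepowershell
# Hypothetical names; changes the account's redundancy to read-access geo-redundant storage.
Set-AzStorageAccount -ResourceGroupName 'myResourceGroup' -Name 'mystorageaccount' -SkuName 'Standard_RAGRS'
```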
-The secondary region is available for read access after you enable RA-GRS or RA-GZRS. This availability allows you to test your application in advance to ensure that it reads properly from the secondary region during an outage. For more information about how to design your applications to take advantage of geo-redundancy, see [Use geo-redundancy to design highly available applications](geo-redundant-design.md).
+The secondary region is available for read access after you enable RA-GRS or RA-GZRS, so that you can test your application in advance to make sure that it will properly read from the secondary in the event of an outage. For more information about how to design your applications to take advantage of geo-redundancy, see [Use geo-redundancy to design highly available applications](geo-redundant-design.md).
-When read access to the secondary is enabled, your application can be read from both the secondary and primary endpoints. The secondary endpoint appends the suffix *-secondary* to the account name. For example, if your primary endpoint for Blob storage is `myaccount.blob.core.windows.net`, then the secondary endpoint is `myaccount-secondary.blob.core.windows.net`. The account access keys for your storage account are the same for both the primary and secondary endpoints.
+When read access to the secondary is enabled, your application can read from the secondary endpoint as well as from the primary endpoint. The secondary endpoint appends the suffix *-secondary* to the account name. For example, if your primary endpoint for Blob storage is `myaccount.blob.core.windows.net`, then the secondary endpoint is `myaccount-secondary.blob.core.windows.net`. The account access keys for your storage account are the same for both the primary and secondary endpoints.
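For example (hypothetical names), the secondary Blob endpoint can be read from the account properties with Azure PowerShell; it's populated once the account is configured for RA-GRS or RA-GZRS:

```azurepowershell
# Hypothetical names; both endpoints accept the same account access keys.
$account = Get-AzStorageAccount -ResourceGroupName 'myResourceGroup' -Name 'myaccount'
$account.SecondaryEndpoints.Blob    # for example, https://myaccount-secondary.blob.core.windows.net/
```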
#### Plan for data loss
-Because data is replicated asynchronously from the primary to the secondary region, the secondary region is typically behind the primary region in terms of write operations. If a disaster strikes the primary region, it's likely that some data would be lost and that files within a directory or container wouldn't be consistent. For more information about how to plan for potential data loss, see [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
+Because data is replicated asynchronously from the primary to the secondary region, the secondary region is typically behind the primary region in terms of write operations. If a disaster were to strike the primary region, it's likely that some data would be lost and that files within a directory or container would not be consistent. For more information about how to plan for potential data loss, see [Data loss and inconsistencies](storage-disaster-recovery-guidance.md#anticipate-data-loss-and-inconsistencies).
## Summary of redundancy options
The following table describes key parameters for each redundancy option:
| Parameter | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS | |:-|:-|:-|:-|:-|
-| Percent durability of objects over a given year | at least 99.999999999%<br/>(11 9's) | at least 99.9999999999%<br/>(12 9's) | at least 99.99999999999999%<br/>(16 9's) | at least 99.99999999999999%<br/>(16 9's) |
+| Percent durability of objects over a given year | at least 99.999999999% (11 9's) | at least 99.9999999999% (12 9's) | at least 99.99999999999999% (16 9's) | at least 99.99999999999999% (16 9's) |
| Availability for read requests | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool or archive access tiers) for GRS<br/><br/>At least 99.99% (99.9% for cool or archive access tiers) for RA-GRS | At least 99.9% (99% for cool access tier) for GZRS<br/><br/>At least 99.99% (99.9% for cool access tier) for RA-GZRS | | Availability for write requests | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) | At least 99.9% (99% for cool or archive access tiers) | At least 99.9% (99% for cool access tier) | | Number of copies of data maintained on separate nodes | Three copies within a single region | Three copies across separate availability zones within a single region | Six copies total, including three in the primary region and three in the secondary region | Six copies total, including three across separate availability zones in the primary region and three locally redundant copies in the secondary region |
The following table indicates whether your data is durable and available in a gi
| Outage scenario | LRS | ZRS | GRS/RA-GRS | GZRS/RA-GZRS | |:-|:-|:-|:-|:-| | A node within a data center becomes unavailable | Yes | Yes | Yes | Yes |
-| An entire data center (zonal or nonzonal) becomes unavailable | No | Yes | Yes<sup>1</sup> | Yes |
+| An entire data center (zonal or non-zonal) becomes unavailable | No | Yes | Yes<sup>1</sup> | Yes |
| A region-wide outage occurs in the primary region | No | No | Yes<sup>1</sup> | Yes<sup>1</sup> | | Read access to the secondary region is available if the primary region becomes unavailable | No | No | Yes (with RA-GRS) | Yes (with RA-GZRS) |
The following table indicates whether your data is durable and available in a gi
### Supported Azure Storage services
-The following table shows the redundancy options supported by each Azure Storage service.
+The following table shows which redundancy options are supported by each Azure Storage service.
| Service | LRS | ZRS | GRS | RA-GRS | GZRS | RA-GZRS | ||--|--|--|--|||
Unmanaged disks don't support ZRS or GZRS.
For pricing information for each redundancy option, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). > [!NOTE]
-> Block blob storage accounts support locally redundant storage (LRS) and zone redundant storage (ZRS) in certain regions.
+> Block blob storage accounts support locally redundant storage (LRS) and zone-redundant storage (ZRS) in certain regions.
## Data integrity
storage Storage Use Azcopy Authorize Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-authorize-azure-active-directory.md
If you sign in by using Azure PowerShell, then Azure PowerShell obtains an OAuth
To enable AzCopy to use that token, type the following command, and then press the ENTER key. ```PowerShell
-set AZCOPY_AUTO_LOGIN_TYPE=PSCRED
+$Env:AZCOPY_AUTO_LOGIN_TYPE="PSCRED"
``` For more information about how to sign in with the Azure PowerShell, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
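After the variable is set, AzCopy commands in the same session reuse the Azure PowerShell token; for example (hypothetical account, container, and local path):

```PowerShell
# Hypothetical values; AzCopy authenticates with the Azure PowerShell token because AZCOPY_AUTO_LOGIN_TYPE is set to PSCRED.
azcopy copy 'C:\local\data' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --recursive
```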
storage Files Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-disaster-recovery.md
Write access is restored for geo-redundant accounts once the DNS entry has been
> [!IMPORTANT] > After the failover is complete, the storage account is configured to be locally redundant in the new primary endpoint/region. To resume replication to the new secondary, configure the account for geo-redundancy again. >
-> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [The time and cost of failing over](../common/storage-disaster-recovery-guidance.md#the-time-and-cost-of-failing-over).
+> Keep in mind that converting a locally redundant storage account to use geo-redundancy incurs both cost and time. For more information, see [Important implications of account failover](../common/storage-initiate-account-failover.md#important-implications-of-account-failover).
### Anticipate data loss
storage Storage Files Identity Ad Ds Configure Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md
description: Learn how to configure Windows ACLs for directory and file level pe
Previously updated : 11/28/2023 Last updated : 01/11/2024 recommendations: false
Both share-level and file/directory-level permissions are enforced when a user a
> To configure Windows ACLs, you'll need a client machine running Windows that has unimpeded network connectivity to the domain controller. If you're authenticating with Azure Files using Active Directory Domain Services (AD DS) or Microsoft Entra Kerberos for hybrid identities, this means you'll need unimpeded network connectivity to the on-premises AD. If you're using Microsoft Entra Domain Services, then the client machine must have unimpeded network connectivity to the domain controllers for the domain that's managed by Microsoft Entra Domain Services, which are located in Azure. ## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
You can configure the Windows ACLs using either [icacls](#configure-windows-acls
If you have directories or files in on-premises file servers with Windows ACLs configured against the AD DS identities, you can copy them over to Azure Files persisting the ACLs with traditional file copy tools like Robocopy or [Azure AzCopy v 10.4+](https://github.com/Azure/azure-storage-azcopy/releases). If your directories and files are tiered to Azure Files through Azure File Sync, your ACLs are carried over and persisted in their native format.
+Remember to sync your identities in order for the permissions you set to take effect. You can set ACLs for identities that aren't synced, but those ACLs won't be enforced because unsynced identities aren't present in the Kerberos ticket used for authentication and authorization.
+ ### Configure Windows ACLs with icacls To grant full permissions to all directories and files under the file share, including the root directory, run the following Windows command from a machine that has line-of-sight to the AD domain controller. Remember to replace the placeholder values in the example with your own values.
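As an illustration only, not necessarily the article's exact command, a recursive full-control grant with icacls against a mounted share could look like this (placeholder drive letter and AD DS account):

```PowerShell
# Illustrative only; replace Z: with your mounted share path and CONTOSO\user with your AD DS identity.
icacls Z:\ /grant "CONTOSO\user:(OI)(CI)F" /T
```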
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
description: Learn about the capacity, IOPS, and throughput rates for Azure file
Previously updated : 11/2/2022 Last updated : 01/11/2024 # Azure Files scalability and performance targets+ [Azure Files](storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the SMB and NFS file system protocols. This article discusses the scalability and performance targets for Azure Files and Azure File Sync.
-The targets listed here might be affected by other variables in your deployment. For example, the performance of I/O for a file might be impacted by your SMB client's behavior and by your available network bandwidth. You should test your usage pattern to determine whether the scalability and performance of Azure Files meet your requirements. You should also expect these limits will increase over time.
+The targets listed here might be affected by other variables in your deployment. For example, the performance of I/O for a file might be impacted by your SMB client's behavior and by your available network bandwidth. You should test your usage pattern to determine whether the scalability and performance of Azure Files meet your requirements.
## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
The targets listed here might be affected by other variables in your deployment.
| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ## Azure Files scale targets+ Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares. There are therefore three categories to consider: storage accounts, Azure file shares, and individual files. ### Storage account scale targets+ Storage account scale targets apply at the storage account level. There are two main types of storage accounts for Azure Files: - **General purpose version 2 (GPv2) storage accounts**: GPv2 storage accounts allow you to deploy Azure file shares on standard/hard disk-based (HDD-based) hardware. In addition to storing Azure file shares, GPv2 storage accounts can store other storage resources such as blob containers, queues, or tables. File shares can be deployed into the transaction optimized (default), hot, or cool tiers.
Storage account scale targets apply at the storage account level. There are two
<sup>2</sup> General-purpose version 2 storage accounts support higher capacity limits and higher limits for ingress by request. To request an increase in account limits, contact [Azure Support](https://azure.microsoft.com/support/faq/). ### Azure file share scale targets+ Azure file share scale targets apply at the file share level. | Attribute | Standard file shares<sup>1</sup> | Premium file shares |
Azure file share scale targets apply at the file share level.
<sup>3</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names. ### File scale targets+ File scale targets apply to individual files stored in Azure file shares. | Attribute | Files in standard file shares | Files in premium file shares |
File scale targets apply to individual files stored in Azure file shares.
<sup>2 Subject to machine network limits, available bandwidth, I/O sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./smb-performance.md).</sup>
-<sup>3 Azure Files supports 10,000 open handles on the root directory and 2,000 open handles per file and directory within the share. The number of active users supported per share is dependent on the applications that are accessing the share. If your applications are not opening a handle on the root directory, Azure Files can support more than 10,000 active users per share.</sup>
+<sup>3 Azure Files supports 10,000 open handles on the root directory and 2,000 open handles per file and directory within the share. The number of active users supported per share is dependent on the applications that are accessing the share. If you're using Azure Files to store disk images for large-scale virtual desktop workloads, you might run out of handles for the root directory or per file/directory. In this case, you might need to use two Azure file shares. If your applications aren't opening a handle on the root directory, Azure Files can support more than 10,000 active users per share.</sup>
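If you want to see how close a share is to these handle limits, one option (a sketch with hypothetical names) is to list open handles with Azure PowerShell:

```azurepowershell
# Hypothetical names; lists open handles on the share for comparison against the root-directory and per-file limits.
$context = New-AzStorageContext -StorageAccountName 'mystorageaccount' -StorageAccountKey '<account-key>'
Get-AzStorageFileHandle -ShareName 'myshare' -Recursive -Context $context
```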
## Azure File Sync scale targets+ The following table indicates which targets are soft, representing the Microsoft tested boundary, and hard, indicating an enforced maximum: | Resource | Target | Hard limit |
The following table indicates which targets are soft, representing the Microsoft
> An Azure File Sync endpoint can scale up to the size of an Azure file share. If the Azure file share size limit is reached, sync will not be able to operate. ## Azure File Sync performance metrics+ Since the Azure File Sync agent runs on a Windows Server machine that connects to the Azure file shares, the effective sync performance depends upon a number of factors in your infrastructure: Windows Server and the underlying disk configuration, network bandwidth between the server and the Azure storage, file size, total dataset size, and the activity on the dataset. Since Azure File Sync works on the file level, the performance characteristics of an Azure File Sync-based solution should be measured by the number of objects (files and directories) processed per second. For Azure File Sync, performance is critical in two stages:
For Azure File Sync, performance is critical in two stages:
> When many server endpoints in the same sync group are syncing at the same time, they are contending for cloud service resources. As a result, upload performance will be impacted. In extreme cases, some sync sessions will fail to access the resources, and will fail. However, those sync sessions will resume shortly and eventually succeed once the congestion is reduced. ## Internal test results+ To help you plan your deployment for each of the stages (initial one-time provisioning and ongoing sync), below are the results observed during the internal testing on a system with the following configuration: | System configuration | Details |
synapse-analytics Continuous Integration Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/cicd/continuous-integration-delivery.md
Here's an explanation of how the preceding template is constructed, by resource
- The `maxConcurrency` property has a default value and is the `string` type. The default parameter name of the `maxConcurrency` property is `<entityName>_properties_typeProperties_maxConcurrency`. - The `recurrence` property also is parameterized. All properties under the `recurrence` property are set to be parameterized as strings, with default values and parameter names. An exception is the `interval` property, which is parameterized as the `int` type. The parameter name is suffixed with `<entityName>_properties_typeProperties_recurrence_triggerSuffix`. Similarly, the `freq` property is a string and is parameterized as a string. However, the `freq` property is parameterized without a default value. The name is shortened and suffixed, such as `<entityName>_freq`.
+ > [!NOTE]
+ > A maximum of 50 triggers is supported currently.
+ **`linkedServices`** - Linked services are unique. Because linked services and datasets have a wide range of types, you can provide type-specific customization. In the preceding example, for all linked services of the `AzureDataLakeStore` type, a specific template is applied. For all others (identified through the use of the `*` character), a different template is applied.
virtual-desktop App Attach Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-setup.md
In order to use MSIX app attach in Azure Virtual Desktop, you need to meet the p
- If you want to use Azure PowerShell locally, see [Use Azure PowerShell with Azure Virtual Desktop](cli-powershell.md) to make sure you have the [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization) and [Microsoft Graph](/powershell/microsoftgraph/installation) PowerShell modules installed. Alternatively, use the [Azure Cloud Shell](../cloud-shell/overview.md). ::: zone pivot="app-attach"-- You need to use version 4.2.0 or later of the *Az.DesktopVirtualization* PowerShell module, which contains the cmdlets that support app attach. You can download and install the Az.DesktopVirtualization PowerShell module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/).
+- You need to use version 4.2.1 of the *Az.DesktopVirtualization* PowerShell module, which contains the cmdlets that support app attach. You can download and install the Az.DesktopVirtualization PowerShell module from the [PowerShell Gallery](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/).
::: zone-end ::: zone pivot="app-attach"
In order to use MSIX app attach in Azure Virtual Desktop, you need to meet the p
> > - All MSIX and Appx application packages include a certificate. You're responsible for making sure the certificates are trusted in your environment. Self-signed certificates are supported with the appropriate chain of trust. >
-> - You have to choose whether you want to use MSIX app attach or app attach with a host pool. You can't use both versions with the same host pool.
+> - You have to choose whether you want to use MSIX app attach or app attach with a host pool. You can't use both versions with the same package in the same host pool.
::: zone-end ::: zone pivot="msix-app-attach"
Here's how to add an MSIX or Appx image as an app attach package using the [Az.D
Your output should be similar to the following output: ```output
- CommandType Name Version Source
- -- - -
- Function Get-AzWvdAppAttachPackage 4.2.0 Az.DesktopVirtualization
- Function New-AzWvdAppAttachPackage 4.2.0 Az.DesktopVirtualization
- Function Remove-AzWvdAppAttachPackage 4.2.0 Az.DesktopVirtualization
- Function Update-AzWvdAppAttachPackage 4.2.0 Az.DesktopVirtualization
+ CommandType Name Version Source
+ -- - -
+ Function Get-AzWvdAppAttachPackage 4.2.1 Az.DesktopVirtualization
+ Function Import-AzWvdAppAttachPackageInfo 4.2.1 Az.DesktopVirtualization
+ Function New-AzWvdAppAttachPackage 4.2.1 Az.DesktopVirtualization
+ Function Remove-AzWvdAppAttachPackage 4.2.1 Az.DesktopVirtualization
+ Function Update-AzWvdAppAttachPackage 4.2.1 Az.DesktopVirtualization
``` 3. Get the properties of the image you want to add and store them in a variable by running the following command. You need to specify a host pool, but it can be any host pool where session hosts have access to the file share.
Here's how to add an MSIX or Appx image as an app attach package using the [Az.D
$parameters = @{ HostPoolName = '<HostPoolName>' ResourceGroupName = '<ResourceGroupName>'
- Uri = '<UNCPathToImageFile>'
+ Path = '<UNCPathToImageFile>'
}
- $app = Expand-AzWvdMsixImage @parameters
+ $app = Import-AzWvdAppAttachPackageInfo @parameters
``` 4. Check you only have one object in the application properties by running the following command:
Here's how to add an MSIX or Appx image as an app attach package using the [Az.D
$parameters = @{ HostPoolName = '<HostPoolName>' ResourceGroupName = '<ResourceGroupName>'
- Uri = '<UNCPathToImageFile>'
+ Path = '<UNCPathToImageFile>'
}
- $app = Expand-AzWvdMsixImage @parameters | ? PackageFullName -like *$packageFullName*
+ $app = Import-AzWvdAppAttachPackageInfo @parameters | ? ImagePackageFullName -like *$packageFullName*
``` 5. Add the image as an app attach package by running the following command. In this example, the [application state](app-attach-overview.md#application-state) is marked as *active*, the [application registration](app-attach-overview.md#application-registration) is set to **on-demand**, and [session host health check status](troubleshoot-statuses-checks.md) on failure is set to **NeedsAssistance**:
Here's how to add an MSIX or Appx image as an app attach package using the [Az.D
ResourceGroupName = '<ResourceGroupName>' Location = '<AzureRegion>' FailHealthCheckOnStagingFailure = 'NeedsAssistance'
- IsLogonBlocking = $false
- DisplayName = '<AppDisplayName>'
- IsActive = $true
+ ImageIsRegularRegistration = $false
+ ImageDisplayName = '<AppDisplayName>'
+ ImageIsActive = $true
}
- New-AzWvdAppAttachPackage -ImageObject $app @parameters
+ New-AzWvdAppAttachPackage -AppAttachPackage $app @parameters
``` There's no output when the package is added successfully.
Here's how to update an existing package using the [Az.DesktopVirtualization](/p
1. In the same PowerShell session, get the properties of the updated application and store them in a variable by running the following command: ```azurepowershell+ # Get the properties of the application $parameters = @{ HostPoolName = '<HostPoolName>' ResourceGroupName = '<ResourceGroupName>'
- Uri = '<UNCPathToImageFile>'
+ Path = '<UNCPathToImageFile>'
}
- $app = Expand-AzWvdMsixImage @parameters
+ $app = Import-AzWvdAppAttachPackageInfo @parameters
``` 1. Check you only have one object in the application properties by running the following command:
Here's how to update an existing package using the [Az.DesktopVirtualization](/p
$parameters = @{ HostPoolName = '<HostPoolName>' ResourceGroupName = '<ResourceGroupName>'
- Uri = '<UNCPathToImageFile>'
+ Path = '<UNCPathToImageFile>'
}
- $app = Expand-AzWvdMsixImage @parameters | ? PackageFullName -like *$packageFullName*
+ $app = Import-AzWvdAppAttachPackageInfo @parameters | ? ImagePackageFullName -like *$packageFullName*
``` 1. Update an existing package by running the following command. The new disk image supersedes the existing one, but existing assignments are kept. Don't delete the existing image until users have stopped using it.
Here's how to update an existing package using the [Az.DesktopVirtualization](/p
$parameters = @{ Name = '<PackageName>' ResourceGroupName = '<ResourceGroupName>'
- Location = '<AzureRegion>'
}
- Update-AzWvdAppAttachPackage -ImageObject $app @parameters
+ Update-AzWvdAppAttachPackage -AppAttachPackage $app @parameters
```
Here's how to add an MSIX package using the [Az.DesktopVirtualization](/powershe
[!INCLUDE [include-cloud-shell-local-powershell](includes/include-cloud-shell-local-powershell.md)]
-2. Get the properties of the application in the MSI image you want to add and store them in a variable by running the following command:
+2. Get the properties of the application in the MSIX image you want to add and store them in a variable by running the following command:
```azurepowershell # Get the properties of the MSIX image
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
The following platform SKUs are currently supported (and more are added periodic
- Must use application health probes or [Application Health extension](virtual-machine-scale-sets-health-extension.md) for non-Service Fabric scale sets. For Service Fabric requirements, see [Service Fabric requirement](#service-fabric-requirements). - Use Compute API version 2018-10-01 or higher. - Ensure that external resources specified in the scale set model are available and updated. Examples include SAS URI for bootstrapping payload in VM extension properties, payload in storage account, reference to secrets in the model, and more.-- For scale sets using Windows virtual machines, starting with Compute API version 2019-03-01, the property *virtualMachineProfile.osProfile.windowsConfiguration.enableAutomaticUpdates* property must set to *false* in the scale set model definition. The *enableAutomaticUpdates* property enables in-VM patching where "Windows Update" applies operating system patches without replacing the OS disk. With automatic OS image upgrades enabled on your scale set, an extra patching process through Windows Update is not required.
+- For scale sets using Windows virtual machines, starting with Compute API version 2019-03-01, the *virtualMachineProfile.osProfile.windowsConfiguration.enableAutomaticUpdates* property must be set to *false* in the scale set model definition. The *enableAutomaticUpdates* property enables in-VM patching where "Windows Update" applies operating system patches without replacing the OS disk. With automatic OS image upgrades enabled on your scale set, an extra patching process through Windows Update is not required.
> [!NOTE] > After an OS disk is replaced through reimage or upgrade, the attached data disks may have their drive letters reassigned. To retain the same drive letters for attached disks, it is suggested to use a custom boot script.
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
If you deploy a virtual machine in Azure and it doesn't have explicit outbound c
* Customers don't own the default outbound access IP. This IP might change, and any dependency on it could cause issues in the future.
+Some examples of configurations that won't work when using default outbound access:
+- When a VM has multiple NICs, the default outbound IP isn't guaranteed to be the same across all NICs.
+- When you scale a Virtual Machine Scale Set up or down, the default outbound IPs assigned to individual instances often change.
+- Similarly, default outbound IPs aren't consistent or contiguous across VM instances in a Virtual Machine Scale Set.
+ ## How can I transition to an explicit method of public connectivity (and disable default outbound access)? There are multiple ways to turn off default outbound access. The following sections describe the options available to you.
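One explicit outbound option, sketched here with hypothetical names, is associating a NAT gateway with the subnet:

```azurepowershell
# Hypothetical names; gives the subnet explicit outbound connectivity through a NAT gateway with its own public IP.
$pip  = New-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'nat-pip' -Location 'eastus' -Sku 'Standard' -AllocationMethod 'Static'
$nat  = New-AzNatGateway -ResourceGroupName 'myResourceGroup' -Name 'myNatGateway' -Location 'eastus' -Sku 'Standard' -PublicIpAddress $pip -IdleTimeoutInMinutes 4
$vnet = Get-AzVirtualNetwork -ResourceGroupName 'myResourceGroup' -Name 'myVNet'
# Reuse the subnet's existing address prefix when updating its configuration.
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'default' -AddressPrefix '10.0.0.0/24' -NatGateway $nat | Set-AzVirtualNetwork
```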
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
In regions without availability zones, all public IP addresses are created as no
There are other attributes that can be used for a public IP address.
-* The Global **Tier** allows a public IP address to be used with cross-region load balancers.
+* The Global **Tier** option creates a global anycast IP that can be used with cross-region load balancers.
* The Internet **Routing Preference** option minimizes the time that traffic spends on the Microsoft network, lowering the egress data transfer cost.
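For instance (hypothetical names), a global-tier Standard public IP for use with a cross-region load balancer might be created like this:

```azurepowershell
# Hypothetical names; the Global tier requires the Standard SKU.
New-AzPublicIpAddress -ResourceGroupName 'myResourceGroup' -Name 'myGlobalPublicIP' -Location 'eastus' -Sku 'Standard' -Tier 'Global' -AllocationMethod 'Static'
```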