Updates from: 12/02/2023 02:10:02
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md
Define your application and service architecture, inventory current systems, and
| Migrate existing apps to b2clogin.com | The deprecation of login.microsoftonline.com will go into effect for all Azure AD B2C tenants on 04 December 2020. [Learn more](b2clogin.md). | | Use Identity Protection and Conditional Access | Use these capabilities for significantly greater control over risky authentications and access policies. Azure AD B2C Premium P2 is required. [Learn more](conditional-access-identity-protection-overview.md). | |Tenant size | You need to plan with Azure AD B2C tenant size in mind. By default, an Azure AD B2C tenant can accommodate 1.25 million objects (user accounts and applications). You can increase this limit to 5.25 million objects by adding a custom domain to your tenant, and verifying it. If you need a bigger tenant size, you need to contact [Support](find-help-open-support-ticket.md).|
-| Use Identity Protection and Conditional Access | Use these capabilities for greater control over risky authentications and access policies. Azure AD B2C Premium P2 is required. [Learn more](conditional-access-identity-protection-overview.md). |
## Implementation
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 11/01/2023 Last updated : 12/01/2023
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Microsoft Entra ID](../active-directory/fundamentals/whats-new.md), [Azure AD B2C developer release notes](custom-policy-developer-notes.md) and [What's new in Microsoft Entra External ID](/entra/external-id/whats-new-docs).
+## November 2023
+
+### Updated articles
+
+- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md) - Editorial updates
+- [Enrich tokens with claims from external sources using API connectors](add-api-connector-token-enrichment.md) - Editorial updates
+- [Enable custom domains for Azure Active Directory B2C](custom-domain.md) - Editorial updates
+- [Set up sign-in for multitenant Microsoft Entra ID using custom policies in Azure Active Directory B2C](identity-provider-azure-ad-multi-tenant.md) - Editorial updates
+- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md) - Editorial updates
+- [Enable multifactor authentication in Azure Active Directory B2C](multi-factor-authentication.md) - Editorial updates
+- [What is Azure Active Directory B2C?](overview.md) - Editorial updates
+- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md) - Editorial updates
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md) - Editorial updates
+- [User flows and custom policies overview](user-flow-overview.md) - Editorial updates
+- [OAuth 2.0 authorization code flow in Azure Active Directory B2C](authorization-code-flow.md) - Editorial updates
+- [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md) - Editorial updates
+- [Define a Microsoft Entra multifactor authentication technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md) - Editorial updates
+ ## October 2023 ### Updated articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Set up a sign-up and sign-in flow with a social account by using Azure Active Directory B2C custom policy](custom-policies-series-sign-up-or-sign-in-federation.md) - Editorial updates - [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md) - Editorial updates
-## August 2023
-
-### Updated articles
--- [Page layout versions](page-layout.md) - Editorial updates-- [Secure your API used an API connector in Azure AD B2C](secure-rest-api.md) - Oauth Bearer Authentication updated to GA-
ai-services Export Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/export-programmatically.md
Integrate your exported model into an application by exploring one of the follow
* See the sample for [CoreML model in an iOS application](https://go.microsoft.com/fwlink/?linkid=857726) for real-time image classification with Swift. * See the sample for [Tensorflow model in an Android application](https://github.com/Azure-Samples/cognitive-services-android-customvision-sample) for real-time image classification on Android. * See the sample for [CoreML model with Xamarin](https://github.com/xamarin/ios-samples/tree/master/ios11/CoreMLAzureModel) for real-time image classification in a Xamarin iOS app.
+* See the sample showing how to use the exported model [(VAIDK/OpenVINO)](https://github.com/Azure-Samples/customvision-export-samples).
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
After ingesting your data, you can start chatting with the model on your data us
* [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/openai/Azure.AI.OpenAI/tests/Samples/Sample08_UseYourOwnData.cs) * [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/openai/azure-ai-openai/src/samples/java/com/azure/ai/openai/ChatCompletionsWithYourData.java) * [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/openai/openai/samples/v1-beta/javascript/bringYourOwnData.js)
+* [PowerShell](../use-your-data-quickstart.md?tabs=command-line%2Cpowershell&pivots=programming-language-powershell#example-powershell-commands)
* [Python](https://github.com/openai/openai-cookbook/blob/main/examples/azure/chat_with_your_own_data.ipynb) # [Azure Cosmos DB for MongoDB vCore](#tab/mongo-db)
To add a new data source to Azure OpenAI on your data, you need the following Az
| [Contributor](/azure/role-based-access-control/built-in-roles#contributor) | Your subscription, to access Azure Resource Manager. | You want to deploy a web app. | | [Cognitive Services Contributor Role](/azure/role-based-access-control/built-in-roles#cognitive-services-contributor) | The Azure AI Search resource, to access Azure OpenAI resource. | You want to deploy a [web app](#using-the-web-app). |
-## Virtual network support & private endpoint support (Azure AI Search only)
+## Virtual network support & private endpoint support
-> [!TIP]
-> For instructions on setting up your resources to work on a virtual private network or private endpoint, see [Use Azure OpenAI on your data securely](../how-to/use-your-data-securely.md)
-
-### Azure OpenAI resources
-
-You can protect Azure OpenAI resources in [virtual networks and private endpoints](/azure/ai-services/cognitive-services-virtual-networks) the same way as any Azure AI service.
-
-### Azure AI Search resources
-
-If you have an Azure AI Search resource protected by a private network, and want to allow Azure OpenAI on your data to access your search service, complete [an application form](https://aka.ms/applyacsvpnaoaioyd). The application will be reviewed in ten business days and you will be contacted via email about the results. If you are eligible, we will send a private endpoint request to your search service, and you will need to approve the request.
+* For instructions on setting up your resources to work on a virtual private network or private endpoint, see [Use Azure OpenAI on your data securely](../how-to/use-your-data-securely.md).
+* Azure OpenAI, Azure AI Search, and Azure Storage Accounts can be protected under private endpoints and virtual private networks.
+## Document-level access control
-Learn more about the [manual approval workflow](/azure/private-link/private-endpoint-overview#access-to-a-private-link-resource-using-approval-workflow).
-
-After you approve the request in your search service, you can start using the [chat completions extensions API](/azure/ai-services/openai/reference#completions-extensions). Public network access can be disabled for that search service.
-
-## Document-level access control (Azure AI Search only)
+> [!NOTE]
+> Document-level access control is supported for Azure AI Search only.
Azure OpenAI on your data lets you restrict the documents that can be used in responses for different users with Azure AI Search [security filters](/azure/search/search-security-trimming-for-azure-search-with-aad). When you enable document level access, the search results returned from Azure AI Search and used to generate a response will be trimmed based on user Microsoft Entra group membership. You can only enable document-level access on existing Azure AI Search indexes. To enable document-level access:
When using the API, pass the `filter` parameter in each API request. For example
* `my_group_ids` is the field name that you selected for **Permitted groups** during [fields mapping](#index-field-mapping). * `group_id1, group_id2` are groups attributed to the logged in user. The client application can retrieve and cache users' groups.
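A minimal sketch of passing the security filter with a chat completions extensions request follows; the endpoint shape, API version, data source type, and all names and values are placeholders from this article's own examples, so adjust them to your resources and the API version you use:

```python
# Minimal sketch: pass an Azure AI Search security filter in a
# chat completions extensions request. All names/values are placeholders.
import os
import requests

endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]   # e.g. https://<resource>.openai.azure.com
deployment = "<chat-model-deployment-name>"
url = f"{endpoint}/openai/deployments/{deployment}/extensions/chat/completions?api-version=2023-08-01-preview"

body = {
    "dataSources": [
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": os.environ["AZURE_AI_SEARCH_ENDPOINT"],
                "key": os.environ["AZURE_AI_SEARCH_KEY"],
                "indexName": "<index-name>",
                # Trim search results to the signed-in user's groups.
                "filter": "my_group_ids/any(g:search.in(g, 'group_id1, group_id2'))",
            },
        }
    ],
    "messages": [{"role": "user", "content": "Which documents mention my project?"}],
}

response = requests.post(
    url,
    headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"], "Content-Type": "application/json"},
    json=body,
)
print(response.json())
```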
-## Schedule automatic index refreshes (Azure AI Search only)
+## Schedule automatic index refreshes
+
+> [!NOTE]
+> Automatic index refreshing is supported for Azure Blob storage only.
To keep your Azure AI Search index up-to-date with your latest data, you can schedule a refresh for it that runs automatically rather than manually updating it every time your data is updated. Automatic index refresh is only available when you choose **blob storage** as the data source. To enable an automatic index refresh:
When you chat with a model, providing a history of the chat will help the model
} ```
+## Token usage estimation for Azure OpenAI on your data
++
+| Model | Total tokens available | Max tokens for system message | Max tokens for model response |
+|---|---|---|---|
+| ChatGPT Turbo (0301) 8k | 8000 | 400 | 1500 |
+| ChatGPT Turbo 16k | 16000 | 1000 | 3200 |
+| GPT-4 (8k) | 8000 | 400 | 1500 |
+| GPT-4 32k | 32000 | 2000 | 6400 |
+
+The table above shows the total number of tokens available for each model type, along with the maximum number of tokens that can be used for the [system message](#system-message) and the model response. Additionally, the following consume tokens:
++
+* The meta prompt (MP): if you limit responses from the model to the grounding data content (`inScope=True` in the API), the maximum number of tokens is 4036 tokens. Otherwise (for example if `inScope=False`) the maximum is 3444 tokens. This number is variable depending on the token length of the user question and conversation history. This estimate includes the base prompt as well as the query rewriting prompts for retrieval.
+* User question and history: Variable but capped at 2000 tokens.
+* Retrieved documents (chunks): The number of tokens used by the retrieved document chunks depends on multiple factors. The upper bound is the number of retrieved document chunks multiplied by the chunk size. The chunks are, however, truncated based on the tokens available for the specific model being used after the other fields are counted.
+
+ 20% of the available tokens are reserved for the model response. The remaining 80% is shared by the meta prompt, the user question and conversation history, and the system message; whatever is left after those is used by the retrieved document chunks.
+
+```python
+import tiktoken
+
+class TokenEstimator(object):
+
+    GPT2_TOKENIZER = tiktoken.get_encoding("gpt2")
+
+    def estimate_tokens(self, text: str) -> int:
+        # Count tokens in a string using the GPT-2 tokenizer.
+        return len(self.GPT2_TOKENIZER.encode(text))
+
+input_text = "Example text whose token count you want to estimate."
+token_output = TokenEstimator().estimate_tokens(input_text)
+print(token_output)
+```
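As a rough, illustrative sketch of the budget split described above, you could use the estimator to check whether a set of retrieved chunks fits the remaining prompt budget. The numbers are example upper bounds taken from the table and list above; the exact accounting is done by the service:

```python
# Illustrative sketch only: apply the 80/20 split described above for a
# hypothetical 16k-context model, using example upper bounds from this article.
total_tokens = 16000
response_budget = int(total_tokens * 0.20)        # reserved for the model response
prompt_budget = total_tokens - response_budget    # meta prompt + history + system message + chunks

# Example upper bounds: meta prompt (inScope=False), user question/history cap,
# and system message limit for a 16k model.
fixed_overhead = 3444 + 2000 + 1000
chunk_budget = prompt_budget - fixed_overhead

estimator = TokenEstimator()
chunks = ["...retrieved chunk 1...", "...retrieved chunk 2..."]
used_by_chunks = sum(estimator.estimate_tokens(chunk) for chunk in chunks)
print(f"Chunk budget: {chunk_budget}, used by retrieved chunks: {used_by_chunks}")
```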
## Next steps * [Get started using your data with Azure OpenAI](../use-your-data-quickstart.md)
ai-services Use Your Data Securely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md
When you ingest data into Azure OpenAI on your data, the following process is us
1. The ingestion process is started when a client sends data to be processed. 1. Ingestion assets (indexers, indexes, data sources, a [custom skill](/azure/search/cognitive-search-custom-skill-interface) and container in the search resource) are created in the Azure AI Search resource and Azure storage account.
-1. If the ingestion is triggered by a [scheduled refresh](../concepts/use-your-data.md#schedule-automatic-index-refreshes-azure-ai-search-only), the ingestion process starts at `[3]`.
+1. If the ingestion is triggered by a [scheduled refresh](../concepts/use-your-data.md#schedule-automatic-index-refreshes), the ingestion process starts at `[3]`.
1. Azure OpenAI's `preprocessing-jobs` API implements the [Azure AI Search customer skill web API protocol](/azure/search/cognitive-search-custom-skill-web-api), and processes the documents in a queue. 1. Azure OpenAI: 1. Internally uses the indexer created earlier to crack the documents.
To set the managed identities via the management API, see [the management API re
## Security support for Azure AI Search
+You can protect Azure OpenAI resources in [virtual networks and private endpoints](/azure/ai-services/cognitive-services-virtual-networks) the same way as any Azure AI service.
+ ### Inbound security: authentication As Azure OpenAI will use managed identity to access Azure AI Search, you need to enable Azure AD based authentication in your Azure AI Search service. To do this, select **Both** in the **Keys** tab in the Azure portal.
To use Azure OpenAI Studio, you can't disable the API key based authentication f
### Inbound security: networking
-Use **Selected networks** in the Azure portal. Azure AI Search doesn't support bypassing trusted services, so it is the most complex part in the setup. Create a private endpoint for theAzure OpenAI on your data (as a multitenant service managed by Microsoft), and link it to your Azure AI Search resource. This requires you to submit an [application form](https://aka.ms/applyacsvpnaoaioyd).
+Use **Selected networks** in the Azure portal. Azure AI Search doesn't support bypassing trusted services, so it is the most complex part in the setup. Create a private endpoint for the Azure OpenAI on your data resource (as a multitenant service managed by Microsoft), and link it to your Azure AI Search resource. This requires you to submit an [application form](https://aka.ms/applyacsvpnaoaioyd). The application will be reviewed in ten business days and you will be contacted via email about the results. If you are eligible, we will send a private endpoint request to your search service, and you will need to approve the request.
++
+Learn more about the [manual approval workflow](/azure/private-link/private-endpoint-overview#access-to-a-private-link-resource-using-approval-workflow).
> [!NOTE] > To use Azure OpenAI Studio, you cannot disable public network access, and you need to add your local IP to the IP rules, because Azure AI Studio calls the search API from your browser to list available indexes.
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
The following parameters can be used inside of the `parameters` field inside of
| `topNDocuments` | number | Optional | 5 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. | | `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only required when `queryType` is set to `semantic` or `vectorSemanticHybrid`. | | `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. There's a 100 token limit, which counts towards the overall token limit.|
-| `filter` | string | Optional | null | The filter pattern used for [restricting access to sensitive documents](./concepts/use-your-data.md#document-level-access-control-azure-ai-search-only)
+| `filter` | string | Optional | null | The filter pattern used for [restricting access to sensitive documents](./concepts/use-your-data.md#document-level-access-control)
| `embeddingEndpoint` | string | Optional | null | The endpoint URL for an Ada embedding model deployment, generally of the format `https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15`. Use with the `embeddingKey` parameter for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. | | `embeddingKey` | string | Optional | null | The API key for an Ada embedding model deployment. Use with `embeddingEndpoint` for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. | | `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-options). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI on your data will use an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. Available in regions where embedding models are [available](./concepts/models.md#embeddings-models) starting in API versions `2023-06-01-preview` and later.|
ai-services Fine Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/fine-tune.md
In this tutorial you learn how to:
### Python libraries
+# [OpenAI Python 0.28.1](#tab/python)
+ If you haven't already, you need to install the following libraries: ```cmd pip install "openai==0.28.1" requests tiktoken ``` (The `json`, `os`, and `time` modules used in this tutorial are part of the Python standard library and don't need to be installed.)
+# [OpenAI Python 1.x](#tab/python-new)
+
+```cmd
+pip install openai requests tiktoken
+```
+++ [!INCLUDE [get-key-endpoint](../includes/get-key-endpoint.md)] ### Environment variables
p5 / p95: 11.6, 20.9
## Upload fine-tuning files
+# [OpenAI Python 0.28.1](#tab/python)
+ ```Python # Upload fine-tuning files import openai
import os
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY") openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") openai.api_type = 'azure'
-openai.api_version = '2023-09-15-preview' # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
+openai.api_version = '2023-10-01-preview' # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
training_file_name = 'training_set.jsonl' validation_file_name = 'validation_set.jsonl'
print("Training file ID:", training_file_id)
print("Validation file ID:", validation_file_id) ```
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+# Upload fine-tuning files
+
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2023-10-01-preview" # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
+)
+
+training_file_name = 'training_set.jsonl'
+validation_file_name = 'validation_set.jsonl'
+
+# Upload the training and validation dataset files to Azure OpenAI with the SDK.
+
+training_response = client.files.create(
+ file=open(training_file_name, "rb"), purpose="fine-tune"
+)
+training_file_id = training_response.id
+
+validation_response = client.files.create(
+ file=open(validation_file_name, "rb"), purpose="fine-tune"
+)
+validation_file_id = validation_response.id
+
+print("Training file ID:", training_file_id)
+print("Validation file ID:", validation_file_id)
+```
+++ **Output:** ```output
Validation file ID: file-70a3f525ed774e78a77994d7a1698c4b
Now that the fine-tuning files have been successfully uploaded you can submit your fine-tuning training job:
+# [OpenAI Python 0.28.1](#tab/python)
+ ```python response = openai.FineTuningJob.create( training_file=training_file_id,
print("Status:", response["status"])
print(response) ```
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+response = client.fine_tuning.jobs.create(
+ training_file=training_file_id,
+ validation_file=validation_file_id,
+ model="gpt-35-turbo-0613", # Enter base model name. Note that in Azure OpenAI the model name contains dashes and cannot contain dot/period characters.
+)
+
+job_id = response.id
+
+# You can use the job ID to monitor the status of the fine-tuning job.
+# The fine-tuning job will take some time to start and complete.
+
+print("Job ID:", response.id)
+print("Status:", response.id)
+print(response.model_dump_json(indent=2))
+```
+++ **Output:** ```output
Status: pending
} ```
-To retrieve the training job ID, you can run:
-
-```python
-response = openai.FineTuningJob.retrieve(job_id)
-
-print("Job ID:", response["id"])
-print("Status:", response["status"])
-print(response)
-```
-
-**Output:**
-
-```output
-Fine-tuning model with job ID: ftjob-0f4191f0c59a4256b7a797a3d9eed219.
-```
- ## Track training job status If you would like to poll the training job status until it's complete, you can run:
+# [OpenAI Python 0.28.1](#tab/python)
+ ```python # Track training status
response = openai.FineTuningJob.list()
print(f'Found {len(response["data"])} fine-tune jobs.') ```
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+# Track training status
+
+from IPython.display import clear_output
+import time
+
+start_time = time.time()
+
+# Get the status of our fine-tuning job.
+response = client.fine_tuning.jobs.retrieve(job_id)
+
+status = response.status
+
+# If the job isn't done yet, poll it every 10 seconds.
+while status not in ["succeeded", "failed"]:
+ time.sleep(10)
+
+ response = client.fine_tuning.jobs.retrieve(job_id)
+ print(response.model_dump_json(indent=2))
+ print("Elapsed time: {} minutes {} seconds".format(int((time.time() - start_time) // 60), int((time.time() - start_time) % 60)))
+ status = response.status
+ print(f'Status: {status}')
+ clear_output(wait=True)
+
+print(f'Fine-tuning job {job_id} finished with status: {status}')
+
+# List all fine-tuning jobs for this resource.
+print('Checking other fine-tune jobs for this resource.')
+response = client.fine_tuning.jobs.list()
+print(f'Found {len(response.data)} fine-tune jobs.')
+```
+++ **Output:** ```output
Found 2 fine-tune jobs.
To get the full results, run the following:
+# [OpenAI Python 0.28.1](#tab/python)
+ ```python #Retrieve fine_tuned_model name
print(response)
fine_tuned_model = response["fine_tuned_model"] ```
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+#Retrieve fine_tuned_model name
+
+response = client.fine_tuning.jobs.retrieve(job_id)
+
+print(response.model_dump_json(indent=2))
+fine_tuned_model = response.fine_tuned_model
+```
+++ ## Deploy fine-tuned model Unlike the previous Python SDK commands in this tutorial, since the introduction of the quota feature, model deployment must be done using the [REST API](/rest/api/cognitiveservices/accountmanagement/deployments/create-or-update?tabs=HTTP), which requires separate authorization, a different API path, and a different API version.
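As a hedged sketch of that REST call (subscription, resource group, and resource names are placeholders, and `TEMP_AUTH_TOKEN` is an Azure Resource Manager bearer token you obtain separately, for example with `az account get-access-token`), the deployment request can look roughly like this:

```python
# Sketch: create a deployment for the fine-tuned model via the
# Azure Resource Manager REST API. Replace all placeholders.
import json
import os
import requests

token = os.getenv("TEMP_AUTH_TOKEN")                  # ARM bearer token, obtained separately
subscription = "<YOUR_SUBSCRIPTION_ID>"
resource_group = "<YOUR_RESOURCE_GROUP_NAME>"
resource_name = "<YOUR_AZURE_OPENAI_RESOURCE_NAME>"
model_deployment_name = "gpt-35-turbo-ft"             # custom deployment name used later for inference

deploy_params = {"api-version": "2023-05-01"}
deploy_headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
deploy_data = json.dumps({
    "sku": {"name": "standard", "capacity": 1},
    "properties": {
        "model": {
            "format": "OpenAI",
            "name": fine_tuned_model,                 # retrieved from the fine-tuning job above
            "version": "1",
        }
    },
})

request_url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices"
    f"/accounts/{resource_name}/deployments/{model_deployment_name}"
)

print("Creating a new deployment...")
r = requests.put(request_url, params=deploy_params, headers=deploy_headers, data=deploy_data)
print(r.status_code)
print(r.json())
```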
It isn't uncommon for this process to take some time to complete when dealing wi
After your fine-tuned model is deployed, you can use it like any other deployed model in either the [Chat Playground of Azure OpenAI Studio](https://oai.azure.com), or via the chat completion API. For example, you can send a chat completion call to your deployed model, as shown in the following Python example. You can continue to use the same parameters with your customized model, such as temperature and max_tokens, as you can with other deployed models.
+# [OpenAI Python 0.28.1](#tab/python)
+ ```python #Note: The openai-python library support for Azure OpenAI is in preview. import os
print(response)
print(response['choices'][0]['message']['content']) ```
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2023-05-15"
+)
+
+response = client.chat.completions.create(
+ model="gpt-35-turbo-ft", # model = "Custom deployment name you chose for your fine-tuning model"
+ messages=[
+ {"role": "system", "content": "You are a helpful assistant."},
+ {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+ {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
+ {"role": "user", "content": "Do other Azure AI services support this too?"}
+ ]
+)
+
+print(response.choices[0].message.content)
+```
+++ ## Delete deployment Unlike other types of Azure OpenAI models, fine-tuned/customized models have [an hourly hosting cost](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/#pricing) associated with them once they are deployed. It is **strongly recommended** that once you're done with this tutorial and have tested a few chat completion calls against your fine-tuned model, that you **delete the model deployment**.
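A hedged sketch of removing the deployment, reusing the same Azure Resource Manager endpoint, placeholders, and bearer token as in the deployment sketch above:

```python
# Sketch: delete the fine-tuned model deployment to stop the hourly hosting charge.
# Uses the same placeholders and ARM bearer token as the deployment sketch above.
import os
import requests

token = os.getenv("TEMP_AUTH_TOKEN")
request_url = (
    "https://management.azure.com/subscriptions/<YOUR_SUBSCRIPTION_ID>"
    "/resourceGroups/<YOUR_RESOURCE_GROUP_NAME>/providers/Microsoft.CognitiveServices"
    "/accounts/<YOUR_AZURE_OPENAI_RESOURCE_NAME>/deployments/gpt-35-turbo-ft"
)

r = requests.delete(
    request_url,
    params={"api-version": "2023-05-01"},
    headers={"Authorization": f"Bearer {token}"},
)
print(r.status_code)
```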
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Azure OpenAI Service now supports speech to text APIs powered by OpenAI's Whispe
### Azure OpenAI on your own data (preview) updates - You can now deploy Azure OpenAI on your data to [Power Virtual Agents](/azure/ai-services/openai/concepts/use-your-data#deploying-the-model).-- [Azure OpenAI on your data](./concepts/use-your-data.md#virtual-network-support--private-endpoint-support-azure-ai-search-only) now supports private endpoints.-- Ability to [filter access to sensitive documents](./concepts/use-your-data.md#document-level-access-control-azure-ai-search-only).-- [Automatically refresh your index on a schedule](./concepts/use-your-data.md#schedule-automatic-index-refreshes-azure-ai-search-only).
+- [Azure OpenAI on your data](./concepts/use-your-data.md#virtual-network-support--private-endpoint-support) now supports private endpoints.
+- Ability to [filter access to sensitive documents](./concepts/use-your-data.md#document-level-access-control).
+- [Automatically refresh your index on a schedule](./concepts/use-your-data.md#schedule-automatic-index-refreshes).
- [Vector search and semantic search options](./concepts/use-your-data.md#search-options). - [View your chat history in the deployed web app](./concepts/use-your-data.md#chat-history)
ai-services Openai Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/openai-speech.md
Title: "Azure OpenAI speech to speech chat - Speech service" description: In this how-to guide, you can use Speech to converse with Azure OpenAI. The text recognized by the Speech service is sent to Azure OpenAI. The text response from Azure OpenAI is then synthesized by the Speech service.
-#
Previously updated : 04/15/2023 Last updated : 11/30/2023 zone_pivot_groups: programming-languages-csharp-python keywords: speech to text, openai
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
The `--pod-cidr` parameter is required when upgrading from legacy CNI because th
[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]
-You must have the latest aks-preview Azure CLI extension installed and register the `Microsoft.ContainerService` `AzureOverlayDualStackPreview` feature flag.
+You must have the latest aks-preview Azure CLI extension installed and register the `Microsoft.ContainerService` `AzureOverlayPreview` feature flag.
Update an existing Kubenet cluster to use Azure CNI Overlay using the [`az aks update`][az-aks-update] command.
az aks update --name $clusterName \
Since the cluster is already using a private CIDR for pods, you don't need to specify the `--pod-cidr` parameter and the Pod CIDR will remain the same.
-> [NOTE!]
+> [!NOTE]
> When upgrading from Kubenet to CNI Overlay, the route table will no longer be required for pod routing. If the cluster is using a customer provided route table, the routes which were being used to direct pod traffic to the correct node will automatically be deleted during the migration operation. If the cluster is using a managed route table (the route table was created by AKS and lives in the node resource group) then that route table will be deleted as part of the migration. ## Dual-stack Networking (Preview)
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
When you create an Azure disk for use with AKS, you can create the disk resource
pvc-azuredisk Bound pv-azuredisk 20Gi RWO 5s ```
-5. Create a *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*.
+5. Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*.
```yaml apiVersion: v1
aks Open Service Mesh About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-about.md
OSM runs an Envoy-based control plane on Kubernetes and can be configured with [
Microsoft started the OSM project, but it's now governed by the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/).
+> [!NOTE]
+> With the retirement of [Open Service Mesh (OSM)](https://docs.openservicemesh.io/) by the Cloud Native Computing Foundation (CNCF), we recommend identifying your OSM configurations and migrating them to an equivalent Istio configuration. For information about migrating from OSM to Istio, see [Migration guidance for Open Service Mesh (OSM) configurations to Istio](open-service-mesh-istio-migration-guidance.md).
+ ## Enable the OSM add-on OSM can be added to your Azure Kubernetes Service (AKS) cluster by enabling the OSM add-on using the [Azure CLI][osm-azure-cli] or a [Bicep template][osm-bicep]. The OSM add-on provides a fully supported installation of OSM that's integrated with AKS.
aks Open Service Mesh Binary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-binary.md
zone_pivot_groups: client-operating-system
This article will discuss how to download the OSM client library to be used to operate and configure the OSM add-on for AKS, and how to configure the binary for your environment.
+> [!NOTE]
+> With the retirement of [Open Service Mesh (OSM)](https://docs.openservicemesh.io/) by the Cloud Native Computing Foundation (CNCF), we recommend identifying your OSM configurations and migrating them to an equivalent Istio configuration. For information about migrating from OSM to Istio, see [Migration guidance for Open Service Mesh (OSM) configurations to Istio](open-service-mesh-istio-migration-guidance.md).
+ > [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM. >
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md
This article shows you how to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster. The OSM add-on installs the OSM mesh on your cluster. The OSM mesh is a service mesh that provides traffic management, policy enforcement, and telemetry collection for your applications. For more information about the OSM mesh, see [Open Service Mesh](https://openservicemesh.io/).
+> [!NOTE]
+> With the retirement of [Open Service Mesh (OSM)](https://docs.openservicemesh.io/) by the Cloud Native Computing Foundation (CNCF), we recommend identifying your OSM configurations and migrating them to an equivalent Istio configuration. For information about migrating from OSM to Istio, see [Migration guidance for Open Service Mesh (OSM) configurations to Istio](open-service-mesh-istio-migration-guidance.md).
+ > [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM. >
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
ms.editor: schaffererin
This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS) using a [Bicep](../azure-resource-manager/bicep/index.yml) template.
+> [!NOTE]
+> With the retirement of [Open Service Mesh (OSM)](https://docs.openservicemesh.io/) by the Cloud Native Computing Foundation (CNCF), we recommend identifying your OSM configurations and migrating them to an equivalent Istio configuration. For information about migrating from OSM to Istio, see [Migration guidance for Open Service Mesh (OSM) configurations to Istio](open-service-mesh-istio-migration-guidance.md).
+ > [!IMPORTANT] > Based on the version of Kubernetes your cluster is running, the OSM add-on installs a different version of OSM. >
aks Open Service Mesh Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-integrations.md
Last updated 03/23/2022
The Open Service Mesh (OSM) add-on integrates with features provided by Azure and some open source projects.
+> [!NOTE]
+> With the retirement of [Open Service Mesh (OSM)](https://docs.openservicemesh.io/) by the Cloud Native Computing Foundation (CNCF), we recommend identifying your OSM configurations and migrating them to an equivalent Istio configuration. For information about migrating from OSM to Istio, see [Migration guidance for Open Service Mesh (OSM) configurations to Istio](open-service-mesh-istio-migration-guidance.md).
+ > [!IMPORTANT] > Integrations with open source projects aren't covered by the [AKS support policy][aks-support-policy].
aks Open Service Mesh Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-troubleshoot.md
When you deploy the Open Service Mesh (OSM) add-on for Azure Kubernetes Service (AKS), you may experience problems associated with the service mesh configuration. The article explores common troubleshooting errors and how to resolve them.
+> [!NOTE]
+> With the retirement of [Open Service Mesh (OSM)](https://docs.openservicemesh.io/) by the Cloud Native Computing Foundation (CNCF), we recommend identifying your OSM configurations and migrating them to an equivalent Istio configuration. For information about migrating from OSM to Istio, see [Migration guidance for Open Service Mesh (OSM) configurations to Istio](open-service-mesh-istio-migration-guidance.md).
+ ## Verifying and troubleshooting OSM components ### Check OSM Controller deployment, pod, and service
aks Open Service Mesh Uninstall Add On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-uninstall-add-on.md
Last updated 06/19/2023
This article shows you how to uninstall the OMS add-on and related resources from your AKS cluster.
+> [!NOTE]
+> With the retirement of [Open Service Mesh (OSM)](https://docs.openservicemesh.io/) by the Cloud Native Computing Foundation (CNCF), we recommend identifying your OSM configurations and migrating them to an equivalent Istio configuration. For information about migrating from OSM to Istio, see [Migration guidance for Open Service Mesh (OSM) configurations to Istio](open-service-mesh-istio-migration-guidance.md).
+ ## Disable the OSM add-on from your cluster * Disable the OSM add-on from your cluster using the [`az aks disable-addon`][az-aks-disable-addon] command and the `--addons` parameter.
aks Use Wasi Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-wasi-node-pools.md
Last updated 05/17/2023
# Create WebAssembly System Interface (WASI) node pools in Azure Kubernetes Service (AKS) to run your WebAssembly (WASM) workload (preview)
-[WebAssembly (WASM)][wasm] is a binary format that is optimized for fast download and maximum execution speed in a WASM runtime. A WASM runtime is designed to run on a target architecture and execute WebAssemblies in a sandbox, isolated from the host computer, at near-native performance. By default, WebAssemblies can't access resources on the host outside of the sandbox unless it is explicitly allowed, and they can't communicate over sockets to access things environment variables or HTTP traffic. The [WebAssembly System Interface (WASI)][wasi] standard defines an API for WASM runtimes to provide access to WebAssemblies to the environment and resources outside the host using a capabilities model.
+[WebAssembly (WASM)][wasm] is a binary format that is optimized for fast download and maximum execution speed in a WASM runtime. A WASM runtime is designed to run on a target architecture and execute WebAssemblies in a sandbox, isolated from the host computer, at near-native performance. By default, WebAssemblies can't access resources on the host outside of the sandbox unless it is explicitly allowed, and they can't communicate over sockets to access things like environment variables or HTTP traffic. The [WebAssembly System Interface (WASI)][wasi] standard defines an API for WASM runtimes to provide access to WebAssemblies to the environment and resources outside the host using a capabilities model.
> [!IMPORTANT] > WASI nodepools now use [containerd shims][wasm-containerd-shims] to run WASM workloads. Previously, AKS used [Krustlet][krustlet] to allow WASM modules to be run on Kubernetes. If you are still using Krustlet-based WASI nodepools, you can migrate to containerd shims by creating a new WASI nodepool and migrating your workloads to the new nodepool.
api-management Migrate Stv1 To Stv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md
For more information about the `stv1` and `stv2` platforms and the benefits of u
## What happens during migration?
-API Management platform migration from `stv1` to `stv2` involves updating the underlying compute alone and has no impact on the service/api configuration persisted in the storage layer.
+API Management platform migration from `stv1` to `stv2` involves updating the underlying compute alone and has no impact on the service/API configuration persisted in the storage layer.
* The upgrade process involves creating a new compute in parallel with the old compute. Both instances coexist for 48 hours. * The API Management status in the portal will be "Updating".
Run the following Azure CLI commands, setting variables where indicated with the
> [!NOTE] > The Migrate to `stv2` REST API is available starting in API Management REST API version `2022-04-01-preview`.
+> [!NOTE]
+> The following script is written for the bash shell. To run the script in PowerShell, prefix the variable names with the `$` character. Example: `$APIM_NAME`.
```azurecli #!/bin/bash
app-service Overview Local Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-local-cache.md
The Azure App Service Local Cache feature provides a web role view of your conte
## How the local cache changes the behavior of App Service * _D:\home_ points to the local cache, which is created on the VM instance when the app starts up. _D:\local_ continues to point to the temporary VM-specific storage.
-* The local cache contains a one-time copy of the _/site_ and _/siteextensions_ folders of the shared content store, at _D:\home\site_ and _D:\home\siteextensions_, respectively. The files are copied to the local cache when the app starts. The size of the two folders for each app is limited to 1 GB by default, but can be increased to 2 GB. Note that as the cache size increases, it will take longer to load the cache. If you've increased local cache limit to 2 GB and the copied files exceed the maximum size of 2 GB, App Service silently ignores local cache and reads from the remote file share. If there is no limit defined or the limit is set to anything lower than 2 GB and the copied files exceeds the limit, the deployment or swap may fail with an error.
+* The local cache contains a one-time copy of the _/site_ and _/siteextensions_ folders of the shared content store, at _D:\home\site_ and _D:\home\siteextensions_, respectively. The files are copied to the local cache when the app starts. The size of the two folders for each app is limited to 1 GB by default, but can be increased to 2 GB. Note that as the cache size increases, it will take longer to load the cache. If you've increased the local cache limit to 2 GB and the copied files exceed the maximum size of 2 GB, App Service silently ignores the local cache and reads from the remote file share.
+> [!IMPORTANT]
+> When the copied files exceed the defined Local Cache size limit or when no limit is defined, deployment and swapping operations may fail with an error. See the [FAQ](#frequently-asked-questions-faq) for more information.
+>
* The local cache is read-write. However, any modification is discarded when the app moves virtual machines or gets restarted. Do not use the local cache for apps that store mission-critical data in the content store. * _D:\home\LogFiles_ and _D:\home\Data_ contain log files and app data. The two subfolders are stored locally on the VM instance, and are copied to the shared content store periodically. Apps can persist log files and data by writing them to these folders. However, the copy to the shared content store is best-effort, so it is possible for log files and data to be lost due to a sudden crash of a VM instance. * [Log streaming](troubleshoot-diagnostic-logs.md#stream-logs) is affected by the best-effort copy. You could observe up to a one-minute delay in the streamed logs.
We recommend that you use Local Cache in conjunction with the [Staging Environme
## Frequently asked questions (FAQ)
+### What if Local Cache size limit is exceeded?
+When the copied files exceed the Local Cache size limit, the app will read from the remote share. However, deployment and swap operations may fail with an error. See the table below for size limits and results.
+
+| **Local Cache size** | **Copied files** | **Result** |
+| --- | --- | --- |
+|≤ 2 GB|≤ Local Cache size|Reads from local cache.|
+|≤ 2 GB|> Local Cache size|Reads from remote share.<br/> **Note:** Deployment and swap operations may fail with an error.|
+ ### How can I tell if Local Cache applies to my app? If your app needs a high-performance, reliable content store, does not use the content store to write critical data at runtime, and is less than 2 GB in total size, then the answer is "yes"! To get the total size of your /site and /siteextensions folders, you can use the site extension "Azure Web Apps Disk Usage."
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Title: 'Tutorial: Deploy a Python Django or Flask web app with PostgreSQL'
description: Create a Python Django or Flask web app with a PostgreSQL database and deploy it to Azure. The tutorial uses either the Django or Flask framework and the app is hosted on Azure App Service on Linux. ms.devlang: python Previously updated : 10/31/2023 Last updated : 11/30/2023
zone_pivot_groups: app-service-portal-azd
# Deploy a Python (Django or Flask) web app with PostgreSQL in Azure
-### [Flask](#tab/flask)
-
-> [!TIP]
-> With [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed, you can skip to the end of the tutorial by running the following commands in an empty working directory:
->
-> ```bash
-> azd auth login
-> azd init --template msdocs-flask-postgresql-sample-app
-> azd up
-> ```
-
-### [Django](#tab/django)
+In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. Azure App Service supports [Python](https://www.python.org/downloads/) in a Linux server environment.
-> [!TIP]
-> With [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed, you can skip to the end of the tutorial by running the following commands in an empty working directory:
->
-> ```bash
-> azd auth login
-> azd init --template msdocs-django-postgresql-sample-app
-> azd up
-> ```
+**To complete this tutorial, you'll need:**
::: zone pivot="azure-portal"
-In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. Azure App Service supports [Python](https://www.python.org/downloads/) in a Linux server environment.
+* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python).
+* Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/)
-**To complete this tutorial, you'll need:**
+ * An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python).
+* [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed. You can follow the steps with the [Azure Cloud Shell](https://shell.azure.com) because it already has Azure Developer CLI installed.
* Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/) +
+## Skip to the end
+
+### [Flask](#tab/flask)
+
+With [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed, you can deploy the fully configured sample app shown in this tutorial and see it running in Azure. Just run the following commands in an empty working directory:
+
+```bash
+azd auth login
+azd init --template msdocs-flask-postgresql-sample-app
+azd up
+```
+
+### [Django](#tab/django)
+
+With [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed, you can skip to the end of the tutorial by running the following commands in an empty working directory:
+
+```bash
+azd auth login
+azd init --template msdocs-django-postgresql-sample-app
+azd up
+```
+
+--
++ ## Sample application Sample Python applications using the Flask and Django framework are provided to help you follow along with this tutorial. To deploy them without running them locally, skip this part.
-To run the application locally, make sure you have [Python 3.7 or higher](https://www.python.org/downloads/) and [PostgreSQL](https://www.postgresql.org/download/) installed locally. Then, download or clone the app and go to the application folder:
+To run the application locally, make sure you have [Python 3.7 or higher](https://www.python.org/downloads/) and [PostgreSQL](https://www.postgresql.org/download/) installed locally. Then, clone the sample repository's `starter-no-infra` branch and change to the repository root.
### [Flask](#tab/flask) ```bash
-git clone git clone https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app
+git clone -b starter-no-infra https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app
cd msdocs-flask-postgresql-sample-app ```
+Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance.
+
+```
+DBNAME=<database name>
+DBHOST=<database-hostname>
+DBUSER=<db-user-name>
+DBPASS=<db-password>
+```
+ ### [Django](#tab/django) ```bash
-git clone https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app.git
+git clone -b starter-no-infra https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app
cd msdocs-django-postgresql-sample-app ``` - Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance. ```
Set the returned value as the value of `SECRET_KEY` in the .env file.
SECRET_KEY=<secret-key> ```
+--
+ Create a virtual environment for the app: [!INCLUDE [Virtual environment setup](<./includes/quickstart-python/virtual-environment-setup.md>)]
python manage.py runserver
-- + ## 1. Create App Service and PostgreSQL
In this step, you create the Azure resources. The steps used in this tutorial cr
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources.
+### [Flask](#tab/flask)
+ :::row::: :::column span="2"::: **Step 1:** In the Azure portal:
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly. :::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-1.png":::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-1.png":::
:::column-end::: :::row-end::: :::row:::
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
1. After validation completes, select **Create**. :::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-2.png":::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-2.png":::
:::column-end::: :::row-end::: :::row:::
Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps
- **Private DNS zone** &rarr; Enables DNS resolution of the PostgreSQL server in the virtual network. :::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-3.png" alt-text="A screenshot showing the deployment process completed." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-3.png":::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-3.png" alt-text="A screenshot showing the deployment process completed (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-3.png":::
+ :::column-end:::
+
+### [Django](#tab/django)
+
+ :::column span="2":::
+ **Step 1:** In the Azure portal:
+ 1. Enter "web app database" in the search bar at the top of the Azure portal.
+ 1. Select the item labeled **Web App + Database** under the **Marketplace** heading.
+ You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-1.png":::
:::column-end::: :::row-end:::
+ :::column span="2":::
+ **Step 2:** In the **Create Web App + Database** page, fill out the form as follows.
+ 1. *Resource Group* &rarr; Select **Create new** and use a name of **msdocs-python-postgres-tutorial**.
+ 1. *Region* &rarr; Any Azure region near you.
+ 1. *Name* &rarr; **msdocs-python-postgres-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure.
+ 1. *Runtime stack* &rarr; **Python 3.12**.
+ 1. *Database* &rarr; **PostgreSQL - Flexible Server** is selected by default as the database engine. The server name and database name are also set by default to appropriate values.
+ 1. *Add Azure Cache for Redis* &rarr; **Yes**.
+ 1. *Hosting plan* &rarr; **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later.
+ 1. Select **Review + create**.
+ 1. After validation completes, select **Create**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-2-django.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-2-django.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created:
+ - **Resource group** &rarr; The container for all the created resources.
+ - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created.
+ - **App Service** &rarr; Represents your app and runs in the App Service plan.
+ - **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic.
+ - **Azure Database for PostgreSQL flexible server** &rarr; Accessible only from within the virtual network. A database and a user are created for you on the server.
+ - **Azure Cache for Redis** &rarr; Accessible only from within the virtual network.
+ - **Private DNS zones** &rarr; Enables DNS resolution of the PostgreSQL server and the Redis server in the virtual network.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-3.png" alt-text="A screenshot showing the deployment process completed (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-create-app-postgres-3.png":::
+ :::column-end:::
+
+--
## 2. Verify connection settings The creation wizard generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings). App settings are one way to keep connection secrets out of your code repository. When you're ready to move your secrets to a more secure location, here's an [article on storing in Azure Key Vault](../key-vault/certificates/quick-create-python.md).
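As a small sketch outside the portal steps, this is roughly how the sample app can consume one of these settings at runtime, since App Service injects app settings as environment variables (the setting name comes from the steps below; how you parse it depends on your database driver):

```python
# Sketch: read the generated connection setting at runtime.
# App Service injects app settings as environment variables.
import os

conn_str = os.environ.get("AZURE_POSTGRESQL_CONNECTIONSTRING")
if not conn_str:
    raise RuntimeError("AZURE_POSTGRESQL_CONNECTIONSTRING is not set")
# Parse conn_str into the parameters your database driver expects.
```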
+### [Flask](#tab/flask)
+ :::row::: :::column span="2":::
- **Step 1:** In the App Service page, in the left menu, select Configuration.
+ **Step 1:** In the App Service page, in the left menu, select **Configuration**.
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png":::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png":::
:::column-end::: :::row-end::: :::row:::
The creation wizard generated the connectivity variables for you already as [app
**Step 2:** In the **Application settings** tab of the **Configuration** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. That will be injected into the runtime environment as an environment variable. :::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png":::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png" alt-text="A screenshot showing how to see the autogenerated connection string (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 3:** In a terminal or command prompt, run the following Python script to generate a unique secret: `python -c 'import secrets; print(secrets.token_hex())'`. Copy the output value to use in the next step.
+ :::column-end:::
+ :::column:::
+ :::column-end:::
+
+### [Django](#tab/django)
+
+ :::column span="2":::
+ **Step 1:** In the App Service page, in the left menu, select **Configuration**.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png" alt-text="A screenshot showing how to open the configuration page in App Service (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** In the **Application settings** tab of the **Configuration** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` and `AZURE_REDIS_CONNECTIONSTRING` are present. They will be injected into the runtime environment as environment variables.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2-django.png" alt-text="A screenshot showing how to see the autogenerated connection string (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-get-connection-string-2.png":::
:::column-end::: :::row-end::: :::row:::
The creation wizard generated the connectivity variables for you already as [app
:::row-end::: :::row::: :::column span="2":::
- **Step 4:** In the **Application settings** tab of the **Configuration** page, select **New application setting**. Name the setting `SECRET_KEY`. Paste the value from the previous value. Select **OK**.
+ **Step 4:** Back in the **Configuration** page, select **New application setting**. Name the setting `SECRET_KEY`. Paste the value from the previous step. Select **OK**.
:::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting.png" alt-text="A screenshot showing how to set the SECRET_KEY app setting in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting.png":::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting.png" alt-text="A screenshot showing how to set the SECRET_KEY app setting in the Azure portal (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting.png":::
:::column-end::: :::row-end::: :::row:::
The creation wizard generated the connectivity variables for you already as [app
**Step 5:** Select **Save**. :::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting-save.png" alt-text="A screenshot showing how to save the SECRET_KEY app setting in the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting-save.png":::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting-save.png" alt-text="A screenshot showing how to save the SECRET_KEY app setting in the Azure portal (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-app-service-app-setting-save.png":::
:::column-end::: :::row-end::: -
-Having issues? Check the [Troubleshooting guide](configure-language-python.md#troubleshooting).
-
+--
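For reference, the one-liner used to generate the `SECRET_KEY` value is ordinary standard-library Python; a short sketch showing the same call (the explicit byte count is only an illustration of the API):

```python
import secrets

# Equivalent to: python -c 'import secrets; print(secrets.token_hex())'
print(secrets.token_hex())     # 32 random bytes rendered as 64 hex characters
print(secrets.token_hex(16))   # you can also pass an explicit number of bytes
```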
## 3. Deploy sample code
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
1. In **Repository**, select **msdocs-flask-postgresql-sample-app**. 1. In **Branch**, select **main**. 1. Keep the default option selected to **Add a workflow**.
+ 1. Under **Authentication type**, select **User-assigned identity**.
1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory. :::column-end::: :::column:::
With the PostgreSQL database protected by the virtual network, the easiest way t
:::row::: :::column span="2":::
- **Step 1:** Back in the App Service page, in the left menu, select **SSH**.
+ **Step 1:** Back in the App Service page, in the left menu,
+ 1. Select **SSH**.
1. Select **Go**. :::column-end::: :::column:::
With the PostgreSQL database protected by the virtual network, the easiest way t
:::row::: :::column span="2":::
- **Step 1:** Back in the App Service page, in the left menu, select **SSH**.
+ **Step 1:** Back in the App Service page, in the left menu,
+ 1. Select **SSH**.
1. Select **Go**. :::column-end::: :::column:::
With the PostgreSQL database protected by the virtual network, the easiest way t
## 5. Browse to the app
+### [Flask](#tab/flask)
+
+ :::column span="2":::
+ **Step 1:** In the App Service page:
+ 1. From the left menu, select **Overview**.
+ 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-1.png":::
+ :::column-end:::
+ :::column span="2":::
+ **Step 2:** Add a few restaurants to the list.
+ Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL.
+ :::column-end:::
+ :::column:::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Flask web app with PostgreSQL running in Azure showing restaurants and restaurant reviews (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png":::
+ :::column-end:::
+
+### [Django](#tab/django)
+ :::row::: :::column span="2"::: **Step 1:** In the App Service page:
With the PostgreSQL database protected by the virtual network, the easiest way t
1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`. :::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-1.png":::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-1.png":::
:::column-end::: :::row-end::: :::row:::
With the PostgreSQL database protected by the virtual network, the easiest way t
Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL. :::column-end::: :::column:::
- :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Flask web app with PostgreSQL running in Azure showing restaurants and restaurant reviews." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png":::
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2-django.png" alt-text="A screenshot of the Flask web app with PostgreSQL running in Azure showing restaurants and restaurant reviews (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png":::
:::column-end::: :::row-end:::
+The home page shows the name of the last visited restaurant; the data is saved to and retrieved from the Azure cache. Remember that the sample app uses the connection string `AZURE_REDIS_CONNECTIONSTRING`, which was created for you by the wizard.
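As an illustration of what that caching can look like in Django code (a minimal sketch; the key and value are illustrative and not taken from the sample):

```python
from django.core.cache import cache

# The "default" cache is backed by Azure Cache for Redis via the CACHES setting,
# which production.py builds from AZURE_REDIS_CONNECTIONSTRING.
cache.set("last_restaurant", "Sample Restaurant", timeout=300)
print(cache.get("last_restaurant"))
```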
+
+--
+ ## 6. Stream diagnostic logs Azure App Service captures all messages output to the console to help you diagnose issues with your application. The sample app includes `print()` statements to demonstrate this capability as shown below.
Azure App Service captures all messages output to the console to help you diagno
### [Django](#tab/django) --
When you're finished, you can delete all of the resources from your Azure subscr
::: zone-end ::: zone pivot="azure-developer-cli"
-In this tutorial, you'll deploy a data-driven Python web app (**[Django](https://www.djangoproject.com/)** or **[Flask](https://flask.palletsprojects.com/)**) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. Azure App Service supports [Python](https://www.python.org/downloads/) in a Linux server environment.
--
-**To complete this tutorial, you'll need:**
-
-* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python).
-* [Git](https://git-scm.com/downloads) installed locally.
-* [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed locally.
-* Knowledge of Python with Flask development or [Python with Django development](/training/paths/django-create-data-driven-websites/).
-
-> [!NOTE]
-> If you want, you can follow the steps using the [Azure Cloud Shell](https://shell.azure.com). It has all tools you need to follow this tutorial.
-
-## Sample application
-
-A sample Python application using the Flask framework is provided to help you follow along with this tutorial. To deploy it without running it locally, skip this part.
-
-> [!NOTE]
-> To run the sample application locally, you need [Python 3.7 or higher](https://www.python.org/downloads/) and [PostgreSQL](https://www.postgresql.org/download/) installed locally.
-
-Clone the sample repository's `starter-no-infra` branch and change to the repository root.
-
-```bash
-git clone -b starter-no-infra https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app
-cd msdocs-flask-postgresql-sample-app
-```
-
-Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the values of `DBNAME`, `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance.
-
-```
-DBNAME=<database name>
-DBHOST=<database-hostname>
-DBUSER=<db-user-name>
-DBPASS=<db-password>
-```
-
-Create a virtual environment for the app.
--
-Run the sample.
-
-```bash
-# Install dependencies
-pip install -r requirements.txt
-# Run database migrations
-flask db upgrade
-# Run the app at http://127.0.0.1:5000
-flask run
-```
## 1. Create Azure resources and deploy a sample app
In this step, you create the Azure resources and deploy a sample app to App Serv
1. If you haven't already, clone the sample repository's `starter-no-infra` branch in a local terminal.
+ ### [Flask](#tab/flask)
+
```bash git clone -b starter-no-infra https://github.com/Azure-Samples/msdocs-flask-postgresql-sample-app cd msdocs-flask-postgresql-sample-app ```
+ ### [Django](#tab/django)
+
+ ```bash
+ git clone -b starter-no-infra https://github.com/Azure-Samples/msdocs-django-postgresql-sample-app
+ cd msdocs-django-postgresql-sample-app
+ ```
+
+ --
+ This cloned branch is your starting point. It contains a simple data-driven application. 1. From the repository root, run `azd init`.
In this step, you create the Azure resources and deploy a sample app to App Serv
azd init --template python-app-service-postgresql-infra ```
- This azd template contains files (*azure.yaml* and the *infra* directory) that will generate a secure-by-default architecture with the following Azure resources:
-
- - **Resource group** &rarr; The container for all the created resources.
- - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *B1* tier is specified.
- - **App Service** &rarr; Represents your app and runs in the App Service plan.
- - **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic.
- - **Azure Database for PostgreSQL flexible server** &rarr; Accessible only from within the virtual network. A database and a user are created for you on the server.
- - **Private DNS zone** &rarr; Enables DNS resolution of the PostgreSQL server in the virtual network.
- - **Log Analytics workspace** &rarr; Acts as the target container for your app to ship its logs, where you can also query the logs.
- 1. When prompted, give the following answers: |Question |Answer |
In this step, you create the Azure resources and deploy a sample app to App Serv
``` The `azd up` command might take a few minutes to complete. It also compiles and deploys your application code, but you'll modify your code later to work with App Service. While it's running, the command provides messages about the provisioning and deployment process, including a link to the deployment in Azure. When it finishes, the command also displays a link to the deployed application.+
+ This azd template contains files (*azure.yaml* and the *infra* directory) that generate a secure-by-default architecture with the following Azure resources:
+
+ ### [Flask](#tab/flask)
+
+ - **Resource group** &rarr; The container for all the created resources.
+ - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *B1* tier is specified.
+ - **App Service** &rarr; Represents your app and runs in the App Service plan.
+ - **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic.
+ - **Azure Database for PostgreSQL flexible server** &rarr; Accessible only from within the virtual network. A database and a user are created for you on the server.
+ - **Private DNS zone** &rarr; Enables DNS resolution of the PostgreSQL server in the virtual network.
+ - **Log Analytics workspace** &rarr; Acts as the target container for your app to ship its logs, where you can also query the logs.
+
+ ### [Django](#tab/django)
+
+ - **Resource group** &rarr; The container for all the created resources.
+ - **App Service plan** &rarr; Defines the compute resources for App Service. A Linux plan in the *B1* tier is specified.
+ - **App Service** &rarr; Represents your app and runs in the App Service plan.
+ - **Virtual network** &rarr; Integrated with the App Service app and isolates back-end network traffic.
+ - **Azure Database for PostgreSQL flexible server** &rarr; Accessible only from within the virtual network. A database and a user are created for you on the server.
+ - **Azure Cache for Redis** &rarr; Accessible only from within the virtual network.
+ - **Private DNS zone** &rarr; Enables DNS resolution of the PostgreSQL server in the virtual network.
+ - **Log Analytics workspace** &rarr; Acts as the target container for your app to ship its logs, where you can also query the logs.
+
+ --
## 2. Use the database connection string The azd template you use already generated the connectivity variables for you as [app settings](configure-common.md#configure-app-settings) and outputs them to the terminal for your convenience. App settings are one way to keep connection secrets out of your code repository.
-1. In the azd output, find the app settings and find the `AZURE_POSTGRESQL_CONNECTIONSTRING`. To keep secrets safe, only the setting names are displayed. They look like this in the azd output:
+1. In the azd output, find the app settings and locate `AZURE_POSTGRESQL_CONNECTIONSTRING` and `AZURE_REDIS_CONNECTIONSTRING`. To keep secrets safe, only the setting names are displayed. They look like this in the azd output:
<pre> App Service app has the following settings: - AZURE_POSTGRESQL_CONNECTIONSTRING
+ - AZURE_REDIS_CONNECTIONSTRING
- FLASK_DEBUG - SCM_DO_BUILD_DURING_DEPLOYMENT - SECRET_KEY </pre>
-1. `AZURE_POSTGRESQL_CONNECTIONSTRING` contains the connection string to the Postgres database in Azure, and you can use it in your code to connect to it. Open *azureproject/production.py*, uncomment the following lines, and save the file:
+1. `AZURE_POSTGRESQL_CONNECTIONSTRING` contains the connection string to the Postgres database in Azure, and `AZURE_REDIS_CONNECTIONSTRING` contains the connection string to the Redis cache in Azure. You use them in your code to connect to these services. Open *azureproject/production.py*, uncomment the following lines, and save the file:
+
+ ### [Flask](#tab/flask)
```python conn_str = os.environ['AZURE_POSTGRESQL_CONNECTIONSTRING'] conn_str_params = {pair.split('=')[0]: pair.split('=')[1] for pair in conn_str.split(' ')}
-
DATABASE_URI = 'postgresql+psycopg2://{dbuser}:{dbpass}@{dbhost}/{dbname}'.format( dbuser=conn_str_params['user'], dbpass=conn_str_params['password'],
The azd template you use generated the connectivity variables for you already as
dbname=conn_str_params['dbname'] ) ```-
+
Your application code is now configured to connect to the PostgreSQL database in Azure. If you want, open `app.py` and see how the `DATABASE_URI` environment variable is used.
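    For context, the injected PostgreSQL connection string is a space-separated list of `key=value` pairs, which is why the snippet splits on spaces and then on `=`. A standalone sketch with an illustrative value (the real value comes from the `AZURE_POSTGRESQL_CONNECTIONSTRING` app setting):

    ```python
    # Illustrative value only; do not hard-code real credentials.
    conn_str = "dbname=restaurants host=example.postgres.database.azure.com user=appuser password=example-password"

    # Same parsing approach as azureproject/production.py.
    conn_str_params = {pair.split('=')[0]: pair.split('=')[1] for pair in conn_str.split(' ')}

    DATABASE_URI = 'postgresql+psycopg2://{dbuser}:{dbpass}@{dbhost}/{dbname}'.format(
        dbuser=conn_str_params['user'],
        dbpass=conn_str_params['password'],
        dbhost=conn_str_params['host'],
        dbname=conn_str_params['dbname'],
    )
    print(DATABASE_URI)
    ```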
-2. In the terminal, run `azd deploy`
+ ### [Django](#tab/django)
+
+ ```python
+ conn_str = os.environ['AZURE_POSTGRESQL_CONNECTIONSTRING']
+ conn_str_params = {pair.split('=')[0]: pair.split('=')[1] for pair in conn_str.split(' ')}
+ DATABASES = {
+ 'default': {
+ 'ENGINE': 'django.db.backends.postgresql',
+ 'NAME': conn_str_params['dbname'],
+ 'HOST': conn_str_params['host'],
+ 'USER': conn_str_params['user'],
+ 'PASSWORD': conn_str_params['password'],
+ }
+ }
+
+ CACHES = {
+ "default": {
+ "BACKEND": "django_redis.cache.RedisCache",
+ "LOCATION": os.environ.get('AZURE_REDIS_CONNECTIONSTRING'),
+ "OPTIONS": {
+ "CLIENT_CLASS": "django_redis.client.DefaultClient",
+ "COMPRESSOR": "django_redis.compressors.zlib.ZlibCompressor",
+ },
+ }
+ }
+ ```
+
+ Your application code is now configured to connect to the PostgreSQL database and Redis cache in Azure.
+
+ --
+
+1. In the terminal, run `azd deploy`.
```bash azd deploy
The azd template you use generated the connectivity variables for you already as
With the PostgreSQL database protected by the virtual network, the easiest way to run [Flask database migrations](https://flask-migrate.readthedocs.io/en/latest/) is in an SSH session with the App Service container.
+### [Flask](#tab/flask)
+ 1. In the azd output, find the URL for the SSH session and navigate to it in the browser. It looks like this in the output: <pre>
With the PostgreSQL database protected by the virtual network, the easiest way t
> Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted. >
+### [Django](#tab/django)
+
+1. In the azd output, find the URL for the SSH session and navigate to it in the browser. It looks like this in the output:
+
+ <pre>
+ Open SSH session to App Service container at: https://&lt;app-name>.scm.azurewebsites.net/webssh/host
+ </pre>
+
+1. In the SSH terminal, run `python manage.py migrate`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations).
+
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-2.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-generate-db-schema-django-2.png":::
+
+ > [!NOTE]
+ > Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted.
+ >
+
+--
+ ## 5. Browse to the app 1. In the azd output, find the URL of your app and navigate to it in the browser. The URL looks like this in the AZD output:
With the PostgreSQL database protected by the virtual network, the easiest way t
2. Add a few restaurants to the list.
- :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Flask web app with PostgreSQL running in Azure showing restaurants and restaurant reviews." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png":::
+ ### [Flask](#tab/flask)
+
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png" alt-text="A screenshot of the Flask web app with PostgreSQL running in Azure showing restaurants and restaurant reviews (Flask)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png":::
+
+ ### [Django](#tab/django)
+
+ :::image type="content" source="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2-django.png" alt-text="A screenshot of the Django web app with PostgreSQL running in Azure showing restaurants and restaurant reviews (Django)." lightbox="./media/tutorial-python-postgresql-app/azure-portal-browse-app-2.png":::
+
+ --
Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL.
Azure App Service can capture console logs to help you diagnose issues with your
The sample app includes `print()` statements to demonstrate this capability as shown in the following snippet.
+### [Flask](#tab/flask)
+ :::code language="python" source="~/msdocs-flask-postgresql-sample-app/app.py" range="37-41" highlight="3"::: -- In the azd output, find the link to stream App Service logs and navigate to it in the browser. The link looks like this in the azd output:
+### [Django](#tab/django)
- <pre>
- Stream App Service logs at: https://portal.azure.com/#@/resource/subscriptions/&lt;subscription-guid>/resourceGroups/&lt;group-name>/providers/Microsoft.Web/sites/&lt;app-name>/logStream
- </pre>
+
+--
+
+In the azd output, find the link to stream App Service logs and navigate to it in the browser. The link looks like this in the azd output:
+
+<pre>
+Stream App Service logs at: https://portal.azure.com/#@/resource/subscriptions/&lt;subscription-guid>/resourceGroups/&lt;group-name>/providers/Microsoft.Web/sites/&lt;app-name>/logStream
+</pre>
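Anything the app writes to standard output or standard error shows up in this stream. A minimal sketch of the kind of statements that appear (the messages are illustrative):

```python
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)

print("Request for restaurant list received")          # print() output is captured
logging.info("Connected to PostgreSQL successfully")   # logging output is captured too
```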
Learn more about logging in Python apps in the series on [setting up Azure Monitor for your Python application](/azure/azure-monitor/app/opencensus-python).
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
### Hitachi |Solution and version |Kubernetes version |Azure Arc-enabled data services version |SQL engine version |PostgreSQL server version| |--|--|--|--|--|
-|Red Hat OCP 4.12.30|1.25.11|1.25.0_2023-11-14|16.0.5100.7246|Not validated|
+|[Hitachi UCP with Red Hat OpenShift](https://www.hitachivantara.com/en-us/solutions/hybrid-cloud-infrastructure.html)|1.25.11|1.25.0_2023-11-14|16.0.5100.7246|Not validated|
|Hitachi Virtual Storage Software Block software-defined storage (VSSB)|1.24.12 |1.20.0_2023-06-13 |16.0.5100.7242 |14.5 (Ubuntu 20.04)| |Hitachi Virtual Storage Platform (VSP) |1.24.12 |1.19.0_2023-05-09 |16.0.937.6221 |14.5 (Ubuntu 20.04)|
-|[Hitachi UCP with RedHat OpenShift](https://www.hitachivantara.com/en-us/solutions/modernize-digital-core/infrastructure-modernization/hybrid-cloud-infrastructure.html) |1.23.12 |1.16.0_2023-02-14 |16.0.937.6221 |14.5 (Ubuntu 20.04)|
+|[Hitachi UCP with VMware Tanzu](https://www.hitachivantara.com/en-us/solutions/hybrid-cloud-infrastructure.html)|1.23.8 |1.16.0_2023-02-14 |16.0.937.6221 |14.5 (Ubuntu 20.04)|
### HPE
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 10/18/2023 Last updated : 12/01/2023
Starting with [`microsoft.flux` v1.8.0](extensions-release.md#flux-gitops), you
1. Be sure to provide proper permissions for workload identity for the resource that you want source-controller or image-reflector controller to pull. For example, if using Azure Container Registry, `AcrPull` permissions are required. - ## Delete the Flux configuration and extension
-Use the following commands to delete your Flux configuration and, if desired, the Flux extension itself.
+Use the following commands to delete your Flux configurations and, if desired, the Flux extension itself.
### [Azure CLI](#tab/azure-cli)
-#### Delete the Flux configuration
+#### Delete the Flux configurations
The following command deletes both the `fluxConfigurations` resource in Azure and the Flux configuration objects in the cluster. Because the Flux configuration was originally created with the `prune=true` parameter for the kustomization, all of the objects created in the cluster based on manifests in the Git repository will be removed when the Flux configuration is removed. However, this command doesn't remove the Flux extension itself.
az k8s-configuration flux delete -g flux-demo-rg -c flux-demo-arc -n cluster-con
When you delete the Flux extension, both the `microsoft.flux` extension resource in Azure and the Flux extension objects in the cluster will be removed.
+> [!IMPORTANT]
+> Be sure to delete all Flux configurations in the cluster before you delete the Flux extension. Deleting the extension without first deleting the Flux configurations may leave your cluster in an unstable condition.
+ If the Flux extension was created automatically when the Flux configuration was first created, the extension name will be `flux`. ```azurecli
When you delete a Flux configuration, all of the Flux configuration objects in t
When you delete the Flux extension, both the `microsoft.flux` extension resource in Azure and the Flux extension objects in the cluster will be removed.
+> [!IMPORTANT]
+> Be sure to delete all Flux configurations in the cluster before you delete the Flux extension. Deleting the extension without first deleting the Flux configurations may leave your cluster in an unstable condition.
+ For an Azure Arc-enabled Kubernetes cluster, navigate to the cluster and select **Extensions**. Select the `flux` extension and select **Uninstall**, then confirm the deletion. For AKS clusters, you can't use the Azure portal to delete the extension. Instead, use the following Azure CLI command:
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
Links to the current and previous releases of the Windows agents are available b
sudo zypper install -f azcmagent-1.28.02260-755 ``` + ## Upgrade the agent
Proxy bypass value when set to `ArcData` only bypasses the traffic of the Azure
| | | | `AAD` | `login.windows.net`</br>`login.microsoftonline.com`</br> `pas.windows.net` | | `ARM` | `management.azure.com` |
-| `Arc` | `his.arc.azure.com`</br>`guestconfiguration.azure.com`</br> `san-af-<location>-prod.azurewebsites.net`</br>`telemetry.<location>.arcdataservices.com`|
+| `Arc` | `his.arc.azure.com`</br>`guestconfiguration.azure.com` |
| `ArcData` <sup>1</sup> | `san-af-<region>-prod.azurewebsites.net`</br>`telemetry.<location>.arcdataservices.com` | <sup>1</sup> The proxy bypass value `ArcData` is available starting with Azure Connected Machine agent version 1.36 and Azure Extension for SQL Server version 1.1.2504.99. Earlier versions include the SQL Server enabled by Azure Arc endpoints in the "Arc" proxy bypass value.
azure-arc Enable Guest Management At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md
Previously updated : 11/15/2023 Last updated : 12/01/2023 keywords: "VMM, Arc, Azure" #Customer intent: As an IT infrastructure admin, I want to install arc agents to use Azure management services for SCVMM VMs.
An admin can install agents for multiple machines from the Azure portal if the m
## Next steps
-[Recover from accidental deletion of resource bridge virtual machine](disaster-recovery.md).
+[Manage VM extensions to use Azure management services for your SCVMM VMs](../servers/manage-vm-extensions.md).
azure-arc Install Arc Agents Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script.md
Title: Install Arc agent using a script for SCVMM VMs description: Learn how to enable guest management using a script for Arc enabled SCVMM VMs. Previously updated : 11/29/2023 Last updated : 12/01/2023
Ensure the following before you install Arc agents using a script for SCVMM VMs:
## Next steps
-[Manage VM extensions to use Azure management services](https://learn.microsoft.com/azure/azure-arc/servers/manage-vm-extensions).
+[Manage VM extensions to use Azure management services for your SCVMM VMs](../servers/manage-vm-extensions.md).
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
ms. Previously updated : 11/27/2023 Last updated : 12/01/2023
If for any reason, the appliance creation fails, you need to retry it. Run the c
## Next steps
-[Create a VM](create-virtual-machine.md)
+- [Browse and enable SCVMM resources through Azure RBAC](enable-scvmm-inventory-resources.md).
+- [Create a VM using Azure Arc-enabled SCVMM](create-virtual-machine.md).
azure-arc Remove Vcenter From Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md
description: This article explains the steps to cleanly remove your VMware vCent
Previously updated : 11/06/2023 Last updated : 11/30/2023
You can remove your VMware vSphere resources from Azure Arc using either the deb
### Remove VMware vSphere resources from Azure Arc using deboarding script
-Use the deboarding script to do a full cleanup of all the Arc-enabled VMware resources. The script removes all the Azure resources, including vCenter, custom location, virtual machines, virtual templates, hosts, clusters, resource pools, datastores, virtual networks, Azure Resource Manager (ARM) resource of Appliance, and the appliance VM running on vCenter.
+Download the [deboarding script](https://aka.ms/arcvmwaredeboard) to do a full cleanup of all the Arc-enabled VMware resources. The script removes all the Azure resources, including vCenter, custom location, virtual machines, virtual templates, hosts, clusters, resource pools, datastores, virtual networks, Azure Resource Manager (ARM) resource of Appliance, and the appliance VM running on vCenter.
-```powershell
-[CmdletBinding()]
-Param(
- [string] $vCenterId,
- [string] $AVSId,
- [string] $ApplianceConfigFilePath,
- [switch] $Force
-)
-
-$DeleteFailedThreshold = 20
-$AVS_API_Version = "2022-05-01"
-
-$logFile = Join-Path $PSScriptRoot "arcvmware-deboard.log"
-
-function logText($msg) {
- $msgFull = "$(Get-Date -UFormat '%T') $msg"
- Write-Host $msgFull
- Write-Output $msgFull >> $logFile
-}
-
-function fail($msg) {
- $msgFull = @"
- $(Get-Date -UFormat '%T') Script execution failed with error: $msg
- $(Get-Date -UFormat '%T') Debug logs have been dumped to $logFile
- $(Get-Date -UFormat '%T') The script will terminate shortly
-"@
- Write-Host -ForegroundColor Red $msgFull >> $logFile
- Write-Output $msgFull >> $logFile
- Start-Sleep -Seconds 5
- exit 1
-}
-
-if (!($PSBoundParameters.ContainsKey('vCenterId') -xor $PSBoundParameters.ContainsKey('AVSId'))) {
- fail "Please specify either vCenterId or AVSId, not both."
-}
--
-logText "Writing debug logs to $logFile"
-
-logText "Installing az cli extensions for Arc"
-az extension add --upgrade --name arcappliance
-az extension add --upgrade --name k8s-extension
-az extension add --upgrade --name customlocation
-$vmware_ext_ver = az version --query 'extensions.connectedvmware' -o tsv 2>> $logFile
-if ($vmware_ext_ver -and [System.Version]$vmware_ext_ver -gt [System.Version]"0.1.12") {
- logText "Removing the connectedvmware extension and pinning it to 0.1.12"
- az extension remove --name connectedvmware --debug 2>> $logFile
-}
-az extension add --upgrade --name connectedvmware --version 0.1.12
-az extension add --upgrade --name resource-graph
-
-logText "Fetching some information related to the vCenter..."
-if ($PSBoundParameters.ContainsKey('AVSId')) {
- $vCenterId = az rest --method get --url "$AVSId/addons/arc?api-version=$AVS_API_Version" --query "properties.vCenter" -o tsv --debug 2>> $logFile
- if ($null -eq $vCenterId) {
- fail "Unable to find vCenter ID for AVS $AVSId"
- }
- logText "vCenterId is $vCenterId"
-}
-else {
- $exists = az connectedvmware vcenter show --ids $vCenterId --debug 2>> $logFile
- if ($null -eq $exists) {
- fail "Unable to find vCenter ID $vCenterId"
- }
-}
-
-$customLocationID = az resource show --ids $vCenterId --query extendedLocation.name -o tsv --debug 2>> $logFile
-$customLocation = az resource show --ids $customLocationID --debug 2>> $logFile | ConvertFrom-Json
-
-if ($null -ne $customLocation) {
- $clusterExtensionIds = $customLocation.properties.clusterExtensionIds
- $applianceId = $customLocation.properties.hostResourceId
-}
-
-$otherCustomLocationsInAppliance = $(az graph query -q @"
- Resources
- | where type =~ 'Microsoft.ExtendedLocation/customLocations'
- | where id !~ '$customLocationID'
- | where properties.hostResourceId =~ '$applianceId'
- | project id
-"@.Replace("`r`n", " ").Replace("`n", " ") --debug 2>> $logFile | ConvertFrom-Json).data.id
-
-$resourceTypes = [PSCustomObject]@(
- @{ Type = "Microsoft.ConnectedVMwareVsphere/VirtualMachines"; InventoryType = "VirtualMachine"; AzSubCommand = "vm"; AzArgs = @("--retain") },
- @{ Type = "Microsoft.ConnectedVMwareVsphere/VirtualMachineTemplates"; InventoryType = "VirtualMachineTemplate"; AzSubCommand = "vm-template" },
- @{ Type = "Microsoft.ConnectedVMwareVsphere/Hosts"; InventoryType = "Host"; AzSubCommand = "host" },
- @{ Type = "Microsoft.ConnectedVMwareVsphere/Clusters"; InventoryType = "Cluster"; AzSubCommand = "cluster" },
- @{ Type = "Microsoft.ConnectedVMwareVsphere/ResourcePools"; InventoryType = "ResourcePool"; AzSubCommand = "resource-pool" },
- @{ Type = "Microsoft.ConnectedVMwareVsphere/Datastores"; InventoryType = "Datastore"; AzSubCommand = "datastore" },
- @{ Type = "Microsoft.ConnectedVMwareVsphere/VirtualNetworks"; InventoryType = "VirtualNetwork"; AzSubCommand = "virtual-network" }
-)
-
-foreach ($resourceType in $resourceTypes) {
- $resourceIds = @()
- $skipToken = $null
- $query = @"
-(
- Resources
- | where type =~ '$($resourceType.Type)'
- | where properties.vCenterId =~ '$vCenterId'
- | project id=tolower(id)
- | union (
- ConnectedVMwareVsphereResources
- | where type =~ 'Microsoft.ConnectedVMwareVsphere/VCenters/InventoryItems' and kind =~ '$($resourceType.InventoryType)'
- | where id startswith '$vCenterId/InventoryItems'
- | where properties.managedResourceId != ''
- | extend id=tolower(tostring(properties.managedResourceId))
- | project id
- )
-) | distinct id
-"@.Replace("`r`n", " ").Replace("`n", " ")
- logText "Searching $($resourceType.Type)..."
- $deleteFailed = @()
- while ($true) {
- if ($skipToken) {
- $page = az graph query --skip-token $skipToken -q $query --debug 2>> $logFile | ConvertFrom-Json
- }
- else {
- $page = az graph query -q $query --debug 2>> $logFile | ConvertFrom-Json
- }
- $page.data | ForEach-Object {
- $resourceIds += $_.id
- }
- if ($null -eq $page.skip_token) {
- break
- }
- $skipToken = $page.skip_token
- }
- logText "Found $($resourceIds.Count) $($resourceType.Type)"
-
- $azArgs = $resourceType.AzArgs
- if ($Force) {
- $azArgs = @("--force")
- }
- $width = $resourceIds.Count.ToString().Length
- for ($i = 0; $i -lt $resourceIds.Count; $i++) {
- $resourceId = $resourceIds[$i]
- logText $("({0,$width}/$($resourceIds.Count)) Deleting $resourceId" -f $($i + 1))
- az connectedvmware $resourceType.AzSubCommand delete --debug --yes --ids $resourceId $azArgs 2>> $logFile
- if ($LASTEXITCODE -ne 0) {
- logText "Failed to delete $resourceId"
- $deleteFailed += $resourceId
- }
- if ($deleteFailed.Count -gt $DeleteFailedThreshold) {
- fail @"
- Failed to delete $($deleteFailed.Count) resources. Skipping the deletion of the rest of the resources in the vCenter.
- The resource ID of these resources are:
-`t$($deleteFailed -join "`n`t")
-
- Skipping vCenter deletion.
-"@
- }
- }
-}
-
-if ($deleteFailed.Count -gt 0) {
- fail @"
- Failed to delete $($deleteFailed.Count) resources. The resource ID of these resources are:
-`t$($deleteFailed -join "`n`t")
-
- Skipping vCenter deletion.
-"@
-}
-
-Write-Host ""
-logText "Successfully deleted all the resources in the vCenter"
-logText "Deleting the vCenter: $vCenterId"
-$azArgs = @()
-if ($Force) {
- $azArgs = @("--force")
-}
-az connectedvmware vcenter delete --debug --yes --ids $vCenterId $azArgs 2>> $logFile
-if ($LASTEXITCODE -ne 0) {
- fail "Failed to delete $vCenterId"
-}
-if ($PSBoundParameters.ContainsKey('AVSId')) {
- logText "Deleting the arc addon for the AVS $AVSId"
- az rest --method delete --debug --url "$AVSId/addons/arc?api-version=$AVS_API_Version" 2>> $logFile
- if ($LASTEXITCODE -ne 0) {
- fail "Failed to delete $AVSId/addons/arc"
- }
-}
-
-function extractPartsFromID($id) {
- $id -match "/+subscriptions/+([^/]+)/+resourceGroups/+([^/]+)/+providers/+([^/]+)/+([^/]+)/+([^/]+)"
- return @{
- SubscriptionId = $Matches[1]
- ResourceGroup = $Matches[2]
- Provider = $Matches[3]
- Type = $Matches[4]
- Name = $Matches[5]
- }
-}
-
-if ($null -ne $clusterExtensionIds -and $clusterExtensionIds.Count -gt 1) {
- logText "Skipping the deletion of custom location and appliance because there are multiple cluster extensions enabled in the custom location"
- logText "The cluster extension IDs are:"
- logText " $($clusterExtensionIds -join "`n ")"
- exit 0
-}
-if ($null -eq $customLocation) {
- logText "The custom location '$customLocationID' is not found. Skipping the deletion of the custom location."
-}
-else {
- logText "Deleting the custom location: $customLocationID"
- $clInfo = extractPartsFromID $customLocationID
- az customlocation delete --debug --yes --subscription $clInfo.SubscriptionId --resource-group $clInfo.ResourceGroup --name $clInfo.Name 2>> $logFile
- # The command above is returning error when the cluster is not reachable, so $LASTEXITCODE is not reliable.
- # Instead, check if resource is not found after delete, else throw error.
- $cl = az resource show --ids $customLocationID --debug 2>> $logFile
- if ($cl) {
- fail "Failed to delete $customLocationID"
- }
-}
-if ($otherCustomLocationsInAppliance.Count -gt 0) {
- logText "Skipping the deletion of the appliance because there are other custom locations in the appliance"
- logText "The custom location IDs of these custom locations are:"
- logText " $($otherCustomLocationsInAppliance -join "`n ")"
- exit 0
-}
-
-if ($PSBoundParameters.ContainsKey('ApplianceConfigFilePath')) {
- logText "Deleting the appliance: $applianceId"
- az arcappliance delete vmware --debug --yes --config-file $ApplianceConfigFilePath 2>> $logFile
- if ($LASTEXITCODE -ne 0) {
- fail "Failed to delete $applianceId"
- }
-}
-else {
- logText "Skipping the deletion of the appliance VM on the VCenter because the appliance config file path is not provided"
- logText "Just deleting the ARM resource of the appliance: $applianceId"
- az resource delete --debug --ids $applianceId 2>> $logFile
- if ($LASTEXITCODE -ne 0) {
- fail "Failed to delete $applianceId"
- }
-}
-logText "Cleanup Complete!"
-```
#### Run the script To run the deboarding script, follow these steps:
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
The following table explains the binding configuration properties that you set i
## Optional Configuration
-In addition to the required ConnectionStringSetting [application setting](./functions-how-to-use-azure-function-app-settings.md#settings), the following optional settings can be configured for the SQL trigger:
+The following optional settings can be configured for the SQL trigger:
-| App Setting | Description|
+
+| Setting | Description|
||| |**Sql_Trigger_BatchSize** |The maximum number of changes processed with each iteration of the trigger loop before being sent to the triggered function. The default value is 100.| |**Sql_Trigger_PollingIntervalMs**|The delay in milliseconds between processing each batch of changes. The default value is 1000 (1 second).| |**Sql_Trigger_MaxChangesPerWorker**|The upper limit on the number of pending changes in the user table that are allowed per application-worker. If the count of changes exceeds this limit, it might result in a scale-out. The setting only applies for Azure Function Apps with [runtime driven scaling enabled](#enable-runtime-driven-scaling). The default value is 1000.| -- ## Set up change tracking (required) Setting up change tracking for use with the Azure SQL trigger requires two steps. These steps can be completed from any SQL tool that supports running queries, including [Visual Studio Code](/sql/tools/visual-studio-code/mssql-extensions), [Azure Data Studio](/azure-data-studio/download-azure-data-studio) or [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms).
azure-functions Functions Develop Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-local.md
Title: Develop and run Azure Functions locally description: Learn how to code and test Azure Functions on your local computer before you run them on Azure Functions.- Previously updated : 09/22/2022- Last updated : 11/29/2023 + # Code and test Azure Functions locally While you're able to develop and test Azure Functions in the [Azure portal], many developers prefer a local development experience. When you use Functions, using your favorite code editor and development tools to create and test functions on your local computer becomes easier. Your local functions can connect to live Azure services, and you can debug them on your local computer using the full Functions runtime.
When you develop your functions locally, any local settings required by your app
## Triggers and bindings
-When you develop your functions locally, you need to take trigger and binding behaviors into consideration. The easiest way to test bindings during local development is to use connection strings that target live Azure services. You can target live services by adding the appropriate connection string settings in the `Values` array in the local.settings.json file. When you do this, local executions during testing impact live service data. Because of this, consider setting-up separate services to use during development and testing, and then switch to different services during production. You can also use a local storage emulator.
+When you develop your functions locally, you need to take trigger and binding behaviors into consideration. For HTTP triggers, you can simply call the HTTP endpoint on the local computer, using `http://localhost/`. For non-HTTP triggered functions, there are several options to run locally:
++ The easiest way to test bindings during local development is to use connection strings that target live Azure services. You can target live services by adding the appropriate connection string settings in the `Values` array in the local.settings.json file. When you do this, local executions during testing impact live service data. Because of this, consider setting up separate services to use during development and testing, and then switch to different services during production.
++ For storage-based triggers, you can use a [local storage emulator](#local-storage-emulator).
++ You can manually run non-HTTP trigger functions by using special administrator endpoints. For more information, see [Manually run a non HTTP-triggered function](functions-manually-run-non-http.md) and the sketch that follows.
+
+During local testing, you must be running the host provided by Core Tools (func.exe) locally. For more information, see [Azure Functions Core Tools](functions-run-local.md).
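As a sketch of the third option, once the Core Tools host is running you can trigger a non-HTTP function by POSTing to its local admin endpoint. The function name and port below are assumptions for illustration; see the linked article for the exact request format:

```python
import requests

# Assumes the Core Tools host is running locally (func start) on the default port 7071,
# and that the app contains a non-HTTP function named "MyTimerFunction" (illustrative name).
response = requests.post(
    "http://localhost:7071/admin/functions/MyTimerFunction",
    json={"input": ""},
)
print(response.status_code)  # 202 typically indicates the invocation was accepted
```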
## Local storage emulator
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
The following list shows the QPS usage limits for each Azure Maps service by Pri
| -- | :--: | :: | :: | | Copyright service | 10 | 10 | 10 | | Creator - Alias, TilesetDetails | 10 | Not Available | Not Available |
-| Creator - Conversion, Dataset, Feature State, WFS | 50 | Not Available | Not Available |
+| Creator - Conversion, Dataset, Feature State, Features, Map Configuration, Style, Routeset, Wayfinding | 50 | Not Available | Not Available |
| Data service (Deprecated<sup>1</sup>) | 50 | 50 | Not Available | | Data registry service | 50 | 50 | Not Available | | Geolocation service | 50 | 50 | 50 |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
View [supported operating systems for Azure Arc Connected Machine agent](../../a
| Red Hat Enterprise Linux Server 8.6+ | ✓<sup>3</sup> | ✓ | ✓<sup>2</sup> | | Red Hat Enterprise Linux Server 8.0-8.5 | ✓ | ✓ | ✓<sup>2</sup> | | Red Hat Enterprise Linux Server 7 | ✓ | ✓ | ✓ |
-| Red Hat Enterprise Linux Server 6.7+ | | | ✓ |
+| Red Hat Enterprise Linux Server 6.7+ | | | |
| Rocky Linux 9 | ✓ | ✓ | | | Rocky Linux 8 | ✓ | ✓ | | | SUSE Linux Enterprise Server 15 SP4 | ✓<sup>3</sup> | | |
Yes, but you need to [onboard to Defender for Cloud](./azure-monitor-agent-overv
Azure Monitor Agent authenticates to your workspace via managed identity, which is created when you install the Connected Machine agent. Managed Identity is a more secure and manageable authentication solution from Azure. The legacy Log Analytics agent authenticated by using the workspace ID and key instead, so it didn't need Azure Arc.
-### Does the new Azure Monitor Agent have hardening support for Linux?
-
-Hardening support for Linux isn't available yet.
- ## Next steps - [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
azure-monitor Azure Monitor Agent Send Data To Event Hubs And Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-send-data-to-event-hubs-and-storage.md
Create a data collection rule for collecting events and sending to storage and e
}, { "streams": [
- "Microsoft-WindowsEvent"
+ "Microsoft-Event"
], "destinations": [ "myEh1",
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
Let's assume the input log message body is `User account with userId 123456xx fa
"body": { "toAttributes": { "rules": [
- "^User account with userId (?<redactedUserId>[\\da-zA-Z]+)[\\w\\s]+"
+ "userId (?<redactedUserId>[\\da-zA-Z]+)"
] } }
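To sanity-check what the simplified rule captures, here's a quick sketch in Python. Note that Python names groups with `(?P<name>...)`, while the agent configuration uses Java-style `(?<name>...)`; the message text is modeled on the sample above:

```python
import re

body = "User account with userId 123456xx failed to log in"  # illustrative message
match = re.search(r"userId (?P<redactedUserId>[\da-zA-Z]+)", body)
if match:
    print(match.group("redactedUserId"))  # prints: 123456xx
```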
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
The following table represents the currently supported custom telemetry types:
> [!NOTE] > Custom Metrics are under preview in Azure Monitor Application Insights. Custom metrics without dimensions are available by default. To view and alert on dimensions, you need to [opt-in](pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation).
-Consider collecting more metrics beyond what's provided by the instrumentation libraries.
- The OpenTelemetry API offers six metric "instruments" to cover various metric scenarios and you need to pick the correct "Aggregation Type" when visualizing metrics in Metrics Explorer. This requirement is true when using the OpenTelemetry Metric API to send metrics and when using an instrumentation library. The following table shows the recommended [aggregation types](../essentials/metrics-aggregation-explained.md#aggregation-types) for each of the OpenTelemetry Metric Instruments.
Currently unavailable.
### Send custom telemetry using the Application Insights Classic API
-We recommend you use the OpenTelemetry APIs whenever possible, but there might be some scenarios when you have to use the Application Insights [Classic API](api-custom-events-metrics.md)s.
+We recommend you use the OpenTelemetry APIs whenever possible, but there might be some scenarios when you have to use the Application Insights [Classic API](api-custom-events-metrics.md).
#### [ASP.NET Core](#tab/aspnetcore)
-Not available in .NET.
+##### Events
+
+1. Add `Microsoft.ApplicationInsights` to your application.
+
+2. Create a `TelemetryClient` instance.
+
+> [!NOTE]
+> It's important to only create one instance of the TelemetryClient per application.
+
+```csharp
+var telemetryConfiguration = new TelemetryConfiguration { ConnectionString = "" };
+var telemetryClient = new TelemetryClient(telemetryConfiguration);
+```
+
+3. Use the client to send custom telemetry.
+
+```csharp
+telemetryClient.TrackEvent("testEvent");
+```
#### [.NET](#tab/net)
-Not available in .NET.
+##### Events
+
+1. Add `Microsoft.ApplicationInsights` to your application.
+
+2. Create a `TelemetryClient` instance.
+
+> [!NOTE]
+> It's important to only create one instance of the TelemetryClient per application.
+
+```csharp
+var telemetryConfiguration = new TelemetryConfiguration { ConnectionString = "" };
+var telemetryClient = new TelemetryClient(telemetryConfiguration);
+```
+
+3. Use the client to send custom telemetry.
+
+```csharp
+telemetryClient.TrackEvent("testEvent");
+```
#### [Java](#tab/java)
Attaching custom dimensions to logs can be accomplished using a [message templat
Logback, Log4j, and java.util.logging are [autoinstrumented](#logs). Attaching custom dimensions to your logs can be accomplished in these ways:
-* [Log4j 2.0 MapMessage](https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` is captured as the log message)
+* [Log4j 2.0 MapMessage](https://logging.apache.org/log4j/2.0/javadoc/log4j-api/org/apache/logging/log4j/message/MapMessage.html) (a `MapMessage` key of `"message"` is captured as the log message)
* [Log4j 2.0 Thread Context](https://logging.apache.org/log4j/2.x/manual/thread-context.html) * [Log4j 1.2 MDC](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/MDC.html)
azure-monitor Azure Monitor Rest Api Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-monitor-rest-api-index.md
Organized by subject area.
| [Predictive metric](/rest/api/monitor/predictive-metric) | Retrieves predicted autoscale metric data. | | ***Data Collection Endpoints*** | | | [Data collection endpoints](/rest/api/monitor/data-collection-endpoints) | Create and manage a data collection endpoint and retrieve the data collection endpoints within a resource group or subscription. |
-| ***Data Collection Rules*** | Create and manage a data collection rule and retrieve the data collection rules within a resource group or subscription. |
+| ***Data Collection Rules*** | |
| [Data collection rule associations](/rest/api/monitor/data-collection-rule-associations) | Create and manage a data collection rule association and retrieve the data collection rule associations for a data collection endpoint, resource, or data collection rule. | | [Data collection rules](/rest/api/monitor/data-collection-rules) | Create and manage a data collection rule and retrieve the data collection rules within a resource group or subscription. | | ***Diagnostic Settings*** | |
azure-monitor Container Insights Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-syslog.md
Container Insights offers the ability to collect Syslog events from Linux nodes in your [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) clusters. This includes the ability to collect logs from control plane components like kubelet. Customers can also use Syslog for monitoring security and health events, typically by ingesting syslog into a SIEM system like [Microsoft Sentinel](https://azure.microsoft.com/products/microsoft-sentinel/#overview). > [!IMPORTANT]
-> Syslog collection is now GA. However due to slower rollouts towards the year end, the agent version with the GA changes will not be in all regions until January 2024. Agent versions 3.1.16 and above have Syslog GA changes. Please check agent version before enabling in production.
+> Syslog collection is now GA. However, due to slower rollouts toward the end of the year, the agent version with the GA changes won't be available in all regions until the end of January 2024. Agent versions 3.1.16 and above include the Syslog GA changes. Please check the agent version before enabling in production.
## Prerequisites
azure-monitor Vminsights Enable Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md
Previously updated : 09/28/2023 Last updated : 11/28/2023 # Enable VM insights by using PowerShell
This article describes how to enable VM insights on Azure virtual machines by us
- Azure Virtual Machines - Azure Virtual Machine Scale Sets
-> [!NOTE]
-> The PowerShell script provided in this article enables VM Insights with the Log Analytics agent. We'll update it to support Azure Monitoring Agent shortly. In the meantime, to enable VM insights with Azure Monitor Agent, use the other installation methods described in [Enable VM insights overview](vminsights-enable-overview.md).
+This script installs VM extensions for the Log Analytics agent or Azure Monitor Agent (AMA) and the Dependency agent as needed for VM insights. If AMA is onboarded, a data collection rule (DCR) and a user-assigned managed identity (UAMI) are also associated with the virtual machines and virtual machine scale sets.
+ ## Prerequisites You need to: - [Configure a Log Analytics workspace for VM insights](../vm/vminsights-configure-workspace.md).-- See [Supported operating systems](./vminsights-enable-overview.md#supported-operating-systems) to ensure that the operating system of the virtual machine or virtual machine scale set you're enabling is supported. - See [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md#prerequisites) for prerequisites related to Azure Monitor Agent.
+- See [Supported operating systems](./vminsights-enable-overview.md#supported-operating-systems) to ensure that the operating system of the virtual machine or virtual machine scale set you're enabling is supported.
+ ## PowerShell script
-To enable VM insights for multiple VMs or virtual machine scale set, use the PowerShell script [Install-VMInsights.ps1](https://www.powershellgallery.com/packages/Install-VMInsights). The script is available from the Azure PowerShell Gallery. This script iterates through:
+To enable VM insights for multiple VMs or virtual machine scale sets, use the PowerShell script [Install-VMInsights.ps1](https://www.powershellgallery.com/packages/Install-VMInsights). The script is available from the Azure PowerShell Gallery. This script iterates through the virtual machines or virtual machine scale sets according to the parameters that you specify. The script can be used to enable VM insights for:
- Every virtual machine and virtual machine scale set in your subscription.-- The scoped resource group that's specified by *ResourceGroup*.-- A single VM or virtual machine scale set that's specified by *Name*.
+- The scoped resource group that's specified by `-ResourceGroup`.
+- A single VM or virtual machine scale set that's specified by `-Name`.
-For each virtual machine or virtual machine scale set, the script verifies whether the VM extension for the Log Analytics agent and Dependency agent is already installed. If both extensions are installed, the script tries to reinstall it. If both extensions aren't installed, the script installs them.
-Verify that you're using Azure PowerShell module Az version 1.0.0 or later with `Enable-AzureRM` compatibility aliases enabled. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
+Verify that you're using Az PowerShell module version 1.0.0 or later with `Enable-AzureRM` compatibility aliases enabled. Run `Get-Module -ListAvailable Az` to find the version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
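As a quick sanity check before running the script, the commands called out above can be run like this (the subscription selection is an optional extra step, not part of the original instructions):

```powershell
# Confirm the installed Az module version is 1.0.0 or later.
Get-Module -ListAvailable Az | Select-Object Name, Version

# Sign in if you're running PowerShell locally.
Connect-AzAccount

# Optional: select the subscription that contains the VMs or scale sets to onboard.
Set-AzContext -Subscription "<SubscriptionId>"
```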
-To get a list of the script's argument details and example usage, run `Get-Help`.
+For a list of the script's argument details and example usage, run `Get-Help`.
```powershell
-Get-Help .\Install-VMInsights.ps1 -Detailed
-
-SYNOPSIS
- This script installs VM extensions for Log Analytics and the Dependency agent as needed for VM Insights.
--
-SYNTAX
- .\Install-VMInsights.ps1 [-WorkspaceId] <String> [-WorkspaceKey] <String> [-SubscriptionId] <String> [[-ResourceGroup]
- <String>] [[-Name] <String>] [[-PolicyAssignmentName] <String>] [-ReInstall] [-TriggerVmssManualVMUpdate] [-Approve] [-WorkspaceRegion] <String>
- [-WhatIf] [-Confirm] [<CommonParameters>]
--
-DESCRIPTION
- This script installs or reconfigures the following on VMs and virtual machine scale sets:
- - Log Analytics VM extension configured to supplied Log Analytics workspace
- - Dependency agent VM extension
-
- Can be applied to:
- - Subscription
- - Resource group in a subscription
- - Specific VM or virtual machine scale set
- - Compliance results of a policy for a VM or VM extension
-
- Script will show you a list of VMs or virtual machine scale sets that will apply to and let you confirm to continue.
- Use -Approve switch to run without prompting, if all required parameters are provided.
+Get-Help Install-VMInsights.ps1 -Detailed
+```
- If the extensions are already installed, they will not install again.
- Use -ReInstall switch if you need to, for example, update the workspace.
+Use the script to enable VM insights using Azure Monitor Agent and the Dependency agent, or the Log Analytics agent.
- Use -WhatIf if you want to see what would happen in terms of installs, what workspace configured to, and status of the extension.
-PARAMETERS
- -WorkspaceId <String>
- Log Analytics WorkspaceID (GUID) for the data to be sent to
+### [Azure Monitor Agent](#tab/AMA)
- -WorkspaceKey <String>
- Log Analytics Workspace primary or secondary key
- -SubscriptionId <String>
- SubscriptionId for the VMs/VM Scale Sets
- If using PolicyAssignmentName parameter, subscription that VMs are in
+**AMA onboarding**
+If AMA is onboarded, a Data Collection Rule (DCR) and a User Assigned Managed Identity (UAMI) are also associated with the VM/VMSS, and the UAMI settings are passed to the AMA extension.
- -ResourceGroup <String>
- <Optional> Resource Group to which the VMs or VM Scale Sets belong
- -Name <String>
- <Optional> To install to a single VM/VM Scale Set
+```powershell
+Install-VMInsights.ps1 -SubscriptionId <SubscriptionId> `
+[-ResourceGroup <ResourceGroup>] `
+[-ProcessAndDependencies] `
+[-Name <VM or VMSS name>] `
+-DcrResourceId <DataCollectionRuleResourceId> `
+-UserAssignedManagedIdentityName <UserAssignedIdentityName> `
+-UserAssignedManagedIdentityResourceGroup <UserAssignedIdentityResourceGroup>
- -PolicyAssignmentName <String>
- <Optional> Take the input VMs to operate on as the Compliance results from this Assignment
- If specified will only take from this source.
+```
- -ReInstall [<SwitchParameter>]
- <Optional> If VM/VM Scale Set is already configured for a different workspace, set this to change to the new workspace
+Required Arguments:
+ + `-SubscriptionId <String>` Azure subscription ID.
+ + `-DcrResourceId <String>` Azure resource ID of the Data Collection Rule (DCR).
+ + `-UserAssignedManagedIdentityResourceGroup <String>` Name of the resource group that contains the User Assigned Managed Identity (UAMI).
+ + `-UserAssignedManagedIdentityName <String>` Name of the User Assigned Managed Identity (UAMI).
+
+Optional Arguments:
+ + `-ProcessAndDependencies` Set this flag to onboard the Dependency agent with Azure Monitor Agent (AMA) settings. If not specified, only Azure Monitor Agent (AMA) is onboarded.
+ + `-Name <String>` Name of the VM or VMSS to be onboarded. If not specified, all VMs and VMSS in the subscription or resource group are onboarded.
+ + `-ResourceGroup <String>` Name of the resource group that contains the VM or VMSS to be onboarded. If not specified, all VMs and VMSS in the subscription are onboarded.
+
+Example:
+```azurepowershell
+Install-VMInsights.ps1 -SubscriptionId 12345678-abcd-abcd-1234-12345678 `
+-ResourceGroup rg-AMAPowershell `
+-ProcessAndDependencies `
+-Name vmAMAPowershellWindows `
+-DcrResourceId /subscriptions/12345678-abcd-abcd-1234-12345678/resourceGroups/rg-AMAPowershell/providers/Microsoft.Insights/dataCollectionRules/MSVMI-ama-vmi-default-dcr `
+-UserAssignedManagedIdentityName miamatest1 `
+-UserAssignedManagedIdentityResourceGroup amapowershell
+```
- -TriggerVmssManualVMUpdate [<SwitchParameter>]
- <Optional> Set this flag to trigger update of VM instances in a scale set whose upgrade policy is set to Manual
+The output has the following format:
- -Approve [<SwitchParameter>]
- <Optional> Gives the approval for the installation to start with no confirmation prompt for the listed VMs/VM Scale Sets
+```powershell
+Name Account SubscriptionName Environment TenantId
+- - - -- --
+AzMon001 12345678-abcd-123… MSI@9876 AzMon001 AzureCloud abcd1234-9876-abcd-1234-1234abcd5648
- -WorkspaceRegion <String>
- Region the Log Analytics Workspace is in
- Supported values: "East US","eastus","Southeast Asia","southeastasia","West Central US","westcentralus","West Europe","westeurope"
- For Health supported is: "East US","eastus","West Central US","westcentralus"
+Getting list of VMs or VM Scale Sets matching specified criteria.
+VMs and VMSS matching selection criteria :
- -WhatIf [<SwitchParameter>]
- <Optional> See what would happen in terms of installs.
- If extension is already installed will show what workspace is currently configured, and status of the VM extension
+ResourceGroup : rg-AMAPowershell
+ vmAMAPowershellWindows
- -Confirm [<SwitchParameter>]
- <Optional> Confirm every action
- <CommonParameters>
- This cmdlet supports the common parameters: Verbose, Debug,
- ErrorAction, ErrorVariable, WarningAction, WarningVariable,
- OutBuffer, PipelineVariable, and OutVariable. For more information, see
- about_CommonParameters (https://go.microsoft.com/fwlink/?LinkID=113216).
+Confirm
+Continue?
+[Y] Yes [N] No [S] Suspend [?] Help (default is "Y"):
- -- EXAMPLE 1 --
- .\Install-VMInsights.ps1 -WorkspaceRegion eastus -WorkspaceId <WorkspaceId> -WorkspaceKey <WorkspaceKey> -SubscriptionId <SubscriptionId>
- -ResourceGroup <ResourceGroup>
+(rg-AMAPowershell) : Assigning roles
- Install for all VMs in a resource group in a subscription
+(rg-AMAPowershell) vmAMAPowershellWindows : Assigning User Assigned Managed Identity edsMIAMATest
+(rg-AMAPowershell) vmAMAPowershellWindows : Successfully assigned User Assigned Managed Identity edsMIAMATest
+(rg-AMAPowershell) vmAMAPowershellWindows : Data Collection Rule Id /subscriptions/12345678-abcd-abcd-1234-12345678/resourceGroups/rg-AMAPowershell/providers/Microsoft.Insights/dataCollectionRules/MSVMI-ama-vmi-default-dcr already associated with the VM.
+(rg-AMAPowershell) vmAMAPowershellWindows : Extension AzureMonitorWindowsAgent, type = Microsoft.Azure.Monitor.AzureMonitorWindowsAgent already installed. Provisioning State : Succeeded
+(rg-AMAPowershell) vmAMAPowershellWindows : Installing/Updating extension AzureMonitorWindowsAgent, type = Microsoft.Azure.Monitor.AzureMonitorWindowsAgent
+(rg-AMAPowershell) vmAMAPowershellWindows : Successfully installed/updated extension AzureMonitorWindowsAgent, type = Microsoft.Azure.Monitor.AzureMonitorWindowsAgent
+(rg-AMAPowershell) vmAMAPowershellWindows : Installing/Updating extension DA-Extension, type = Microsoft.Azure.Monitoring.DependencyAgent.DependencyAgentWindows
+(rg-AMAPowershell) vmAMAPowershellWindows : Successfully installed/updated extension DA-Extension, type = Microsoft.Azure.Monitoring.DependencyAgent.DependencyAgentWindows
+(rg-AMAPowershell) vmAMAPowershellWindows : Successfully onboarded VM insights
- -- EXAMPLE 2 --
- .\Install-VMInsights.ps1 -WorkspaceRegion eastus -WorkspaceId <WorkspaceId> -WorkspaceKey <WorkspaceKey> -SubscriptionId <SubscriptionId>
- -ResourceGroup <ResourceGroup> -ReInstall
+Summary :
+Total VM/VMSS to be processed : 1
+Succeeded : 1
+Skipped : 0
+Failed : 0
+VMSS Instance Upgrade Failures : 0
+```
- Specify to reinstall extensions even if already installed, for example, to update to a different workspace
- -- EXAMPLE 3 --
- .\Install-VMInsights.ps1 -WorkspaceRegion eastus -WorkspaceId <WorkspaceId> -WorkspaceKey <WorkspaceKey> -SubscriptionId <SubscriptionId>
- -PolicyAssignmentName a4f79f8ce891455198c08736 -ReInstall
+### [Log Analytics Agent](#tab/LogAnalyticsAgent)
- Specify to use a PolicyAssignmentName for source and to reinstall (move to a new workspace)
-```
+Use the following command to enable VM insights using Log Analytics Agent and Dependency Agent.
-The following example demonstrates using the PowerShell commands in the folder to enable VM insights and understand the expected output:
```powershell
$WorkspaceId = "<GUID>"
$WorkspaceKey = "<Key>"
$SubscriptionId = "<GUID>"
-.\Install-VMInsights.ps1 -WorkspaceId $WorkspaceId -WorkspaceKey $WorkspaceKey -SubscriptionId $SubscriptionId -WorkspaceRegion eastus
+Install-VMInsights.ps1 -WorkspaceId $WorkspaceId `
+-WorkspaceKey $WorkspaceKey `
+-SubscriptionId $SubscriptionId `
+-WorkspaceRegion <region>
+```
+The output has the following format:
+
+```powershell
Getting list of VMs or virtual machine scale sets matching criteria specified
VMs or virtual machine scale sets matching criteria:
Not running - start VM to configure: (0)
Failed: (0) ``` ++
+Check your VM/VMSS in the Azure portal to see whether the extensions are installed, or use the following command:
+
+```powershell
+
+az vm extension list --resource-group <resource group> --vm-name <VM name> -o table
++
+Name ProvisioningState Publisher Version AutoUpgradeMinorVersion
+ - -
+AzureMonitorWindowsAgent Succeeded Microsoft.Azure.Monitor 1.16 True
+DA-Extension Succeeded Microsoft.Azure.Monitoring.DependencyAgent 9.10 True
+```
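If you prefer Azure PowerShell over the Azure CLI for this check, a roughly equivalent sketch (assuming the Az.Compute module) is:

```powershell
# List the extensions installed on the VM and their provisioning state.
Get-AzVMExtension -ResourceGroupName "<resource group>" -VMName "<VM name>" |
    Select-Object Name, Publisher, TypeHandlerVersion, ProvisioningState |
    Format-Table
```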
+ ## Next steps * See [Use VM insights Map](vminsights-maps.md) to view discovered application dependencies.
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Metadata is never cooled and always remains in the hot tier. As such, the activi
Standard storage with cool access is supported for the following regions:
+* Australia Central
+* Australia Central 2
* Australia East * Australia Southeast * Brazil South
Standard storage with cool access is supported for the following regions:
* France Central * North Central US * North Europe
+* Switzerland North
+* Switzerland West
+* UAE North
## Effects of cool access on data
azure-resource-manager Quickstart Create Bicep Use Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/quickstart-create-bicep-use-visual-studio-code.md
Title: Create Bicep files - Visual Studio Code description: Use Visual Studio Code and the Bicep extension to create Bicep files for deploying Azure resources Previously updated : 11/03/2022 Last updated : 11/30/2023 #Customer intent: As a developer new to Azure deployment, I want to learn how to use Visual Studio Code to create and edit Bicep files, so I can use them to deploy Azure resources.
# Quickstart: Create Bicep files with Visual Studio Code
-This quickstart guides you through the steps to create a [Bicep file](overview.md) with Visual Studio Code. You'll create a storage account and a virtual network. You'll also learn how the Bicep extension simplifies development by providing type safety, syntax validation, and autocompletion.
+This quickstart guides you through the steps to create a [Bicep file](overview.md) with Visual Studio Code. You create a storage account and a virtual network. You also learn how the Bicep extension simplifies development by providing type safety, syntax validation, and autocompletion.
Similar authoring experience is also supported in Visual Studio. See [Quickstart: Create Bicep files with Visual Studio](./quickstart-create-bicep-use-visual-studio.md).
Similar authoring experience is also supported in Visual Studio. See [Quickstar
If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
-To set up your environment for Bicep development, see [Install Bicep tools](install.md). After completing those steps, you'll have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). You also have either the latest [Azure CLI](/cli/azure/) or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az).
+To set up your environment for Bicep development, see [Install Bicep tools](install.md). After completing those steps, you have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). You also have either the latest [Azure CLI](/cli/azure/) or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az).
## Add resource snippet
-Launch Visual Studio Code and create a new file named *main.bicep*.
+VS Code with the Bicep extension simplifies development by providing predefined snippets. In this quickstart, you add a snippet that creates a virtual network.
-VS Code with the Bicep extension simplifies development by providing pre-defined snippets. In this quickstart, you'll add a snippet that creates a virtual network.
+Launch Visual Studio Code and create a new file named **main.bicep**.
-In *main.bicep*, type **vnet**. Select **res-vnet** from the list, and then Tab or Enter.
+In *main.bicep*, type **vnet**, select **res-vnet** from the list, and then press [TAB] or [ENTER].
> [!TIP]
-> If you don't see those intellisense options in VS Code, make sure you've installed the Bicep extension as specified in [Prerequisites](#prerequisites). If you have installed the extension, give the Bicep language service some time to start after opening your Bicep file. It usually starts quickly, but you will not have intellisense options until it starts. A notification in the lower right corner indicates that the service is starting. When that notification disappears, the service is running.
+> If you don't see those intellisense options in VS Code, make sure you've installed the Bicep extension as specified in [Prerequisites](#prerequisites). If you have installed the extension, give the Bicep language service some time to start after opening your Bicep file. It usually starts quickly, but you don't have intellisense options until it starts. A notification in the lower right corner indicates that the service is starting. When that notification disappears, the service is running.
Your Bicep file now contains the following code: ```bicep resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = { name: 'name'
- location: resourceGroup().location
+ location: location
properties: { addressSpace: { addressPrefixes: [
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = {
} ```
-This snippet contains all of the values you need to define a virtual network. However, you can modify this code to meet your requirements. For example, `name` isn't a great name for the virtual network. Change the `name` property to `examplevnet`.
+Within this snippet, you find all the necessary values for defining a virtual network. You may notice two curly underlines. A yellow one denotes a warning related to an outdated API version, while a red curly underline signals an error caused by a missing parameter definition.
+
+Remove `@2019-11-01` and replace it with `@`, and then select the latest API version from the IntelliSense list.
++
+You'll fix the missing parameter definition error in the next section.
+
+You can also modify this code to meet your requirements. For example, `name` isn't a great name for the virtual network. Change the `name` property to `examplevnet`.
```bicep
-name: 'examplevnet'
+name: 'exampleVNet'
```
-You could deploy this Bicep file, but we'll add a parameter and storage account before deploying.
- ## Add parameter
-Now, we'll add a parameter for the storage account name. At the top of file, add:
+The code snippet you added in the last section is missing a parameter definition.
+
+At the top of the file, add:
```bicep
-param storageName
+param location
```
-When you add a space after **storageName**, notice that intellisense offers the data types that are available for the parameter. Select **string**.
+When you add a space after **location**, notice that intellisense offers the data types that are available for the parameter. Select **string**.
-You have the following parameter:
+Give the parameter a default value:
```bicep
-param storageName string
+param location string = resourceGroup().location
```
-This parameter works fine, but storage accounts have limits on the length of the name. The name must have at least 3 characters and no more than 24 characters. You can specify those requirements by adding decorators to the parameter.
+For more information about the function used in the default value, see [resourceGroup()](./bicep-functions-scope.md#resourcegroup).
+
+Add another parameter for the storage account name with a default value:
+
+```bicep
+param storageAccountName string = 'store${uniqueString(resourceGroup().id)}'
+```
+
+For more information, see [Interpolation](./data-types.md#strings) and [uniqueString()](./bicep-functions-string.md#uniquestring).
+
+This parameter works fine, but storage accounts have limits on the length of the name. The name must have at least three characters and no more than 24 characters. You can specify those requirements by adding decorators to the parameter.
Add a line above the parameter, and type **@**. You see the available decorators. Notice there are decorators for both **minLength** and **maxLength**.
-Add both decorators and specify the character limits, as shown below:
+Add both decorators and specify the character limits:
```bicep @minLength(3) @maxLength(24)
-param storageName string
+param storageAccountName string = 'store${uniqueString(resourceGroup().id)}'
``` You can also add a description for the parameter. Include information that helps people deploying the Bicep file understand the value to provide.
You can also add a description for the parameter. Include information that helps
@minLength(3) @maxLength(24) @description('Provide a name for the storage account. Use only lower case letters and numbers. The name must be unique across Azure.')
-param storageName string
+param storageAccountName string = 'store${uniqueString(resourceGroup().id)}'
```
-Your parameter is ready to use.
+Your parameters are ready to use.
## Add resource
-Instead of using a snippet to define the storage account, we'll use intellisense to set the values. Intellisense makes this step much easier than having to manually type the values.
+Instead of using a snippet to define the storage account, you use intellisense to set the values. Intellisense makes this step easier than having to manually type the values.
To define a resource, use the `resource` keyword. Below your virtual network, type **resource exampleStorage**:
resource exampleStorage
**exampleStorage** is a symbolic name for the resource you're deploying. You can use this name to reference the resource in other parts of your Bicep file.
-When you add a space after the symbolic name, a list of resource types is displayed. Continue typing **storage** until you can select it from the available options.
+When you add a space after the symbolic name, a list of resource types is displayed. Continue typing **storageacc** until you can select it from the available options.
-After selecting **Microsoft.Storage/storageAccounts**, you're presented with the available API versions. Select **2021-02-01**.
+After selecting **Microsoft.Storage/storageAccounts**, you're presented with the available API versions. Select the latest version. In the following screenshot, it's **2023-01-01**.
-After the single quote for the resource type, add `=` and a space. You're presented with options for adding properties to the resource. Select **required-properties**.
+After the single quote for the resource type, add **=** and a space. You're presented with options for adding properties to the resource. Select **required-properties**.
This option adds all of the properties for the resource type that are required for deployment. After selecting this option, your storage account has the following properties: ```bicep
-resource exampleStorage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
+resource exampleStorage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 
  location: 
  sku: {
    name: 
  }
  kind: 
}
```

You're almost done. Just provide values for those properties.
-Again, intellisense helps you. Set `name` to `storageName`, which is the parameter that contains a name for the storage account. For `location`, set it to `'eastus'`. When adding SKU name and kind, intellisense presents the valid options.
+Again, intellisense helps you. Set `name` to `storageAccountName`, which is the parameter that contains a name for the storage account. For `location`, set it to `location`, which is a parameter you created earlier. When adding `sku.name` and `kind`, intellisense presents the valid options.
-When you've finished, you have:
+When finished, you have:
```bicep @minLength(3) @maxLength(24)
-param storageName string
+param storageAccountName string = 'store${uniqueString(resourceGroup().id)}'
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = {
- name: 'examplevnet'
+ name: 'exampleVNet'
location: resourceGroup().location properties: { addressSpace: {
resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = {
} resource exampleStorage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
- name: storageName
+ name: storageAccountName
location: 'eastus' sku: { name: 'Standard_LRS'
You can view a representation of the resources in your file.
From the upper right corner, select the visualizer button to open the Bicep Visualizer. The visualizer shows the resources defined in the Bicep file with the resource dependency information. The two resources defined in this quickstart don't have dependency relationship, so you don't see a connector between the two resources. + ## Deploy the Bicep file
-1. Right-click the Bicep file inside the VSCode, and then select **Deploy Bicep file**.
+1. Right-click the Bicep file inside the VS Code, and then select **Deploy Bicep file**.
:::image type="content" source="./media/quickstart-create-bicep-use-visual-studio-code/vscode-bicep-deploy.png" alt-text="Screenshot of Deploy Bicep file.":::
+1. In the **Please enter name for deployment** text box, type **deployStorageAndVNet**, and then press **[ENTER]**.
1. From the **Select Resource Group** listbox on the top, select **Create new Resource Group**. 1. Enter **exampleRG** as the resource group name, and then press **[ENTER]**.
-1. Select a location for the resource group, and then press **[ENTER]**.
+1. For the resource group location, select **Central US** or a location of your choice, and then press **[ENTER]**.
1. From **Select a parameter file**, select **None**. :::image type="content" source="./media/quickstart-create-bicep-use-visual-studio-code/vscode-bicep-select-parameter-file.png" alt-text="Screenshot of Select parameter file.":::
-1. Enter a unique storage account name, and then press **[ENTER]**. If you get an error message indicating the storage account is already taken, the storage name you provided is in use. Provide a name that is more likely to be unique.
-1. From **Create parameters file from values used in this deployment?**, select **No**.
-
-It takes a few moments to create the resources. For more information, see [Deploy Bicep files with visual Studio Code](./deploy-vscode.md).
+It takes a few moments to create the resources. For more information, see [Deploy Bicep files with Visual Studio Code](./deploy-vscode.md).
You can also deploy the Bicep file by using Azure CLI or Azure PowerShell:
You can also deploy the Bicep file by using Azure CLI or Azure PowerShell:
```azurecli az group create --name exampleRG --location eastus
-az deployment group create --resource-group exampleRG --template-file main.bicep --parameters storageName=uniquename
+az deployment group create --resource-group exampleRG --template-file main.bicep --parameters storageAccountName=uniquename
``` # [PowerShell](#tab/PowerShell)
az deployment group create --resource-group exampleRG --template-file main.bicep
```azurepowershell New-AzResourceGroup -Name exampleRG -Location eastus
-New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -storageName "uniquename"
+New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -storageAccountName "uniquename"
```
communication-services End Of Call Survey Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/end-of-call-survey-logs.md
# End of call survey
->
+> [!NOTE]
> End of Call Survey is currently supported only for our JavaScript / Web SDK. ## Prerequisites
communication-services Voice And Video Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/voice-and-video-logs.md
For each endpoint within a call, a distinct call diagnostic log is created for o
| `jitterMax` | The maximum jitter value measured between packets for each media stream. Bursts in network conditions can cause problems in the audio/video traffic flow. | | `packetLossRateAvg` | The average percentage of packets that are lost. Packet loss directly affects audio quality. Small, individual lost packets have almost no impact, whereas back-to-back burst losses cause audio to cut out completely. The packets being dropped and not arriving at their intended destination cause gaps in the media. This situation results in missed syllables and words, along with choppy video and sharing. <br><br>A packet loss rate of greater than 10% (0.1) is likely having a negative quality impact. This metric is measured for each media stream over the `participantDuration` period in a group call or over the `callDuration` period in a P2P call. | | `packetLossRateMax` | This value represents the maximum packet loss rate (percentage) for each media stream over the `participantDuration` period in a group call or over the `callDuration` period in a P2P call. Bursts in network conditions can cause problems in the audio/video traffic flow.
+| `JitterBufferSizeAvg` | The average size of the jitter buffer over the duration of each media stream. A jitter buffer is a shared data area where voice packets can be collected, stored, and sent to the voice processor in evenly spaced intervals. The jitter buffer is used to counter the effects of jitter. <br><br> Jitter buffers can be either static or dynamic. Static jitter buffers are set to a fixed size, while dynamic jitter buffers can adjust their size based on network conditions. The goal of the jitter buffer is to provide a smooth and uninterrupted stream of audio and video data to the user. <br><br> In the web SDK, 'JitterBufferSizeAvg' is the average value of 'jitterBufferDelay' during the call; 'jitterBufferDelay' is the duration of an audio sample or a video frame that stays in the jitter buffer. <br><br> Normally, a 'JitterBufferSizeAvg' value greater than 200 ms causes a negative quality impact.
+| `JitterBufferSizeMax` | The maximum jitter buffer size measured during the duration of each media stream. <br><br> Normally, a value greater than 200 ms causes a negative quality impact.
+| `HealedDataRatioAvg` | The average percentage of lost or damaged data packets that are successfully reconstructed or recovered by the healer over the duration of audio stream. Healed data ratio is a measure of the effectiveness of error correction techniques used in VoIP systems. <br><br> When this value is greater than 0.1 (10%), we consider the stream as bad quality.
+| `HealedDataRatioMax` | The maximum healed data ratio measured during the duration of each media stream. <br><br> When this value is greater than 0.1 (10%), we consider the stream as bad quality.
+| `VideoFrameRateAvg` | The average number of video frames that are transmitted per second during a video/screensharing call. The video frame rate can impact the quality and smoothness of the video stream, with higher frame rates generally resulting in smoother and more fluid motion. The standard frame rate for WebRTC video is typically 30 frames per second (fps), although this can vary depending on the specific implementation and network conditions. <br><br> The stream quality is considered poor when this value is less than 7 for video stream, or less than 1 for screensharing stream.
+| `RecvResolutionHeight` | The average vertical size of the incoming video stream transmitted during a video/screensharing call. It's measured in pixels and is one of the factors that determines the overall resolution and quality of the video stream. The specific resolution used may depend on the capabilities of the devices and network conditions involved in the call. <br><br> The stream quality is considered poor when this value is less than 240 for a video stream, or less than 768 for a screensharing stream.
+| `RecvFreezeDurationPerMinuteInMs` | The average freeze duration in milliseconds per minute for incoming video/screensharing stream. Freezes are typically due to bad network condition and can degrade the stream quality. <br><br> The stream quality is considered poor when this value is greater than 6,000 ms for video stream, or greater than 25,000 ms for screensharing stream.
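Once these logs are routed to a Log Analytics workspace, the fields above can be queried directly. The following sketch assumes the voice and video calling logs land in the `ACSCallDiagnostics` table and uses thresholds taken from the descriptions above; verify the table and column names against your own workspace before relying on it.

```powershell
# Assumes the Az.OperationalInsights module is installed and you're signed in.
$workspaceId = "<Log Analytics workspace GUID>"   # placeholder

# Flag audio streams whose jitter buffer or packet loss suggests poor quality.
$query = @"
ACSCallDiagnostics
| where TimeGenerated > ago(24h)
| where MediaType == "Audio"
| where JitterBufferSizeAvg > 200 or PacketLossRateAvg > 0.1
| project TimeGenerated, CorrelationId, ParticipantId, JitterBufferSizeAvg, PacketLossRateAvg
"@

(Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query).Results | Format-Table
```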
+ ### P2P vs. group calls
communication-services Azure Communication Services Azure Cognitive Services Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/azure-communication-services-azure-cognitive-services-integration.md
Azure Communication Services Call Automation APIs provide developers the ability
All this is possible with one-click where enterprises can access a secure solution and link their models through the portal. Furthermore, developers and enterprises don't need to manage credentials. Connecting your Azure AI services uses managed identities to access user-owned resources. Developers can use managed identities to authenticate any resource that supports Microsoft Entra authentication.
-BYO Azure AI services can be easily integrated into any application regardless of the programming language. When creating an Azure Resource in Azure portal, enable the BYO option and provide the URL to the Azure AI services. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution.
+Azure AI services can be easily integrated into any application regardless of the programming language. When creating an Azure Resource in Azure portal, enable the option and provide the URL to the Azure AI services. This simple experience allows developers to meet their needs, scale, and avoid investing time and resources into designing and maintaining a custom solution.
> [!NOTE] > This integration is supported in limited regions for Azure AI services, for more information about which regions are supported please view the limitations section at the bottom of this document. This integration only supports Multi-service Cognitive Service resource, we recommend if you're creating a new Azure AI Service resource you create a Multi-service Cognitive Service resource or when you're connecting an existing resource confirm that it is a Multi-service Cognitive Service resource.
With the ability to, connect your Azure AI services to Azure Communication Servi
[![Screen shot of integration run time flow.](./media/run-time-flow.png)](./media/run-time-flow.png#lightbox) ## Azure portal experience
-You can configure and bind your Communication Services and Azure AI services through the Azure portal.
+You will need to connect your Azure Communication Services resource with the Azure AI resource through the Azure portal. There are two ways you can accomplish this step:
+- By navigating through the steps of the Cognitive Services tab in your Azure Communication Services (recommended).
+- Manually adding the Managed Identity to your Azure Communication Services resource. This step is more advanced and requires a little more effort to connect your Azure Communication Services to your Azure AI services.
## Prerequisites - Azure account with an active subscription and access to Azure portal, for details see [Create an account for free](https://azure.microsoft.com/free/). - Azure Communication Services resource. See [Create an Azure Communication Services resource](../../quickstarts/create-communication-resource.md?tabs=windows&pivots=platform-azp). -- An Azure Cognitive Services resource.
+- An [Azure AI Services resource](../../../../articles/ai-services/multi-service-resource.md).
### Connecting through the Azure portal 1. Open your Azure Communication Services resource and click on the Cognitive Services tab.
-2. If system-assigned managed identity isn't enabled, there are two ways to enable it.
-
- 2.1. In the Cognitive Services tab, click on "Enable Managed Identity" button.
-
+2. If system-assigned managed identity isn't enabled, you will need to enable it.
+3. In the Cognitive Services tab, click on "Enable Managed Identity" button.
+
[![Screenshot of Enable Managed Identity button.](./media/enabled-identity.png)](./media/enabled-identity.png#lightbox)
- or
-
- 2.2. Navigate to the identity tab.
-
- 2.3. Enable system assigned identity. This action begins the creation of the identity; A pop-up notification appears notifying you that the request is being processed.
+4. Enable system assigned identity. This action begins the creation of the identity; A pop-up notification appears notifying you that the request is being processed.
[![Screen shot of enable managed identiy.](./media/enable-system-identity.png)](./media/enable-system-identity.png#lightbox)
- 2.4. Once the identity is enabled, you should see something similar.
+5. Once the identity is enabled, you should see something similar.
[![Screenshot of enabled identity.](./media/identity-saved.png)](./media/identity-saved.png#lightbox)
-3. When managed identity is enabled the Cognitive Service tab should show a button 'Connect cognitive service' to connect the two services.
+6. When managed identity is enabled the Cognitive Service tab should show a button 'Connect cognitive service' to connect the two services.
[![Screenshot of Connect cognitive services button.](./media/cognitive-services.png)](./media/cog-svc.png#lightbox)
-4. Click on 'Connect cognitive service', select the Subscription, Resource Group and Resource and click 'Connect' in the context pane that opens up.
+7. Click on 'Connect cognitive service', select the Subscription, Resource Group and Resource and click 'Connect' in the context pane that opens up.
[![Screenshot of Subscription, Resource Group and Resource in pane.](./media/choose-options.png)](./media/choose-options.png#lightbox)
-5. If connection is successful, you should see a green banner confirming successful connection.
+8. If connection is successful, you should see a green banner confirming successful connection.
[![Screenshot of successful connection.](./media/connected.png)](./media/connected.png#lightbox)
-6. Now in the Cognitive Service tab you should see your connected services showing up.
+9. Now in the Cognitive Service tab you should see your connected services showing up.
[![Screenshot of connected cognitive service on main page.](./media/new-entry-created.png)](./media/new-entry-created.png#lightbox)
-### Manually adding Managed Identity to Azure Communication Services resource
+### Advanced option: Manually adding Managed Identity to Azure Communication Services resource
Alternatively if you would like to go through the manual process of connecting your resources you can follow these steps. #### Enable system assigned identity
Your Azure Communication Service has now been linked to your Azure Cognitive Ser
## Azure AI services regions supported This integration between Azure Communication Services and Azure AI services is only supported in the following regions:-- westus-- westus2-- westus3-- eastus-- eastus2 - centralus - northcentralus - southcentralus - westcentralus
+- eastus
+- eastus2
+- westus
+- westus2
+- westus3
+- canadacentral
+- northeurope
- westeurope - uksouth-- northeurope - southafricanorth-- canadacentral - centralindia - eastasia - southeastasia
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/recognize-action.md
The recognize action can be used for many reasons, here are a few examples of ho
![Recognize Action](./media/recognize-flow.png)
+## Known limitation
+- In-band DTMF is not supported; use RFC 2833 DTMF instead.
+ ## Next steps - Check out our how-to guide to learn how you can [gather user input](../../how-tos/call-automation/recognize-action.md). - Learn about [usage and operational logs](../analytics/logs/call-automation-logs.md) published by call automation.
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/chat/concepts.md
For customers that use Virtual appointments, refer to our Teams Interoperability
- The maximum number of participants allowed in a chat thread is 250. - The maximum message size allowed is approximately 28 KB. - For chat threads with more than 20 participants, read receipts and typing indicator features are not supported.-- For Teams Interop scenarios, it is the number of Azure Communication Services users, not Teams users that must be below 20 for read receipts and typing indicator features to be supported.
+- For Teams Interop scenarios, it is the number of Azure Communication Services users, not Teams users, that must be below 20 for the typing indicator feature to be supported.
+- For Teams Interop scenarios, the typing indicator event might contain a blank display name when sent from Teams user.
+- For Teams Interop scenarios, read receipts aren't supported for Teams users.
## Chat architecture
communication-services Calling Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/calling-chat.md
While in private preview, a Communication Services user can do various actions u
- Communication Services users can delete the chat. This action removes the Teams user from the chat thread and hides the message history from the Teams client. - Known issue: Communication Services users aren't displayed correctly in the participant list. They're currently displayed as External, but their people cards show inconsistent data. In addition, their displayname might not be shown properly in the Teams client. - Known issue: The typing event from Teams side might contain a blank display name.
+- Known issue: Read receipts aren't supported for Teams users.
- Known issue: A chat can't be escalated to a call from within the Teams app. -- Known issue: Editing of messages by the Teams user isn't supported.
+- Known issue: Editing of messages by the Teams user isn't supported.
Please refer to [Chat Capabilities](../interop/guest/capabilities.md) to learn more.
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/concepts.md
Sending SMS to any recipient requires getting a phone number. Choosing the right
|**Calling support**|Yes| No | No |No | |**Provisioning time**| 5-6 weeks| 6-8 weeks | Instant | 4-5 weeks| |**Throughput** | 200 messages/min (can be increased upon request)| 6000 messages/ min (can be increased upon request) | 600 messages/ min (can be increased upon request)|600 messages/ min (can be increased upon request)|
-|**Supported Destinations**| United States, Canada, Puerto Rico| United States | Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia| Norway, Finland, Slovakia, Slovenia, Czech Republic|
+|**Supported Destinations**| United States, Canada, Puerto Rico| United States, Canada, United Kingdom | Germany, Netherlands, United Kingdom, Australia, France, Switzerland, Sweden, Italy, Spain, Denmark, Ireland, Portugal, Poland, Austria, Lithuania, Latvia, Estonia| Norway, Finland, Slovakia, Slovenia, Czech Republic|
|**Get started**|[Get a toll-free number](../../quickstarts/telephony/get-phone-number.md)|[Get a short code](../../quickstarts/sms/apply-for-short-code.md) | [Enable alphanumeric sender ID](../../quickstarts/sms/enable-alphanumeric-sender-id.md) |[Enable alphanumeric sender ID](../../quickstarts/sms/enable-alphanumeric-sender-id.md) | \* See [Alphanumeric sender ID FAQ](./sms-faq.md#alphanumeric-sender-id) for detailed formatting requirements.
communication-services Diagnose Calls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/diagnose-calls.md
+
+ Title: Azure Communication Services Call Diagnostics
+
+description: Use Call Diagnostics to diagnose call issues with Azure Communication Services
+++++ Last updated : 11/21/2023+++++++
+# Call Diagnostics
++
+Understanding your call quality and reliability is foundational to
+delivering a great customer calling experience. There are various
+issues that can affect the quality of your calls, such as poor internet
+connectivity, software compatibility issues, and technical difficulties
+with devices. These issues can be frustrating for all call participants,
+whether they're a patient checking in for a doctor's call, or a student
+taking a lesson with their teacher. As a developer, diagnosing and
+fixing these issues can be time-consuming and frustrating.
+
+Call Diagnostics acts as a detective for your calls. It helps developers
+using Azure Communication Services investigate events that happened in a call to
+identify likely causes of poor call quality and reliability. Just like a
+real conversation, many things happen simultaneously in a call that may
+or may not affect your communication. Call Diagnostics' timeline makes
+it easier to visualize what happened in a call by showing you rich data
+visualizations of call events and providing insights into issues that
+commonly affect calls.
+
+## How to enable Call Diagnostics
+
+Azure Communication Services collects call data in the form of metrics
+and events. You must enable a Diagnostic Setting in Azure Monitor to
+send these data to a Log Analytics workspace for Call Diagnostics to
+analyze new call data.
+++
+> [!IMPORTANT]
+> Call Diagnostics can't query data that wasn't sent to a Log Analytics workspace. Diagnostic Settings begin collecting data for a single Azure Communication Services resource ID only after they're enabled. See our frequently asked question on enabling Call Diagnostics [here](#frequently-asked-questions).
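For reference, a sketch of creating that Diagnostic Setting with Azure PowerShell follows; it assumes a recent Az.Monitor module and uses placeholder resource IDs and the `allLogs` category group mentioned in the FAQ at the end of this article. Verify parameter names against your module version.

```powershell
# Placeholders - substitute your own Communication Services resource and Log Analytics workspace IDs.
$acsResourceId       = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Communication/communicationServices/<name>"
$workspaceResourceId = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"

# Send all log categories for the resource to the Log Analytics workspace.
$logs = New-AzDiagnosticSettingLogSettingsObject -CategoryGroup "allLogs" -Enabled $true

New-AzDiagnosticSetting -Name "call-diagnostics" `
    -ResourceId $acsResourceId `
    -WorkspaceId $workspaceResourceId `
    -Log $logs
```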
+++
+Since Call Diagnostics is an application layer on top of data for your
+Azure Communication Services resource, you can query this call data and
+[build workbook reports on top of your data](../../../azure-monitor/logs/data-platform-logs.md#what-can-you-do-with-azure-monitor-logs)
+
+You can access Call Diagnostics from any Azure Communication Services
+Resource in the Azure portal. When you open your Azure Communication
+Services resource, look for the "Monitoring" section on the left
+side of the screen and select "Call Diagnostics."
+
+Once you have set up Call Diagnostics for your Azure Communication Services resource, you can search for calls that took place in that resource by using valid call IDs. Data can take several hours after call completion to appear in your resource and populate in Call Diagnostics.
+
+**Call Diagnostics has four main sections:**
+
+- [Call Search](#call-search)
+
+- [Call Overview](#call-overview)
+
+- [Call Issues](#call-issues)
+
+- [Call Timeline](#call-timeline)
+
+## Call Search
+
+The search section lets you find individual calls, or filter calls to explore calls with issues. Clicking on a call takes you to a detail screen where you
+see three sections, **Overview**, **Issues**, and **Timeline** for the
+selected call.
+
+The search field allows you to search by callID. See our documentation to [access your client call ID.](../troubleshooting-info.md#access-your-client-call-id)
+
+ <!-- (**insert image)** -->
+
+> [!NOTE]
+> You can explore information icons and links within Call Diagnostics to learn functionality, definitions, and helpful tips.
+
+## Call Overview
+
+Once you select a call from the Call Search page, your call details will
+display in the Call Overview tab. You'll see a call summary highlighting
+the participants in the call and key metrics for their call quality. You
+can select a participant to drill into their call timeline details
+directly or navigate to the Call Issues tab for further analysis.
+
+<!-- (**<u>TODO insert image)</u>** -->
++
+<!-- > [!NOTE]
+> You can explore information icons and links on each page within Call Diagnostics to learn functionality, definitions, and helpful tips. -->
+
+<!-- (**insert image)** -->
+
+## Call Issues
+
+The Call Issues tab gives you a high-level analysis of any media quality
+and reliability issues that were detected during the call.
+
+Call Issues highlights detected issues commonly known to affect a user's call
+quality such as poor network conditions, speaking while muted, or device
+failures during a call. If you want to explore a detected issue, select
+the highlighted item and you'll see a pre-populated view of the
+related events in the Timeline tab.
+
+<!-- (**<u>TODO insert image)</u>** -->
++
+<!-- > [!NOTE]
+> You can explore information icons and links on each page within Call Diagnostics to learn functionality, definitions, and helpful tips. -->
+
+## Call Timeline
+
+When call issues are difficult to troubleshoot, you can explore the
+timeline tab to see a detailed sequence of events that occurred during
+the call.
+
+The timeline view is complex and designed for developers who need to
+explore details of a call and interpret detailed debugging data. In
+large calls, the timeline view can present an overwhelming amount of
+information; we recommend relying on filtering to narrow your search
+results and reduce complexity.
+
+You can view detailed call logs for each participant within a call. Call information may not be present for various reasons, such as privacy constraints between different calling resources. See the frequently asked questions to learn more.
+
+<!-- (**<u>TODO insert image)</u>** -->
++
+<!-- > [!NOTE]
+> You can explore information icons and links on each page within Call Diagnostics to learn functionality, definitions, and helpful tips. -->
+
+<!-- # Common issues
+
+Issue categories can include:
+
+- Azure Communication Services issue
+
+- Calling deployment issue
+
+- Network issue
+
+- User actions or inactions (e.g. not allowing device permissions),
+ driving through a tunnel.
+
+To help you get started, you will find below the steps to triage common
+issues using Call Diagnostics.
+
+***ΓÇ£Other participants couldnΓÇÖt hear me on the callΓÇ¥***
+
+Dive into the audio section for the participant to see if there are any
+issues detected. In the case below, we see that the microphone was muted
+unexpectedly. In other cases, we might see errors with the deviceΓÇÖs set
+up and permissions.
+
+(**<u>TODO insert image)</u>**
+
+***ΓÇ£My video was choppy and pixelatedΓÇ¥***
+Explore the video section for the participant to see if a poor network
+connection in a call may have caused the issue.
+
+(**<u>TODO insert image)</u>**
+
+***ΓÇ£My call unexpectedly droppedΓÇ¥***
+**<u>TODO -</u>** Show how you might drill down to show the end-user
+lost connection.
+
+(**<u>TODO insert image)</u>**
+
+***ΓÇ£Other participants couldnΓÇÖt see me on the callΓÇ¥***
+Show how you might drill down to show the status of the camera in the
+call and any detected failures.
+
+(**<u>TODO insert image)</u>**
+
+## Call quality resources
+
+Ensuring good call quality starts with your calling setup, please
+explore our documentation to learn how you can use the UI Library to
+benefit from our quality and reliability tools \<[link to manage call
+quality](https://learn.microsoft.com/azure/communication-services/concepts/voice-video-calling/manage-call-quality)\>. -->
+
+## Frequently asked questions:
+
+- How do I set up Call Diagnostics?
+
+ - Follow the instructions to add diagnostic settings for your resource in [Enable logs via Diagnostic Settings in Azure Monitor](../analytics/enable-logging.md). When prompted to [select logs](../analytics/enable-logging.md#adding-a-diagnostic-setting), select "**allLogs**".
+ - If you have multiple Azure Communication Services resource IDs, you must enable these settings for each resource ID and query call details for participants within their respective resource ID. Your data volume, retention, and CDC query usage in Log Analytics are billed through existing Azure data meters; monitor your data usage and retention policies for [cost considerations as needed](../../../azure-monitor/logs/cost-logs.md).
+
+- If Azure Communication Services participants join from different Azure Communication Services Resources, how will they display in Call Diagnostics?
+
+ - If all the participants are from the same Azure subscription, they'll appear as "remote participants". However, Call Diagnostics won't show any participant details for Azure Communication Services participants from another resource. You need to review that same call ID from the specific Azure Communication Services Resource the participant belongs to.
+
+ <!-- 2. If that ACS resource isn't part of **<u>your Azure subscription
+ and / or hasn't enabled Diagnostics Settings to store call logs,
+ there will not be any data available</u>** for Call Diagnostics. -->
+
+<!-- 1. If Teams participants join a call, how will they display in Call
+ Diagnostics?
+
+ 1. If a Teams participant organized the call through Microsoft
+ Teams, that participant will appear as a participant in Call
+ Diagnostics, however they'll have fewer call details populated.
+
+ 2. If there were other Teams participants besides the Teams meeting
+ organizer, those participants won't appear in Call
+ Diagnostics. -->
++
+<!-- 1. How do I find a Call ID?
+
+ a. Link -->
+
+<!-- 1. My call ID should be here?
+
+ a. It could no longer be stored by your Log Analytics workspace, you may need to ensure you retain your call data in diagnostics settings. It's possible your callID is incorrect. (**ENG add details on which call ID to specifically pull in the event of multiple callIDs.**)
+
+ a. Maybe itΓÇÖs not the ACS call ID, check ΓÇ£how do I find a callID?ΓÇ¥ to learn more. -->
+
+<!-- 1. My call had issues, but Call Diagnostics doesnΓÇÖt show any issues.
+
+ a. Call Diagnostics relies on several common call issues to help diagnose calls. Issues can still occur outside of the existing telemetry or can be caused by unlisted call participants you arenΓÇÖt allowed to view due to privacy restrictions. -->
+
+<!-- 1. What types of calls are visible in Call Diagnostics?
+
+ a. Call types included.
+ 1. Includes call data for Web JS SDK, Native SKD, PSTN, Call Automation.
+
+ 1. Includes some Call Automation Bot data edges
+
+ a. Partial data.
+
+ a. Different SDKs, privacy considerations may prevent you from receiving those data. -->
++
+<!-- 1. What are limits of what our data reaches.
+ 1. Privacy restrictions may prevent you from seeing the full call roster.
+
+1. What are bots?
+
+1. What capabilities does Search have?
+
+1. What capabilities does Overview have?
+
+1. What capabilities does Issues have?
+
+1. What capabilities does Timeline have?
+
+ 1. You can zoom within the timeline by using SHIFT+mouse-scroll wheel and pan left and right by clicking and dragging within the timeline itself. -->
+
+<!-- 1. What types of issues might I find?
+
+ a. Participant’s call issues generally fall into these categories: 
+ 1. They can’t join a call. 
+
+ 1. They can’t do something in a call (mute, start video, etc.). 
+
+ 1. They get dropped from a call. 
+
+ 1. They have a poor call experience (audio/video quality).  -->
++
+<!-- FAQ - Clear cache - Ask Nan.
+People need to do X, in case your cache is stale or causing issues,
+
+choose credential A vs. B
+
+Clear your cache to ensure X, you may need clear your cache occasionally if you experience issues using Call Diagnostics. -->
+
+## Next steps
+
+- Learn how to manage call quality, see: [Improve and manage call quality](manage-call-quality.md)
+
+- Continue to learn other quality best practices, see: [Best practices: Azure Communication Services calling SDKs](../best-practices.md)
+
+- Learn how to use the Log Analytics workspace, see: [Log Analytics Tutorial](../../../../articles/azure-monitor/logs/log-analytics-tutorial.md)
+
+- Create your own queries in Log Analytics, see: [Get Started Queries](../../../../articles/azure-monitor/logs/get-started-queries.md)
+
+- Explore known call issues, see: [Known issues in the SDKs and APIs](../known-issues.md)
++
+<!-- added to the toc.yml file at row 583.
+
+ - name: Monitor and manage call quality
+ items:
+ - name: Manage call quality
+ href: concepts/voice-video-calling/manage-call-quality.md
+ displayName: diagnostics, Survey, feedback, quality, reliability, users, end, call, quick
+ - name: End of Call Survey
+ href: concepts/voice-video-calling/end-of-call-survey-concept.md
+ displayName: diagnostics, Survey, feedback, quality, reliability, users, end, call, quick
+ -->
communication-services Manage Call Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/manage-call-quality.md
The call may have fired a User Facing Diagnostic indicating a severe problem wit
- Create your own queries in Log Analytics, see: [Get Started Queries](../../../../articles/azure-monitor/logs/get-started-queries.md)
+- Explore known issues, see: [Known issues in the SDKs and APIs](../known-issues.md)
<!-- Comment this out - add to the toc.yml file at row 583.
communication-services Custom Context https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/custom-context.md
+
+ Title: Azure Communication Services Call Automation how-to for passing call contextual data in Call Automation
+
+description: Provides a how-to guide for passing contextual information with Call Automation.
++++ Last updated : 11/28/2023+++++
+# How to pass contextual data between calls
+
+Call Automation allows developers to pass along custom contextual information when routing calls. Developers can pass metadata about the call, callee, or any other information that is relevant to their application or business logic. This allows businesses to manage and route calls across networks without having to worry about losing context.
+
+Passing context is supported by specifying custom headers. These headers are an optional list of key-value pairs that can be included as part of `AddParticipant` or `Transfer` actions. The context can later be retrieved as part of the `IncomingCall` event payload.
+
+Custom call context is also forwarded to the SIP protocol. This includes both the freeform custom headers and the standard User-to-User Information (UUI) SIP header. When routing an inbound call from your telephony network, the data set by your SBC in the custom headers and UUI is similarly included in the `IncomingCall` event payload.
+
+All custom context data is opaque to Call Automation and the SIP protocol; its content is unrelated to any of their basic functions.
+
+The following samples show how to get started using custom context headers in Call Automation.
+
+As a prerequisite, we recommend that you read these articles to make the most of this guide:
+
+- Call Automation [concepts guide](../../concepts/call-automation/call-automation.md#call-actions) that describes the action-event programming model and event callbacks.
+- Learn about [user identifiers](../../concepts/identifiers.md#the-communicationidentifier-type) like CommunicationUserIdentifier and PhoneNumberIdentifier used in this guide.
+
+For all the code samples, `client` is a CallAutomationClient object that can be created as shown below, and `callConnection` is the CallConnection object obtained from the Answer or CreateCall response. You can also obtain it from callback events received by your application.
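+As an illustration only, here's a minimal C# sketch of obtaining both objects; the connection string and call connection ID are placeholder assumptions, not values defined in this article:
+
+```csharp
+using Azure.Communication.CallAutomation;
+
+// Create the Call Automation client from your Communication Services connection string (placeholder value).
+var client = new CallAutomationClient("<communication_services_connection_string>");
+
+// Get a CallConnection object for an established call by its call connection ID (placeholder value).
+CallConnection callConnection = client.GetCallConnection("<call_connection_id>");
+```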
+
+## Technical parameters
+Call Automation supports up to 5 custom SIP headers and 1000 custom VOIP headers. Additionally, developers can include a dedicated User-To-User header as part of the SIP headers list.
+
+The custom SIP header key must start with the mandatory 'X-MS-Custom-' prefix. The maximum length of a SIP header key is 64 chars, including the X-MS-Custom prefix. The maximum length of a SIP header value is 256 chars. The same limitations apply when configuring the SIP headers on your SBC.
+
+The maximum length of a VOIP header key is 64 chars. These headers can be sent without the 'X-MS-Custom-' prefix. The maximum length of a VOIP header value is 1024 chars.
+
+## Adding custom context when inviting a participant
+
+### [csharp](#tab/csharp)
+
+```csharp
+// Invite a communication services user and include one VOIP header
+var addThisPerson = new CallInvite(new CommunicationUserIdentifier("<user_id>"));
+addThisPerson.CustomCallingContext.AddVoip("myHeader", "myValue");
+AddParticipantsResult result = await callConnection.AddParticipantAsync(addThisPerson);
+
+// Invite a PSTN user and set UUI and custom SIP headers
+var callerIdNumber = new PhoneNumberIdentifier("+16044561234");
+var addThisPstnPerson = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber);
+
+// Set custom UUI header. This key is sent on SIP protocol as User-to-User
+addThisPstnPerson.CustomCallingContext.AddSipUui("value");
+
+// This provided key will be automatically prefixed with X-MS-Custom on SIP protocol, such as 'X-MS-Custom-{key}'
+addThisPstnPerson.CustomCallingContext.AddSipX("header1", "customSipHeaderValue1");
+AddParticipantsResult pstnResult = await callConnection.AddParticipantAsync(addThisPstnPerson);
+```
+### [Java](#tab/java)
+```java
+// Invite a communication services user and include one VOIP header
+CallInvite callInvite = new CallInvite(new CommunicationUserIdentifier("<user_id>"));
+callInvite.getCustomCallingContext().addVoip("voipHeaderName", "voipHeaderValue");
+AddParticipantOptions addParticipantOptions = new AddParticipantOptions(callInvite);
+Response<AddParticipantResult> addParticipantResultResponse = callConnectionAsync.addParticipantWithResponse(addParticipantOptions).block();
+
+// Invite a PSTN user and set UUI and custom SIP headers
+PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234");
+CallInvite pstnCallInvite = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber);
+pstnCallInvite.getCustomCallingContext().addSipUui("value");
+pstnCallInvite.getCustomCallingContext().addSipX("header1", "customSipHeaderValue1");
+AddParticipantOptions pstnAddParticipantOptions = new AddParticipantOptions(pstnCallInvite);
+Response<AddParticipantResult> pstnAddParticipantResultResponse = callConnectionAsync.addParticipantWithResponse(pstnAddParticipantOptions).block();
+```
+
+### [JavaScript](#tab/javascript)
+```javascript
+// Invite a communication services user and include one VOIP header
+const customCallingContext: CustomCallingContext = [];
+customCallingContext.push({ kind: "voip", key: "voipHeaderName", value: "voipHeaderValue" });
+const addThisPerson = {
+ targetParticipant: { communicationUserId: "<acs_user_id>" },
+ customCallingContext: customCallingContext,
+};
+const addParticipantResult = await callConnection.addParticipant(addThisPerson);
+
+// Invite a PSTN user and set UUI and custom SIP headers
+const callerIdNumber = { phoneNumber: "+16044561234" };
+const pstnCustomCallingContext: CustomCallingContext = [];
+pstnCustomCallingContext.push({ kind: "sipuui", key: "", value: "value" });
+pstnCustomCallingContext.push({ kind: "sipx", key: "headerName", value: "headerValue" });
+const addThisPstnPerson = {
+ targetParticipant: { phoneNumber: "+16041234567" },
+ sourceCallIdNumber: callerIdNumber,
+ customCallingContext: pstnCustomCallingContext,
+};
+const addPstnParticipantResult = await callConnection.addParticipant(addThisPstnPerson);
+```
+
+### [Python](#tab/python)
+```python
+#Invite a communication services user and include one VOIP header
+voip_headers = {"voipHeaderName", "voipHeaderValue"}
+target = CommunicationUserIdentifier("<acs_user_id>")
+result = call_connection_client.add_participant(
+ target,
+ voip_headers=voip_headers
+)
+
+#Invite a PSTN user and set UUI and custom SIP headers
+caller_id_number = PhoneNumberIdentifier("+16044561234")
+sip_headers = {}
+sip_headers.add("User-To-User", "value")
+sip_headers.add("X-MS-Custom-headerName", "headerValue")
+target = PhoneNumberIdentifier("+16041234567")
+result = call_connection_client.add_participant(
+ target,
+ sip_headers=sip_headers,
+ source_caller_id_number=caller_id_number
+)
+```
+
+--
+## Adding custom context during call transfer
+
+### [csharp](#tab/csharp)
+
+```csharp
+//Transfer to communication services user and include one VOIP header
+var transferDestination = new CommunicationUserIdentifier("<user_id>");
+var transferOption = new TransferToParticipantOptions(transferDestination) {
+ OperationContext = "<Your_context>",
+ OperationCallbackUri = new Uri("<uri_endpoint>") // Sending event to a non-default endpoint.
+};
+transferOption.CustomCallingContext.AddVoip("customVoipHeader1", "customVoipHeaderValue1");
+TransferCallToParticipantResult result = await callConnection.TransferCallToParticipantAsync(transferOption);
+
+//Transfer a PSTN call to phone number and set UUI and custom SIP headers
+var pstnTransferDestination = new PhoneNumberIdentifier("<target_phoneNumber>");
+var pstnTransferOption = new TransferToParticipantOptions(pstnTransferDestination);
+pstnTransferOption.CustomCallingContext.AddSipUui("uuivalue");
+pstnTransferOption.CustomCallingContext.AddSipX("header1", "headerValue");
+TransferCallToParticipantResult pstnResult = await callConnection.TransferCallToParticipantAsync(pstnTransferOption);
+```
+
+### [Java](#tab/java)
+```java
+//Transfer to communication services user and include one VOIP header
+CommunicationIdentifier transferDestination = new CommunicationUserIdentifier("<user_id>");
+TransferCallToParticipantOptions options = new TransferCallToParticipantOptions(transferDestination);
+options.getCustomCallingContext().addVoip("voipHeaderName", "voipHeaderValue");
+Response<TransferCallResult> transferResponse = callConnectionAsync.transferToParticipantCallWithResponse(options).block();
+
+//Transfer a PSTN call to phone number and set UUI and custom SIP headers
+CommunicationIdentifier pstnTransferDestination = new PhoneNumberIdentifier("<target_phoneNumber>");
+TransferCallToParticipantOptions pstnOptions = new TransferCallToParticipantOptions(pstnTransferDestination);
+pstnOptions.getCustomCallingContext().addSipUui("UUIvalue");
+pstnOptions.getCustomCallingContext().addSipX("sipHeaderName", "value");
+Response<TransferCallResult> pstnTransferResponse = callConnectionAsync.transferToParticipantCallWithResponse(pstnOptions).block();
+```
+
+### [JavaScript](#tab/javascript)
+```javascript
+//Transfer to communication services user and include one VOIP header
+const transferDestination = { communicationUserId: "<user_id>" };
+const transferee = { communicationUserId: "<transferee_user_id>" };
+const options = { transferee: transferee, operationContext: "<Your_context>", operationCallbackUrl: "<url_endpoint>" };
+const customCallingContext: CustomCallingContext = [];
+customCallingContext.push({ kind: "voip", key: "customVoipHeader1", value: "customVoipHeaderValue1" });
+options.customCallingContext = customCallingContext;
+const result = await callConnection.transferCallToParticipant(transferDestination, options);
+
+//Transfer a PSTN call to phone number and set UUI and custom SIP headers
+const pstnTransferDestination = { phoneNumber: "<target_phoneNumber>" };
+const pstnTransferee = { phoneNumber: "<transferee_phoneNumber>" };
+const pstnOptions = { transferee: pstnTransferee, operationContext: "<Your_context>", operationCallbackUrl: "<url_endpoint>" };
+const pstnCustomCallingContext: CustomCallingContext = [];
+pstnCustomCallingContext.push({ kind: "sipuui", key: "", value: "uuivalue" });
+pstnCustomCallingContext.push({ kind: "sipx", key: "headerName", value: "headerValue" });
+pstnOptions.customCallingContext = pstnCustomCallingContext;
+const pstnResult = await callConnection.transferCallToParticipant(pstnTransferDestination, pstnOptions);
+```
+
+### [Python](#tab/python)
+```python
+#Transfer to communication services user and include one VOIP header
+transfer_destination = CommunicationUserIdentifier("<user_id>")
+transferee = CommunicationUserIdentifier("<transferee_user_id>")
+voip_headers = {"customVoipHeader1": "customVoipHeaderValue1"}
+result = call_connection_client.transfer_call_to_participant(
+ target_participant=transfer_destination,
+ transferee=transferee,
+ voip_headers=voip_headers,
+ operation_context="Your context",
+ operation_callback_url="<url_endpoint>"
+)
+
+#Transfer a PSTN call to phone number and set UUI and custom SIP headers
+transfer_destination = PhoneNumberIdentifier("<target_phoneNumber>")
+transferee = PhoneNumberIdentifier("<transferee_phoneNumber>")
+sip_headers = {}
+sip_headers["X-MS-Custom-headerName"] = "headerValue"
+sip_headers["User-To-User"] = "uuivalue"
+result = call_connection_client.transfer_call_to_participant(
+ target_participant=transfer_destination,
+ transferee=transferee,
+ sip_headers=sip_headers,
+ operation_context="Your context",
+ operation_callback_url="<url_endpoint>"
+)
+```
+
+Transfer of a VoIP call to a phone number is currently not supported.
+
+--
+## Reading custom context from an incoming call event
+
+### [csharp](#tab/csharp)
+
+```csharp
+AcsIncomingCallEventData incomingEvent = <incoming call event from Event Grid>;
+// Retrieve incoming call custom context
+AcsIncomingCallCustomContext callCustomContext = incomingEvent.CustomContext;
+
+// Inspect dictionary with key/value pairs
+var voipHeaders = callCustomContext.VoipHeaders;
+var sipHeaders = callCustomContext.SipHeaders;
+
+// Proceed to answer or reject call as usual
+```
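+
+As a rough sketch of that last step (not part of this article's sample), answering might look like the following; the callback URI is a placeholder, and the property and type names assume the Event Grid system events and Call Automation SDKs:
+
+```csharp
+// Answer the incoming call so the custom context can be acted on; "<callback_uri>" is a placeholder.
+var answerOptions = new AnswerCallOptions(incomingEvent.IncomingCallContext, new Uri("<callback_uri>"));
+var answerResponse = await client.AnswerCallAsync(answerOptions);
+CallConnection answeredCallConnection = answerResponse.Value.CallConnection;
+```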
+
+### [Java](#tab/java)
+```java
+AcsIncomingCallEventData incomingEvent = <incoming call event from Event Grid>;
+// Retrieve incoming call custom context
+AcsIncomingCallCustomContext callCustomContext = incomingEvent.getCustomContext();
+
+// Inspect dictionary with key/value pairs
+Map<String, String> voipHeaders = callCustomContext.getVoipHeaders();
+Map<String, String> sipHeaders = callCustomContext.getSipHeaders();
+
+// Proceed to answer or reject call as usual
+```
+
+### [JavaScript](#tab/javascript)
+```javascript
+// Retrieve incoming call custom context
+const callCustomContext = incomingEvent.customContext;
+
+// Inspect dictionary with key/value pairs
+const voipHeaders = callCustomContext.voipHeaders;
+const sipHeaders = callCustomContext.sipHeaders;
+
+// Proceed to answer or reject call as usual
+```
+
+### [Python](#tab/python)
+```python
+# Retrieve incoming call custom context
+callCustomContext = incomingEvent.customContext
+
+# Inspect dictionary with key/value pairs
+voipHeaders = callCustomContext.voipHeaders
+sipHeaders = callCustomContext.sipHeaders
+```
+
+--
+## Additional resources
+
+- For a sample payload of the incoming call, refer to this [guide](../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationincomingcall).
+
+- Learn more about [SIP protocol details for direct routing](../../concepts/telephony/direct-routing-sip-specification.md).
communication-services Recognize Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/recognize-action.md
This guide will help you get started with recognizing DTMF input provided by par
| RecognizeFailed | 500 | 8511 | Action failed, encountered failure while trying to play the prompt. | | RecognizeFailed | 500 | 8512 | Unknown internal server error. |
+## Known limitations
+- In-band DTMF isn't supported; use RFC 2833 DTMF instead, as in the sketch below.
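+
+As a minimal C# sketch (not this article's own sample), collecting DTMF tones, which participants send as RFC 2833, with the Recognize action might look like the following; the phone number and tone count are placeholder assumptions:
+
+```csharp
+// Collect up to four DTMF tones from a PSTN participant (placeholder number).
+var targetParticipant = new PhoneNumberIdentifier("<participant_phoneNumber>");
+var recognizeOptions = new CallMediaRecognizeDtmfOptions(targetParticipant, 4)
+{
+    InitialSilenceTimeout = TimeSpan.FromSeconds(15),
+    InterToneTimeout = TimeSpan.FromSeconds(5)
+};
+await callConnection.GetCallMedia().StartRecognizingAsync(recognizeOptions);
+```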
+ ## Clean up resources If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources).
container-apps Start Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/start-containers.md
Previously updated : 11/14/2023 Last updated : 11/30/2023
Containers package your applications in an easy-to-transport unit. Here are a fe
- **Simplicity**: Moving shipping containers requires specific, yet standardized tools. Similarly, Azure Container Apps simplifies how you use containers, which allows you to focus on app development without worrying about the details of container management. > [!div class="nextstepaction"]
-> [Build your first app using a container](quickstart-portal.md)
+> [Use serverless containers](start-serverless-containers.md)
container-apps Start Serverless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/start-serverless-containers.md
Title: Introduction to serverless containers on Azure
-description: Get started with serverless containers on Azure with Azure Container Apps
+ Title: Using serverless containers on Azure
+description: Get started using serverless containers on Azure with Azure Container Apps
Previously updated : 11/14/2023 Last updated : 11/30/2023
-# Introduction to serverless containers on Azure
+# Use serverless containers on Azure
Serverless computing offers services that manage and maintain servers, which relieve you of the burden of physically operating servers yourself. Azure Container Apps is a serverless platform that handles scaling, security, and infrastructure management for you - all while reducing costs. Once freed from server-related concerns, you're able to spend your time focusing on your application code.
Use the following table to help you get acquainted with Azure Container Apps.
| Action | Description | |||
-| [Build the app](quickstart-code-to-cloud.md) | Deploy your first app, then create an event driven app to process a message queue. |
+| [Build the app](quickstart-portal.md) | Deploy your first app, then create an event driven app to process a message queue. |
| [Scale the app](scale-app.md) | Learn how Containers Apps handles meeting variable levels of demand. | | [Enable public access](ingress-overview.md) | Enable ingress on your container app to accept request from the public web. | | [Observe app behavior](observability.md) | Use log streaming, your apps console, application logs, and alerts to observe the state of your container app. |
copilot Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/capabilities.md
Use Microsoft Copilot for Azure (preview) to perform many basic tasks. There are
- [Discover performance recommendations with Code Optimizations](optimize-code-application-insights.md) - [Author API Management policies](author-api-management-policies.md) - [Generate Kubernetes YAML files](generate-kubernetes-yaml.md)
+ - [Troubleshoot apps faster with App Service](troubleshoot-app-service.md)
## Get information
copilot Get Monitoring Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-monitoring-information.md
When you ask Microsoft Copilot for Azure (preview) about logs, it automatically
[!INCLUDE [preview-note](includes/preview-note.md)]
-### Sample prompts
+## Sample prompts
Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor logs. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information.
copilot Troubleshoot App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/troubleshoot-app-service.md
+
+ Title: Troubleshoot your apps faster with App Service using Microsoft Copilot for Azure (preview)
+description: Learn how Microsoft Copilot for Azure (preview) can help you troubleshoot your web apps hosted with App Service.
Last updated : 12/01/2023++++++
+# Troubleshoot your apps faster with App Service using Microsoft Copilot for Azure (preview)
+
+Microsoft Copilot for Azure (preview) can act as your expert companion for [Azure App Service](/azure/app-service/overview) diagnostics and solutions.
+
+App Service offers more than sixty troubleshooting tools for different types of issues. Rather than figure out which tool to use, you can ask Microsoft Copilot for Azure (preview) about the problem you're experiencing. Microsoft Copilot for Azure (preview) will determine which tool is best suited to your question, whether it's related to high CPU usage, networking issues, getting a memory dump, or more. You'll see relevant diagnostics to help you resolve any problems you're experiencing.
+
+When you ask Microsoft Copilot for Azure (preview) for App Service troubleshooting help, it automatically pulls context when possible, based on the current conversation or the app you're viewing in the Azure portal. If the context isn't clear, you'll be prompted to specify the resource for which you want information.
+++
+## Sample prompts
+
+Here are a few examples of the kinds of prompts you can use to get help with App Service troubleshooting. Modify these prompts based on your real-life scenarios, or try additional prompts to get help with different types of issues.
+
+- "My web app is down"
+- "My web app is slow"
+- "Enable auto heal"
+- "Take a memory dump"
+
+## Examples
+
+You can tell Microsoft Copilot for Azure (preview) "my web app is down." After you select the resource that you want to troubleshoot, Microsoft Copilot for Azure opens the **App Service - Web App Down** tool so you can view diagnostics.
++
+When you say "Take a memory dump" to Microsoft Copilot for Azure (preview), Microsoft Copilot for Azure (preview) suggests opening the **Collect a Memory Dump** tool so that you can take a snapshot of the app's current state. In this example, Microsoft Copilot for Azure (preview) continues to work with the resource selected earlier in the conversation.
++
+## Next steps
+
+- Explore [capabilities](capabilities.md) of Microsoft Copilot for Azure (preview).
+- Learn more about [Azure Monitor](/azure/azure-monitor/).
+- [Request access](https://aka.ms/MSCopilotforAzurePreview) to Microsoft Copilot for Azure (preview).
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
Multiple-region accounts experience different behaviors depending on the followi
* After the previously affected write region recovers, it will show as "online" in Azure portal, and become available as a read region. At this point, it is safe to switch back to the recovered region as the write region by using [PowerShell, the Azure CLI, or the Azure portal](how-to-manage-database-account.md#manual-failover). There is *no data or availability loss* before, while, or after you switch the write region. Your application continues to be highly available.
+> [!WARNING]
+> In the event of a write region outage, where the Azure Cosmos DB account promotes a secondary region to be the new primary write region via *service-managed failover*, the original write region will **not be promoted back as the write region automatically** once it is recovered. It is your responsibility to switch back to the recovered region as the write region by using [PowerShell, the Azure CLI, or the Azure portal](how-to-manage-database-account.md#manual-failover) (once it is safe to do so, as described above).
+ ## SLAs The following table summarizes the high-availability capabilities of various account configurations.
cosmos-db Concepts Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-availability-zones.md
+
+ Title: Availability zone (AZ) outage resiliency ΓÇô Azure Cosmos DB for PostgreSQL
+description: Disaster recovery using Azure availability zones (AZ) concepts
+++++ Last updated : 11/28/2023++
+# Availability zone outage resiliency in Azure Cosmos DB for PostgreSQL
++
+Many Azure regions have availability zones. Availability zones (AZs) are separated groups of datacenters within a region. Availability zones are close enough to have low-latency connections to other availability zones within their region. They're connected by a high-performance network with a round-trip latency of less than 2 milliseconds.
+
+At the same time, availability zones are far enough apart to reduce the likelihood that more than one will be affected by local outages or weather. Availability zones have independent power, cooling, and networking infrastructure. They're designed so that if one zone experiences an outage, then regional services are supported by the remaining zones across various Azure services.
+
+Azure Cosmos DB for PostgreSQL supports availability zones for improved reliability and disaster recovery. Advantages of availability zones vary depending on whether [high availability](./concepts-high-availability.md) is enabled on an Azure Cosmos DB for PostgreSQL cluster.
+
+## Availability zone outage resiliency for regional service components
+There are many Azure Cosmos DB for PostgreSQL service components in each supported Azure region that don't belong to individual clusters but are critical parts of running the managed service. These components enable all management operations, such as provisioning new clusters and scaling existing clusters, and all internal operations, such as monitoring node health.
+
+When an Azure region supports availability zones, all of these service components are configured to be AZ redundant. This means that all Azure Cosmos DB for PostgreSQL service components can sustain the outage of an AZ; in other words, they're resilient to a single AZ outage.
+
+Whether a cluster is configured with high availability or not, its ongoing operations depend on these service components. AZ redundancy of the service components is a critical element of availability zone outage resiliency in Azure Cosmos DB for PostgreSQL.
+
+## Availability zone outage impact on clusters with and without high availability
+
+All nodes in a cluster are provisioned into one availability zone. The preferred AZ setting allows you to put all cluster nodes in the same availability zone where the application is deployed. Having all nodes in the same AZ ensures lower latency between the nodes, thus improving overall cluster performance.
+
+When high availability (HA) is enabled on a cluster, all primary nodes are created in one AZ and all standby nodes are provisioned into another AZ. Nodes can move between availability zones during the following events:
+
+- A failure occurs on a primary HA-enabled node. In this case, the primary node's standby becomes the new primary, and the standby node's AZ becomes the new AZ for that primary node.
+- A [scheduled maintenance](./concepts-maintenance.md) event happens on the cluster. At the end of maintenance, all primary nodes in a cluster are in the same AZ.
+
+If high availability *is* enabled, the cluster continues to be available throughout an AZ outage, with a possible failover on those primary nodes that are in the impacted AZ.
+If high availability *is not* enabled on a cluster, only an outage in the AZ where its nodes are deployed impacts cluster availability.
+
+You can always check the availability zone of each primary node by using the [Azure portal](./concepts-cluster.md#node-availability-zone) or programmatic methods such as [REST APIs](/rest/api/postgresqlhsc/servers/get).
+
+To get resiliency benefits of availability zones, your cluster needs to be in [one of the Azure regions](./resources-regions.md) where Azure Cosmos DB for PostgreSQL is configured for AZ outage resiliency.
+
+## Next steps
+
+- Check out [regions that are configured for AZ outage resiliency](./resources-regions.md) in Azure Cosmos DB for PostgreSQL
+- Learn about [availability zones in Azure](../../reliability/availability-zones-overview.md)
+- Learn how to [enable high availability](howto-high-availability.md) in a cluster
cosmos-db Concepts Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-cluster.md
Title: Cluster - Azure Cosmos DB for PostgreSQL description: What is a cluster in Azure Cosmos DB for PostgreSQL--++ Previously updated : 06/05/2023 Last updated : 11/28/2023 # Clusters in Azure Cosmos DB for PostgreSQL
values:
### Node availability zone
-Azure Cosmos DB for PostgreSQL displays the [availability
-zone](../../availability-zones/az-overview.md#availability-zones) of each node
+Azure Cosmos DB for PostgreSQL displays the [availability zone](./concepts-availability-zones.md) of each node
in a cluster on the Overview page in the Azure portal. The **Availability zone** column contains either the name of the zone, or `--` if the node isn't
-assigned to a zone. (Only [certain
-regions](https://azure.microsoft.com/global-infrastructure/geographies/#geographies)
+assigned to a zone. (Only [certain regions](./resources-regions.md)
support availability zones.) Azure Cosmos DB for PostgreSQL allows you to set a preferred availability zone for cluster. Usually the reason for it is to put cluster nodes in the same availability zone where the application and the rest of the application stack components are.
-If [high availability](./concepts-high-availability.md) is enabled for the cluster, and a node [fails
-over](concepts-high-availability.md) to a standby, you may see its availability
+If [high availability](./concepts-high-availability.md) is enabled for the cluster, and a node fails
+over to a standby, you may see its availability
zone differs from the other nodes. In this case, the nodes will be moved back
-into the same availability zone together during the next [maintenance
-window](concepts-maintenance.md).
+into the same availability zone together during the next [maintenance event](./concepts-maintenance.md).
## Next steps
cosmos-db Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/concepts-high-availability.md
Title: High availability ΓÇô Azure Cosmos DB for PostgreSQL description: High availability and disaster recovery concepts--++ Previously updated : 06/05/2023 Last updated : 11/28/2023 # High availability in Azure Cosmos DB for PostgreSQL
switches incoming connections from the failed node to its standby. Failover
happens within a few minutes, and promoted nodes always have fresh data through PostgreSQL synchronous streaming replication.
-All primary nodes in a cluster are provisioned into one availability zone
+All primary nodes in a cluster are provisioned into one [availability zone](./concepts-availability-zones.md)
for better latency between the nodes. The preferred availability zone allows you to put all cluster nodes in the same availability zone where the application is deployed. This proximity could improve performance further by decreasing app-database latency. The standby nodes are provisioned into another availability zone. The Azure portal [displays](concepts-cluster.md#node-availability-zone) the availability
-zone of each primary node in a cluster.
+zone of each primary node in a cluster. You can also check the availability zone of each node in a cluster by using programmatic methods such as [REST APIs](/rest/api/postgresqlhsc/servers/get).
Even without HA enabled, each node has its own locally redundant storage (LRS) with three synchronous replicas maintained by Azure
for clusters in the Azure portal.
## Next steps -- Learn how to [enable high availability](howto-high-availability.md) in a cluster
+- Learn how to [enable high availability](howto-high-availability.md) in a cluster.
+- Learn about [availability zones](./concepts-availability-zones.md) in Azure Cosmos DB for PostgreSQL.
cosmos-db Howto Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-availability-zones.md
+
+ Title: Configure and view availability zones in Azure Cosmos DB for PostgreSQL
+description: How to set preferred availability zone and check AZs for nodes
++++++ Last updated : 11/29/2023++
+# Use availability zones in Azure Cosmos DB for PostgreSQL
++
+Azure Cosmos DB for PostgreSQL provisions all nodes of a cluster in a single [availability zone](./concepts-availability-zones.md) (AZ) for better performance and availability. If a cluster has [high availability](./concepts-high-availability.md) enabled, all standby nodes are provisioned into another availability zone to make sure that all nodes in the cluster continue to be available, with a possible failover, if there's an AZ outage.
+
+To get resiliency benefits of availability zones, your cluster needs to be in [one of the Azure regions](./resources-regions.md) where Azure Cosmos DB for PostgreSQL is configured for AZ outage resiliency.
+
+In this article, you learn how to specify the preferred availability zone for your Azure Cosmos DB for PostgreSQL cluster. You also learn how to check the availability zone of each node once the cluster is provisioned.
+
+## Specify preferred availability zone for new cluster
+
+By default, a preferred availability zone isn't set for a new cluster. In that case, the Azure Cosmos DB for PostgreSQL service randomly selects an availability zone for the primary nodes.
+
+You can select the preferred AZ during cluster creation on the **Scale** page, in the **Availability zones** section.
+
+## Change preferred availability zone
+
+Once the cluster is provisioned, select an AZ in the **Preferred availability zone** drop-down list on the **Scale** page for your cluster in the Azure portal. Select **Save** to apply your selection.
+
+To avoid disruption, the availability zone change isn't applied immediately. Instead, all nodes are moved to the preferred availability zone during the next [maintenance](./concepts-maintenance.md) event.
+
+## Check availability zone for each node
+
+The **Overview** tab for the cluster lists all nodes along with an **Availability zone** column that shows the actual availability zone of each primary cluster node.
+
+## Next steps
+
+- Learn more about [availability zones](./concepts-availability-zones.md) in Azure Cosmos DB for PostgreSQL.
+- Learn more about [availability zones in Azure](/azure/reliability/availability-zones-overview).
+- Use [REST APIs](/rest/api/postgresqlhsc/clusters/update), [Terraform](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/cosmosdb_postgresql_cluster), or [Azure CLI](/cli/azure/cosmosdb/postgres/cluster#az-cosmosdb-postgres-cluster-update) to perform operations with availability zones in Azure Cosmos DB for PostgreSQL.
cosmos-db Howto High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-high-availability.md
Title: Configure high availability - Azure Cosmos DB for PostgreSQL description: How to enable or disable high availability--++ Previously updated : 06/05/2023 Last updated : 11/28/2023 # Configure high availability in Azure Cosmos DB for PostgreSQL
Azure Cosmos DB for PostgreSQL provides high availability
gets a standby. If the original node becomes unhealthy, its standby is promoted to replace it.
-> [!IMPORTANT]
-> Because HA doubles the number of servers in the group, it will also double
-> the cost.
- Enabling HA is possible during cluster creation on the **Scale** page. Once the cluster is provisioned, select the **Enable high availability (HA)** checkbox in the **High availability** tab for your cluster in the Azure portal. - Select **Save** to apply your selection. Enabling HA can take some time as the cluster provisions standby nodes and streams data to them.
The **Overview** tab for the cluster lists all nodes along with a **High availab
:::image type="content" source="media/howto-high-availability/02-ha-column.png" alt-text="the ha column in cluster overview":::
-### Next steps
+## Next steps
-Learn more about [high availability](concepts-high-availability.md).
+- Learn more about [high availability](concepts-high-availability.md).
+- Learn more about [availability zones](./concepts-availability-zones.md) in Azure Cosmos DB for PostgreSQL.
cosmos-db Howto Ingest Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-data-factory.md
Previously updated : 01/30/2023 Last updated : 11/28/2023 # How to ingest data by using Azure Data Factory in Azure Cosmos DB for PostgreSQL
for storage, processing, and reporting.
:::image type="content" source="media/howto-ingestion/azure-data-factory-architecture.png" alt-text="Dataflow diagram for Azure Data Factory." border="false":::
+> [!IMPORTANT]
+> Data Factory doesn't support private endpoints for Azure Cosmos DB for PostgreSQL at this time.
+ ## Data Factory for real-time ingestion Here are key reasons to choose Azure Data Factory for ingesting data into
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Previously updated : 11/20/2023 Last updated : 11/29/2023 # Product updates for Azure Cosmos DB for PostgreSQL
Updates that donΓÇÖt directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### November 2023
+* General availability: [Availability zone (AZ) outage resiliency](./concepts-availability-zones.md) is now supported in [select regions](./resources-regions.md)
* General availability: [The latest minor PostgreSQL version updates](reference-versions.md#postgresql-versions) (11.22, 12.17, 13.13, 14.10, 15.5, and 16.1) are now available in all supported regions. * PostgreSQL 16 is now the default Postgres version for Azure Cosmos DB for PostgreSQL in Azure portal. * Learn how to do [in-place upgrade of major PostgreSQL versions](./howto-upgrade.md) in Azure Cosmos DB for PostgreSQL.
cosmos-db Resources Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/resources-regions.md
Previously updated : 11/14/2023 Last updated : 11/28/2023 # Regional availability for Azure Cosmos DB for PostgreSQL
Last updated 11/14/2023
Azure Cosmos DB for PostgreSQL is available in the following Azure regions:
-* Americas:
- * Brazil South
- * Canada Central
- * Canada East
- * Central US
- * East US
- * East US 2
- * North Central US
- * South Central US
- * West Central US
- * West US
- * West US 2
- * West US 3
-* Asia Pacific:
- * Australia Central
- * Australia East
- * Central India
- * East Asia
- * Japan East
- * Japan West
- * Korea Central
- * South India
- * Southeast Asia
-* Europe:
- * France Central
- * Germany West Central
- * North Europe
- * Sweden Central
- * Switzerland North
- * Switzerland WestΓÇá
- * UK South
- * West Europe
-* Middle East:
- * Qatar Central
-
+| Region | HA | AZ outage resiliency | Geo-redundant backup stored in |
+| | | | |
+| Australia Central | :heavy_check_mark: | N/A | :x: |
+| Australia East | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Brazil South | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Canada Central | :heavy_check_mark: | :heavy_check_mark: | Canada East |
+| Canada East | :heavy_check_mark: | N/A | Canada Central |
+| Central India | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Central US | :heavy_check_mark: | :heavy_check_mark: | East US 2 |
+| East Asia | :heavy_check_mark: | :heavy_check_mark: | Southeast Asia |
+| East US | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| East US 2 | :heavy_check_mark: | :x: | Central US |
+| France Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Japan East | :heavy_check_mark: | :heavy_check_mark: | Japan West |
+| Japan West | :heavy_check_mark: | :x: | Japan East |
+| Korea Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| North Central US | :heavy_check_mark: | N/A | South Central US |
+| North Europe | :heavy_check_mark: | :heavy_check_mark: | West Europe |
+| Qatar Central | :heavy_check_mark: | :x: | :x: |
+| South Central US | :heavy_check_mark: | :heavy_check_mark: | North Central US |
+| South India | :heavy_check_mark: | N/A | :x: |
+| Southeast Asia | :heavy_check_mark: | :x:| East Asia |
+| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | Switzerland West |
+| Switzerland West ΓÇá | :heavy_check_mark: | N/A | Switzerland North |
+| UK South | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| West Central US | :heavy_check_mark: | N/A | West US 2 |
+| West Europe | :heavy_check_mark: | :x: | North Europe |
+| West US | :heavy_check_mark: | :x: | East US |
+| West US 2 | :heavy_check_mark: | :heavy_check_mark: | West Central US |
+| West US 3 | :heavy_check_mark: | :heavy_check_mark: | :x: |
ΓÇá This Azure region is a [restricted one](../../availability-zones/cross-region-replication-azure.md#azure-paired-regions). To use it, you need to request access to it by opening a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). - Some of these regions may not be activated on all Azure subscriptions. If you want to use a region from the list and don't see it in your subscription, or if you want to use a region not on this list, open a
request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportB
**Next steps** -- Learn how to [create a cluster in the portal](quickstart-create-portal.md).-- See [Azure regions with availability zones](../../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support)
+- Learn how to [create a cluster in the portal](./quickstart-create-portal.md).
+- See details about [availability zone outage resiliency](./concepts-availability-zones.md) in Azure Cosmos DB for PostgreSQL.
+- Check out [backup redundancy options](./concepts-backup.md#backup-redundancy).
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
Each export creates a new file, so older exports aren't overwritten.
You can use a management group to aggregate subscription cost information in a single container. Exports support management group scope for Enterprise Agreement but not for Microsoft Customer Agreement or other subscription types. Multiple currencies are also not supported in management group exports.
-Exports at the management group scope support only usage charges, purchases (including reservations and savings plans). Amortized cost reports aren't supported. When you create an export from the Azure portal for a management group scope, the metric field isn't shown because it defaults to the usage type. When you create a management group scope export using the REST API, choose [ExportType](/rest/api/cost-management/exports/create-or-update#exporttype) as `Usage`.
+Exports at the management group scope support only usage charges. Purchases, including reservations and savings plans, aren't supported. Amortized cost reports also aren't supported. When you create an export from the Azure portal for a management group scope, the metric field isn't shown because it defaults to the usage type. When you create a management group scope export by using the REST API, choose [ExportType](/rest/api/cost-management/exports/create-or-update#exporttype) as `Usage`.
1. Create one management group and assign subscriptions to it, if you haven't already. 1. In cost analysis, set the scope to your management group and select **Select this management group**.
data-factory Connector Dynamics Crm Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-dynamics-crm-office-365.md
Previously updated : 10/20/2023 Last updated : 11/20/2023 # Copy and transform data in Dynamics 365 (Microsoft Dataverse) or Dynamics CRM using Azure Data Factory or Azure Synapse Analytics
The following properties are supported for the Dynamics linked service.
| serviceUri | The service URL of your Dynamics instance, the same one you access from browser. An example is "https://\<organization-name>.crm[x].dynamics.com". | Yes | | authenticationType | The authentication type to connect to a Dynamics server. Valid values are "AADServicePrincipal", "Office365" and "ManagedIdentity". | Yes | | servicePrincipalId | The client ID of the Microsoft Entra application. | Yes when authentication is "AADServicePrincipal" |
-| servicePrincipalCredentialType | The credential type to use for service-principal authentication. Valid values are "ServicePrincipalKey" and "ServicePrincipalCert". | Yes when authentication is "AADServicePrincipal" |
+| servicePrincipalCredentialType | The credential type to use for service-principal authentication. Valid values are "ServicePrincipalKey" and "ServicePrincipalCert". <br/><br/>Note: We recommend using ServicePrincipalKey. There's a known limitation for the ServicePrincipalCert credential type, where the service may encounter a transient issue of failing to retrieve the secret from the key vault.| Yes when authentication is "AADServicePrincipal" |
| servicePrincipalCredential | The service-principal credential. <br/><br/>When you use "ServicePrincipalKey" as the credential type, `servicePrincipalCredential` can be a string that the service encrypts upon linked service deployment. Or it can be a reference to a secret in Azure Key Vault. <br/><br/>When you use "ServicePrincipalCert" as the credential, `servicePrincipalCredential` must be a reference to a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes when authentication is "AADServicePrincipal" | | username | The username to connect to Dynamics. | Yes when authentication is "Office365" | | password | The password for the user account you specified as the username. Mark this field with "SecureString" to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes when authentication is "Office365" |
deployment-environments How To Create Configure Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-configure-dev-center.md
Title: Create and configure a dev center for Azure Deployment Environments by using the Azure CLI
-description: Learn how to create and access an environment in an Azure Deployment Environments project using Azure CLI.
+description: Learn how to create and access a dev center for Azure Deployment Environments project using the Azure CLI.
Previously updated : 04/28/2023 Last updated : 11/29/2023 # Create and configure a dev center for Azure Deployment Environments by using the Azure CLI
-This quickstart shows you how to create and configure a dev center in Azure Deployment Environments.
-
-A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
+This quickstart guide shows you how to create and configure a dev center in Azure Deployment Environments.
+A platform engineering team typically sets up a dev center, attaches external catalogs to the dev center, creates projects, and provides access to development teams. Development teams can then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Azure role-based access control role with permissions to create and manage resources in the subscription, such as [Contributor](../role-based-access-control/built-in-roles.md#contributor) or [Owner](../role-based-access-control/built-in-roles.md#owner).-- [Install the Azure CLI](/cli/azure/install-azure-cli).-- [Install dev center CLI extension](how-to-install-devcenter-cli-extension.md)-- A GitHub Account and a [Personal Access Token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) with Repo Access.
+- Install the [Azure CLI devcenter extension](how-to-install-devcenter-cli-extension.md).
+- A GitHub account and a [personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) with repo access.
## Create a dev center
-To create and configure a Dev center in Azure Deployment Environments by using the Azure portal:
+
+To create and configure a dev center in Azure Deployment Environments:
1. Sign in to the Azure CLI:
To create and configure a Dev center in Azure Deployment Environments by using t
az login ```
-1. Install the Azure Dev Center extension for the CLI.
+1. Install the Azure CLI *devcenter* extension.
```azurecli az extension add --name devcenter --upgrade
To create and configure a Dev center in Azure Deployment Environments by using t
1. Configure the default subscription as the subscription in which you want to create the dev center: ```azurecli
- az account set --subscription <name>
+ az account set --subscription <subscriptionName>
```
-1. Configure the default location as the location in which you want to create the dev center. Make sure to choose an [available regions for Azure Deployment Environments](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=deployment-environments&regions=all):
+1. Configure the default location where you want to create the dev center. Make sure to choose an [available region for Azure Deployment Environments](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=deployment-environments&regions=all):
```azurecli az configure --defaults location=eastus
To create and configure a Dev center in Azure Deployment Environments by using t
1. Create the resource group in which you want to create the dev center: ```azurecli
- az group create -n <group name>
+ az group create -n <resourceGroupName>
``` 1. Configure the default resource group as the resource group you created: ```azurecli
- az config set defaults.group=<group name>
+ az config set defaults.group=<resourceGroupName>
``` 1. Create the dev center: ```azurecli
- az devcenter admin devcenter create -n <devcenter name>
+ az devcenter admin devcenter create -n <devcenterName>
```
- After a few minutes, you'll get an output that it's created:
+ After a few minutes, the output indicates that it was created:
```output { "devCenterUri": "https://...",
- "id": "/subscriptions/.../<devcenter name>",
+ "id": "/subscriptions/.../<devcenterName>",
"location": "eastus", "name": "<devcenter name>", "provisioningState": "Succeeded",
- "resourceGroup": "<group name>",
+ "resourceGroup": "<resourceGroupName>",
"systemData": { "createdAt": "...", "createdBy": "...",
To create and configure a Dev center in Azure Deployment Environments by using t
> [!NOTE] > You can use `--help` to view more details about any command, accepted arguments, and examples. For example, use `az devcenter admin devcenter create --help` to view more details about creating a dev center.
-## Adding personal access token to Key Vault
-You need an Azure Key Vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository.
+## Add a personal access token to Azure Key Vault
+
+You need an Azure Key Vault to store the GitHub personal access token (PAT) that's used to grant Azure access to your GitHub repository.
-1. Create a Key Vault:
+1. Create a key vault:
```azurecli # Change the name to something Globally unique
- az keyvault create -n <kv name>
+ az keyvault create -n <keyvaultName>
``` > [!NOTE]
- > You may get the following Error:
+ > You might get the following error:
`Code: VaultAlreadyExists Message: The vault name 'kv-devcenter-unique' is already in use. Vault names are globally unique so it is possible that the name is already taken.` You must use a globally unique key vault name.
-1. Add GitHub personal access token (PAT) to Key Vault as a secret:
+1. Add the GitHub PAT to Key Vault as a secret:
```azurecli
- az keyvault secret set --vault-name <kv name> --name GHPAT --value <PAT>
+ az keyvault secret set --vault-name <keyvaultName> --name GHPAT --value <personalAccessToken>
``` ## Attach an identity to the dev center
After you create a dev center, attach an [identity](concept-environments-key-con
In this quickstart, you configure a system-assigned managed identity for your dev center.
-## Attach a system-assigned managed identity
+### Attach a system-assigned managed identity
To attach a system-assigned managed identity to your dev center: ```azurecli
- az devcenter admin devcenter update -n <devcenter name> --identity-type SystemAssigned
+ az devcenter admin devcenter update -n <devcenterName> --identity-type SystemAssigned
```
-### Assign the system-assigned managed identity access to the key vault secret
-Make sure that the identity has access to the key vault secret that contains the personal access token to access your repository. Key Vaults support two methods of access; Azure role-based access control or Vault access policy. In this quickstart, you use a vault access policy.
+### Give the system-assigned managed identity access to the key vault secret
-1. Retrieve Object ID of your dev center's identity:
+Make sure that the identity has access to the key vault secret that contains the GitHub PAT to access your repository. Key Vaults support two methods of access: Azure role-based access control or vault access policy. In this quickstart, you use a vault access policy.
+
+1. Retrieve the Object ID of your dev center's identity:
```azurecli
- OID=$(az ad sp list --display-name <devcenter name> --query [].id -o tsv)
+ OID=$(az ad sp list --display-name <devcenterName> --query [].id -o tsv)
echo $OID ```
-1. Add a Key Vault Policy to allow dev center to get secrets from Key Vault:
+1. Add a Key Vault policy to allow the dev center to get secrets from Key Vault:
```azurecli
- az keyvault set-policy -n <kv name> --secret-permissions get --object-id $OID
+ az keyvault set-policy -n <keyvaultName> --secret-permissions get --object-id $OID
``` ## Add a catalog to the dev center
-Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
+
+Azure Deployment Environments supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and allows them to quickly create consistent environments.
In this quickstart, you attach a GitHub repository that contains samples created and maintained by the Azure Deployment Environments team. To add a catalog to your dev center, you first need to gather some information. ### Gather GitHub repo information+ To add a catalog, you must specify the GitHub repo URL, the branch, and the folder that contains your environment definitions. You can gather this information before you begin the process of adding the catalog to the dev center.
+You can use this [sample catalog](https://github.com/Azure/deployment-environments) as your repository. Make a fork of the repository for the following steps.
+ > [!TIP]
-> If you are attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-for-your-azure-devops-repository).
+> If you're attaching an Azure DevOps repository, use these steps: [Get the clone URL of an Azure DevOps repository](how-to-configure-catalog.md#get-the-clone-url-for-your-azure-devops-repository).
-1. On your [GitHub](https://github.com) account page, select **<> Code**, and then select copy.
-1. Take a note of the branch that you're working in.
+1. Navigate to your repository, select **<> Code**, and then copy the clone URL.
+1. Make a note of the branch that you're working in.
1. Take a note of the folder that contains your environment definitions.
-
- :::image type="content" source="media/how-to-create-configure-dev-center/github-info.png" alt-text="Screenshot that shows the GitHub repo with Code, branch, and folder highlighted.":::
+
+ :::image type="content" source="media/how-to-create-configure-dev-center/github-info.png" alt-text="Screenshot that shows the GitHub repo with branch, copy URL, and folder highlighted." lightbox="media/how-to-create-configure-dev-center/github-info.png":::
### Add a catalog to your dev center 1. Retrieve the secret identifier:
-
+ ```azurecli
- SECRETID=$(az keyvault secret show --vault-name <kv name> --name GHPAT --query id -o tsv)
+ SECRETID=$(az keyvault secret show --vault-name <keyvaultName> --name GHPAT --query id -o tsv)
echo $SECRETID ```
-1. Add Catalog:
+1. Add the catalog.
```azurecli
- # Sample Catalog example
+ # Sample catalog example
REPO_URL="https://github.com/Azure/deployment-environments.git"
- az devcenter admin catalog create --git-hub path="/Environments" branch="main" secret-identifier=$SECRETID uri=$REPO_URL -n <catalog name> -d <devcenter name>
+ az devcenter admin catalog create --git-hub path="/Environments" branch="main" secret-identifier=$SECRETID uri=$REPO_URL -n <catalogName> -d <devcenterName>
```
-1. Confirm that the catalog is successfully added and synced:
+1. Confirm that the catalog was successfully added and synced:
```azurecli
- az devcenter admin catalog list -d <devcenter name> -o table
+ az devcenter admin catalog list -d <devcenterName> -o table
``` ## Create an environment type Use an environment type to help you define the different types of environments your development teams can deploy. You can apply different settings for each environment type.
-1. Create an Environment Type:
+1. Create an environment type:
```azurecli
- az devcenter admin environment-type create -d <devcenter name> -n <environment type name>
+ az devcenter admin environment-type create -d <devcenterName> -n <environmentTypeName>
```
-1. Confirm that the Environment type is created:
+1. Confirm that the environment type was created:
```azurecli
- az devcenter admin environment-type list -d <devcenter name> -o table
+ az devcenter admin environment-type list -d <devcenterName> -o table
``` ## Next steps
Use an environment type to help you define the different types of environments y
In this quickstart, you created a dev center and configured it with an identity, a catalog, and an environment type. To learn how to create and configure a project, advance to the next quickstart. > [!div class="nextstepaction"]
-> [Create and configure a project with Azure CLI](how-to-create-configure-projects.md)
+> [Create and configure a project by using the Azure CLI](how-to-create-configure-projects.md)
deployment-environments How To Create Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-create-configure-projects.md
Title: Create and configure a project by using the Azure CLI
-description: Learn how to create a project in Azure Deployment Environments and associate the project with a dev center using Azure CLI.
+description: Learn how to create a project in Azure Deployment Environments and associate the project with a dev center using the Azure CLI.
Previously updated : 04/28/2023 Last updated : 11/29/2023 # Create and configure a project by using the Azure CLI
-This quickstart shows you how to create a project in Azure Deployment Environments. Then, you associate the project with the dev center you created in [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md).
+This quickstart guide shows you how to create a project in Azure Deployment Environments. Then, you associate the project with the dev center you created in [Create and configure a dev center by using the Azure CLI](how-to-create-configure-dev-center.md).
A platform engineering team typically creates projects and provides project access to development teams. Development teams then create [environments](concept-environments-key-concepts.md#environments) by using [environment definitions](concept-environments-key-concepts.md#environment-definitions), connect to individual resources, and deploy applications.
To create a project in your dev center:
az login ```
-1. Install the Azure Dev Center extension for the CLI.
+1. Install the Azure CLI *devcenter* extension.
```azurecli az extension add --name devcenter --upgrade
To create a project in your dev center:
1. Configure the default subscription as the subscription where your dev center resides: ```azurecli
- az account set --subscription <name>
+ az account set --subscription <subscriptionName>
``` 1. Configure the default resource group as the resource group where your dev center resides: ```azurecli
- az configure --defaults group=<name>
+ az configure --defaults group=<resourceGroupName>
``` 1. Configure the default location as the location where your dev center resides. Location of project must match the location of dev center:
To create a project in your dev center:
1. Retrieve dev center resource ID: ```azurecli
- DEVCID=$(az devcenter admin devcenter show -n <devcenter name> --query id -o tsv)
+ DEVCID=$(az devcenter admin devcenter show -n <devcenterName> --query id -o tsv)
echo $DEVCID ``` 1. Create project in dev center: ```azurecli
- az devcenter admin project create -n <project name> \
+ az devcenter admin project create -n <projectName> \
--description "My first project." \ --dev-center-id $DEVCID ```
To create a project in your dev center:
1. Confirm that the project was successfully created: ```azurecli
- az devcenter admin project show -n <project name>
+ az devcenter admin project show -n <projectName>
```
-### Assign a managed identity the owner role to the subscription
+### Assign the Owner role to a managed identity
+ Before you can create environment types, you must give the managed identity that represents your dev center access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types). In this quickstart, you assign the Owner role to the system-assigned managed identity that you configured previously: [Attach a system-assigned managed identity](quickstart-create-and-configure-devcenter.md#attach-a-system-assigned-managed-identity).
In this quickstart, you assign the Owner role to the system-assigned managed ide
1. Retrieve Subscription ID: ```azurecli
- SUBID=$(az account show -n <name> --query id -o tsv)
+ SUBID=$(az account show --name <subscriptionName> --query id -o tsv)
echo $SUBID ```
-1. Retrieve Object ID of Dev Center's Identity using name of dev center resource:
+1. Retrieve the Object ID of the dev center's identity using the name of the dev center resource:
```azurecli
- OID=$(az ad sp list --display-name <devcenter name> --query [].id -o tsv)
- echo $SUBID
+ OID=$(az ad sp list --display-name <devcenterName> --query [].id -o tsv)
+ echo $OID
```
-1. Assign dev center the Role of Owner on the Subscription:
+1. Assign the role of Owner to the dev center on the subscription:
```azurecli az role assignment create --assignee $OID \
In this quickstart, you assign the Owner role to the system-assigned managed ide
To configure a project, add a [project environment type](how-to-configure-project-environment-types.md):
-1. Retrieve Role ID for the Owner of Subscription
+1. Retrieve the Role ID for the Owner of the subscription:
```azurecli # Remove group default scope for next command. Leave blank for group.
To configure a project, add a [project environment type](how-to-configure-projec
echo $ROID # Set default resource group again
- az configure --defaults group=<group name>
+ az configure --defaults group=<resourceGroupName>
```
-1. Show allowed environment type for project:
+1. Show allowed environment type for the project:
```azurecli
- az devcenter admin project-allowed-environment-type list --project <project name> --query [].name
+ az devcenter admin project-allowed-environment-type list --project <projectName> --query [].name
``` 1. Choose an environment type and create it for the project: ```azurecli
- az devcenter admin project-environment-type create -n <available env type> \
- --project <project name> \
+ az devcenter admin project-environment-type create -n <availableEnvironmentType> \
+ --project <projectName> \
--identity-type "SystemAssigned" \ --roles "{\"${ROID}\":{}}" \ --deployment-target-id "/subscriptions/${SUBID}" \
To configure a project, add a [project environment type](how-to-configure-projec
``` > [!NOTE]
-> At least one identity (system-assigned or user-assigned) must be enabled for deployment identity. The identity is used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [assigned the Owner role](how-to-configure-managed-identity.md) for access to the deployment subscription for each environment type.
+> At least one identity (system-assigned or user-assigned) must be enabled for deployment identity. The identity is used to perform the environment deployment on behalf of the developer. Additionally, the identity attached to the dev center should be [assigned the Owner role](how-to-configure-managed-identity.md) for access to the deployment subscription for each environment type.
## Assign environment access
In this quickstart, you give access to your own ID. Optionally, you can replace
--scope "/subscriptions/$SUBID" ```
-1. Optionally, you can assign Dev Environment User:
+1. Optionally, you can assign the Dev Environment User role:
```azurecli az role assignment create --assignee $MYOID \
In this quickstart, you give access to your own ID. Optionally, you can replace
## Next steps
-In this quickstart, you created a project and granted project access to your development team. To learn about how your development team members can create environments, advance to the next quickstart.
+In this quickstart, you created a project and granted project access to your development team. To learn how your development team members can create environments, advance to the next quickstart.
> [!div class="nextstepaction"]
-> [Create and access an environment with Azure CLI](how-to-create-access-environments.md)
+> [Create and access an environment by using the Azure CLI](how-to-create-access-environments.md)
energy-data-services Concepts Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-authentication.md
In the Azure Data Manager for Energy instance,
2. The app-id is used for API access. The same app-id is used to provision ADME instance. 3. The app-id doesn't have access to infrastructure resources. 4. The app-id also gets added as OWNER to all OSDU groups by default.
-5. For service-to-service (S2S) communication, ADME uses MSI (msft service identity).
+5. For service-to-service (S2S) communication, ADME uses a managed service identity (MSI).
In the OSDU instance, 1. Terraform scripts create two Service Principals:
-2. The first Service Principal is used for API access. It can also manage infrastructure resources.
-3. The second Service Principal is used for service-to-service (S2S) communications.
+ 1. The first Service Principal is used for API access. It can also manage infrastructure resources.
+ 2. The second Service Principal is used for service-to-service (S2S) communications.
## Refresh Auth Token You can refresh the authorization token using the steps outlined in [Generate a refresh token](how-to-generate-refresh-token.md).
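The linked article contains the authoritative steps. As a rough illustration only, the following Python sketch shows the standard Microsoft identity platform refresh-token grant that those steps rely on; the tenant ID, client ID, scope format, and refresh token below are placeholders and assumptions, not values taken from this article.

```python
# Hypothetical sketch: exchange a refresh token for a new access token via the
# Microsoft identity platform token endpoint. All values below are placeholders.
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"          # the app-id used with your instance
REFRESH_TOKEN = "<refresh-token>"  # obtained earlier, for example via the authorization code flow
SCOPE = "<client-id>/.default openid offline_access"  # assumption: adjust to the scope you used originally

response = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "refresh_token",
        "client_id": CLIENT_ID,
        "refresh_token": REFRESH_TOKEN,
        "scope": SCOPE,
    },
)
response.raise_for_status()
tokens = response.json()
print(tokens["access_token"][:20] + "...")  # new access token; a new refresh token may also be returned
```

A successful response typically also contains a new refresh token, which you should store in place of the old one.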
energy-data-services Concepts Entitlements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md
Access management is a critical function for any service or resource. The entitl
## Groups
-The entitlements service of Azure Data Manager for Energy allows you to create groups and manage memberships of the groups. An entitlement group defines permissions on services/data sources for a given data partition in your Azure Data Manager for Energy instance. Users added to a given group obtain the associated permissions. Please note that different groups and associated user entitlements need to be set for a new data partition even in the same Azure Data Manager for Energy instance.
+The entitlements service of Azure Data Manager for Energy allows you to create groups and manage memberships of the groups. An entitlement group defines permissions on services/data sources for a given data partition in your Azure Data Manager for Energy instance. Users added to a given group obtain the associated permissions.
+
+Note that different groups and associated user entitlements need to be set for every **new data partition**, even in the same Azure Data Manager for Energy instance.
The entitlements service enables three use cases for authorization:
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
A `client-secret` is a string value your app can use in place of a certificate t
:::image type="content" source="media/how-to-manage-users/endpoint-url.png" alt-text="Screenshot of finding the URL from Azure Data Manager for Energy instance."::: #### Find the `data-partition-id`
-1. You have two ways to get the list of data partitions in your Azure Data Manager for Energy instance. '
+1. You have two ways to get the list of data partitions in your Azure Data Manager for Energy instance.
2. One option is to navigate the *Data Partitions* menu item under the Advanced section of your Azure Data Manager for Energy UI. :::image type="content" source="media/how-to-manage-users/data-partition-id.png" alt-text="Screenshot of finding the data-partition-id from the Azure Data Manager for Energy instance.":::
Run the below curl command in Azure Cloud Bash to get all the groups that are av
## Add users to an OSDU group in a data partition 1. Run the below curl command in Azure Cloud Bash to add the user(s) to the "Users" group using the Entitlement service.
-2. The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email.
+2. The value to be sent for the param `email` is the `Object_ID` (OID) of the user and not the user's email.
```bash curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/<group-name>@<data-partition-id>.dataservices.energy/members' \
event-grid Communication Services Voice Video Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-voice-video-events.md
This section contains an example of what that data would look like for each even
}, "serverCallId": "tob2JIV0wzOHdab3dWcGVWZmsrL2QxYVZnQ2U1bVVLQTh1T056YmpvdXdnQjNzZTlnTEhjNFlYem5BVU9nRGY5dUFQ", "callerDisplayName": "John Doe",
+ "customContext": {
+ "voipHeaders": {
+ "voipHeaderName": "value"
+ }
+ },
"incomingCallContext": "eyJhbGciOiJub25lIiwidHliSldUIn0.eyJjYyI6Ikg0c0lBQi9iT0JiOUs0SVhtQS9UMGhJbFVaUUlHQVBIc1J1M1RlbzgyNW4xcmtHJNa2hCNVVTQkNUbjFKTVo1NCt3ZDk1WFY0ZnNENUg0VDV2dk5VQ001NWxpRkpJb0pDUWlXS0F3OTJRSEVwUWo4aFFleDl4ZmxjRi9lMTlaODNEUmN6QUpvMVRWVXoxK1dWYm1lNW5zNmF5cFRyVGJ1KzMxU3FMY3E1SFhHWHZpc3FWd2kwcUJWSEhta0xjVFJEQ0hlSjNhdzA5MHE2T0pOaFNqS0pFdXpCcVdidzRoSmJGMGtxUkNaOFA4T3VUMTF0MzVHN0kvS0w3aVQyc09aS2F0NHQ2cFV5d0UwSUlEYm4wQStjcGtiVjlUK0E4SUhLZ2JKUjc1Vm8vZ0hFZGtRT3RCYXl1akc4cUt2U1dITFFCR3JFYjJNY3RuRVF0TEZQV1JEUzJHMDk3TGU5VnhhTktob2JIV0wzOHdab3dWcGVWZmsrL2QxYVZnQ2U1bVVLQTh1T056YmpvdXdnQjNzZTlnTEhjNFlYem5BVU9nRGY5dUFQMndsMXA0WU5nK1cySVRxSEtZUzJDV25IcEUySkhVZzd2UnVHOTBsZ081cU81MngvekR0OElYWHBFSi9peUxtNkdibmR1eEdZREozRXNWWXh4ZzZPd1hqc0pCUjZvR1U3NDIrYTR4M1RpQXFaV245UVIrMHNaVDg3YXpRQzbDNUR3BuZFhST1FTMVRTRzVVTkRGeU5UVjNORTFHU2kxck1UTk9VMUF0TWtWNVNreFRUVVI0YlMxRk1VdEVabnBRTjFsQ1EwWkVlVTQxZURCc1IyaHljVTVYTFROeWVTMVJNVjgyVFhrdGRFNUJZV3hrZW5SSVUwMTFVVE5GWkRKUkluMTlmUS5hMTZ0eXdzTDhuVHNPY1RWa2JnV3FPbTRncktHZmVMaC1KNjZUZXoza0JWQVJmYWYwOTRDWDFJSE5tUXRJeDN1TWk2aXZ3QXFFQWV1UlNGTjhlS3gzWV8yZXppZUN5WDlaSHp6Q1ZKemdZUVprc0RjYnprMGJoR09laWkydkpEMnlBMFdyUW1SeGFxOGZUM25EOUQ1Z1ZSUVczMGRheGQ5V001X1ZuNFNENmxtLVR5TUSVEifQ.", "correlationId": "d732db64-4803-462d-be9c-518943ea2b7a" },
event-grid Subscribe To Graph Api Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md
Title: Receive Microsoft Graph change notifications through Azure Event Grid (preview) description: This article explains how to subscribe to events published by Microsoft Graph API.++ Previously updated : 09/01/2022 Last updated : 12/08/2023
-# Receive Microsoft Graph change notifications through Azure Event Grid (preview)
+# Receive Microsoft Graph API change events through Azure Event Grid (preview)
-This article describes steps to subscribe to events published by Microsoft Graph API. The following table lists the resources for which events are available through Graph API. For every resource, events for create, update and delete state changes are supported.
+This article describes steps to subscribe to events published by Microsoft Graph API. The following table lists the event sources for which events are available through Graph API. For most resources, events announcing their creation, update, and deletion are supported. For detailed information about the resources for which events are raised for each event source, see [supported resources by Microsoft Graph API change notifications](/graph/webhooks#supported-resources).
> [!IMPORTANT]
-> Microsoft Graph API's ability to send events to Azure Event Grid is currently in **private preview**. If you have questions or need support, email us at [ask-graph-and-grid@microsoft.com](mailto:ask-graph-and-grid@microsoft.com?subject=Support%20Request).
-
-|Microsoft event source |Resource(s) | Available event types |
-|: | : | :-|
-|Microsoft Entra ID| [User](/graph/api/resources/user), [Group](/graph/api/resources/group) | [Microsoft Entra event types](microsoft-entra-events.md) |
-|Microsoft Outlook|[Event](/graph/api/resources/event) (calendar meeting), [Message](/graph/api/resources/message) (email), [Contact](/graph/api/resources/contact) | [Microsoft Outlook event types](outlook-events.md) |
-|Microsoft Teams|[ChatMessage](/graph/api/resources/callrecords-callrecord), [CallRecord](/graph/api/resources/callrecords-callrecord) (meeting) | [Microsoft Teams event types](teams-events.md) |
-|Microsoft SharePoint and OneDrive| [DriveItem](/graph/api/resources/driveitem)| |
-|Microsoft SharePoint| [List](/graph/api/resources/list)|
-|Security alerts| [Alert](/graph/api/resources/alert)|
-|Microsoft Conversations| [Conversation](/graph/api/resources/conversation)| |
+> Microsoft Graph API's ability to send events to Azure Event Grid is currently in **public preview**. If you have questions or need support, email us at [ask-graph-and-grid@microsoft.com](mailto:ask-graph-and-grid@microsoft.com?subject=Support%20Request).
+
+|Microsoft event source |Available event types |
+|: | :-|
+|Microsoft Entra ID| [Microsoft Entra event types](azure-active-directory-events.md) |
+|Microsoft Outlook| [Microsoft Outlook event types](outlook-events.md) |
+|Microsoft 365 group conversations ||
+|Microsoft Teams| [Microsoft Teams event types](teams-events.md) |
+|Microsoft SharePoint and OneDrive| |
+|Microsoft SharePoint| |
+|Security alerts| |
+|Microsoft Conversations| |
+|Microsoft Universal Print||
> [!IMPORTANT] >If you aren't familiar with the **Partner Events** feature, see [Partner Events overview](partner-events-overview.md).
-## Why should I use Microsoft Graph API as a destination?
+## Why should I subscribe to events from Microsoft Graph API sources via Event Grid?
+ Besides the ability to subscribe to Microsoft Graph API events via Event Grid, you have [other options](/graph/webhooks#receiving-change-notifications) through which you can receive similar notifications (not events). Consider using Microsoft Graph API to deliver events to Event Grid if you have at least one of the following requirements: - You're developing an event-driven solution that requires events from Microsoft Entra ID, Outlook, Teams, etc. to react to resource changes. You require the robust eventing model and publish-subscribe capabilities that Event Grid provides. For an overview of Event Grid, see [Event Grid concepts](concepts.md). - You want to use Event Grid to route events to multiple destinations using a single Graph API subscription and you want to avoid managing multiple Graph API subscriptions.-- You require to route events to different downstream applications, webhooks or Azure services depending on some of the properties in the event. For example, you may want to route event types such as `Microsoft.Graph.UserCreated` and `Microsoft.Graph.UserDeleted` to a specialized application that processes users' onboarding and off-boarding. You may also want to send `Microsoft.Graph.UserUpdated` events to another application that syncs contacts information, for example. You can achieve that using a single Graph API subscription when using Event Grid as a notification destination. For more information, see [event filtering](event-filtering.md) and [event handlers](event-handlers.md).-- Interoperability is important to you. You want to forward and handle events in a standard way using CNCF's [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) specification standard, to which Event Grid fully complies.-- You like the extensibility support that CloudEvents provides. For example, if you want to trace events across compliant systems, you may use CloudEvents extension [Distributed Tracing](https://github.com/cloudevents/spec/blob/v1.0.1/extensions/distributed-tracing.md). Learn more about more [CloudEvents extensions](https://github.com/cloudevents/spec/blob/v1.0.1/documented-extensions.md).-- You want to use proven event-driven approaches adopted by the industry.
+- You need to route events to different downstream applications, webhooks, or Azure services depending on some of the properties in the event. For example, you might want to route event types such as `Microsoft.Graph.UserCreated` and `Microsoft.Graph.UserDeleted` to a specialized application that processes users' onboarding and off-boarding. You might also want to send `Microsoft.Graph.UserUpdated` events to another application that syncs contact information. You can achieve that using a single Graph API subscription when using Event Grid as a notification destination. For more information, see [event filtering](event-filtering.md) and [event handlers](event-handlers.md).
+- Interoperability is important to you. You want to forward and handle events in a standard way using CNCF's [CloudEvents](https://github.com/cloudevents/spec/blob/v1.0.2/cloudevents/spec.md) specification standard.
+- You like the extensibility support that CloudEvents provides. For example, if you want to trace events across compliant systems, use CloudEvents extension [Distributed Tracing](https://github.com/cloudevents/spec/blob/v1.0.1/extensions/distributed-tracing.md). Learn more about more [CloudEvents extensions](https://github.com/cloudevents/spec/blob/v1.0.1/documented-extensions.md).
+- You want to use proven event-driven approaches adopted by the industry.
-## High-level steps
+## Enable Graph API events to flow to your partner topic
-1. [Register the Event Grid resource provider](#register-the-event-grid-resource-provider) with your Azure subscription.
-1. [Authorize partner](#authorize-partner-to-create-a-partner-topic) to create a partner topic in your resource group.
-3. [Enable events to flow to a partner topic](#enable-graph-api-events-to-flow-to-your-partner-topic)
-4. [Activate partner topic](#activate-a-partner-topic) so that your events start flowing to your partner topic.
-5. [Subscribe to events](#subscribe-to-events).
+You request Microsoft Graph API to forward events to an Event Grid partner topic by creating a Graph API subscription using the Microsoft Graph API SDKs and **following the steps in the links to samples provided** in this section. See [Supported languages for Microsoft Graph API SDK](/graph/sdks/sdks-overview#supported-languages) for available SDK support.
+### General prerequisites
+You should meet these general prerequisites before implementing your application to create and renew Microsoft Graph API subscriptions:
+- Become familiar with the [high-level steps to subscribe to partner events](subscribe-to-partner-events.md#high-level-steps). As described in that article, prior to creating a Graph API subscription you should follow the instructions in:
+ - [Register the Event Grid resource provider](subscribe-to-partner-events.md#register-the-event-grid-resource-provider) with your Azure subscription.
-## Enable Graph API events to flow to your partner topic
+ - [Authorize partner](subscribe-to-partner-events.md#authorize-partner-to-create-a-partner-topic) to create a partner topic in your resource group.
-You request Microsoft Graph API to send events by creating a Graph API subscription. When you create a Graph API subscription, the http request should look like the following sample:
+- Have a working knowledge of [Microsoft Graph API notifications](/graph/api/resources/webhooks). As part of your learning, you could use the [Graph API Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer) to create Graph API subscriptions.
+- Understand [Partner Events concepts](partner-events-overview.md).
+- Identify the Microsoft Graph API resource from which you want to receive system state change events. See [Microsoft Graph API change notifications](/graph/webhooks#supported-resources) for more information. For example, for tracking changes to users in Microsoft Entra ID you should use the [user](/graph/api/resources/user) resource. Use [group](/graph/api/resources/group) for tracking changes to user groups.
+- Have a tenant administrator account on a Microsoft 365 tenant. You can get a development tenant for free by joining the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program).
-```json
-POST https://graph.microsoft.com/v1.0/subscriptions
+You'll find other prerequisites specific to your programming language of choice and the development environment you use in the Microsoft Graph API sample links in an upcoming section.
+
+> [!IMPORTANT]
+> While detailed instructions to implement your application are found in the [samples with detailed instructions](#samples-with-detailed-instructions) section, you should read all sections in this article as they contain additional, important information related to forwarding Microsoft Graph API events using Event Grid.
+
+### How to create a Microsoft Graph API subscription
-x-ms-enable-features: EventGrid
+When you create a Graph API subscription, a partner topic is created for you. You pass the following information in the *notificationUrl* parameter to specify which partner topic to create and associate with the new Graph API subscription:
+
+- partner topic name
+- resource group name in which the partner topic is created
+- region (location)
+- Azure subscription
+
+These code samples show you how to create a Graph API subscription. They show examples for creating a subscription to receive events from all users in a Microsoft Entra ID tenant when they're created, updated, or deleted.
+
+# [HTTP](#tab/http)
+<!-- {
+ "blockType": "request",
+ "name": "create_subscription_from_subscriptions"
+}-->
+
+```http
+POST https://graph.microsoft.com/v1.0/subscriptions
+Content-type: application/json
-Body:
{ "changeType": "Updated,Deleted,Created",
- "notificationUrl": "EventGrid:?azuresubscriptionid=8A8A8A8A-4B4B-4C4C-4D4D-12E12E12E12E&resourcegroup=yourResourceGroup&partnertopic=youPartnerTopic&location=theNameOfAzureRegionFortheTopic",
+ "notificationUrl": "EventGrid:?azuresubscriptionid=8A8A8A8A-4B4B-4C4C-4D4D-12E12E12E12E&resourcegroup=yourResourceGroup&partnertopic=yourPartnerTopic&location=theNameOfAzureRegionFortheTopic",
+ "lifecycleNotificationUrl": "EventGrid:?azuresubscriptionid=8A8A8A8A-4B4B-4C4C-4D4D-12E12E12E12E&resourcegroup=yourResourceGroup&partnertopic=yourPartnerTopic&location=theNameOfAzureRegionFortheTopic",
"resource": "users",
- "expirationDateTime": "2022-04-30T00:00:00Z",
- "clientState": "mysecret"
+ "expirationDateTime": "2024-03-31T00:00:00Z",
+ "clientState": "secretClientValue"
} ```
-Here are some of the key headers and payload properties:
+# [C#](#tab/csharp)
+
+# [CLI](#tab/cli)
+
+# [Go](#tab/go)
+
+# [Java](#tab/java)
+
+# [JavaScript](#tab/javascript)
+
+# [PHP](#tab/php)
+
+# [PowerShell](#tab/powershell)
+
+# [Python](#tab/python)
++ -- `x-ms-enable-features`: Header used to indicate your desire to participate in the preview capability to send events to Azure Event Grid. Its value must be `EventGrid`. This header must be included with the request when creating a Microsoft Graph API subscription. - `changeType`: the kind of resource changes for which you want to receive events. Valid values: `Updated`, `Deleted`, and `Created`. You can specify one or more of these values separated by commas.-- `notificationUrl`: a URI that conforms to the following pattern: `EventGrid:?azuresubscriptionid=<you-azure-subscription-id>&resourcegroup=<your-resource-group-name>&partnertopic=<the-name-for-your-partner-topic>&location=<the-Azure-region-name-where-you-want-the-topic-created>`. The location (also known as Azure region) `name` can be obtained by executing the **az account list-locations** command. Don't use a location displayname. For example, don't use "West Central US". Use `westcentralus` instead.
+- `notificationUrl`: a URI used to define the partner topic to which events are sent. It must conform to the following pattern: `EventGrid:?azuresubscriptionid=<your-azure-subscription-id>&resourcegroup=<your-resource-group-name>&partnertopic=<the-name-for-your-partner-topic>&location=<the-Azure-region-name-where-you-want-the-topic-created>`. The location (also known as Azure region) `name` can be obtained by executing the **az account list-locations** command. Don't use a location display name. For example, don't use "West Central US". Use `westcentralus` instead.
```azurecli-interactive az account list-locations ```-- resource: the resource that generates events to announce state changes.-- expirationDateTime: the expiration time at which the subscription expires and hence the flow of events stop. It must conform to the format specified in [RFC 3339](https://tools.ietf.org/html/rfc3339). You must specify an expiration time that is within the [maximum subscription length allowable for the resource type](/graph/api/resources/subscription#maximum-length-of-subscription-per-resource-type) used. -- client state. A value that is set by you when creating a Graph API subscription. For more information, see [Graph API subscription properties](/graph/api/resources/subscription#properties).
+- `lifecycleNotificationUrl`: a URI used to define the partner topic to which `microsoft.graph.subscriptionReauthorizationRequired` events are sent. This event signals your application that the Graph API subscription is expiring soon. The URI follows the same pattern as *notificationUrl* described above if you use Event Grid as the destination for lifecycle events. In that case, the partner topic should be the same as the one specified in *notificationUrl*.
+- resource: the resource that generates events that announce state changes.
+- expirationDateTime: the expiration time at which the subscription expires and the flow of events stops. It must conform to the format specified in [RFC 3339](https://tools.ietf.org/html/rfc3339); see the snippet after this list for one way to produce such a value. You must specify an expiration time that is within the [maximum subscription length allowable per resource type](/graph/api/resources/subscription#subscription-lifetime).
+- clientState: an optional property used for verification of calls to your event handler application during event delivery. For more information, see [Graph API subscription properties](/graph/api/resources/subscription#properties).
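The following Python snippet (not part of the original article) shows one way to produce an RFC 3339 `expirationDateTime` value a couple of days in the future; adjust the offset to stay within the allowed maximum for your resource type.

```python
# Minimal sketch: build an RFC 3339 (ISO 8601) UTC timestamp for expirationDateTime.
from datetime import datetime, timedelta, timezone

expiration = datetime.now(timezone.utc) + timedelta(days=2)  # keep within the allowed maximum
print(expiration.strftime("%Y-%m-%dT%H:%M:%SZ"))  # for example, 2024-03-31T00:00:00Z
```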
-> [!NOTE]
-> Microsoft Graph API's capability to send events to Event Grid is only available in a specific Graph API environment. You will need to update your code so that it uses the following Graph API endpoint `https://graph.microsoft.com/beta`. For example, this is the way you can set the endpoint on your graph client (`com.microsoft.graph.requests.GraphServiceClient`) using the Graph API Java SDK:
+> [!IMPORTANT]
+>
+> - The partner topic name must be unique within the same Azure region. Each tenant-application ID combination can create up to 10 unique partner topics.
+>
+> - Be mindful of certain [Graph API resources' service limits](/graph/webhooks#azure-ad-resource-limitations) when developing your solution.
>
->```java
->graphClient.setServiceRoot("https://graph.microsoft.com/beta");
->```
+> - Existing Graph API subscriptions without a `lifecycleNotificationUrl` property don't receive lifecycle events. To add the lifecycleNotificationUrl property, you should delete the existing subscription and create a new subscription specifying the property during subscription creation.
+> [!NOTE]
+> If your application uses the header `x-ms-enable-features` with your request to create a Graph API subscription during **private preview**, you should remove it as it is no longer necessary.
-**You can create a Microsoft Graph API subscription by following the instructions in the [Microsoft Graph API webhook samples](https://github.com/microsoftgraph?q=webhooks&type=public&language=&sort=)** that include code samples for [NodeJS](https://github.com/microsoftgraph/nodejs-webhooks-sample), [Java (Spring Boot)](https://github.com/microsoftgraph/java-spring-webhooks-sample), and [.NET Core](https://github.com/microsoftgraph/aspnetcore-webhooks-sample). There are no samples available for Python, Go and other languages yet, but the [Graph SDK](/graph/sdks/sdks-overview) supports creating Graph API subscriptions using those programming languages.
+After you create a Graph API subscription, a partner topic is created in your Azure subscription.
-> [!NOTE]
-> - Partner topic names must be unique within the same Azure region. Each tenant-application ID combination can create up to 10 unique partner topics.
-> - Be mindful of certain [Graph API resources' service limits](/graph/webhooks#azure-ad-resource-limitations) when developing your solution.
+### Renew a Microsoft Graph API subscription
+
+A Graph API subscription must be renewed by your application before it expires to avoid stopping the flow of events. To help you automate the renewal process, Microsoft Graph API supports **lifecycle notification events** to which your application can subscribe. Currently, all types of Microsoft Graph API resources support the `microsoft.graph.subscriptionReauthorizationRequired` event, which is sent when any of the following conditions occur:
+
+- Access token is about to expire.
+- Graph API subscription is about to expire.
+- A tenant administrator has revoked your app's permissions to read a resource.
+
+If you didn't renew your Graph API subscription before it expired, you need to create a new Graph API subscription. You can refer to the same partner topic you used in your expired subscription as long as it expired less than 30 days ago. If the Graph API subscription expired more than 30 days ago, you can't reuse your existing partner topic. In this case, you need to either specify another partner topic name or delete the existing partner topic so that you can create a new partner topic with the same name during Graph API subscription creation.
+
+#### How to renew a Microsoft Graph API subscription
+
+Upon receiving a `microsoft.graph.subscriptionReauthorizationRequired` event, your application should renew the Graph API subscription by doing these actions:
+
+1. If you provided a client secret in the *clientState* property when you created the Graph API subscription, that client secret is included with the event. Validate that the event's clientState matches the value used when you created the Graph API subscription.
+1. Ensure that the app has a valid access token to take the next step. More information is provided in the coming [samples with detailed instructions](#samples-with-detailed-instructions) section.
+1. Call either of the following two APIs. If the API call succeeds, the change notification flow resumes.
+
+ - Call the `/reauthorize` action to reauthorize the subscription without extending its expiration date.
+
+ <!-- {
+ "blockType": "request",
+ "name": "change-notifications-lifecycle-notifications-reauthorize"
+ }-->
+ ```http
+ POST https://graph.microsoft.com/beta/subscriptions/{id}/reauthorize
+ ```
+
+ - Perform a regular "renew" action to reauthorize *and* renew the subscription at the same time.
+
+ <!-- {
+ "blockType": "request",
+ "name": "change-notifications-lifecycle-notifications-renew"
+ }-->
+ ```http
+ PATCH https://graph.microsoft.com/beta/subscriptions/{id}
+ Content-Type: application/json
+
+ {
+ "expirationDateTime": "2024-04-30T11:00:00.0000000Z"
+ }
+ ```
-#### What happens when you create a Microsoft Graph API subscription?
+    Renewing might fail if the app is no longer authorized to access the resource. It might then be necessary for the app to obtain a new access token to successfully reauthorize a subscription.
-When you create a Graph API subscription with a `notificationUrl` bound to Event Grid, a partner topic is created in your Azure subscription. For that partner topic, you [configure event subscriptions](event-filtering.md) to send your events to any of the supported [event handlers](event-handlers.md) that best meets your requirements to process the events.
+Authorization challenges don't replace the need to renew a subscription before it expires. The lifecycles of access tokens and subscription expiration are not the same. Your access token may expire before your subscription. It is important to be prepared to regularly reauthorize your endpoint to refresh your access token. Reauthorizing your endpoint will not renew your subscription. However, renewing your subscription will also reauthorize your endpoint.
-#### Test APIs using Graph Explorer
-For quick tests and to get to know the API, you could use the [Graph Explorer](/graph/graph-explorer/graph-explorer-features). For anything else beyond casuals tests or learning, you should use the Microsoft Graph SDKs.
+When you renew or reauthorize your Graph API subscription, the same partner topic specified when the subscription was created is used.
+When you specify a new *expirationDateTime*, it must be at least three hours from the current time. Otherwise, your application might receive `microsoft.graph.subscriptionReauthorizationRequired` events soon after renewal.
+For examples about how to reauthorize your Graph API subscription using any of the supported languages, see [subscription reauthorize request](/graph/api/subscription-reauthorize#request).
+
+For examples about how to renew and reauthorize your Graph API subscription using any of the supported languages, see [update subscription request](/graph/api/subscription-update#request).
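For orientation only, here's a hedged Python sketch of how an event handler might react to a `microsoft.graph.subscriptionReauthorizationRequired` event: verify the client state, acquire an app-only Graph token, and call the reauthorize action shown earlier. The `EXPECTED_CLIENT_STATE` value and the event field names (`clientState`, `subscriptionId`, taken from the change-notification schema) are assumptions; adapt them to the payload your handler actually receives.

```python
# Hypothetical sketch of handling a subscriptionReauthorizationRequired event.
# Assumes a confidential client app (client credentials) that owns the Graph subscription.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
EXPECTED_CLIENT_STATE = "secretClientValue"  # value set when the subscription was created


def handle_lifecycle_event(event: dict) -> None:
    data = event.get("data", {})

    # 1. Verify the clientState matches the one supplied at subscription creation.
    if data.get("clientState") != EXPECTED_CLIENT_STATE:
        raise ValueError("clientState mismatch; ignoring event")

    # 2. Acquire a fresh app-only token for Microsoft Graph.
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
    if "access_token" not in token:
        raise RuntimeError(f"Token acquisition failed: {token.get('error_description')}")

    # 3. Reauthorize the Graph API subscription (doesn't extend its expiration).
    subscription_id = data["subscriptionId"]
    resp = requests.post(
        f"https://graph.microsoft.com/beta/subscriptions/{subscription_id}/reauthorize",
        headers={"Authorization": f"Bearer {token['access_token']}"},
    )
    resp.raise_for_status()
```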
+
+### Samples with detailed instructions
+
+Microsoft Graph API documentation provides code samples with instructions to:
+
+- Set up your development environment with specific instructions according to the language you use. Instructions also include how to get a Microsoft 365 tenant for development purposes.
+- Create a Graph API subscription. To renew a subscription, you can call the Graph API using the code snippets in [How to renew a Graph API subscription](#how-to-renew-a-microsoft-graph-api-subscription) above.
+- Get authentication tokens and use them when calling Microsoft Graph API.
+
+>[!NOTE]
+> It is possible to create your Graph API subscription using the [Microsoft Graph API Explorer](https://developer.microsoft.com/en-us/graph/graph-explorer). You should still use the samples for other important aspects of your solution such as authentication and receiving events.
+
+Web application samples are available for the following languages:
+
+- [C# sample](https://github.com/microsoftgraph/aspnetcore-webhooks-sample)
+- [Java sample](https://github.com/microsoftgraph/java-spring-webhooks-sample)
+ - [GraphAPIController](https://github.com/jfggdl/event-grid-ms-graph-api-java-snippet) contains sample code to create, delete, and renew a Graph API subscription. It must be used along with the Java sample application above.
+- [NodeJS sample](https://github.com/microsoftgraph/nodejs-webhooks-sample).
+
+> [!IMPORTANT]
+> You need to activate the partner topic that's created as part of your Graph API subscription creation. You also need to create an Event Grid event subscription to your web application to receive events. To that end, use the URL configured in your web application to receive events as the webhook endpoint in your event subscription. See [Next steps](#next-steps) for more information.
+
+> [!IMPORTANT]
+> Do you need sample code for another language or have questions? Please email us at [ask-graph-and-grid@microsoft.com](mailto:ask-graph-and-grid@microsoft.com?subject=Need%20support%20for%20sample%20in%20other%20language).
## Next steps
-See the following articles:
+Follow the instructions in the following two steps to complete set-up to receive Microsoft Graph API events using Event Grid:
+
+- [Activate the partner topic](subscribe-to-partner-events.md#activate-a-partner-topic) created as part of the Microsoft Graph API creation.
+- [Subscribe to events](subscribe-to-partner-events.md#subscribe-to-events) by creating an event subscription to your partner topic.
+Other useful links:
+
- [Azure Event Grid - Partner Events overview](partner-events-overview.md)-- [Microsoft Graph API webhook samples](https://github.com/microsoftgraph?q=webhooks&type=public&language=&sort=). Use these samples to send events to Event Grid. You just need to provide a suitable value ``notificationUrl`` according to the request example above.-- [Varied set of resources on Microsoft Graph API](https://developer.microsoft.com/en-us/graph/rest-api).
+- [Information on Microsoft Graph API](https://developer.microsoft.com/graph/rest-api).
- [Microsoft Graph API webhooks](/graph/api/resources/webhooks) - [Best practices for working with Microsoft Graph API](/graph/best-practices-concept) - [Microsoft Graph API SDKs](/graph/sdks/sdks-overview)-- [Microsoft Graph API tutorials](/graph/tutorials), which shows how to use Graph API in different programming languages.This doesn't necessarily include examples for sending events to Event Grid.
+- [Microsoft Graph API tutorials](/graph/tutorials), which show how to use Graph API. These tutorials don't necessarily include examples for sending events to Event Grid.
expressroute About Fastpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/about-fastpath.md
While FastPath supports most configurations, it doesn't support the following fe
* Private Link: FastPath Connectivity to a private endpoint or Private Link service over an ExpressRoute Direct circuit is supported for limited scenarios. For more information, see [enable FastPath and Private Link for 100 Gbps ExpressRoute Direct](expressroute-howto-linkvnet-arm.md#fastpath-and-private-link-for-100-gbps-expressroute-direct). FastPath connectivity to a Private endpoint/Private Link service is not supported for ExpressRoute partner circuits.
+* DNS Private Resolver: Azure ExpressRoute FastPath does not support connectivity to [DNS Private Resolver](../dns/dns-private-resolver-overview.md).
+ ### IP address limits | ExpressRoute SKU | Bandwidth | FastPath IP limit |
hdinsight-aks Spark Job Orchestration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/spark/spark-job-orchestration.md
+
+ Title: Azure Data Factory Managed Airflow with Apache Spark® on HDInsight on AKS
+description: Learn how to perform Apache Spark® job orchestration using Azure Data Factory Managed Airflow
++ Last updated : 11/28/2023++
+# Apache Spark® job orchestration using Azure Data Factory Managed Airflow
++
+This article covers managing a Spark job using the [Apache Spark Livy API](https://livy.incubator.apache.org/docs/latest/rest-api.html) and orchestrating a data pipeline with Azure Data Factory Managed Airflow. The [Azure Data Factory Managed Airflow](/azure/data-factory/concept-managed-airflow) service is a simple and efficient way to create and manage [Apache Airflow](https://airflow.apache.org/) environments, enabling you to run data pipelines at scale easily.
+
+Apache Airflow is an open-source platform that programmatically creates, schedules, and monitors complex data workflows. It allows you to define a set of tasks, called operators, that can be combined into directed acyclic graphs (DAGs) to represent data pipelines.
+
+The following diagram shows the placement of Airflow, Key Vault, and HDInsight on AKS in Azure.
++
+Multiple Azure Service Principals are created based on scope to limit the access each one needs and to manage the client credential life cycle independently.
+
+It's recommended to rotate access keys or secrets periodically (you can use various [design patterns](/azure/key-vault/secrets/tutorial-rotation-dual?tabs=azure-cli) to rotate secrets).
+
+## Setup steps
+
+1. [Setup Spark Cluster](create-spark-cluster.md)
+
+1. Upload your Apache Spark application JAR to the storage account. It can be the primary storage account associated with the Spark cluster or any other storage account. Assign the "Storage Blob Data Owner" role on this storage account to the user-assigned MSI used for the cluster.
+
+1. Azure Key Vault - If you don't have one, you can follow [this tutorial to create a new Azure Key Vault](/azure/key-vault/general/quick-create-portal/).
+
+1. Create a [Microsoft Entra service principal](/cli/azure/ad/sp/) to access Key Vault. Grant it permission to access Azure Key Vault with the "Key Vault Secrets Officer" role, and make a note of 'appId', 'password', and 'tenant' from the response. Airflow uses these values to use Key Vault storage as the backend for storing sensitive information.
+
+ ```
+    az ad sp create-for-rbac -n <sp name> --role "Key Vault Secrets Officer" --scopes <key vault Resource ID>
+ ```
++
+1. Create Managed Airflow enabled with [Azure Key Vault](/azure/data-factory/enable-azure-key-vault-for-managed-airflow) to store and manage your sensitive information in a secure and centralized manner. By doing this, you can use variables and connections, and they're automatically stored in Azure Key Vault. The names of connections and variables need to be prefixed by the variables_prefix defined in AIRFLOW__SECRETS__BACKEND_KWARGS. For example, if variables_prefix has a value of hdinsight-aks-variables, then for a variable key of hello, you would store your variable at hdinsight-aks-variables-hello.
+
+ - Add the following settings for the Airflow configuration overrides in integrated runtime properties:
+
+ - AIRFLOW__SECRETS__BACKEND:
+ `"airflow.providers.microsoft.azure.secrets.key_vault.AzureKeyVaultBackend"`
+
+ - AIRFLOW__SECRETS__BACKEND_KWARGS:
+      `"{"connections_prefix": "airflow-connections", "variables_prefix": "hdinsight-aks-variables", "vault_url": "<your keyvault uri>"}"`
+
+ - Add the following setting for the Environment variables configuration in the Airflow integrated runtime properties:
+
+      - AZURE_CLIENT_ID = `<App Id from Create Azure AD Service Principal>`
+
+      - AZURE_TENANT_ID = `<Tenant from Create Azure AD Service Principal>`
+
+      - AZURE_CLIENT_SECRET = `<Password from Create Azure AD Service Principal>`
+
+    Add the Airflow requirement - [apache-airflow-providers-microsoft-azure](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/index.html)
+
+ :::image type="content" source="./media/spark-job-orchestration/airflow-configuration-environment-variable.png" alt-text="Screenshot shows airflow configuration and environment variables." lightbox="./media/spark-job-orchestration/airflow-configuration-environment-variable.png":::
+
+
+1. Create a [Microsoft Entra service principal](/cli/azure/ad/sp/) to access the HDInsight on AKS cluster: [Grant access to HDInsight on AKS cluster](/azure/hdinsight-aks/hdinsight-on-aks-manage-authorization-profile#how-to-grant-access), and make a note of appId, password, and tenant from the response.
+
+ `az ad sp create-for-rbac -n <sp name>`
+
+1. Create the following secrets in your key vault with the appId, password, and tenant values from the previous service principal, prefixed by the variables_prefix defined in AIRFLOW__SECRETS__BACKEND_KWARGS. The DAG code can access any of these variables without the variables_prefix.
+
+ - hdinsight-aks-variables-api-client-id=`<App ID from previous step> `
+
+ - hdinsight-aks-variables-api-secret=`<Password from previous step> `
+
+ - hdinsight-aks-variables-tenant-id=`<Tenant from previous step> `
+
+   ```python
+   from airflow.models import Variable
+
+   def retrieve_variable_from_akv():
+       # The variables_prefix (hdinsight-aks-variables) is prepended automatically, so
+       # "api-client-id" resolves to the "hdinsight-aks-variables-api-client-id" secret.
+       variable_value = Variable.get("api-client-id")
+       print(variable_value)
+   ```
+
+
+## DAG definition
+
+A DAG (Directed Acyclic Graph) is the core concept of Airflow, collecting Tasks together, organized with dependencies and relationships to say how they should run.
+
+There are three ways to declare a DAG:
+
+ - You can use a context manager, which adds the DAG to anything inside it implicitly
+
+ - You can use a standard constructor, passing the DAG into any operators you use
+
+ - You can use the @dag decorator to turn a function into a DAG generator (from airflow.decorators import dag)
+
+DAGs are nothing without Tasks to run, and those come in the form of either Operators, Sensors, or TaskFlow.
+
+You can read more details about DAGs, Control Flow, SubDAGs, TaskGroups, etc. directly from [Apache Airflow](https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/dags.html).
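As a minimal illustration of the context-manager style (this is not the sample DAG referenced in the next section), the following sketch declares a DAG with a single `PythonOperator` task; the DAG ID and task logic are arbitrary placeholders.

```python
# Minimal illustrative DAG using the context-manager declaration style.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def say_hello():
    print("hello from HDInsight on AKS orchestration")


with DAG(
    dag_id="minimal_example_dag",      # arbitrary DAG ID
    start_date=datetime(2023, 11, 1),
    schedule_interval=None,            # trigger manually from the Airflow UI
    catchup=False,
) as dag:
    hello_task = PythonOperator(task_id="say_hello", python_callable=say_hello)
```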
+
+## DAG execution
+
+Example code is available in the [git repository](https://github.com/sethiaarun/hdinsight-aks/blob/spark-airflow-example/spark/Airflow/airflow-python-example-code.py); download the code locally on your computer and upload the wordcount.py to blob storage. Follow the [steps](/azure/data-factory/how-does-managed-airflow-work#steps-to-import) to import the DAG into the Managed Airflow environment you created during setup.
+
+The airflow-python-example-code.py file is an example of orchestrating a Spark job submission using Apache Spark with HDInsight on AKS. The example is based on the [SparkPi](https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/SparkPi.scala) example provided with Apache Spark.
+
+The DAG has the following steps:
+
+1. get `OAuth Token`
+
+1. Invoke Apache Spark Livy Batch API to submit a new job
+
+The DAG expects the service principal to be set up as described in the setup process for the OAuth client credential, and it expects the following input configuration to be passed for the execution. A rough sketch of these two steps follows.
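This sketch is illustrative only: the token scope and the Livy endpoint path are assumptions (placeholders below), and the authoritative logic is in the referenced sample DAG.

```python
# Rough sketch of the two DAG steps; the scope and Livy path below are assumptions/placeholders.
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<api-client-id>"      # stored in Key Vault earlier in this article
CLIENT_SECRET = "<api-secret>"     # stored in Key Vault earlier in this article
SPARK_CLUSTER_FQDN = "<<domain name>>.eastus2.hdinsightaks.net"
LIVY_BATCH_URL = f"https://{SPARK_CLUSTER_FQDN}/<livy-batch-path>"  # check your cluster's Livy endpoint
TOKEN_SCOPE = "<cluster-api-scope>/.default"                        # check the scope your cluster expects

# Step 1: get an OAuth token using the client-credentials grant.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": TOKEN_SCOPE,
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Step 2: submit the Spark application as a Livy batch job.
batch = {
    "file": "abfs://filesystem@<storageaccount>.dfs.core.windows.net/<app>.jar",
    "className": "org.apache.spark.examples.SparkPi",
    "name": "<job_name>",
}
resp = requests.post(LIVY_BATCH_URL, json=batch, headers={"Authorization": f"Bearer {access_token}"})
resp.raise_for_status()
print(resp.json())  # the response includes the batch id and state
```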
+
+### Execution steps
+
+1. Execute the DAG from the [Airflow UI](https://airflow.apache.org/docs/apache-airflow/stable/ui.html). You can open the Azure Data Factory Managed Airflow UI by selecting the Monitor icon.
+
+ :::image type="content" source="./media/spark-job-orchestration/airflow-user-interface-step-1.png" alt-text="Screenshot shows open the Azure data factory managed airflow UI by clicking on monitor icon." lightbox="./media/spark-job-orchestration/airflow-user-interface-step-1.png":::
+
+1. Select the "SparkWordCountExample" DAG from the "DAGs" page.
+
+ :::image type="content" source="./media/spark-job-orchestration/airflow-user-interface-step-2.png" alt-text="Screenshot shows select the Spark word count example." lightbox="./media/spark-job-orchestration/airflow-user-interface-step-2.png":::
+
+1. Click on the “execute” icon from the top right corner and select “Trigger DAG w/ config”.
+
+ :::image type="content" source="./media/spark-job-orchestration/airflow-user-interface-step-3.png" alt-text="Screenshot shows select execute icon." lightbox="./media/spark-job-orchestration/airflow-user-interface-step-3.png":::
+
+
+1. Pass the required configuration JSON:
+
+    ```json
+    {
+      "spark_cluster_fqdn": "<<domain name>>.eastus2.hdinsightaks.net",
+      "app_jar_path": "abfs://filesystem@<storageaccount>.dfs.core.windows.net",
+      "job_name": "<job_name>"
+    }
+    ```
+
+1. Select the "Trigger" button to start the execution of the DAG.
+
+1. You can visualize the status of DAG tasks from the DAG run.
+
+ :::image type="content" source="./media/spark-job-orchestration/dag-task-status.png" alt-text="Screenshot shows dag task status." lightbox="./media/spark-job-orchestration/dag-task-status.png":::
+
+1. Validate the job from the "Apache Spark History Server".
+
+ :::image type="content" source="./media/spark-job-orchestration/validate-job-execution.png" alt-text="Screenshot shows validate job execution." lightbox="./media/spark-job-orchestration/validate-job-execution.png":::
+
+## Example code
+
+This is an example of orchestrating a data pipeline by using Airflow with HDInsight on AKS.
+
+The example is based on the [SparkPi](https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/SparkPi.scala) example provided with Apache Spark.
+
+### Reference
+
+- Refer to the [sample code](https://github.com/Azure-Samples/hdinsight-aks/blob/main/spark/Airflow/airflow-python-example-code.py).
+- [Apache Spark Website](https://spark.apache.org/)
+- Apache, Apache Airflow, Airflow, Apache Spark, Spark, and associated open source project names are [trademarks](../trademarks.md) of the [Apache Software Foundation](https://www.apache.org/) (ASF).
hdinsight-aks Trino Superset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/trino/trino-superset.md
Summary of the steps covered in this article:
5. Install [Helm](https://helm.sh/docs/intro/install/). - ## Create kubernetes cluster for Apache Superset This step creates the Azure Kubernetes Service (AKS) cluster where you can install Apache Superset. You need to bind the managed identity you've associated to the cluster to allow the Superset to authenticate with Trino cluster with that identity.
This step creates the Azure Kubernetes Service (AKS) cluster where you can insta
|hostname|mytrinocluster.00000000000000000000000000<br>.eastus.hdinsightaks.net|The hostname of your Trino cluster. <br> You can get this information from "Overview" page of your cluster in the Azure portal.| |catalog|/tpch|After the slash, is the default catalog name. <br> You need to change this catalog to the catalog that has the data you want to visualize.|
- trino://<mark>$USER</mark>@<mark>$TRINO_CLUSTER_HOST_NAME</mark>.hdinsightaks.net:443/<mark>$DEFAULT_CATALOG</mark>
+ `trino://$USER@$TRINO_CLUSTER_HOST_NAME.hdinsightaks.net:443/$DEFAULT_CATALOG`
Example: `trino://trino@mytrinocluster.00000000000000000000000000.westus3.hdinsightaks.net:443/tpch`
iot-operations Quickstart Add Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-add-assets.md
To verify the configuration, run the following command to view the Akri instance
kubectl get akrii -n azure-iot-operations ```
+Note that it may take a few minutes for the instance to show up.
+ The output from the previous command looks like the following example. You may need to wait for a few seconds for the Akri instance to be created: ```console
iot-operations Howto Autodetect Opcua Assets Using Akri https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/howto-autodetect-opcua-assets-using-akri.md
To configure the custom OPC UA discovery handler with asset detection, first you
kubectl get akrii -n azure-iot-operations ```
+ Note that it may take a few minutes for the instance to show up.
+ You can inspect the instance custom resource by using an editor such as OpenLens, under `CustomResources/akri.sh/Instance`. You can also view the custom resource definition YAML of the instance that was created:
iot-operations Concept Iot Operations In Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/concept-iot-operations-in-layered-network.md
+
+ Title: How does Azure IoT Operations work in layered network?
+#
+
+description: Use the Layered Network Management service to enable Azure IoT Operations in industrial network environment.
+++ Last updated : 11/29/2023+
+#CustomerIntent: As an operator, I want to learn about the architecture of Azure IoT Operations in a Purdue Network environment and how Layered Network Management supports this scenario.
++
+# How does Azure IoT Operations work in layered network?
++
+## Industrial scenario for the Azure IoT Operations
+
+In the basic architecture described in [Azure IoT Operations Architecture Overview](../get-started/overview-iot-operations.md#architecture-overview), all the Azure IoT Operations components are deployed to a single internet-connected cluster. In this type of environment, component-to-component and component-to-Azure connections are enabled by default.
+
+However, in many industrial scenarios, computing units for different purposes are located in separate networks. For example:
+- Assets and servers on the factory floor
+- Data collecting and processing solutions in the data center
+- Business logic applications with information workers
++
+In some cases, the network design includes a single isolated network that is located behind the firewall or is physically disconnected from the internet. In other cases, a more complicated layered network topology is configured, such as the [ISA-95](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa95)/[Purdue Network architecture](https://en.wikipedia.org/wiki/Purdue_Enterprise_Reference_Architecture).
+
+Layered Network Management is designed to facilitate connections between Azure and clusters in different kinds of isolated network environments. It enables Azure IoT Operations to function in top-level isolated layers and in nested isolated layers as needed.
+
+## How does Layered Network Management work?
+
+The following diagram describes the mechanism to redirect traffic from an isolated network to Azure Arc. It explains the underlying logic. For information on specific steps to achieve this mechanism, see [Configure Azure IoT Layered Network Management](howto-configure-l4-cluster-layered-network.md).
+
+1. When an Arc agent or extension attempts to connect to its corresponding cloud-side service, it uses DNS to resolve the domain name of the target service endpoint.
+
+1. The custom DNS returns the **IP address of the Layered Network Management instance** at the upper level instead of the real IP address of the service endpoint.
+1. The Arc extension initiates a connection to the Layered Network Management instance with its IP address.
+1. If the Layered Network Management instance is at the internet facing level, it forwards the traffic to the target Arc service endpoint. If the Layered Network Management instance isn't at the top level, it forwards the traffic to the next Layered Network Management instance, and so on.
+> [!NOTE]
+> Layered Network Management only forwards internet traffic when the destination is on the allowlist.
++
+![Diagram of Layered Network Management redirecting traffic.](./media/concept-iot-operations-in-layered-network/how-does-layered-network-management-work.png)
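In step 2 of this flow, the custom DNS simply answers queries for the allowlisted Azure Arc endpoints with the address of the upstream Layered Network Management instance. A minimal sketch, assuming a dnsmasq-based DNS server, a hypothetical Layered Network Management address of 10.104.0.10, and two representative endpoints (the real allowlist is longer):

```bash
# Map representative Azure Arc endpoints to the upstream Layered Network Management IP (illustrative only)
cat <<'EOF' | sudo tee /etc/dnsmasq.d/lnm-redirect.conf
address=/management.azure.com/10.104.0.10
address=/login.microsoftonline.com/10.104.0.10
EOF
sudo systemctl restart dnsmasq
```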
+
+## Example of Azure IoT Operations in layered network
+
+The following diagram is an example of Azure IoT Operations being deployed to multiple clusters in multiple network layers. Based on the Purdue Network paradigm, level 4 is the enterprise network, level 3 is the operation and control layer, and level 2 is the controller system layer. Moreover, in our prototypical network, only level 4 has direct internet access.
++
+In the pictured example, Azure IoT Operations is deployed to levels 2 through 4. At level 3 and level 4, the **Layered Network Management services** are configured to receive and forward the network traffic from the layer that is one level below. With this forwarding mechanism, all the clusters illustrated in this deployment are able to connect to Azure and become Arc-enabled. The connection to Arc enables users to manage any Arc-enabled endpoint, such as the servers, the clusters, and the Arc-enabled service workloads, from the cloud.
+
+With extra configurations, the Layered Network Management service can also direct east-west traffic. This route enables Azure IoT Operations components to send data to other components at the upper level and form data pipelines from the bottom layer to the cloud.
+In a multi-layer network, the Azure IoT Operations components can be deployed across layers based on your architecture and data flow needs. This example provides some general ideas of where individual components can be placed.
+- The **OPC UA Broker** may be located at the lower layer, closer to your assets and OPC UA servers. The same is true for the **Akri** agent.
+- Data is transferred towards the cloud through the **MQ** components in each layer.
+- The **Data Processor** is generally placed at the top layer, which is the most likely layer to have significant compute capacity, and serves as the final stop where data is prepared before being sent to the cloud.
+
+## Next steps
+
+- To understand how to set up a cluster in an isolated environment for Azure IoT Operations scenarios, see [Configure Layered Network Management service to enable Azure IoT Operations in an isolated network](howto-configure-aks-edge-essentials-layered-network.md).
+
iot-operations Howto Configure Aks Edge Essentials Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-aks-edge-essentials-layered-network.md
This walkthrough is an example of deploying Azure IoT Operations to a special en
In this example, you Arc-enable AKS Edge Essentials or K3S clusters in the isolated layer of an ISA-95 network environment using the Layered Network Management service running in one level above. The network and cluster architecture are described as follows:-- A level 4 single-node cluster running on a host machine with:
- - Direct access to the internet.
- - A secondary network interface card (NIC) connected to the local network. The secondary NIC makes the level 4 cluster visible to the level 3 local network.
+- A level 4 single-node cluster running on a host machine with direct access to the internet.
- A custom DNS in the local network. See the [Configure custom DNS](howto-configure-layered-network.md#configure-custom-dns) for the options. To set up the environment quickly, you should use the *CoreDNS* approach instead of a DNS server.-- The level 3 cluster connects to the Layered Network Management service as a proxy for all the Azure Arc related traffic.
+- A level 3 cluster that is blocked from accessing the internet. It connects to the Layered Network Management service as a proxy for all the Azure Arc related traffic.
-![Diagram showing a level 4 and level 3 AKS Edge Essentials network.](./media/howto-configure-aks-edge-essentials-layered-network/arc-enabled-aks-edge-essentials-cluster.png)
+For more information, see [Example of logical segmentation with minimum hardware](howto-configure-layered-network.md#example-of-logical-segmentation-with-minimum-hardware).
-### Configure level 4 AKS Edge Essentials and Layered Network Management
+![Diagram of a logical isolated network configuration.](./media/howto-configure-layered-network/logical-network-segmentation.png)
++
+### Configure level 4 Kubernetes cluster and Layered Network Management
After you configure the network, you need to configure the level 4 Kubernetes cluster. Complete the steps in [Configure IoT Layered Network Management level 4 cluster](./howto-configure-l4-cluster-layered-network.md). In the article, you: -- Set up a Windows 11 machine and configure AKS Edge Essentials.
+- Set up a Windows 11 machine and configure AKS Edge Essentials or set up K3S Kubernetes on an Ubuntu machine.
- Deploy and configure the Layered Network Management service to run on the cluster. You need to identify the **local IP** of the host machine. In later steps, you direct traffic from level 3 to this IP address with a custom DNS.
After you complete this section, the Layered Network Management service is ready
### Configure the custom DNS
-In the local network, you need to set up the mechanism to redirect all the network traffic to the Layered Network Management service. Use the steps in [Configure custom DNS](howto-configure-layered-network.md#configure-custom-dns). In the article:
- - If you choose the *CoreDNS* approach, you can skip to *Configure and Arc enable level 3 cluster* and configure the CoreDNS before your Arc-enable the level 3 cluster.
- - If you choose to use a *DNS server*, follow the steps to set up the DNS server before you move to the next section in this article.
+In the local network, you need to set up the mechanism to redirect all the network traffic to the Layered Network Management service. Use the steps in [Configure custom DNS](howto-configure-layered-network.md#configure-custom-dns). In the article:
+- If you choose the *CoreDNS* approach, you can skip to *Configure and Arc enable level 3 cluster* and configure CoreDNS before you Arc-enable the level 3 cluster.
+- If you choose to use a *DNS server*, follow the steps to set up the DNS server before you move to the next section in this article.
### Configure and Arc enable level 3 cluster
For more information, see [Access Kubernetes resources from Azure portal](/azure
Once your level 3 cluster is Arc-enabled, you can deploy IoT Operations to the cluster. All IoT Operations components are deployed to the level 3 cluster and connect to Arc through the Layered Network Management service. The data pipeline also routes through the Layered Network Management service.
-![Network diagram that shows IoT Operations running on a level 3 cluster.](./media/howto-configure-aks-edge-essentials-layered-network/iot-operations-level-3-cluster.png)
+![Network diagram that shows IoT Operations running on a level 3 cluster.](./media/howto-configure-layered-network/logical-network-segmentation-2.png)
Follow the steps in [Quickstart: Deploy Azure IoT Operations to an Arc-enabled Kubernetes cluster](../get-started/quickstart-deploy.md) to deploy IoT Operations to the level 3 cluster.
iot-operations Howto Configure L3 Cluster Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-l3-cluster-layered-network.md
Follow the guidance for **hardware requirements** and **prerequisites** sections
You can choose to use [AKS Edge Essentials](/azure/aks/hybrid/aks-edge-overview) hosted on Windows 11 or a K3S cluster on Ubuntu for the Kubernetes cluster.
-# [AKS Edge Essentials](#tab/aksee)
-
-## Prepare Windows 11
-
-You should complete this step in an *internet facing environment* outside of the isolated network. Otherwise, you need to prepare the offline installation package for the following required software.
-
-If you're using VM to create your Windows 11 machines, use the [VM image](https://developer.microsoft.com/windows/downloads/virtual-machines/) that includes Visual Studio preinstalled. Having Visual Studio ensures the required certificates needed by Arc onboarding are included.
-
-1. Install [Windows 11](https://www.microsoft.com/software-download/windows11) on your device.
-1. Install [Helm](https://helm.sh/docs/intro/install/) 3.8.0 or later.
-1. Install [Kubectl](https://kubernetes.io/docs/tasks/tools/)
-1. Download the [installer for the validated AKS Edge Essentials](https://aka.ms/aks-edge/msi-k3s-1.2.414.0) version.
-1. Install AKS Edge Essentials. Follow the steps in [Prepare your machines for AKS Edge Essentials](/azure/aks/hybrid/aks-edge-howto-setup-machine). Be sure to use the installer you downloaded in the previous step and not the most recent version.
-1. Install Azure CLI. Follow the steps in [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows).
-1. Install *connectedk8s* and other extensions.
-
- ```bash
- az extension add --name connectedk8s
- az extension add --name k8s-extension
- az extension add --name customlocation
- ```
-1. [Install Azure CLI extension](/cli/azure/iot/ops) using `az extension add --name azure-iot-ops`.
-1. **Certificates:** For Level 3 and lower, you ARC onboard the cluster that isn't connected to the internet. Therefore, you need to install certificates steps in [Prerequisites for AKS Edge Essentials offline installation](/azure/aks/hybrid/aks-edge-howto-offline-install).
-1. Install the following optional software if you plan to try IoT Operations quickstarts or MQTT related scenarios.
- - [MQTTUI](https://github.com/EdJoPaTo/mqttui/releases) or other MQTT client
- - [Mosquitto](https://mosquitto.org/)
-
-## Create the AKS Edge Essentials cluster
-
-To create the AKS Edge Essentials cluster that's compatible with Azure IoT Operations:
-
-1. Complete the steps in [Create a single machine deployment](/azure/aks/hybrid/aks-edge-howto-single-node-deployment).
-
- At the end of [Step 1: single machine configuration parameters](/azure/aks/hybrid/aks-edge-howto-single-node-deployment#step-1-single-machine-configuration-parameters), modify the following values in the *aksedge-config.json* file as follows:
-
- - `Init.ServiceIPRangeSize` = 10
- - `LinuxNode.DataSizeInGB` = 30
- - `LinuxNode.MemoryInMB` = 8192
-
- In the **Network** section, set the `SkipDnsCheck` property to **true**.Add and set the `DnsServers` to the address of the DNS server in the subnet.
-
- ```json
- "DnsServers": ["<IP ADDRESS OF THE DNS SERVER IN SUBNET>"],
- "SkipDnsCheck": true,
- ```
-
-1. Install **local-path** storage in the cluster by running the following command:
-
- ```cmd
- kubectl apply -f https://raw.githubusercontent.com/Azure/AKS-Edge/main/samples/storage/local-path-provisioner/local-path-storage.yaml
- ```
-
-## Move the device to level 3 isolated network
-
-In your isolated network layer, the DNS server was configured in a prerequisite step using [Create sample network environment](./howto-configure-layered-network.md). Complete the step if you haven't done so.
-
-After the device is moved to level 3, configure the DNS setting using the following steps:
-
-1. In **Windows Control Panel** > **Network and Internet** > **Network and Sharing Center**, select the current network connection.
-1. In the network properties dialog, select **Properties** > **Internet Protocol Version 4 (TCP/IPv4)** > **Properties**.
-1. Select **Use the following DNS server addresses**.
-1. Enter the level 3 DNS server local IP address.
-
- :::image type="content" source="./media/howto-configure-l3-cluster-layered-network/windows-dns-setting.png" alt-text="Screenshot that shows Windows DNS setting with the level 3 DNS server local IP address.":::
- # [K3S cluster](#tab/k3s) You should complete this step in an *internet facing environment outside of the isolated network*. Otherwise, you need to prepare the offline installation package for the following software in the next section.
After the device is moved to your level 3 isolated network layer, it's required
1. Select the setting of the current connection. 1. In the IPv4 tab, disable the **Automatic** setting for DNS and enter the local IP of DNS server.
+# [AKS Edge Essentials](#tab/aksee)
+There are a few limitations when setting up AKS Edge Essentials as the level 3 cluster.
+- When configuring the custom DNS, you must use a DNS server. The CoreDNS approach isn't applicable to an AKS Edge Essentials cluster.
+- If you plan to access and manage the cluster remotely, you need to make a [full deployment](/azure/aks/hybrid/aks-edge-howto-multi-node-deployment) instead of a [single machine deployment](/azure/aks/hybrid/aks-edge-howto-single-node-deployment). Moreover, the full deployment can't be hosted on an Azure VM.
+
+## Prepare Windows 11
+
+You should complete this step in an *internet facing environment* outside of the isolated network. Otherwise, you need to prepare the offline installation package for the following required software.
+
+If you're using a VM to create your Windows 11 machine, use the [VM image](https://developer.microsoft.com/windows/downloads/virtual-machines/) that includes Visual Studio preinstalled. Having Visual Studio ensures that the certificates required for Arc onboarding are included.
+
+1. Install [Windows 11](https://www.microsoft.com/software-download/windows11) on your device.
+1. Install [Helm](https://helm.sh/docs/intro/install/) 3.8.0 or later.
+1. Install [Kubectl](https://kubernetes.io/docs/tasks/tools/).
+1. Download the [installer for the validated AKS Edge Essentials](https://aka.ms/aks-edge/msi-k3s-1.2.414.0) version.
+1. Install AKS Edge Essentials. Follow the steps in [Prepare your machines for AKS Edge Essentials](/azure/aks/hybrid/aks-edge-howto-setup-machine). Be sure to use the installer you downloaded in the previous step and not the most recent version.
+1. **Certificates:** For level 3 and lower, you Arc-onboard a cluster that isn't connected to the internet. Therefore, you need to follow the certificate installation steps in [Prerequisites for AKS Edge Essentials offline installation](/azure/aks/hybrid/aks-edge-howto-offline-install).
+1. Install the following optional software if you plan to try IoT Operations quickstarts or MQTT related scenarios.
+ - [MQTTUI](https://github.com/EdJoPaTo/mqttui/releases) or other MQTT client
+ - [Mosquitto](https://mosquitto.org/)
+1. Install Azure CLI. You can install the Azure CLI directly onto the level 3 machine or on another *developer* or *jumpbox* machine if you plan to access the level 3 cluster remotely. If you choose to access the Kubernetes cluster remotely to keep the cluster host clean, you run the *kubectl* and *az* related commands from the developer machine for the rest of the steps in this article.
+ The *AKS Edge Essentials - Single machine deployment* does not support accessing Kubernetes remotely. If you want to enable remote kubectl access, you will need to create the [Full Kubernetes Deployment](/azure/aks/hybrid/aks-edge-howto-multi-node-deployment) instead. Additional configurations are needed when creating this type of Kubernetes cluster.
+ - Install Azure CLI. Follow the steps in [Install Azure CLI on Windows](/cli/azure/install-azure-cli-windows).
+ - Install *connectedk8s* and other extensions.
+
+ ```bash
+ az extension add --name connectedk8s
+ az extension add --name k8s-extension
+ az extension add --name customlocation
+ ```
+ - [Install Azure CLI extension](/cli/azure/iot/ops) using `az extension add --name azure-iot-ops`.
+## Create the AKS Edge Essentials cluster
+
+To create the AKS Edge Essentials cluster that's compatible with Azure IoT Operations:
+
+1. Complete the steps in [Create a single machine deployment](/azure/aks/hybrid/aks-edge-howto-single-node-deployment).
+ Create a [Full Kubernetes Deployment](/azure/aks/hybrid/aks-edge-howto-multi-node-deployment) instead if you plan to remotely access the Kubernetes cluster from another machine.
+
+ At the end of [Step 1: single machine configuration parameters](/azure/aks/hybrid/aks-edge-howto-single-node-deployment#step-1-single-machine-configuration-parameters), modify the following values in the *aksedge-config.json* file as follows:
+
+ - `Init.ServiceIPRangeSize` = 10
+ - `LinuxNode.DataSizeInGB` = 30
+ - `LinuxNode.MemoryInMB` = 8192
+
+ In the **Network** section, set the `SkipDnsCheck` property to **true**. Add and set the `DnsServers` to the address of the DNS server in the subnet.
+
+ ```json
+ "DnsServers": ["<IP ADDRESS OF THE DNS SERVER IN SUBNET>"],
+ "SkipDnsCheck": true,
+ ```
+
+1. Install **local-path** storage in the cluster by running the following command:
+
+ ```cmd
+ kubectl apply -f https://raw.githubusercontent.com/Azure/AKS-Edge/main/samples/storage/local-path-provisioner/local-path-storage.yaml
+ ```
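    To confirm that the provisioner is ready, you can list the storage classes; the class created by this manifest is named `local-path`:

    ```bash
    # The local-path provisioner registers a storage class named "local-path"
    kubectl get storageclass local-path
    ```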
+
+## Move the device to level 3 isolated network
+
+In your isolated network layer, the DNS server was configured in a prerequisite step using [Create sample network environment](./howto-configure-layered-network.md). Complete the step if you haven't done so.
+
+After the device is moved to level 3, configure the DNS setting using the following steps:
+
+1. In **Windows Control Panel** > **Network and Internet** > **Network and Sharing Center**, select the current network connection.
+1. In the network properties dialog, select **Properties** > **Internet Protocol Version 4 (TCP/IPv4)** > **Properties**.
+1. Select **Use the following DNS server addresses**.
+1. Enter the level 3 DNS server local IP address.
+
+ :::image type="content" source="./media/howto-configure-l3-cluster-layered-network/windows-dns-setting.png" alt-text="Screenshot that shows Windows DNS setting with the level 3 DNS server local IP address.":::
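After the DNS setting is applied, you can verify that name resolution on the level 3 machine now points at the Layered Network Management instance; as a quick check, resolving an allowlisted endpoint should return the local Layered Network Management IP rather than a public address:

```bash
# Resolve an allowlisted Azure Arc endpoint from the level 3 machine
nslookup management.azure.com
```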
+ ## Provision the cluster to Azure Arc
iot-operations Howto Configure L4 Cluster Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-l4-cluster-layered-network.md
Azure IoT Layered Network Management is one of the Azure IoT Operations componen
## Prerequisites Meet the following minimum requirements for deploying the Layered Network Management individually on the system.-- Arc-connected cluster and GitOps in [AKS Edge Essentials requirements and support matrix](/azure/aks/hybrid/aks-edge-system-requirements)
+- **AKS Edge Essentials** - *Arc-connected cluster and GitOps* category in [AKS Edge Essentials requirements and support matrix](/azure/aks/hybrid/aks-edge-system-requirements)
+- **K3S Kubernetes cluster** - [Azure Arc-enabled Kubernetes system requirements](/azure/azure-arc/kubernetes/system-requirements)
## Set up Kubernetes cluster in Level 4 To set up only Layered Network Management, the prerequisites are simpler than an Azure IoT Operations deployment. It's optional to fulfill the general requirements for Azure IoT Operations in [Prepare your Kubernetes cluster](../deploy-iot-ops/howto-prepare-cluster.md).
-Currently, the steps only include setting up an [AKS Edge Essentials](/azure/aks/hybrid/aks-edge-overview) Kubernetes cluster.
+The following steps for setting up an [AKS Edge Essentials](/azure/aks/hybrid/aks-edge-overview) or [K3S](https://docs.k3s.io/) Kubernetes cluster are verified by Microsoft.
+
+# [K3S Cluster](#tab/k3s)
+
+## Prepare an Ubuntu machine
+
+1. Ubuntu 22.04 LTS is the recommended version for the host machine.
+
+1. Install [Helm](https://helm.sh/docs/intro/install/) 3.8.0 or later.
+
+1. Install [Kubectl](https://kubernetes.io/docs/tasks/tools/).
+
+1. Install the Azure CLI. You can install the Azure CLI directly onto the level 4 machine or on another *developer* or *jumpbox* machine if you plan to access the level 4 cluster remotely. If you choose to access the Kubernetes cluster remotely to keep the cluster host clean, you run the *kubectl* and *az* related commands from the *developer* machine for the rest of the steps in this article.
+
+ - Install Azure CLI. Follow the steps in [Install Azure CLI on Linux](/cli/azure/install-azure-cli-linux).
+
+ - Install *connectedk8s* and other extensions.
+
+ ```bash
+ az extension add --name connectedk8s
+ az extension add --name k8s-extension
+ ```
+
+ - [Install Azure CLI extension](/cli/azure/iot/ops) using `az extension add --name azure-iot-ops`.
+
+## Create the K3S cluster
+
+1. Install K3S with the following command:
+
+ ```bash
+ curl -sfL https://get.k3s.io | sh -s - --disable=traefik --write-kubeconfig-mode 644
+ ```
+
+ > [!IMPORTANT]
+ > Be sure to use the `--disable=traefik` parameter to disable Traefik. Otherwise, you might have an issue when you try to allocate a public IP for the Layered Network Management service in later steps.
+
+1. Copy the K3s configuration yaml file to `.kube/config`.
+
+ ```bash
+ mkdir -p ~/.kube
+ # Back up any existing kubeconfig before merging the K3s configuration (only needed if a config already exists)
+ cp ~/.kube/config ~/.kube/config.back
+ sudo KUBECONFIG=~/.kube/config:/etc/rancher/k3s/k3s.yaml kubectl config view --flatten > ~/.kube/merged
+ mv ~/.kube/merged ~/.kube/config
+ chmod 0600 ~/.kube/config
+ export KUBECONFIG=~/.kube/config
+ #switch to k3s context
+ kubectl config use-context default
+ ```
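At this point, you can confirm that kubectl is talking to the new K3S cluster; a quick check:

```bash
# The single K3S node should report a Ready status
kubectl get nodes
```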
+
+# [AKS Edge Essentials](#tab/aksee)
## Prepare Windows 11
Currently, the steps only include setting up an [AKS Edge Essentials](/azure/aks
For more information about deployment configurations, see [Deployment configuration JSON parameters](/azure/aks/hybrid/aks-edge-deployment-config-json). ++ ## Arc enable the cluster 1. Sign in with Azure CLI. To avoid permission issues later, it's important that you sign in interactively using a browser window:
Currently, the steps only include setting up an [AKS Edge Essentials](/azure/aks
az account set -s $SUBSCRIPTION_ID ``` 1. Register the required resource providers in your subscription:
- ```powershell
- az provider register -n "Microsoft.ExtendedLocation"
- az provider register -n "Microsoft.Kubernetes"
- az provider register -n "Microsoft.KubernetesConfiguration"
- az provider register -n "Microsoft.IoTOperationsOrchestrator"
- az provider register -n "Microsoft.IoTOperationsMQ"
- az provider register -n "Microsoft.IoTOperationsDataProcessor"
- az provider register -n "Microsoft.DeviceRegistry"
- ```
+ > [!NOTE]
+ > This is a one-time configuration per subscription.
+
+ ```powershell
+ az provider register -n "Microsoft.ExtendedLocation"
+ az provider register -n "Microsoft.Kubernetes"
+ az provider register -n "Microsoft.KubernetesConfiguration"
+ az provider register -n "Microsoft.IoTOperationsOrchestrator"
+ az provider register -n "Microsoft.IoTOperationsMQ"
+ az provider register -n "Microsoft.IoTOperationsDataProcessor"
+ az provider register -n "Microsoft.DeviceRegistry"
+ ```
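    Registration can take several minutes. One way to check the state of a provider is a standard `az provider show` query (repeat for each namespace as needed):

    ```bash
    az provider show --namespace "Microsoft.Kubernetes" --query registrationState -o tsv
    ```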
1. Use the [az group create](/cli/azure/group#az-group-create) command to create a resource group in your Azure subscription to store all the resources: ```bash az group create --location $LOCATION --resource-group $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID
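Connecting the cluster to Azure Arc is typically done with the *connectedk8s* extension; the following is a hedged sketch with placeholder variable names, not necessarily the exact sequence in the article:

```bash
# Connect the level 4 cluster to Azure Arc (illustrative; variable names are placeholders)
az connectedk8s connect --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --location $LOCATION
```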
Create the Layered Network Management custom resource.
kind: Lnm metadata: name: level4
- namespace: default
+ namespace: azure-iot-operations
spec: image: pullPolicy: IfNotPresent
Create the Layered Network Management custom resource.
### Add iptables configuration
-This step is for AKS Edge Essentials only.
+> [!IMPORTANT]
+> This step is for AKS Edge Essentials only.
The Layered Network Management deployment creates a Kubernetes service of type *LoadBalancer*. To ensure that the service is accessible from outside the Kubernetes cluster, you need to map the underlying Windows host's ports to the appropriate ports on the Layered Network Management service.
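Before adding the mapping, it helps to check which ports the *LoadBalancer* service exposes so you know what to forward from the Windows host; a quick look with kubectl (the namespace and service name can differ per deployment):

```bash
# List services in the Azure IoT Operations namespace and note the LoadBalancer ports
kubectl get services -n azure-iot-operations
```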
iot-operations Howto Configure Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-layered-network.md
Last updated 11/15/2023
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-To use Azure IoT Layered Network Management service, you can configure an isolated network environment with physical or logical segmentation.
+To use the Azure IoT Layered Network Management service, you need to configure an isolated network environment, for example, the [ISA-95](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa95)/[Purdue Network architecture](http://www.pera.net/). This page provides a few examples for setting up a test environment, depending on how you want to achieve the isolation.
+- *Physical segmentation* - The networks are physically separated. In this case, the Layered Network Management service needs to be deployed to a dual NIC (Network Interface Card) host to connect to both the internet-facing network and the isolated network.
+- *Logical segmentation* - The network is logically segmented with configurations such as VLANs, subnets, or firewalls. The Layered Network Management service has a single endpoint and is configured to be visible to its own network layer and the isolated layer.
-Each isolated layer that's level 3 and lower, requires you to configure a custom DNS.
+Both approaches require you to configure a custom DNS in the isolated network layer to direct the network traffic to the Layered Network Management instance in the upper layer.
+
+> [!IMPORTANT]
+> The network environments outlined in Layered Network Management documentation are examples for testing the Layered Network Management. It's not a recommendation of how you build your network and cluster topology for production.
## Configure isolated network with physical segmentation The following example configuration is a simple isolated network with minimum physical devices.
-![Diagram of a physical device isolated network configuration.](./media/howto-configure-layered-network/physical-device-isolated.png)
+![Diagram of a physical device isolated network configuration.](./media/howto-configure-layered-network/physical-network-segmentation.png)
- The wireless access point is used for setting up a local network and doesn't provide internet access. - **Level 4 cluster** is a single node cluster hosted on a dual network interface card (NIC) physical machine that connects to internet and the local network. - **Level 3 cluster** is a single node cluster hosted on a physical machine. This device cluster only connects to the local network.
->[!IMPORTANT]
-> When assigning local IP addresses, avoid using the default address `192.168.0.x`. You should change the address if it's the default setting for your access point.
- Layered Network Management is deployed to the dual NIC cluster. The cluster in the local network connects to Layered Network Management as a proxy to access Azure and Arc services. In addition, it would need a custom DNS in the local network to provide domain name resolution and point the traffic to Layered Network Management. For more information, see [Configure custom DNS](#configure-custom-dns). ## Configure Isolated Network with logical segmentation
-The following example is an isolated network environment where each level is logically segmented with subnets. In this test environment, there are multiple clusters one at each level. The clusters can be AKS Edge Essentials or K3S. The Kubernetes cluster in the level 4 network has direct internet access. The Kubernetes clusters in level 3 and below don't have internet access.
+The following diagram illustrates an isolated network environment where each level is logically segmented with subnets. In this test environment, there are multiple clusters, one at each level. The clusters can be AKS Edge Essentials or K3S. The Kubernetes cluster in the level 4 network has direct internet access. The Kubernetes clusters in level 3 and below don't have internet access.
-![Diagram of a logical segmentation isolated network](./media/howto-configure-layered-network/nested-edge.png)
+![Diagram of a logical segmentation isolated network.](./media/howto-configure-layered-network/logical-network-segmentation-subnets.png)
The multiple levels of networks in this test setup are accomplished using subnets within a network: - **Level 4 subnet (10.104.0.0/16)** - This subnet has access to the internet. All the requests are sent to the destinations on the internet. This subnet has a single Windows 11 machine with the IP address 10.104.0.10. - **Level 3 subnet (10.103.0.0/16)** - This subnet doesn't have access to the internet and is configured to only have access to the IP address 10.104.0.10 in Level 4. This subnet contains a Windows 11 machine with the IP address 10.103.0.33 and a Linux machine that hosts a DNS server. The DNS server is configured using the steps in [Configure custom DNS](#configure-custom-dns). All the domains in the DNS configuration must be mapped to the address 10.104.0.10.-- **Level 2 subnet (10.102.0.0/16)** - Like Level 3, this subnet doesn't have access to the internet. It's configured to only have access to the IP address 10.103.0.33 in Level 3. This subnet contains a Windows 11 machine with the IP address 10.102.0.28 and a Linux machine that hosts a DNS server. There's one Windows 11 machine (node) in this network with IP address 10.102.0.28. All the domains in the DNS configuration must be mapped to the address 10.103.0.33.
+- **Level 2 subnet (10.102.0.0/16)** - Like level 3, this subnet doesn't have access to the internet. It's configured to only have access to the IP address 10.103.0.33 in level 3. This subnet contains a Windows 11 machine (node) with the IP address 10.102.0.28 and a Linux machine that hosts a DNS server. All the domains in the DNS configuration must be mapped to the address 10.103.0.33.
+
+Refer to the following examples to set up this type of network environment.
+
+### Example of logical segmentation with minimum hardware
+In this example, both machines are connected to an access point (AP) that connects to the internet. The level 4 host machine can access the internet. The level 3 host is blocked from accessing the internet by the AP's configuration, for example, a firewall or client control. Because both machines are in the same network, the Layered Network Management instance hosted on the level 4 cluster is by default visible to the level 3 machine and cluster.
+An extra custom DNS needs to be set up in the local network to provide domain name resolution and point the traffic to Layered Network Management. For more information, see [Configure custom DNS](#configure-custom-dns).
+
+![Diagram of a logical isolated network configuration.](./media/howto-configure-layered-network/logical-network-segmentation.png)
+
+### Example of logical segmentation in Azure
+In this example, a test environment is created with a [virtual network](/azure/virtual-network/virtual-networks-overview) and a [Linux virtual machine](/azure/virtual-machines/linux/quick-create-portal) in Azure. A CLI sketch that mirrors these steps is shown after the following list.
+> [!IMPORTANT]
+> A virtual environment is for exploration and evaluation only. For more information, see [validated environments](/azure/iot-operations/get-started/overview-iot-operations#validated-environments) for Azure IoT Operations.
+
+1. Create a virtual network in your Azure subscription. Create subnets for at least two layers (level 4 and level 3).
+1. It's optional to create an extra subnet for the *jumpbox* or *developer* machine to remotely access the machine or cluster across layers. This setup is convenient if you plan to create more than two network layers. Otherwise, you can connect the jumpbox machine to the level 4 network.
+1. Create [network security groups](/azure/virtual-network/network-security-groups-overview) for each level and attach them to the corresponding subnets.
+1. You can use the default values for the level 4 security group.
+1. You need to configure additional inbound and outbound rules for the level 3 (and lower level) security groups.
+ - Add inbound and outbound security rules to deny all network traffic.
+ - With a higher priority, add inbound and outbound security rules to allow network traffic to and from the IP range of level 4 subnet.
+ - [Optional] If you create a *jumpbox* subnet, create inbound and outbound rules for allowing traffic to and from this subnet.
+1. Create Linux VMs in level 3 and level 4.
+ - Refer to [validated environments](/azure/iot-operations/get-started/overview-iot-operations#validated-environments) for the VM specifications.
+ - When creating the VM, connect the machine to the subnet that you created in earlier steps.
+ - Skip the security group creation for the VM.
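A minimal CLI sketch of the preceding list, assuming hypothetical names (`purdue-vnet`, `level3-nsg`) and the subnet ranges from the earlier example; only outbound rules are shown, and the inbound rules are analogous:

```bash
# Create the virtual network with a level 4 subnet, then add a level 3 subnet (names are placeholders)
az network vnet create -g $RESOURCE_GROUP -n purdue-vnet --address-prefixes 10.0.0.0/8 \
  --subnet-name level4 --subnet-prefixes 10.104.0.0/16
az network vnet subnet create -g $RESOURCE_GROUP --vnet-name purdue-vnet -n level3 \
  --address-prefixes 10.103.0.0/16

# Network security group for level 3: deny all outbound traffic, then allow traffic to the level 4 range
az network nsg create -g $RESOURCE_GROUP -n level3-nsg
az network nsg rule create -g $RESOURCE_GROUP --nsg-name level3-nsg -n deny-all-outbound \
  --priority 200 --direction Outbound --access Deny --protocol '*' \
  --destination-address-prefixes '*' --destination-port-ranges '*'
az network nsg rule create -g $RESOURCE_GROUP --nsg-name level3-nsg -n allow-level4-outbound \
  --priority 100 --direction Outbound --access Allow --protocol '*' \
  --destination-address-prefixes 10.104.0.0/16 --destination-port-ranges '*'

# Attach the security group to the level 3 subnet
az network vnet subnet update -g $RESOURCE_GROUP --vnet-name purdue-vnet -n level3 \
  --network-security-group level3-nsg
```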
## Configure custom DNS
-A custom DNS is needed for level 3 and below. It ensures that DNS resolution for network traffic originating within the cluster is pointed to the parent level Layered Network Management instance. In an existing or production environment, incorporate the following DNS resolutions into your DNS design. If you want to set up a test environment for Layered Network Management service and Azure IoT Operations, you can refer to one of the following examples.
+A custom DNS is needed for level 3 and below. It ensures that DNS resolution for network traffic originating within the cluster is pointed to the parent level Layered Network Management instance. In an existing or production environment, incorporate the following DNS resolutions into your DNS design. If you want to set up a test environment for Layered Network Management service and Azure IoT Operations, you can refer to the following examples.
# [CoreDNS](#tab/coredns) ### Configure CoreDNS
-While the DNS setup can be achieved many different ways, this example uses an extension mechanism provided by CoreDNS to add the allowlisted URLs to be resolved by CoreDNS. CoreDNS is the default DNS server for K3S clusters.
+While the DNS setup can be achieved in many different ways, this example uses an extension mechanism provided by CoreDNS, which is the default DNS server for K3S clusters. The allowlisted URLs that need to be resolved are added to the CoreDNS configuration.
+> [!IMPORTANT]
+> The CoreDNS approach is only applicable to K3S cluster on Ubuntu host at level 3.
### Create configmap from level 4 Layered Network Management After the level 4 cluster and Layered Network Management are ready, perform the following steps.
iot-operations Howto Deploy Aks Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-deploy-aks-layered-network.md
Title: Deploy Azure IoT Layered Network Management to an AKS cluster
+ Title: "Quickstart: Configure Layered Network Management to Arc-enable a cluster in Azure environment"
-description: Configure Azure IoT Layered Network Management to an AKS cluster.
+description: Deploy Azure IoT Layered Network Management to an AKS cluster and Arc-enable a cluster on an Ubuntu VM.
Last updated 11/15/2023
#CustomerIntent: As an operator, I want to configure Layered Network Management so that I have secure isolate devices.
-# Deploy Azure IoT Layered Network Management to an AKS cluster
+# Quickstart: Configure Layered Network Management to Arc-enable a cluster in Azure environment
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
These steps deploy Layered Network Management to the AKS cluster. The cluster is
kind: Lnm metadata: name: level4
- namespace: default
+ namespace: azure-iot-operations
spec: image: pullPolicy: IfNotPresent
iot-operations Overview Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/overview-layered-network.md
Last updated 11/15/2023
[!INCLUDE [public-preview-note](../includes/public-preview-note.md)]
-Azure IoT Layered Network Management service is a component that facilitates the connection between Azure and clusters in isolated network environment. In industrial scenarios, the isolated network follows the *[ISA-95](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa95)/[Purdue Network architecture](http://www.pera.net/)*. The Layered Network Management service can route the network traffic from a non-internet facing layer through an internet facing layer and then to Azure. This service is deployed and managed as a component of Azure IoT Operations Preview on Arc-enabled Kubernetes clusters. Review the network architecture of your solution and use the Layered Network Management service if it's applicable and necessary for your scenarios. If you integrated other mechanisms of controlling internet access for the isolated network, you should compare the functionality with Layered Network Management service and choose the one that fits your needs the best. Layered Network Management is an optional component and it's not a dependency for any feature of Azure IoT Operations Preview.
+Azure IoT Layered Network Management service is a component that facilitates the connection between Azure and clusters in isolated network environment. In industrial scenarios, the isolated network follows the *[ISA-95](https://www.isa.org/standards-and-publications/isa-standards/isa-standards-committees/isa95)/[Purdue Network architecture](https://en.wikipedia.org/wiki/Purdue_Enterprise_Reference_Architecture)*. The Layered Network Management service can route the network traffic from a non-internet facing layer through an internet facing layer and then to Azure. This service is deployed and managed as a component of Azure IoT Operations Preview on Arc-enabled Kubernetes clusters. Review the network architecture of your solution and use the Layered Network Management service if it's applicable and necessary for your scenarios. If you integrated other mechanisms of controlling internet access for the isolated network, you should compare the functionality with Layered Network Management service and choose the one that fits your needs the best. Layered Network Management is an optional component and it's not a dependency for any feature of Azure IoT Operations Preview.
> [!IMPORTANT] > The network environments outlined in Layered Network Management documentation are examples for testing the Layered Network Management. They're not a recommendation of how to build your network and cluster topology for production use.
Layered Network Management supports the Azure IoT Operations components in an is
## Next steps
+- Learn [How Does Azure IoT Operations Work in Layered Network?](concept-iot-operations-in-layered-network.md)
- [Set up Layered Network Management in a simplified virtual machine and network environment](howto-deploy-aks-layered-network.md) to try a simple example with Azure virtual resources. It's the quickest way to see how Layered Network Management works without having to set up physical machines and Purdue Network.-- To understand how to set up a cluster in an isolated environment for Azure IoT Operations scenarios, see [Configure Layered Network Management service to enable Azure IoT Operations in an isolated network](howto-configure-aks-edge-essentials-layered-network.md).
load-testing Concept Load Test App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-test-app-service.md
Learn how to:
- [Start create a URL-based load test](./quickstart-create-and-run-load-test.md). - [Identify performance bottlenecks](./tutorial-identify-bottlenecks-azure-portal.md) for Azure applications. - [Configure your test for high-scale load](./how-to-high-scale-load.md).-- [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
+- [Configure automated performance testing](./quickstart-add-load-test-cicd.md).
load-testing Concept Load Testing Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-testing-concepts.md
You now know the key concepts of Azure Load Testing to start creating a load tes
- Learn how [Azure Load Testing works](./overview-what-is-azure-load-testing.md#how-does-azure-load-testing-work). - Learn how to [Create and run a load test for a website](./quickstart-create-and-run-load-test.md). - Learn how to [Identify a performance bottleneck in an Azure application](./tutorial-identify-bottlenecks-azure-portal.md).-- Learn how to [Set up automated regression testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
+- Learn how to [Set up automated regression testing with CI/CD](./quickstart-add-load-test-cicd.md).
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
Use the following steps to mark a test run as baseline:
- Learn more about [exporting the load test results for reporting](./how-to-export-test-results.md). - Learn more about [diagnosing failing load tests](./how-to-diagnose-failing-load-test.md).-- Learn more about [configuring automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
+- Learn more about [configuring automated performance testing with CI/CD](./quickstart-add-load-test-cicd.md).
load-testing How To Configure Load Test Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-load-test-cicd.md
If you don't plan to use any of the resources that you created, delete them so y
Advance to the next article to learn how to identify performance regressions by defining test fail criteria and comparing test runs. -- [Tutorial: automate regression tests](./tutorial-identify-performance-regression-with-cicd.md)
+- [Tutorial: automate regression tests](./quickstart-add-load-test-cicd.md)
- [Define test fail criteria](./how-to-define-test-criteria.md) - [View performance trends over time](./how-to-compare-multiple-test-runs.md)
load-testing How To Configure User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-configure-user-properties.md
To add a user properties file to your load test by using the Azure portal, follo
If you run a load test within your CI/CD workflow, you add the user properties file to the source control repository. You then specify this properties file in the [load test configuration YAML file](./reference-test-config-yaml.md).
-For more information about running a load test in a CI/CD workflow, see the [Automated regression testing tutorial](./tutorial-identify-performance-regression-with-cicd.md).
+For more information about running a load test in a CI/CD workflow, see the [Automated regression testing quickstart](./quickstart-add-load-test-cicd.md).
To add a user properties file to your load test, follow these steps:
load-testing How To Create And Run Load Test With Jmeter Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-and-run-load-test-with-jmeter-script.md
To create a load test using an existing JMeter script with the Azure CLI:
Specify a unique test ID for your load test, and the name of the JMeter test script (JMX file). If you use an existing test ID, a test run will be added to the test when you run it. ```azurecli
- $testId="<test-id>"
+ testId="<test-id>"
testPlan="<my-jmx-file>" ```
load-testing How To Create Manage Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test-runs.md
Test runs are associated with a load test in Azure Load Testing. To view the tes
- Select **Download input file** to download all input files for running the test, such as the JMeter test script, input data files, and user property files. The download also contains the [load test configuration YAML file](./reference-test-config-yaml.md). > [!TIP]
- > You can use the downloaded test configuration YAML file for [setting up automated load testing in a CI/CD pipeline](./tutorial-identify-performance-regression-with-cicd.md).
+ > You can use the downloaded test configuration YAML file for [setting up automated load testing in a CI/CD pipeline](./how-to-configure-load-test-cicd.md).
- Select **Download results file** to download the JMeter test results CSV file. This file contains an entry for each web request. Learn more about [exporting load test results](./how-to-export-test-results.md).
To identify performance degradation over time, you can visually compare up to fi
## Next steps - [Create and manage load tests](./how-to-create-manage-test.md)-- [Set up automated load testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md)
+- [Set up automated load testing with CI/CD](./quickstart-add-load-test-cicd.md)
load-testing How To Create Manage Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-manage-test.md
To delete a test in the Azure portal:
- [Create and manage test runs](./how-to-create-manage-test-runs.md) - [Identify performance bottlenecks with Azure Load Testing in the Azure portal](./quickstart-create-and-run-load-test.md)-- [Set up automated load testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md)
+- [Set up automated load testing with CI/CD](./quickstart-add-load-test-cicd.md)
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-define-test-criteria.md
In this section, you configure test criteria for a load test in the Azure portal
# [Azure Pipelines / GitHub Actions](#tab/pipelines+github)
-In this section, you configure test criteria for a load test, as part of a CI/CD workflow. Learn how to [set up automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
+In this section, you configure test criteria for a load test, as part of a CI/CD workflow. Learn how to [set up automated performance testing with CI/CD](./quickstart-add-load-test-cicd.md).
For CI/CD workflows, you configure the load test settings in a [YAML test configuration file](./reference-test-config-yaml.md). You store the load test configuration file alongside the JMeter test script file in the source control repository.
To specify auto stop settings in the YAML configuration file:
1. Save the YAML configuration file, and commit the changes to source control.
-Learn how to [set up automated performance testing with CI/CD](./tutorial-identify-performance-regression-with-cicd.md).
+Learn how to [set up automated performance testing with CI/CD](./quickstart-add-load-test-cicd.md).
Learn how to [set up automated performance testing with CI/CD](./tutorial-identi
- To learn how to parameterize a load test by using secrets, see [Parameterize a load test](./how-to-parameterize-load-tests.md). -- To learn about performance test automation, see [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
+- To learn about performance test automation, see [Configure automated performance testing](./quickstart-add-load-test-cicd.md).
load-testing How To Export Test Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-export-test-results.md
To copy the test results and log files for a test run from a storage account, in
- Learn more about [Diagnosing failing load tests](./how-to-diagnose-failing-load-test.md). - For information about comparing test results, see [Compare multiple test results](./how-to-compare-multiple-test-runs.md).-- To learn about performance test automation, see [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
+- To learn about performance test automation, see [Configure automated performance testing](./quickstart-add-load-test-cicd.md).
load-testing How To Monitor Server Side Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-monitor-server-side-metrics.md
When you update the configuration of a load test, all future test runs will use
- Learn how to [set up a high-scale load test](./how-to-high-scale-load.md). -- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
+- Learn how to [configure automated performance testing](./quickstart-add-load-test-cicd.md).
load-testing How To Parameterize Load Tests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-parameterize-load-tests.md
The values of the parameters aren't stored when they're passed from the CI/CD wo
- For information about high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md). -- To learn about performance test automation, see [Configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
+- To learn about performance test automation, see [Configure automated performance testing](./quickstart-add-load-test-cicd.md).
load-testing How To Read Csv Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-read-csv-data.md
To add a CSV file to your load test by using the Azure portal:
# [Azure Pipelines / GitHub Actions](#tab/pipelines+github)
-If you run a load test within your CI/CD workflow, you can add a CSV file to the test configuration YAML file. For more information about running a load test in a CI/CD workflow, see the [Automated regression testing tutorial](./tutorial-identify-performance-regression-with-cicd.md).
+If you run a load test within your CI/CD workflow, you can add a CSV file to the test configuration YAML file. For more information about running a load test in a CI/CD workflow, see [how to add load testing to CI/CD](./how-to-configure-load-test-cicd.md).
To add a CSV file to your load test:
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
A test configuration uses the following keys:
| `autoStop.timeWindow` | integer | 60 | Time window in seconds for calculating the *autoStop.errorPercentage*. | | `properties` | object | | List of properties to configure the load test. | | `properties.userPropertyFile` | string | | File to use as an Apache JMeter [user properties file](https://jmeter.apache.org/usermanual/test_plan.html#properties). The file is uploaded to the Azure Load Testing resource alongside the JMeter test script and other configuration files. If the file is in a subfolder on your local machine, use a path relative to the location of the test script. |
+| `zipArtifacts` | array | | List of ZIP artifact files for the load test. For files other than JMeter scripts and user properties, if the file size exceeds 50 MB, compress them into a ZIP file. Ensure that the ZIP file remains below 50 MB in size. Only 5 ZIP artifacts are allowed, with a maximum of 1000 files in each and an uncompressed size of 1 GB. |
| `splitAllCSVs` | boolean | False | Split the input CSV files evenly across all test engine instances. For more information, see [Read a CSV file in load tests](./how-to-read-csv-data.md#split-csv-input-data-across-test-engines). | | `secrets` | object | | List of secrets that the Apache JMeter script references. | | `secrets.name` | string | | Name of the secret. This name should match the secret name that you use in the Apache JMeter script. |
properties:
userPropertyFile: 'user.properties' configurationFiles: - 'SampleData.csv'
+zipArtifacts:
+ - sampleArtifact.zip
+ - TestData.zip
failureCriteria: - avg(response_time_ms) > 300 - percentage(error) > 50
The requests JSON file uses the following properties for defining the load confi
## Next steps -- Learn how to build [automated regression testing in your CI/CD workflow](./tutorial-identify-performance-regression-with-cicd.md).
+- Learn how to build [automated regression testing in your CI/CD workflow](./quickstart-add-load-test-cicd.md).
- Learn how to [parameterize load tests with secrets and environment variables](./how-to-parameterize-load-tests.md). - Learn how to [load test secured endpoints](./how-to-test-secured-endpoints.md).
load-testing Resource Limits Quotas Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-limits-quotas-capacity.md
To raise the limit or quota above the default limit, [open an online customer su
## Next steps - Learn how to [set up a high-scale load test](./how-to-high-scale-load.md).-- Learn how to [configure automated performance testing](./tutorial-identify-performance-regression-with-cicd.md).
+- Learn how to [configure automated performance testing](./quickstart-add-load-test-cicd.md).
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
Title: 'Tutorial: Run a load test to identify performance bottlenecks'
+ Title: 'Tutorial: Identify performance issues with load testing'
-description: In this tutorial, you learn how to identify and monitor performance bottlenecks in a web app by running a high-scale load test with Azure Load Testing.
+description: In this tutorial, you learn how to identify performance bottlenecks in a web app by running a high-scale load test with Azure Load Testing. Use the dashboard to analyze client-side and server-side metrics.
Previously updated : 01/18/2023 Last updated : 11/29/2023 #Customer intent: As an Azure user, I want to learn how to identify and fix bottlenecks in a web app so that I can improve the performance of the web apps that I'm running in Azure. # Tutorial: Run a load test to identify performance bottlenecks in a web app
-In this tutorial, you'll learn how to identify performance bottlenecks in a web application by using Azure Load Testing. You'll create a load test for a sample Node.js application.
+In this tutorial, you learn how to identify performance bottlenecks in a web application by using Azure Load Testing. You simulate load for a sample Node.js web application, and then use the load test dashboard to analyze client-side and server-side metrics.
-The sample application consists of a Node.js web API, which interacts with a NoSQL database. You'll deploy the web API to Azure App Service web apps and use Azure Cosmos DB as the database.
+The sample application consists of a Node.js web API, which interacts with a NoSQL database. You deploy the web API to Azure App Service web apps and use Azure Cosmos DB as the database.
-Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
-
-In this tutorial, you'll learn how to:
+In this tutorial, you learn how to:
> [!div class="checklist"] > * Deploy the sample app. > * Create and run a load test.
-> * Identify performance bottlenecks in the app.
-> * Remove a bottleneck.
-> * Rerun the load test to check performance improvements.
+> * Add Azure app components to the load test.
+> * Identify performance bottlenecks by using the load test dashboard.
## Prerequisites * An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* Azure CLI version 2.2.0 or later. Run `az --version` to find the version that's installed on your computer. If you need to install or upgrade the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+* The [Azure CLI](/cli/azure/install-azure-cli) installed on your local computer.
+* Azure CLI version 2.2.0 or later. Run `az --version` to find the version that is installed on your computer. If you need to install or upgrade the Azure CLI, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
* Visual Studio Code. If you don't have it, [download and install it](https://code.visualstudio.com/Download). * Git. If you don't have it, [download and install it](https://git-scm.com/download).
-## Deploy the sample app
+### Prerequisites check
+
+Before you start, validate your environment:
+
+* Sign in to the Azure portal and check that your subscription is active.
+
+* Check your version of the Azure CLI in a terminal or command window by running `az --version`. For the latest version, see the [latest release notes](/cli/azure/release-notes-azure-cli?tabs=azure-cli).
-Before you can load test the sample app, you have to get it deployed and running. Use Azure CLI commands, Git commands, and PowerShell commands to make that happen.
+ If you don't have the latest version, update your installation by following the [installation guide for your operating system or platform](/cli/azure/install-azure-cli).
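For a quick check from the same terminal, the following commands cover both steps. This is a minimal sketch; `az upgrade` is available only in Azure CLI 2.11.0 and later, so older installs still need the platform installer.

```azurecli
# Show the installed Azure CLI and extension versions
az --version

# Update the Azure CLI in place (Azure CLI 2.11.0 and later)
az upgrade
```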
+
+## Deploy the sample application
+
+In this tutorial, you're generating load against a sample web application that you deploy to Azure App Service. Use Azure CLI commands, Git commands, and PowerShell commands to deploy the sample application in your Azure subscription.
[!INCLUDE [include-deploy-sample-application](includes/include-deploy-sample-application.md)]
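In short, the included steps clone the sample repository and run the deployment script it ships with. The sketch below shows the typical start of that flow; the repository URL is inferred from the directory name used later in this tutorial, and the exact deployment script name and parameters come from the sample's README, so treat this as illustrative.

```azurecli
# Clone the sample Node.js web API with an Azure Cosmos DB backend, then switch into it
git clone https://github.com/Azure-Samples/nodejs-appsvc-cosmosdb-bottleneck.git
cd nodejs-appsvc-cosmosdb-bottleneck

# From here, follow the deployment steps in the repository README
# (an Azure CLI / PowerShell script that creates the App Service app and Cosmos DB account)
```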
-Now that you have the application deployed and running, you can run your first load test against it.
+Now that you have the sample application deployed and running, you can create an Azure load testing resource and a load test.
+
+## Create a load test
-## Configure and create the load test
+In this tutorial, you're creating a load test with the Azure CLI by uploading a JMeter test script (`jmx` file). The sample application repository already contains a load test configuration file and JMeter test script.
-In this section, you'll create a load test by using a sample Apache JMeter test script.
+To create a load test by using the Azure portal, follow the steps in [Quickstart: create a load test with a JMeter script](./how-to-create-and-run-load-test-with-jmeter-script.md).
-The sample application's source repo includes an Apache JMeter script named *SampleApp.jmx*. This script makes three API calls to the web app on each test iteration:
+Follow these steps to create an Azure load testing resource and a load test by using the Azure CLI:
-* `add`: Carries out a data insert operation on Azure Cosmos DB for the number of visitors on the web app.
-* `get`: Carries out a GET operation from Azure Cosmos DB to retrieve the count.
-* `lasttimestamp`: Updates the time stamp since the last user went to the website.
+1. Open a terminal window and enter the following command to sign in to your Azure subscription.
-> [!NOTE]
-> The sample Apache JMeter script requires two plugins: ```Custom Thread Groups``` and ```Throughput Shaping Timer```. To open the script on your local Apache JMeter instance, you need to install both plugins. You can use the [Apache JMeter Plugins Manager](https://jmeter-plugins.org/install/Install/) to do this.
+ ```azurecli
+ az login
+ ```
-### Create the Azure load testing resource
+1. Go to the sample application directory.
-The Azure load testing resource is a top-level resource for your load-testing activities. This resource provides a centralized place to view and manage load tests, test results, and related artifacts.
+ ```azurecli
+ cd nodejs-appsvc-cosmosdb-bottleneck
+ ```
-If you already have a load testing resource, skip this section and continue to [Create a load test](#create-a-load-test).
+1. Create a resource group for the Azure load testing resource.
-If you don't yet have an Azure load testing resource, create one now:
+ Optionally, you can also reuse the resource group of the sample application you deployed previously.
+ Replace the `<load-testing-resource-group-name>` text placeholder with the name of the resource group.
-### Create a load test
+ ```azurecli
+ resourceGroup="<load-testing-resource-group-name>"
+ location="East US"
+
+ az group create --name $resourceGroup --location $location
+ ```
-Next, you create a load test in your load testing resource for the sample app. You create the load test by using an existing JMeter script in the sample app repository.
+1. Create an Azure load testing resource with the [`az load create`](/cli/azure/load) command.
-1. Go to your load testing resource, and select **Create** on the **Overview** page.
+ Replace the `<load-testing-resource-name>` text placeholder with the name of the load testing resource.
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/create-test.png" alt-text="Screenshot that shows the button for creating a new test." :::
+ ```azurecli
+ # This script requires the following Azure CLI extensions:
+ # - load
+
+ loadTestResource="<load-testing-resource-name>"
+
+ az load create --name $loadTestResource --resource-group $resourceGroup --location $location
+ ```
-1. On the **Basics** tab, enter the **Test name** and **Test description** information. Optionally, you can select the **Run test after creation** checkbox to automatically start the load test after creating it.
+1. Create a load test for simulating load against your sample application with the [`az load test create`](/cli/azure/load/test) command.
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/create-new-test-basics.png" alt-text="Screenshot that shows the Basics tab for creating a test." :::
+ Replace the `<web-app-hostname>` text placeholder with the App Service hostname of the sample application. This value is of the form `myapp.azurewebsites.net`. Don't include the `https://` part of the URL.
-1. On the **Test plan** tab, select the **JMeter script** test method, and then select the *SampleApp.jmx* test script from the cloned sample application directory. Next, select **Upload** to upload the file to Azure and configure the load test.
+ ```azurecli
+ testId="sample-app-test"
+ webappHostname="<web-app-hostname>"
+
+ az load test create --test-id $testId --load-test-resource $loadTestResource --resource-group $resourceGroup --load-test-config-file SampleApp.yaml --env webapp=$webappHostname
+ ```
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/create-new-test-test-plan.png" alt-text="Screenshot that shows the Test plan tab and how to upload an Apache JMeter script." :::
+ This command uses the `SampleApp.yaml` load test configuration file, which references the `SampleApp.jmx` JMeter test script. You use a command-line parameter to pass the sample application hostname to the load test.
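For orientation, a load test configuration file of this kind typically looks like the following sketch. The keys follow the Azure Load Testing YAML format, but the values here are illustrative assumptions rather than the exact contents of the repository's `SampleApp.yaml`.

```yaml
# Illustrative Azure Load Testing configuration (values are assumptions, not the shipped file)
version: v0.1
testId: sample-app-test
displayName: Sample app load test
testPlan: SampleApp.jmx          # JMeter script that defines the requests to send
description: Load test for the sample Node.js web API
engineInstances: 1               # Number of parallel test engine instances
failureCriteria:
  - avg(response_time_ms) > 5000 # Fail the test if average response time exceeds 5 seconds
  - percentage(error) > 20       # Fail the test if more than 20% of requests error
```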
- Optionally, you can select and upload additional Apache JMeter configuration files or other files that are referenced in the JMX file. For example, if your test script uses CSV data sets, you can upload the corresponding *.csv* file(s).
+You now have an Azure load testing resource and a load test to generate load against the sample web application in your Azure subscription.
-1. On the **Parameters** tab, add a new environment variable. Enter *webapp* for the **Name** and *`<yourappname>.azurewebsites.net`* for the **Value**. Replace the placeholder text `<yourappname>` with the name of the newly deployed sample application. Don't include the `https://` prefix.
+## Add Azure app components to monitor the application
- The Apache JMeter test script uses the environment variable to retrieve the web application URL. The script then invokes the three APIs in the web application.
+Azure Load Testing enables you to monitor resource metrics for the Azure components of your application. By analyzing these *server-side metrics*, you can identify performance and stability issues in your application directly from the Azure Load Testing dashboard.
- :::image type="content" source="media/tutorial-identify-bottlenecks-azure-portal/create-new-test-parameters.png" alt-text="Screenshot that shows the parameters tab to add environment variable.":::
+In this tutorial, you add the Azure components for the sample application you deployed on Azure, such as the App Service web app, the Azure Cosmos DB account, and more.
-1. On the **Load** tab, configure the following details. You can leave the default value for this tutorial.
+To add the Azure app components for the sample application to your load test:
- |Setting |Value |Description |
- ||||
- |**Engine instances** |**1** |The number of parallel test engines that run the Apache JMeter script. |
+1. In the [Azure portal](https://portal.azure.com), go to your Azure load testing resource.
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/create-new-test-load.png" alt-text="Screenshot that shows the Load tab for creating a test." :::
+1. On the left pane, select **Tests** to view the list of load tests.
-1. On the **Monitoring** tab, specify the application components that you want to monitor with the resource metrics. Select **Add/modify** to manage the list of application components.
+1. Select the checkbox next to your load test, and then select **Edit**.
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/create-new-test-monitoring.png" alt-text="Screenshot that shows the Monitoring tab for creating a test." :::
+ :::image type="content" source="media/tutorial-identify-bottlenecks-azure-portal/edit-load-test.png" alt-text="Screenshot that shows the list of load tests in the Azure portal, highlighting how to select a test from the list and the Edit button to modify the load test configuration." lightbox="media/tutorial-identify-bottlenecks-azure-portal/edit-load-test.png":::
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/create-new-test-add-resource.png" alt-text="Screenshot that shows how to add Azure resources to monitor during the load test." :::
+1. Go to the **Monitoring** tab, and then select **Add/Modify**.
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/create-new-test-added-resources.png" alt-text="Screenshot that shows the Monitoring tab with the list of Azure resources to monitor." :::
+1. Select the checkboxes for the sample application you deployed previously, and then select **Apply**.
-1. Select **Review + create**, review all settings, and select **Create**.
+ :::image type="content" source="media/tutorial-identify-bottlenecks-azure-portal/configure-load-test-select-app-components.png" alt-text="Screenshot that shows how to add app components to a load test in the Azure portal." lightbox="media/tutorial-identify-bottlenecks-azure-portal/configure-load-test-select-app-components.png":::
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/create-new-test-review.png" alt-text="Screenshot that shows the tab for reviewing and creating a test." :::
+ > [!TIP]
+ > You can use the resource group filter to only view the Azure resources in the sample application resource group.
-> [!NOTE]
-> You can update the test configuration at any time, for example to upload a different JMX file. Choose your test in the list of tests, and then select **Edit**.
+1. Select **Apply** to save the changes to the load test configuration.
-## Run the load test in the Azure portal
+You successfully added the Azure app components for the sample application to your load test to enable monitoring server-side metrics while the load test is running.
-In this section, you'll use the Azure portal to manually start the load test that you created previously. If you checked the **Run test after creation** checkbox, the test will already be running.
+## Run the load test
-1. Select **Tests** to view the list of tests, and then select the test that you created.
+You can now run the load test to simulate load against the sample application you deployed in your Azure subscription. In this tutorial, you run the load test from within the Azure portal. Alternatively, you can [configure your CI/CD workflow to run your load test](./quickstart-add-load-test-cicd.md).
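If you prefer to stay in the terminal, the same `load` CLI extension used earlier can also start a test run. The following is a sketch; the parameter names are assumptions to confirm with `az load test-run create --help` for your extension version.

```azurecli
# Start a new run of the load test created earlier (parameter names assumed; verify with --help)
az load test-run create \
  --load-test-resource $loadTestResource \
  --resource-group $resourceGroup \
  --test-id $testId \
  --test-run-id "run_$(date +%s)"
```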
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/test-list.png" alt-text="Screenshot that shows the list of tests." :::
+To run your load test in the Azure portal:
- >[!TIP]
- > You can use the search box and the **Time range** filter to limit the number of tests.
+1. In the [Azure portal](https://portal.azure.com), go to your Azure load testing resource.
-1. On the test details page, select **Run** or **Run test**. Then, select **Run** on the **Run test** confirmation pane to start the load test.
+1. On the left pane, select **Tests** to view the list of load tests.
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/test-runs-run.png" alt-text="Screenshot that shows selections for running a test." :::
+1. Select the load test from the list to view the test details and list of test runs.
- Azure Load Testing begins to monitor and display the application's server metrics on the dashboard.
+1. Select **Run**, and then **Run** again to start the load test.
- You can see the streaming client-side metrics while the test is running. By default, the results refresh automatically every five seconds.
+ Optionally, you can enter a test run description.
- :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/aggregated-by-percentile.png" alt-text="Screenshot that shows the dashboard with test results.":::
+ :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/run-load-test-first-run.png" alt-text="Screenshot that shows how to start a load test in the Azure portal." lightbox="./media/tutorial-identify-bottlenecks-azure-portal/run-load-test-first-run.png":::
+
+ When you run a load test, Azure Load Testing deploys the JMeter test script and any extra files to the test engine instance(s), and then starts the load test.
+
+1. When the load test starts, you should see the load test dashboard.
+
+ If the dashboard doesn't appear, select **Refresh**, and then select the test run from the list.
+
+ The load test dashboard presents the test run details, such as the client-side metrics and server-side application metrics. The graphs on the dashboard refresh automatically.
+
+ :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/load-test-dashboard-client-metrics.png" alt-text="Screenshot that shows the client-side metrics graphs in the load test dashboard in the Azure portal." lightbox="./media/tutorial-identify-bottlenecks-azure-portal/load-test-dashboard-client-metrics.png":::
You can apply multiple filters or aggregate the results to different percentiles to customize the charts.
In this section, you'll use the Azure portal to manually start the load test tha
Wait until the load test finishes fully before you proceed to the next section.
-## Identify performance bottlenecks
+## Use server-side metrics to identify performance bottlenecks
-In this section, you'll analyze the results of the load test to identify performance bottlenecks in the application. Examine both the client-side and server-side metrics to determine the root cause of the problem.
+In this section, you analyze the results of the load test to identify performance bottlenecks in the application. Examine both the client-side and server-side metrics to determine the root cause of the problem.
-1. First, look at the client-side metrics. You'll notice that the 90th percentile for the **Response time** metric for the `add` and `get` API requests is higher than it is for the `lasttimestamp` API.
+1. First, look at the client-side metrics. You notice that the 90th percentile for the **Response time** metric for the `add` and `get` API requests is higher than it is for the `lasttimestamp` API.
:::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/client-side-metrics.png" alt-text="Screenshot that shows the client-side metrics.":::
In this section, you'll analyze the results of the load test to identify perform
:::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/cosmos-db-metrics.png" alt-text="Screenshot that shows Azure Cosmos DB metrics.":::
- Notice that the **Normalized RU Consumption** metric shows that the database was quickly running at 100% resource utilization. The high resource usage might have caused database throttling errors. It also might have increased response times for the `add` and `get` web APIs.
+ Notice that the **Normalized RU Consumption** metric shows that the database was quickly running at 100% resource utilization. The high resource usage might cause database throttling errors. It also might increase response times for the `add` and `get` web APIs.
You can also see that the **Provisioned Throughput** metric for the Azure Cosmos DB instance has a maximum throughput of 400 RUs. Increasing the provisioned throughput of the database might resolve the performance problem. ## Increase the database throughput
-In this section, you'll allocate more resources to the database, to resolve the performance bottleneck.
+In this section, you allocate more resources to the database to resolve the performance bottleneck.
For Azure Cosmos DB, increase the database RU scale setting:
For Azure Cosmos DB, increase the database RU scale setting:
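If you'd rather script this change than use the portal, the Azure CLI can update the provisioned throughput on the database. The account name, database name, and RU value below are placeholders for illustration; substitute the names that the sample deployment created.

```azurecli
# Raise the provisioned throughput (RU/s) on the sample database (names and value are illustrative)
az cosmosdb sql database throughput update \
  --account-name <cosmos-account-name> \
  --resource-group <sample-app-resource-group> \
  --name <database-name> \
  --throughput 1200
```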
## Validate the performance improvements
-Now that you've increased the database throughput, rerun the load test and verify that the performance results have improved:
+Now that you increased the database throughput, rerun the load test and verify that the performance results improved:
1. On the test run dashboard, select **Rerun**, and then select **Rerun** on the **Rerun test** pane. :::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/rerun-test.png" alt-text="Screenshot that shows selections for running the load test.":::
- You'll see a new test run entry with a status column that cycles through the **Provisioning**, **Executing**, and **Done** states. At any time, select the test run to monitor how the load test is progressing.
+ You can see a new test run entry with a status column that cycles through the **Provisioning**, **Executing**, and **Done** states. At any time, select the test run to monitor how the load test is progressing.
1. After the load test finishes, check the **Response time** results and the **Errors** results of the client-side metrics.
-1. Check the server-side metrics for Azure Cosmos DB and ensure that the performance has improved.
+1. Check the server-side metrics for Azure Cosmos DB and ensure that the performance improved.
:::image type="content" source="./media/tutorial-identify-bottlenecks-azure-portal/cosmos-db-metrics-post-run.png" alt-text="Screenshot that shows the Azure Cosmos DB client-side metrics after update of the scale settings."::: The Azure Cosmos DB **Normalized RU Consumption** value is now well below 100%.
-Now that you've changed the scale settings of the database, you see that:
+Now that you updated the scale settings of the database, you can see that:
-* The response time for the `add` and `get` APIs has improved.
+* The response time for the `add` and `get` APIs improved.
* The normalized RU consumption remains well under the limit.
-As a result, the overall performance of your application has improved.
+As a result, the overall performance of your application improved.
## Clean up resources [!INCLUDE [alt-delete-resource-group](../../includes/alt-delete-resource-group.md)]
-## Next steps
-
-Advance to the next tutorial to learn how to set up an automated regression testing workflow by using Azure Pipelines or GitHub Actions.
+## Related content
-> [!div class="nextstepaction"]
-> [Set up automated regression testing](./tutorial-identify-performance-regression-with-cicd.md)
+- Get more details about how to [diagnose failing tests](./how-to-diagnose-failing-load-test.md)
+- [Monitor server-side metrics](./how-to-monitor-server-side-metrics.md) to identify performance bottlenecks in your application
+- [Define load test fail criteria](./how-to-define-test-criteria.md) to validate test results against your service requirements
+- Learn more about the [key concepts for Azure Load Testing](./concept-load-testing-concepts.md).
load-testing Tutorial Identify Performance Regression With Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-performance-regression-with-cicd.md
- Title: 'Tutorial: Automate regression tests with CI/CD'-
-description: 'In this tutorial, you learn how to automate regression testing by using Azure Load Testing and CI/CD workflows. Quickly identify performance degradation for applications under high load.'
---- Previously updated : 09/19/2023-
-#Customer intent: As an Azure user, I want to learn how to automatically test builds for performance regressions on every merge request and/or deployment by using Azure Pipelines.
--
-# Tutorial: Identify performance regressions by automating load tests with CI/CD
-
-This tutorial describes how to identify performance regressions by using Azure Load Testing and CI/CD tools. Set up a CI/CD workflow in Azure Pipelines to automatically run a load test for your application. Use test fail criteria to get alerted about application changes that affect performance or stability.
-
-With regression testing, you want to validate that code changes don't affect the application functionality, performance, and stability. Azure Load Testing enables you to verify that your application continues to meet your performance and stability requirements when put under real-world user load. Test fail criteria give you a point-in-time check about how the application performs.
-
-In this tutorial, you use a sample Node.js application and JMeter script. The tutorial doesn't require any coding or Apache JMeter skills.
-
-You'll learn how to:
-
-> [!div class="checklist"]
-> * Deploy the sample application on Azure.
-> * Create a load test by using a JMeter script.
-> * Set up a CI/CD workflow from the Azure portal.
-> * View the load test results in the CI/CD dashboard.
-> * Define load test fail criteria to identify performance regressions.
-
-> [!NOTE]
-> Azure Pipelines has a 60-minute timeout on jobs that are running on Microsoft-hosted agents for private projects. If your load test is running for more than 60 minutes, you'll need to pay for [additional capacity](/azure/devops/pipelines/agents/hosted?tabs=yaml#capabilities-and-limitations). If not, the pipeline will time out without waiting for the test results. You can view the status of the load test in the Azure portal.
-
-## Prerequisites
-
-* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An Azure DevOps organization and project. If you don't have an Azure DevOps organization, you can [create one for free](/azure/devops/pipelines/get-started/pipelines-sign-up?view=azure-devops&preserve-view=true).
-
-## Deploy the sample application
-
-To get started with this tutorial, you first need to set up a sample Node.js web application.
--
-Now that you have the application deployed and running, you can create a URL-based load test against it.
-
-## Create a load test
-
-Before you set up the CI/CD workflow in Azure Pipelines, you create an Azure load testing resource and create load test by uploading a JMeter test script in the Azure portal. The JMeter script tests three endpoints in the sample application: `lasttimestamp`, `add`, and `get`.
-
-After you create the load test, you can then set up the CI/CD workflow from the Azure portal.
-
-### Create the Azure load testing resource
-
-The Azure load testing resource is a top-level resource for your load-testing activities. This resource provides a centralized place to view and manage load tests, test results, and related artifacts.
-
-If you already have a load testing resource, skip this section and continue to [Create a load test by uploading a JMeter script](#create-a-load-test-by-uploading-a-jmeter-script).
-
-If you don't yet have an Azure load testing resource, create one now:
--
-### Create a load test by uploading a JMeter script
-
-You can create a load test by uploading an Apache JMeter test script. The test script defines the test plan, and describes the application requests to invoke and any custom logic for the load test. Azure Load Testing abstracts the infrastructure for running the test script at scale.
-
-To create a load test by uploading a JMeter script in the Azure portal:
-
-1. Sign in to the [Azure portal](https://portal.azure.com) by using the credentials for your Azure subscription.
-
-1. Go to your Azure Load Testing resource, select **Tests** from the left pane, select **+ Create**, and then select **Upload a JMeter script**.
-
- :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/create-new-test.png" alt-text="Screenshot that shows the Azure Load Testing page and the button for creating a new test." :::
-
-1. On the **Basics** tab, enter the **Test name** and **Test description** information.
-
- :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/create-new-test-basics.png" alt-text="Screenshot that shows the Basics tab for creating a test." :::
-
-1. On the **Test plan** tab, select the sample application JMeter script, and then select **Upload** to upload the file to Azure.
-
- You can find the JMeter script `SampleApp.jmx` in the repository you cloned earlier.
-
- :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/create-new-test-test-plan.png" alt-text="Screenshot that shows the Test plan tab." :::
-
-1. On the **Parameters** tab, add an environment variable for the sample application endpoint:
-
- The test script uses an environment variable to retrieve the endpoint of the sample application.
-
- | Field | Value |
- |-|-|
- | **Name** | *webapp* |
- | **Value** | Hostname of the deployed sample application, without `https://` prefix.
-
- :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/create-new-test-parameters.png" alt-text="Screenshot that shows the Parameters plan tab, highlighting the environment variable for the sample app hostname." :::
-
-1. Select **Review + Create**, review the values, and then select **Create** to create and run the load test.
-
- > [!NOTE]
- > After creating the load test, it might take a few minutes for the load test to finish running.
-
-## Set up the CI/CD workflow from the Azure portal
-
-Now that you have load testing resource and a load test for the sample application, you can set up a new CI workflow to automatically run your load test. Azure Load Testing enables you to set up a new CI workflow in Azure Pipelines from the Azure portal.
-
-### Create the CI/CD workflow
-
-1. In the [Azure portal](https://portal.azure.com/), go to your Azure load testing resource.
-
-1. On the left pane, select **Tests** to view the list of tests.
-
-1. Select the test you created previously by selecting the checkbox, and then select **Set up CI/CD**.
-
- :::image type="content" source="media/tutorial-identify-performance-regression-with-cicd/list-of-tests.png" alt-text="Screenshot that shows the list of tests in Azure portal." lightbox="media/tutorial-identify-performance-regression-with-cicd/list-of-tests.png":::
-
-1. Enter the following details for creating a CI/CD pipeline definition:
-
- |Setting|Value|
- |-|-|
- | **Organization** | Select the Azure DevOps organization where you want to run the pipeline from. |
- | **Project** | Select the project from the organization selected previously. |
- | **Repository** | Select the source code repository to store and run the Azure pipeline from. |
- | **Branch** | Select the branch in the selected repository. |
- | **Repository branch folder** | (Optional) Enter the repository branch folder name in which you'd like to commit. If empty, the root folder is used. |
- | **Override existing files** | Check this setting. |
- | **Service connection** | Select *Create new* to create a new service connection to allow Azure Pipelines to connect to the load testing resource. |
-
- :::image type="content" source="media/tutorial-identify-performance-regression-with-cicd/set-up-cicd-pipeline.png" alt-text="Screenshot that shows the settings to be configured to set up a CI/CD pipeline." lightbox="media/tutorial-identify-performance-regression-with-cicd/set-up-cicd-pipeline.png":::
-
- > [!IMPORTANT]
- > If you're getting an error creating a PAT token, or you don't see any repositories, make sure to [connect your Azure DevOps organization to Microsoft Entra ID](/azure/devops/organizations/accounts/connect-organization-to-azure-ad). Make sure the directory in Azure DevOps matches the directory you're using for Azure Load Testing. After connecting to Microsoft Entra ID, close and reopen your browser window.
-
-1. Select **Create Pipeline** to start creating the pipeline definition.
-
- Azure Load Testing performs the following actions to configure the pipeline:
-
- - Create a new service connection of type [Azure Resource Manager](/azure/devops/pipelines/library/service-endpoints#azure-resource-manager-service-connection) in the Azure DevOps project. The service principal is automatically assigned the *Load Test Contributor* role on the Azure load testing resource.
-
- - Commit the JMeter script and test configuration YAML to the source code repository.
-
- - Create a pipeline definition that invokes the Azure load testing resource and runs the load test.
-
-1. When the pipeline creation finishes, you receive a notification in the Azure portal with a link to the pipeline.
-
-### Run the CI/CD workflow
-
-You can now manually trigger the CI/CD workflow to validate that the load test is run correctly.
-
-1. Sign in to your Azure DevOps organization (`https://dev.azure.com/<your-organization>`), and select your project.
-
- Replace the `<your-organization>` text placeholder with your project identifier.
-
-1. Select **Pipelines** in the left navigation
-
- Notice that there's a new pipeline created in your project.
-
- :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/azure-pipelines-pipelines-list.png" alt-text="Screenshot that shows the Azure Pipelines page, showing the pipeline that Azure Load Testing generated." lightbox="./media/tutorial-identify-performance-regression-with-cicd/azure-pipelines-pipelines-list.png":::
-
-1. Select the pipeline, select **Run pipeline**, and then select **Run** to start the CI workflow.
-
- The first time you run the pipeline, you need to grant the pipeline permission to access the service connection and connect to Azure. Until you grant permission, the CI workflow run remains in the waiting state.
-
- :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/azure-pipelines-run-pipeline.png" alt-text="Screenshot that shows the Azure Pipelines 'Run pipeline' page." lightbox="./media/tutorial-identify-performance-regression-with-cicd/azure-pipelines-run-pipeline.png":::
-
-1. Select the **Load Test** job to view the job details.
-
- An alert message is shown that the pipeline needs permission to access a resource.
-
- :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/azure-pipelines-pending-permissions.png" alt-text="Screenshot that shows the Azure Pipelines run details page, showing a warning that the pipeline needs additional permissions." lightbox="./media/tutorial-identify-performance-regression-with-cicd/azure-pipelines-pending-permissions.png":::
-
-1. Select **View** > **Permit** > **Permit** to grant the permission.
-
- The CI/CD pipeline run now starts and runs your load test.
-
-You've now configured and run an Azure Pipelines workflow that automatically runs a load test each time a source code update is made.
-
-## View load test results
-
-While the CI pipeline is running, you can view the load test statistics directly in the Azure Pipelines log. The CI/CD log displays the following load test statistics: response time metrics, requests per second, total number of requests, number of errors, and error rate. Alternately, you can navigate directly to the load test dashboard in the Azure portal by selecting the URL in the pipeline log.
--
-You can also download the load test results file, which is available as a pipeline artifact. In the pipeline log view, select **Load Test**, and then select **1 artifact produced** to download the result files for the load test.
--
-## Add test fail criteria
-
-To identify performance regressions, you can analyze the test metrics for each pipeline run logs. Ideally, you want the pipeline run to fail whenever your performance or stability requirements aren't met.
-
-Azure Load Testing enables you to define load test fail criteria based on client-side metrics, such as the response time or error rate. When at least one of the fail criteria isn't met, the status of the CI pipeline is set to failed accordingly. With test fail criteria, you can now quickly identify if a specific application build results in a performance regression.
-
-To define test fail criteria for the average response time and the error rate:
-
-1. In your Azure DevOps project, select **Repos** > **Files**.
-
-1. Select the `alt-config-<unique_id>.yml` file, and then select **Edit**.
-
- This YAML file specifies the load test configuration settings, such as the reference to the JMeter test script, the list of fail criteria, references to input data files, and more.
-
-1. Replace the `failureCriteria:` with the following snippet to define two test fail criteria:
-
- ```yaml
- failureCriteria:
- - avg(response_time_ms) > 100
- - percentage(error) > 20
- ```
-
- :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/azure-pipelines-update-load-test-config.png" alt-text="Screenshot that shows how to update the load test configuration file with test criteria in Azure Pipelines." lightbox="./media/tutorial-identify-performance-regression-with-cicd/azure-pipelines-update-load-test-config.png":::
-
- You've now specified fail criteria for your load test based on the average response time and the error rate. The test fails if at least one of these conditions is met:
-
- - The aggregate average response time is greater than 100 ms.
- - The aggregate percentage of errors is greater than 20%.
-
-1. Select **Commit** to save the updates.
-
- Updating the file will trigger the CI/CD workflow.
-
-1. After the test finishes, notice that the CI/CD pipeline run has failed.
-
- In the CI/CD output log, you find that the test failed because one of the fail criteria was met. The load test average response time was higher than the value that you specified in the fail criteria.
-
- :::image type="content" source="./media/tutorial-identify-performance-regression-with-cicd/test-criteria-failed.png" alt-text="Screenshot that shows pipeline logs after failed test criteria.":::
-
- The Azure Load Testing service evaluates the criteria during the test run. If any of these conditions fails, Azure Load Testing service returns a nonzero exit code. This code informs the CI/CD workflow that the test has failed.
-
-1. Edit the `alt-config-<unique_id>.yml` file and change the test's fail criteria to increase the criterion for average response time:
-
- ```yaml
- failureCriteria:
-     - avg(response_time_ms) > 5000
-     - percentage(error) > 20
- ```
-
-1. Commit the changes to trigger the CI/CD workflow again.
-
- After the test finishes, you notice that the load test and the CI/CD workflow run complete successfully.
-
-## Clean up resources
--
-## Related content
-
-In this tutorial, you've set up a new CI/CD workflow in Azure Pipelines to automatically run a load test with every code change. By using test fail criteria, you can identify when a performance regression was introduced in the application.
-
-* [Manually configure load testing in CI/CD](./how-to-configure-load-test-cicd.md) if you're using GitHub Actions, or want to use an existing workflow.
-* [Identify performance degradation over time by using metrics trends](./how-to-compare-multiple-test-runs.md#view-metrics-trends-across-test-runs).
-* [Monitor server-side application metrics](./how-to-monitor-server-side-metrics.md) to identify performance bottlenecks.
-* Learn more about [test fail criteria](./how-to-define-test-criteria.md).
logic-apps Logic Apps Gateway Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-gateway-connection.md
ms.suite: integration Previously updated : 10/19/2022 Last updated : 12/01/2023 #Customer intent: As a logic apps developer, I want to create a data gateway resource in the Azure portal so that my logic app workflow can connect to on-premises data sources.
Last updated 10/19/2022
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-In Azure Logic Apps, you can use some connectors to access on-premises data sources from your logic app workflows. However, before you can do so, you need to install the on-premises data gateway on a local computer. You also need to create a gateway resource in Azure for your gateway installation. You can then select this gateway resource when you use triggers and actions from connectors that can access on-premises data sources.
+Sometimes your workflow must connect to an on-premises data source and can use only connectors that provide this access through an on-premises data gateway. To set up this on-premises data gateway, you have to complete the following tasks: install the local on-premises data gateway and create an on-premises data gateway resource in Azure for the local data gateway. When you add a trigger or action to your workflow from a connector that requires the data gateway, you can select the data gateway resource to use with your connection.
> [!TIP]
-> To directly access on-premises resources in Azure virtual networks without having to use a gateway,
-> consider creating an [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md)
-> or a [Standard logic app workflow](create-single-tenant-workflows-azure-portal.md), which provides
-> some built-in connectors that don't need the gateway to access on-premises data sources.
+>
+> To directly access on-premises resources in Azure virtual networks without having to use the data gateway,
+> consider creating a [Standard logic app workflow](create-single-tenant-workflows-azure-portal.md),
+> rather than a Consumption logic app workflow. In a Standard workflow, built-in connectors don't
+> require a data gateway to access on-premises data sources.
-This how-to guide shows how to create your Azure gateway resource after you [install the on-premises gateway on your local computer](logic-apps-gateway-install.md).
+This guide shows how to create the Azure data gateway resource after you [install the on-premises gateway on your local computer](logic-apps-gateway-install.md).
For more information, see the following documentation:
For information about how to use a gateway with other services, see the followin
## Supported data sources
-In Azure Logic Apps, an on-premises data gateway supports [on-premises connectors](../connectors/managed.md#on-premises-connectors) for the following data sources:
+In Azure Logic Apps, the on-premises data gateway supports [on-premises connectors](../connectors/managed.md#on-premises-connectors) for the following data sources:
* [Apache Impala](/connectors/impala) * [BizTalk Server](/connectors/biztalk)
Azure Logic Apps supports read and write operations through the data gateway, bu
## Prerequisites
-* You already [installed an on-premises data gateway on a local computer](logic-apps-gateway-install.md). This gateway installation must exist before you can create a gateway resource that links to this installation. You can install only one data gateway per local computer.
+* You already [installed an on-premises data gateway on a local computer](logic-apps-gateway-install.md). This data gateway installation must exist before you can create a data gateway resource that links to this installation. You can install only one data gateway per local computer.
* You have the [same Azure account and subscription](logic-apps-gateway-install.md#requirements) that you used for your gateway installation. This Azure account must belong only to a single [Microsoft Entra tenant or directory](../active-directory/fundamentals/active-directory-whatis.md#terminology). You have to use the same Azure account and subscription to create your gateway resource in Azure because only the gateway administrator can create the gateway resource in Azure. Service principals currently aren't supported.
- * When you create a gateway resource in Azure, you select a gateway installation to link with your gateway resource and only that gateway resource. Each gateway resource can link to only one gateway installation. You can't select a gateway installation that's already associated with another gateway resource.
+ * When you create a data gateway resource in Azure, you select a data gateway installation to link with your gateway resource and only that gateway resource. Each gateway resource can link to only one gateway installation. You can't select a gateway installation that's already associated with another gateway resource.
* Your logic app resource and gateway resource don't have to exist in the same Azure subscription. In triggers and actions where you use the gateway resource, you can select a different Azure subscription that has a gateway resource, but only if that subscription exists in the same Microsoft Entra tenant or directory as your logic app resource. You also have to have administrator permissions on the gateway, which another administrator can set up for you. For more information, see [Data Gateway: Automation using PowerShell - Part 1](https://community.powerbi.com/t5/Community-Blog/Data-Gateway-Automation-using-PowerShell-Part-1/ba-p/1117330) and [PowerShell: Data Gateway - Add-DataGatewayClusterUser](/powershell/module/datagateway/add-datagatewayclusteruser). > [!NOTE]
- > Currently, you can't share a gateway resource or installation across multiple subscriptions.
+ > Currently, you can't share a data gateway resource or installation across multiple subscriptions.
> To submit product feedback, see [Microsoft Azure Feedback Forum](https://feedback.azure.com/d365community/forum/79b1327d-d925-ec11-b6e6-000d3a4f06a4). <a name="create-gateway-resource"></a> ## Create Azure gateway resource
-After you install a gateway on a local computer, create the Azure resource for your gateway.
+After you install the data gateway on a local computer, create the Azure resource for your data gateway.
1. Sign in to the [Azure portal](https://portal.azure.com) with the same Azure account that you used to install the gateway. 1. In the Azure portal search box, enter **on-premises data gateway**, and then select **On-premises data gateways**.
- :::image type="content" source="./media/logic-apps-gateway-connection/search-for-on-premises-data-gateway.png" alt-text="Screenshot of the Azure portal. In the search box, 'on-premises data gateway' is selected. In the results, 'On-premises data gateways' is selected.":::
+ :::image type="content" source="./media/logic-apps-gateway-connection/search-for-on-premises-data-gateway.png" alt-text="Screenshot shows Azure portal search box with the words, on-premises data gateway. The results list shows the selected option, On-premises data gateways.":::
1. Under **On-premises data gateways**, select **Create**.
- :::image type="content" source="./media/logic-apps-gateway-connection/add-azure-data-gateway-resource.png" alt-text="Screenshot of the Azure portal. On the 'On-premises data gateways page,' the 'Create' button is selected.":::
+ :::image type="content" source="./media/logic-apps-gateway-connection/add-azure-data-gateway-resource.png" alt-text="Screenshot shows the page for On-premises data gateways with the selected option for Create.":::
1. Under **Create a gateway**, provide the following information for your gateway resource. When you're done, select **Review + create**.
After you install a gateway on a local computer, create the Azure resource for y
The following example shows a gateway installation that's in the same region as your gateway resource and is linked to the same Azure account:
- :::image type="content" source="./media/logic-apps-gateway-connection/on-premises-data-gateway-create-connection.png" alt-text="Screenshot of the Azure portal 'Create a gateway' page. The 'Name,' 'Region,' and other boxes have values. The 'Review + create' button is selected.":::
+ :::image type="content" source="./media/logic-apps-gateway-connection/on-premises-data-gateway-create-connection.png" alt-text="Screenshot shows the page for Create a gateway. The Name, Region, and other boxes contain values. The button, Review + create, appears selected.":::
-1. On the validation page that appears, confirm all the information that you provided, and then select **Create**.
+1. On the validation page that appears, confirm all the information that you provided, and select **Create**.
<a name="connect-logic-app-gateway"></a>
After you create your gateway resource and associate your Azure subscription wit
1. In the Azure portal, create or open your logic app workflow in the designer.
-1. Add a trigger or action from a connector that supports on-premises connections through the gateway.
+1. Add a trigger or action from a connector that supports on-premises connections through the data gateway.
> [!NOTE] >
To update the settings for a gateway connection, you can edit your connection. T
To find all API connections associated with your Azure subscription, use one of the following options:
-* In the Azure search box, enter **api connections**, and then select **API Connections**.
+* In the Azure portal search box, enter **api connections**, and select **API Connections**.
* From the Azure portal menu, select **All resources**. Set the **Type** filter to **API Connection**. <a name="change-delete-gateway-resource"></a>
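As a scripted alternative to the portal options above, you can list the same API connection resources with the Azure CLI; the `--query` expression is just one way to shape the output.

```azurecli
# List API connection resources (type Microsoft.Web/connections) in the current subscription
az resource list \
  --resource-type "Microsoft.Web/connections" \
  --query "[].{name:name, resourceGroup:resourceGroup, location:location}" \
  --output table
```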
To create a different gateway resource, link your gateway installation to a diff
1. On the gateway resource toolbar, select **Delete**.
- :::image type="content" source="./media/logic-apps-gateway-connection/delete-on-premises-data-gateway.png" alt-text="Screenshot of an on-premises data gateway resource in the Azure portal. On the toolbar, 'Delete' is highlighted.":::
+ :::image type="content" source="./media/logic-apps-gateway-connection/delete-on-premises-data-gateway.png" alt-text="Screenshot shows on-premises data gateway resource in the Azure portal. On the toolbar, Delete is selected.":::
<a name="faq"></a>
machine-learning Concept Causal Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-causal-inference.md
description: Make data-driven decisions and policies with the Responsible AI dashboard's integration of the causal analysis tool EconML. -+
machine-learning Concept Counterfactual Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-counterfactual-analysis.md
description: Generate diverse counterfactual examples with feature perturbations to see minimal changes required to achieve desired prediction with the Responsible AI dashboard's integration of DiCE machine learning. -+
machine-learning Concept Data Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-analysis.md
description: Perform exploratory data analysis to understand feature biases and imbalances by using the Responsible AI dashboard's data analysis. -+
machine-learning Concept Error Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-error-analysis.md
description: Assess model error distributions in different cohorts of your dataset with the Responsible AI dashboard's integration of error analysis. -+
machine-learning Concept Fairness Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-fairness-ml.md
description: Learn about machine learning fairness and how the Fairlearn Python package can help you assess and mitigate unfairness. -+
machine-learning Concept Foundation Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-foundation-models.md
description: Learn about machine learning foundation models and how to use them at scale in Azure. +
machine-learning Concept Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-dashboard.md
description: Learn how to use the comprehensive UI and SDK/YAML components in the Responsible AI dashboard to debug your machine learning models and make data-driven decisions. -+
machine-learning Concept Responsible Ai Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-scorecard.md
description: Learn about how to use the Responsible AI scorecard to share responsible AI insights from your machine learning models and make data-driven decisions with non-technical and technical stakeholders. -+
machine-learning Concept Responsible Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai.md
description: Learn what Responsible AI is and how to use it with Azure Machine Learning to understand models, protect data, and control the model lifecycle. -+
machine-learning Concept Sourcing Human Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-sourcing-human-data.md
description: Learn best practices for mitigating potential harm to people, espe
+ Last updated 11/04/2022
machine-learning How To Deploy Models From Huggingface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-from-huggingface.md
description: Deploy and score transformers-based large language models from the Hugging Face hub. +
Follow this link to find [hugging face model example code](https://github.com/Az
HuggingFace hub has thousands of models, with hundreds being updated each day. Only the most popular models in the collection are tested; others may fail with one of the following errors. ### Gated models
-[Gated models](https://huggingface.co/docs/hub/models-gated) require users to agree to share their contact information and accept the model owners’ terms and conditions in order to access the model. Attempting to deploy such models will fail with a `KeyError`.
+[Gated models](https://huggingface.co/docs/hub/models-gated) require users to agree to share their contact information and accept the model owners' terms and conditions in order to access the model. Attempting to deploy such models will fail with a `KeyError`.
### Models that need to run remote code Models typically use code from the transformers SDK but some models run code from the model repo. Such models need to set the parameter `trust_remote_code` to `True`. Follow this link to learn more about using [remote code](https://huggingface.co/docs/transformers/custom_models#using-a-model-with-custom-code). For security reasons, such models aren't supported. Attempting to deploy such models will fail with the following error: `ValueError: Loading <model> requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.`
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md
description: Learn how your machine learning model makes predictions during training and inferencing by using the Azure Machine Learning CLI and Python SDK. -+
machine-learning How To Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard.md
description: Learn how to use the various tools and visualization charts in the Responsible AI dashboard in Azure Machine Learning. -+
You can also find this information on the Responsible AI dashboard page by selec
### Enable full functionality of the Responsible AI dashboard
-1. Select a running compute instance in the **Compute** dropdown list at the top of the dashboard. If you don’t have a running compute, create a new compute instance by selecting the plus sign (**+**) next to the dropdown. Or you can select the **Start compute** button to start a stopped compute instance. Creating or starting a compute instance might take few minutes.
+1. Select a running compute instance in the **Compute** dropdown list at the top of the dashboard. If you don't have a running compute, create a new compute instance by selecting the plus sign (**+**) next to the dropdown. Or you can select the **Start compute** button to start a stopped compute instance. Creating or starting a compute instance might take a few minutes.
:::image type="content" source="./media/how-to-responsible-ai-dashboard/select-compute.png" alt-text="Screenshot of the 'Compute' dropdown box for selecting a running compute instance." lightbox = "./media/how-to-responsible-ai-dashboard/select-compute.png":::
You can name your new dataset cohort, select **Add filter** to add each filter y
:::image type="content" source="./media/how-to-responsible-ai-dashboard/view-dashboard-new-cohort.png" alt-text="Screenshot of making a new cohort in the dashboard." lightbox= "./media/how-to-responsible-ai-dashboard/view-dashboard-new-cohort.png":::
-Select **Dashboard configuration** to open a panel with a list of the components you’ve configured on your dashboard. You can hide components on your dashboard by selecting the **Trash** icon, as shown in the following image:
+Select **Dashboard configuration** to open a panel with a list of the components you've configured on your dashboard. You can hide components on your dashboard by selecting the **Trash** icon, as shown in the following image:
:::image type="content" source="./media/how-to-responsible-ai-dashboard/dashboard-configuration.png" alt-text="Screenshot showing the dashboard configuration." lightbox="./media/how-to-responsible-ai-dashboard/dashboard-configuration.png":::
The **Chart view** panel shows you aggregate and individual plots of datapoints.
### Feature importances (model explanations)
-By using the model explanation component, you can see which features were most important in your model’s predictions. You can view what features affected your model’s prediction overall on the **Aggregate feature importance** pane or view feature importances for individual data points on the **Individual feature importance** pane.
+By using the model explanation component, you can see which features were most important in your model's predictions. You can view what features affected your model's prediction overall on the **Aggregate feature importance** pane or view feature importances for individual data points on the **Individual feature importance** pane.
#### Aggregate feature importances (global explanations)
machine-learning How To Responsible Ai Image Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-image-dashboard.md
description: Learn how to use the various tools and visualization charts in the Responsible AI image dashboard in Azure Machine Learning. -+
For image classification and multiclassification, incorrect predictions refer to
For object detection, incorrect predictions refer to images where: - At least one object was incorrectly labeled-- Incorrectly detecting an object class when a ground truth object doesn’t exist
+- Incorrectly detecting an object class when a ground truth object doesn't exist
- Failing to detect an object class when a ground truth object exists > [!NOTE]
The Class view pane breaks down your model predictions by class label. You can i
- **Select label type**: Choose to view images by the predicted or ground truth label. - **Select labels to display**: View image instances containing your selection of one or more class labels.-- **View images per class label**: Identify successful and error image instances per selected class label(s), and the distribution of each class label in your dataset. If a class label has “10/120 examples”, out of 120 total images in the dataset, 10 images belong to that class label.
+- **View images per class label**: Identify successful and error image instances per selected class label(s), and the distribution of each class label in your dataset. If a class label has "10/120 examples", out of 120 total images in the dataset, 10 images belong to that class label.
Class view for multiclass classification:
Class view for object detection:
For AutoML image classification models, four kinds of explainability methods are supported, namely [Guided backprop](https://arxiv.org/abs/1412.6806), [Guided gradCAM](https://arxiv.org/abs/1610.02391v4), [Integrated Gradients](https://arxiv.org/abs/1703.01365) and [XRAI](https://arxiv.org/abs/1906.02825). To learn more about the four explainability methods, see [Generate explanations for predictions](how-to-auto-train-image-models.md#generate-explanations-for-predictions). > [!NOTE]
-> - **These four methods are specific to AutoML image classification only** and will not work with other task types such as object detection, instance segmentation etc. Non-AutoML image classification models can leverage SHAP vision for model interpretability.
->- **The explanations are only generated for the predicted class**. For multilabel classification, a threshold on confidence score is required, to select the classes for which the explanations are generated. See the [parameter list](how-to-responsible-ai-vision-insights.md#responsible-ai-vision-insights-component-parameter-automl-specific) for the parameter name.
+> - **These four methods are specific to AutoML image classification only** and will not work with other task types such as object detection, instance segmentation etc. Non-AutoML image classification models can leverage SHAP vision for model interpretability.
+>- **The explanations are only generated for the predicted class**. For multilabel classification, a threshold on confidence score is required, to select the classes for which the explanations are generated. See the [parameter list](how-to-responsible-ai-vision-insights.md#responsible-ai-vision-insights-component-parameter-automl-specific) for the parameter name.
Both AutoML and non-AutoML object detection models can leverage [D-RISE](https://github.com/microsoft/vision-explanation-methods) to generate visual explanations for model predictions.
machine-learning How To Responsible Ai Insights Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-insights-ui.md
description: Learn how to generate a Responsible AI insights with no-code experience in the Azure Machine Learning studio UI. -+
To access the dashboard generation wizard and generate a Responsible AI dashboar
To learn more supported model types and limitations in the Responsible AI dashboard, see [supported scenarios and limitations](concept-responsible-ai-dashboard.md#supported-scenarios-and-limitations).
-The wizard provides an interface for entering all the necessary parameters to create your Responsible AI dashboard without having to touch code. The experience takes place entirely in the Azure Machine Learning studio UI. The studio presents a guided flow and instructional text to help contextualize the variety of choices about which Responsible AI components you’d like to populate your dashboard with.
+The wizard provides an interface for entering all the necessary parameters to create your Responsible AI dashboard without having to touch code. The experience takes place entirely in the Azure Machine Learning studio UI. The studio presents a guided flow and instructional text to help contextualize the variety of choices about which Responsible AI components you'd like to populate your dashboard with.
The wizard is divided into five sections:
The Responsible AI dashboard offers two profiles for recommended sets of tools t
## Configure parameters for dashboard components
-After you’ve selected a profile, the **Component parameters for model debugging** configuration pane for the corresponding components appears.
+After you've selected a profile, the **Component parameters for model debugging** configuration pane for the corresponding components appears.
:::image type="content" source="./media/how-to-responsible-ai-insights-ui/create-responsible-ai-dashboard-ui-component-parameter-debugging.png" alt-text="Screenshot of the component parameter tab, showing the 'Component parameters for model debugging' configuration pane." lightbox = "./media/how-to-responsible-ai-insights-ui/create-responsible-ai-dashboard-ui-component-parameter-debugging.png":::
Component parameters for model debugging:
1. **Generate explanations**: Toggle on and off to generate a model explanation component for your Responsible AI dashboard. No configuration is necessary, because a default opaque box mimic explainer will be used to generate feature importances.
-Alternatively, if you select the **Real-life interventions** profile, you’ll see the following screen generate a causal analysis. This will help you understand the causal effects of features you want to “treat” on a certain outcome you want to optimize.
+Alternatively, if you select the **Real-life interventions** profile, you'll see the following screen to generate a causal analysis. This will help you understand the causal effects of features you want to "treat" on a certain outcome you want to optimize.
:::image type="content" source="./media/how-to-responsible-ai-insights-ui/create-responsible-ai-dashboard-ui-component-parameter-real-life-intervention.png" alt-text="Screenshot of the wizard, showing the 'Component parameters for real-life interventions' pane." lightbox = "./media/how-to-responsible-ai-insights-ui/create-responsible-ai-dashboard-ui-component-parameter-real-life-intervention.png"::: Component parameters for real-life interventions use causal analysis. Do the following: 1. **Target feature (required)**: Choose the outcome you want the causal effects to be calculated for.
-1. **Treatment features (required)**: Choose one or more features that you’re interested in changing (“treating”) to optimize the target outcome.
+1. **Treatment features (required)**: Choose one or more features that you're interested in changing ("treating") to optimize the target outcome.
1. **Categorical features**: Indicate which features are categorical to properly render them as categorical values in the dashboard UI. This field is pre-loaded for you based on your dataset metadata. 1. **Advanced settings**: Specify additional parameters for your causal analysis, such as heterogeneous features (that is, additional features to understand causal segmentation in your analysis, in addition to your treatment features) and which causal model you want to be used.
Finally, configure your experiment to kick off a job to generate your Responsibl
On the **Training job** or **Experiment configuration** pane, do the following:
-1. **Name**: Give your dashboard a unique name so that you can differentiate it when you’re viewing the list of dashboards for a given model.
+1. **Name**: Give your dashboard a unique name so that you can differentiate it when you're viewing the list of dashboards for a given model.
1. **Experiment name**: Select an existing experiment to run the job in, or create a new experiment. 1. **Existing experiment**: In the dropdown list, select an existing experiment. 1. **Select compute type**: Specify which compute type you want to use to execute your job.
On the **Training job** or **Experiment configuration** pane, do the following:
1. **Description**: Add a longer description of your Responsible AI dashboard. 1. **Tags**: Add any tags to this Responsible AI dashboard.
-After you’ve finished configuring your experiment, select **Create** to start generating your Responsible AI dashboard. You'll be redirected to the experiment page to track the progress of your job with a link to the resulting Responsible AI dashboard from the job page when it's completed.
+After you've finished configuring your experiment, select **Create** to start generating your Responsible AI dashboard. You'll be redirected to the experiment page to track the progress of your job with a link to the resulting Responsible AI dashboard from the job page when it's completed.
To learn how to view and use your Responsible AI dashboard see, [Use the Responsible AI dashboard in Azure Machine Learning studio](how-to-responsible-ai-dashboard.md).
To learn how to view and use your Responsible AI dashboard see, [Use the Respons
Once you've created a dashboard, you can use a no-code UI in Azure Machine Learning studio to customize and generate a Responsible AI scorecard. This enables you to share key insights for responsible deployment of your model, such as fairness and feature importance, with non-technical and technical stakeholders. Similar to creating a dashboard, you can use the following steps to access the scorecard generation wizard: - Navigate to the Models tab from the left navigation bar in Azure Machine Learning studio.-- Select the registered model you’d like to create a scorecard for and select the **Responsible AI** tab.
+- Select the registered model you'd like to create a scorecard for and select the **Responsible AI** tab.
- From the top panel, select **Create Responsible AI insights (preview)** and then **Generate new PDF scorecard**.
-The wizard will allow you to customize your PDF scorecard without having to touch code. The experience takes place entirely in the Azure Machine Learning studio to help contextualize the variety of choices of UI with a guided flow and instructional text to help you choose the components you’d like to populate your scorecard with. The wizard is divided into seven steps, with an eighth step (fairness assessment) that will only appear for models with categorical features:
+The wizard will allow you to customize your PDF scorecard without having to touch code. The experience takes place entirely in the Azure Machine Learning studio to help contextualize the variety of choices of UI with a guided flow and instructional text to help you choose the components you'd like to populate your scorecard with. The wizard is divided into seven steps, with an eighth step (fairness assessment) that will only appear for models with categorical features:
1. PDF scorecard summary 2. Model performance
The wizard will allow you to customize your PDF scorecard without having to touc
> [!NOTE] > The Fairness assessment is currently only available for categorical sensitive attributes such as gender.
-6. *The Causal analysis* section answers real-world “what if” questions about how changes of treatments would impact a real-world outcome. If the causal component is activated in the Responsible AI dashboard for which you're generating a scorecard, no more configuration is needed.
+6. *The Causal analysis* section answers real-world "what if" questions about how changes of treatments would impact a real-world outcome. If the causal component is activated in the Responsible AI dashboard for which you're generating a scorecard, no more configuration is needed.
:::image type="content" source="./media/how-to-responsible-ai-insights-ui/scorecard-causal.png" alt-text="Screenshot of the wizard on scorecard causal analysis configuration." lightbox= "./media/how-to-responsible-ai-insights-ui/scorecard-causal.png":::
machine-learning How To Responsible Ai Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-scorecard.md
description: Share insights with non-technical business stakeholders by exporting a PDF Responsible AI scorecard from Azure Machine Learning. -+
machine-learning How To Responsible Ai Text Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-text-dashboard.md
description: Learn how to use the various tools and visualization charts in the Responsible AI text dashboard in Azure Machine Learning. -+
machine-learning How To Responsible Ai Text Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-text-insights.md
description: Learn how to generate Responsible AI text insights with Python and YAML in Azure Machine Learning. -+
rai_text_insights_component = ml_client_registry.components.get(
#Then inside the pipeline: # Initiate the RAI Text Insights rai_text_job = rai_text_insights_component(
- title=”From Python”,
+ title="From Python",
task_type="text_classification", model_info=expected_model_id, model_input=Input(type=AssetTypes.MLFLOW_MODEL, path= "<azureml:model_name:model_id>"),
machine-learning How To Responsible Ai Vision Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-vision-insights.md
description: Learn how to generate Responsible AI vision insights with Python and YAML in Azure Machine Learning. -+
To start, register your input model in Azure Machine Learning and reference the
```python DataFrame({
- ‘image_path_1’ : [
+ 'image_path_1' : [
[object_1, topX1, topY1, bottomX1, bottomY1, (optional) confidence_score], [object_2, topX2, topY2, bottomX2, bottomY2, (optional) confidence_score], [object_3, topX3, topY3, bottomX3, bottomY3, (optional) confidence_score] ],
- ‘image_path_2’: [
+ 'image_path_2': [
[object_1, topX4, topY4, bottomX4, bottomY4, (optional) confidence_score], [object_2, topX5, topY5, bottomX5, bottomY5, (optional) confidence_score] ]
To start, register your input model in Azure Machine Learning and reference the
- Image Classification ```python
- DataFrame({ ‘image_path_1’ : ‘label_1’, ‘image_path_2’ : ‘label_2’ ... })
+ DataFrame({ 'image_path_1' : 'label_1', 'image_path_2' : 'label_2' ... })
``` The RAI vision insights component also accepts the following parameters:
After specifying and submitting the pipeline to Azure Machine Learning for execu
path: ${{parent.inputs.my_test_data}} target_column_name: ${{parent.inputs.target_column_name}} maximum_rows_for_test_dataset: 5000
- classes: '[“cat”, “dog”]'
+ classes: '["cat", "dog"]'
precompute_explanation: True enable_error_analysis: True
After specifying and submitting the pipeline to Azure Machine Learning for execu
```python #First load the RAI component: rai_vision_insights_component = ml_client_registry.components.get(
- name="rai_vision_insights", label=ΓÇ¥latestΓÇ¥
+ name="rai_vision_insights", label="latest"
) #Then construct the pipeline: # Initiate Responsible AI Vision Insights rai_vision_job = rai_vision_insights_component(
- title=”From Python”,
+ title="From Python",
task_type="image_classification", model_info=expected_model_id, model_input=Input(type=AssetTypes.MLFLOW_MODEL, path= "<azureml:model_name:model_id>"),
In addition to the list of Responsible AI vision insights parameters provided in
| Parameter name | Description | Type | |-|-|-| | `model_type` | Flavor of the model. Select pyfunc for AutoML models. | Enum <br> - Pyfunc <br> - fastai |
-| `dataset_type` | Whether the Images in the dataset are read from publicly available url or they're stored in the user’s datastore. <br> For AutoML models, images are always read from User’s workspace datastore, hence the dataset type for AutoML models is “private”. <br> For private dataset type, we download the images on the compute before generating the explanations. | Enum <br> - Public <br> - Private |
+| `dataset_type` | Whether the images in the dataset are read from a publicly available URL or stored in the user's datastore. <br> For AutoML models, images are always read from the user's workspace datastore, so the dataset type for AutoML models is "private". <br> For the private dataset type, the images are downloaded to the compute before the explanations are generated. | Enum <br> - Public <br> - Private |
| `xai_algorithm` | Type of the XAI algorithms supported for AutoML models. <br> Note: SHAP isn't supported for AutoML models. | Enum <br> - `guided_backprop` <br> - `guided_gradcam` <br> - `integrated_gradients` <br> - `xrai` | | `xrai_fast` | Whether to use the faster version of XRAI. If True, computation time for explanations is faster but leads to less accurate explanations (attributions). | Boolean | | `approximation_method` | This parameter is specific to Integrated Gradients only. <br> Method for approximating the integral. Available approximation methods are `riemann_middle` and `gausslegendre`. | Enum <br> - `riemann_middle` <br> - `gausslegendre` |
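To show how these AutoML-specific parameters might be combined with the component call shown earlier, here's a hedged sketch; the keyword names mirror the preceding table, the values are placeholders, and the exact signature should be verified against the component version in your registry.

```python
# Sketch only: RAI vision insights for an AutoML image classification model.
# Assumes rai_vision_insights_component and expected_model_id were obtained as in the
# earlier snippet; parameter names follow the table above and may differ by version.
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

rai_vision_job = rai_vision_insights_component(
    title="AutoML image classification insights",
    task_type="image_classification",
    model_info=expected_model_id,
    model_input=Input(type=AssetTypes.MLFLOW_MODEL, path="<azureml:model_name:model_id>"),
    model_type="pyfunc",        # AutoML models use the pyfunc flavor
    dataset_type="private",     # AutoML images are read from the workspace datastore
    xai_algorithm="xrai",       # or guided_backprop, guided_gradcam, integrated_gradients
    xrai_fast=True,             # faster, less precise XRAI attributions
)
```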
machine-learning How To Use Foundation Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-foundation-models.md
description: Learn how to discover, evaluate, fine-tune and deploy Open Source foundation models in Azure Machine Learning +
Last updated 06/15/2023
In this article, you learn how to access and evaluate foundation models using Azure Machine Learning automated ML in the [Azure Machine Learning studio](overview-what-is-azure-machine-learning.md#studio). Additionally, you learn how to fine-tune each model and how to deploy the model at scale.
-Foundation models are machine learning models that have been pre-trained on vast amounts of data, and that can be fine tuned for specific tasks with relatively small amount of domain specific data. These models serve as a starting point for custom models and accelerate the model building process for a variety of tasks including natural language processing, computer vision, speech and generative AI tasks. Azure Machine Learning provides the capability to easily integrate these pre-trained foundation models into your applications. **foundation models in Azure Machine Learning** provides Azure Machine Learning native capabilities that enable customers to discover, evaluate, fine tune, deploy and operationalize open-source foundation models at scale.
+Foundation models are machine learning models that have been pre-trained on vast amounts of data, and that can be fine-tuned for specific tasks with a relatively small amount of domain-specific data. These models serve as a starting point for custom models and accelerate the model building process for a variety of tasks including natural language processing, computer vision, speech and generative AI tasks. Azure Machine Learning provides the capability to easily integrate these pre-trained foundation models into your applications. **Foundation models in Azure Machine Learning** provides Azure Machine Learning native capabilities that enable customers to discover, evaluate, fine-tune, deploy and operationalize open-source foundation models at scale.
## How to access foundation models in Azure Machine Learning
You can filter the list of models in the model catalog by Task, or by license. S
> [!NOTE] >Models from Hugging Face are subject to third party license terms available on the Hugging Face model details page. It is your responsibility to comply with the model's license terms. -
-You can quickly test out any pre-trained model using the Sample Inference widget on the model card, providing your own sample input to test the result. Additionally, the model card for each model includes a brief description of the model and links to samples for code based inferencing, finetuning and evaluation of the model.
-
-> [!IMPORTANT]
-> Deploying foundational models to a managed online endpoint is currently supported with __public workspaces__ (and their public associated resources) only.
->
-> * When `egress_public_network_access` is set to `disabled`, the deployment can only access the workspace-associated resources secured in the virtual network.
-> * When `egress_public_network_access` is set to `enabled` for a managed online endpoint deployment, the deployment can only access the resources with public access. Which means that it cannot access resources secured in the virtual network.
->
-> For more information, see [Secure outbound access with legacy network isolation method](concept-secure-online-endpoint.md#secure-outbound-access-with-legacy-network-isolation-method).
+You can quickly test out any pre-trained model using the Sample Inference widget on the model card, providing your own sample input to test the result. Additionally, the model card for each model includes a brief description of the model and links to samples for code based inferencing, fine-tuning and evaluation of the model.
## How to evaluate foundation models using your own test data
Each model can be evaluated for the specific inference task that the model can b
**Compute:**
-1. Provide the Azure Machine Learning Compute cluster you would like to use for finetuning the model. Evaluation needs to run on GPU compute. Ensure that you have sufficient compute quota for the compute SKUs you wish to use.
+1. Provide the Azure Machine Learning Compute cluster you would like to use for evaluating the model. Evaluation needs to run on GPU compute. Ensure that you have sufficient compute quota for the compute SKUs you wish to use (a cluster-creation sketch follows these steps).
-1. Select **Finish** in the Evaluate wizard to submit your evaluation job. Once the job completes, you can view evaluation metrics for the model. Based on the evaluation metrics, you might decide if you would like to finetune the model using your own training data. Additionally, you can decide if you would like to register the model and deploy it to an endpoint.
+1. Select **Finish** in the Evaluate wizard to submit your evaluation job. Once the job completes, you can view evaluation metrics for the model. Based on the evaluation metrics, you might decide if you would like to fine-tune the model using your own training data. Additionally, you can decide if you would like to register the model and deploy it to an endpoint.
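If you still need to create a GPU cluster for these jobs, the following is a minimal sketch using the Azure Machine Learning Python SDK v2. It assumes an authenticated `MLClient` named `ml_client`, and the cluster name and GPU SKU are placeholders, not recommendations from this article.

```python
# Sketch: create a GPU AmlCompute cluster for evaluation or fine-tuning jobs.
# Assumes ml_client is an authenticated MLClient; name and size are placeholders.
from azure.ai.ml.entities import AmlCompute

gpu_cluster = AmlCompute(
    name="gpu-eval-cluster",
    size="Standard_NC6s_v3",           # pick a GPU SKU for which you have quota
    min_instances=0,
    max_instances=2,
    idle_time_before_scale_down=1800,  # seconds before idle nodes scale down
)
ml_client.compute.begin_create_or_update(gpu_cluster).result()
```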
### Evaluating using code based samples To enable users to get started with model evaluation, we have published samples (both Python notebooks and CLI examples) in the [Evaluation samples in azureml-examples git repo](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/evaluation). Each model card also links to evaluation samples for corresponding tasks
-## How to finetune foundation models using your own training data
+## How to fine-tune foundation models using your own training data
-In order to improve model performance in your workload, you might want to fine tune a foundation model using your own training data. You can easily finetune these foundation models by using either the finetune settings in the studio or by using the code based samples linked from the model card.
+In order to improve model performance in your workload, you might want to fine-tune a foundation model using your own training data. You can easily fine-tune these foundation models by using either the fine-tune settings in the studio or by using the code based samples linked from the model card.
-### Finetune using the studio
-You can invoke the finetune settings form by selecting on the **Finetune** button on the model card for any foundation model.
+### Fine-tune using the studio
-**Finetune Settings:**
+You can invoke the fine-tune settings form by selecting the **Fine-tune** button on the model card for any foundation model.
+**Fine-tune Settings:**
-**Finetuning task type**
-* Every pre-trained model from the model catalog can be finetuned for a specific set of tasks (For Example: Text classification, Token classification, Question answering). Select the task you would like to use from the drop-down.
+**Fine-tuning task type**
+
+* Every pre-trained model from the model catalog can be fine-tuned for a specific set of tasks (For Example: Text classification, Token classification, Question answering). Select the task you would like to use from the drop-down.
**Training Data**
-1. Pass in the training data you would like to use to finetune your model. You can choose to either upload a local file (in JSONL, CSV or TSV format) or select an existing registered dataset from your workspace.
+1. Pass in the training data you would like to use to fine-tune your model. You can choose to either upload a local file (in JSONL, CSV or TSV format) or select an existing registered dataset from your workspace. (A minimal JSONL sketch is shown after these settings.)
1. Once you've selected the dataset, you need to map the columns from your input data, based on the schema needed for the task. For example: map the column names that correspond to the 'sentence' and 'label' keys for Text Classification * Validation data: Pass in the data you would like to use to validate your model. Selecting **Automatic split** reserves an automatic split of training data for validation. Alternatively, you can provide a different validation dataset.
-* Test data: Pass in the test data you would like to use to evaluate your finetuned model. Selecting **Automatic split** reserves an automatic split of training data for test.
-* Compute: Provide the Azure Machine Learning Compute cluster you would like to use for finetuning the model. Finetuning needs to run on GPU compute. We recommend using compute SKUs with A100 / V100 GPUs when fine tuning. Ensure that you have sufficient compute quota for the compute SKUs you wish to use.
+* Test data: Pass in the test data you would like to use to evaluate your fine-tuned model. Selecting **Automatic split** reserves an automatic split of training data for test.
+* Compute: Provide the Azure Machine Learning Compute cluster you would like to use for fine-tuning the model. Fine-tuning needs to run on GPU compute. We recommend using compute SKUs with A100 / V100 GPUs when fine-tuning. Ensure that you have sufficient compute quota for the compute SKUs you wish to use.
-3. Select **Finish** in the finetune form to submit your finetuning job. Once the job completes, you can view evaluation metrics for the finetuned model. You can then register the finetuned model output by the finetuning job and deploy this model to an endpoint for inferencing.
+3. Select **Finish** in the fine-tune form to submit your fine-tuning job. Once the job completes, you can view evaluation metrics for the fine-tuned model. You can then register the fine-tuned model output by the fine-tuning job and deploy this model to an endpoint for inferencing.
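To make the 'sentence' and 'label' column mapping above concrete, here's a minimal sketch that writes a small JSONL training file with plain Python; the sentences, labels, and file name are illustrative only.

```python
# Sketch: a tiny JSONL training file with 'sentence' and 'label' columns.
import json

rows = [
    {"sentence": "The battery lasts all day.", "label": "positive"},
    {"sentence": "The screen cracked within a week.", "label": "negative"},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```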
-### Finetuning using code based samples
+### Fine-tuning using code based samples
-Currently, Azure Machine Learning supports finetuning models for the following language tasks:
+Currently, Azure Machine Learning supports fine-tuning models for the following language tasks:
* Text classification * Token classification
Currently, Azure Machine Learning supports finetuning models for the following l
* Summarization * Translation
-To enable users to quickly get started with finetuning, we have published samples (both Python notebooks and CLI examples) for each task in the [azureml-examples git repo Finetune samples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/finetune). Each model card also links to Finetuning samples for supported finetuning tasks.
+To enable users to quickly get started with fine-tuning, we have published samples (both Python notebooks and CLI examples) for each task in the [azureml-examples git repo Finetune samples](https://github.com/Azure/azureml-examples/tree/main/sdk/python/foundation-models/system/finetune). Each model card also links to fine-tuning samples for supported fine-tuning tasks.
## Deploying foundation models to endpoints for inferencing
-You can deploy foundation models (both pre-trained models from the model catalog, and finetuned models, once they're registered to your workspace) to an endpoint that can then be used for inferencing. Deployment to both real time endpoints and batch endpoints is supported. You can deploy these models by using either the Deploy UI wizard or by using the code based samples linked from the model card.
+You can deploy foundation models (both pre-trained models from the model catalog, and fine-tuned models, once they're registered to your workspace) to an endpoint that can then be used for inferencing. Deployment to both real time endpoints and batch endpoints is supported. You can deploy these models by using either the Deploy UI wizard or by using the code based samples linked from the model card.
+
+> [!IMPORTANT]
+> __Workspaces without public network access:__ Deploying foundational models to online endpoints without egress connectivity requires [packaging the models (preview)](how-to-package-models.md) first. By using model packaging, you can avoid the need for an internet connection, which Azure Machine Learning would otherwise require to dynamically install necessary Python packages for the MLflow models.
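For readers who prefer code over the wizard, the following is a rough sketch of deploying a registered model to a managed online endpoint with the Azure Machine Learning Python SDK v2; the endpoint name, model URI, and VM SKU are placeholders and aren't taken from this article.

```python
# Sketch only: deploy a registered foundation model to a managed online endpoint.
# All names, the model asset URI, and the instance SKU are placeholders.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment, ManagedOnlineEndpoint
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

endpoint = ManagedOnlineEndpoint(name="my-foundation-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="default",
    endpoint_name="my-foundation-endpoint",
    model="azureml://registries/azureml/models/<model-name>/versions/<version>",
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```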
### Deploying using the studio
You can invoke the Deploy UI wizard by clicking on the 'Deploy' button on the mo
:::image type="content" source="./media/how-to-use-foundation-models/deploy-button.png" lightbox="./media/how-to-use-foundation-models/deploy-button.png" alt-text="Screenshot showing the deploy button on the foundation model card.":::
-Deployment Settings:
+#### Deployment settings
+ Since the scoring script and environment are automatically included with the foundation model, you only need to specify the Virtual machine SKU to use, number of instances and the endpoint name to use for the deployment. :::image type="content" source="./media/how-to-use-foundation-models/deploy-options.png" alt-text="Screenshot showing the deploy options on the foundation model card after user selects the deploy button.":::
+##### Networking
+
+Curated models from the Azure Machine Learning model catalog are in MLflow format. If you plan to deploy these models to an online endpoint without public internet connectivity, you need to package the model first.
++
+##### Shared quota
+ If you're deploying a Llama model from the model catalog but don't have enough quota available for the deployment, Azure Machine Learning allows you to use quota from a shared quota pool for a limited time. For _Llama-2-70b_ and _Llama-2-70b-chat_ model deployment, access to the shared quota is available only to customers with [Enterprise Agreement subscriptions](/azure/cost-management-billing/manage/create-enterprise-subscription). For more information on shared quota, see [Azure Machine Learning shared quota](how-to-manage-quotas.md#azure-machine-learning-shared-quota). :::image type="content" source="media/how-to-use-foundation-models/deploy-llama-model-with-shared-quota.png" alt-text="Screenshot showing the option to deploy a Llama model temporarily, using shared quota." lightbox="media/how-to-use-foundation-models/deploy-llama-model-with-shared-quota.png":::
If you're looking to use an open source model that isn't included in the model c
* text-to-image > [!NOTE]
->Models from Hugging Face are subject to third-party license terms available on the Hugging Face model details page. It is your responsibility to comply with the model's license terms.
+> Models from Hugging Face are subject to third-party license terms available on the Hugging Face model details page. It is your responsibility to comply with the model's license terms.
You can select the "Import" button on the top-right of the model catalog to use the Model Import Notebook.
In order to import the model, you need to pass in the `MODEL_ID` of the model yo
:::image type="content" source="./media/how-to-use-foundation-models/hugging-face-model-id.png" alt-text="Screenshot showing an example of a hugging face model ID ('bert-base-uncased') as it is displayed in the hugging face model documentation page.":::
-You need to provide compute for the Model import to run. Running the Model Import results in the specified model being imported from Hugging Face and registered to your Azure Machine Learning workspace. You can then finetune this model or deploy it to an endpoint for inferencing.
+You need to provide compute for the Model import to run. Running the Model Import results in the specified model being imported from Hugging Face and registered to your Azure Machine Learning workspace. You can then fine-tune this model or deploy it to an endpoint for inferencing.
## Next Steps
machine-learning How To Secure Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-secure-prompt-flow.md
Workspace managed virtual network is the recommended way to support network isol
- [Secure workspace resources](../how-to-secure-workspace-vnet.md) - [Workspace managed network isolation](../how-to-managed-network.md)-- [Secure Azure Kubernetes Service inferencing environment](../how-to-secure-online-endpoint.md)-- [Secure your managed online endpoints with network isolation](../how-to-secure-kubernetes-inferencing-environment.md)
+- [Secure Azure Kubernetes Service inferencing environment](../how-to-secure-kubernetes-inferencing-environment.md)
+- [Secure your managed online endpoints with network isolation](../how-to-secure-online-endpoint.md)
- [Secure your RAG workflows with network isolation](../how-to-secure-rag-workflows.md)
machine-learning Troubleshooting Managed Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/troubleshooting-managed-feature-store.md
Title: Troubleshoot managed feature store errors
description: Information required to troubleshoot common errors with the managed feature store in Azure Machine Learning. +
In this example, the `feature_transformation_code.path` property in the YAML sho
#### Symptom
-When you use the feature store CRUD client to GET a feature set - for example, `fs_client.feature_sets.get(name, version)`”` - you might see this error:
+When you use the feature store CRUD client to GET a feature set - for example, `fs_client.feature_sets.get(name, version)` - you might see this error:
```python
When you use a registered model as a feature retrieval job input, the job fails
```python ValueError: Failed with visit error: Failed with execution error: error in streaming from input data sources
- VisitError(ExecutionError(StreamError(NotFound)))
+ VisitError(ExecutionError(StreamError(NotFound)))
=> Failed with execution error: error in streaming from input data sources
- ExecutionError(StreamError(NotFound)); Not able to find path: azureml://subscriptions/{sub_id}/resourcegroups/{rg}/workspaces/{ws}/datastores/workspaceblobstore/paths/LocalUpload/{guid}/feature_retrieval_spec.yaml
+ ExecutionError(StreamError(NotFound)); Not able to find path: azureml://subscriptions/{sub_id}/resourcegroups/{rg}/workspaces/{ws}/datastores/workspaceblobstore/paths/LocalUpload/{guid}/feature_retrieval_spec.yaml
``` #### Solution:
The feature retrieval job/query fails with the following error message in the *l
```python An error occurred while calling o1025.parquet. : java.nio.file.AccessDeniedException: Operation failed: "This request is not authorized to perform this operation using this permission.", 403, GET, https://{storage}.dfs.core.windows.net/test?upn=false&resource=filesystem&maxResults=5000&directory=datasources&timeout=90&recursive=false, AuthorizationPermissionMismatch, "This request is not authorized to perform this operation using this permission. RequestId:63013315-e01f-005e-577b-7c63b8000000 Time:2023-05-01T22:20:51.1064935Z"
- at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1203)
- at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:408)
- at org.apache.hadoop.fs.Globber.listStatus(Globber.java:128)
- at org.apache.hadoop.fs.Globber.doGlob(Globber.java:291)
- at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
- at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2124)
+ at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1203)
+ at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:408)
+ at org.apache.hadoop.fs.Globber.listStatus(Globber.java:128)
+ at org.apache.hadoop.fs.Globber.doGlob(Globber.java:291)
+ at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
+ at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2124)
``` #### Solution:
The materialization job fails with this error message in the *logs/azureml/drive
```python An error occurred while calling o1025.parquet. : java.nio.file.AccessDeniedException: Operation failed: "This request is not authorized to perform this operation using this permission.", 403, GET, https://{storage}.dfs.core.windows.net/test?upn=false&resource=filesystem&maxResults=5000&directory=datasources&timeout=90&recursive=false, AuthorizationPermissionMismatch, "This request is not authorized to perform this operation using this permission. RequestId:63013315-e01f-005e-577b-7c63b8000000 Time:2023-05-01T22:20:51.1064935Z"
- at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1203)
- at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:408)
- at org.apache.hadoop.fs.Globber.listStatus(Globber.java:128)
- at org.apache.hadoop.fs.Globber.doGlob(Globber.java:291)
- at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
- at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2124)
+ at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1203)
+ at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:408)
+ at org.apache.hadoop.fs.Globber.listStatus(Globber.java:128)
+ at org.apache.hadoop.fs.Globber.doGlob(Globber.java:291)
+ at org.apache.hadoop.fs.Globber.glob(Globber.java:202)
+ at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:2124)
``` #### Solution:
The materialization job fails with this error message in the *logs/azureml/drive
```yaml An error occurred while calling o1162.load. : java.util.concurrent.ExecutionException: java.nio.file.AccessDeniedException: Operation failed: "This request is not authorized to perform this operation using this permission.", 403, HEAD, https://featuresotrestorage1.dfs.core.windows.net/offlinestore/fs_xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_fsname/transactions/1/_delta_log?upn=false&action=getStatus&timeout=90
- at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:306)
- at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:293)
- at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
- at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)
- at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2410)
- at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2380)
- at com.google.common.cache.LocalCache$S
+ at com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:306)
+ at com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:293)
+ at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
+ at com.google.common.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)
+ at com.google.common.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2410)
+ at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2380)
+ at com.google.common.cache.LocalCache$S
``` #### Solution
For more information about RBAC configuration, see [Permissions required for the
#### Symptom:
-When using the feature store CRUD client to stream materialization job results to notebook using `fs_client.jobs.stream(“<job_id>”)`, the SDK call fails with an error
+When using the feature store CRUD client to stream materialization job results to a notebook using `fs_client.jobs.stream("<job_id>")`, the SDK call fails with an error
``` HttpResponseError: (UserError) A job was found, but it is not supported in this API version and cannot be accessed.
machine-learning How To Train With Custom Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-custom-image.md
description: Learn how to use your own Docker images, or curated ones from Micro
-+ Last updated 11/14/2023
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-In this article, learn how to use a custom Docker image when you're training models with Azure Machine Learning. You use the example scripts in this article to classify pet images by creating a convolutional neural network.
+In this article, learn how to use a custom Docker image when you're training models with Azure Machine Learning. You'll use the example scripts in this article to classify pet images by creating a convolutional neural network.
Azure Machine Learning provides a default Docker base image. You can also use Azure Machine Learning environments to specify a different base image, such as one of the maintained [Azure Machine Learning base images](https://github.com/Azure/AzureML-Containers) or your own [custom image](../how-to-deploy-custom-container.md). Custom base images allow you to closely manage your dependencies and maintain tighter control over component versions when running training jobs.
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
python3 client_configurator.py --subscription-id <subcriptionId> --cluster-resou
* It then prompts the user to restart Cassandra. :::image type="content" source="./media/configure-hybrid-cluster/script-result.png" alt-text="Screenshot of the result of running the script.":::
-* Once Cassandra has finished restarting on all nodes, check `nodetool status`. Both datacenters should appear in the list, with their nodes in the UN (Up/Normal) state.
+* Once Cassandra is done restarting on all nodes, check `nodetool status`. Both datacenters should appear in the list, with their nodes in the UN (Up/Normal) state.
* From your Azure Managed Instance for Apache Cassandra, you can then select `AllKeyspaces` to change the replication settings in your Keyspace schema and start the migration process to Cassandra Managed Instance cluster. :::image type="content" source="./media/create-cluster-portal/cluster-version.png" alt-text="Screenshot of selecting all key spaces." lightbox="./media/create-cluster-portal/cluster-version.png" border="true":::
+> [!TIP]
+> The Auto-Replicate setting should be enabled via an ARM template.
+> The ARM template should include:
+> ```json
+> "properties":{
+> ...
+> "externalDataCenters": ["dc-name-1","dc-name-2"],
+> "autoReplicate": "AllKeyspaces",
+> ...
+> }
+> ```
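One possible way to apply such a template is with the Azure SDK for Python. This is only a sketch: it assumes the `azure-mgmt-resource` package, a template file that already contains the full managed Cassandra cluster resource definition, and placeholder subscription, resource group, and file names.

```python
# Sketch only: submit an ARM template deployment that sets autoReplicate as shown above.
import json

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.resource.resources.models import Deployment, DeploymentMode, DeploymentProperties

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("cassandra-cluster.json") as f:  # template containing the cluster resource
    template = json.load(f)

deployment = Deployment(
    properties=DeploymentProperties(mode=DeploymentMode.INCREMENTAL, template=template)
)
client.deployments.begin_create_or_update(
    "<resource-group>", "enable-auto-replicate", deployment
).result()
```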
+ > [!WARNING] > This will change all your keyspaces definition to include > `WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'on-prem-datacenter-1' : 3, 'mi-datacenter-1': 3 }`.
python3 client_configurator.py --subscription-id <subcriptionId> --cluster-resou
:::image type="content" source="./media/configure-hybrid-cluster/replication-progress.png" alt-text="Screenshot showing replication progress." lightbox="./media/configure-hybrid-cluster/replication-progress.png" border="true":::
+ ## Next steps In this quickstart, you learned how to create a hybrid cluster using Azure Managed Instance for Apache Cassandra Client Configurator. You can now start working with the cluster.
mysql App Development Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/app-development-best-practices.md
description: Learn about best practices for building an app by using Azure Datab
Previously updated : 03/29/2023 Last updated : 12/01/2023
Occasionally, you need to deploy changes to your database. In such cases, you ca
During manual database deployment, follow these steps to minimize downtime or reduce the risk of failed deployment: 1. Create a copy of a production database on a new database by using [mysqldump](https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html) or [MySQL Workbench](https://dev.mysql.com/doc/workbench/en/wb-admin-export-import-management.html).
-1. Update the new database with your new schema changes or updates needed for your database.
-1. Put the production database in a read-only state. It would be best if you didn't have write operations on the production database until deployment is completed.
-1. Test your application with the newly updated database from step 1.
-1. Deploy your application changes and make sure the application is now using the new database with the latest updates.
-1. Keep the old production database to roll back the changes. You can then evaluate to delete the old production database or export it on Azure Storage if needed.
+2. Update the new database with your new schema changes or updates needed for your database.
+3. Put the production database in a read-only state. Avoid write operations on the production database until the deployment is completed.
+4. Test your application with the newly updated database from step 1.
+5. Deploy your application changes and make sure the application is now using the new database with the latest updates.
+6. Keep the old production database so that you can roll back the changes. You can later decide whether to delete the old production database or export it to Azure Storage if needed.
> [!NOTE] > If the application is like an e-commerce app and you can't put it in a read-only state, deploy the changes directly on the production database after making a backup. These changes should occur during off-peak hours with low traffic to the app to minimize the impact because some users might experience failed requests.
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
description: Describes the policy around MySQL major and minor versions in Azure
Previously updated : 04/27/2023 Last updated : 12/01/2023
[!INCLUDE [Azure-database-for-mysql-single-server-deprecation](includes/Azure-database-for-mysql-single-server-deprecation.md)]
-This page describes the Azure Database for MySQL versioning policy and applies to Azure Database for MySQL - Single Server and Azure Database for MySQL - Flexible Server (Preview) deployment modes.
+This page describes the Azure Database for MySQL versioning policy and applies to Azure Database for MySQL - Single Server and Azure Database for MySQL - Flexible Server deployment modes.
## Supported MySQL versions
-Azure Database for MySQL has been developed from [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports the community's current major versions, namely MySQL 5.7, and 8.0. MySQL uses the X.Y.Z. naming scheme where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
+Azure Database for MySQL was developed from [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports the community's current major versions, namely MySQL 5.7, and 8.0. MySQL uses the X.Y.Z. naming scheme where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
Azure Database for MySQL currently supports the following major and minor versions of MySQL:
Azure Database for MySQL automatically performs minor version upgrades to the Az
## Major version retirement policy
-The table below provides the retirement details for MySQL major versions. The dates follow the [MySQL versioning policy](https://www.mysql.com/support/eol-notice.html).
+The retirement details for MySQL major versions are listed in the following table. Dates shown follow the [MySQL versioning policy](https://www.mysql.com/support/eol-notice.html).
| Version | What's New | Azure support start date | Azure support end date | Community Retirement date
-| | | | | |
+| | | | | |
| [MySQL 5.7](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-31.html) | March 20, 2018 |September 2025 |October 2023|
-| [MySQL 8](https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-21.html)) | December 11, 2019 | NA |April 2026|
+| [MySQL 8](https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-21.html) | December 11, 2019 | NA |April 2026|
## What will happen to Azure Database for MySQL service after MySQL community version is retired in October 2023? In line with Oracle's announcement regarding the end-of-life (EOL) of [MySQL Community Version v5.7 in __October 2023__](https://www.oracle.com/us/support/library/lsp-tech-chart-069290.pdf) (Page 23), we at Azure are actively preparing for this important transition. This development specifically impacts customers who are currently utilizing Version 5.7 of Azure Database for MySQL - Single Server and Flexible Server.
-In response to the customer's requests, Microsoft has decided to prolong the support for Azure Database for MySQL beyond __October 2023__. During the extended support period, which will last until __September 2025__, Microsoft prioritizes the availability, reliability, and security of the service. While there are no specific guarantees regarding minor version upgrades, we implement essential modifications to ensure that the service remains accessible, dependable, and protected. Our plan includes:
+In response to customer requests, Microsoft decided to prolong the support for Azure Database for MySQL beyond __October 2023__. During the extended support period, which lasts until __September 2025__, Microsoft prioritizes the availability, reliability, and security of the service. While there are no specific guarantees regarding minor version upgrades, we implement essential modifications to ensure that the service remains accessible, dependable, and protected. Our plan includes:
- Extended support for v5.7 on Azure Database for MySQL- Flexible Servers until __September 2025__, offering ample time for customers to plan and execute their upgrades to MySQL v8.0.
__Azure MySQL 5.7 Deprecation Timelines__
|Creation of new servers for migrating from Azure Database for MariaDB to Azure Database for MySQL - Flexible Server.| September 2025| NA| |Extended support for Azure Database for MySQL v5.7| September 2025| September 2024|
-To summarize, creating Azure Database for MySQL v5.7 - Flexible Server will conclude in __April 2024__. However, it's important to note that certain scenarios such as replica creation, point in time recovery, and migration from Azure Database for MySQL - Single Server or Azure Database for MariaDB to Azure Database for MySQL - Flexible Server, will allow you to create MySQL version 5.7 until the end of the extended support period.
+To summarize, the option to create an Azure Database for MySQL flexible server based on v5.7 won't be available after __April 2024__. However, it's important to note that certain scenarios such as replica creation, point-in-time recovery, and migration from Azure Database for MySQL - Single Server or Azure Database for MariaDB to Azure Database for MySQL - Flexible Server, will allow you to create MySQL version 5.7 until the end of the extended support period.
### FAQs
A: Starting May 2023, Azure Database for MySQL - Flexible Server enables you to
__Q: I'm currently using Azure database for MySQL - Single Sever version 5.7, how should I plan my upgrade?__
-A: Azure Database for MySQL - Single Server does not offer built-in support for major version upgrade from v5.7 to v8.0. As Azure Database for MySQL - Single Server is on deprecation path, there are no investments planned to support major version upgrade from v5.7 to v8.0. The recommended path to upgrade from v5.7 of Azure Database for MySQL - Single Server to v8.0 is to first [migrate your v5.7 Azure Database for MySQL - Single Server to v5.7 of Azure Database for MySQL - Flexible Server](single-server/whats-happening-to-mysql-single-server.md#migrate-from-single-server-to-flexible-server). Once the migration is completed and server is stabilized on Flexible Server, you can proceed with performing a [major version upgrade](flexible-server/how-to-upgrade.md) on the migrated Azure Database for MySQL - Flexible Server from v5.7 to v8.0. The extended support for v5.7 on Flexible Server will allow you to run on v5.7 longer and plan your upgrade to v8.0 on Flexible Server at a later point in time after migration from Single Server.
+A: Azure Database for MySQL - Single Server doesn't offer built-in support for major version upgrade from v5.7 to v8.0. As Azure Database for MySQL - Single Server is on deprecation path, there are no investments planned to support major version upgrade from v5.7 to v8.0. The recommended path to upgrade from v5.7 of Azure Database for MySQL - Single Server to v8.0 is to first [migrate your v5.7 Azure Database for MySQL - Single Server to v5.7 of Azure Database for MySQL - Flexible Server](single-server/whats-happening-to-mysql-single-server.md#migrate-from-single-server-to-flexible-server). After the migration is completed and server is stabilized on Flexible Server, you can proceed with performing a [major version upgrade](flexible-server/how-to-upgrade.md) on the migrated Azure Database for MySQL - Flexible Server from v5.7 to v8.0. The extended support for v5.7 on Flexible Server will allow you to run on v5.7 longer and plan your upgrade to v8.0 on Flexible Server at a later point in time after migration from Single Server.
__Q: Are there any expected downtime or performance impacts during the upgrade process?__
-A: Yes, it's expected that there will be some downtime during the upgrade process. The specific duration varies depending on factors such as the size and complexity of the database. We advise conducting a test upgrade on a nonproduction environment to assess the expected downtime and evaluate the potential performance impact. If you wish to minimize downtime for your applications during the upgrade, you can explore the option of [perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replica](flexible-server/how-to-upgrade.md#perform-minimal-downtime-major-version-upgrade-from-mysql-57-to-mysql-80-using-read-replicas).
-
+A: Yes, it's expected that there will be some downtime during the upgrade process. The specific duration varies depending on factors such as the size and complexity of the database. We advise conducting a test upgrade on a nonproduction environment to assess the expected downtime and evaluate the potential performance impact. If you wish to minimize downtime for your applications during the upgrade, you can explore the option of [perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replica](flexible-server/how-to-upgrade.md#perform-minimal-downtime-major-version-upgrade-from-mysql-57-to-mysql-80-using-read-replicas).
__Q: Can I roll back to MySQL v5.7 after upgrading to v8.0?__
A: If you have questions, get answers from community experts in [Microsoft Q&A](
__Q: What will happen to my data during the upgrade?__
-A: While your data will remain unaffected during the upgrade process, it's highly advisable to create a backup of your data before proceeding with the upgrade. This precautionary measure will help mitigate the risk of potential data loss in the event of unforeseen complications.
+A: While your data will remain unaffected during the upgrade process, it's highly advisable to create a backup of your data before proceeding with the upgrade. This precautionary measure helps mitigate the risk of potential data loss in the event of unforeseen complications.
-__Q: What will happen to the server 5.7 post Sep 2025__
+__Q: What will happen to the server 5.7 after Sep 2025?__
A: Refer to our [retired MySQL version support policy](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql) to learn what will happen after Azure Database for MySQL 5.7 reaches end of support. __Q: I have an Azure Database for MariaDB or Azure Database for MySQL - Single Server, how can I create the server in 5.7 post April 2024 for migrating to Azure Database for MySQL - Flexible Server?__
-A: If there's MariaDB\Single server in your subscription, this subscription is still permitted to create Azure Database for MySQL - Flexible Server v5.7 for the purpose of migration to Azure Database for MySQL - Flexible Server
-
+A: If there's MariaDB\Single server in your subscription, this subscription is still permitted to create Azure Database for MySQL ΓÇô Flexible Server v5.7 to migrate to Azure Database for MySQL ΓÇô Flexible Server.
## Retired MySQL engine versions not supported in Azure Database for MySQL
After the retirement date for each MySQL database version, if you continue runni
- You won't be able to create new database servers for the retired version. However, you can perform point-in-time recoveries and create read replicas for your existing servers.
- New service capabilities developed by Azure Database for MySQL may only be available to supported database server versions.
- Uptime SLAs apply solely to Azure Database for MySQL service-related issues and not to any downtime caused by database engine-related bugs.
-- In the extreme event of a serious threat to the service caused by the MySQL database engine vulnerability identified in, the retired database version, Azure may choose to stop the compute node of your database server from securing the service first. You are asked to upgrade the server before bringing the server online. During the upgrade process, your data is always protected using automatic backups performed on the service, which can be used to restore to the older version if desired.
+- In the extreme event of a serious threat to the service caused by a MySQL database engine vulnerability identified in the retired database version, Azure may choose to stop the compute node of your database server to secure the service first. You're asked to upgrade the server before bringing the server online. During the upgrade process, your data is always protected using automatic backups performed on the service, which can be used to restore to the older version if desired.
## Next steps
mysql How To Create Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-create-users.md
description: This article describes how to create new user accounts to interact
Previously updated : 03/29/2023 Last updated : 12/01/2023
After you create an Azure Database for the MySQL server, you can use the first s
## Create a database
1. Get the connection information and admin user name.
+ You need the full server name and admin sign-in credentials to connect to your database server. You can easily find the server name and sign-in information on the server **Overview** or the **Properties** page in the Azure portal.
-1. Use the admin account and password to connect to your database server. Use your preferred client tool, MySQL Workbench, mysql.exe, or HeidiSQL.
+2. Use the admin account and password to connect to your database server. Use your preferred client tool, MySQL Workbench, mysql.exe, or HeidiSQL.
> [!NOTE] > If you're not sure how to connect, see [connect and query data for Single Server](single-server/connect-workbench.md) or [connect and query data for Flexible Server](flexible-server/connect-workbench.md).
-1. Edit and run the following SQL code. Replace the placeholder value `db_user` with your intended new user name. Replace the placeholder value `testdb` with your database name.
+3. Edit and run the following SQL code. Replace the placeholder value `db_user` with your intended new user name. Replace the placeholder value `testdb` with your database name.
This SQL code creates a new database named testdb. It then makes a new user in the MySQL service and grants that user all privileges for the new database schema (testdb.\*).
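The SQL block itself is truncated in this excerpt. A minimal sketch matching the description above, reusing the `db_user` and `testdb` placeholders and a placeholder password, might look like this:

```sql
-- Create the database, create the user, and grant that user full rights on the new schema only.
CREATE DATABASE testdb;
CREATE USER 'db_user'@'%' IDENTIFIED BY 'StrongPassword!';
GRANT ALL PRIVILEGES ON testdb.* TO 'db_user'@'%';
FLUSH PRIVILEGES;
```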
After you create an Azure Database for the MySQL server, you can use the first s
## Create a nonadmin user
- Now that the database is created, you can start with a nonadmin user with the ```CREATE USER``` MySQL statement.
+ Now that you have created the database, you can start by creating a nonadmin user by using the ```CREATE USER``` MySQL statement.
``` sql CREATE USER 'db_user'@'%' IDENTIFIED BY 'StrongPassword!';
After you create an Azure Database for the MySQL server, you can use the first s
## Verify the user permissions
-Run the ```SHOW GRANTS``` MySQL statement to view the privileges allowed for user **db_user** on **testdb** database.
+To view the privileges allowed for user **db_user** on **testdb** database, run the ```SHOW GRANTS``` MySQL statement.
```sql USE testdb;
Run the ```SHOW GRANTS``` MySQL statement to view the privileges allowed for use
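The statement block is cut off in this excerpt; a minimal sketch of the permissions check, reusing the `db_user` and `testdb` placeholders, would be:

```sql
-- Switch to the database and list the privileges granted to the new user.
USE testdb;
SHOW GRANTS FOR 'db_user'@'%';
```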
## Connect to the database with the new user
-Sign in to the server, specifying the designated database and using the new username and password. This example shows the MySQL command line. When you use this command, you are prompted for the user's password. Use your own server name, database name, and user name. See how to connect the single server and the flexible server below.
+Sign in to the server, specifying the designated database and using the new username and password. This example shows the MySQL command line. When you use this command, you're prompted for the user's password. Use your own server name, database name, and user name. See how to connect to the single server and the flexible server in the following table.
| Server type | Usage | | | |
Sign in to the server, specifying the designated database and using the new user
## Limit privileges for a user
-To restrict the type of operations a user can run on the database, you must explicitly add the operations in the **GRANT** statement. See an example below:
+To restrict the type of operations a user can run on the database, you must explicitly add the operations in the **GRANT** statement. See the following example:
```sql CREATE USER 'new_master_user'@'%' IDENTIFIED BY 'StrongPassword!';
To restrict the type of operations a user can run on the database, you must expl
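The rest of the GRANT example is truncated above; a minimal sketch of a restricted grant, assuming the `new_master_user` name from the snippet and the `testdb` schema from earlier, could be:

```sql
-- Allow only data reads and writes on testdb; no DDL statements and no GRANT OPTION.
CREATE USER 'new_master_user'@'%' IDENTIFIED BY 'StrongPassword!';
GRANT SELECT, INSERT, UPDATE, DELETE ON testdb.* TO 'new_master_user'@'%';
FLUSH PRIVILEGES;
```

With only these privileges granted, attempts by `new_master_user` to run statements such as `DROP TABLE` or `GRANT` fail with an access-denied error.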
## About azure_superuser
-All Azure Databases for MySQL servers are created with a user called "azure_superuser". Microsoft created a system account to manage the server to conduct monitoring, backups, and other regular maintenance. On-call engineers may also use this account to access the server during an incident with certificate authentication and must request access using just-in-time (JIT) processes.
+All Azure Database for MySQL servers are created with a user called "azure_superuser". This is a system account that Microsoft creates to manage the server and conduct monitoring, backups, and other regular maintenance. On-call engineers may also use this account to access the server during an incident with certificate authentication and must request access using just-in-time (JIT) processes.
## Next steps
mysql How To Troubleshoot Replication Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/how-to-troubleshoot-replication-latency.md
Previously updated : 06/20/2022 Last updated : 12/01/2023
-# Troubleshoot replication latency in Azure Database for MySQL - flexible Server
+# Troubleshoot replication latency in Azure Database for MySQL - Flexible Server
[!INCLUDE[applies-to-mysql-single-flexible-server](includes/applies-to-mysql-single-flexible-server.md)]
Last updated 06/20/2022
[!INCLUDE[inclusive-language-guidelines-slave](includes/inclusive-language-guidelines-slave.md)]
-The [read replica](concepts-read-replicas.md) feature allows you to replicate data from an Azure Database for MySQL server to a read-only replica server. You can scale out workloads by routing read and reporting queries from the application to replica servers. This setup reduces the pressure on the source server. It also improves overall performance and latency of the application as it scales.
+The [read replica](concepts-read-replicas.md) feature allows you to replicate data from an Azure Database for MySQL server to a read-only replica server. You can scale out workloads by routing read and reporting queries from the application to replica servers. This setup reduces the pressure on the source server and improves overall performance and latency of the application as it scales.
Replicas are updated asynchronously by using the MySQL engine's native binary log (binlog) file position-based replication technology. For more information, see [MySQL binlog file position-based replication configuration overview](https://dev.mysql.com/doc/refman/5.7/en/binlog-replication-configuration-overview.html).
The replication lag on the secondary read replicas depends on several factors. Thes
- Compute tier of the source server and secondary read replica server. - Queries running on the source server and secondary server.
-In this article, you learn how to troubleshoot replication latency in Azure Database for MySQL. You'll also understand some common causes of increased replication latency on replica servers.
+In this article, you'll learn how to troubleshoot replication latency in Azure Database for MySQL. You'll also get a better idea of some common causes of increased replication latency on replica servers.
> [!NOTE] > This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
When a binary log is enabled, the source server writes committed transactions in
## Monitoring replication latency
-Azure Database for MySQL provides the metric for replication lag in seconds in [Azure Monitor](concepts-monitoring.md). This metric is available only on read replica servers. It's calculated by the seconds_behind_master metric that's available in MySQL.
+Azure Database for MySQL provides the metric for replication lag in seconds in [Azure Monitor](concepts-monitoring.md). This metric is available only on read replica servers. It's calculated by the seconds_behind_master metric that's available in MySQL.
To understand the cause of increased replication latency, connect to the replica server by using [MySQL Workbench](connect-workbench.md) or [Azure Cloud Shell](https://shell.azure.com). Then run the following command.
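The command itself is elided in this excerpt; given the status fields discussed next (Slave_IO_Running, Last_SQL_Error, and so on), it is the standard replica status query, shown here as a sketch:

```sql
-- Show replication status on the replica; \G prints each field on its own line in the mysql client.
SHOW SLAVE STATUS\G
```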
The output contains a lot of information. Normally, you need to focus on only th
|Last_SQL_Error|Displays the SQL thread error message, if any.| |Slave_SQL_Running_State| Indicates the current SQL thread status. In this state, `System lock` is normal. It's also normal to see a status of `Waiting for dependent transaction to commit`. This status indicates that the replica is waiting for other SQL worker threads to update committed transactions.|
-If Slave_IO_Running is `Yes` and Slave_SQL_Running is `Yes`, then the replication is running fine.
+If Slave_IO_Running is `Yes` and Slave_SQL_Running is `Yes`, then the replication is running fine.
Next, check Last_IO_Errno, Last_IO_Error, Last_SQL_Errno, and Last_SQL_Error. These fields display the error number and error message of the most-recent error that caused the SQL thread to stop. An error number of `0` and an empty message means there's no error. Investigate any nonzero error value by checking the error code in the [MySQL server error message reference](https://dev.mysql.com/doc/mysql-errors/5.7/en/server-error-reference.html).
In Azure, network latency within a region can typically be measured milliseconds
In most cases, the connection delay between IO threads and the source server is caused by high CPU utilization on the source server. The IO threads are processed slowly. You can detect this problem by using Azure Monitor to check CPU utilization and the number of concurrent connections on the source server.
-If you don't see high CPU utilization on the source server, the problem might be network latency. If network latency is suddenly abnormally high, check the [Azure status page](https://azure.status.microsoft/status) for known issues or outages.
+If you don't see high CPU utilization on the source server, the problem might be network latency. If network latency is suddenly abnormally high, check the [Azure status page](https://azure.status.microsoft/status) for known issues or outages.
### Heavy bursts of transactions on the source server
-If you see the following values, then a heavy burst of transactions on the source server is likely causing the replication latency.
+If you see the following values, then a heavy burst of transactions on the source server is likely causing the replication latency.
```bash Slave_IO_State: Waiting for the slave SQL thread to free enough relay log space
mysql Quickstart Create Mysql Server Database Using Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/quickstart-create-mysql-server-database-using-bicep.md
Previously updated : 05/02/2022 Last updated : 12/01/2023 # Quickstart: Use Bicep to create an Azure Database for MySQL server
The Bicep file defines five Azure resources:
## Deploy the Bicep file - 1. Save the Bicep file as **main.bicep** to your local computer.
-1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+2. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
# [CLI](#tab/CLI)
mysql Select Right Deployment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/select-right-deployment-type.md
description: This article describes what factors to consider before you deploy A
Previously updated : 04/18/2023 Last updated : 12/01/2023
With Azure, your MySQL server workloads can run in a hosted virtual machine infr
When making your decision, consider the following two options: -- **Azure Database for MySQL**. This option is a fully managed MySQL database engine based on the stable version of the MySQL community edition. This relational database as a service (DBaaS), hosted on the Azure cloud platform, falls into the industry category of PaaS. With a managed instance of MySQL on Azure, you can use built-in features viz automated patching, high availability, automated backups, elastic scaling, enterprise-grade security, compliance and governance, monitoring and alerting that require extensive configuration when MySQL Server is either on-premises or in an Azure VM. When using MySQL as a service, you pay-as-you-go, with options to scale up or out for greater control without interruption. [Azure Database for MySQL](flexible-server/overview.md), powered by the MySQL community edition, is available in two deployment modes:
+- **Azure Database for MySQL**. This option falls into the industry category of PaaS: a fully managed MySQL database engine based on the stable version of the MySQL community edition, offered as a relational database as a service (DBaaS) hosted on the Azure cloud platform. With a managed instance of MySQL on Azure, you can use built-in features such as automated patching, high availability, automated backups, elastic scaling, enterprise-grade security, compliance and governance, and monitoring and alerting that require extensive configuration when MySQL Server is either on-premises or in an Azure VM. When using MySQL as a service, you pay as you go, with options to scale up or out for greater control without interruption. [Azure Database for MySQL](flexible-server/overview.md), powered by the MySQL community edition, is available in two deployment modes:
- - [Flexible Server](flexible-server/overview.md) - Azure Database for MySQL Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible servers provide better cost optimization controls with the ability to stop/start the server and burstable compute tier, ideal for workloads that don't need full compute capacity continuously. Flexible Server also supports reserved instances allowing you to save up to 63% cost, which is ideal for production workloads with predictable compute capacity requirements. The service supports the community version of MySQL 5.7 and 8.0. The service is generally available today in various [Azure regions](flexible-server/overview.md#azure-regions). Flexible servers are best suited for all new developments and migration of production workloads to Azure Database for MySQL service.
+ - [Flexible Server](flexible-server/overview.md) is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. The flexible server architecture allows users to opt for high availability within a single availability zone and across multiple availability zones. Flexible servers provide better cost optimization controls with the ability to stop/start the server and burstable compute tier, ideal for workloads that don't need full compute capacity continuously. Flexible Server also supports reserved instances allowing you to save up to 63% cost, which is ideal for production workloads with predictable compute capacity requirements. The service supports the community version of MySQL 5.7 and 8.0. The service is generally available today in various [Azure regions](flexible-server/overview.md#azure-regions). Flexible servers are best suited for all new developments and migration of production workloads to Azure Database for MySQL service.
- - [Single Server](single-server/single-server-overview.md) is a fully managed database service designed for minimal customization. The single server platform is designed to handle most database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports the community version of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in various [Azure regions](https://azure.microsoft.com/global-infrastructure/services/). Single servers are best-suited **only for existing applications already leveraging single servers**. A Flexible Server would be the recommended deployment option for all new developments or migrations.
+ - [Single Server](single-server/single-server-overview.md) is a fully managed database service designed for minimal customization. The single server platform is designed to handle most database management functions such as patching, backups, high availability, and security with minimal user configuration and control. The architecture is optimized for built-in high availability with 99.99% availability in a single availability zone. It supports the community version of MySQL 5.6 (retired), 5.7, and 8.0. The service is generally available today in various [Azure regions](https://azure.microsoft.com/global-infrastructure/services/). Single servers are best-suited **only for existing applications already leveraging single servers**. It's recommended to choose Flexible Server for all new developments or migrations.
-- **MySQL on Azure VMs**. This option falls into the industry category of IaaS. With this service, you can run MySQL Server inside a managed virtual machine on the Azure cloud platform. All recent versions and editions of MySQL can be installed on the virtual machine.
+- **MySQL on Azure VMs**. This option falls into the industry category of IaaS. With this service, you can run MySQL Server inside a managed virtual machine on the Azure cloud platform. You can install all recent versions and editions of MySQL on a virtual machine.
## Compare the MySQL deployment options in Azure
The main differences between these options are listed in the following table:
| Point in time restore capability to any time within the retention period | Yes | Yes | User Managed | | Fast restore point | No | Yes | No | | Ability to restore on a different zone | Not supported | Yes | Yes |
-| Ability to restore to a different VNET | No | Yes | Yes |
+| Ability to restore to a different VNet | No | Yes | Yes |
| Ability to restore to a different region | Yes (Geo-redundant) | Yes (Geo-redundant) | User Managed | | Ability to restore a deleted server | Yes | Yes | No | | [**Disaster Recovery**](flexible-server/concepts-business-continuity.md) | | | |
Several factors can influence whether you choose PaaS or IaaS to host your MySQL
Cost reduction is often the primary consideration in determining the best solution for hosting your databases. This is true whether you're a startup with little cash or a team in an established company that operates under tight budget constraints. This section describes billing and licensing basics in Azure as they apply to Azure Database for MySQL and MySQL on Azure VMs.
-#### Bill
+#### Billing
Azure Database for MySQL is currently available as a service in several tiers with different resource prices. All resources are billed hourly at a fixed rate. For the latest information on the currently supported service tiers, compute sizes, and storage amounts, see [pricing page](https://azure.microsoft.com/pricing/details/mysql/). You can dynamically adjust service tiers and compute sizes to match your application's varied throughput needs. You're billed for outgoing Internet traffic at regular [data transfer rates](https://azure.microsoft.com/pricing/details/data-transfers/).
operator-insights Concept Data Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-quality-monitoring.md
Title: Data quality and quality monitoring
description: This article helps you understand how data quality and quality monitoring work in Azure Operator Insights. + Last updated 10/24/2023
operator-insights Concept Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-types.md
Title: Data types - Azure Operator Insights
description: This article provides an overview of the data types used by Azure Operator Insights Data Products + Last updated 10/25/2023
operator-insights Concept Data Visualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-visualization.md
Title: Data visualization in Azure Operator Insights Data Products
description: This article outlines how data is stored and visualized in Azure Operator Insights Data Products. + Last updated 10/23/2023
operator-insights Dashboards Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/dashboards-use.md
Title: Use Azure Operator Insights Data Product dashboards
description: This article outlines how to access and use dashboards in the Azure Operator Insights Data Product. + Last updated 10/24/2023
operator-insights Data Product Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/data-product-create.md
Title: Create an Azure Operator Insights Data Product
description: In this article, learn how to create an Azure Operator Insights Data Product resource. + Last updated 10/16/2023
operator-insights Data Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/data-query.md
Title: Query data in the Azure Operator Insights Data Product
description: This article outlines how to access and query the data in the Azure Operator Insights Data Product. + Last updated 10/22/2023
operator-insights How To Install Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-mcc-edr-agent.md
Title: Create and configure MCC EDR Ingestion Agents
description: Learn how to create and configure MCC EDR Ingestion Agents for Azure Operator Insights + Last updated 10/31/2023
operator-insights How To Manage Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-manage-mcc-edr-agent.md
Title: Manage MCC EDR Ingestion Agents for Azure Operator Insights
description: Learn how to upgrade, update, roll back and manage MCC EDR Ingestion agents for AOI + Last updated 11/02/2023
operator-insights Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/managed-identity.md
Title: Managed identity for Azure Operator Insights
description: This article helps you understand managed identity and how it works in Azure Operator Insights. + Last updated 10/18/2023
operator-insights Mcc Edr Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/mcc-edr-agent-configuration.md
Title: MCC EDR Ingestion Agents configuration reference for Azure Operator Insig
description: This article documents the complete set of configuration for the agent, listing all fields with examples and explanatory comments. + Last updated 11/02/2023
operator-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/overview.md
Title: What is Azure Operator Insights?
description: Azure Operator Insights is an Azure service for monitoring and analyzing data from multiple sources + Last updated 10/26/2023
operator-insights Purview Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/purview-setup.md
Title: Use Microsoft Purview with an Azure Operator Insights Data Product
description: In this article, learn how to set up Microsoft Purview to explore an Azure Operator Insights Data Product. + Last updated 11/02/2023
operator-insights Troubleshoot Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/troubleshoot-mcc-edr-agent.md
Title: Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insi
description: Learn how to monitor MCC EDR Ingestion Agents and troubleshoot common issues + Last updated 10/30/2023
orbital Update Tle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/update-tle.md
Update the TLE of an existing spacecraft resource.
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A registered spacecraft. Learn more on how to [register spacecraft](register-spacecraft.md).
-## Update the spacecraft TLE
+## Azure portal method
1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results. 2. In the **Spacecraft** page, select the name of the spacecraft for which to update the ephemeris.
Update the TLE of an existing spacecraft resource.
5. Select the **Submit** button.
+## API method
+
+Use the Spacecrafts REST Operation Group to [update a spacecraft's TLE](/rest/api/orbital/azureorbitalgroundstation/spacecrafts/create-or-update/) in the Azure Orbital Ground Station API.
+ ## Next steps - [Schedule a contact](schedule-contact.md)
postgresql Concepts Scaling Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-scaling-resources.md
+
+ Title: Scaling Resources in Azure Database for PostgreSQL - Flexible Server
+description: This article describes the resource scaling in Azure Database for PostgreSQL - Flexible Server.
+++++ Last updated : 12/01/2023++
+# Scaling Resources in Azure Database for PostgreSQL - Flexible Server
++
+Azure Database for PostgreSQL - Flexible Server supports both vertical and horizontal scaling options.
+
+You scale vertically by adding more resources to the flexible server instance, such as increasing the instance-assigned number of CPUs and memory. Network throughput of your instance depends on the values you choose for CPU and memory. Once a flexible server instance is created, you can independently change the CPU (vCores), the amount of storage, and the backup retention period. The number of vCores can be scaled up or down. The storage size, however, can only be increased. In addition, you can scale the backup retention period up or down from 7 to 35 days. The resources can be scaled using multiple tools, for instance the [Azure portal](./quickstart-create-server-portal.md) or the [Azure CLI](./quickstart-create-server-cli.md).
+
+> [!NOTE]
+> After you increase the storage size, you can't go back to a smaller storage size.
+
+You scale horizontally by creating [read replicas](./concepts-read-replicas.md). Read replicas let you scale your read workloads onto separate flexible server instances without affecting the performance and availability of the primary instance.
+
+When you change the number of vCores or the compute tier, the instance is restarted for the new server type to take effect. While the system switches over to the new server type, no new connections can be established, and all uncommitted transactions are rolled back. The overall time it takes to restart your server depends on the crash recovery process and database activity at the time of the restart. Restarts typically take a minute or less, but they can take several minutes, depending on transactional activity at the time of the restart. Scaling the storage doesn't require a server restart in most cases. Similarly, changing the backup retention period is an online operation. To improve the restart time, we recommend that you perform scale operations during off-peak hours. That approach reduces the time needed to restart the database server.
+
+## Near-zero downtime scaling
+
+Near-zero downtime scaling is a feature designed to minimize downtime when modifying storage and compute tiers. If you modify the number of vCores or change the compute tier, the server undergoes a restart to apply the new configuration. During this transition to the new server, no new connections can be established. With regular scaling, this process typically takes anywhere from 2 to 10 minutes. With the near-zero downtime scaling feature, this duration is reduced to less than 30 seconds. This significant reduction in downtime when scaling resources greatly improves the overall availability of your database instance.
+
+### How it works
+
+When updating your flexible server in scaling scenarios, we create a new copy of your server (VM) with the updated configuration, synchronize it with your current one, briefly switch to the new copy with a 30-second interruption, and retire the old server, all at no extra cost to you. This process allows for seamless updates while minimizing downtime and ensuring cost-efficiency. This scaling process is triggered when changes are made to the storage and compute tiers, and the experience remains consistent for both high availability (HA) and non-HA servers. This feature is enabled in all Azure regions and there's **no customer action required** to use this capability.
+
+> [!NOTE]
+> The near-zero downtime scaling process is the _default_ operation. However, in cases where the following limitations are encountered, the system switches to regular scaling, which involves more downtime than near-zero downtime scaling.
+
+#### Prerequisites
+- For near-zero downtime scaling to work, you should enable all inbound and outbound connections between the IPs in the delegated subnet. If these connections aren't enabled, the near-zero downtime scaling process doesn't work, and scaling occurs through the standard scaling workflow.
+
+#### Limitations
+
+- Near-zero downtime scaling doesn't work if there are regional capacity constraints or quota limits on your subscription.
+- Near-zero downtime scaling works only for the primary server; replica servers automatically go through the regular scaling process.
+- Near-zero downtime scaling doesn't work if a virtual network-injected server with a delegated subnet doesn't have sufficient usable IP addresses. A standalone server needs one extra IP address, and an HA-enabled server needs two extra IP addresses.
+
+## Related content
+
+- [create a PostgreSQL server in the portal](how-to-manage-server-portal.md)
+- [service limits](concepts-limits.md)
private-5g-core Reliability Private 5G Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/reliability-private-5g-core.md
Last updated 01/31/2022
# Reliability for Azure Private 5G Core
-This article describes reliability support in Azure Private 5G Core. It covers both regional resiliency with availability zones and cross-region resiliency with disaster recovery. For an overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
-
-See [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=private-5g-core) for the Azure regions where Azure Private 5G Core is available.
+This article describes reliability support in Azure Private 5G Core. It covers both regional resiliency with [availability zones](#availability-zone-support) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity). For an overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
## Availability zone support
+
+The Azure Private 5G Core service is automatically deployed as zone-redundant in Azure regions that support availability zones, as listed in [Availability zone service and regional support](../reliability/availability-zones-service-support.md). If a region supports availability zones, then all Azure Private 5G Core resources created in a region can be managed from any of the availability zones. No further work is required to configure or manage availability zones. Failover between availability zones is automatic.
+
+### Prerequisites
+
+See [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=private-5g-core) for the Azure regions where Azure Private 5G Core is available.
+ ### Zone down experience In a zone-wide outage scenario, users should experience no impact because the service will move to take advantage of the healthy zone automatically. At the start of a zone-wide outage, you may see in-progress ARM requests time-out or fail. New requests will be directed to healthy nodes with zero impact on users and any failed operations should be retried. You'll still be able to create new resources and update, monitor and manage existing resources during the outage.
In a zone-wide outage scenario, users should experience no impact because the se
The application ensures that all cloud state is replicated between availability zones in the region so all management operations will continue without interruption. The packet core is running at the Edge and is unaffected by the zone failure, so will continue to provide service for users.
-## Disaster recovery: cross-region failover
+
+## Cross-region disaster recovery and business continuity
+
-Azure Private 5G Core is only available in multi-region (3+N) geographies. The service automatically replicates SIM credentials to a backup region in the same geography. This means that there's no loss of data in the event of region failure. Within four hours of the failure, all resources in the failed region are available to view through the Azure portal and ARM tools but will be read-only until the failed region is recovered. the packet running at the Edge continues to operate without interruption and network connectivity will be maintained.
-### Cross-region disaster recovery in multi-region geography
+Azure Private 5G Core is only available in multi-region (3+N) geographies. The service automatically replicates SIM credentials to a backup region in the same geography. This means that there's no loss of data in the event of region failure. Within four hours of the failure, all resources in the failed region are available to view through the Azure portal and ARM tools but will be read-only until the failed region is recovered. The packet core running at the Edge continues to operate without interruption and network connectivity will be maintained.
Microsoft is responsible for outage detection, notification and support for the Azure cloud aspects of the Azure Private 5G Core service.
-#### Outage detection, notification, and management
+### Outage detection, notification, and management
Microsoft monitors the underlying resources providing the Azure Private 5G Core service in each region. If those resources start to show failures or health monitoring alerts that aren't restricted to a single availability zone then Microsoft will move the service to another supported region in the same geography. This is an Active-Active pattern. The service health for a particular region can be found on [Azure Service Health](https://status.azure.com/status) (Azure Private 5G Core is listed in the **Networking** section). You'll be notified of any region failures through normal Azure communications channels.
Note that this will cause an outage of your packet core service and interrupt ne
In advance of a disaster recovery event, you must back up your resource configuration to another region that supports Azure Private 5G Core. When the region failure occurs, you can redeploy the packet core using the resources in your backup region.
-##### Preparation
+#### Preparation
There are two types of Azure Private 5G Core configuration data that need to be backed up for disaster recovery: mobile network configuration and SIM credentials. We recommend that you:
For security reasons, Azure Private 5G Core will never return the SIM credential
<br></br> Your Azure Private 5G Core deployment may make use of Azure Key Vaults for storing [SIM encryption keys](./security.md#customer-managed-key-encryption-at-rest) or HTTPS certificates for [local monitoring](./security.md#access-to-local-monitoring-tools). You must follow the [Azure Key Vault documentation](../key-vault/general/disaster-recovery-guidance.md) to ensure that your keys and certificates will be available in the backup region.
-##### Recovery
+#### Recovery
In the event of a region failure, first validate that all the resources in your backup region are present by querying the configuration through the Azure portal or API (see [Move resources to a different region](./region-move-private-mobile-network-resources.md)). If all the resources aren't present, stop here and don't follow the rest of this procedure. You may not be able to recover service at the edge site without the resource configuration. The recovery process is split into three stages for each packet core:
Take a copy of the **packetCoreControlPlanes.platform** values you stored in [Pr
You should follow your normal process for validating a new site install to confirm that UE connectivity has been restored and all network functionality is operational. In particular, you should confirm that the site dashboards in the Azure portal show UE registrations and that data is flowing through the data plane.
-##### Failed region restored
+#### Failed region restored
When the failed region recovers, you should ensure the configuration in the two regions is in sync by performing a backup from the active backup region to the recovered primary region, following the steps in [Preparation](#preparation).
You must also check for and remove any resources in the recovered region that ha
You then have two choices for ongoing management:
-1. Use the operational backup region as the new primary region and use the recovered region as a backup. No further action is required.
-1. Make the recovered region the new active primary region by following the instructions in [Move resources to a different region](./region-move-private-mobile-network-resources.md) to switch back to the recovered region.
+- Use the operational backup region as the new primary region and use the recovered region as a backup. No further action is required.
+- Make the recovered region the new active primary region by following the instructions in [Move resources to a different region](./region-move-private-mobile-network-resources.md) to switch back to the recovered region.
-##### Testing
+#### Testing
If you want to test your disaster recovery plans, you can follow the recovery procedure for a single packet core at any time. Note that this will cause a service outage of your packet core service and interrupt network connectivity to your UEs for up to four hours, so we recommend only doing this with non-production packet core deployments or at a time when an outage won't adversely affect your business.

## Next steps

-- [Resiliency in Azure](/azure/availability-zones/overview)
+- [Reliability in Azure](/azure/reliability/overview)
reliability Business Continuity Management Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/business-continuity-management-program.md
For example:
## Shared responsibility model
-Many of the offerings Azure provides require customers to set up disaster recovery in multiple regions and aren't the responsibility of Microsoft. Not all Azure services automatically replicate data or automatically fall back from a failed region to cross-replicate to another enabled region. In these cases, recovery and replication must be configured by the customer.
+Many of the offerings Azure provides require you to set up disaster recovery in multiple regions and aren't the responsibility of Microsoft. Not all Azure services automatically replicate data or automatically fall back from a failed region to cross-replicate to another enabled region. In these cases, you are responsible for configuring recovery and replication.
-Microsoft does ensure that the baseline infrastructure and platform services are available. But in some scenarios, usage requires the customer to duplicate their deployments and storage in a multi-region capacity, if they opt to. These examples illustrate the shared responsibility model. It's a fundamental pillar in your business continuity and disaster recovery strategy.
+Microsoft does ensure that the baseline infrastructure and platform services are available. But in some scenarios, usage demands that you duplicate your deployments and storage in a multi-region capacity, if you choose to. These examples illustrate the shared responsibility model. It's a fundamental pillar in your business continuity and disaster recovery strategy.
### Division of responsibility
In any on-premises datacenter, you own the whole stack. As you move assets to th
![A visual showing what responsibilities belong to the cloud customer versus the cloud provider.](./media/shared-responsibility-model.png)
-A good example of the shared responsibility model is the deployment of virtual machines. If a customer wants to set up *cross-region replication* for resiliency if there's region failure, they must deploy a duplicate set of virtual machines in an alternate enabled region. Azure doesn't automatically replicate these services over if there's a failure. It's the customer's responsibility to deploy necessary assets. The customer must have a process to manually change primary regions, or they must use a traffic manager to detect and automatically fail over.
+A good example of the shared responsibility model is the deployment of virtual machines. If you want to set up *cross-region replication* for resiliency if there's region failure, you must deploy a duplicate set of virtual machines in an alternate enabled region. Azure doesn't automatically replicate these services over if there's a failure. It's your responsibility to deploy necessary assets. You must have a process to manually change primary regions, or you must use a traffic manager to detect and automatically fail over.
Customer-enabled disaster recovery services all have public-facing documentation to guide you. For an example of public-facing documentation for customer-enabled disaster recovery, see [Azure Data Lake Analytics](../data-lake-analytics/data-lake-analytics-disaster-recovery.md).
Each service is required to complete Business Continuity Disaster Recovery recor
- **Recovery plan and test**: Azure requires every service to have a detailed recovery plan and to test that plan as if the service has failed because of catastrophic outage. The recovery plans are required to be written so that someone with similar skills and access can complete the tasks. A written plan avoids relying on subject matter experts being available.
- Testing is done in several ways, including self-test in a production or near-production environment, and as part of Azure full-region down drills in canary region sets. These enabled regions are identical to production regions but can be disabled without affecting customers. Testing is considered integrated because all services are affected simultaneously.
+ Testing is done in several ways, including self-test in a production or near-production environment, and as part of Azure full-region down drills in canary region sets. These enabled regions are identical to production regions but can be disabled without affecting your services. Testing is considered integrated because all services are affected simultaneously.
-- **Customer enablement**: When the customer is responsible for setting up disaster recovery, Azure is required to have public-facing documentation guidance. For all such services, links are provided to documentation and details about the process.
+- **Customer enablement**: When you are responsible for setting up disaster recovery, Azure is required to have public-facing documentation guidance. For all such services, links are provided to documentation and details about the process.
## Verify your business continuity compliance
To ensure services can similarly recover in a true region-down scenario, &quot;p
During these tests, Azure uses the same production process for detection, notification, response, and recovery. No individuals are expecting a drill, and engineers relied on for recovery are the normal on-call rotation resources. This timing avoids depending on subject matter experts who might not be available during an actual event.
-Included in these tests are services where the customer is responsible for setting up disaster recovery following Microsoft public-facing documentation. Service teams create customer-like instances to show that customer-enabled disaster recovery works as expected and that the instructions provided are accurate.
+Included in these tests are services where you are responsible for setting up disaster recovery following Microsoft public-facing documentation. Service teams create customer-like instances to show that customer-enabled disaster recovery works as expected and that the instructions provided are accurate.
For more information on certifications, see the [Microsoft Trust Center](https://www.microsoft.com/trust-center) and the section on compliance.
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
Azure reliability guidance contains the following:
| **Products** | | | |[Azure Cosmos DB](../cosmos-db/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Database for PostgreSQL - Flexible Server](reliability-postgresql-flexible-server.md)|
[Azure Event Hubs](../event-hubs/event-hubs-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zones)| [Azure ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
Azure reliability guidance contains the following:
[Azure Service Fabric](../service-fabric/service-fabric-cross-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Site Recovery](../site-recovery/site-recovery-overview.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure SQL](/azure/azure-sql/database/high-availability-sla?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Storage: Blob Storage](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Storage Mover](reliability-azure-storage-mover.md)| [Azure Virtual Machine Scale Sets](reliability-virtual-machine-scale-sets.md)| [Azure Virtual Machines](reliability-virtual-machines.md)|
Azure reliability guidance contains the following:
| **Products** | | |
+[Azure AI Search](../search/search-reliability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
|[Azure API Management](../api-management/high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure App Configuration](../azure-app-configuration/faq.yml?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-does-app-configuration-ensure-high-data-availability)| [Azure App Service](./reliability-app-service.md)|
Azure reliability guidance contains the following:
[Azure Batch](reliability-batch.md)| [Azure Bot Service](reliability-bot.md)| [Azure Cache for Redis](../azure-cache-for-redis/cache-how-to-zone-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure AI Search](../search/search-reliability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Communications Gateway](../communications-gateway/reliability-communications-gateway.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Container Apps](reliability-azure-container-apps.md)| [Azure Container Instances](reliability-containers.md)|
Azure reliability guidance contains the following:
[Azure Data Factory](../data-factory/concepts-data-redundancy.md?bc=%2fazure%2freliability%2fbreadcrumb%2ftoc.json&toc=%2fazure%2freliability%2ftoc.json)| [Azure Database for MySQL - Flexible Server](../mysql/flexible-server/concepts-high-availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Database for PostgreSQL - Flexible Server](./reliability-postgresql-flexible-server.md)|
-[Azure Data Manager for Energy](./reliability-energy-data-services.md) |
[Azure DDoS Protection](../ddos-protection/ddos-faq.yml?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Disk Encryption](../virtual-machines/disks-redundancy.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure DNS - Azure DNS Private Zones](../dns/private-dns-getstarted-portal.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
Azure reliability guidance contains the following:
[Azure Functions](reliability-functions.md)| [Azure HDInsight](reliability-hdinsight.md)| [Azure IoT Hub](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Image Builder](reliability-image-builder.md)|
[Azure Kubernetes Service (AKS)](../aks/availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Logic Apps](../logic-apps/set-up-zone-redundancy-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Monitor](../azure-monitor/logs/availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Network Watcher](../network-watcher/frequently-asked-questions.yml?bc=%2fazure%2freliability%2fbreadcrumb%2ftoc.json&toc=%2fazure%2freliability%2ftoc.json#service-availability-and-redundancy)| [Azure Notification Hubs](../notification-hubs/availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
-[Azure Operator Nexus Network Cloud](reliability-operator-nexus.md)|
-[Azure Private 5G Core](../private-5g-core/reliability-private-5g-core.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Operator Nexus](reliability-operator-nexus.md)|
[Azure Private Link](../private-link/availability.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Azure Route Server](../route-server/route-server-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+[Azure Storage - Blob Storage](../storage/common/storage-disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
[Azure Virtual WAN](../virtual-wan/virtual-wan-faq.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#how-are-availability-zones-and-resiliency-handled-in-virtual-wan)| [Azure Web Application Firewall](../firewall/deploy-availability-zone-powershell.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
Azure reliability guidance contains the following:
| **Products** | |--|
-| [Azure Spring Apps](reliability-spring-apps.md) |
+|[Azure Cosmos DB for MongoDB vCore](../cosmos-db/mongodb/vcore/failover-disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
+| [Azure Data Manager for Energy](./reliability-energy-data-services.md) |
+| [Azure Deployment Environments](reliability-deployment-environments.md)|
+|[Azure Private 5G Core](../private-5g-core/reliability-private-5g-core.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
+| [Azure Spring Apps](reliability-spring-apps.md) |
+| [Azure Storage Mover](./reliability-azure-storage-mover.md)|
## Azure Service Manager Retirement
role-based-access-control Built In Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/built-in-roles.md
The following table provides a brief description of each built-in role. Click th
> | [Contributor](#contributor) | Grants full access to manage all resources, but does not allow you to assign roles in Azure RBAC, manage assignments in Azure Blueprints, or share image galleries. | b24988ac-6180-42a0-ab88-20f7382dd24c | > | [Owner](#owner) | Grants full access to manage all resources, including the ability to assign roles in Azure RBAC. | 8e3af657-a8ff-443c-a75c-2fe8c4bcb635 | > | [Reader](#reader) | View all resources, but does not allow you to make any changes. | acdd72a7-3385-48ef-bd42-f606fba81ae7 |
-> | [Role Based Access Control Administrator (Preview)](#role-based-access-control-administrator-preview) | Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy. | f58310d9-a9f6-439a-9e8d-f62e7b41a168 |
+> | [Role Based Access Control Administrator](#role-based-access-control-administrator) | Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy. | f58310d9-a9f6-439a-9e8d-f62e7b41a168 |
> | [User Access Administrator](#user-access-administrator) | Lets you manage user access to Azure resources. | 18d7d88d-d35e-4fb5-a5c3-7773c20a72d9 | > | **Compute** | | | > | [Classic Virtual Machine Contributor](#classic-virtual-machine-contributor) | Lets you manage classic virtual machines, but not access to them, and not the virtual network or storage account they're connected to. | d73bb868-a0df-4d4d-bd69-98a00b01fccb |
The following table provides a brief description of each built-in role. Click th
> | [Virtual Machine User Login](#virtual-machine-user-login) | View Virtual Machines in the portal and login as a regular user. | fb879df8-f326-4884-b1cf-06f3ad86be52 | > | [Windows Admin Center Administrator Login](#windows-admin-center-administrator-login) | Let's you manage the OS of your resource via Windows Admin Center as an administrator. | a6333a3e-0164-44c3-b281-7a577aff287f | > | **Networking** | | |
+> | [Azure Front Door Domain Contributor](#azure-front-door-domain-contributor) | Can manage Azure Front Door domains, but can't grant access to other users. | 0ab34830-df19-4f8c-b84e-aa85b8afa6e8 |
+> | [Azure Front Door Domain Reader](#azure-front-door-domain-reader) | Can view Azure Front Door domains, but can't make changes. | 0f99d363-226e-4dca-9920-b807cf8e1a5f |
+> | [Azure Front Door Profile Reader](#azure-front-door-profile-reader) | Can view AFD standard and premium profiles and their endpoints, but can't make changes. | 662802e2-50f6-46b0-aed2-e834bacc6d12 |
+> | [Azure Front Door Secret Contributor](#azure-front-door-secret-contributor) | Can manage Azure Front Door secrets, but can't grant access to other users. | 3f2eb865-5811-4578-b90a-6fc6fa0df8e5 |
+> | [Azure Front Door Secret Reader](#azure-front-door-secret-reader) | Can view Azure Front Door secrets, but can't make changes. | 0db238c4-885e-4c4f-a933-aa2cef684fca |
> | [CDN Endpoint Contributor](#cdn-endpoint-contributor) | Can manage CDN endpoints, but can't grant access to other users. | 426e0c7f-0c7e-4658-b36f-ff54d6c29b45 | > | [CDN Endpoint Reader](#cdn-endpoint-reader) | Can view CDN endpoints, but can't make changes. | 871e35f6-b5c1-49cc-a043-bde969a0f2cd | > | [CDN Profile Contributor](#cdn-profile-contributor) | Can manage CDN profiles and their endpoints, but can't grant access to other users. | ec156ff8-a8d1-4d15-830c-5b80698ca432 |
The following table provides a brief description of each built-in role. Click th
> | [Schema Registry Reader (Preview)](#schema-registry-reader-preview) | Read and list Schema Registry groups and schemas. | 2c56ea50-c6b3-40a6-83c0-9d98858bc7d2 | > | [Stream Analytics Query Tester](#stream-analytics-query-tester) | Lets you perform query testing without creating a stream analytics job first | 1ec5b3c1-b17e-4e25-8312-2acb3c3c5abf | > | **AI + machine learning** | | |
+> | [AzureML Compute Operator](#azureml-compute-operator) | Can access and perform CRUD operations on Machine Learning Services managed compute resources (including Notebook VMs). | e503ece1-11d0-4e8e-8e2c-7a6c3bf38815 |
> | [AzureML Data Scientist](#azureml-data-scientist) | Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself. | f6c7c914-8db3-469d-8ca1-694a8f32e121 | > | [Cognitive Services Contributor](#cognitive-services-contributor) | Lets you create, read, update, delete and manage keys of Cognitive Services. | 25fbc0a9-bd7c-42a3-aa1a-3b75d497ee68 | > | [Cognitive Services Custom Vision Contributor](#cognitive-services-custom-vision-contributor) | Full access to the project, including the ability to view, create, edit, or delete projects. | c1ff6cc2-c111-46fe-8896-e0ef812ad9f3 |
View all resources, but does not allow you to make any changes. [Learn more](rba
} ```
-### Role Based Access Control Administrator (Preview)
+### Role Based Access Control Administrator
Manage access to Azure resources by assigning roles using Azure RBAC. This role does not allow you to manage access using other ways, such as Azure Policy.
Manage access to Azure resources by assigning roles using Azure RBAC. This role
"notDataActions": [] } ],
- "roleName": "Role Based Access Control Administrator (Preview)",
+ "roleName": "Role Based Access Control Administrator",
"roleType": "BuiltInRole", "type": "Microsoft.Authorization/roleDefinitions" }
Let's you manage the OS of your resource via Windows Admin Center as an administ
## Networking
+### Azure Front Door Domain Contributor
+
+Can manage Azure Front Door domains, but can't grant access to other users.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/operationresults/profileresults/customdomainresults/read | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/customdomains/read | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/customdomains/write | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/customdomains/delete | |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can manage Azure Front Door domains, but can't grant access to other users.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/0ab34830-df19-4f8c-b84e-aa85b8afa6e8",
+ "name": "0ab34830-df19-4f8c-b84e-aa85b8afa6e8",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Cdn/operationresults/profileresults/customdomainresults/read",
+ "Microsoft.Cdn/profiles/customdomains/read",
+ "Microsoft.Cdn/profiles/customdomains/write",
+ "Microsoft.Cdn/profiles/customdomains/delete",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Front Door Domain Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
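
For illustration only, here's what granting this role at the scope of a single Front Door profile might look like. This is a hedged sketch that assumes the Azure CLI; the subscription ID, resource group, profile name, and user are placeholders.

```azurecli
# Illustrative sketch: the subscription, resource group, profile, and user below are placeholders.
# Grants Azure Front Door Domain Contributor on one Front Door Standard/Premium profile.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Azure Front Door Domain Contributor" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.Cdn/profiles/example-afd-profile"
```

Scoping the assignment to the profile rather than the subscription keeps the grant as narrow as the role itself.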
+
+### Azure Front Door Domain Reader
+
+Can view Azure Front Door domains, but can't make changes.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/operationresults/profileresults/customdomainresults/read | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/customdomains/read | |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can view Azure Front Door domains, but can't make changes.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/0f99d363-226e-4dca-9920-b807cf8e1a5f",
+ "name": "0f99d363-226e-4dca-9920-b807cf8e1a5f",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Cdn/operationresults/profileresults/customdomainresults/read",
+ "Microsoft.Cdn/profiles/customdomains/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Front Door Domain Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Azure Front Door Profile Reader
+
+Can view AFD standard and premium profiles and their endpoints, but can't make changes.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Authorization](resource-provider-operations.md#microsoftauthorization)/*/read | Read roles and role assignments |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/edgenodes/read | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/operationresults/* | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/*/read | |
+> | [Microsoft.Insights](resource-provider-operations.md#microsoftinsights)/alertRules/* | Create and manage a classic metric alert |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/deployments/* | Create and manage a deployment |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/operationresults/profileresults/afdendpointresults/CheckCustomDomainDNSMappingStatus/action | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/queryloganalyticsmetrics/action | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/queryloganalyticsrankings/action | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/querywafloganalyticsmetrics/action | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/querywafloganalyticsrankings/action | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/afdendpoints/CheckCustomDomainDNSMappingStatus/action | |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can view AFD standard and premium profiles and their endpoints, but can't make changes.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/662802e2-50f6-46b0-aed2-e834bacc6d12",
+ "name": "662802e2-50f6-46b0-aed2-e834bacc6d12",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Authorization/*/read",
+ "Microsoft.Cdn/edgenodes/read",
+ "Microsoft.Cdn/operationresults/*",
+ "Microsoft.Cdn/profiles/*/read",
+ "Microsoft.Insights/alertRules/*",
+ "Microsoft.Resources/deployments/*",
+ "Microsoft.Resources/subscriptions/resourceGroups/read",
+ "Microsoft.Cdn/operationresults/profileresults/afdendpointresults/CheckCustomDomainDNSMappingStatus/action",
+ "Microsoft.Cdn/profiles/queryloganalyticsmetrics/action",
+ "Microsoft.Cdn/profiles/queryloganalyticsrankings/action",
+ "Microsoft.Cdn/profiles/querywafloganalyticsmetrics/action",
+ "Microsoft.Cdn/profiles/querywafloganalyticsrankings/action",
+ "Microsoft.Cdn/profiles/afdendpoints/CheckCustomDomainDNSMappingStatus/action"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Front Door Profile Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Azure Front Door Secret Contributor
+
+Can manage Azure Front Door secrets, but can't grant access to other users.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/operationresults/profileresults/secretresults/read | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/secrets/read | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/secrets/write | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/secrets/delete | |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can manage Azure Front Door secrets, but can't grant access to other users.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/3f2eb865-5811-4578-b90a-6fc6fa0df8e5",
+ "name": "3f2eb865-5811-4578-b90a-6fc6fa0df8e5",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Cdn/operationresults/profileresults/secretresults/read",
+ "Microsoft.Cdn/profiles/secrets/read",
+ "Microsoft.Cdn/profiles/secrets/write",
+ "Microsoft.Cdn/profiles/secrets/delete",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Front Door Secret Contributor",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+
+### Azure Front Door Secret Reader
+
+Can view Azure Front Door secrets, but can't make changes.
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/operationresults/profileresults/secretresults/read | |
+> | [Microsoft.Cdn](resource-provider-operations.md#microsoftcdn)/profiles/secrets/read | |
+> | [Microsoft.Resources](resource-provider-operations.md#microsoftresources)/subscriptions/resourceGroups/read | Gets or lists resource groups. |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can view Azure Front Door secrets, but can't make changes.",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/0db238c4-885e-4c4f-a933-aa2cef684fca",
+ "name": "0db238c4-885e-4c4f-a933-aa2cef684fca",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.Cdn/operationresults/profileresults/secretresults/read",
+ "Microsoft.Cdn/profiles/secrets/read",
+ "Microsoft.Resources/subscriptions/resourceGroups/read"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "Azure Front Door Secret Reader",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
+ ### CDN Endpoint Contributor Can manage CDN endpoints, but can't grant access to other users.
Lets you perform query testing without creating a stream analytics job first
## AI + machine learning
+### AzureML Compute Operator
+
+Can access and perform CRUD operations on Machine Learning Services managed compute resources (including Notebook VMs). [Learn more](../machine-learning/how-to-assign-roles.md)
+
+> [!div class="mx-tableFixed"]
+> | Actions | Description |
+> | | |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/computes/* | |
+> | [Microsoft.MachineLearningServices](resource-provider-operations.md#microsoftmachinelearningservices)/workspaces/notebooks/vm/* | |
+> | **NotActions** | |
+> | *none* | |
+> | **DataActions** | |
+> | *none* | |
+> | **NotDataActions** | |
+> | *none* | |
+
+```json
+{
+ "assignableScopes": [
+ "/"
+ ],
+ "description": "Can access and perform CRUD operations on Machine Learning Services managed compute resources (including Notebook VMs).",
+ "id": "/providers/Microsoft.Authorization/roleDefinitions/e503ece1-11d0-4e8e-8e2c-7a6c3bf38815",
+ "name": "e503ece1-11d0-4e8e-8e2c-7a6c3bf38815",
+ "permissions": [
+ {
+ "actions": [
+ "Microsoft.MachineLearningServices/workspaces/computes/*",
+ "Microsoft.MachineLearningServices/workspaces/notebooks/vm/*"
+ ],
+ "notActions": [],
+ "dataActions": [],
+ "notDataActions": []
+ }
+ ],
+ "roleName": "AzureML Compute Operator",
+ "roleType": "BuiltInRole",
+ "type": "Microsoft.Authorization/roleDefinitions"
+}
+```
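
As a hedged sketch of how this role is typically granted (the subscription, resource group, workspace, and user below are placeholders, and the Azure CLI is assumed), an assignment scoped to a single Azure Machine Learning workspace might look like:

```azurecli
# Illustrative sketch: placeholder subscription, resource group, workspace, and user.
# Grants AzureML Compute Operator on a single Azure Machine Learning workspace.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "AzureML Compute Operator" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.MachineLearningServices/workspaces/example-workspace"
```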
+ ### AzureML Data Scientist
-Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself.
+Can perform all actions within an Azure Machine Learning workspace, except for creating or deleting compute resources and modifying the workspace itself. [Learn more](../machine-learning/how-to-assign-roles.md)
> [!div class="mx-tableFixed"] > | Actions | Description |
role-based-access-control Conditions Custom Security Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-custom-security-attributes.md
Previously updated : 11/15/2023 Last updated : 12/01/2023 #Customer intent: As a dev, devops, or it admin, I want to
In this article, you learn how to allow read access to blobs based on blob index
To assign custom security attributes and add role assignments conditions in your Microsoft Entra tenant, you need: - [Attribute Definition Administrator](../active-directory/roles/permissions-reference.md#attribute-definition-administrator) and [Attribute Assignment Administrator](../active-directory/roles/permissions-reference.md#attribute-assignment-administrator)-- [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner)
+- [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator)
> [!IMPORTANT] > By default, [Global Administrator](../active-directory/roles/permissions-reference.md#global-administrator) and other administrator roles do not have permissions to read, define, or assign custom security attributes. If you do not meet these prerequisites, you won't see the principal/user attributes in the condition editor.
You can also use Azure PowerShell to add role assignment conditions. The followi
### Add a condition
-1. Use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command and follow the instructions that appear to sign in to your directory as User Access Administrator or Owner.
+1. Use the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command and follow the instructions that appear to sign in to your directory as Role Based Access Control Administrator.
```powershell Connect-AzAccount
You can also use Azure CLI to add role assignments conditions. The following com
### Add a condition
-1. Use the [az login](/cli/azure/reference-index#az-login) command and follow the instructions that appear to sign in to your directory as User Access Administrator or Owner.
+1. Use the [az login](/cli/azure/reference-index#az-login) command and follow the instructions that appear to sign in to your directory as Role Based Access Control Administrator.
```azurecli az login
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-overview.md
Previously updated : 04/11/2023 Last updated : 12/01/2023 #Customer intent: As a dev, devops, or it admin, I want to learn how to constrain access within a role assignment by using conditions.
Some features of conditions are still in preview. The following table lists the
| Add conditions using the [condition editor in the Azure portal](conditions-role-assignments-portal.md) | GA | October 2022 | | Add conditions using [Azure PowerShell](conditions-role-assignments-powershell.md), [Azure CLI](conditions-role-assignments-cli.md), or [REST API](conditions-role-assignments-rest.md) | GA | October 2022 | | Use [resource and request attributes](conditions-format.md#attributes) for specific combinations of Azure storage resources, access attribute types, and storage account performance tiers. For more information, see [Status of condition features in Azure Storage](../storage/blobs/storage-auth-abac.md#status-of-condition-features-in-azure-storage). | GA | October 2022 |
-| Use [custom security attributes on a principal](conditions-format.md#principal-attributes) in a condition | Preview | November 2021 |
+| Use [custom security attributes on a principal](conditions-format.md#principal-attributes) in a condition | GA | November 2023 |
<a name='conditions-and-azure-ad-pim'></a>
role-based-access-control Conditions Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-prerequisites.md
Previously updated : 11/15/2023 Last updated : 12/01/2023
For more information, see [API versions of Azure RBAC REST APIs](/rest/api/autho
## Permissions
-Just like role assignments, to add or update conditions, you must be signed in to Azure with a user that has the `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner).
+Just like role assignments, to add or update conditions, you must be signed in to Azure with a user that has the `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator).
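
As an illustrative check (the sign-in name below is a placeholder, and the Azure CLI is assumed), you can list the role assignments for your account to confirm you hold a role that carries these permissions:

```azurecli
# Illustrative sketch: the user principal name is a placeholder.
# Lists role assignments for the given user, including inherited and group-based assignments.
az role assignment list \
  --assignee "user@contoso.com" \
  --include-inherited \
  --include-groups \
  --output table
```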
## Principal attributes
role-based-access-control Custom Roles Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-bicep.md
Previously updated : 07/01/2022 Last updated : 12/01/2023 #Customer intent: As an IT admin, I want to create custom and/or roles using Bicep so that I can start automating custom role processes.
To create a custom role, you specify a role name, role permissions, and where th
## Prerequisites
-To create a custom role, you must have permissions to create custom roles, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator).
+To create a custom role, you must have permissions to create custom roles, such as [User Access Administrator](built-in-roles.md#user-access-administrator).
You also must have an active Azure subscription. If you don't have one, you can create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
role-based-access-control Custom Roles Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-cli.md
na Previously updated : 04/05/2023 Last updated : 12/01/2023
For a step-by-step tutorial on how to create a custom role, see [Tutorial: Creat
To create custom roles, you need: -- Permissions to create custom roles, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator)
+- Permissions to create custom roles, such as [User Access Administrator](built-in-roles.md#user-access-administrator)
- [Azure Cloud Shell](../cloud-shell/overview.md) or [Azure CLI](/cli/azure/install-azure-cli) ## List custom roles
role-based-access-control Custom Roles Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-powershell.md
na Previously updated : 04/05/2023 Last updated : 12/01/2023
For a step-by-step tutorial on how to create a custom role, see [Tutorial: Creat
To create custom roles, you need: -- Permissions to create custom roles, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator)
+- Permissions to create custom roles, such as [User Access Administrator](built-in-roles.md#user-access-administrator)
- [Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-azure-powershell) ## List custom roles
role-based-access-control Custom Roles Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-rest.md
rest-api Previously updated : 04/05/2023 Last updated : 12/01/2023
To create a custom role, use the [Role Definitions - Create Or Update](/rest/api
## Update a custom role
-To update a custom role, use the [Role Definitions - Create Or Update](/rest/api/authorization/role-definitions/create-or-update) REST API. To call this API, you must be signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/write` permission on all the `assignableScopes`. Of the built-in roles, only [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) include this permission.
+To update a custom role, use the [Role Definitions - Create Or Update](/rest/api/authorization/role-definitions/create-or-update) REST API. To call this API, you must be signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/write` permission on all the `assignableScopes`, such as [User Access Administrator](built-in-roles.md#user-access-administrator).
1. Use the [Role Definitions - List](/rest/api/authorization/role-definitions/list) or [Role Definitions - Get](/rest/api/authorization/role-definitions/get) REST API to get information about the custom role. For more information, see the earlier [List all custom role definitions](#list-all-custom-role-definitions) section.
role-based-access-control Custom Roles Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/custom-roles-template.md
Previously updated : 10/19/2022 Last updated : 12/01/2023 #Customer intent: As an IT admin, I want to create custom roles by using an Azure Resource Manager template so that I can start automating custom role processes.
If your environment meets the prerequisites and you're familiar with using ARM t
To create a custom role, you must have: -- Permissions to create custom roles, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator).
+- Permissions to create custom roles, such as [User Access Administrator](built-in-roles.md#user-access-administrator).
You must use the following version:
role-based-access-control Delegate Role Assignments Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-examples.md
Previously updated : 11/29/2023 Last updated : 12/01/2023 #Customer intent: As a dev, devops, or it admin, I want to learn about the conditions so that I write more complex conditions.
New-AzRoleAssignment -ObjectId $principalId -Scope $scope -RoleDefinitionId $rol
## Example: Allow most roles, but don't allow others to assign roles
-This condition allows a delegate to add or remove role assignments for all roles except the [Owner](built-in-roles.md#owner), [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator-preview), and [User Access Administrator](built-in-roles.md#user-access-administrator) roles.
+This condition allows a delegate to add or remove role assignments for all roles except the [Owner](built-in-roles.md#owner), [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator), and [User Access Administrator](built-in-roles.md#user-access-administrator) roles.
This condition is useful when you want to allow a delegate to assign most roles, but not allow the delegate to allow others to assign roles.
To target both the add and remove role assignment actions, notice that you must
> | Attribute | [Role definition ID](conditions-authorization-actions-attributes.md#role-definition-id) | > | Operator | [ForAnyOfAnyValues:GuidNotEquals](conditions-format.md#foranyofanyvalues) | > | Comparison | Value |
-> | Roles | [Owner](built-in-roles.md#owner)<br/>[Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator-preview)<br/>[User Access Administrator](built-in-roles.md#user-access-administrator) |
+> | Roles | [Owner](built-in-roles.md#owner)<br/>[Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator)<br/>[User Access Administrator](built-in-roles.md#user-access-administrator) |
> [!div class="mx-tableFixed"] > | Condition #2 | Setting |
To target both the add and remove role assignment actions, notice that you must
> | Attribute | [Role definition ID](conditions-authorization-actions-attributes.md#role-definition-id) | > | Operator | [ForAnyOfAnyValues:GuidNotEquals](conditions-format.md#foranyofanyvalues) | > | Comparison | Value |
-> | Roles | [Owner](built-in-roles.md#owner)<br/>[Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator-preview)<br/>[User Access Administrator](built-in-roles.md#user-access-administrator) |
+> | Roles | [Owner](built-in-roles.md#owner)<br/>[Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator)<br/>[User Access Administrator](built-in-roles.md#user-access-administrator) |
``` (
role-based-access-control Delegate Role Assignments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-overview.md
Previously updated : 11/29/2023 Last updated : 12/01/2023 #Customer intent: As a dev, devops, or it admin, I want to delegate Azure role assignment management to other users who are closer to the decision, but want to limit the scope of the role assignments.
Here are some reasons why delegating role assignment management to others with c
Consider an example where Alice is an administrator with the User Access Administrator role for a subscription. Alice wants to grant Dara the ability to assign specific roles for specific groups. Alice doesn't want Dara to have any other role assignment permissions. The following diagram shows how Alice can delegate role assignment responsibilities to Dara with conditions.
-1. Alice assigns the Role Based Access Control Administrator (Preview) role to Dara. Alice adds conditions so that Dara can only assign the Backup Contributor or Backup Reader roles to the Marketing and Sales groups.
+1. Alice assigns the Role Based Access Control Administrator role to Dara. Alice adds conditions so that Dara can only assign the Backup Contributor or Backup Reader roles to the Marketing and Sales groups.
1. Dara can now assign the Backup Contributor or Backup Reader roles to the Marketing and Sales groups. 1. If Dara attempts to assign other roles or assign any roles to different principals (such as a user or managed identity), the role assignment fails.
Consider an example where Alice is an administrator with the User Access Adminis
## Role Based Access Control Administrator role
-The [Role Based Access Control Administrator (Preview)](built-in-roles.md#role-based-access-control-administrator-preview) role is a built-in role that has been designed for delegating role assignment management to others. It has fewer permissions than [User Access Administrator](built-in-roles.md#user-access-administrator), which follows least privilege best practices. The Role Based Access Control Administrator role has following permissions:
+The [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator) role is a built-in role that has been designed for delegating role assignment management to others. It has fewer permissions than [User Access Administrator](built-in-roles.md#user-access-administrator), in line with least-privilege best practices. The Role Based Access Control Administrator role has the following permissions:
- Create a role assignment at the specified scope - Delete a role assignment at the specified scope
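
As a minimal, hedged sketch of a constrained delegation (assuming the Azure CLI; the delegate object ID, subscription, and resource group are placeholders, and the condition only allows the delegate to assign the Reader role):

```azurecli
# Illustrative sketch: placeholder delegate object ID, subscription, and resource group.
# The condition lets the delegate add role assignments only for the Reader role
# (acdd72a7-3385-48ef-bd42-f606fba81ae7); the remove-assignment action isn't constrained here.
az role assignment create \
  --assignee-object-id "11111111-1111-1111-1111-111111111111" \
  --assignee-principal-type User \
  --role "Role Based Access Control Administrator" \
  --scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg" \
  --condition "((!(ActionMatches{'Microsoft.Authorization/roleAssignments/write'})) OR (@Request[Microsoft.Authorization/roleAssignments:RoleDefinitionId] ForAnyOfAnyValues:GuidEquals {acdd72a7-3385-48ef-bd42-f606fba81ae7}))" \
  --condition-version "2.0"
```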
To delegate role assignment management with conditions, you assign roles as you
1. Start a new role assignment
-1. Select the [Role Based Access Control Administrator (Preview)](built-in-roles.md#role-based-access-control-administrator-preview) role
+1. Select the [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator) role
- You can select any role that includes the `Microsoft.Authorization/roleAssignments/write` action, but Role Based Access Control Administrator (Preview) has fewer permissions.
+ You can select any role that includes the `Microsoft.Authorization/roleAssignments/write` action, but Role Based Access Control Administrator has fewer permissions.
1. Select the delegate
role-based-access-control Delegate Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/delegate-role-assignments-portal.md
Previously updated : 11/29/2023 Last updated : 12/01/2023 #Customer intent: As a dev, devops, or it admin, I want to delegate Azure role assignment management to other users who are closer to the decision, but want to limit the scope of the role assignments.
Once you know the permissions that delegate needs, you use the following steps t
1. On the **Roles** tab, select the **Privileged administrator roles** tab.
-1. Select the **Role Based Access Control Administrator (Preview)** role.
+1. Select the **Role Based Access Control Administrator** role.
The **Conditions** tab appears.
- You can select any role that includes the `Microsoft.Authorization/roleAssignments/write` or `Microsoft.Authorization/roleAssignments/delete` actions, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator), but [Role Based Access Control Administrator (Preview)](built-in-roles.md#role-based-access-control-administrator-preview) has fewer permissions.
+ You can select any role that includes the `Microsoft.Authorization/roleAssignments/write` or `Microsoft.Authorization/roleAssignments/delete` actions, such as [User Access Administrator](built-in-roles.md#user-access-administrator), but [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator) has fewer permissions.
1. On the **Members** tab, find and select the delegate.
role-based-access-control Quickstart Role Assignments Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/quickstart-role-assignments-bicep.md
Previously updated : 06/30/2022 Last updated : 12/01/2023 #Customer intent: As a new user, I want to see how to grant access to resources using Bicep so that I can start automating role assignment processes.
To assign Azure roles and remove role assignments, you must have: - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner).
+- `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator).
- To assign a role, you must specify three elements: security principal, role definition, and scope. For this quickstart, the security principal is you or another user in your directory, the role definition is [Virtual Machine Contributor](built-in-roles.md#virtual-machine-contributor), and the scope is a resource group that you specify. ## Review the Bicep file
role-based-access-control Quickstart Role Assignments Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/quickstart-role-assignments-template.md
Previously updated : 04/28/2021 Last updated : 12/01/2023 #Customer intent: As a new user, I want to see how to grant access to resources by using Azure Resource Manager template so that I can start automating role assignment processes.
If your environment meets the prerequisites and you're familiar with using ARM t
To assign Azure roles and remove role assignments, you must have: - If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.-- `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner)
+- `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator)
- To assign a role, you must specify three elements: security principal, role definition, and scope. For this quickstart, the security principal is you or another user in your directory, the role definition is [Virtual Machine Contributor](built-in-roles.md#virtual-machine-contributor), and the scope is a resource group that you specify. ## Review the template
role-based-access-control Rbac And Directory Admin Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/rbac-and-directory-admin-roles.md
na Previously updated : 08/09/2023 Last updated : 12/01/2023
The following diagram is a high-level view of how the Azure roles, Microsoft Ent
## Azure roles
-[Azure RBAC](overview.md) is an authorization system built on [Azure Resource Manager](../azure-resource-manager/management/overview.md) that provides fine-grained access management to Azure resources, such as compute and storage. Azure RBAC includes over 70 built-in roles. There are four fundamental Azure roles. The first three apply to all resource types:
+[Azure RBAC](overview.md) is an authorization system built on [Azure Resource Manager](../azure-resource-manager/management/overview.md) that provides fine-grained access management to Azure resources, such as compute and storage. Azure RBAC includes over 100 built-in roles. There are five fundamental Azure roles. The first three apply to all resource types:
| Azure role | Permissions | Notes | | | | | | [Owner](built-in-roles.md#owner) | <ul><li>Grants full access to manage all resources</li><li>Assign roles in Azure RBAC</li></ul> | The Service Administrator and Co-Administrators are assigned the Owner role at the subscription scope<br>Applies to all resource types. | | [Contributor](built-in-roles.md#contributor) | <ul><li>Grants full access to manage all resources</li><li>Can't assign roles in Azure RBAC</li><li>Can't manage assignments in Azure Blueprints or share image galleries</li></ul> | Applies to all resource types. | | [Reader](built-in-roles.md#reader) | <ul><li>View Azure resources</li></ul> | Applies to all resource types. |
+| [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator) | <ul><li>Manage user access to Azure resources</li><li>Assign roles in Azure RBAC</li><li>Assign themselves or others the Owner role</li><li>Can't manage access using other ways, such as Azure Policy</li></ul> | |
| [User Access Administrator](built-in-roles.md#user-access-administrator) | <ul><li>Manage user access to Azure resources</li><li>Assign roles in Azure RBAC</li><li>Assign themselves or others the Owner role</li></ul> | | The rest of the built-in roles allow management of specific Azure resources. For example, the [Virtual Machine Contributor](built-in-roles.md#virtual-machine-contributor) role allows the user to create and manage virtual machines. For a list of all the built-in roles, see [Azure built-in roles](built-in-roles.md).
role-based-access-control Role Assignments Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-cli.md
Previously updated : 06/03/2022 Last updated : 12/01/2023
To assign roles, you must have: -- `Microsoft.Authorization/roleAssignments/write` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner)
+- `Microsoft.Authorization/roleAssignments/write` permissions, such as [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator)
- [Bash in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure CLI](/cli/azure) ## Steps to assign an Azure role
role-based-access-control Role Assignments Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-portal.md
Previously updated : 11/29/2023 Last updated : 12/01/2023
The **Conditions** tab will look different depending on the role you selected.
If you selected one of the following privileged roles, follow the steps in this section. - [Owner](built-in-roles.md#owner)-- [Role Based Access Control Administrator (Preview)](built-in-roles.md#role-based-access-control-administrator-preview)
+- [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator)
- [User Access Administrator](built-in-roles.md#user-access-administrator) 1. On the **Conditions** tab under **Delegation type**, select the **Constrained (recommended)** option.
role-based-access-control Role Assignments Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-powershell.md
Previously updated : 10/26/2022 Last updated : 12/01/2023
To assign roles, you must have: -- `Microsoft.Authorization/roleAssignments/write` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner)
+- `Microsoft.Authorization/roleAssignments/write` permissions, such as [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator)
- [PowerShell in Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-azure-powershell) - The account you use to run the PowerShell command must have the Microsoft Graph `Directory.Read.All` permission.
role-based-access-control Role Assignments Remove https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-remove.md
Previously updated : 10/19/2022 Last updated : 12/01/2023 ms.devlang: azurecli
ms.devlang: azurecli
# Remove Azure role assignments
-[Azure role-based access control (Azure RBAC)](../../articles/role-based-access-control/overview.md) is the authorization system you use to manage access to Azure resources. To remove access from an Azure resource, you remove a role assignment. This article describes how to remove roles assignments using the Azure portal, Azure PowerShell, Azure CLI, and REST API.
+[Azure role-based access control (Azure RBAC)](overview.md) is the authorization system you use to manage access to Azure resources. To remove access from an Azure resource, you remove a role assignment. This article describes how to remove role assignments using the Azure portal, Azure PowerShell, Azure CLI, and REST API.
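
For example, a hedged sketch with the Azure CLI (the user, role, and resource group below are placeholders):

```azurecli
# Illustrative sketch: placeholder user, role, and resource group.
# Removes the matching role assignment at the resource group scope.
az role assignment delete \
  --assignee "user@contoso.com" \
  --role "Virtual Machine Contributor" \
  --resource-group "example-rg"
```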
## Prerequisites To remove role assignments, you must have: -- `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](../../articles/role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../../articles/role-based-access-control/built-in-roles.md#owner)
+- `Microsoft.Authorization/roleAssignments/delete` permissions, such as [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator)
For the REST API, you must use the following version:
role-based-access-control Role Assignments Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-rest.md
rest-api Previously updated : 10/19/2022 Last updated : 12/01/2023
For more information, see [API versions of Azure RBAC REST APIs](/rest/api/autho
## Assign an Azure role
-To assign a role, use the [Role Assignments - Create](/rest/api/authorization/role-assignments/create) REST API and specify the security principal, role definition, and scope. To call this API, you must have access to the `Microsoft.Authorization/roleAssignments/write` action. Of the built-in roles, only [Owner](built-in-roles.md#owner) and [User Access Administrator](built-in-roles.md#user-access-administrator) are granted access to this action.
+To assign a role, use the [Role Assignments - Create](/rest/api/authorization/role-assignments/create) REST API and specify the security principal, role definition, and scope. To call this API, you must have access to the `Microsoft.Authorization/roleAssignments/write` action, such as [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator).
1. Use the [Role Definitions - List](/rest/api/authorization/role-definitions/list) REST API or see [Built-in roles](built-in-roles.md) to get the identifier for the role definition you want to assign.
role-based-access-control Role Assignments Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/role-assignments-steps.md
Previously updated : 11/06/2023 Last updated : 12/01/2023
Privileged administrator roles are roles that grant privileged administrator acc
| | | | [Owner](built-in-roles.md#owner) | <ul><li>Grants full access to manage all resources</li><li>Assign roles in Azure RBAC</li></ul> | | [Contributor](built-in-roles.md#contributor) | <ul><li>Grants full access to manage all resources</li><li>Can't assign roles in Azure RBAC</li><li>Can't manage assignments in Azure Blueprints or share image galleries</li></ul> |
-| [Role Based Access Administrator (Preview)](built-in-roles.md#role-based-access-control-administrator-preview) | <ul><li>Manage user access to Azure resources</li><li>Assign roles in Azure RBAC</li><li>Assign themselves or others the Owner role</li><li>Can't manage access using other ways, such as Azure Policy</li></ul> |
+| [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator) | <ul><li>Manage user access to Azure resources</li><li>Assign roles in Azure RBAC</li><li>Assign themselves or others the Owner role</li><li>Can't manage access using other ways, such as Azure Policy</li></ul> |
| [User Access Administrator](built-in-roles.md#user-access-administrator) | <ul><li>Manage user access to Azure resources</li><li>Assign roles in Azure RBAC</li><li>Assign themselves or others the Owner role</li></ul> | For best practices when using privileged administrator role assignments, see [Best practices for Azure RBAC](best-practices.md#limit-privileged-administrator-role-assignments). For more information, see [Privileged administrator role definition](./role-definitions.md#privileged-administrator-role-definition).
When you assign a role at a parent scope, those permissions are inherited to the
## Step 4: Check your prerequisites
-To assign roles, you must be signed in with a user that is assigned a role that has role assignments write permission, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator) at the scope you are trying to assign the role. Similarly, to remove a role assignment, you must have the role assignments delete permission.
+To assign roles, you must be signed in with a user that is assigned a role that has role assignments write permission, such as [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator) at the scope you are trying to assign the role. Similarly, to remove a role assignment, you must have the role assignments delete permission.
- `Microsoft.Authorization/roleAssignments/write` - `Microsoft.Authorization/roleAssignments/delete`
role-based-access-control Troubleshoot Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshoot-limits.md
Previously updated : 07/31/2023 Last updated : 12/01/2023
This article describes some common solutions when you exceed the limits in Azure
## Prerequisites - [Reader](./built-in-roles.md#reader) role to run Azure Resource Graph queries.-- [User Access Administrator](./built-in-roles.md#user-access-administrator) or [Owner](./built-in-roles.md#owner) role to add role assignments, remove role assignments, or delete custom roles.
+- [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator) role to add or remove role assignments.
+- [User Access Administrator](./built-in-roles.md#user-access-administrator) role to add role assignments, remove role assignments, or delete custom roles.
- [Groups Administrator](../active-directory/roles/permissions-reference.md#groups-administrator) or [User Administrator](../active-directory/roles/permissions-reference.md#user-administrator) role to create groups. > [!NOTE]
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
na Previously updated : 09/20/2023 Last updated : 12/01/2023
You're currently signed in with a user that doesn't have permission to assign ro
**Solution**
-Check that you're currently signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleAssignments/write` permission such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator) at the scope you're trying to assign the role.
+Check that you're currently signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleAssignments/write` permission, such as [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator), at the scope you're trying to assign the role.
### Symptom - Roles or principals are not listed
You are currently signed in with a user that does not have permission to assign
**Solution 1**
-Check that you are currently signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleAssignments/write` permission such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator) at the scope you are trying to assign the role.
+Check that you are currently signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleAssignments/write` permission, such as [Role Based Access Control Administrator](built-in-roles.md#role-based-access-control-administrator), at the scope you are trying to assign the role.
**Cause 2**
You're currently signed in with a user that doesn't have permission to update or
**Solution 1**
-Check that you're currently signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/write` permission such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator).
+Check that you're currently signed in with a user that is assigned a role that has the `Microsoft.Authorization/roleDefinitions/write` permission, such as [User Access Administrator](built-in-roles.md#user-access-administrator).
**Cause 2**
This error usually indicates that you don't have permissions to one or more of t
Try the following: - Review [Who can create, delete, update, or view a custom role](custom-roles.md#who-can-create-delete-update-or-view-a-custom-role) and check that you have permissions to create or update the custom role for all assignable scopes.-- If you don't have permissions, ask your administrator to assign you a role that has the `Microsoft.Authorization/roleDefinitions/write` action, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator), at the scope of the assignable scope.
+- If you don't have permissions, ask your administrator to assign you a role that has the `Microsoft.Authorization/roleDefinitions/write` action, such as [User Access Administrator](built-in-roles.md#user-access-administrator), at the scope of the assignable scope.
- Check that all the assignable scopes in the custom role are valid. If not, remove any invalid assignable scopes. For more information, see the custom role tutorials using the [Azure portal](custom-roles-portal.md), [Azure PowerShell](tutorial-custom-role-powershell.md), or [Azure CLI](tutorial-custom-role-cli.md).
role-based-access-control Tutorial Custom Role Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-custom-role-cli.md
'' Previously updated : 02/20/2019 Last updated : 12/01/2023
If you don't have an Azure subscription, create a [free account](https://azure.m
To complete this tutorial, you will need: -- Permissions to create custom roles, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator)
+- Permissions to create custom roles, such as [User Access Administrator](built-in-roles.md#user-access-administrator)
- [Azure Cloud Shell](../cloud-shell/overview.md) or [Azure CLI](/cli/azure/install-azure-cli) ## Sign in to Azure CLI
role-based-access-control Tutorial Custom Role Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-custom-role-powershell.md
Previously updated : 02/20/2019 Last updated : 12/01/2023 #Customer intent: As a dev or devops, I want step-by-step instructions for how to grant custom permissions because the current built-in roles do not meet my permission needs.
If you don't have an Azure subscription, create a [free account](https://azure.m
To complete this tutorial, you will need: -- Permissions to create custom roles, such as [Owner](built-in-roles.md#owner) or [User Access Administrator](built-in-roles.md#user-access-administrator)
+- Permissions to create custom roles, such as [User Access Administrator](built-in-roles.md#user-access-administrator)
- [Azure Cloud Shell](../cloud-shell/overview.md) or [Azure PowerShell](/powershell/azure/install-azure-powershell) ## Sign in to Azure PowerShell
sap Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-control-plane.md
This table shows the networking parameters.
> | `management_subnet_name` | The name of the subnet | Optional | | > | `management_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments | > | `management_subnet_arm_id` | The Azure resource identifier for the subnet | Mandatory | For brown-field deployments |
-> | `management_subnet_nsg_name` | The name of the network security group | Optional | |
+> | `management_subnet_nsg_name` | The name of the network security group | Optional | |
> | `management_subnet_nsg_arm_id` | The Azure resource identifier for the network security group | Mandatory | For brown-field deployments | > | `management_subnet_nsg_allowed_ips` | Range of allowed IP addresses to add to Azure Firewall | Optional | | > | | | | |
-> | `management_firewall_subnet_arm_id` | The Azure resource identifier for the Azure Firewall subnet | Mandatory | For brown-field deployments |
+> | `management_firewall_subnet_arm_id` | The Azure resource identifier for the Azure Firewall subnet | Mandatory | For brown-field deployments |
> | `management_firewall_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments | > | | | | |
-> | `management_bastion_subnet_arm_id` | The Azure resource identifier for the Azure Bastion subnet | Mandatory | For brown-field deployments |
+> | `management_bastion_subnet_arm_id` | The Azure resource identifier for the Azure Bastion subnet | Mandatory | For brown-field deployments |
> | `management_bastion_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments | > | | | | | > | `webapp_subnet_arm_id` | The Azure resource identifier for the web app subnet | Mandatory | For brown-field deployments by using the web app | > | `webapp_subnet_address_prefix` | The address range for the subnet | Mandatory | For green-field deployments by using the web app |
+> | | | | |
+> | `use_private_endpoint` | Use private endpoints. | Optional | |
+> | `use_service_endpoint` | Use service endpoints for subnets. | Optional | |
> [!NOTE] > When you use an existing subnet for the web app, the subnet must be empty, in the same region as the resource group being deployed, and delegated to Microsoft.Web/serverFarms.
This section defines the parameters used for defining the Azure Key Vault inform
> | `bastion_deployment` | Boolean flag that controls if Azure Bastion host is to be deployed. | Optional | | > | `bastion_sku` | SKU for Azure Bastion host to be deployed (Basic/Standard). | Optional | | > | `enable_purge_control_for_keyvaults` | Boolean flag that controls if purge control is enabled on the key vault. | Optional | Use only for test deployments. |
-> | `use_private_endpoint` | Use private endpoints. | Optional |
-> | `use_service_endpoint` | Use service endpoints for subnets. | Optional |
> | `enable_firewall_for_keyvaults_and_storage` | Restrict access to selected subnets. | Optional |
+### Web App parameters
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Variable | Description | Type | Notes |
+> | -- | - | -- | |
+> | `use_webapp` | Boolean value indicating if a web app should be deployed. | Optional | |
+> | `app_service_SKU_name` | The SKU of the App Service plan. | Optional | |
+> | `app_registration_app_id` | The app registration ID to be used for the web app. | Optional | |
+> | `webapp_client_secret` | The client secret of the app registration used by the web app. | Optional | Will be persisted in Key Vault |
+ ### Example parameters file for deployer (required parameters only) ```terraform
sap Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/configure-system.md
This section contains the parameters related to the Azure infrastructure.
> [!div class="mx-tdCol2BreakAll "]
-> | Variable | Description | Type |
-> | - | -- | - |
-> | `custom_disk_sizes_filename` | Defines the disk sizing file name, See [Custom sizing](configure-extra-disks.md). | Optional |
-> | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks by using customer-provided keys. | Optional |
-> | `proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups. | |
-> | `proximityplacementgroup_names` | Specifies the names of the proximity placement groups. | |
-> | `resource_offset` | Provides an offset for resource naming. | Optional |
-> | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations | Optional |
-> | `use_scalesets_for_deployment` | Use Flexible Virtual Machine Scale Sets for the deployment | Optional |
-> | `scaleset_id` | Azure resource identifier for the virtual machine scale set | Optional |
-> | `user_assigned_identity_id | User assigned identity to assign to the virtual machines | Optional |
+> | Variable | Description | Type |
+> | - | - | - |
+> | `app_proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups for the app tier. | |
+> | `app_proximityplacementgroup_names` | Specifies the names of the proximity placement groups for the app tier. | |
+> | `custom_disk_sizes_filename` | Defines the disk sizing file name, See [Custom sizing](configure-extra-disks.md). | Optional |
+> | `disk_encryption_set_id` | The disk encryption key to use for encrypting managed disks by using customer-provided keys. | Optional |
+> | `proximityplacementgroup_arm_ids` | Specifies the Azure resource identifiers of existing proximity placement groups. | |
+> | `proximityplacementgroup_names` | Specifies the names of the proximity placement groups. | |
+> | `resource_offset` | Provides an offset for resource naming. | Optional |
+> | `scaleset_id` | Azure resource identifier for the virtual machine scale set. | Optional |
+> | `use_app_proximityplacementgroups` | Controls if the app tier virtual machines are placed in a different proximity placement group from the database. | Optional |
+> | `use_loadbalancers_for_standalone_deployments` | Controls if load balancers are deployed for standalone installations. | Optional |
+> | `use_scalesets_for_deployment` | Use Flexible Virtual Machine Scale Sets for the deployment. | Optional |
+> | `user_assigned_identity_id` | User-assigned identity to assign to the virtual machines. | Optional |
The `resource_offset` parameter controls the naming of resources. For example, if you set the `resource_offset` to 1, the first disk will be named `disk1`. The default value is 0.
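For example, a minimal sketch of these settings in a system tfvars file (values are illustrative, and the proximity placement group name is hypothetical) could look like:

```terraform
# Illustrative values only.
resource_offset              = 1       # the first disk is named disk1 instead of disk0
use_scalesets_for_deployment = false
# proximityplacementgroup_names = ["PPG-ZONE1"]   # optional, hypothetical name
```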
sap Deployment Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/deployment-framework.md
# SAP Deployment Automation Framework
-[SAP Deployment Automation Framework](https://github.com/Azure/sap-automation) is an open-source orchestration tool that's used to deploy, install, and maintain SAP environments. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB by using [Terraform](https://www.terraform.io/) and [Ansible](https://www.ansible.com/) for the operating system and application configuration. You can deploy the systems on any of the SAP-supported operating system versions and into any Azure region.
+[SAP Deployment Automation Framework](https://github.com/Azure/sap-automation) is an open-source orchestration tool that can deploy, install, and maintain SAP environments. You can deploy the systems on any of the SAP-supported operating system versions and into any Azure region. You can create infrastructure for SAP landscapes based on SAP HANA and NetWeaver with AnyDB by using [Terraform](https://www.terraform.io/). The environments can be configured using [Ansible](https://www.ansible.com/).
[Terraform](https://www.terraform.io/) from Hashicorp is an open-source tool for provisioning and managing cloud infrastructure.
The [automation framework](https://github.com/Azure/sap-automation) has two main components: -- Deployment infrastructure (control plane and hub component)-- SAP infrastructure (SAP workload and spoke component)
+- Deployment infrastructure (control plane, typically deployed in the hub)
+- SAP infrastructure (SAP workload zone, typically deployed in a spoke)
+
+The dependency between the control plane and the application plane is illustrated in the following diagram. In a typical deployment, a single control plane is used to manage multiple SAP deployments.
+ You use the control plane of SAP Deployment Automation Framework to deploy the SAP infrastructure and the SAP application. The deployment uses Terraform templates to create the [infrastructure as a service (IaaS)](https://azure.microsoft.com/overview/what-is-iaas)-defined infrastructure to host the SAP applications.
You can use the automation framework to deploy the following SAP architectures:
- **Distributed**: With this architecture, you can separate the database server and the application tier. The application tier can further be separated in two by having SAP central services on a VM and one or more application servers. - **Distributed (highly available)**: This architecture is similar to the distributed architecture. In this deployment, the database and/or SAP central services can both be configured by using a highly available configuration that uses two VMs, each with Pacemaker clusters.
-The dependency between the control plane and the application plane is illustrated in the following diagram. In a typical deployment, a single control plane is used to manage multiple SAP deployments.
- ## About the control plane
The control plane provides the following
- Persistent storage for the downloaded SAP software - Azure Key Vault for secure storage for deployment credentials - Private DNS zone (optional)-- Configuration for web applications
+- A web application for configuration management
The control plane is typically a regional resource deployed into the hub subscription in a [hub-and-spoke architecture](/azure/architecture/reference-architectures/hybrid-networking/hub-spoke).
The software acquisition is using an SAP application manifest file that contains
The SAP software download playbook processes the manifest file and the dependent manifest files and downloads the SAP software from SAP by using the specified SAP user account. The software is downloaded to the SAP library storage account and is available for the installation process.
-As part of the download process, the application manifest and the supporting templates are also persisted in the storage account. The application manifest and the dependent manifests are aggregated into a single manifest file that's used by the installation process.
+As part of the download process, the application manifest and the supporting templates are also persisted in the storage account. The application manifest and the dependent manifests are aggregated into a single manifest file that is used by the installation process.
### Deployer VMs
The SAP workload contains all the Azure infrastructure resources for the SAP dep
The SAP workload has two main components: -- SAP workload zone
+- SAP workload zone, which provides the shared resources for the SAP systems
- SAP systems ## About the SAP workload zone
-The workload zone allows for partitioning of the deployments into different environments, such as development, test, and production. The workload zone provides the shared services (networking and credentials management) to the SAP systems.
+The workload zone allows for partitioning of the deployments into different environments, such as development, test, and production. The workload zone provides the shared resources (networking and credentials management) to the SAP systems.
The SAP workload zone provides the following services to the SAP systems: -- Virtual networking infrastructure-- Azure Key Vault for system credentials (VMs and SAP)
+- Virtual network
+- Azure Key Vault for system credentials (VMs and SAP accounts)
- Shared storage (optional) For more information about how to configure and deploy the SAP workload zone, see [Configure the workload zone](configure-workload-zone.md) and [Deploy the SAP workload zone](deploy-workload-zone.md).
sap Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/extensibility.md
custom_logical_volumes:
``` > [!NOTE]
-> In order to use this functionality you need to add an additional disk named 'custom' to one or more of your Virtual machines. See [Custom disk sizing](configure-extra-disks.md) for more information.
+> In order to use this functionality you need to add an additional disk named 'custom' to one or more of your Virtual machines. For more information, see [Custom disk sizing](configure-extra-disks.md).
You can use the `configuration_settings` variable to let Terraform add custom settings to the sap-parameters.yaml file.
configuration_settings = {
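For example, a sketch with placeholder keys (the key names shown are hypothetical; each key/value pair is added to sap-parameters.yaml as-is) might look like:

```terraform
# Hypothetical keys; every entry is copied into sap-parameters.yaml.
configuration_settings = {
  custom_setting_a = "value-a"
  custom_setting_b = "value-b"
}
```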
## Adding custom mount (Linux)
-You can extend the SAP Deployment Automation Framework by mounting additional mount points in your installation.
+You can extend the SAP Deployment Automation Framework by mounting extra mount points in your installation.
When you add the following section to the sap-parameters.yaml file, a filesystem '/usr/custom' is mounted from an NFS share on "xxxxxxxxx.file.core.windows.net:/xxxxxxxxx/custom".
configuration_settings = {
You can extend the SAP Deployment Automation Framework by adding more folders to be exported from the Central Services virtual machine.
-When you add the following section to the sap-parameters.yaml file, a filesystem '/usr/custom' will be exported from the Central Services virtual machine and available via NFS.
+When you add the following section to the sap-parameters.yaml file, a filesystem '/usr/custom' is exported from the Central Services virtual machine and available via NFS.
```yaml
configuration_settings = {
> [!NOTE] > This applies only for deployments with NFS_Provider set to 'NONE' as this makes the Central Services server an NFS Server.
+## Custom stripe sizes (Linux)
+If you want to change the stripe sizes the framework uses when creating the disks, you can add the following section to the sap-parameters.yaml file with the values you want.
+
+```yaml
+# Stripe sizes
+hana_data_stripe_size: 256
+hana_log_stripe_size: 64
+
+db2_log_stripe_size: 64
+db2_data_stripe_size: 256
+db2_temp_stripe_size: 128
+
+sybase_data_stripe_size: 256
+sybase_log_stripe_size: 64
+sybase_temp_stripe_size: 128
+
+oracle_data_stripe_size: 256
+oracle_log_stripe_size: 128
+
+```
+
+## Custom volume sizes (Linux)
+
+If you want to change the default volume sizes used by the framework, you can add the following section to the sap-parameters.yaml file with the values you want.
+
+```yaml
+
+sapmnt_volume_size: 32g
+usrsap_volume_size: 32g
+hanashared_volume_size: 32g
+```
## Next step
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/get-started.md
Some of the prerequisites might already be installed in your deployment environm
Using Azure DevOps streamlines the deployment process. Azure DevOps provides pipelines that you can run to perform the infrastructure deployment and the configuration and SAP installation activities.
-You can use Azure Repos to store your configuration files. Use Azure Pipelines to deploy and configure the infrastructure and the SAP application.
+You can use Azure Repos to store your configuration files and Azure Pipelines to deploy and configure the infrastructure and the SAP application.
### Sign up for Azure DevOps Services
search Search Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-migration.md
Use this article to migrate data plane calls to newer *stable* versions of the [
+ [**2023-11-01**](/rest/api/searchservice/search-service-api-versions#2023-11-01) is the most recent stable version. Semantic ranking and vector search support are generally available in this version.
-+ [**2023-10-01-preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-preview) is the most recent preview version. [Integrated data chunking and vectorization](vector-search-integrated-vectorization.md) using the [Text Split](cognitive-search-skill-textsplit.md) skill and [Azure OpenAI Embedding](cognitive-search-skill-azure-openai-embedding.md) skill are introduced in this version. There's no migration guidance for preview API versions, but you can review [code samples](https://github.com/Azure/azure-search-vector-samples) and [walkthroughs](vector-search-how-to-configure-vectorizer.md) for guidance.
++ [**2023-10-01-preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-preview) is the most recent preview version. [Integrated data chunking and vectorization](vector-search-integrated-vectorization.md) using the [Text Split](cognitive-search-skill-textsplit.md) skill and [Azure OpenAI Embedding](cognitive-search-skill-azure-openai-embedding.md) skill are introduced in this version. *There's no migration guidance for preview API versions*, but you can review [code samples](https://github.com/Azure/azure-search-vector-samples) and [walkthroughs](vector-search-how-to-configure-vectorizer.md) for help with new features. > [!NOTE]
-> API reference docs are now versioned. To get the right information, open a reference page and then apply the version-specific filter located above the table of contents.
+> API reference docs are now versioned. To get the right content, open a reference page and then apply the version-specific filter located above the table of contents.
<a name="UpgradeSteps"></a>
This version has breaking changes and behavioral differences for semantic rankin
If you added vector support using 2023-10-01-preview, there are no breaking changes, but there's one behavior difference: the `vectorFilterMode` default changed from postfilter to prefilter for [filter expressions](vector-search-filters.md). The default is prefilter for indexes created after 2023-10-01. Indexes created before that date only support postfilter, regardless of how you set the filter mode. > [!TIP]
-> Azure portal supports a one-click upgrade path for 2023-07-01-preview indexes. The portal detects that version and provides a **Migrate** button. Before selecting **Migrate**, select **Edit JSON** to review the updated schema first. You should find a schema that conforms to the changes described in this section. Portal migration only handles indexes with one vector field. Indexes with more fields require manual migration.
+> Azure portal supports a one-click upgrade path for 2023-07-01-preview indexes. The portal detects 2023-07-01-preview indexes and provides a **Migrate** button. Before selecting **Migrate**, select **Edit JSON** to review the updated schema first. You should find a schema that conforms to the changes described in this section. Portal migration only handles indexes with one vector search algorithm configuration, creating a default profile that maps to the algorithm. Indexes with multiple configurations require manual migration.
Here are the steps for migrating from 2023-07-01-preview:
} ```
-1. Modify vector field definitions, replacing `vectorSearchConfiguration` with `vectorSearchProfile`. Other vector field properties remain unchanged. For example, they can't be filterable, sortable, or facetable, nor use analyzers or normalizers or synonym maps.
+1. Modify vector field definitions, replacing `vectorSearchConfiguration` with `vectorSearchProfile`. Make sure the profile name resolves to a new vector profile definition, and not the algorithm configuration name. Other vector field properties remain unchanged. For example, they can't be filterable, sortable, or facetable, nor use analyzers or normalizers or synonym maps.
**Before (2023-07-01-preview)**:
Existing code written against earlier API versions will break on api-version=202
### Behavior changes
-* [BM25 ranking algorithm](index-ranking-similarity.md) replaces the previous ranking algorithm with newer technology. New services use this algorithm automatically. For existing services, you must set parameters to use the new algorithm.
+* [BM25 ranking algorithm](index-ranking-similarity.md) replaces the previous ranking algorithm with newer technology. Services created after 2019 use this algorithm automatically. For older services, you must set parameters to use the new algorithm.
* Ordered results for null values have changed in this version, with null values appearing first if the sort is `asc` and last if the sort is `desc`. If you wrote code to handle how null values are sorted, be aware of this change.
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Title: Introduction to Azure AI Search
-description: Azure AI Search is a fully managed cloud search service from Microsoft. Learn about use cases, the development workflow, comparisons to other Microsoft search products, and how to get started.
+description: Azure AI Search is an AI-powered information retrieval platform that helps developers build rich search experiences and generative AI apps that combine large language models with enterprise data.
Azure AI Search ([formerly known as "Azure Cognitive Search"](whats-new.md#new-s
Information retrieval is foundational to any app that surfaces text and vectors. Common scenarios include catalog or document search, data exploration, and increasingly chat-style copilot apps over proprietary grounding data. When you create a search service, you work with the following capabilities:
-+ A search engine for [full text](search-lucene-query-architecture.md) and [vector search](vector-search-overview.md) over a search index
++ A search engine for [vector search](vector-search-overview.md), [full text](search-lucene-query-architecture.md), and [hybrid search](hybrid-search-overview.md) over a search index + Rich indexing with [integrated data chunking and vectorization (preview)](vector-search-integrated-vectorization.md), [lexical analysis](search-analyzers.md) for text, and [optional AI enrichment](cognitive-search-concept-intro.md) for content extraction and transformation
-+ Rich query syntax for [vector queries](vector-search-how-to-query.md), text search, [hybrid search](hybrid-search-overview.md), fuzzy search, autocomplete, geo-search and others
++ Rich query syntax for [vector queries](vector-search-how-to-query.md), text search, [hybrid queries](hybrid-search-how-to-query.md), fuzzy search, autocomplete, geo-search and others + Azure scale, security, and reach + Azure integration at the data layer, machine learning layer, Azure AI services and Azure OpenAI
On the search service itself, the two primary workloads are *indexing* and *quer
Azure AI Search is well suited for the following application scenarios:
-+ Search over your vector and text content. You own or control what's searchable.
++ Use it for traditional full text search and next-generation vector similarity search. Back your generative AI apps with information retrieval that leverages the strength of keyword and similarity search. Use both modalities to retrieve the most relevant results.
-+ Consolidate heterogeneous content into a user-defined and populated search index composed of vectors and text.
++ Consolidate heterogeneous content into a user-defined and populated search index composed of vectors and text. You own and control what's searchable. + [Integrate data chunking and vectorization](vector-search-integrated-vectorization.md) for generative AI and RAG apps.
Customers often ask how Azure AI Search compares with other search-related solut
Key strengths include: ++ Store, index, and search vector embeddings for sentences, images, audio, graphs, and more. ++ Find information that's semantically similar to search queries, even if the search terms aren't exact matches. ++ Use hybrid search for the best of keyword and vector search. + Relevance tuning through semantic ranking and scoring profiles. + Data integration (crawlers) at the indexing layer. + Azure AI integration for transformations that make content text and vector searchable.
site-recovery Azure To Azure How To Enable Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-policy.md
description: Learn how to enable policy support to help protect your VMs by usin
Previously updated : 07/25/2021 Last updated : 12/01/2023
With built-in Azure Policy capabilities, you have a way to enable Site Recovery
Interoperability with other policies applied as default by Azure (if any) | Supported > [!NOTE]
-> Site Recovery won't be enabled if:
-> - An unsupported VM is created within the scope of the policy.
-> - A VM is a part of both an availability set and a PPG.
+> Site Recovery won't be enabled if an unsupported VM is created within the scope of the policy.
## Create a policy assignment
storage Data Lake Storage Acl Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-powershell.md
$id = "xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
foreach ($a in $aclnew) {
- if ($a.AccessControlType -eq "User"-and $a.DefaultScope -eq $false -and $a.EntityId -eq $id)
+ if ($a.AccessControlType -eq "User" -and $a.DefaultScope -eq $false -and $a.EntityId -eq $id)
{ $aclnew.Remove($a); break;
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
For data that is modified and accessed regularly throughout its lifetime, you ca
} ```
-## Feature support
-- ## Regional availability and pricing The lifecycle management feature is available in all Azure regions.
storage Storage Retry Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-retry-policy.md
Previously updated : 09/26/2023- Last updated : 12/01/2023+ # Implement a retry policy with .NET
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/infrastructure-encryption-enable.md
# Enable infrastructure encryption for double encryption of data
-Azure Storage automatically encrypts all data in a storage account at the service level using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant. Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption at the Azure Storage infrastructure level for double encryption. Double encryption of Azure Storage data protects against a scenario where one of the encryption algorithms or keys might be compromised. In this scenario, the additional layer of encryption continues to protect your data.
+Azure Storage automatically encrypts all data in a storage account at the service level using 256-bit AES encryption with GCM mode, one of the strongest block ciphers available and FIPS 140-2 compliant. Customers who require higher levels of assurance that their data is secure can also enable 256-bit AES encryption with CBC mode at the Azure Storage infrastructure level for double encryption. Double encryption of Azure Storage data protects against a scenario where one of the encryption algorithms or keys might be compromised. In this scenario, the additional layer of encryption continues to protect your data.
Infrastructure encryption can be enabled for the entire storage account, or for an encryption scope within an account. When infrastructure encryption is enabled for a storage account or an encryption scope, data is encrypted twice &mdash; once at the service level and once at the infrastructure level &mdash; with two different encryption algorithms and two different keys.
synapse-analytics Apache Spark Version Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-version-support.md
# Azure Synapse runtimes
-Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime will be upgraded periodically to include new improvements, features, and patches. When you create a serverless Apache Spark pool, you will have the option to select the corresponding Apache Spark version. Based on this, the pool will come pre-installed with the associated runtime components and packages. The runtimes have the following advantages:
+Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime is upgraded periodically to include new improvements, features, and patches. When you create a serverless Apache Spark pool, you have the option to select the corresponding Apache Spark version. Based on this, the pool comes pre-installed with the associated runtime components and packages. The runtimes have the following advantages:
- Faster session startup times - Tested compatibility with specific Apache Spark versions - Access to popular, compatible connectors and open-source packages
Azure Synapse runtime for Apache Spark patches are rolled out monthly containing
The patch policy differs based on the [runtime lifecycle stage](./runtime-for-apache-spark-lifecycle-and-supportability.md): 1. Generally Available (GA) runtime: Receives no major version upgrades (i.e. 3.x -> 4.x). Minor versions (i.e. 3.x -> 3.y) are upgraded as long as there are no deprecation or regression impacts. 2. Preview runtime: No major version upgrades unless strictly necessary. Minor versions (3.x -> 3.y) will be upgraded to add latest features to a runtime.
-3. Long Term Support (LTS) runtime will be patched with security fixes only.
-4. End of life announced (EOLA) runtime will not have bug and feature fixes. Security fixes will be backported based on risk assessment.
+3. Long Term Support (LTS) runtime is patched with security fixes only.
+4. End of life announced (EOLA) runtime will not have bug and feature fixes. Security fixes are backported based on risk assessment.
## Migration between Apache Spark versions - support General upgrade guidelines/FAQs:
-Question: If a customer is seeking advice on how to migrate from 2.4 to 3.X, what steps should be taken?
+Question: What steps should be taken to migrate from 2.4 to 3.x?
Answer: Refer to the following migration guide: https://spark.apache.org/docs/latest/sql-migration-guide.html
-Question: I get an error when I try to upgrade Spark pool runtime using PowerShell commandlet when they have attached libraries
-
-Answer: Do not use PowerShell Commandlet if you have custom libraries installed in your synapse workspace. Instead follow these steps:
- -Recreate Spark Pool 3.3 from the ground up.
- -Downgrade the current Spark Pool 3.3 to 3.1, remove any packages attached, and then upgrade again to 3.3
---
+Question: I get an error when I try to upgrade the Spark pool runtime by using a PowerShell cmdlet and the Spark pool has attached libraries.
+Answer: Don't use the PowerShell cmdlet if you have custom libraries attached to the Spark pool. Instead, follow these steps:
+* Recreate Spark Pool 3.3 from the ground up.
+* Downgrade the current Spark Pool 3.3 to 3.1, remove any packages attached, and then upgrade again to 3.3
synapse-analytics Runtime For Apache Spark Lifecycle And Supportability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/runtime-for-apache-spark-lifecycle-and-supportability.md
Last updated 07/19/2022-+ # Synapse runtime for Apache Spark lifecycle and supportability
-Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime will be upgraded periodically to include new improvements, features, and patches.
+Apache Spark pools in Azure Synapse use runtimes to tie together essential component versions such as Azure Synapse optimizations, packages, and connectors with a specific Apache Spark version. Each runtime is upgraded periodically to include new improvements, features, and patches.
## Release cadence
The following chart captures a typical lifecycle path for a Synapse runtime for
| Runtime release stage | Typical Lifecycle* | Notes | | -- | -- | -- | | Preview | 3 months* | Microsoft Azure Preview terms apply. See here for details: [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/?cdn=disable) |
-| Generally Available (GA) | 12 months* | Generally available (GA) runtimes are open to all eligible customers and are ready for production use. <br/> A GA runtime may not be elected to move into an LTS stage at Microsoft discretion. |
+| Generally Available (GA) | 12 months* | Generally Available (GA) runtimes are open to all eligible customers and are ready for production use. <br/> A GA runtime may not be elected to move into an LTS stage at Microsoft discretion. |
| Long Term Support (LTS) | 12 months* | Long term support (LTS) runtimes are open to all eligible customers and are ready for production use, but customers are encouraged to expedite validation and workload migration to latest GA runtimes. | | End of Life announced (EOLA) | 12 months* for GA or LTS runtimes.<br/>1 month* for Preview runtimes. | Prior to the end of a given runtime's lifecycle, we aim to provide 12 months' notice by publishing the End-of-Life Announcement (EOLA) date in the [Azure Synapse Runtimes page](./apache-spark-version-support.md) and 6 months' email notice to customers as an exit ramp to migrate their workloads to a GA runtime. | | End of Life (EOL) | - | At this stage, the runtime is retired and no longer supported. |
The following chart captures a typical lifecycle path for a Synapse runtime for
> [!IMPORTANT] >
-> * The above timelines are provided as examples based on current Apache Spark releases. If the Apache Spark project changes the lifecycle of a specific version affecting a Synapse runtime, changes to the stage dates will be noted on the [release notes](./apache-spark-version-support.md).
+> * The above timelines are provided as examples based on current Apache Spark releases. If the Apache Spark project changes the lifecycle of a specific version affecting a Synapse runtime, changes to the stage dates are noted on the [release notes](./apache-spark-version-support.md).
> * Both GA and LTS runtimes may be moved into EOL stage faster based on outstanding security risks and usage rates criteria at Microsoft discretion. > * Please refer to [Lifecycle FAQ - Microsoft Azure](/lifecycle/faq/azure) for information about Azure lifecycle policies. >
The following chart captures a typical lifecycle path for a Synapse runtime for
### Preview runtimes Azure Synapse Analytics provides previews to give you a chance to evaluate and share feedback on features before they become generally available (GA).
-At the end of the Preview lifecycle for the runtime, Microsoft will assess if the runtime will move into a Generally Availability (GA) based on customer usage, security and stability criteria.
+At the end of the Preview lifecycle for the runtime, Microsoft will assess whether the runtime moves into General Availability (GA) based on customer usage, security, and stability criteria.
-If not eligible for GA stage, the Preview runtime will move into the retirement cycle.
+If not eligible for GA stage, the Preview runtime moves into the retirement cycle.
### Generally available runtimes
-Once a runtime is Generally Available, only security fixes will be backported. In addition, new components or features will be introduced if they don't change underlying dependencies or component versions.
+Once a runtime is Generally Available, only security fixes are backported. In addition, new components or features are introduced if they don't change underlying dependencies or component versions.
-At the end of the GA lifecycle for the runtime, Microsoft will assess if the runtime will have an extended lifecycle (LTS) based on customer usage, security and stability criteria.
+At the end of the GA lifecycle for the runtime, Microsoft will assess whether the runtime receives an extended lifecycle (LTS) based on customer usage, security, and stability criteria.
-If not eligible for LTS stage, the GA runtime will move into the retirement cycle.
+If not eligible for LTS stage, the GA runtime moves into the retirement cycle.
### Long term support runtimes
-For runtimes that are covered by Long term support (LTS) customers are encouraged to expedite validation and migration of code base and workloads to the latest GA runtimes. We recommend that customers don't onboard new workloads using an LTS runtime. Security fixes and stability improvements may be backported, but no new components or features will be introduced into the runtime at this stage.
+For runtimes that are covered by Long Term Support (LTS), customers are encouraged to expedite validation and migration of code base and workloads to the latest GA runtimes. We recommend that customers don't onboard new workloads using an LTS runtime. Security fixes and stability improvements may be backported, but no new components or features are introduced into the runtime at this stage.
### End of life announcement Prior to the end of the runtime lifecycle at any stage, an end of life announcement (EOLA) is performed. Support SLAs are applicable for EOL announced runtimes, but all customers must migrate to a GA stage runtime no later than the EOL date.
-During the EOLA stage, existing Synapse Spark pools will function as expected, and new pools of the same version can be created. The runtime version will be listed on Azure Synapse Studio, Synapse API, or Azure portal. At the same time, we strongly recommend migrating your workloads to the latest General Availability (GA) runtimes.
+During the EOLA stage, existing Synapse Spark pools function as expected, and new pools of the same version can be created. The runtime version is listed on Azure Synapse Studio, Synapse API, or Azure portal. At the same time, we strongly recommend migrating your workloads to the latest General Availability (GA) runtimes.
If necessary due to outstanding security issues, runtime usage, or other factors, **Microsoft may expedite moving a runtime into the final EOL stage at any time, at Microsoft's discretion.** ### End of life date and retirement As of the applicable EOL (End-of-Life) date, runtimes are considered retired and deprecated.
-* It is not possible to create new Spark pools using the retired version through Azure Synapse Studio, the Synapse API, or the Azure portal.
-* The retired runtime version will not be available in Azure Synapse Studio, the Synapse API, or the Azure portal.
+* It isn't possible to create new Spark pools using the retired version through Azure Synapse Studio, the Synapse API, or the Azure portal.
+* The retired runtime version won't be available in Azure Synapse Studio, the Synapse API, or the Azure portal.
* Spark Pool definitions and associated metadata will remain in the Synapse workspace for a defined period after the applicable End-of-Life (EOL) date. **However, all pipelines, jobs, and notebooks will no longer be able to execute.**
synapse-analytics Maintenance Scheduling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/maintenance-scheduling.md
During preview, some regions might not yet support the full set of available **D
![Message about region availability](./media/maintenance-scheduling/maintenance-not-active-toast.png)
+## Frequently asked questions
+
+### What is the expected frequency of maintenance?
+
+Maintenance can happen more than once per month, because maintenance can include OS updates, security patches and drivers, internal Azure infrastructure updates, and DW patches and updates. Every customer has a twice-weekly schedule of maintenance cycles: one during Saturday–Sunday and one during Tuesday–Thursday.
+
+### What changes have been made after the maintenance is completed, even though my dedicated SQL pool version remains the same?
+
+After a maintenance update is completed, the SQL pool version may remain unchanged. This is because maintenance can include OS updates, security patches and drivers, internal Azure infrastructure updates, and DW patches and updates. Only if a Synapse DW patch or update is included in the maintenance will you see a change to the dedicated SQL pool version.
+
+### Is it possible to upgrade the version of my dedicated SQL pool on demand?
+
+- No, scheduled maintenance handles the management of dedicated SQL pools. However, you might have some options to trigger the maintenance once the cycle has started, depending on your situation. See [Skip or change maintenance schedule](#skip-or-change-maintenance-schedule).
+- It's important to keep in mind that the dedicated SQL pool is a platform as a service (PaaS) feature. This means that Microsoft Azure handles tasks related to the service, such as infrastructure, maintenance, updates, and scalability. Scheduled maintenance can be tracked by setting an alert or notification so you stay informed of impending maintenance activity.
+
+### What changes, if any, should be made before or after the dedicated SQL pool maintenance is completed?
+
+- During maintenance, your service will be briefly taken offline, similar to what occurs during a pause, resume, or scale operation. Typically, the overall maintenance operation is completed in well under 30 minutes. However, it could take a little longer, depending on database activity during the maintenance window. We recommend pausing ETL, table updates, and especially transactional operations to avoid longer than normal maintenance. For example:
+- If your instance is extremely busy during the planned window, especially with frequent update and delete activity, the maintenance operation might take longer than the normal time. To reduce the chance of extended maintenance activity, we recommend limiting activity to mostly read-only queries against the database if possible, and especially avoiding long-running transactional queries (see the next item).
+- If there are active transactions when the maintenance begins, they are canceled and rolled back, potentially causing delays in restoring the online service. To prevent this situation, we recommend ensuring that there are no long-running transactions active at the start of your maintenance window.
+
+### We were notified about an upcoming dedicated SQL pool scheduled maintenance with tracking ID 0000-000, but it was subsequently canceled or rescheduled. What prompted the cancellation or rescheduling of the maintenance?
+
+There are various factors that could lead to the cancellation of scheduled maintenance, including actions such as:
+- Pausing or scaling operations after receiving a pending maintenance notification while the cycle is initiated.
+- If you are targeting different Service Level Objectives (SLOs) during the maintenance cycle, such as transitioning from any SLO higher than DW400c and then scaling back to an SLO lower than or equal to DW400c, or vice versa, a cancellation could occur. This is because maintenance windows are not applicable for DW400c or lower performance levels, and they can undergo maintenance at any time.
+- Internal infrastructure factors, such as actual changes to planned maintenance scheduling by the release team.
+- Maintenance may be canceled or rescheduled if internal monitoring detects that maintenance is taking longer than expected. Maintenance must be completed within the Service Level Agreements (SLAs) defined by customer maintenance window settings.
+
+### Are there any best practices that I need to consider for our workload during the maintenance window?
+
+- Yes, if possible, pause all transactional and ETL workloads during the planned maintenance interval to avoid errors or delays in restoring the online service. Long-running transactional operations should be completed prior to an upcoming maintenance period.
+- For workloads to be resilient to interruptions caused by maintenance operations, use retry logic at both the connection and the command (query) levels, applying longer retry intervals and/or more retry attempts to withstand an extended connection loss that can last 30 minutes or longer in some cases.
+ ## Next steps - [Learn more](../../azure-monitor/alerts/alerts-metric.md?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json) about creating, viewing, and managing alerts by using Azure Monitor.
virtual-machines Image Builder Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-troubleshoot.md
To resolve the issue, delete the below resources one by one in the specific orde
For additional assistance, you can [contact Azure support](/azure/azure-portal/supportability/how-to-create-azure-support-request) to resolve the stuck deletion error.
+### Distribute target not found in the update request
+
+#### Error
+
+```text
+Validation failed: Distribute target with Runoutput name <runoutputname> not found in the update request. Deleting a distribution target is not allowed.
+```
+#### Cause
+
+This error occurs when an existing distribute target isn't found in the Patch request body.
+
+#### Solution
+
+The distribution array should contain all the distribution targets, that is, new targets (if any), existing targets with no change, and updated targets. If you want to remove an existing distribution target, delete and re-create the image template, because deleting a distribution target currently isn't supported through the Patch API.
+
+### Missing required fields
+
+#### Error
+
+```text
+Validation failed: 'ImageTemplate.properties.distribute[<index>]': Missing field <fieldname>. Please review http://aka.ms/azvmimagebuildertmplref for details on fields required in the Image Builder Template.
+```
+#### Cause
+
+This error occurs when a required field is missing from a distribute target.
+
+#### Solution
+
+When you create an update request, provide every required field in each distribute target, even if there's no change.
+ ## DevOps tasks ### Troubleshoot the task
virtual-network Private Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/private-ip-addresses.md
description: Learn about private IP addresses in Azure. Previously updated : 08/24/2023 Last updated : 12/01/2023
Azure reserves the first four addresses in each subnet address range. The addres
There are two methods in which a private IP address is given: -- **Dynamic**: Azure assigns the next available unassigned or unreserved IP address in the subnet's address range. For example, Azure assigns 10.0.0.10 to a new resource, if addresses 10.0.0.4-10.0.0.9 are already assigned to other resources.
+### Dynamic allocation
- Dynamic is the default allocation method. Once assigned, dynamic IP addresses are released if a network interface is:
-
- * Deleted
+Azure assigns the next available unassigned or unreserved IP address in the subnet's address range. While this is normally the next sequentially available address, there's no guarantee that the address will be the next one in the range. For example, if addresses 10.0.0.4-10.0.0.9 are already assigned to other resources, the next IP address assigned is most likely 10.0.0.10. However, it could be any address between 10.0.0.10 and 10.0.0.254. If a specific Private IP address is required for a resource, you should use a static private IP address.
- * Reassigned to a different subnet within the same virtual network.
+Dynamic is the default allocation method. Once assigned, dynamic IP addresses are released if a network interface is:
- * The allocation method is changed to static, and a different IP address is specified.
-
- By default, Azure assigns the previous dynamically assigned address as the static address when you change the allocation method from dynamic to static.
+* Deleted
-- **Static**: You select and assign any unassigned or unreserved IP address in the subnet's address range.
+* Reassigned to a different subnet within the same virtual network.
- For example, a subnet's address range is 10.0.0.0/16 and addresses 10.0.0.4-10.0.0.9 are assigned to other resources. You can assign any address between 10.0.0.10 - 10.0.255.254. Static addresses are only released if a network interface is deleted.
-
- Azure assigns the static IP as the dynamic IP when the allocation method is changed. The reassignment occurs even if the address isn't the next available in the subnet. The address changes when the network interface is assigned to a different subnet.
-
- To assign the network interface to a different subnet, you change the allocation method from static to dynamic. Assign the network interface to a different subnet, then change the allocation method back to static. Assign an IP address from the new subnet's address range.
+* The allocation method is changed to static, and a different IP address is specified.
+
+By default, Azure assigns the previous dynamically assigned address as the static address when you change the allocation method from dynamic to static.
+
+### Static allocation
+
+With static allocation, you select and assign any unassigned or unreserved IP address in the subnet's address range.
+
+For example, a subnet's address range is 10.0.0.0/16 and addresses 10.0.0.4-10.0.0.9 are assigned to other resources. You can assign any address between 10.0.0.10 - 10.0.255.254. Static addresses are only released if a network interface is deleted.
+
+Azure assigns the static IP as the dynamic IP when the allocation method is changed. The reassignment occurs even if the address isn't the next available in the subnet. The address changes when the network interface is assigned to a different subnet.
+
+To assign the network interface to a different subnet, you change the allocation method from static to dynamic. Assign the network interface to a different subnet, then change the allocation method back to static. Assign an IP address from the new subnet's address range.
+
+> [!NOTE]
+> When requesting a private IP address, the allocation is not deterministic or sequential. There's no guarantee that the next allocated IP address uses the next sequential IP address or a previously deallocated address. If a specific private IP address is required for a resource, you should consider using a static private IP address.
## Virtual machines
virtual-network Virtual Network Network Interface Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-network-interface-addresses.md
Title: Configure IP addresses for an Azure network interface description: Learn how to add, change, and remove private and public IP addresses for a network interface. Previously updated : 08/24/2023 Last updated : 12/01/2023
The account you log into, or connect to Azure with, must be assigned to the [net
## Add IP addresses
-You can add as many [private](#private) and [public](#public) [IPv4](#ipv4) addresses as necessary to a network interface, within the limits listed in the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits) article. You can add a private IPv6 address to one [secondary IP configuration](#secondary) (as long as there are no existing secondary IP configurations) for an existing network interface. Each network interface may have at most one IPv6 private address. You can optionally add a public IPv6 address to an IPv6 network interface configuration. See [IPv6](#ipv6) for details about using IPv6 addresses.
+You can add as many [private](#private) and [public](#public) [IPv4](#ipv4) addresses as necessary to a network interface, within the limits listed in the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits) article. You can add a private IPv6 address to one [secondary IP configuration](#secondary) (as long as there are no existing secondary IP configurations) for an existing network interface. Each network interface can have at most one IPv6 private address. You can optionally add a public IPv6 address to an IPv6 network interface configuration. See [IPv6](#ipv6) for details about using IPv6 addresses.
# [**Portal**](#tab/nic-address-portal)
az network nic ip-config create --resource-group myResourceGroup --name myIpConf
## Change IP address settings
-You may need to change the allocation method of an IPv4 address, change the static IPv4 address, or change the public IP address associated with a network interface. Place a virtual machine into the stopped (deallocated) state before changing the private IPv4 address of a secondary IP configuration associated with the secondary network interface. To learn more, see [primary and secondary network interfaces](../../virtual-network/virtual-network-network-interface-vm.md)).
+Situations arise where you need to change the allocation method of an IPv4 address, change the static IPv4 address, or change the public IP address associated with a network interface. Place a virtual machine into the stopped (deallocated) state before changing the private IPv4 address of a secondary IP configuration associated with the secondary network interface. To learn more, see [primary and secondary network interfaces](../../virtual-network/virtual-network-network-interface-vm.md).
# [**Portal**](#tab/nic-address-portal)
az network nic ip-config delete --resource-group myResourceGroup --name myIpConf
Each network interface is assigned one primary IP configuration. A primary IP configuration: - Has a [private](#private) [IPv4](#ipv4) address assigned to it. You can't assign a private [IPv6](#ipv6) address to a primary IP configuration.-- May also have a [public](#public) IPv4 address assigned to it. You can't assign a public IPv6 address to a primary (IPv4) IP configuration.
+- Can have a [public](#public) IPv4 address assigned to it. You can't assign a public IPv6 address to a primary (IPv4) IP configuration.
### Secondary
-In addition to a primary IP configuration, a network interface may have zero or more secondary IP configurations assigned to it. A secondary IP configuration:
+In addition to a primary IP configuration, a network interface can have zero or more secondary IP configurations assigned to it. A secondary IP configuration:
-- Must have a private IPv4 or IPv6 address assigned to it. If the address is IPv6, the network interface can only have one secondary IP configuration. If the address is IPv4, the network interface may have multiple secondary IP configurations assigned to it. To learn more about how many private and public IPv4 addresses can be assigned to a network interface, see the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).-- May also have a public IPv4 or IPv6 address assigned to it. Assigning multiple IPv4 addresses to a network interface is helpful in scenarios such as:
+- Must have a private IPv4 or IPv6 address assigned to it. If the address is IPv6, the network interface can only have one secondary IP configuration. If the address is IPv4, the network interface can have multiple secondary IP configurations assigned to it. To learn more about how many private and public IPv4 addresses can be assigned to a network interface, see the [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits).
+- Can have a public IPv4 or IPv6 address assigned to it. Assigning multiple IPv4 addresses to a network interface is helpful in scenarios such as:
- Hosting multiple websites or services with different IP addresses and TLS/SSL certificates on a single server. - A virtual machine serving as a network virtual appliance, such as a firewall or load balancer. - The ability to add any of the private IPv4 addresses for any of the network interfaces to an Azure Load Balancer back-end pool. In the past, only the primary IPv4 address for the primary network interface could be added to a back-end pool. To learn more about how to load balance multiple IPv4 configurations, see [Load balancing multiple IP configurations](../../load-balancer/load-balancer-multiple-ip.md).
- - The ability to load balance one IPv6 address assigned to a network interface. To learn more about how to load balance to a private IPv6 address, see [Load balance IPv6 addresses](../../load-balancer/load-balancer-ipv6-overview.md).
+ - The ability to load balance one IPv6 address assigned to a network interface. To learn more about load balancing a private IPv6 address, see [Load balance IPv6 addresses](../../load-balancer/load-balancer-ipv6-overview.md).
## Address types
There are scenarios where it's necessary to manually set the IP address of a net
1. Ensure that the virtual machine is receiving a primary IP address from the Azure DHCP servers. Don't set this address in the operating system if running a Linux VM. 2. Delete the IP configuration to be changed. 3. Create a new IP configuration with the new address you would like to set.
-4. [Manually configure](virtual-network-multiple-ip-addresses-portal.md#os-config) the secondary IP addresses within the operating system (and also the primary IP address within Windows) to match what you set within Azure. Don't manually set the primary IP address in the OS network configuration on Linux, or it may not be able to connect to the Internet when the configuration is re-loaded.
-5. Re-load the network configuration on the guest operating system. This can be done by rebooting the system, or by running 'nmcli con down "System eth0 && nmcli con up "System eth0"' in Linux systems running NetworkManager.
+4. [Manually configure](virtual-network-multiple-ip-addresses-portal.md#os-config) the secondary IP addresses within the operating system (and also the primary IP address within Windows) to match what you set within Azure. Don't manually set the primary IP address in the OS network configuration on Linux, or it may not be able to connect to the Internet when the configuration is reloaded.
+5. Reload the network configuration on the guest operating system. This can be done by rebooting the system, or by running `nmcli con down "System eth0" && nmcli con up "System eth0"` on Linux systems running NetworkManager.
6. Verify the networking set-up is as desired. Test connectivity for all IP addresses of the system.
-By following the previous steps, the private IP address assigned to the network interface within Azure, and within a virtual machine's operating system, remain the same. To keep track of which virtual machines within your subscription that you've manually set IP addresses within an operating system for, consider adding an Azure [tag](../../azure-resource-manager/management/tag-resources.md) to the virtual machines. You might use "IP address allocation: Static", for example. This way, you can easily find the virtual machines within your subscription that you've manually set the IP address for within the operating system.
+By following the previous steps, the private IP address assigned to the network interface within Azure, and within a virtual machine's operating system, remains the same. To keep track of virtual machines in your subscription that have manually set IP addresses within an operating system, consider adding an Azure [tag](../../azure-resource-manager/management/tag-resources.md) to the virtual machines. You might use "IP address allocation: Static", for example. This way, you can easily find the virtual machines within your subscription that you've manually set the IP address for within the operating system.
In addition to enabling a virtual machine to communicate with other resources within the same, or connected virtual networks, a private IP address also enables a virtual machine to communicate outbound to the Internet. Outbound connections are source network address translated by Azure to an unpredictable public IP address. To learn more about Azure outbound Internet connectivity, see [Azure outbound Internet connectivity](../../load-balancer/load-balancer-outbound-connections.md). You can't communicate inbound to a virtual machine's private IP address from the Internet. If your outbound connections require a predictable public IP address, associate a public IP address resource to a network interface.
Public and private IP addresses are assigned using one of the following allocati
Dynamic private IPv4 and IPv6 (optionally) addresses are assigned by default. - **Public only**: Azure assigns the address from a range unique to each Azure region. You can download the list of ranges (prefixes) for the Azure [Public](https://www.microsoft.com/download/details.aspx?id=56519), [US government](https://www.microsoft.com/download/details.aspx?id=57063), [China](https://www.microsoft.com/download/details.aspx?id=57062), and [Germany](https://www.microsoft.com/download/details.aspx?id=57064) clouds. The address can change when a virtual machine is stopped (deallocated), then started again. You can't assign a public IPv6 address to an IP configuration using either allocation method.-- **Private only**: Azure reserves the first four addresses in each subnet address range, and doesn't assign the addresses. Azure assigns the next available address to a resource from the subnet address range. For example, if the subnet's address range is 10.0.0.0/16, and addresses 10.0.0.4-10.0.0.14 are already assigned (.0-.3 are reserved), Azure assigns 10.0.0.15 to the resource. Dynamic is the default allocation method. Once assigned, dynamic IP addresses are only released if a network interface is deleted, assigned to a different subnet within the same virtual network, or the allocation method is changed to static, and a different IP address is specified. By default, Azure assigns the previous dynamically assigned address as the static address when you change the allocation method from dynamic to static. -
+- **Private only**: Azure reserves the first four addresses in each subnet address range, and doesn't assign the addresses. Azure assigns the next available unassigned or unreserved IP address in the subnet's address range. While this is normally the next sequentially available address, there's no guarantee that the address will be the next one in the range. For example, if the subnet's address range is 10.0.0.0/16, and addresses 10.0.0.4-10.0.0.14 are already assigned (.0-.3 are reserved), the next IP address assigned is most likely 10.0.0.15. However, it could be any unassigned address between 10.0.0.15 and 10.0.255.254. If a specific private IP address is required for a resource, you should use a static private IP address. Dynamic is the default allocation method. Once assigned, dynamic IP addresses are only released if a network interface is deleted, assigned to a different subnet within the same virtual network, or the allocation method is changed to static, and a different IP address is specified. By default, Azure assigns the previous dynamically assigned address as the static address when you change the allocation method from dynamic to static.
+
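To illustrate the dynamic method, a sketch with the Azure SDK for Python that creates a network interface without specifying a private address, so Azure picks one from the subnet. The subnet ID, names, and location are placeholder assumptions.

```python
# Hypothetical example: create a NIC with the default dynamic private IPv4 allocation
# and print the address Azure selected from the subnet.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

nic = network.network_interfaces.begin_create_or_update(
    "myResourceGroup",
    "myNic",
    {
        "location": "eastus",
        "ip_configurations": [
            {
                "name": "ipconfig1",
                "subnet": {"id": "<subnet-resource-id>"},  # assumed existing subnet
                "private_ip_allocation_method": "Dynamic",
            }
        ],
    },
).result()

print(nic.ip_configurations[0].private_ip_address)  # the address Azure assigned
```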
### Static

You can (optionally) assign a public or private static IPv4 or IPv6 address to an IP configuration. To learn more about how Azure assigns static public IPv4 addresses, see [Manage an Azure public IP address](virtual-network-public-ip-address.md).
- **Public only**: Azure assigns the address from a range unique to each Azure region. You can download the list of ranges (prefixes) for the Azure [Public](https://www.microsoft.com/download/details.aspx?id=56519), [US government](https://www.microsoft.com/download/details.aspx?id=57063), [China](https://www.microsoft.com/download/details.aspx?id=57062), and [Germany](https://www.microsoft.com/download/details.aspx?id=57064) clouds. The address doesn't change until the public IP address resource it's assigned to is deleted, or the allocation method is changed to dynamic. If the public IP address resource is associated to an IP configuration, it must be disassociated from the IP configuration before changing its allocation method.
-- **Private only**: You select and assign an address from the subnet's address range. The address you assign can be any address within the subnet address range that isn't one of the first four addresses in the subnet's address range and isn't currently assigned to any other resource in the subnet. Static addresses are only released if a network interface is deleted. If you change the allocation method to static, Azure dynamically assigns the previously assigned dynamic IP address as the static address, even if the address isn't the next available address in the subnet's address range. The address also changes if the network interface is assigned to a different subnet within the same virtual network, but to assign the network interface to a different subnet, you must first change the allocation method from static to dynamic. Once you've assigned the network interface to a different subnet, you can change the allocation method back to static, and assign an IP address from the new subnet's address range.
+- **Private only**: You select and assign an address from the subnet's address range. The address you assign can be any address within the subnet's address range that isn't one of the first four addresses and isn't currently assigned to an existing resource in the subnet. Static addresses are only released if a network interface is deleted. If you change the allocation method to static, Azure dynamically assigns the previously assigned dynamic IP address as the static address, even if the address isn't the next available address in the subnet's address range. The address also changes if the network interface is assigned to a different subnet within the same virtual network. To assign the network interface to a different subnet, you must first change the allocation method from static to dynamic. Once the network interface is assigned to a different subnet, you can change the allocation method back to static, and assign an IP address from the new subnet's address range.
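A minimal sketch, again with the Azure SDK for Python, of pinning an existing NIC's private address by switching its IP configuration to static. The specific address and resource names are illustrative assumptions; the chosen address must be unreserved and unassigned in the subnet.

```python
# Hypothetical example: change the primary IP configuration of an existing NIC from
# dynamic to static and pick a specific unused address from the subnet's range.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

network = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

nic = network.network_interfaces.get("myResourceGroup", "myNic")
ip_config = nic.ip_configurations[0]

ip_config.private_ip_allocation_method = "Static"
ip_config.private_ip_address = "10.0.0.42"  # assumption: free address in the subnet

network.network_interfaces.begin_create_or_update("myResourceGroup", "myNic", nic).result()
```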
## IP address versions
virtual-network Virtual Networks Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-networks-faq.md
Unicast is supported in virtual networks. Multicast, broadcast, IP-in-IP encapsulated packets, and Generic Routing Encapsulation (GRE) packets are blocked within virtual networks.
### Can I deploy a DHCP server in a virtual network?
-Azure virtual networks provide DHCP service and DNS to VMs and client/server DHCP (source port UDP/68, destination port UDP/67) not supported in a virtual network.
+Azure virtual networks provide DHCP service and DNS to VMs. Client/server DHCP traffic (source port UDP/68, destination port UDP/67) is not supported in a virtual network.
You can't deploy your own DHCP service to receive and provide unicast or broadcast client/server DHCP traffic for endpoints inside a virtual network. Deploying a DHCP server VM with the intent to receive unicast DHCP relay (source port UDP/67, destination port UDP/67) traffic is also an *unsupported* scenario.