Updates from: 11/02/2023 02:13:28
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 09/29/2023 Last updated : 11/01/2023
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md) and [Azure AD B2C developer release notes](custom-policy-developer-notes.md).
-## September 2023
+## October 2023
+
+### Updated articles
-This month, we renamed Azure Active Directory (Azure AD) to Microsoft Entra ID. For more information about the rebranding, see the [New name for Azure Active Directory](/azure/active-directory/fundamentals/new-name) article.
+- [Set up a force password reset flow in Azure Active Directory B2C](force-password-reset.md) - Editorial updates
+- [Azure AD B2C: Frequently asked questions (FAQ)](faq.yml) - Editorial updates
+- [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md) - Added breaking change on script tags
+
+## September 2023
### Updated articles
This month, we renamed Azure Active Directory (Azure AD) to Microsoft Entra ID.
- [Secure your API used by an API connector in Azure AD B2C](secure-rest-api.md) - Editorial updates
- [Azure AD B2C: Frequently asked questions (FAQ)](faq.yml) - Editorial updates
- [Define an ID token hint technical profile in an Azure Active Directory B2C custom policy](id-token-hint.md) - Editorial updates
-- [Set up sign-in for multi-tenant Microsoft Entra ID using custom policies in Azure Active Directory B2C](identity-provider-azure-ad-multi-tenant.md) - Editorial updates
+- [Set up sign-in for multitenant Microsoft Entra ID using custom policies in Azure Active Directory B2C](identity-provider-azure-ad-multi-tenant.md) - Editorial updates
- [Set up sign-in for a specific Microsoft Entra organization in Azure Active Directory B2C](identity-provider-azure-ad-single-tenant.md) - Editorial updates
- [Localization string IDs](localization-string-ids.md) - Editorial updates
- [Define a Microsoft Entra multifactor authentication technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md) - Editorial updates
- [Page layout versions](page-layout.md) - Editorial updates
- [Secure your API used by an API connector in Azure AD B2C](secure-rest-api.md) - OAuth Bearer Authentication updated to GA
-## June 2023
-
-### New articles
-
-- [Microsoft Azure Active Directory B2C external identity video series](external-identities-videos.md)
-- [Manage directory size quota of your Azure Active Directory B2C tenant](tenant-management-directory-quota.md)
-
-### Updated articles
-
-- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md) - [Azure AD B2C] Azure AD B2C Go-Local opt-in feature
-- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel](configure-security-analytics-sentinel.md) - Removing product name from filename and links
-- [Tutorial: Configure Azure Active Directory B2C with Azure Web Application Firewall](partner-web-application-firewall.md) - Removing product name from filename and links
-- [Build a global identity solution with funnel-based approach](b2c-global-identity-funnel-based-design.md) - Removing product name from filename and links
-- [Azure Active Directory B2C global identity framework proof of concept for funnel-based configuration](b2c-global-identity-proof-of-concept-funnel.md) - Removing product name from filename and links
-- [Azure Active Directory B2C global identity framework proof of concept for region-based configuration](b2c-global-identity-proof-of-concept-regional.md) - Removing product name from filename and links
-- [Build a global identity solution with region-based approach](b2c-global-identity-region-based-design.md) - Removing product name from filename and links
-- [Azure Active Directory B2C global identity framework](b2c-global-identity-solutions.md) - Removing product name from filename and links
-- [Use the Azure portal to create and delete consumer users in Azure AD B2C](manage-users-portal.md) - [Azure AD B2C] Revoke user's session
-- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md) - Added steps to disable Azure monitor
ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-general-document.md
Previously updated : 07/18/2023 Last updated : 11/01/2023 monikerRange: '>=doc-intel-3.0.0'
The General document v3.0 model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, tables, and selection marks from documents. General document is only available with the v3.0 API. For more information on using the v3.0 API, see our [migration guide](v3-1-migration-guide.md).
-> [!NOTE]
-> The `2023-07-31` (GA) version of the general document model adds support for **normalized keys**.
-
## General document features

* The general document model is a pretrained model; it doesn't require labels or training.
The General document v3.0 model combines powerful Optical Character Recognition
* The general document model supports structured, semi-structured, and unstructured documents.
-* Key names are spans of text within the document that are associated with a value. With the `2023-07-31` (GA) API version, key names are normalized where applicable.
-
* Selection marks are identified as fields with a value of `:selected:` or `:unselected:`

***Sample document processed in the Document Intelligence Studio***
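The key-value pair and selection-mark output described in this list can be illustrated with a short parsing sketch. The response shape below is a hypothetical, heavily simplified rendering of an analyze result, not the full v3.0 payload:

```python
# Sketch: pull key-value pairs (including selection-mark values) out of a
# simplified general-document analyze result. The "analyzeResult" shape
# here is a made-up reduction of the real v3.0 response.
def extract_key_values(result: dict) -> dict:
    """Map each detected key's text to its value text (or selection state)."""
    pairs = {}
    for kv in result.get("analyzeResult", {}).get("keyValuePairs", []):
        key = kv["key"]["content"]
        value = kv.get("value", {}).get("content")  # a key may have no value
        pairs[key] = value
    return pairs

sample = {
    "analyzeResult": {
        "keyValuePairs": [
            {"key": {"content": "First name"}, "value": {"content": "Avery"}},
            {"key": {"content": "Opt in"}, "value": {"content": ":selected:"}},
            {"key": {"content": "Notes"}},  # key with no detected value
        ]
    }
}

print(extract_key_values(sample))
# {'First name': 'Avery', 'Opt in': ':selected:', 'Notes': None}
```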
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
monikerRange: '<=doc-intel-3.1.0'
::: moniker-end

::: moniker range=">=doc-intel-2.1.0"
- Azure AI Document Intelligence supports a wide variety of models that enable you to add intelligent document processing to your apps and flows. You can use a prebuilt document analysis or domain specific model or train a custom model tailored to your specific business needs and use cases. Document Intelligence can be used with the REST API or Python, C#, Java, and JavaScript SDKs.
+ Azure AI Document Intelligence supports a wide variety of models that enable you to add intelligent document processing to your apps and flows. You can use a prebuilt domain-specific model or train a custom model tailored to your specific business needs and use cases. Document Intelligence can be used with the REST API or Python, C#, Java, and JavaScript SDKs.
::: moniker-end

## Model overview
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
Previously updated : 07/18/2023 Last updated : 11/01/2023 monikerRange: '>=doc-intel-3.0.0'
Try extracting text from forms and documents using the Document Intelligence Stu
## Supported extracted languages and locales
-The following lists include the currently GA languages in the most recent v3.0 version for Read, Layout, and Custom template (form) models.
+The following lists include the languages currently supported for the GA versions of Read, Layout, and Custom template (form) models.
> [!NOTE]
> **Language code optional**
ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/language-support.md
Previously updated : 09/28/2022 Last updated : 11/01/2023
-# Summarization language support
+# Language support for document and conversation summarization
-Use this article to learn which natural languages are supported by document and conversation summarization.
+Use this article to learn which natural languages are supported by document and conversation summarization.
-# [Document summarization](#tab/document-summarization)
+## Document summarization
-## Languages supported by extractive and abstractive document summarization
+Extractive and abstractive document summarization supports the following languages:
| Language | Language code | Notes |
|--|--|--|
Use this article to learn which natural languages are supported by document and
| Spanish | `es` | |
| Portuguese | `pt` | |
-# [Conversation summarization](#tab/conversation-summarization)
-
-## Languages supported by conversation summarization
+## Conversation summarization
Conversation summarization supports the following languages:
Conversation summarization supports the following languages:
|--|--|--|
| English | `en` | |

--
-## Languages supported by custom summarization
+## Custom summarization
Custom summarization supports the following languages:
ai-services Model Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/model-versions.md
Customers can also deploy a specific version like GPT-4 0314 or GPT-4 0613 and c
* Deployments set to **Upgrade when expired** automatically update when their current version is retired.
* Deployments that are set to **No Auto Upgrade** stop working when the model is retired.
-### VersionUpgradeOption
-
-You can check what model upgrade options are set for previously deployed models in [Azure OpenAI Studio](https://oai.azure.com). Select **Deployments** > Under the deployment name column select one of the deployment names that are highlighted in blue > The **Properties** will contain a value for **Version update policy**.
-
-The corresponding property can also be accessed via [REST](../how-to/working-with-models.md#model-deployment-upgrade-configuration), [Azure PowerShell](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccountdeployment), and [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-show).
-
-|Option| Read | Update |
-||||
-| [REST](../how-to/working-with-models.md#model-deployment-upgrade-configuration) | Yes. If `versionUpgradeOption` is not returned it means it is `null` |Yes |
-| [Azure PowerShell](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccountdeployment) | Yes.`VersionUpgradeOption` can be checked for `$null`| Yes |
-| [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-show) | Yes. It shows `null` if `versionUpgradeOption` is not set.| *No.* It is currently not possible to update the version upgrade option.|
-
-> [!NOTE]
-> `null` is equivalent to `AutoUpgradeWhenExpired`.
-
-**Azure PowerShell**
-
-Review the Azure PowerShell [getting started guide](/powershell/azure/get-started-azureps) to install Azure PowerShell locally or you can use the [Azure Cloud Shell](/azure/cloud-shell/overview).
-
-The steps below demonstrate checking the `VersionUpgradeOption` option property as well as updating it:
-
-```powershell
-// Step 1: Get Deployment
-$deployment = Get-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName} -Name {DeploymentName}
-
-// Step 2: Show Deployment VersionUpgradeOption
-$deployment.Properties.VersionUpgradeOption
-
-// VersionUpgradeOption can be null - one way to check is
-$null -eq $deployment.Properties.VersionUpgradeOption
-
-// Step 3: Update Deployment VersionUpgradeOption
-$deployment.Properties.VersionUpgradeOption = "NoAutoUpgrade"
-New-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName} -Name {DeploymentName} -Properties $deployment.Properties -Sku $deployment.Sku
-
-// repeat step 1 and 2 to confirm the change.
-// If not sure about deployment name, use this command to show all deployments under an account
-Get-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName}
-```
-
## How Azure updates OpenAI models

Azure works closely with OpenAI to release new model versions. When a new version of a model is released, a customer can immediately test it in new deployments. Azure publishes when new versions of models are released, and notifies customers at least two weeks before a new version becomes the default version of the model. Azure also maintains the previous major version of the model until its retirement date, so customers can switch back to it if desired.
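The two-week notice policy above can be illustrated with a trivial date calculation; the notification date below is made up for illustration:

```python
from datetime import date, timedelta

# Azure notifies customers at least two weeks before a new model version
# becomes the default, so given a (hypothetical) notification date the
# earliest possible default-switch date is 14 days later.
def earliest_default_switch(notified_on: date) -> date:
    return notified_on + timedelta(days=14)

print(earliest_default_switch(date(2023, 11, 1)))  # 2023-11-15
```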
ai-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md
These models can only be used with the Chat Completion API.
GPT-4 version 0314 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support.
+See [model versions](../concepts/model-versions.md) to learn about how Azure OpenAI Service handles model version upgrades, and [working with models](../how-to/working-with-models.md) to learn how to view and configure the model version settings of your GPT-4 deployments.
+
+> [!NOTE]
+> Version `0314` of `gpt-4` and `gpt-4-32k` will be retired no earlier than July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
+
| Model ID | Max Request (tokens) | Training Data (up to) |
|--|:--:|:--:|
| `gpt-4` (0314) | 8,192 | Sep 2021 |
GPT-4 version 0314 is the first version of the model released. Version 0613 is
| `gpt-4-32k` (0613) | 32,768 | Sep 2021 |

> [!NOTE]
-> Any region where GPT-4 is listed as available will always have access to both the 8K and 32K versions of the model
+> Regions where GPT-4 is listed as available have access to both the 8K and 32K versions of the model.
### GPT-4 model availability

| Model Availability | gpt-4 (0314) | gpt-4 (0613) |
|--|:--|:--|
-| Available to all subscriptions with Azure OpenAI access | | Canada East <br> Sweden Central <br> Switzerland North |
-| Available to subscriptions with current access to the model version in the region | East US <br> France Central <br> South Central US <br> UK South | Australia East <br> East US <br> East US 2 <br> France Central <br> Japan East <br> UK South |
+| Available to all subscriptions with Azure OpenAI access | | Canada East <br> France Central <br> Sweden Central <br> Switzerland North |
+| Available to subscriptions with current access to the model version in the region | East US <br> France Central <br> South Central US <br> UK South | Australia East <br> East US <br> East US 2 <br> Japan East <br> UK South |
### GPT-3.5 models
GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can als
GPT-3.5 Turbo version 0301 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support.
+See [model versions](../concepts/model-versions.md) to learn about how Azure OpenAI Service handles model version upgrades, and [working with models](../how-to/working-with-models.md) to learn how to view and configure the model version settings of your GPT-3.5 Turbo deployments.
+
> [!NOTE]
> Version `0301` of `gpt-35-turbo` will be retired no earlier than July 5, 2024. See [model updates](../how-to/working-with-models.md#model-updates) for model upgrade behavior.
ai-services Working With Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/working-with-models.md
To view deprecation/expiration dates for all available models in a given region
## Model deployment upgrade configuration
-There are three distinct model deployment upgrade options which are configurable via REST API:
+You can check what model upgrade options are set for previously deployed models in [Azure OpenAI Studio](https://oai.azure.com). Select **Deployments**, and then under the deployment name column, select one of the deployment names highlighted in blue.
++
+This will open the **Properties** for the model deployment. You can view what upgrade options are set for your deployment under **Version update policy**:
++
+The corresponding property can also be accessed via [REST](../how-to/working-with-models.md#model-deployment-upgrade-configuration), [Azure PowerShell](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccountdeployment), and [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-show).
+
+|Option| Read | Update |
+||||
+| [REST](../how-to/working-with-models.md#model-deployment-upgrade-configuration) | Yes. If `versionUpgradeOption` is not returned it means it is `null` |Yes |
+| [Azure PowerShell](/powershell/module/az.cognitiveservices/get-azcognitiveservicesaccountdeployment) | Yes. `VersionUpgradeOption` can be checked for `$null` | Yes |
+| [Azure CLI](/cli/azure/cognitiveservices/account/deployment#az-cognitiveservices-account-deployment-show) | Yes. It shows `null` if `versionUpgradeOption` is not set.| *No.* It is currently not possible to update the version upgrade option.|
+
+There are three distinct model deployment upgrade options:
| Name | Description |
|--|--|
| `OnceNewDefaultVersionAvailable` | Once a new version is designated as the default, the model deployment will automatically upgrade to the default version within two weeks of that designation change being made. |
|`OnceCurrentVersionExpired` | Once the retirement date is reached the model deployment will automatically upgrade to the current default version. |
-|`NoAutoUpgrade` | The model deployment will never automatically upgrade. Once the retirement date is reached the model deployment will stop working. You will need to update your code referencing that deployment to point to a non-expired model deployment. |
+|`NoAutoUpgrade` | The model deployment will never automatically upgrade. Once the retirement date is reached the model deployment will stop working. You will need to update your code referencing that deployment to point to a nonexpired model deployment. |
+
+> [!NOTE]
+> `null` is equivalent to `AutoUpgradeWhenExpired`. If the **Version update policy** option isn't present in the properties for a model that supports model upgrades, the value is currently `null`. Once you explicitly modify this value, the property is visible in the Studio properties page as well as via the REST API.
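The null-handling described in this note can be sketched as a small helper over the `properties` object a deployment returns from the Deployments REST API; the helper itself is illustrative, not part of any SDK:

```python
# Interpret the effective upgrade policy from a deployment's "properties"
# dict as returned by the Deployments REST API. A missing or null
# versionUpgradeOption is treated as AutoUpgradeWhenExpired, per the note.
def effective_upgrade_policy(properties: dict) -> str:
    return properties.get("versionUpgradeOption") or "AutoUpgradeWhenExpired"

print(effective_upgrade_policy({"versionUpgradeOption": "NoAutoUpgrade"}))  # NoAutoUpgrade
print(effective_upgrade_policy({}))  # AutoUpgradeWhenExpired
```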
+
+### Examples
+
+# [PowerShell](#tab/powershell)
+
+Review the Azure PowerShell [getting started guide](/powershell/azure/get-started-azureps) to install Azure PowerShell locally or you can use the [Azure Cloud Shell](/azure/cloud-shell/overview).
+
+The following steps demonstrate checking the `VersionUpgradeOption` property and updating it:
+
+```powershell
+# Step 1: Get the deployment
+$deployment = Get-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName} -Name {DeploymentName}
+
+# Step 2: Show the deployment's VersionUpgradeOption
+$deployment.Properties.VersionUpgradeOption
+
+# VersionUpgradeOption can be null - one way to check is
+$null -eq $deployment.Properties.VersionUpgradeOption
+
+# Step 3: Update the deployment's VersionUpgradeOption
+$deployment.Properties.VersionUpgradeOption = "NoAutoUpgrade"
+New-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName} -Name {DeploymentName} -Properties $deployment.Properties -Sku $deployment.Sku
+
+# Repeat steps 1 and 2 to confirm the change.
+# If you're not sure about the deployment name, use this command to show all deployments under an account:
+Get-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName}
+```
+
+```powershell
+# To update to a new model version
+
+# Step 1: Get the deployment
+$deployment = Get-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName} -Name {DeploymentName}
+
+# Step 2: Show the deployment's model version
+$deployment.Properties.Model.Version
-To query the current model deployment settings including the deployment upgrade configuration for a given resource use [`Deployments List`](/rest/api/cognitiveservices/accountmanagement/deployments/list?tabs=HTTP#code-try-0)
+# Step 3: Update the deployed model version
+$deployment.Properties.Model.Version = "0613"
+New-AzCognitiveServicesAccountDeployment -ResourceGroupName {ResourceGroupName} -AccountName {AccountName} -Name {DeploymentName} -Properties $deployment.Properties -Sku $deployment.Sku
+
+# Repeat steps 1 and 2 to confirm the change.
+```
+
+# [REST](#tab/rest)
+
+To query the current model deployment settings, including the deployment upgrade configuration, for a given resource, use [`Deployments List`](/rest/api/cognitiveservices/accountmanagement/deployments/list?tabs=HTTP#code-try-0). If the value is `null`, you won't see a `versionUpgradeOption` property.
```http
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/deployments?api-version=2023-05-01
```
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
- `2023-05-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/1e71ad94aeb8843559d59d863c895770560d7c93/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/stable/2023-05-01/cognitiveservices.json)
+
### Example response

```json
{
- "id": "/subscriptions/{Subcription-GUID}/resourceGroups/{Resource-Group-Name}/providers/Microsoft.CognitiveServices/accounts/{Resource-Name}/deployments/text-davinci-003",
+ "value": [
+ {
+ "id": "/subscriptions/aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeeb/resourceGroups/az-test-openai/providers/Microsoft.CognitiveServices/accounts/aztestopenai001/deployments/gpt-35-turbo",
"type": "Microsoft.CognitiveServices/accounts/deployments",
- "name": "text-davinci-003",
+ "name": "gpt-35-turbo",
"sku": {
"name": "Standard",
-        "capacity": 60
+        "capacity": 80
},
"properties": {
"model": {
"format": "OpenAI",
- "name": "text-davinci-003",
- "version": "1"
+ "name": "gpt-35-turbo",
+ "version": "0301"
},
"versionUpgradeOption": "OnceNewDefaultVersionAvailable",
"capabilities": {
"completion": "true",
-        "search": "true"
+        "chatCompletion": "true"
},
"raiPolicyName": "Microsoft.Default",
"provisioningState": "Succeeded",
GET https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{
{
"key": "request",
"renewalPeriod": 10,
-          "count": 60
+          "count": 80
},
{
"key": "token",
"renewalPeriod": 60,
-          "count": 60000
+          "count": 80000
}
]
- }
+ },
+ "systemData": {
+ "createdBy": "docs@contoso.com",
+ "createdByType": "User",
+ "createdAt": "2023-07-31T16:45:32.622404Z",
+ "lastModifiedBy": "docs@contoso.com",
+ "lastModifiedByType": "User",
+ "lastModifiedAt": "2023-10-31T13:59:34.4978286Z"
+ },
+ "etag": "\"aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee\""
+ }
+ ]
+}
```

You can then take the settings from this list to construct an update model REST API call as described below if you want to modify the deployment upgrade configuration.

++

## Update & deploy models via the API

```http
This is only a subset of the available request body parameters. For the full lis
#### Example request

```Bash
-curl -X PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/text-embedding-ada-002-test-1?api-version=2023-05-01 \
+curl -X PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/gpt-35-turbo?api-version=2023-05-01 \
-H "Content-Type: application/json" \ -H 'Authorization: Bearer YOUR_AUTH_TOKEN' \
- -d '{"sku":{"name":"Standard","capacity":1},"properties": {"model": {"format": "OpenAI","name": "text-embedding-ada-002","version": "2"},"versionUpgradeOption":"OnceCurrentVersionExpired"}}'
+ -d '{"sku":{"name":"Standard","capacity":120},"properties": {"model": {"format": "OpenAI","name": "gpt-35-turbo","version": "0613"},"versionUpgradeOption":"OnceCurrentVersionExpired"}}'
```

> [!NOTE]
curl -X PUT https://management.azure.com/subscriptions/00000000-0000-0000-0000-0
#### Example response

```json
-{
- "id": "/subscriptions/{subscription-id}/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/text-embedding-ada-002-test-1",
+ {
+ "id": "/subscriptions/{subscription-id}/resourceGroups/resource-group-temp/providers/Microsoft.CognitiveServices/accounts/docs-openai-test-001/deployments/gpt-35-turbo",
"type": "Microsoft.CognitiveServices/accounts/deployments",
- "name": "text-embedding-ada-002-test-1",
+ "name": "gpt-35-turbo",
"sku": {
"name": "Standard",
-        "capacity": 1
+        "capacity": 120
},
"properties": {
"model": {
"format": "OpenAI",
- "name": "text-embedding-ada-002",
- "version": "2"
+ "name": "gpt-35-turbo",
+ "version": "0613"
},
"versionUpgradeOption": "OnceCurrentVersionExpired",
"capabilities": {
-        "embeddings": "true",
-        "embeddingsMaxInputs": "1"
+        "chatCompletion": "true"
},
"provisioningState": "Succeeded",
- "ratelimits": [
+ "rateLimits": [
{
"key": "request",
"renewalPeriod": 10,
-          "count": 2
+          "count": 120
},
{
"key": "token",
"renewalPeriod": 60,
-          "count": 1000
+          "count": 120000
}
]
},
"systemData": {
"createdBy": "docs@contoso.com",
"createdByType": "User",
- "createdAt": "2023-06-13T00:12:38.885937Z",
+ "createdAt": "2023-02-28T02:57:15.8951706Z",
"lastModifiedBy": "docs@contoso.com", "lastModifiedByType": "User",
- "lastModifiedAt": "2023-06-13T02:41:04.8410965Z"
+ "lastModifiedAt": "2023-10-31T15:35:53.082912Z"
},
- "etag": "\"{GUID}\""
+ "etag": "\"GUID\""
}
```
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/embeddings.md
In this tutorial, you learn how to:
If you haven't already, you need to install the following libraries:

```cmd
-pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken
+pip install "openai==0.28.1" num2words matplotlib plotly scipy scikit-learn pandas tiktoken
```

<!--Alternatively, you can use our [requirements.txt file](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/requirements.txt).-->
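Embeddings retrieved in this tutorial are typically compared by cosine similarity. A minimal, self-contained sketch of that computation, using only `scipy` and `numpy` from the list above, with made-up vectors standing in for real embeddings:

```python
import numpy as np
from scipy import spatial

# Cosine similarity = 1 - cosine distance. The tiny vectors below are
# stand-ins for real embedding vectors, which are much higher dimensional.
def cosine_similarity(a, b):
    return 1 - spatial.distance.cosine(a, b)

v1 = np.array([0.1, 0.2, 0.3])
v2 = np.array([0.2, 0.4, 0.6])    # same direction as v1
v3 = np.array([-0.3, 0.1, -0.2])  # mostly opposite direction

print(round(cosine_similarity(v1, v2), 4))  # 1.0
print(cosine_similarity(v1, v3) < cosine_similarity(v1, v2))  # True
```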
ai-services Fine Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/fine-tune.md
In this tutorial you learn how to:
If you haven't already, you need to install the following libraries:

```cmd
-pip install openai json requests os tiktoken time
+pip install "openai==0.28.1" requests tiktoken
```

[!INCLUDE [get-key-endpoint](../includes/get-key-endpoint.md)]
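Training data for fine-tuning is uploaded as JSON Lines. A minimal validation sketch, assuming the chat-style `messages` format (adjust the expected keys to match your training file):

```python
import json

# Check that each line of a JSONL training file parses as JSON and
# carries a non-empty "messages" list (chat fine-tuning format).
# The expected keys are an assumption; match them to your data.
def validate_jsonl(lines):
    errors = []
    for i, line in enumerate(lines, start=1):
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            errors.append(f"line {i}: not valid JSON")
            continue
        if not record.get("messages"):
            errors.append(f"line {i}: missing 'messages'")
    return errors

sample = [
    '{"messages": [{"role": "user", "content": "hi"}]}',
    '{"prompt": "legacy format"}',
]
print(validate_jsonl(sample))  # ["line 2: missing 'messages'"]
```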
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
Previously updated : 06/05/2023 Last updated : 10/25/2023 zone_pivot_groups: programming-languages-speech-sdk
zone_pivot_groups: programming-languages-speech-sdk
In this article, you learn how to evaluate pronunciation with speech to text through the Speech SDK. To [get pronunciation assessment results](#get-pronunciation-assessment-results), you apply the `PronunciationAssessmentConfig` settings to a `SpeechRecognizer` object.

> [!NOTE]
-> Usage of pronunciation assessment costs the same as standard Speech to text, whether pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for standard Speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
+> As a baseline, usage of pronunciation assessment costs the same as speech to text for pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
+>
+> For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing).
You can get pronunciation assessment scores for:
You can get pronunciation assessment scores for:
> Pronunciation assessment is not available with the Speech SDK for Go. You can read about the concepts in this guide, but you must select another programming language for implementation details.

::: zone-end
-You must create a `PronunciationAssessmentConfig` object with the reference text, grading system, and granularity. Enabling miscue and other configuration settings are optional.
+You must create a `PronunciationAssessmentConfig` object. Configure it to enable prosody assessment for your pronunciation evaluation; this feature assesses aspects like stress, intonation, speaking speed, and rhythm, providing insights into the naturalness and expressiveness of your speech. For a content assessment (part of the [unscripted assessment](#unscripted-assessment-results) for the speaking language learning scenario), also configure the object with a topic description, which enhances the assessment's understanding of the specific topic being spoken about and results in more precise content assessment scores.
::: zone pivot="programming-language-csharp"

```csharp
-var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
- referenceText: "good morning",
- gradingSystem: GradingSystem.HundredMark,
- granularity: Granularity.Phoneme,
- enableMiscue: true);
+var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
+ referenceText: "",
+ gradingSystem: GradingSystem.HundredMark,
+ granularity: Granularity.Phoneme,
+ enableMiscue: false);
+pronunciationAssessmentConfig.EnableProsodyAssessment();
+pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting");
```

::: zone-end
var pronunciationAssessmentConfig = new PronunciationAssessmentConfig(
::: zone pivot="programming-language-cpp"

```cpp
-auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"enableMiscue\":true}");
+auto pronunciationConfig = PronunciationAssessmentConfig::Create("", PronunciationAssessmentGradingSystem::HundredMark, PronunciationAssessmentGranularity::Phoneme, false);
+pronunciationConfig->EnableProsodyAssessment();
+pronunciationConfig->EnableContentAssessmentWithTopic("greeting");
```

::: zone-end
auto pronunciationAssessmentConfig = PronunciationAssessmentConfig::CreateFromJs
::: zone pivot="programming-language-java"

```Java
-PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAssessmentConfig.fromJson("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"enableMiscue\":true}");
+PronunciationAssessmentConfig pronunciationConfig = new PronunciationAssessmentConfig("",
+PronunciationAssessmentGradingSystem.HundredMark, PronunciationAssessmentGranularity.Phoneme, false);
+pronunciationConfig.enableProsodyAssessment();
+pronunciationConfig.enableContentAssessmentWithTopic("greeting");
```

::: zone-end
PronunciationAssessmentConfig pronunciationAssessmentConfig = PronunciationAsses
::: zone pivot="programming-language-python"

```Python
-pronunciation_assessment_config = speechsdk.PronunciationAssessmentConfig(json_string="{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"EnableMiscue\":true}")
+pronunciation_config = speechsdk.PronunciationAssessmentConfig(
+reference_text="",
+grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
+granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
+enable_miscue=False)
+pronunciation_config.enable_prosody_assessment()
+pronunciation_config.enable_content_assessment_with_topic("greeting")
```
::: zone-end
::: zone pivot="programming-language-javascript"
```JavaScript
-var pronunciationAssessmentConfig = SpeechSDK.PronunciationAssessmentConfig.fromJSON("{\"referenceText\":\"good morning\",\"gradingSystem\":\"HundredMark\",\"granularity\":\"Phoneme\",\"EnableMiscue\":true}");
+var pronunciationAssessmentConfig = new sdk.PronunciationAssessmentConfig("",
+    sdk.PronunciationAssessmentGradingSystem.HundredMark,
+    sdk.PronunciationAssessmentGranularity.Phoneme,
+    false);
+pronunciationAssessmentConfig.enableProsodyAssessment = true;
+pronunciationAssessmentConfig.enableContentAssessmentWithTopic("greeting");
```
::: zone-end
::: zone pivot="programming-language-objectivec"
```ObjectiveC
-SPXPronunciationAssessmentConfiguration *pronunciationAssessmentConfig =
-[[SPXPronunciationAssessmentConfiguration alloc] init:@"good morning"
- gradingSystem:SPXPronunciationAssessmentGradingSystem_HundredMark
- granularity:SPXPronunciationAssessmentGranularity_Phoneme
- enableMiscue:true];
+SPXPronunciationAssessmentConfiguration *pronunciationConfig =
+[[SPXPronunciationAssessmentConfiguration alloc] init:@""
+                                         gradingSystem:SPXPronunciationAssessmentGradingSystem_HundredMark
+                                           granularity:SPXPronunciationAssessmentGranularity_Phoneme
+                                          enableMiscue:false];
+[pronunciationConfig enableProsodyAssessment];
+[pronunciationConfig enableContentAssessmentWithTopic:@"greeting"];
```
::: zone-end
::: zone pivot="programming-language-swift"
```swift
-var pronunciationAssessmentConfig: SPXPronunciationAssessmentConfiguration?
-do {
- try pronunciationAssessmentConfig = SPXPronunciationAssessmentConfiguration.init(referenceText, gradingSystem: SPXPronunciationAssessmentGradingSystem.hundredMark, granularity: SPXPronunciationAssessmentGranularity.phoneme, enableMiscue: true)
-} catch {
- print("error \(error) happened")
- pronunciationAssessmentConfig = nil
- return
-}
+let pronAssessmentConfig = try! SPXPronunciationAssessmentConfiguration("",
+gradingSystem: .hundredMark,
+granularity: .phoneme,
+enableMiscue: false)
+pronAssessmentConfig.enableProsodyAssessment()
+pronAssessmentConfig.enableContentAssessment(withTopic: "greeting")
```
::: zone-end
This table lists some of the key configuration parameters for pronunciation assessment.
| Parameter | Description |
|--|-|
-| `ReferenceText` | The text that the pronunciation is evaluated against. |
+| `ReferenceText` | The text that the pronunciation is evaluated against.<br/><br/>The `ReferenceText` parameter is optional. Set the reference text if you want to run a [scripted assessment](#scripted-assessment-results) for the reading language learning scenario. Don't set the reference text if you want to run an [unscripted assessment](#unscripted-assessment-results) for the speaking language learning scenario.<br/><br/>For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing). |
| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. |
| `Granularity` | Determines the lowest level of evaluation granularity. Scores for levels greater than or equal to the minimal value are returned. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph, depending on your input. Default: `Phoneme`. |
-| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. To enable miscue calculation, set the `EnableMiscue` to `True`. You can refer to the code snippet below the table.|
+| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. Enabling miscue is optional. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` (default) and `True`. For an example of enabling miscue calculation, see the code snippet below the table. |
| `ScenarioId` | A GUID indicating a customized point system. |

## Syllable groups
To learn how to specify the learning language for pronunciation assessment in your application, see the code snippets in this section.
### Result parameters
-This table lists some of the key pronunciation assessment results.
+Depending on whether you're using [scripted](#scripted-assessment-results) or [unscripted](#unscripted-assessment-results) assessment, you can get different pronunciation assessment results. Scripted assessment is for the reading language learning scenario, and unscripted assessment is for the speaking language learning scenario.
-| Parameter | Description |
-|--|-|
-| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives.|
-| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |
-| `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |
-| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |
-| `ErrorType` | This value indicates whether a word is omitted, inserted, or mispronounced, compared to the `ReferenceText`. Possible values are `None`, `Omission`, `Insertion`, and `Mispronunciation`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60.|
+> [!NOTE]
+> For pricing differences between scripted and unscripted assessment, see [the pricing note](./pronunciation-assessment-tool.md#pricing).
+
+#### Scripted assessment results
+
+This table lists some of the key pronunciation assessment results for the scripted assessment (reading scenario) and the supported granularity for each.
+
+| Parameter | Description |Granularity|
+|--|-|-|
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives.|Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level|
+| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. |Full Text level|
+| `CompletenessScore` | Completeness of the speech, calculated by the ratio of pronounced words to the input reference text. |Full Text level|
+| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level|
+| `PronScore` | Overall score indicating the pronunciation quality of the given speech. `PronScore` is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. |Full Text level|
+| `ErrorType` | This value indicates whether a word is omitted, inserted, improperly inserted with a break, or missing a break at punctuation compared to the reference text. It also indicates whether a word is badly pronounced, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Omission`, `Insertion`, `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. The error type can be `Mispronunciation` when the pronunciation `AccuracyScore` for a word is below 60.| Word level|
+
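The scripted results above describe `PronScore` as a weighted aggregate of the component scores. The following Python sketch illustrates the idea of weighted aggregation only; the actual weights used by the service are internal, so the equal weights here are an assumption for demonstration.

```python
def aggregate_pron_score(accuracy, fluency, completeness, prosody, weights=None):
    """Illustrative weighted aggregation of component scores (0-100 scale).

    The service computes PronScore internally; the equal weights used here
    are an assumption for demonstration only.
    """
    scores = {"accuracy": accuracy, "fluency": fluency,
              "completeness": completeness, "prosody": prosody}
    if weights is None:
        weights = {name: 0.25 for name in scores}  # assumed equal weighting
    total_weight = sum(weights.values())
    return sum(scores[name] * weights[name] for name in scores) / total_weight

# Example: equal component scores yield the same overall score.
print(aggregate_pron_score(80, 90, 100, 70))  # 85.0
```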
+#### Unscripted assessment results
+
+This table lists some of the key pronunciation assessment results for the unscripted assessment (speaking scenario) and the supported granularity for each.
+
+> [!NOTE]
+> `VocabularyScore`, `GrammarScore`, and `TopicScore` parameters roll up to the combined content assessment.
+>
+> Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale.
+
+| Response parameter | Description |Granularity|
+|--|-|-|
+| `AccuracyScore` | Pronunciation accuracy of the speech. Accuracy indicates how closely the phonemes match a native speaker's pronunciation. Syllable, word, and full text accuracy scores are aggregated from phoneme-level accuracy score, and refined with assessment objectives. | Phoneme level,<br>Syllable level (en-US only),<br>Word level,<br>Full Text level|
+| `FluencyScore` | Fluency of the given speech. Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words. | Full Text level|
+| `ProsodyScore` | Prosody of the given speech. Prosody indicates how natural the given speech is, including stress, intonation, speaking speed, and rhythm. | Full Text level|
+| `VocabularyScore` | Proficiency in lexical usage. It evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, and the level of lexical complexity. | Full Text level|
+| `GrammarScore` | Correctness in using grammar and variety of sentence patterns. Grammatical errors are jointly evaluated by lexical accuracy, grammatical accuracy, and diversity of sentence structures. | Full Text level|
+| `TopicScore` | Level of understanding and engagement with the topic, which provides insights into the speaker's ability to express their thoughts and ideas effectively and the ability to engage with the topic. | Full Text level|
+| `PronScore` | Overall score indicating the pronunciation quality of the given speech. This is aggregated from `AccuracyScore`, `FluencyScore`, and `CompletenessScore` with weight. | Full Text level|
+| `ErrorType` | This value indicates whether a word is badly pronounced, improperly inserted with a break, missing a break at punctuation, or monotonically rising, falling, or flat on the utterance. Possible values are `None` (meaning no error on this word), `Mispronunciation`, `UnexpectedBreak`, `MissingBreak`, and `Monotone`. | Word level|
+
+The following table describes the prosody assessment results in more detail:
+
+| Field | Description |
+|-|--|
+| `ProsodyScore` | Prosody score of the entire utterance. |
+| `Feedback` | Feedback on the word level, including Break and Intonation. |
+|`Break` | |
+| `ErrorTypes` | Error types related to breaks, including `UnexpectedBreak` and `MissingBreak`. In the current version, we don't provide the break error type. You need to set thresholds on the 'UnexpectedBreak - Confidence' and 'MissingBreak - Confidence' fields, respectively, to decide whether there's an unexpected break or missing break before the word. |
+| `UnexpectedBreak` | Indicates an unexpected break before the word. |
+| `MissingBreak` | Indicates a missing break before the word. |
+| `Thresholds` | Suggested thresholds on both confidence scores are 0.75. That means, if the value of 'UnexpectedBreak - Confidence' is larger than 0.75, there's an unexpected break. If the value of 'MissingBreak - Confidence' is larger than 0.75, there's a missing break. To vary the detection sensitivity for these two breaks, assign different thresholds to the 'UnexpectedBreak - Confidence' and 'MissingBreak - Confidence' fields. |
+|`Intonation`| Indicates intonation in speech. |
+| `ErrorTypes` | Error types related to intonation, currently supporting only `Monotone`. If `Monotone` exists in the `ErrorTypes` field, the utterance is detected to be monotonic. Monotone is detected on the whole utterance, but the tag is assigned to all the words. All the words in the same utterance share the same monotone detection information. |
+| `Monotone` | Indicates monotonic speech. |
+| `Thresholds (Monotone Confidence)` | The fields 'Monotone - SyllablePitchDeltaConfidence' are reserved for user-customized monotone detection. If you're unsatisfied with the provided monotone decision, you can adjust the thresholds on these fields to customize the detection according to your preferences. |
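The break-threshold logic described in the table can be sketched in Python. The shape of `word_result` below is a simplified assumption based on the fields named above ('UnexpectedBreak - Confidence' and 'MissingBreak - Confidence'), not the exact SDK response schema; the 0.75 defaults are the suggested thresholds from the table.

```python
def detect_break_errors(word_result, unexpected_threshold=0.75, missing_threshold=0.75):
    """Decide break error types for a word from prosody feedback confidences.

    `word_result` is an assumed simplification of the per-word feedback, e.g.
    {"UnexpectedBreak": {"Confidence": 0.9}, "MissingBreak": {"Confidence": 0.3}}.
    """
    errors = []
    if word_result.get("UnexpectedBreak", {}).get("Confidence", 0) > unexpected_threshold:
        errors.append("UnexpectedBreak")
    if word_result.get("MissingBreak", {}).get("Confidence", 0) > missing_threshold:
        errors.append("MissingBreak")
    return errors

# A confidence of 0.9 exceeds the 0.75 threshold, so an unexpected break is reported.
print(detect_break_errors({"UnexpectedBreak": {"Confidence": 0.9},
                           "MissingBreak": {"Confidence": 0.3}}))  # ['UnexpectedBreak']
```

To make detection stricter or looser, pass different threshold arguments instead of the suggested 0.75.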
### JSON result example
-Pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know:
+The [scripted](#scripted-assessment-results) pronunciation assessment results for the spoken word "hello" are shown as a JSON string in the following example. Here's what you should know:
- The phoneme [alphabet](#phoneme-alphabet-format) is IPA.
- The [syllables](#syllable-groups) are returned alongside phonemes for the same word.
- You can use the `Offset` and `Duration` values to align syllables with their corresponding phonemes. For example, the starting offset (11700000) of the second syllable ("loʊ") aligns with the third phoneme ("l"). The offset represents the time at which the recognized speech begins in the audio stream, and it's measured in 100-nanosecond units. To learn more about `Offset` and `Duration`, see [response properties](rest-speech-to-text-short.md#response-properties).
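Because `Offset` and `Duration` are measured in 100-nanosecond units, converting them to seconds is a simple division. A minimal sketch:

```python
TICKS_PER_SECOND = 10_000_000  # Offset and Duration are in 100-nanosecond units

def ticks_to_seconds(ticks):
    """Convert a 100-nanosecond Offset or Duration value to seconds."""
    return ticks / TICKS_PER_SECOND

# From the example above: the second syllable ("loʊ") starts at offset
# 11700000, i.e. 1.17 seconds into the audio stream.
print(ticks_to_seconds(11700000))  # 1.17
```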
## Pronunciation assessment in streaming mode
-Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently. In streaming mode, the `AccuracyScore`, `FluencyScore` , and `CompletenessScore` will vary over time throughout the recording and evaluation process.
+Pronunciation assessment supports uninterrupted streaming mode. The recording time can be unlimited through the Speech SDK. As long as you don't stop recording, the evaluation process doesn't finish and you can pause and resume evaluation conveniently. In streaming mode, the `AccuracyScore`, `FluencyScore`, `ProsodyScore`, and `CompletenessScore` will vary over time throughout the recording and evaluation process.
::: zone pivot="programming-language-csharp"
ai-services Language Learning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/language-learning-overview.md
Previously updated : 02/23/2023 Last updated : 10/25/2023

# Language learning with Azure AI Speech
-The Azure AI service for Speech platform is a comprehensive collection of technologies and services aimed at accelerating the incorporation of speech into applications. Azure AI services for Speech can be used to learn languages.
-
+One of the most important aspects of learning a new language is speaking and listening. Azure AI Speech provides features that can be used to help language learners.
## Pronunciation Assessment
-The [Pronunciation Assessment](pronunciation-assessment-tool.md) feature is designed to provide instant feedback to users on the accuracy, fluency, and prosody of their speech when learning a new language, so that they can speak and present in a new language with confidence. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+The [Pronunciation Assessment](pronunciation-assessment-tool.md) feature is designed to provide instant and comprehensive feedback to users on the accuracy, fluency, prosody, vocabulary usage, grammar correctness, and topic understanding of their speech when learning a new language, so that they can speak and present in a new language with confidence. For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
The Pronunciation Assessment feature offers several benefits for educators, service providers, and students.

- For educators, it provides instant feedback, eliminates the need for time-consuming oral language assessments, and offers consistent and comprehensive assessments.
ai-services Pronunciation Assessment Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/pronunciation-assessment-tool.md
Previously updated : 09/08/2022 Last updated : 10/25/2023
Pronunciation assessment uses the Speech to text capability to provide subjective and objective feedback for language learners. Practicing pronunciation and getting timely feedback are essential for improving language skills. Assessments driven by experienced teachers can take a lot of time and effort, which makes high-quality assessment expensive for learners. Pronunciation assessment can help make language assessment more engaging and accessible to learners of all backgrounds.
-Pronunciation assessment provides various assessment results in different granularities, from individual phonemes to the entire text input.
-- At the full-text level, pronunciation assessment offers additional Fluency and Completeness scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words, and Completeness indicates how many words are pronounced in the speech to the reference text input. An overall score aggregated from Accuracy, Fluency and Completeness is then given to indicate the overall pronunciation quality of the given speech.
-- At the word-level, pronunciation assessment can automatically detect miscues and provide accuracy score simultaneously, which provides more detailed information on omission, repetition, insertions, and mispronunciation in the given speech.
-- Syllable-level accuracy scores are currently available via the [JSON file](?tabs=json#pronunciation-assessment-results) or [Speech SDK](how-to-pronunciation-assessment.md).
-- At the phoneme level, pronunciation assessment provides accuracy scores of each phoneme, helping learners to better understand the pronunciation details of their speech.
+> [!NOTE]
+> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+
+This article describes how to use the pronunciation assessment tool without writing any code through the [Speech Studio](https://speech.microsoft.com). For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md).
+
+In addition to the baseline scores of accuracy, fluency, and completeness, the pronunciation assessment feature in Speech Studio includes more comprehensive scores to provide detailed feedback on various aspects of speech performance and understanding. The enhanced scores are as follows: Prosody score, Vocabulary score, Grammar score, and Topic score. These scores offer valuable insights into speech prosody, vocabulary usage, grammar correctness, and topic understanding.
+
+ :::image type="content" source="media/pronunciation-assessment/speaking-score.png" alt-text="Screenshot of overall pronunciation score and overall content score on Speech Studio.":::
+
At the bottom of the Assessment result, two overall scores are displayed: Pronunciation score and Content score. The Reading tab displays the Pronunciation score. The Speaking tab displays both the Pronunciation score and the Content score.
-This article describes how to use the pronunciation assessment tool through the [Speech Studio](https://speech.microsoft.com). You can get immediate feedback on the accuracy and fluency of your speech without writing any code. For information about how to integrate pronunciation assessment in your speech applications, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md).
+**Pronunciation Score**: This score represents an aggregated assessment of the pronunciation quality and includes four sub-aspects. These scores are available in both the reading and speaking tabs for both scripted and unscripted assessments.
+- **Accuracy score**: Evaluates the correctness of pronunciation.
+- **Fluency score**: Measures the level of smoothness and naturalness in speech.
+- **Completeness score**: Reflects the number of words pronounced correctly.
+- **Prosody score**: Assesses the use of appropriate intonation, rhythm, and stress. Several additional error types related to prosody assessment are introduced, such as Unexpected break, Missing break, and Monotone. These error types provide more detailed information about pronunciation errors compared to the previous engine.
+
+**Content Score**: This score provides an aggregated assessment of the content of the speech and includes three sub-aspects. This score is only available in the speaking tab for an unscripted assessment.
+- **Vocabulary score**: Evaluates the speaker's effective usage of words and their appropriateness within the given context to express ideas accurately, as well as the level of lexical complexity.
+- **Grammar score**: Evaluates the correctness of grammar usage and variety of sentence patterns. It considers lexical accuracy, grammatical accuracy, and diversity of sentence structures, providing a more comprehensive evaluation of language proficiency.
+- **Topic score**: Assesses the level of understanding and engagement with the topic discussed in the speech. It evaluates the speaker's ability to effectively express thoughts and ideas related to the given topic.
+
+These overall scores offer a comprehensive assessment of both pronunciation and content, providing learners with valuable feedback on various aspects of their speech performance and understanding. By using these enhanced features, language learners can gain deeper insights into their advantages and areas for improvement in both pronunciation and content expression.
> [!NOTE]
-> Usage of pronunciation assessment costs the same as standard Speech to text, whether pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for standard Speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
->
-> For information about availability of pronunciation assessment, see [supported languages](language-support.md?tabs=pronunciation-assessment) and [available regions](regions.md#speech-service).
+> Content and prosody assessments are only available in the [en-US](./language-support.md?tabs=pronunciation-assessment) locale.
+
+## Pricing
+
+As a baseline, usage of pronunciation assessment costs the same as speech to text for pay-as-you-go or commitment tier [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services). If you [purchase a commitment tier](../commitment-tier.md) for speech to text, the spend for pronunciation assessment goes towards meeting the commitment.
+
+The pronunciation assessment feature also offers additional scores that are not included in the baseline speech to text price: prosody, grammar, topic, and vocabulary. These scores are available as an add-on charge above the baseline speech to text price. For information about pricing, see [speech to text pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services).
+
+Here's a table of the available pronunciation assessment scores, whether each is available in [scripted](#conduct-a-scripted-assessment) or [unscripted](#conduct-an-unscripted-assessment) assessments, and whether it's included in the baseline speech to text price or the add-on price.
+
+| Score | Scripted or unscripted | Included in baseline speech to text price? |
+| | | |
+| Accuracy | Scripted and unscripted | Yes |
+| Fluency | Scripted and unscripted | Yes |
+| Completeness | Scripted | Yes |
+| Miscue | Scripted and unscripted | Yes |
+| Prosody | Scripted and unscripted | No |
+| Grammar | Unscripted only | No |
+| Topic | Unscripted only | No |
+| Vocabulary | Unscripted only | No |
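As a quick reference, the availability and billing shown in the table can be captured in a small lookup. This is an illustrative Python sketch of the table above, not part of any SDK:

```python
# Availability and billing of each assessment score, per the table above:
# score name -> (scenarios it applies to, included in baseline speech to text price?)
SCORES = {
    "Accuracy": ({"scripted", "unscripted"}, True),
    "Fluency": ({"scripted", "unscripted"}, True),
    "Completeness": ({"scripted"}, True),
    "Miscue": ({"scripted", "unscripted"}, True),
    "Prosody": ({"scripted", "unscripted"}, False),
    "Grammar": ({"unscripted"}, False),
    "Topic": ({"unscripted"}, False),
    "Vocabulary": ({"unscripted"}, False),
}

def add_on_scores(scenario):
    """Scores available in the given scenario that incur the add-on charge."""
    return sorted(name for name, (scenarios, baseline) in SCORES.items()
                  if scenario in scenarios and not baseline)

print(add_on_scores("scripted"))  # ['Prosody']
```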
+ ## Try out pronunciation assessment
You can explore and try out pronunciation assessment even without signing in.
> [!TIP]
> To assess more than 5 seconds of speech with your own script, sign in with an [Azure account](https://azure.microsoft.com/free/cognitive-services) and use your <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesSpeechServices" title="Create a Speech resource" target="_blank">Speech resource</a>.
+
+## Granularity of pronunciation assessment
+
+Pronunciation assessment provides various assessment results in different granularities, from individual phonemes to the entire text input.
+- At the full-text level, pronunciation assessment offers additional Fluency, Completeness, and Prosody scores: Fluency indicates how closely the speech matches a native speaker's use of silent breaks between words; Completeness indicates how many words are pronounced in the speech compared to the reference text input; Prosody indicates how natural the speech is in terms of stress, intonation, speaking speed, and rhythm. An overall score aggregated from Accuracy, Fluency, Completeness, and Prosody is then given to indicate the overall pronunciation quality of the given speech. Pronunciation assessment also offers content scores (Vocabulary, Grammar, and Topic) at the full-text level.
+- At the word-level, pronunciation assessment can automatically detect miscues and provide accuracy score simultaneously, which provides more detailed information on omission, repetition, insertions, and mispronunciation in the given speech.
+- Syllable-level accuracy scores are currently available via the [JSON file](?tabs=json#pronunciation-assessment-results) or [Speech SDK](how-to-pronunciation-assessment.md).
+- At the phoneme level, pronunciation assessment provides accuracy scores of each phoneme, helping learners to better understand the pronunciation details of their speech.
+
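The granularity behavior described above (each setting returns its own level plus every coarser level) can be summarized in a small lookup. This is an illustrative sketch, not part of the Speech SDK:

```python
# Score levels reported for each Granularity setting, per the bullets above.
GRANULARITY_LEVELS = {
    "Phoneme": ["FullText", "Word", "Syllable", "Phoneme"],
    "Syllable": ["FullText", "Word", "Syllable"],
    "Word": ["FullText", "Word"],
    "FullText": ["FullText"],
}

def reported_levels(granularity):
    """Return which score levels a given Granularity setting yields."""
    return GRANULARITY_LEVELS[granularity]

print(reported_levels("Word"))  # ['FullText', 'Word']
```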
+## Reading and speaking scenarios
+
+For pronunciation assessment, there are two scenarios: Reading and Speaking.
+- Reading: This scenario is designed for [scripted assessment](#conduct-a-scripted-assessment). It requires the learner to read a given text. The reference text is provided in advance.
+- Speaking: This scenario is designed for [unscripted assessment](#conduct-an-unscripted-assessment). It requires the learner to speak on a given topic. The reference text is not provided in advance.
+
+### Conduct a scripted assessment
Follow these steps to assess your pronunciation of the reference text:

1. Go to **Pronunciation Assessment** in the [Speech Studio](https://aka.ms/speechstudio/pronunciationassessment).

   :::image type="content" source="media/pronunciation-assessment/pa.png" alt-text="Screenshot of how to go to Pronunciation Assessment on Speech Studio.":::
-1. Choose a supported [language](language-support.md?tabs=pronunciation-assessment) that you want to evaluate the pronunciation.
+1. On the Reading tab, choose a supported [language](language-support.md?tabs=pronunciation-assessment) for which you want to evaluate pronunciation.
- :::image type="content" source="media/pronunciation-assessment/pa-language.png" alt-text="Screenshot of choosing a supported language that you want to evaluate the pronunciation.":::
+ :::image type="content" source="media/pronunciation-assessment/select-reading-language.png" alt-text="Screenshot of choosing a supported language on reading tab that you want to evaluate the pronunciation.":::
-1. Choose from the provisioned text samples, or under the **Enter your own script** label, enter your own reference text.
+1. You can use provisioned text samples or enter your own script.
When reading the text, stay close to the microphone to make sure the recorded voice isn't too low.
- :::image type="content" source="media/pronunciation-assessment/pa-record.png" alt-text="Screenshot of where to record audio with a microphone.":::
+ :::image type="content" source="media/pronunciation-assessment/scripted-assessment.png" alt-text="Screenshot of where to record audio with a microphone on reading tab.":::
Otherwise, you can upload recorded audio for pronunciation assessment. Once successfully uploaded, the audio is automatically evaluated by the system, as shown in the following screenshot.
- :::image type="content" source="media/pronunciation-assessment/pa-upload.png" alt-text="Screenshot of uploading recorded audio to be assessed.":::
+ :::image type="content" source="media/pronunciation-assessment/upload-audio.png" alt-text="Screenshot of uploading recorded audio to be assessed.":::
+
+### Conduct an unscripted assessment
+
+If you want to conduct an unscripted assessment, select the Speaking tab, which lets you assess speech without providing reference text in advance. Here's how to proceed:
+
+1. Go to **Pronunciation Assessment** in the [Speech Studio](https://aka.ms/speechstudio/pronunciationassessment).
+
+1. On the Speaking tab, choose a supported [language](language-support.md?tabs=pronunciation-assessment) for which you want to evaluate pronunciation.
+
+ :::image type="content" source="media/pronunciation-assessment/select-speaking-language.png" alt-text="Screenshot of choosing a supported language on speaking tab that you want to evaluate the pronunciation.":::
+
+1. Next, select one of the provided sample topics or enter your own. This choice lets you assess your ability to speak on a given subject without a predefined script.
+
+ :::image type="content" source="media/pronunciation-assessment/input-topic.png" alt-text="Screenshot of inputting a topic on speaking tab to assess your ability to speak on a given subject without a predefined script.":::
+
   When recording your speech for pronunciation assessment, make sure that your recording time falls within the recommended range of 15 seconds (more than 50 words) to 10 minutes. This time range is optimal for evaluating the content of your speech accurately. To receive a topic score, your spoken audio must contain at least three sentences.
+
+ You can also upload recorded audio for pronunciation assessment. Once successfully uploaded, the audio will be automatically evaluated by the system.
## Pronunciation assessment results
-Once you've recorded the reference text or uploaded the recorded audio, the **Assessment result** will be output. The result includes your spoken audio and the feedback on the accuracy and fluency of spoken audio, by comparing a machine generated transcript of the input audio with the reference text. You can listen to your spoken audio, and download it if necessary.
+Once you've recorded your speech or uploaded the recorded audio, the **Assessment result** is displayed. The result includes your spoken audio and feedback on your speech assessment. You can listen to your spoken audio and download it if necessary.
You can also check the pronunciation assessment result in JSON. The word-level, syllable-level, and phoneme-level accuracy scores are included in the JSON file. ### [Display](#tab/display)
-The complete transcription is shown in the **Display** window. If a word is omitted, inserted, or mispronounced compared to the reference text, the word will be highlighted according to the error type. The error types in the pronunciation assessment are represented using different colors. Yellow indicates mispronunciations, gray indicates omissions, and red indicates insertions. This visual distinction makes it easier to identify and analyze specific errors. It provides a clear overview of the error types and frequencies in the spoken audio, helping you focus on areas that need improvement. While hovering over each word, you can see accuracy scores for the whole word or specific phonemes.
+
+The complete transcription is shown in the **Display** window. Each word is highlighted according to its error type, with the error types represented in different colors. This visual distinction makes it easier to identify and analyze specific errors, and it provides a clear overview of the error types and frequencies in the spoken audio so you can focus on areas that need improvement. You can toggle each error type on or off to focus on specific types of errors or exclude certain types from the display. While hovering over a word, you can see accuracy scores for the whole word or for specific phonemes.
+At the bottom of the **Assessment result**, scoring results are displayed. For scripted pronunciation assessment, only the pronunciation score (including accuracy, fluency, completeness, and prosody scores) is provided. For unscripted pronunciation assessment, both the pronunciation score (including accuracy, fluency, and prosody scores) and the content score (including vocabulary, grammar, and topic scores) are displayed.
### [JSON](#tab/json)
-The complete transcription is shown in the `text` attribute. You can see accuracy scores for the whole word, syllables, and specific phonemes. You can get the same results using the Speech SDK. For information, see [How to use Pronunciation Assessment](how-to-pronunciation-assessment.md).
+The complete transcription is shown in the `text` attribute. You can see accuracy scores for the whole word, syllables, and specific phonemes. You can get the same results using the Speech SDK. For information, see [How to use pronunciation assessment](how-to-pronunciation-assessment.md).
```json {
The complete transcription is shown in the `text` attribute. You can see accurac
-### Assessment scores in streaming mode
+## Assessment scores in streaming mode
Pronunciation Assessment supports uninterrupted streaming mode. The Speech Studio demo allows for up to 60 minutes of recording in streaming mode for evaluation. As long as you don't press the stop recording button, the evaluation process doesn't finish and you can pause and resume evaluation conveniently.
-Pronunciation Assessment evaluates three aspects of pronunciation: accuracy, fluency, and completeness. At the bottom of **Assessment result**, you can see **Pronunciation score** as aggregated overall score which includes 3 sub aspects: **Accuracy score**, **Fluency score**, and **Completeness score**. In streaming mode, since the **Accuracy score**, **Fluency score and Completeness score** will vary over time throughout the recording process, we demonstrate an approach on Speech Studio to display approximate overall score incrementally before the end of the evaluation, which weighted only with Accuracy score and Fluency score. The **Completeness score** is only calculated at the end of the evaluation after you press the stop button, so the final overall score is aggregated from **Accuracy score**, **Fluency score**, and **Completeness score** with weight.
+Pronunciation Assessment evaluates several aspects of pronunciation. At the bottom of **Assessment result**, you can see **Pronunciation score** as an aggregated overall score that includes four sub-aspects: **Accuracy score**, **Fluency score**, **Completeness score**, and **Prosody score**. In streaming mode, because the **Accuracy score**, **Fluency score**, and **Prosody score** vary over time throughout the recording process, Speech Studio displays an approximate overall score incrementally before the end of the evaluation, weighted only by the Accuracy, Fluency, and Prosody scores. The **Completeness score** is calculated only at the end of the evaluation, after you press the stop button, so the final overall pronunciation score is aggregated from the **Accuracy score**, **Fluency score**, **Completeness score**, and **Prosody score** with weights.
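The two-stage aggregation described above can be sketched as follows (the service's actual weighting isn't specified here, so equal weights are assumed purely for illustration):

```python
# Sketch of the two-stage overall score described above. During streaming, only
# accuracy, fluency, and prosody are available; completeness is added once the
# evaluation ends. Equal weighting is an assumption for illustration only.
def interim_score(accuracy: float, fluency: float, prosody: float) -> float:
    """Approximate overall score shown while recording is still in progress."""
    return (accuracy + fluency + prosody) / 3

def final_score(accuracy: float, fluency: float,
                prosody: float, completeness: float) -> float:
    """Final pronunciation score once the stop button ends the evaluation."""
    return (accuracy + fluency + prosody + completeness) / 4

print(interim_score(80, 90, 70))     # → 80.0
print(final_score(80, 90, 70, 100))  # → 85.0
```

This illustrates why the displayed overall score can shift when you stop recording: the completeness term only enters the aggregate at that point.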
Refer to the demo examples below for the whole process of evaluating pronunciation in streaming mode.
While recording a long paragraph, you can pause recording at any time. You can
**Finish recording**
-After you press the stop button, you can see **Pronunciation score**, **Accuracy score**, **Fluency score**, and **Completeness score** at the bottom.
+After you press the stop button, you can see **Pronunciation score**, **Accuracy score**, **Fluency score**, **Completeness score**, and **Prosody score** at the bottom.
:::image type="content" source="media/pronunciation-assessment/pa-after-recording-display-score.png" alt-text="Screenshot of overall assessment scores after recording." lightbox="media/pronunciation-assessment/pa-after-recording-display-score.png":::
aks Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/availability-zones.md
The following limitations apply when you create an AKS cluster using availabilit
### Azure disk availability zone support
+ - Volumes that use Azure managed LRS disks aren't zone-redundant resources, and attaching them across zones isn't supported. You need to co-locate volumes in the same zone as the specified node hosting the target pod.
 - Volumes that use Azure managed ZRS disks (supported by Azure Disk CSI driver v1.5.0 and later) are zone-redundant resources. You can schedule those volumes on all zoned and non-zoned agent nodes. Kubernetes has been aware of Azure availability zones since version 1.12. You can deploy a PersistentVolumeClaim object referencing an Azure Managed Disk in a multi-zone AKS cluster and [Kubernetes takes care of scheduling](https://kubernetes.io/docs/setup/best-practices/multiple-zones/#storage-access-for-zones) any pod that claims this PVC in the correct availability zone.
aks Free Standard Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/free-standard-pricing-tiers.md
az aks update --resource-group myResourceGroup --name myAKSCluster --tier free
az aks update --resource-group myResourceGroup --name myAKSCluster --tier standard ```
-This process takes several minutes to complete. When finished, the following example JSON snippet shows updating the existing cluster to the Standard tier in the Base SKU.
+This process takes several minutes to complete, and you shouldn't experience any downtime while your cluster tier is being updated. When finished, the following example JSON snippet shows the existing cluster updated to the Standard tier in the Base SKU.
```output },
aks Image Cleaner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md
Once `eraser-controller-manager` is deployed,
- inside each worker pod, there are 3 containers:
  - collector: collects unused images
  - trivy-scanner: leverages [trivy](https://github.com/aquasecurity/trivy) to scan image vulnerabilities
- - remover: remove used images with vulnerabilities
+ - remover: remove unused images with vulnerabilities
  - after cleanup, the worker pod is deleted, and its next scheduled run occurs after the `--image-cleaner-interval-hours` interval you have set

### Manual mode

You can also manually trigger the cleanup by defining an `ImageList` CRD object. Then `eraser-controller-manager` creates a worker pod on each node to complete the manual removal.

---

> [!NOTE]
> After disabling Image Cleaner, the old configuration still exists. This means if you enable the feature again without explicitly passing configuration, the existing value is used instead of the default.
aks Manage Ssh Node Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-ssh-node-access.md
Title: Manage SSH access on Azure Kubernetes Service cluster nodes
description: Learn how to configure SSH on Azure Kubernetes Service (AKS) cluster nodes. Previously updated : 10/16/2023 Last updated : 11/01/2023 # Manage SSH for secure access to Azure Kubernetes Service (AKS) nodes
The following are examples of this command:
``` > [!IMPORTANT]
-> After you update the SSH key, AKS doesn't automatically reimage your node pool. At anytime you can choose to perform a [reimage operation][node-image-upgrade]. Only after reimage is complete does the update SSH key operation take effect.
+> After you update the SSH key, AKS doesn't automatically update your node pool. At any time, you can choose to perform a [node pool update operation][node-image-upgrade]. The updated SSH key takes effect only after a node image update is complete.
## Next steps
api-management Developer Portal Extend Custom Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-extend-custom-functionality.md
For more advanced use cases, you can create and upload a custom widget to the de
npm run deploy ```
- If prompted, sign in to your Azure account.
+ If prompted, sign in to your Azure account.
+
+ > [!NOTE]
+ > When prompted to sign in, you must use a member account from the Microsoft Entra ID tenant that's associated with the Azure subscription where your API Management service resides. The account must not be a guest or a federated account and must have the appropriate permission to access the portal's administrative interface.
+ The custom widget is now deployed to your developer portal. Using the portal's administrative interface, you can add it on pages in the developer portal and set values for any custom properties configured in the widget.
api-management V2 Service Tiers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/v2-service-tiers-overview.md
Currently, the following API Management capabilities are unavailable in the v2 t
* Client certificate renegotiation * Requests to the gateway over localhost
+ > [!NOTE]
+ > Currently the policy document size limit in the v2 tiers is 16 KiB.
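A minimal sketch of checking a policy document against that limit (the helper name is hypothetical, and measuring size in UTF-8 bytes is an assumption):

```python
# Check an API Management policy document against the 16 KiB limit noted above.
# The helper name is illustrative; size is measured in UTF-8 bytes, which is an
# assumption about how the service counts the limit.
POLICY_SIZE_LIMIT = 16 * 1024  # 16 KiB

def fits_v2_policy_limit(policy_xml: str) -> bool:
    return len(policy_xml.encode("utf-8")) <= POLICY_SIZE_LIMIT

policy = "<policies><inbound><base /></inbound></policies>"
print(fits_v2_policy_limit(policy))  # → True
```

A check like this could run in a CI pipeline before deploying policies to a v2-tier instance.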
+ ## Deployment Deploy an instance of the Basic v2 or Standard v2 tier using the Azure portal, Azure REST API, or Azure Resource Manager or Bicep template.
A: Yes, a Premium v2 preview is planned and will be announced separately.
## Related content * Learn more about the API Management [tiers](api-management-features.md).
azure-arc Deliver Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md
Title: Deliver Extended Security Updates for Windows Server 2012 description: Learn how to deliver Extended Security Updates for Windows Server 2012. Previously updated : 10/09/2023 Last updated : 11/01/2023
There are some scenarios in which you may be eligible to receive Extended Securi
To qualify for these scenarios, you must have:
-1. Provisioned and activated a WS2012 Arc ESU License intended to be linked to regular Azure Arc-enabled servers running in production environments (i.e., normally billed ESU scenarios)
+1. Provisioned and activated a WS2012 Arc ESU License intended to be linked to regular Azure Arc-enabled servers running in production environments (i.e., normally billed ESU scenarios). This license should be provisioned only for billable cores, not cores that are eligible for free Extended Security Updates.
1. Onboarded your Windows Server 2012 and Windows Server 2012 R2 machines to Azure Arc-enabled servers for the purpose of Dev/Test with Visual Studio subscriptions or Disaster Recovery
To enroll Azure Arc-enabled servers eligible for ESUs at no additional cost, fol
In the case that you're using the ESU License for multiple exception scenarios, mark the license with the tag: Name: "ESU Usage"; Value: "WS2012 MULTIPURPOSE"
-1. Link the tagged license to your tagged Azure Arc-enabled Windows Server 2012 and Windows Server 2012 R2 machines.
+1. Link the tagged license to your tagged Azure Arc-enabled Windows Server 2012 and Windows Server 2012 R2 machines. **Do not license cores for these servers**.
- This linking will not trigger a compliance violation or enforcement block, allowing you to extend the application of a license beyond its provisioned cores.
+ This linking will not trigger a compliance violation or enforcement block, allowing you to extend the application of a license beyond its provisioned cores. The expectation is that the license only includes cores for production and billed servers. Any additional cores will be charged and result in over-billing.
> [!NOTE] > The usage of these exception scenarios will be available for auditing purposes and abuse of these exceptions may result in recusal of WS2012 ESU privileges.
azure-arc Onboard Update Management Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-update-management-machines.md
Title: Connect machines from Azure Automation Update Management description: In this article, you learn how to connect hybrid machines to Azure Arc managed by Automation Update Management. Previously updated : 09/14/2021 Last updated : 11/01/2023
If you don't have an Azure subscription, create a [free account](https://azure.m
When the onboarding process is launched, an Active Directory [service principal](../../active-directory/fundamentals/service-accounts-principal.md) is created in the tenant.
-To install and configure the Connected Machine agent on the target machine, a master runbook named **Add-AzureConnectedMachines** runs in the Azure sandbox. Based on the operating system detected on the machine, the master runbook calls a child runbook named **Add-AzureConnectedMachineWindows** or **Add-AzureConnectedMachineLinux** that runs under the system [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md) role directly on the machine. Runbook job output is written to the job history, and you can view their [status summary](../../automation/automation-runbook-execution.md#job-statuses) or drill into details of a specific runbook job in the [Azure portal](../../automation/manage-runbooks.md#view-statuses-in-the-azure-portal) or using [Azure PowerShell](../../automation/manage-runbooks.md#retrieve-job-statuses-using-powershell). Execution of runbooks in Azure Automation writes details in an activity log for the Automation account. For details of using the log, see [Retrieve details from Activity log](../../automation/manage-runbooks.md#retrieve-details-from-activity-log).
+To install and configure the Connected Machine agent on the target machine, a master runbook named **Add-UMMachinesToArc** runs in the Azure sandbox. Based on the operating system detected on the machine, the master runbook calls a child runbook named **Add-UMMachinesToArcWindowsChild** or **Add-UMMachinesToArcLinuxChild** that runs under the system [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md) role directly on the machine. Runbook job output is written to the job history, and you can view their [status summary](../../automation/automation-runbook-execution.md#job-statuses) or drill into details of a specific runbook job in the [Azure portal](../../automation/manage-runbooks.md#view-statuses-in-the-azure-portal) or using [Azure PowerShell](../../automation/manage-runbooks.md#retrieve-job-statuses-using-powershell). Execution of runbooks in Azure Automation writes details in an activity log for the Automation account. For details of using the log, see [Retrieve details from Activity log](../../automation/manage-runbooks.md#retrieve-details-from-activity-log).
The final step establishes the connection to Azure Arc using the `azcmagent` command using the service principal to register the machine as a resource in Azure.
azure-arc Agent Overview Scvmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/agent-overview-scvmm.md
Title: Overview of Azure Connected Machine agent to manage Windows and Linux machines description: This article provides a detailed overview of the Azure Connected Machine agent, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 10/20/2023 Last updated : 10/31/2023
# Overview of Azure Connected Machine agent to manage Windows and Linux machines
-The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers.
+When you [enable guest management](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale) on SCVMM VMs, the Azure Connected Machine agent is installed on the VMs. The agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides an architectural overview of the Azure Connected Machine agent.
## Agent components
azure-arc Enable Scvmm Inventory Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-scvmm-inventory-resources.md
Previously updated : 01/27/2023 Last updated : 10/31/2023 keywords: "VMM, Arc, Azure"
To enable the existing virtual machines in Azure, follow these steps:
1. Select **Enable** to start the deployment of the VM represented in Azure.
+>[!NOTE]
+>Moving SCVMM resources between Resource Groups and Subscriptions is currently not supported.
+ ## Next steps [Connect virtual machines to Arc](quickstart-connect-system-center-virtual-machine-manager-to-arc.md)
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Title: Overview of the Azure Connected System Center Virtual Machine Manager (preview) description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager (preview). Previously updated : 10/18/2023 Last updated : 10/30/2023 ms.
The following scenarios are supported in Azure Arc-enabled SCVMM (preview):
### Supported VMM versions
-Azure Arc-enabled SCVMM works with VMM 2016, 2019 and 2022 versions and supports SCVMM management servers with a maximum of 3500 VMs.
+Azure Arc-enabled SCVMM works with VMM 2019 and 2022 versions and supports SCVMM management servers with a maximum of 15000 VMs.
### Supported regions
Azure Arc-enabled SCVMM doesn't store/process customer data outside the region t
## Next steps
-[Create an Azure Arc VM](create-virtual-machine.md)
+[Create an Azure Arc VM](create-virtual-machine.md)
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
ms. Previously updated : 02/17/2023 Last updated : 10/31/2023
This QuickStart shows you how to connect your SCVMM management server to Azure A
>[!Note] >- If VMM server is running on Windows Server 2016 machine, ensure that [Open SSH package](https://github.com/PowerShell/Win32-OpenSSH/releases) is installed. >- If you deploy an older version of appliance (version lesser than 0.2.25), Arc operation fails with the error *Appliance cluster is not deployed with AAD authentication*. To fix this issue, download the latest version of the onboarding script and deploy the resource bridge again.
+>- Azure Arc Resource Bridge deployment using private link is currently not supported.
| **Requirement** | **Details** | | | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |
-| **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud with minimum free capacity of 16 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> For dynamic IP allocation to appliance VM, DHCP server is required. For static IP allocation, VMM static IP pool is required. |
+| **SCVMM** | You need an SCVMM management server running version 2016 or later.<br/><br/> A private cloud with minimum free capacity of 16 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only static IP allocation is supported, and a VMM static IP pool is required. Follow [these steps](https://learn.microsoft.com/system-center/vmm/network-pool?view=sc-vmm-2022) to create a VMM static IP pool and ensure that it has at least four IP addresses. Dynamic IP allocation using DHCP isn't supported. |
| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of local administrator account in the SCVMM server. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. |
-| **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend executing the helper script directly in the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you may experience performance issues. |
+| **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend executing the helper script directly in the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. |
## Prepare SCVMM management server
The script execution will take up to half an hour and you'll be prompted for var
| **Private cloud selection** | Select the name of the private cloud where the Arc resource bridge VM should be deployed. | | **Virtual Network selection** | Select the name of the virtual network to which *Arc resource bridge VM* needs to be connected. This network should allow the appliance to talk to the VMM management server and the Azure endpoints (or internet). | | **Static IP pool** | Select the VMM static IP pool that will be used to allot IP address. |
-| **Control Pane IP** | Provide a reserved IP address (a reserved IP address in your DHCP range or a static IP outside of DHCP range but still available on the network). The key thing is this IP address shouldn't be assigned to any other machine on the network. |
+| **Control Plane IP** | Provide a reserved IP address in the same subnet as the static IP pool used for Resource Bridge deployment. This IP address should be outside of the range of static IP pool used for Resource Bridge deployment and shouldn't be assigned to any other machine on the network. |
| **Appliance proxy settings** | Type 'Y' if there's a proxy in your appliance network, else type 'N'.| | **http** | Address of the HTTP proxy server. | | **https** | Address of the HTTPS proxy server.|
azure-arc Azure Arc Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/azure-arc-agent.md
# Azure Arc agent
-When you [enable guest management](enable-guest-management-at-scale.md) on VMware VMs, Azure Arc agent is installed on the VMs. The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides an architectural overview of Azure connected machine agent.
+When you [enable guest management](enable-guest-management-at-scale.md) on VMware VMs, the Azure Connected Machine agent is installed on the VMs. This is the same agent that Arc-enabled servers use. The agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides an architectural overview of the Azure Connected Machine agent.
## Agent components
azure-arc Quick Start Connect Vcenter To Arc Using Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md
Title: Connect VMware vCenter Server to Azure Arc by using the helper script
description: In this quickstart, you learn how to use the helper script to connect your VMware vCenter Server instance to Azure Arc. Previously updated : 09/05/2022 Last updated : 10/31/2023
First, the script deploys a virtual appliance called [Azure Arc resource bridge
### vCenter Server -- vCenter Server version 6.7, 7 or 8.
+- vCenter Server version 7 or 8.
- A virtual network that can provide internet access, directly or through a proxy. It must also be possible for VMs on this network to communicate with the vCenter server on TCP port (usually 443).
azure-arc Support Matrix For Arc Enabled Vmware Vsphere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md
Title: Plan for deployment description: Learn about the support matrix for Arc-enabled VMware vSphere including vCenter Server versions supported, network requirements, and more. Previously updated : 08/18/2023 Last updated : 10/31/2023 # Customer intent: As a VI admin, I want to understand the support matrix for Arc-enabled VMware vSphere.
The following requirements must be met in order to use Azure Arc-enabled VMware
### Supported vCenter Server versions
-Azure Arc-enabled VMware vSphere (preview) works with vCenter Server versions 6.7, 7 and 8.
+Azure Arc-enabled VMware vSphere (preview) works with vCenter Server versions 7 and 8.
> [!NOTE] > Azure Arc-enabled VMware vSphere (preview) currently supports vCenters with a maximum of 9500 VMs. If your vCenter has more than 9500 VMs, it's not recommended to use Arc-enabled VMware vSphere with it at this point.
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| [Scaling](cache-how-to-scale.md) |✔|✔|✔|Preview|Preview| | [OSS clustering](cache-how-to-premium-clustering.md) |-|-|✔|✔|✔| | [Data persistence](cache-how-to-premium-persistence.md) |-|-|✔|Preview|Preview|
-| [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|✔|✔|✔|
+| [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|Available|Available|Available|
| [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔ (Passive) |✔ (Active) |✔ (Active) | | [Connection audit logs](cache-monitor-diagnostic-settings.md) |-|-|✔ (Poll-based)|✔ (Event-based)|✔ (Event-based)| | [Redis Modules](cache-redis-modules.md) |-|-|-|✔|Preview|
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
namespace MyFunctionApp
} ```
-The [ILogger&lt;T&gt;] in this example was also obtained through dependency injection. It is registered automatically. To learn more about configuration options for logging, see [Logging](#logging).
+The [`ILogger<T>`][ILogger&lt;T&gt;] in this example was also obtained through dependency injection. It is registered automatically. To learn more about configuration options for logging, see [Logging](#logging).
> [!TIP] > The example used a literal string for the name of the client in both `Program.cs` and the function. Consider instead using a shared constant string defined on the function class. For example, you could add `public const string CopyStorageClientName = nameof(_copyContainerClient);` and then reference `BlobCopier.CopyStorageClientName` in both locations. You could similarly define the configuration section name with the function rather than in `Program.cs`.
The following example performs clean-up actions if a cancellation request has be
## Performance optimizations
-This section outlines options you can enable to improve performance around [cold start](./event-driven-scaling.md#cold-start).
+This section outlines options you can enable that improve performance around [cold start](./event-driven-scaling.md#cold-start).
In general, your app should use the latest versions of its core dependencies. At a minimum, you should update your project as follows:
Each trigger and binding extension also has its own minimum version requirement,
#### ASP.NET Core integration
-This section shows how to work with the underlying HTTP request and response objects using types from ASP.NET Core including [HttpRequest], [HttpResponse], and [IActionResult]. Use of this feature for local testing requires [Core Tools version 4.0.5240 or later](./functions-run-local.md) and that you set `AzureWebJobsFeatureFlags` to "EnableHttpProxying" in `local.settings.json` if you are using Core Tools version 4.0.5274 and earlier. This model is not available to [apps targeting .NET Framework][supported-versions], which should instead leverage the [built-in model](#built-in-http-model).
+This section shows how to work with the underlying HTTP request and response objects using types from ASP.NET Core including [HttpRequest], [HttpResponse], and [IActionResult]. This model is not available to [apps targeting .NET Framework][supported-versions], which should instead leverage the [built-in model](#built-in-http-model).
> [!NOTE] > Not all features of ASP.NET Core are exposed by this model. Specifically, the ASP.NET Core middleware pipeline and routing capabilities are not available.
public class MyFunction {
The logger can also be obtained from a [FunctionContext] object passed to your function. Call the [GetLogger&lt;T&gt;] or [GetLogger] method, passing a string value that is the name for the category in which the logs are written. The category is usually the name of the specific function from which the logs are written. To learn more about categories, see the [monitoring article](functions-monitoring.md#log-levels-and-categories).
-Use the methods of [ILogger&lt;T&gt;] and [`ILogger`][ILogger] to write various log levels, such as `LogWarning` or `LogError`. To learn more about log levels, see the [monitoring article](functions-monitoring.md#log-levels-and-categories). You can customize the log levels for components added to your code by registering filters as part of the `HostBuilder` configuration:
+Use the methods of [`ILogger<T>`][ILogger&lt;T&gt;] and [`ILogger`][ILogger] to write various log levels, such as `LogWarning` or `LogError`. To learn more about log levels, see the [monitoring article](functions-monitoring.md#log-levels-and-categories). You can customize the log levels for components added to your code by registering filters as part of the `HostBuilder` configuration:
```csharp using Microsoft.Azure.Functions.Worker;
Azure Functions currently can be used with the following preview versions of .NE
| Operating system | .NET preview version | | - | - |
-| Windows | .NET 8 RC1 |
+| Windows | .NET 8 RC2 |
| Linux | .NET 8 RC2 | ### Using a preview .NET SDK
azure-functions Functions Bindings Http Webhook Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-http-webhook-trigger.md
The code in this article defaults to .NET Core syntax, used in Functions version
# [Isolated worker model](#tab/isolated-process)
-The following example shows an HTTP trigger that returns a "hello world" response as an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object:
-- The following example shows an HTTP trigger that returns a "hello, world" response as an [IActionResult], using [ASP.NET Core integration in .NET Isolated]: ```csharp
public IActionResult Run(
[IActionResult]: /dotnet/api/microsoft.aspnetcore.mvc.iactionresult
+The following example shows an HTTP trigger that returns a "hello world" response as an [HttpResponseData](/dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata) object:
++ # [In-process model](#tab/in-process) The following example shows a [C# function](functions-dotnet-class-library.md) that looks for a `name` parameter either in the query string or the body of the HTTP request. Notice that the return value is used for the output binding, but a return value attribute isn't required.
For Python v2 functions defined using a decorator, the following properties for
| Property | Description | |-|--|
-| `route` | Route for the http endpoint, if None, it will be set to function name if present or user defined python function name. |
-| `trigger_arg_name` | Argument name for HttpRequest, defaults to 'req'. |
-| `binding_arg_name` | Argument name for HttpResponse, defaults to '$return'. |
+| `route` | Route for the HTTP endpoint. If None, it defaults to the function name if present, or to the user-defined Python function name. |
+| `trigger_arg_name` | Argument name for HttpRequest. The default value is 'req'. |
+| `binding_arg_name` | Argument name for HttpResponse. The default value is '$return'. |
| `methods` | A tuple of the HTTP methods to which the function responds. | | `auth_level` | Determines what keys, if any, need to be present on the request in order to invoke the function. |
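As an illustrative sketch only (the function name and `hello` route are hypothetical, and this assumes the `azure-functions` library's v2 programming model), the properties in the table above might be used like this:

```python
import azure.functions as func

app = func.FunctionApp()

# route, trigger_arg_name, methods, and auth_level map to the properties in the table above.
@app.route(route="hello", trigger_arg_name="req", methods=("GET",),
           auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")
```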
The trigger input type is declared as one of the following types:
| Type | Description | |-|-|
-| [HttpRequestData] | A projection of the full request object. |
| [HttpRequest] | _Use of this type requires that the app is configured with [ASP.NET Core integration in .NET Isolated]._<br/>This gives you full access to the request object and overall HttpContext. |
+| [HttpRequestData] | A projection of the request object. |
| A custom type | When the body of the request is JSON, the runtime will try to parse it to set the object properties. |
-When using `HttpRequestData` or `HttpRequest`, custom types can also be bound to additional parameters using `Microsoft.Azure.Functions.Worker.Http.FromBodyAttribute`. Use of this attribute requires [`Microsoft.Azure.Functions.Worker.Extensions.Http` version 3.1.0 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http). Note that this is a different type than the similar attribute in `Microsoft.AspNetCore.Mvc`, and when using ASP.NET Core integration, you will need a fully qualified reference or `using` statement. The following example shows how to use the attribute to get just the body contents while still having access to the full `HttpRequest`, using the ASP.NET Core integration:
+When the trigger parameter is an `HttpRequestData` or an `HttpRequest`, custom types can also be bound to additional parameters using `Microsoft.Azure.Functions.Worker.Http.FromBodyAttribute`. Use of this attribute requires [`Microsoft.Azure.Functions.Worker.Extensions.Http` version 3.1.0 or later](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Http). Note that this is a different type than the similar attribute in `Microsoft.AspNetCore.Mvc`; when using ASP.NET Core integration, you will need a fully qualified reference or `using` statement. The following example shows how to use the attribute to get just the body contents while still having access to the full `HttpRequest`, using the ASP.NET Core integration:
```csharp using Microsoft.AspNetCore.Http;
The `webHookType` binding property indicates the type of webhook supported by th
| Type value | Description | | | |
-| **genericJson**| A general-purpose webhook endpoint without logic for a specific provider. This setting restricts requests to only those using HTTP POST and with the `application/json` content type.|
-| **[github](#github-webhooks)** | The function responds to [GitHub webhooks](https://developer.github.com/webhooks/). Don't use the `authLevel` property with GitHub webhooks. |
-| **[slack](#slack-webhooks)** | The function responds to [Slack webhooks](https://api.slack.com/outgoing-webhooks). Don't use the `authLevel` property with Slack webhooks. |
+| **`genericJson`**| A general-purpose webhook endpoint without logic for a specific provider. This setting restricts requests to only those using HTTP POST and with the `application/json` content type.|
+| **[`github`](#github-webhooks)** | The function responds to [GitHub webhooks](https://developer.github.com/webhooks/). Don't use the `authLevel` property with GitHub webhooks. |
+| **[`slack`](#slack-webhooks)** | The function responds to [Slack webhooks](https://api.slack.com/outgoing-webhooks). Don't use the `authLevel` property with Slack webhooks. |
When setting the `webHookType` property, don't also set the `methods` property on the binding.
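As a hedged sketch for languages that configure bindings through `function.json` (the binding names `req` and `res` are conventional, not required), a GitHub webhook trigger might be declared as:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "webHookType": "github"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

Note that, per the guidance above, no `methods` property is set alongside `webHookType`.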
azure-functions Functions Bindings Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-service-bus.md
Functions version 1.x doesn't support isolated worker process. To use the isolat
[Microsoft.ServiceBus.Messaging]: /dotnet/api/microsoft.servicebus.messaging - [upgrade your application to Functions 4.x]: ./migrate-version-1-version-4.md :::zone-end
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
These host version migration guides will also help you migrate to the isolated w
## Identify function apps to upgrade
-Use the following PowerShell script to generate a list of function apps in your subscription that currently use the in-process model:
+Use the following Azure PowerShell script to generate a list of function apps in your subscription that currently use the in-process model.
-```powershell
-$Subscription = '<YOUR SUBSCRIPTION ID>'
-
-Set-AzContext -Subscription $Subscription | Out-Null
+The script uses the subscription that Azure PowerShell is currently configured to use. You can change the subscription by first running `Set-AzContext -Subscription '<YOUR SUBSCRIPTION ID>'` and replacing `<YOUR SUBSCRIPTION ID>` with the ID of the subscription you would like to evaluate.
+```azurepowershell-interactive
$FunctionApps = Get-AzFunctionApp $AppInfo = @{}
On version 4.x of the Functions runtime, your .NET function app targets .NET 6 w
If you haven't already, identify the list of apps that need to be migrated in your current Azure Subscription by using the [Azure PowerShell](#identify-function-apps-to-upgrade).
-Before you upgrade an app to the isolated worker model, you should thoroughly review the contents of this guide and familiarize yourself with the features of the [isolated worker model][isolated-guide].
+Before you upgrade an app to the isolated worker model, you should thoroughly review the contents of this guide and familiarize yourself with the features of the [isolated worker model][isolated-guide] and the [differences between the two models](./dotnet-isolated-in-process-differences.md).
To upgrade the application, you will:
The section outlines the various changes that you need to make to your local pro
> [!TIP] > If you are moving to an LTS or STS version of .NET, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
+First, you'll convert the project file and update your dependencies. As you do, you will see build errors for the project. In subsequent steps, you'll make the corresponding changes to remove these errors.
+ ### .csproj file
-The following example is a .csproj project file that uses .NET 6 on version 4.x:
+The following example is a `.csproj` project file that uses .NET 6 on version 4.x:
```xml <Project Sdk="Microsoft.NET.Sdk">
Use one of the following procedures to update this XML file to run in the isolat
[!INCLUDE [functions-dotnet-migrate-project-v4-isolated-2](../../includes/functions-dotnet-migrate-project-v4-isolated-2.md)]
-# [.NET Framework 4.8](#tab/v4)
+# [.NET 8](#tab/net8)
-# [.NET 8 (Preview)](#tab/net8)
+# [.NET Framework 4.8](#tab/v4)
-### Package and namespace changes
+### Package references
- When migrating to the isolated worker model, you need to change the packages your application references. Then you need to update the namespace of using statements and some types you reference. You can see the effect of these namespace changes on `using` statements in the [HTTP trigger template examples](#http-trigger-template) section later in this article.
+ When migrating to the isolated worker model, you need to change the packages your application references.
[!INCLUDE [functions-dotnet-migrate-packages-v4-isolated](../../includes/functions-dotnet-migrate-packages-v4-isolated.md)] ### Program.cs file
-When migrating to run in an isolated worker process, you must add the following program.cs file to your project:
+When migrating to run in an isolated worker process, you must add a `Program.cs` file to your project with the following contents:
-# [.NET 6](#tab/net6-isolated)
+# [.NET 6 / .NET 7 / .NET 8](#tab/net6-isolated+net7+net8)
+```csharp
+using Microsoft.Extensions.Hosting;
-# [.NET 7](#tab/net7)
+var host = new HostBuilder()
+ .ConfigureFunctionsWebApplication()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+host.Run();
+```
# [.NET Framework 4.8](#tab/v4) -
-# [.NET 8 (Preview)](#tab/net8)
+```csharp
+using Microsoft.Extensions.Hosting;
+using Microsoft.Azure.Functions.Worker;
+namespace Company.FunctionApp
+{
+ internal class Program
+ {
+ static void Main(string[] args)
+ {
+ FunctionsDebugger.Enable();
+
+ var host = new HostBuilder()
+ .ConfigureFunctionsWorkerDefaults()
+ .ConfigureServices(services => {
+ services.AddApplicationInsightsTelemetryWorkerService();
+ services.ConfigureFunctionsApplicationInsights();
+ })
+ .Build();
+ host.Run();
+ }
+ }
+}
+```
-### local.settings.json file
+The `Program.cs` file will replace any file that has the `FunctionsStartup` attribute, which is typically a `Startup.cs` file. In places where your `FunctionsStartup` code would reference `IFunctionsHostBuilder.Services`, you can instead add statements within the `.ConfigureServices()` method of the `HostBuilder` in your `Program.cs`. To learn more about working with `Program.cs`, see [Start-up and configuration](./dotnet-isolated-process-guide.md#start-up-and-configuration) in the isolated worker model guide.
-The local.settings.json file is only used when running locally. For information, see [Local settings file](functions-develop-local.md#local-settings-file).
+Once you have moved everything from any existing `FunctionsStartup` to the `Program.cs` file, you can delete the `FunctionsStartup` attribute and the class it was applied to.
-When migrating from running in-process to running in an isolated worker process, you need to change the `FUNCTIONS_WORKER_RUNTIME` value to "dotnet-isolated". Make sure that your local.settings.json file has at least the following elements:
+### Function signature changes
+Some key types change between the in-process model and the isolated worker model. Many of these relate to the attributes, parameters, and return types that make up the function signature. For each of your functions, you must make changes to:
-### Class name changes
+- The function attribute (which also sets the function's name)
+- How the function obtains an `ILogger`/`ILogger<T>`
+- Trigger and binding attributes and parameters
-Some key classes change between the in-process model and the isolated worker model. The following table indicates key .NET classes used by Functions that change when migrating:
+The rest of this section will walk you through each of these steps.
-| In-process model | Isolated worker model|
-| | | |
-| `FunctionName` (attribute) | `Function` (attribute) |
-| `ILogger` | `ILogger`, `ILogger<T>` |
-| `HttpRequest` | `HttpRequestData`, `HttpRequest` (using [ASP.NET Core integration])|
-| `IActionResult` | `HttpResponseData`, `IActionResult` (using [ASP.NET Core integration])|
-| `FunctionsStartup` (attribute) | Uses [`Program.cs`](#programcs-file) instead |
+#### Function attributes
-[ASP.NET Core integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration
+The `FunctionName` attribute is replaced by the `Function` attribute in the isolated worker model. The new attribute has the same signature, and the only difference is in the name. You can therefore just perform a string replacement across your project.
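Because only the attribute name changes, the replacement can be scripted. The following is a hedged sketch that assumes GNU `grep` and `sed` and demonstrates the rename on a scratch directory (review the resulting diff before committing any real changes):

```shell
# Demo of the attribute rename on a scratch directory (names are illustrative).
tmp=$(mktemp -d)
printf '[FunctionName("HttpTriggerCSharp")]\n' > "$tmp/Function.cs"

# Rename the FunctionName attribute to Function in all C# files under $tmp.
grep -rl '\[FunctionName(' --include='*.cs' "$tmp" | xargs -r sed -i 's/\[FunctionName(/\[Function(/g'

cat "$tmp/Function.cs"   # [Function("HttpTriggerCSharp")]
```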
-There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
+#### Logging
-### HTTP trigger template
+In the in-process model, you could include an additional `ILogger` parameter to your function, or you could use dependency injection to get an `ILogger<T>`. If you were already using dependency injection, the same mechanisms work in the isolated worker model.
-The differences between in-process and isolated worker process can be seen in HTTP triggered functions. The HTTP trigger template for the in-process model looks like the following example:
+However, for any functions that relied on the `ILogger` method parameter, you will need to make a change. It is recommended that you use dependency injection to obtain an `ILogger<T>`. Use the following steps to migrate the function's logging mechanism:
+1. In your function class, add a `private readonly ILogger<MyFunction> _logger;` property, replacing `MyFunction` with the name of your function class.
+1. Create a constructor for your function class that takes in the `ILogger<T>` as a parameter:
-The HTTP trigger template for the migrated version looks like the following example:
+ ```csharp
+ public MyFunction(ILogger<MyFunction> logger) {
+ _logger = logger;
+ }
+ ```
-# [.NET 6](#tab/net6-isolated)
+ Replace both instances of `MyFunction` in the code snippet above with the name of your function class.
+1. For logging operations in your function code, replace references to the `ILogger` parameter with `_logger`.
+1. Remove the `ILogger` parameter from your function signature.
-You can also leverage [ASP.NET Core integration] to instead have the function look more like the following example:
+To learn more, see [Logging in the isolated worker model](./dotnet-isolated-process-guide.md#logging).
-```csharp
-[Function("HttpFunction")]
-public IActionResult Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
+#### Trigger and binding changes
+
+When you [changed your package references in a previous step](#package-references), you introduced errors for your triggers and bindings that you will now fix:
+
+1. Remove any `using Microsoft.Azure.WebJobs;` statements.
+1. Add a `using Microsoft.Azure.Functions.Worker;` statement.
+1. For each binding attribute, change the attribute's name as specified in its reference documentation, which you can find in the [Supported bindings](./functions-triggers-bindings.md#supported-bindings) index. In general, the attribute names change as follows:
+
+ - **Triggers typically remain named the same way.** For example, `QueueTrigger` is the attribute name for both models.
+ - **Input bindings typically need "Input" added to their name.** For example, if you used the `CosmosDB` input binding attribute in the in-process model, this would now be `CosmosDBInput`.
+ - **Output bindings typically need "Output" added to their name.** For example, if you used the `Queue` output binding attribute in the in-process model, this would now be `QueueOutput`.
+
+1. Update the attribute parameters to reflect the isolated worker model version, as specified in the binding's reference documentation.
+
+ For example, in the in-process model, a blob output binding is represented by a `[Blob(...)]` attribute that includes an `Access` property. In the isolated worker model, the blob output attribute would be `[BlobOutput(...)]`. The binding no longer requires the `Access` property, so that parameter can be removed. So `[Blob("sample-images-sm/{fileName}", FileAccess.Write, Connection = "MyStorageConnection")]` would become `[BlobOutput("sample-images-sm/{fileName}", Connection = "MyStorageConnection")]`.
+
+1. Move output bindings out of the function parameter list. If you have just one output binding, you can apply this to the return type of the function. If you have multiple outputs, create a new class with properties for each output, and apply the attributes to those properties. To learn more, see [Multiple output bindings](./dotnet-isolated-process-guide.md#multiple-output-bindings).
+
+1. Consult each binding's reference documentation for the types it allows you to bind to. In some cases, you may need to change the type. For output bindings, if the in-process model version used an `IAsyncCollector<T>`, you can replace this with binding to an array of the target type: `T[]`. You can also consider replacing the output binding with a client object for the service it represents, either as the binding type for an input binding if available, or by [injecting a client yourself](./dotnet-isolated-process-guide.md#register-azure-clients).
+
+1. If your function includes an `IBinder` parameter, remove it. Replace the functionality with a client object for the service it represents, either as the binding type for an input binding if available, or by [injecting a client yourself](./dotnet-isolated-process-guide.md#register-azure-clients).
+
+1. Update the function code to work with any new types.
+
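To make the renaming steps concrete, here is an illustrative sketch (the type and binding names `ProcessOrder`, `orders`, and `MyStorageConnection` are hypothetical) of a queue-triggered function with a single blob output after applying the changes above:

```csharp
using Microsoft.Azure.Functions.Worker;

public class ProcessOrder
{
    // In the in-process model this used [FunctionName(...)] and a [Blob(..., FileAccess.Write)] output.
    [Function("ProcessOrder")]
    [BlobOutput("processed/{queueTrigger}.txt", Connection = "MyStorageConnection")]
    public string Run([QueueTrigger("orders", Connection = "MyStorageConnection")] string message)
    {
        // The return value is written to the blob output binding.
        return message.ToUpperInvariant();
    }
}
```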
+### local.settings.json file
+
+The local.settings.json file is only used when running locally. For information, see [Local settings file](functions-develop-local.md#local-settings-file).
+
+When migrating from running in-process to running in an isolated worker process, you need to change the `FUNCTIONS_WORKER_RUNTIME` value to "dotnet-isolated". Make sure that your local.settings.json file has at least the following elements:
+
+```json
{
- return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
+ "IsEncrypted": false,
+ "Values": {
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
+ }
} ```
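As a quick local sanity check, a stdlib-only script like the following (the helper name is hypothetical; this is not part of the Functions tooling) can confirm that the runtime value is set as expected:

```python
import json

def uses_isolated_worker(settings_text: str) -> bool:
    """Return True if local.settings.json declares the isolated worker runtime."""
    values = json.loads(settings_text).get("Values", {})
    return values.get("FUNCTIONS_WORKER_RUNTIME") == "dotnet-isolated"

example = """{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet-isolated"
  }
}"""

print(uses_isolated_worker(example))  # True
```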
-# [.NET 7](#tab/net7)
+The value you have configured for `AzureWebJobsStorage` might be different. You do not need to change its value as part of the migration.
+### Example function migrations
-You can also leverage [ASP.NET Core integration] to instead have the function look more like the following example:
+#### HTTP trigger example
+
+An HTTP trigger for the in-process model might look like the following example:
```csharp
-[Function("HttpFunction")]
-public IActionResult Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.WebJobs;
+using Microsoft.Azure.WebJobs.Extensions.Http;
+using Microsoft.Extensions.Logging;
+
+namespace Company.Function
{
- return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
+ public static class HttpTriggerCSharp
+ {
+ [FunctionName("HttpTriggerCSharp")]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Function, "get", Route = null)] HttpRequest req,
+ ILogger log)
+ {
+ log.LogInformation("C# HTTP trigger function processed a request.");
+
+ return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
+ }
+ }
} ```
-# [.NET Framework 4.8](#tab/v4)
-
+An HTTP trigger for the migrated version might look like the following example:
+# [.NET 6 / .NET 7 / .NET 8](#tab/net6-isolated+net7+net8)
-# [.NET 8 (Preview)](#tab/net8)
+```csharp
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Extensions.Logging;
+namespace Company.Function
+{
+ public class HttpTriggerCSharp
+ {
+ private readonly ILogger<HttpTriggerCSharp> _logger;
+
+ public HttpTriggerCSharp(ILogger<HttpTriggerCSharp> logger)
+ {
+ _logger = logger;
+ }
+
+ [Function("HttpTriggerCSharp")]
+ public IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
+ {
+ _logger.LogInformation("C# HTTP trigger function processed a request.");
+
+ return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
+ }
+ }
+}
+```
-You can also leverage [ASP.NET Core integration] to instead have the function look more like the following example:
+# [.NET Framework 4.8](#tab/v4)
```csharp
-[Function("HttpFunction")]
-public IActionResult Run(
- [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req)
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Extensions.Logging;
+using System.Net;
+
+namespace Company.Function
{
- return new OkObjectResult($"Welcome to Azure Functions, {req.Query["name"]}!");
+ public class HttpTriggerCSharp
+ {
+ private readonly ILogger<HttpTriggerCSharp> _logger;
+
+ public HttpTriggerCSharp(ILogger<HttpTriggerCSharp> logger)
+ {
+ _logger = logger;
+ }
+
+ [Function("HttpTriggerCSharp")]
+ public HttpResponseData Run([HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestData req)
+ {
+ _logger.LogInformation("C# HTTP trigger function processed a request.");
+
+ var response = req.CreateResponse(HttpStatusCode.OK);
+ response.Headers.Add("Content-Type", "text/plain; charset=utf-8");
+
+ response.WriteString($"Welcome to Azure Functions, {req.Query["name"]}!");
+
+ return response;
+ }
+ }
} ```
Once you've completed these steps, your app has been fully migrated to the isola
[isolated-guide]: ./dotnet-isolated-process-guide.md [.NET Upgrade Assistant]: /dotnet/core/porting/upgrade-assistant-overview
+[ASP.NET Core integration]: ./dotnet-isolated-process-guide.md#aspnet-core-integration
+
+[HttpRequestData]: /dotnet/api/microsoft.azure.functions.worker.http.httprequestdata?view=azure-dotnet&preserve-view=true
+[HttpResponseData]: /dotnet/api/microsoft.azure.functions.worker.http.httpresponsedata?view=azure-dotnet&preserve-view=true
azure-maps Authentication Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/authentication-best-practices.md
For apps that run on devices or desktop computers or in a web browser, you shoul
### Confidential client applications
-For apps that run on servers (such as web services and service/daemon apps), if you prefer to avoid the overhead and complexity of managing secrets, consider [Managed Identities]. Managed identities can provide an identity for your web service to use when connecting to Azure Maps using Microsoft Entra authentication. If so, your web service uses that identity to obtain the required Microsoft Entra tokens. You should use Azure RBAC to configure what access the web service is given, using the [Least privileged roles] possible.
+For apps that run on servers (such as web services and service/daemon apps), if you prefer to avoid the overhead and complexity of managing secrets, consider [Managed Identities]. Managed identities can provide an identity for your web service to use when connecting to Azure Maps using [Microsoft Entra authentication]. If so, your web service uses that identity to obtain the required Microsoft Entra tokens. You should use Azure RBAC to configure what access the web service is given, using the [Least privileged roles] possible.
## Next steps
For apps that run on servers (such as web services and service/daemon apps), if
> [Tutorial: Add app authentication to your web app running on Azure App Service] [Authentication with Azure Maps]: azure-maps-authentication.md
-[Azure Active Directory (Azure AD) authentication]: ../active-directory/fundamentals/active-directory-whatis.md
+[Microsoft Entra authentication]: ../active-directory/fundamentals/active-directory-whatis.md
[Configurable token lifetimes in the Microsoft identity platform (preview)]: ../active-directory/develop/configurable-token-lifetimes.md [Create SAS tokens]: azure-maps-authentication.md#create-sas-tokens [Cross origin resource sharing (CORS)]: azure-maps-authentication.md#cross-origin-resource-sharing-cors
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md
# Authentication with Azure Maps
-Azure Maps supports three ways to authenticate requests: Shared Key authentication, [Microsoft Entra ID] authentication, and Shared Access Signature (SAS) Token authentication. This article explains authentication methods to help guide your implementation of Azure Maps services. The article also describes other account controls such as disabling local authentication for Azure Policy and Cross-Origin Resource Sharing (CORS).
+Azure Maps supports three ways to authenticate requests: Shared Key authentication, Microsoft Entra ID authentication, and Shared Access Signature (SAS) Token authentication. This article explains authentication methods to help guide your implementation of Azure Maps services. The article also describes other account controls such as disabling local authentication for Azure Policy and Cross-Origin Resource Sharing (CORS).
> [!NOTE] > To improve secure communication with Azure Maps, we now support Transport Layer Security (TLS) 1.2, and we're retiring support for TLS 1.0 and 1.1. If you currently use TLS 1.x, evaluate your TLS 1.2 readiness and develop a migration plan with the testing described in [Solving the TLS 1.0 Problem].
To learn more about authenticating the Azure Maps Control with Microsoft Entra I
> [!div class="nextstepaction"] > [Use the Azure Maps Map Control]
-[Azure Active Directory (Azure AD)]: ../active-directory/fundamentals/active-directory-whatis.md
[Solving the TLS 1.0 Problem]: /security/solving-tls1-problem [View authentication details]: how-to-manage-authentication.md#view-authentication-details [Manage authentication in Azure Maps]: how-to-manage-authentication.md
To learn more about authenticating the Azure Maps Control with Microsoft Entra I
[Azure custom roles]: ../role-based-access-control/custom-roles.md [management group]: ../governance/management-groups/overview.md [Management API]: /rest/api/maps-management/
-[Azure AD authentication]: #azure-ad-authentication
+[Microsoft Entra authentication]: #microsoft-entra-authentication
[What is Azure Policy?]: ../governance/policy/overview.md [user-assigned managed identity]: ../active-directory/managed-identities-azure-resources/overview.md [understanding access control]: #understand-sas-token-access-control
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
The following example shows how to update a dataset, create a new tileset, and d
<! learn.microsoft.com Links > [Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control
-[Azure AD authentication]: azure-maps-authentication.md#azure-ad-authentication
[Azure Maps Drawing Error Visualizer]: drawing-error-visualizer.md [Azure Maps services]: index.yml [Azure Maps Web SDK]: how-to-use-map-control.md
The following example shows how to update a dataset, create a new tileset, and d
[Create custom styles for indoor maps]: how-to-create-custom-styles.md [Drawing package requirements]: drawing-requirements.md [Drawing package warnings and errors]: drawing-conversion-error-codes.md
+[Microsoft Entra authentication]: ../active-directory/fundamentals/active-directory-whatis.md
[How to create data registry]: how-to-create-data-registries.md [Indoor maps wayfinding service]: how-to-creator-wayfinding.md [Instantiate the Indoor Manager]: how-to-use-indoor-module.md#instantiate-the-indoor-manager
azure-maps Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/glossary.md
The following list describes common words used with the Azure Maps services.
<a name="Zoom level"></a> **Zoom level**: Specifies the level of detail and how much of the map is visible. When zoomed all the way to level 0, the full world map is often visible. But, the map shows limited details such as country/region names, borders, and ocean names. When zoomed in closer to level 17, the map displays an area of a few city blocks with detailed road information. In Azure Maps, the highest zoom level is 22. For more information, see the [Zoom levels and tile grid] documentation. [Altitude]: #altitude
-[Azure Maps and Azure AD]: azure-maps-authentication.md
+[Azure Maps and Microsoft Entra ID]: azure-maps-authentication.md
[Bearing]: #heading [Bounding box]: #bounding-box [consumption model documentation]: consumption-model.md
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
main().catch((err) => {
```
-This code snippet shows how to use the `MapsSearch` method from the Azure Maps Search client library to create a `client` object with your Azure credentials. You can use either your Azure Maps subscription key or the [Microsoft Entra credential](#using-an-azure-ad-credential) from the previous section. The `path` parameter specifies the API endpoint, which is "/search/fuzzy/{format}" in this case. The `get` method sends an HTTP GET request with the query parameters, such as `query`, `coordinates`, and `countryFilter`. The query searches for Starbucks locations near Seattle in the US. The SDK returns the results as a [FuzzySearchResult] object and writes them to the console. For more information, see the [FuzzySearchRequest] documentation.
-
+This code snippet shows how to use the `MapsSearch` method from the Azure Maps Search client library to create a `client` object with your Azure credentials. You can use either your Azure Maps subscription key or the [Microsoft Entra credential](#using-a-microsoft-entra-credential) from the previous section. The `path` parameter specifies the API endpoint, which is "/search/fuzzy/{format}" in this case. The `get` method sends an HTTP GET request with the query parameters, such as `query`, `coordinates`, and `countryFilter`. The query searches for Starbucks locations near Seattle in the US. The SDK returns the results as a [FuzzySearchResult] object and writes them to the console. For more information, see the [FuzzySearchRequest] documentation.
+ Run `search.js` with Node.js: ```powershell
azure-maps How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-authentication.md
This table outlines common authentication and authorization scenarios in Azure M
> [!IMPORTANT]
> For production applications, we recommend implementing Microsoft Entra ID with Azure role-based access control (Azure RBAC).
-| Scenario | Authentication | Authorization | Development effort | Operational effort |
-| --| -- | - | | |
-| [Trusted daemon app or non-interactive client app] | Shared Key | N/A | Medium | High |
-| [Trusted daemon or non-interactive client app] | Microsoft Entra ID | High | Low | Medium |
-| [Web single page app with interactive single-sign-on]| Microsoft Entra ID | High | Medium | Medium |
-| [Web single page app with non-interactive sign-on] | Microsoft Entra ID | High | Medium | Medium |
-| [Web app, daemon app, or non-interactive sign-on app]| SAS Token | High | Medium | Low |
-| [Web application with interactive single-sign-on] | Microsoft Entra ID | High | High | Medium |
-| [IoT device or an input constrained application] | Microsoft Entra ID | High | Medium | Medium |
+| Scenario | Authentication | Authorization | Development effort | Operational effort |
+| --- | --- | --- | --- | --- |
+| [Trusted daemon app or non-interactive client app] | Shared Key | N/A | Medium | High |
+| [Trusted daemon or non-interactive client app] | Microsoft Entra ID | High | Low | Medium |
+| [Web single page app with interactive single-sign-on]| Microsoft Entra ID | High | Medium | Medium |
+| [Web single page app with non-interactive sign-on] | Microsoft Entra ID | High | Medium | Medium |
+| [Web app, daemon app, or non-interactive sign-on app]| SAS Token | High | Medium | Low |
+| [Web application with interactive single-sign-on] | Microsoft Entra ID | High | High | Medium |
+| [IoT device or an input constrained application] | Microsoft Entra ID | High | Medium | Medium |
## View built-in Azure Maps role definitions
The results display the current Azure Maps role assignments.
Request a token from the Microsoft Entra token endpoint. In your Microsoft Entra ID request, use the following details:
-| Azure environment | Microsoft Entra token endpoint | Azure resource ID |
+| Azure environment | Microsoft Entra token endpoint | Azure resource ID |
| --- | --- | --- |
| Azure public cloud | `https://login.microsoftonline.com` | `https://atlas.microsoft.com/` |
| Azure Government cloud | `https://login.microsoftonline.us` | `https://atlas.microsoft.com/` |
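As a sketch of that request, the following shell snippet assembles the public-cloud token endpoint for the OAuth 2.0 client-credentials flow. The tenant and app values are placeholders, the `v2.0` token path is assumed (the table above only lists the endpoint host), and the `scope` is the Azure resource ID from the table with `/.default` appended:

```shell
# Placeholders: substitute your own tenant and app registration values.
TENANT_ID="<your-tenant-id>"
TOKEN_ENDPOINT="https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token"

# Uncomment to send the request once the placeholders are filled in:
# curl -X POST "$TOKEN_ENDPOINT" \
#   -d "grant_type=client_credentials" \
#   -d "client_id=<your-app-id>" \
#   -d "client_secret=<your-secret>" \
#   -d "scope=https://atlas.microsoft.com/.default"
echo "$TOKEN_ENDPOINT"
```

For the Azure Government cloud, swap the host for `https://login.microsoftonline.us` per the table.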
Explore samples that show how to integrate Microsoft Entra ID with Azure Maps:
> [Microsoft Entra authentication samples]

[Azure portal]: https://portal.azure.com/
-[Azure AD authentication samples]: https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples
+[Microsoft Entra authentication samples]: https://github.com/Azure-Samples/Azure-Maps-AzureAD-Samples
[View usage metrics]: how-to-view-api-usage.md
-[Authentication scenarios for Azure AD]: ../active-directory/develop/authentication-vs-authorization.md
+[Authentication scenarios for Microsoft Entra ID]: ../active-directory/develop/authentication-vs-authorization.md
[the table of scenarios]: how-to-manage-authentication.md#choose-an-authentication-and-authorization-scenario
[Trusted daemon app or non-interactive client app]: how-to-secure-daemon-app.md
[Trusted daemon or non-interactive client app]: how-to-secure-daemon-app.md
Explore samples that show how to integrate Microsoft Entra ID with Azure Maps:
[IoT device or an input constrained application]: how-to-secure-device-code.md
[Shared access signature (SAS) token authentication]: azure-maps-authentication.md#shared-access-signature-token-authentication
[application categories]: ../active-directory/develop/authentication-flows-app-scenarios.md#application-categories
-[Azure Active Directory (Azure AD)]: ../active-directory/fundamentals/active-directory-whatis.md
+[Microsoft Entra ID]: ../active-directory/fundamentals/active-directory-whatis.md
[Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication
[free account]: https://azure.microsoft.com/free/
[managed identities for Azure resources]: ../active-directory/managed-identities-azure-resources/overview.md
azure-maps How To Manage Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-creator.md
Learn how to use the Creator services to render indoor maps in your application:
> [Use the Indoor Maps module]

[Authorization with role-based access control]: azure-maps-authentication.md#authorization-with-role-based-access-control
-[Azure AD authentication]: azure-maps-authentication.md#azure-ad-authentication
+[Microsoft Entra authentication]: azure-maps-authentication.md#microsoft-entra-authentication
[Azure Maps Creator tutorial]: tutorial-creator-indoor-maps.md
[Azure Maps pricing]: https://aka.ms/CreatorPricing
[Azure portal]: https://portal.azure.com
azure-maps How To Secure Spa Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md
Create the web application in Microsoft Entra ID for users to sign in. The web a
6. Copy the Microsoft Entra app ID and the Microsoft Entra tenant ID from the app registration to use in the Web SDK. Add the Microsoft Entra app registration details and the `x-ms-client-id` from the Azure Map account to the Web SDK.
- ```javascript
+ ```html
    <link rel="stylesheet" href="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.css" type="text/css" />
    <script src="https://atlas.microsoft.com/sdk/javascript/mapcontrol/3/atlas.min.js" />
    <script>
azure-maps How To Use Ts Rest Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-ts-rest-sdk.md
For more code samples that use the TypeScript REST SDK with Web SDK integration,
[Azure TypeScript REST SDK]: ./rest-sdk-developer-guide.md#javascripttypescript
[JavaScript/TypeScript REST SDK Developers Guide]: ./how-to-dev-guide-js-sdk.md
[MapsSearch]: /javascript/api/@azure-rest/maps-search
-[Azure Active Directory credential]: ./how-to-dev-guide-js-sdk.md#using-an-azure-ad-credential
+[Microsoft Entra credential]: ./how-to-dev-guide-js-sdk.md#using-an-azure-ad-credential
[Azure Key credential]: ./how-to-dev-guide-js-sdk.md#using-a-subscription-key-credential
[@azure/identity]: https://www.npmjs.com/package/@azure/identity
[@azure/core-auth]: https://www.npmjs.com/package/@azure/core-auth
azure-maps Map Show Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md
description: Find out how to add traffic data to maps. Learn about flow data, and see how to use the Azure Maps Web SDK to add incident data and flow data to maps. Previously updated : 06/15/2023 Last updated : 10/26/2023
map.setTraffic({
The [Traffic Overlay] sample demonstrates how to display the traffic overlay on a map. For the source code for this sample, see [Traffic Overlay source code].

<!-- > [!VIDEO //codepen.io/azuremaps/embed/WMLRPw/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] -->
The [Traffic Overlay] sample demonstrates how to display the traffic overlay on
The [Traffic Overlay Options] tool lets you switch between the different traffic overlay settings to see how the rendering changes. For the source code for this sample, see [Traffic Overlay Options source code].

<!-- > [!VIDEO //codepen.io/azuremaps/embed/RwbPqRY/?height=700&theme-id=0&default-tab=result] -->
map.controls.add(new atlas.control.TrafficLegendControl(), { position: 'bottom-l
The [Traffic controls] sample is a fully functional map that shows how to display traffic data on a map. For the source code for this sample, see [Traffic controls source code].

<!-- > [!VIDEO https://codepen.io/azuremaps/embed/ZEWaeLJ?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true] -->
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm Rsyslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md
Overview of Azure Monitor Agent for Linux Syslog collection and supported RFC standards:
- Azure Monitor Agent installs an output configuration for the system Syslog daemon during the installation process. The configuration file specifies the way events flow between the Syslog daemon and Azure Monitor Agent.
-- For `rsyslog` (most Linux distributions), the configuration file is `/etc/rsyslog.d/10-azuremonitoragent.conf`. For `syslog-ng`, the configuration file is `/etc/syslog-ng/conf.d/azuremonitoragent.conf`.
-- Azure Monitor Agent listens to a UNIX domain socket to receive events from `rsyslog` / `syslog-ng`. The socket path for this communication is `/run/azuremonitoragent/default_syslog.socket`.
+- For `rsyslog` (most Linux distributions), the configuration file is `/etc/rsyslog.d/10-azuremonitoragent-omfwd.conf`. For `syslog-ng`, the configuration file is `/etc/syslog-ng/conf.d/azuremonitoragent-tcp.conf`.
+- Azure Monitor Agent listens to a TCP port to receive events from `rsyslog` / `syslog-ng`. The port for this communication is logged at `/etc/opt/microsoft/azuremonitoragent/config-cache/syslog.port`.
+ > [!NOTE]
+ > Before Azure Monitor Agent version 1.28, the agent used a Unix domain socket instead of a TCP port to receive events from rsyslog. The `omfwd` output module in `rsyslog` offers spooling and retry mechanisms for improved reliability.
- The Syslog daemon uses queues when Azure Monitor Agent ingestion is delayed or when Azure Monitor Agent isn't reachable.
- Azure Monitor Agent ingests Syslog events via the previously mentioned socket and filters them based on facility or severity combination from data collection rule (DCR) configuration in `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Any `facility` or `severity` not present in the DCR is dropped.
- Azure Monitor Agent attempts to parse events in accordance with **RFC3164** and **RFC5424**. It also knows how to parse the message formats listed on [this website](./azure-monitor-agent-overview.md#data-sources-and-destinations).
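As a quick sanity check of this pipeline, a sketch like the following inspects the port file and the rsyslog drop-in named above. Run it on the monitored machine; the echoed messages are illustrative, and the paths are the ones documented in this section:

```shell
# Paths come from the description above; output messages are illustrative.
PORT_FILE="/etc/opt/microsoft/azuremonitoragent/config-cache/syslog.port"
RSYSLOG_CONF="/etc/rsyslog.d/10-azuremonitoragent-omfwd.conf"

if [ -r "$PORT_FILE" ]; then
  echo "AMA syslog TCP port: $(cat "$PORT_FILE")"
else
  echo "Port file not found: agent may predate version 1.28 or isn't installed."
fi

if [ -f "$RSYSLOG_CONF" ]; then
  echo "rsyslog forwarding drop-in present: $RSYSLOG_CONF"
else
  echo "rsyslog forwarding drop-in not found."
fi
```

On syslog-ng machines, check `/etc/syslog-ng/conf.d/azuremonitoragent-tcp.conf` instead of the rsyslog drop-in.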
rsyslogd 1484 syslog 14w REG 8,1 3601566564 0 35280 /var/log/syslog (
### Rsyslog default configuration logs all facilities to /var/log/

On some popular distros (for example, Ubuntu 18.04 LTS), rsyslog ships with a default configuration file (`/etc/rsyslog.d/50-default.conf`), which logs events from nearly all facilities to disk at `/var/log/syslog`. On the RedHat/CentOS family, Syslog events are stored under `/var/log/` but in a different file: `/var/log/messages`.
-Azure Monitor Agent doesn't rely on Syslog events being logged to `/var/log/`. Instead, it configures the rsyslog service to forward events over a socket directly to the `azuremonitoragent` service process (mdsd).
+Azure Monitor Agent doesn't rely on Syslog events being logged to `/var/log/`. Instead, it configures the rsyslog service to forward events over a TCP port directly to the `azuremonitoragent` service process (mdsd).
#### Fix: Remove high-volume facilities from /etc/rsyslog.d/50-default.conf
-If you're sending a high log volume through rsyslog and your system is set up to log events for these facilities, consider modifying the default rsyslog config to avoid logging and storing them under `/var/log/`. The events for this facility would still be forwarded to Azure Monitor Agent because rsyslog uses a different configuration for forwarding placed in `/etc/rsyslog.d/10-azuremonitoragent.conf`.
+If you're sending a high log volume through rsyslog and your system is set up to log events for these facilities, consider modifying the default rsyslog config to avoid logging and storing them under `/var/log/`. The events for this facility would still be forwarded to Azure Monitor Agent because rsyslog uses a different configuration for forwarding placed in `/etc/rsyslog.d/10-azuremonitoragent-omfwd.conf`.
1. For example, to remove `local4` events from being logged at `/var/log/syslog` or `/var/log/messages`, change this line in `/etc/rsyslog.d/50-default.conf` from this snippet:
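As an illustration (the rule shown is the stock Ubuntu catch-all rule; the exact selector may differ by distro and version, so verify against your own `50-default.conf`), adding `local4.none` to the selector stops rsyslog from writing that facility to `/var/log/syslog`:

```
# Before: log everything except auth/authpriv to /var/log/syslog
*.*;auth,authpriv.none                -/var/log/syslog

# After: also exclude the high-volume local4 facility
*.*;auth,authpriv.none,local4.none    -/var/log/syslog
```

After editing the file, restart the rsyslog service for the change to take effect.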
azure-monitor Data Collection Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md
If your VM doesn't have Azure Monitor Agent installed, the DCR deployment trigge
When Azure Monitor Agent is installed on a Linux machine, it installs a default Syslog configuration file that defines the facility and severity of the messages that are collected if Syslog is enabled in a DCR. The configuration file is different depending on the Syslog daemon that the client has installed.

### Rsyslog
-On many Linux distributions, the rsyslogd daemon is responsible for consuming, storing, and routing log messages sent by using the Linux Syslog API. Azure Monitor Agent uses the UNIX domain socket output module (`omuxsock`) in rsyslog to forward log messages to Azure Monitor Agent.
+On many Linux distributions, the rsyslogd daemon is responsible for consuming, storing, and routing log messages sent by using the Linux Syslog API. Azure Monitor Agent uses the TCP forward output module (`omfwd`) in rsyslog to forward log messages to Azure Monitor Agent.
The Azure Monitor Agent installation includes default config files that get placed under the following directory: `/etc/opt/microsoft/azuremonitoragent/syslog/rsyslogconf/`

When Syslog is added to a DCR, these configuration files are installed under the `/etc/rsyslog.d` system directory and rsyslog is automatically restarted for the changes to take effect. These files are used by rsyslog to load the output module and forward the events to the Azure Monitor Agent daemon by using defined rules.
-The built-in `omuxsock` module can't be loaded more than once. For this reason, the configurations for loading of the module and forwarding of the events with corresponding forwarding format template are split in two different files. Its default contents are shown in the following example. This example collects Syslog messages sent from the local agent for all facilities with all log levels.
+Its default contents are shown in the following example. This example collects Syslog messages sent from the local agent for all facilities with all log levels.
```
-$ cat /etc/rsyslog.d/10-azuremonitoragent.conf
+$ cat /etc/rsyslog.d/10-azuremonitoragent-omfwd.conf
# Azure Monitor Agent configuration: forward logs to azuremonitoragent
-$OMUxSockSocket /run/azuremonitoragent/default_syslog.socket
-template(name="AMA_RSYSLOG_TraditionalForwardFormat" type="string" string="<%PRI%>%TIMESTAMP% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%")
-$OMUxSockDefaultTemplate AMA_RSYSLOG_TraditionalForwardFormat
-# Forwarding all events through Unix Domain Socket
-*.* :omuxsock:
+
+template(name="AMA_RSYSLOG_TraditionalForwardFormat" type="string" string="<%PRI%>%TIMESTAMP% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%")
+# queue.workerThreads sets the maximum worker threads, it will scale back to 0 if there is no activity
+# Forwarding all events through TCP port
+*.* action(type="omfwd"
+template="AMA_RSYSLOG_TraditionalForwardFormat"
+queue.type="LinkedList"
+queue.filename="omfwd-azuremonitoragent"
+queue.maxFileSize="32m"
+action.resumeRetryCount="-1"
+action.resumeInterval="5"
+action.reportSuspension="on"
+action.reportSuspensionContinuation="on"
+queue.size="25000"
+queue.workerThreads="100"
+queue.dequeueBatchSize="2048"
+queue.saveonshutdown="on"
+target="127.0.0.1" Port="28330" Protocol="tcp")
```
-```
-$ cat /etc/rsyslog.d/05-azuremonitoragent-loadomuxsock.conf
-# Azure Monitor Agent configuration: load rsyslog forwarding module.
-$ModLoad omuxsock
-```
-
On some legacy systems, such as CentOS 7.3, we've seen rsyslog log formatting issues when a traditional forwarding format is used to send Syslog events to Azure Monitor Agent. For these systems, Azure Monitor Agent automatically places a legacy forwarder template instead:

`template(name="AMA_RSYSLOG_TraditionalForwardFormat" type="string" string="%TIMESTAMP% %HOSTNAME% %syslogtag%%msg:::sp-if-no-1st-sp%%msg%\n")`

### Syslog-ng
-The configuration file for syslog-ng is installed at `/etc/opt/microsoft/azuremonitoragent/syslog/syslog-ngconf/azuremonitoragent.conf`. When Syslog collection is added to a DCR, this configuration file is placed under the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` system directory and syslog-ng is automatically restarted for the changes to take effect.
+The configuration file for syslog-ng is installed at `/etc/opt/microsoft/azuremonitoragent/syslog/syslog-ngconf/azuremonitoragent-tcp.conf`. When Syslog collection is added to a DCR, this configuration file is placed under the `/etc/syslog-ng/conf.d/azuremonitoragent-tcp.conf` system directory and syslog-ng is automatically restarted for the changes to take effect.
The default contents are shown in the following example. This example collects Syslog messages sent from the local agent for all facilities and all severities.

```
-$ cat /etc/syslog-ng/conf.d/azuremonitoragent.conf
-# Azure MDSD configuration: syslog forwarding config for mdsd agent options {};
-
-# during install time, we detect if s_src exist, if it does then we
-
-# replace it by appropriate source name like in redhat 's_sys'
-
-# Forwrding using unix domain socket
-
-destination d_azure_mdsd {
-
-unix-dgram("/run/azuremonitoragent/default_syslog.socket"
-
-flags(no_multi_line)
-
-);
-};
-
-log { source(s_src); # will be automatically parsed from /etc/syslog-ng/syslog-ng.conf
-destination(d_azure_mdsd); };
+$ cat /etc/syslog-ng/conf.d/azuremonitoragent-tcp.conf
+# Azure MDSD configuration: syslog forwarding config for mdsd agent
+options {};
+
+# during install time, we detect if s_src exist, if it does then we
+# replace it by appropriate source name like in redhat 's_sys'
+# Forwrding using tcp
+destination d_azure_mdsd {
+ network("127.0.0.1"
+ port(28330)
+ log-fifo-size(25000));
+};
+
+log {
+ source(s_src); # will be automatically parsed from /etc/syslog-ng/syslog-ng.conf
+ destination(d_azure_mdsd);
+ flags(flow-control);
+};
```

>[!Note]
azure-monitor Alerts Troubleshoot Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-log.md
Log alerts provide an option to mute fired alert actions for a set amount of tim
A common issue is that you think the alert didn't fire, when the alert actions were actually suppressed by the rule configuration.
-![Suppress alerts](media/alerts-troubleshoot-log/LogAlertSuppress.png)
### Alert scope resource has been moved, renamed, or deleted
The alert time range is limited to a maximum of two days. Even if the query cont
If the query requires more data than the alert evaluation, you can change the time range manually. If there's an `ago` command in the query, it's changed automatically to 2 days (48 hours).

## Log alert fired unnecessarily
azure-monitor Alerts Webhooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-webhooks.md
Azure alerts use HTTP POST to send the alert contents in JSON format to a webhoo
## Configure webhooks via the Azure portal

To add or update the webhook URI, in the [Azure portal](https://portal.azure.com/), go to **Create/Update Alerts**.
-![Add an alert rule pane](./media/alerts-webhooks/Alertwebhook.png)
You can also configure an alert to post to a webhook URI by using [Azure PowerShell cmdlets](../powershell-samples.md#create-metric-alerts), a [cross-platform CLI](../cli-samples.md#work-with-alerts), or [Azure Monitor REST APIs](/rest/api/monitor/alertrules).
azure-monitor It Service Management Connector Secure Webhook Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/it-service-management-connector-secure-webhook-connections.md
The steps of the Secure Webhook data flow are:
1. Creates a work item (for example, an incident) in the ITSM tool.
1. Binds the ID of the configuration item to the customer management database.
-![Diagram that shows how the ITSM tool communicates with Microsoft Entra ID, Azure alerts, and an action group.](media/it-service-management-connector-secure-webhook-connections/secure-export-diagram.png)
## Benefits of Secure Webhook
azure-monitor Itsm Connector Secure Webhook Connections Azure Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsm-connector-secure-webhook-connections-azure-configuration.md
To register the application with Microsoft Entra ID:
1. In Microsoft Entra ID, select **Expose application**.
1. Select **Add** for **Application ID URI**.
- [![Screenshot that shows the option for setting the U R I of the application I D.](media/itsm-connector-secure-webhook-connections-azure-configuration/azure-ad.png)](media/itsm-connector-secure-webhook-connections-azure-configuration/azure-ad-expand.png#lightbox)
+ :::image type="content" source="media/itsm-connector-secure-webhook-connections-azure-configuration/azure-ad.png" lightbox="media/itsm-connector-secure-webhook-connections-azure-configuration/azure-ad.png" alt-text="Screenshot that shows the option for setting the U R I of the application I D.":::
1. Select **Save**.

## Define a service principal
To add a webhook to an action, follow these instructions for Secure Webhook:
The following image shows the configuration of a sample Secure Webhook action:
- ![Screenshot that shows a Secure Webhook action.](media/itsm-connector-secure-webhook-connections-azure-configuration/secure-webhook.png)
+ :::image type="content" source="media/itsm-connector-secure-webhook-connections-azure-configuration/secure-webhook.png" lightbox="media/itsm-connector-secure-webhook-connections-azure-configuration/secure-webhook.png" alt-text="Screenshot that shows a Secure Webhook action.":::
## Configure the ITSM tool environment

Secure Webhook supports connections with the following ITSM tools:
azure-monitor Itsmc Connections Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-connections-servicenow.md
As a part of setting up OAuth, we recommend:
1. Select the old token from the list according to the OAuth name and expiration date.
- ![Screenshot that shows a list of tokens for OAuth.](media/itsmc-connections-servicenow/snow-system-oauth.png)
+ :::image type="content" source="media/itsmc-connections-servicenow/snow-system-oauth.png" lightbox="media/itsmc-connections-servicenow/snow-system-oauth.png" alt-text="Screenshot that shows a list of tokens for OAuth.":::
1. Select **Revoke Access** > **Revoke**.

## Install the user app and create the user role
Use the following procedure to create a ServiceNow connection.
2. Under **Workspace Data Sources**, select **ITSM Connections**.
- ![Screenshot that shows selection of a data source.](media/itsmc-overview/add-new-itsm-connection.png)
+ :::image type="content" source="media/itsmc-overview/add-new-itsm-connection.png" lightbox="media/itsmc-overview/add-new-itsm-connection.png" alt-text="Screenshot that shows selection of a data source.":::
3. At the top of the right pane, select **Add**.
Use the following procedure to create a ServiceNow connection.
| **Work Items To Sync** | Select the ServiceNow work items that you want to sync to Azure Log Analytics, through ITSMC. The selected values are imported into Log Analytics. Options are incidents and change requests.|
| **Create New Configuration Item in ITSM Product** | Select this option if you want to create the configuration items in the ITSM product. When it's selected, ITSMC creates configuration items (if none exist) in the supported ITSM system. It's disabled by default. |
-![Screenshot of boxes and options for adding a ServiceNow connection.](media/itsmc-connections-servicenow/itsm-connection-servicenow-connection-latest.png)
When you're successfully connected and synced:
azure-monitor Itsmc Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-dashboard.md
The dashboard contains information on the alerts that were sent to the ITSM tool
In the **WORK ITEMS CREATED** area, the graph and the table below it contain the count of the work items per type. If you select the graph or the table, you can see more details about the work items.
-![Screenshot that shows a created work item.](media/itsmc-resync-servicenow/itsm-dashboard-workitems.png)
### Affected computers
In the **IMPACTED COMPUTERS** area, the table lists computers and their associat
The table contains a limited number of rows. If you want to see all the rows, select **See all**.
-![Screenshot that shows affected computers.](media/itsmc-resync-servicenow/itsm-dashboard-impacted-comp.png)
### Connector status
The table contains a limited number of rows. If you want to see all the rows, se
To learn more about the messages in the table, see [this article](itsmc-dashboard-errors.md).
-![Screenshot that shows connector status.](media/itsmc-resync-servicenow/itsm-dashboard-connector-status.png)
### Alert rules
In the **ALERT RULES** area, the table contains information on the number of ale
The table contains a limited number of rows. If you want to see all the rows, select **See all**.
-![Screenshot that shows alert rules.](media/itsmc-resync-servicenow/itsm-dashboard-alert-rules.png)
azure-monitor Itsmc Secure Webhook Connections Bmc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-secure-webhook-connections-bmc.md
Ensure that you've met the following prerequisites:
1. Search for the **Create Incident from Azure Alerts** flow. 1. Copy the webhook URL.
- ![Screenshot that shows the webhook U R L in Integration Studio.](media/itsmc-secure-webhook-connections-bmc/bmc-url.png)
+ :::image type="content" source="media/itsmc-secure-webhook-connections-bmc/bmc-url.png" lightbox="media/itsmc-secure-webhook-connections-bmc/bmc-url.png" alt-text="Screenshot that shows the webhook U R L in Integration Studio.":::
1. Follow the instructions according to the version:

   * [Enabling prebuilt integration with Azure Monitor for version 20.02](https://docs.bmc.com/docs/multicloud/enabling-prebuilt-integration-with-azure-monitor-879728195.html)
Ensure that you've met the following prerequisites:
- **Check**: Selected by default to enable usage.
- The Azure tenant ID and Azure application ID are taken from the application that you defined earlier.
- ![Screenshot that shows BMC configuration.](media/itsmc-secure-webhook-connections-bmc/bmc-configuration.png)
+ :::image type="content" source="media/itsmc-secure-webhook-connections-bmc/bmc-configuration.png" lightbox="media/itsmc-secure-webhook-connections-bmc/bmc-configuration.png" alt-text="Screenshot that shows BMC configuration.":::
azure-monitor Itsmc Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-troubleshoot-overview.md
An alternative is to be notified through ITSMC. ITSMC gives you the option to se
Depending on your configuration when you set up a connection, ITSMC can sync up to 120 days of incident and change request data. To get the log record schema for this data, see the [Data synced from your ITSM product](./itsmc-synced-data.md) article.

You can visualize the incident and change request data by using the ITSMC dashboard:
-
-![Screenshot that shows the ITSMC dashboard.](media/itsmc-overview/itsmc-overview-sample-log-analytics.png)
+<!-- convertborder later -->
The dashboard also provides information about connector status. You can use that information as a starting point to analyze problems with the connections. For more information, see [Error investigation using the dashboard](./itsmc-dashboard.md).
Service Map automatically discovers the application components on Windows and Li
Service Map shows connections between servers, processes, and ports across any TCP-connected architecture. Other than the installation of an agent, no configuration is required. For more information, see [Using Service Map](../vm/service-map.md).

If you're using Service Map, you can view the service desk items created in IT Service Management (ITSM) solutions, as shown in this example:
-
-![Screenshot that shows the Log Analytics screen.](media/itsmc-overview/itsmc-overview-integrated-solutions.png)
+<!-- convertborder later -->
## Resolve problems
The following sections identify common symptoms, possible causes, and resolution
* For ServiceNow, ensure that you have [sufficient privileges](itsmc-connections-servicenow.md#install-the-user-app-and-create-the-user-role) in the corresponding ITSM product.
* For Service Manager connections:
- * Ensure that the web app is successfully deployed and that the hybrid connection is created. To verify that the connection is successfully established with the on-premises Service Manager computer, go to the web app URL as described in the [documentation for making a hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection).
+ * Ensure that the web app is successfully deployed and that the hybrid connection is created. To verify that the connection is successfully established with the on-premises Service Manager computer, go to the web app URL. For more information, see the [documentation for making a hybrid connection](./itsmc-connections-scsm.md#configure-the-hybrid-connection).
### Duplicate work items are created
The following sections identify common symptoms, possible causes, and resolution
### Work items are not created
-**Cause**: There can be several reasons for this:
+**Cause**: The cause can be one of several reasons:
* Code was modified on the ServiceNow side.
* Permissions are misconfigured.
The following sections identify common symptoms, possible causes, and resolution
### Sync connection
-**Cause**: There can be several reasons for this:
+**Cause**: The cause can be one of several reasons:
* Templates aren't shown as a part of the action definition dropdown and an error message is shown: "Can't retrieve the template configuration, see the connector logs for more information."
-* Values aren't shown in the dropdowns of the default fields as a part of the action definition and an error message is shown: "No values found for the following fields: \<field names\>."
+* Values aren't shown in the dropdowns of the default fields as a part of the action definition. In addition, an error message is shown: "No values found for the following fields: \<field names\>."
* Incidents/Events aren't created in ServiceNow.

**Resolution**:
The following sections identify common symptoms, possible causes, and resolution
* Check the [dashboard](itsmc-dashboard.md) and review the errors in the section for connector status. Then review the [common errors and their resolutions](itsmc-dashboard-errors.md).

### In the incidents received from ServiceNow, the configuration item is blank
-**Cause**: There can be several reasons for this:
+**Cause**: The cause can be one of several reasons:
* The alert isn't a log alert. Configuration items are only supported by log alerts.
-* The search results do not include the **Computer** or **Resource** column.
-* The values in the configuration item field do not match an entry in the CMDB.
+* The search results don't include the **Computer** or **Resource** column.
+* The values in the configuration item field don't match an entry in the CMDB.
**Resolution**:

* Check if the alert is a log alert. If it isn't a log alert, configuration items are not supported.
-* If the search results do not have a Computer or Resource column, add them to the query. When you are defining a query in Log Search alerts you need to have in the query result the Configuration items names with one of the label names "Computer", "Resource", "_ResourceId" or "ResourceId". This mapping will enable to map the configuration items to the ITSM payload
-* Check that the values in the Computer and Resource columns are identical to the values in the CMDB. If they are not, add a new entry to the CMDB with the matching values.
+* If the search results don't have a Computer or Resource column, add them to the query. When you define a query in log search alerts, the query results must include the configuration item names under one of the labels "Computer", "Resource", "_ResourceId", or "ResourceId". This mapping enables the configuration items to be mapped to the ITSM payload.
+* Check that the values in the Computer and Resource columns are identical to the values in the CMDB. If they aren't, add a new entry to the CMDB with the matching values.
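As an illustrative sketch, a log alert query that returns results keyed by the **Computer** label might look like the following. The `Heartbeat` table and the thresholds are only examples; adapt them to your own alert logic.

```kusto
// Illustrative only: surface the Computer column so the configuration
// item can be mapped to the ITSM payload. Table and filters are examples.
Heartbeat
| where TimeGenerated > ago(15m)
| summarize LastHeartbeat = max(TimeGenerated) by Computer, _ResourceId
| where LastHeartbeat < ago(5m)
```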
azure-monitor Proactive Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-diagnostics.md
You can access the detections issued by smart detection from the emails you rece
You can discover detections in two ways:

* **You receive an email** from Application Insights. Here's a typical example:
-
- ![Screenshot that shows an email alert.](./media/proactive-diagnostics/03.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/proactive-diagnostics/03.png" lightbox="./media/proactive-diagnostics/03.png" alt-text="Screenshot that shows an email alert." border="false":::
  Select **See the analysis of this issue** to see more information in the portal.

* **The smart detection pane** in Application Insights. Under the **Investigate** menu, select **Smart Detection** to see a list of recent detections.
- ![Screenshot that shows recent detections.](./media/proactive-diagnostics/04.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/proactive-diagnostics/04.png" lightbox="./media/proactive-diagnostics/04.png" alt-text="Screenshot that shows recent detections." border="false":::
Select a detection to view its details.
azure-monitor Smart Detection Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/smart-detection-performance.md
No, a notification doesn't mean that your app definitely has a problem. It's sim
The notifications include diagnostic information. Here's an example:
-![Here is an example of Server Response Time Degradation detection](media/smart-detection-performance/server_response_time_degradation.png)
1. **Triage**. The notification shows you how many users or how many operations are affected. This information can help you assign a priority to the problem.
2. **Scope**. Is the problem affecting all traffic, or just some pages? Is it restricted to particular browsers or locations? This information can be obtained from the notification.
The notifications include diagnostic information. Here's an example:
## Configure Email Notifications

Smart detection notifications are enabled by default. They are sent to users that have [Monitoring Reader](../../role-based-access-control/built-in-roles.md#monitoring-reader) and [Monitoring Contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor) access to the subscription in which the Application Insights resource resides. To change the default notification, either click **Configure** in the email notification, or open **Smart detection settings** in Application Insights.
-
- ![Smart Detection Settings](media/smart-detection-performance/smart_detection_configuration.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/smart-detection-performance/smart_detection_configuration.png" lightbox="media/smart-detection-performance/smart_detection_configuration.png" alt-text="Smart Detection Settings" border="false":::
* You can disable the default notification, and replace it with a specified list of emails.
Modern applications often adopt a micro services design approach, which in many
Example of dependency degradation notification:
-![Here is an example of Dependency Duration Degradation detection](media/smart-detection-performance/dependency_duration_degradation.png)
Notice that it tells you:
Anomalies like these are hard to detect just by inspecting the data, but are mor
Currently, our algorithms look at page load times, request response times at the server, and dependency response times. You don't have to set any thresholds or configure rules. Machine learning and data mining algorithms are used to detect abnormal patterns.
-![From the email alert, click the link to open the diagnostic report in Azure](./media/smart-detection-performance/03.png)
+<!-- convertborder later -->
* **When** shows the time the issue was detected.
* **What** describes the problem that was detected, and the characteristics of the set of events that we found, which displayed the problem behavior.
azure-monitor Container Insights Agent Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-agent-config.md
ConfigMap is a global list and there can be only one ConfigMap applied to the ag
| Key | Data type | Value | Description | |--|--|--|--|
-| `[agent_settings.proxy_config] ignore_proxy_settings =` | Boolean | True or false | Set this value to true to ignore proxy settings. On both AKS & Arc K8s environments, if your cluster is configured with forward proxy, then proxy settings are automatically applied and used for the agent. For certain configurations, such as, with AMPLS + Proxy, you may with for the proxy config to be ignored. . By default, this setting is set to `false`. |
+| `[agent_settings.proxy_config] ignore_proxy_settings =` | Boolean | True or false | Set this value to true to ignore proxy settings. On both AKS & Arc K8s environments, if your cluster is configured with forward proxy, then proxy settings are automatically applied and used for the agent. For certain configurations, such as with AMPLS + Proxy, you might want the proxy configuration to be ignored. By default, this setting is set to `false`. |
## Configure and deploy ConfigMaps
Output similar to the following example appears with the annotation schema-versi
schema-versions=v1
```
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### How do I enable log collection for containers in the kube-system namespace through Helm?
+
+Log collection from containers in the kube-system namespace is disabled by default. You can enable log collection by setting an environment variable on Azure Monitor Agent. See the [Container insights](https://aka.ms/azuremonitor-containers-helm-chart) GitHub page.
+
## Next steps

- Container insights doesn't include a predefined set of alerts. Review [Create performance alerts with Container insights](./container-insights-log-alerts.md) to learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures.
azure-monitor Container Insights Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-analyze.md
The icons in the status field indicate the online statuses of pods, as described
Azure Network Policy Manager includes informative Prometheus metrics that you can use to monitor and better understand your network configurations. It provides built-in visualizations in either the Azure portal or Grafana Labs. For more information, see [Monitor and visualize network configurations with Azure npm](../../virtual-network/kubernetes-network-policies.md#monitor-and-visualize-network-configurations-with-azure-npm).
+## Frequently asked questions
+
+This section provides answers to common questions.
## Next steps
azure-monitor Container Insights Livedata Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md
Suspend or pause autoscroll for only a short period of time while you're trouble
>[!IMPORTANT]
>No data is stored permanently during the operation of this feature. All information captured during the session is deleted when you close your browser or navigate away from it. Data only remains present for visualization inside the five-minute window of the metrics feature. Any metrics older than five minutes are also deleted. The Live Data buffer queries within reasonable memory usage limits.
+## Frequently asked questions
+
+This section provides answers to common questions.
## Next steps

- To continue learning how to use Azure Monitor and monitor other aspects of your AKS cluster, see [View Azure Kubernetes Service health](container-insights-analyze.md).
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
ContainerInventory
### Kubernetes events
+> [!NOTE]
+> By default, Normal event types aren't collected, so you won't see them when you query the KubeEvents table unless the *collect_all_kube_events* ConfigMap setting is enabled. If you need to collect Normal events, enable the *collect_all_kube_events* setting in the *container-azm-ms-agentconfig* ConfigMap. See [Configure agent data collection for Container insights](./container-insights-agent-config.md) for information on how to configure the ConfigMap.
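As a sketch, the relevant fragment of the *container-azm-ms-agentconfig* ConfigMap looks like the following. Key names are taken from the published template; verify them against the current template before applying.

```yaml
# Fragment of the container-azm-ms-agentconfig ConfigMap (sketch).
# enabled = true also collects Normal Kubernetes event types.
log-data-collection-settings: |-
  [log_collection_settings]
     [log_collection_settings.collect_all_kube_events]
        enabled = true
```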
```kusto
KubeEvents
| where not(isempty(Namespace))
```
The output shows results similar to the following example:
:::image type="content" source="./media/container-insights-log-query/log-query-example-kubeagent-events.png" alt-text="Screenshot that shows log query results of informational events from an agent." lightbox="media/container-insights-log-query/log-query-example-kubeagent-events.png":::
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Can I view metrics collected in Grafana?
+
+Container insights support viewing metrics stored in your Log Analytics workspace in Grafana dashboards. We've provided a template that you can download from the Grafana [dashboard repository](https://grafana.com/grafana/dashboards?dataSource=grafana-azure-monitor-datasource&category=docker). Use it to get started and as a reference to help you learn how to query data from your monitored clusters to visualize in custom Grafana dashboards.
+
+### Why are log lines larger than 16 KB split into multiple records in Log Analytics?
+
+The agent uses the [Docker JSON file logging driver](https://docs.docker.com/config/containers/logging/json-file/) to capture the stdout and stderr of containers. This logging driver splits log lines [larger than 16 KB](https://github.com/moby/moby/pull/22982) into multiple lines when they're copied from stdout or stderr to a file.
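A minimal sketch of this splitting behavior is shown below. This is not the Docker json-file driver itself, just an illustration of how one oversized line becomes multiple fixed-size records.

```python
# Sketch illustrating the 16 KB split behavior described above;
# not the Docker json-file driver implementation.
CHUNK = 16 * 1024  # 16 KB split size used by the logging driver


def split_log_line(line, chunk=CHUNK):
    """Split one log line into chunk-sized records, as the driver does."""
    return [line[i:i + chunk] for i in range(0, len(line), chunk)]


def reassemble(records):
    """Concatenating the records in order recovers the original line."""
    return "".join(records)


big_line = "x" * (40 * 1024)  # a 40 KB log line
records = split_log_line(big_line)
print(len(records))  # 3 records: 16 KB, 16 KB, and 8 KB
assert reassemble(records) == big_line
```

Concatenating the split records in order recovers the original line, which is also how such entries can be reassembled at query time.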
+
## Next steps

Container insights doesn't include a predefined set of alerts. To learn how to create recommended alerts for high CPU and memory utilization to support your DevOps or operational processes and procedures, see [Create performance alerts with Container insights](./container-insights-log-alerts.md).
azure-monitor Container Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-overview.md
Container insights supports clusters running the Linux and Windows Server 2019 o
>[!NOTE] > Container insights support for Windows Server 2022 operating system is in public preview.
+## Frequently asked questions
+This section provides answers to common questions.
+
+### Is there support for collecting Kubernetes audit logs for ARO clusters?
+
+No. Container insights doesn't support collection of Kubernetes audit logs.
+
+### Does Container Insights support pod sandboxing?
+
+Yes, Container Insights supports pod sandboxing through support for Kata Containers. For more details on pod sandboxing in AKS, [refer to the AKS docs](/azure/aks/use-pod-sandboxing).
## Next steps
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
The following table summarizes known errors you might encounter when you use Con
| Error message "No data for selected filters" | It might take some time to establish monitoring data flow for newly created clusters. Allow at least 10 to 15 minutes for data to appear for your cluster.<br><br>If data still doesn't show up, check if the Log Analytics workspace is configured for `disableLocalAuth = true`. If yes, update back to `disableLocalAuth = false`.<br><br>`az resource show --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]"`<br><br>`az resource update --ids "/subscriptions/[Your subscription ID]/resourcegroups/[Your resource group]/providers/microsoft.operationalinsights/workspaces/[Your workspace name]" --api-version "2021-06-01" --set properties.features.disableLocalAuth=False` | | Error message "Error retrieving data" | While an AKS cluster is setting up for health and performance monitoring, a connection is established between the cluster and a Log Analytics workspace. A Log Analytics workspace is used to store all monitoring data for your cluster. This error might occur when your Log Analytics workspace has been deleted. Check if the workspace was deleted. If it was, reenable monitoring of your cluster with Container insights. Then specify an existing workspace or create a new one. To reenable, [disable](container-insights-optout.md) monitoring for the cluster and [enable](container-insights-enable-new-cluster.md) Container insights again. | | "Error retrieving data" after adding Container insights through `az aks cli` | When you enable monitoring by using `az aks cli`, Container insights might not be properly deployed. Check whether the solution is deployed. To verify, go to your Log Analytics workspace and see if the solution is available by selecting **Legacy solutions** from the pane on the left side. To resolve this issue, redeploy the solution. 
Follow the instructions in [Enable Container insights](container-insights-onboard.md). |
+| Error message "Missing Subscription registration" | If you receive the error "Missing Subscription registration for Microsoft.OperationsManagement," you can resolve it by registering the resource provider **Microsoft.OperationsManagement** in the subscription where the workspace is defined. For the steps, see [Resolve errors for resource provider registration](../../azure-resource-manager/templates/error-register-resource-provider.md). |
+| Error message "The reply url specified in the request doesn't match the reply urls configured for the application: '\<application ID\>'." | You might see this error message when you enable live logs. For the solution, see [View container data in real time with Container insights](./container-insights-livedata-setup.md#configure-azure-ad-integrated-authentication). |
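For the "Missing Subscription registration" error above, the resource provider can be registered with a single Azure CLI command, for example:

```azurecli
# Register the Microsoft.OperationsManagement resource provider in the
# subscription that contains the Log Analytics workspace.
az provider register --namespace Microsoft.OperationsManagement

# Check the registration state; it shows "Registered" when complete.
az provider show --namespace Microsoft.OperationsManagement --query registrationState
```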
To help diagnose the problem, we've provided a [troubleshooting script](https://github.com/microsoft/Docker-Provider/tree/ci_prod/scripts/troubleshoot).
The solution to this issue is to clean up the existing resources of the Containe
If the preceding steps didn't resolve the installation of Azure Monitor Containers Extension issues, create a support ticket to send to Microsoft for further investigation. ## Duplicate alerts being received
-You may have enabled Prometheus alert rules without disabling Container insights recommended alerts. See [Migrate from Container insights recommended alerts to Prometheus recommended alert rules (preview)](container-insights-metric-alerts.md#migrate-from-metric-rules-to-prometheus-rules-preview).
+You might have enabled Prometheus alert rules without disabling Container insights recommended alerts. See [Migrate from Container insights recommended alerts to Prometheus recommended alert rules (preview)](container-insights-metric-alerts.md#migrate-from-metric-rules-to-prometheus-rules-preview).
+ ## I see info banner "You do not have the right cluster permissions which will restrict your access to Container Insights features. Please reach out to your cluster admin to get the right permission"
+
+Container Insights has historically allowed users to access the Azure portal experience based on the access permission of the Log Analytics workspace. It now checks cluster-level permission to provide access to the Azure portal experience. You might need your cluster admin to assign this permission.
+
+For basic read-only cluster level access, assign the **Monitoring Reader** role for the following types of clusters.
+
+- AKS without Kubernetes role-based access control (RBAC) authorization enabled
+- AKS enabled with Microsoft Entra SAML-based single sign-on
+- AKS enabled with Kubernetes RBAC authorization
+- AKS configured with the cluster role binding clusterMonitoringUser
+- [Azure Arc-enabled Kubernetes clusters](../../azure-arc/kubernetes/overview.md)
+
+See [Assign role permissions to a user or group](../../aks/control-kubeconfig-access.md#assign-role-permissions-to-a-user-or-group) for details on how to assign these roles for AKS and [Access and identity options for Azure Kubernetes Service (AKS)](../../aks/concepts-identity.md) to learn more about role assignments.
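As a sketch, assigning the **Monitoring Reader** role at cluster scope with the Azure CLI looks like the following. The assignee and resource IDs are placeholders.

```azurecli
# Assign Monitoring Reader on the AKS cluster resource (placeholders shown).
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Monitoring Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ContainerService/managedClusters/<cluster-name>"
```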
+
+## I don't see Image and Name property values populated when I query the ContainerLog table
+
+For agent version ciprod12042019 and later, by default these two properties aren't populated for every log line to minimize cost incurred on log data collected. There are two options to query the table that include these properties with their values:
+
+### Option 1
+
+Join other tables to include these property values in the results.
+
+Modify your queries to include `Image` and `ImageTag` properties from the `ContainerInventory` table by joining on the `ContainerID` property. You can include the `Name` property (as it previously appeared in the `ContainerLog` table) from the `KubePodInventory` table's `ContainerName` field by joining on the `ContainerID` property. We recommend this option.
+
+The following example is a sample detailed query that explains how to get these field values with joins.
+
+```kusto
+//Let's say we're querying an hour's worth of logs
+let startTime = ago(1h);
+let endTime = now();
+//Below gets the latest Image & ImageTag for every containerID, during the time window
+let ContainerInv = ContainerInventory | where TimeGenerated >= startTime and TimeGenerated < endTime | summarize arg_max(TimeGenerated, *) by ContainerID, Image, ImageTag | project-away TimeGenerated | project ContainerID1=ContainerID, Image1=Image ,ImageTag1=ImageTag;
+//Below gets the latest Name for every containerID, during the time window
+let KubePodInv = KubePodInventory | where ContainerID != "" | where TimeGenerated >= startTime | where TimeGenerated < endTime | summarize arg_max(TimeGenerated, *) by ContainerID2 = ContainerID, Name1=ContainerName | project ContainerID2 , Name1;
+//Now join the above 2 to get a 'jointed table' that has name, image & imagetag. Outer left is safer in case there are no kubepod records or if they're latent
+let ContainerData = ContainerInv | join kind=leftouter (KubePodInv) on $left.ContainerID1 == $right.ContainerID2;
+//Now join ContainerLog table with the 'jointed table' above and project-away redundant fields/columns and rename columns that were rewritten
+//Outer left is safer so you don't lose logs even if we can't find container metadata for loglines (due to latency, time skew between data types, etc.)
+ContainerLog
+| where TimeGenerated >= startTime and TimeGenerated < endTime
+| join kind= leftouter (
+ ContainerData
+) on $left.ContainerID == $right.ContainerID2 | project-away ContainerID1, ContainerID2, Name, Image, ImageTag | project-rename Name = Name1, Image=Image1, ImageTag=ImageTag1
+```
+
+### Option 2
+
+Reenable collection for these properties for every container log line.
+
+If the first option isn't convenient because of query changes involved, you can reenable collecting these fields. Enable the setting `log_collection_settings.enrich_container_logs` in the agent config map as described in the [data collection configuration settings](./container-insights-agent-config.md).
+
+> [!NOTE]
+> We don't recommend the second option for large clusters that have more than 50 nodes. It generates API server calls from every node in the cluster to perform this enrichment. This option also increases data size for every log line collected.
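If you do choose this option, the corresponding ConfigMap fragment is sketched below. Key names are taken from the published *container-azm-ms-agentconfig* template; verify them against the current template before applying.

```yaml
# Fragment of the container-azm-ms-agentconfig ConfigMap (sketch).
# enabled = true re-populates Image and Name on every log line, at the
# cost of extra API server calls and larger log records.
log-data-collection-settings: |-
  [log_collection_settings]
     [log_collection_settings.enrich_container_logs]
        enabled = true
```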
+
+## I can't upgrade a cluster after onboarding
+
+Here's the scenario: You enabled Container insights for an Azure Kubernetes Service cluster. Then you deleted the Log Analytics workspace where the cluster was sending its data. Now when you attempt to upgrade the cluster, it fails. To work around this issue, you must disable monitoring and then reenable it by referencing a different valid workspace in your subscription. When you try to perform the cluster upgrade again, it should process and complete successfully.
## Next steps
azure-monitor Prometheus Metrics Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-metrics-enable.md
This option adds Prometheus metrics to a cluster already enabled for Container i
See [Collect Prometheus metrics from AKS cluster (preview)](../essentials/prometheus-metrics-enable.md) for details on [verifying your deployment](../essentials/prometheus-metrics-enable.md#verify-deployment) and [limitations](../essentials/prometheus-metrics-enable.md#limitations-during-enablementdeployment).

#### From an existing cluster
-This options enables Prometheus, Grafana, and Container insights on a cluster.
+This option enables Prometheus, Grafana, and Container insights on a cluster.
1. Open the clusters menu in the Azure portal and select **Insights**.
3. Select **Configure monitoring**.
To uninstall the metrics add-on, see [Disable Prometheus metrics collection on a
The list of regions in which Azure Monitor Metrics and Azure Monitor Workspace are supported can be found [here](https://aka.ms/ama-metrics-supported-regions) under the Managed Prometheus tag.
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Does enabling managed service for Prometheus on my Azure Kubernetes Service cluster also enable Container insights?
+
+You have options for how you can collect your Prometheus metrics. If you use the Azure portal and enable Prometheus metrics collection and install the Azure Kubernetes Service (AKS) add-on from the Azure Monitor workspace UX, it won't enable Container insights and collection of log data. When you go to the Insights page on your AKS cluster, you're prompted to enable Container insights to collect log data.<br>
+
+If you use the Azure portal and enable Prometheus metrics collection and install the AKS add-on from the Insights page of your AKS cluster, it enables log collection into a Log Analytics workspace and Prometheus metrics collection into an Azure Monitor workspace.
## Next steps

- [See the default configuration for Prometheus metrics](./prometheus-metrics-scrape-default.md)
azure-monitor Analyze Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/analyze-metrics.md
This section provides answers to common questions.
[Platform metrics](./monitor-azure-resource.md#monitoring-data) are collected automatically for Azure resources. You must perform some configuration, though, to collect metrics from the guest OS of a virtual machine. For a Windows VM, install the diagnostic extension and configure the Azure Monitor sink as described in [Install and configure Azure Diagnostics extension for Windows (WAD)](../agents/diagnostics-extension-windows-install.md). For Linux, install the Telegraf agent as described in [Collect custom metrics for a Linux VM with the InfluxData Telegraf agent](./collect-custom-metrics-linux-telegraf.md).

## Next steps

- [Troubleshoot metrics explorer](metrics-troubleshoot.md)
azure-monitor Azure Monitor Workspace Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-manage.md
To set up an Azure monitor workspace as a data source for Grafana using a Resour
If your Grafana instance is self managed, see [Use Azure Monitor managed service for Prometheus as data source for self-managed Grafana using managed system identity](./prometheus-self-managed-grafana-azure-active-directory.md)
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### Can I use Azure Managed Grafana in a different region than my Azure Monitor workspace and managed service for Prometheus?
+
+Yes. When you use managed service for Prometheus, you can create your Azure Monitor workspace in any of the supported regions. Your Azure Kubernetes Service clusters can be in any region and send data into an Azure Monitor workspace in a different region. Azure Managed Grafana can also be in a different region than where you created your Azure Monitor workspace.
azure-monitor Azure Monitor Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-overview.md
Azure Monitor workspaces will eventually contain all metric data collected by Az
## Azure Monitor workspace architecture
-While a single Azure Monitor workspace may be sufficient for many use cases using Azure Monitor, many organizations create multiple workspaces to better meet their needs. This article presents a set of criteria for deciding whether to use a single Azure Monitor workspace, multiple Azure Monitor workspaces, and the configuration and placement of those workspaces.
+While a single Azure Monitor workspace can be sufficient for many use cases using Azure Monitor, many organizations create multiple workspaces to better meet their needs. This article presents a set of criteria for deciding whether to use a single Azure Monitor workspace, multiple Azure Monitor workspaces, and the configuration and placement of those workspaces.
### Design criteria
The following table presents criteria to consider when designing an Azure Monito
|||
|Segregate by logical boundaries |Create separate Azure Monitor workspaces for operational data based on logical boundaries, such as by a role, application type, type of metric, etc.|
|Azure tenants | For multiple Azure tenants, create an Azure Monitor workspace in each tenant. Data sources can only send monitoring data to an Azure Monitor workspace in the same Azure tenant. |
-|Azure regions |Each Azure Monitor workspace resides in a particular Azure region. Regulatory or compliance requirements may dictate the storage of data in particular locations. |
+|Azure regions |Each Azure Monitor workspace resides in a particular Azure region. Regulatory or compliance requirements might dictate the storage of data in particular locations. |
|Data ownership |Create separate Azure Monitor workspaces to define data ownership, such as by subsidiaries or affiliated companies.|

### Considerations when creating an Azure Monitor workspace
When an Azure Monitor workspace reaches 80% of its maximum capacity or is foreca
In certain circumstances, splitting an Azure Monitor workspace into multiple workspaces can be necessary. For example:

* Monitoring data in sovereign clouds: Create an Azure Monitor workspace in each sovereign cloud.
-* Compliance or regulatory requirements that mandate storage of data in specific regions: Create an Azure Monitor workspace per region as per requirements. There may be a need to manage the scale of metrics for large services or financial institutions with regional accounts.
+* Compliance or regulatory requirements that mandate storage of data in specific regions: Create an Azure Monitor workspace per region as per requirements. There might be a need to manage the scale of metrics for large services or financial institutions with regional accounts.
* Separating metrics in test, pre-production, and production environments: Create an Azure Monitor workspace per environment.

>[!Note]
Data stored in the Azure Monitor Workspace is handled in accordance with all sta
- Data is retained for 18 months.
- For details about the Azure Monitor managed service for Prometheus' support of PII/EUII data, see [here](./prometheus-metrics-overview.md).
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### What's the difference between an Azure Monitor workspace and a Log Analytics workspace?
+
+An Azure Monitor workspace is a unique environment for data collected by Azure Monitor. Each workspace has its own data repository, configuration, and permissions. Azure Monitor workspaces will eventually contain all metrics collected by Azure Monitor, including native metrics. Currently, the only data hosted by an Azure Monitor workspace is Prometheus metrics.
+
+### Can I delete Prometheus metrics from an Azure Monitor workspace?
+
+Data is removed from the Azure Monitor workspace according to its data retention period, which is 18 months.
## Next steps

- Learn more about the [Azure Monitor data platform](../data-platform.md).
azure-monitor Prometheus Api Promql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-api-promql.md
For more information on Prometheus metrics limits, see [Prometheus metrics](../.
[!INCLUDE [prometheus-case-sensitivity.md](..//includes/prometheus-case-sensitivity.md)]
+## Frequently asked questions
+
+This section provides answers to common questions.
## Next steps

[Azure Monitor workspace overview](./azure-monitor-workspace-overview.md)
azure-monitor Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-grafana.md
Versions 9.x and greater of Grafana support Azure Authentication, but it's not e
:::image type="content" source="media/prometheus-grafana/prometheus-data-source.png" alt-text="Screenshot of configuration for Prometheus data source." lightbox="media/prometheus-grafana/prometheus-data-source.png":::
+## Frequently asked questions
+This section provides answers to common questions.
## Next steps

- [Configure self-managed Grafana to use Azure-managed Prometheus with Microsoft Entra ID](./prometheus-self-managed-grafana-azure-active-directory.md).
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for
- To monitor Windows nodes & pods in your cluster(s), follow the steps outlined [here](./prometheus-metrics-enable.md#enable-windows-metrics-collection).
- Azure Managed Grafana isn't currently available in the Azure US Government cloud.
- Usage metrics (metrics under the `Metrics` menu for the Azure Monitor workspace) - Ingestion quota limits and current usage for any Azure Monitor workspace aren't available yet in the US Government cloud.
+- During node updates, you might experience gaps lasting 1 to 2 minutes in some metric collections from our cluster level collector. This gap is due to a regular action from Azure Kubernetes Service to update the nodes in your cluster. This behavior is expected and occurs due to the node it runs on being updated. None of our recommended alert rules are affected by this behavior.
## Prometheus references

Following are links to Prometheus documentation.
Following are links to Prometheus documentation.
- [Writing Exporters](https://aka.ms/azureprometheus-promio-exporters)
+## Frequently asked questions
+
+This section provides answers to common questions.
+
+### How do I retrieve Prometheus metrics?
+
+All data is retrieved from an Azure Monitor workspace by using queries that are written in Prometheus Query Language (PromQL). You can write your own queries, use queries from the open source community, and use Grafana dashboards that include PromQL queries. See the [Prometheus project](https://prometheus.io/docs/prometheus/latest/querying/basics/).
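As a brief illustration, here is the kind of PromQL query you might run against an Azure Monitor workspace; the metric name follows the node exporter's conventions and is illustrative only, not a metric guaranteed to exist in your workspace:

```promql
# Per-node rate of non-idle CPU time over the last 5 minutes
sum by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))
```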
+### When I use managed service for Prometheus, can I store data for more than one cluster in an Azure Monitor workspace?
+
+Yes. Managed service for Prometheus is intended to enable scenarios where you can store data from several Azure Kubernetes Service clusters in a single Azure Monitor workspace. See [Azure Monitor workspace overview](./azure-monitor-workspace-overview.md?#azure-monitor-workspace-architecture).
+
+### What types of resources can send Prometheus metrics to managed service for Prometheus?
+
+Our agent can be used on Azure Kubernetes Service clusters and Azure Arc-enabled Kubernetes clusters. It's installed as a managed add-on for AKS clusters and as an extension for Azure Arc-enabled Kubernetes clusters, and you can configure it to collect the data you want. You can also configure remote write on Kubernetes clusters running in Azure, another cloud, or on-premises by following our instructions for enabling remote write.
+
+If you use the Azure portal to enable Prometheus metrics collection and install the AKS add-on or Azure Arc-enabled Kubernetes extension from the Insights page of your cluster, it enables logs collection into Log Analytics and Prometheus metrics collection into managed service for Prometheus. For more information, see [Data sources](#data-sources).
## Next steps
- [Enable Azure Monitor managed service for Prometheus](prometheus-metrics-enable.md).
azure-monitor Prometheus Self Managed Grafana Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-self-managed-grafana-azure-active-directory.md
Grafana now supports connecting to Azure Monitor managed Prometheus using the [P
1. Select **Save & test** :::image type="content" source="./media/prometheus-self-managed-grafana-azure-active-directory/configure-grafana.png" alt-text="A screenshot showing the Grafana settings page for adding a data source.":::
+## Frequently asked questions
+
+This section provides answers to common questions.
## Next steps
- [Configure Grafana using managed system identity](./prometheus-grafana.md).
- [Collect Prometheus metrics for your AKS cluster](../essentials/prometheus-metrics-enable.md).
azure-monitor Prometheus Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-workbooks.md
If you receive a message indicating that "You currently do not have any Promethe
If your workbook query does not return data with a message "You do not have query access": - Check that you have sufficient permissions to perform **microsoft.monitor/accounts/read** assigned through Access Control (IAM) in your Azure Monitor workspace.-- Confirm if your Networking settings support query access. You may need to enable private access through your private endpoint or change settings to allow public access.-- If you have ad block enabled in your browser, you may need to pause or disable and refresh the workbook in order to view data.
+- Confirm if your Networking settings support query access. You might need to enable private access through your private endpoint or change settings to allow public access.
+- If you have an ad blocker enabled in your browser, you might need to pause or disable it and refresh the workbook in order to view data.
+## Frequently asked questions
+
+This section provides answers to common questions.
## Next steps
* [Collect Prometheus metrics from AKS cluster](./prometheus-metrics-enable.md)
* [Azure Monitor workspace](./azure-monitor-workspace-overview.md)
azure-monitor Log Analytics Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-tutorial.md
Previously updated : 06/22/2022 Last updated : 10/31/2023
azure-monitor Move Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/move-workspace.md
description: Learn how to move your Log Analytics workspace to another subscript
Previously updated : 07/06/2023 Last updated : 10/30/2023
In this article, you'll learn the steps to move a Log Analytics workspace to ano
## Prerequisites - The subscription or resource group where you want to move your Log Analytics workspace must be located in the same region as the Log Analytics workspace you're moving.-- The move operation requires that no services can be linked to the workspace. Prior to the move, delete solutions that rely on linked services, including an Azure Automation account. These solutions must be removed before you can unlink your Automation account. Data collection for the solutions will stop and their tables will be removed from the UI, but data will remain in the workspace per the table retention period. When you add solutions after the move, ingestion is restored and tables become visible with data. Linked services include:
+- The move operation requires that no services can be linked to the workspace. Prior to the move, delete solutions that rely on linked services, including an Azure Automation account. These solutions must be removed before you can unlink your Automation account. Data collection for the solutions will stop and their tables will be removed from the UI, but data remains in the workspace for the retention period defined for each table. When you add solutions back after the move, ingestion is restored and tables become visible with data. Linked services include:
- Update management - Change tracking - Start/Stop VMs during off-hours - Microsoft Defender for Cloud - Connected [Log Analytics agents](../agents/log-analytics-agent.md) and [Azure Monitor Agent](../agents/azure-monitor-agent-overview.md) remain connected to the workspace after the move with no interruption to ingestion.-- Microsoft Sentinel can't be deployed on the Log Analytics workspace.
+- Alerts should be re-created after the move, since permissions for alerts are based on the workspace resource ID, which changes with the move. Alerts created after June 1, 2019, or in workspaces that were [upgraded from the legacy Log Analytics Alert API to the scheduledQueryRules API](../alerts/alerts-log-api-switch.md) can be exported in templates and deployed after the move. You can [check if the scheduledQueryRules API is used for alerts in your workspace](../alerts/alerts-log-api-switch.md#check-switching-status-of-workspace). Alternatively, you can configure alerts manually in the target workspace.
+- Update resource paths after a workspace move for Azure or external resources that point to the workspace. For example: [Azure Monitor alert rules](../alerts/alerts-resource-move.md), third-party applications, and custom scripting.
## Permissions required
In this article, you'll learn the steps to move a Log Analytics workspace to ano
| Unlink the Automation account | `Microsoft.OperationalInsights/workspaces/linkedServices/delete` permissions on the linked Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example. | | Move a Log Analytics workspace. | `Microsoft.OperationalInsights/workspaces/delete` and `Microsoft.OperationalInsights/workspaces/write` permissions on the Log Analytics workspace, as provided by the [Log Analytics Contributor built-in role](./manage-access.md#log-analytics-contributor), for example. |
-## Workspace move considerations
+## Considerations and limits
Consider these points before you move a Log Analytics workspace: -- Managed solutions that are installed in the workspace will be moved in this operation.
+- The move operation can take Azure Resource Manager a few hours to complete. Solutions might be unresponsive during the operation.
+- Managed solutions that are installed in the workspace are moved as well.
+- Managed solutions are workspace objects and can't be moved independently.
- Workspace keys (both primary and secondary) are regenerated with a workspace move operation. If you keep a copy of your workspace keys in Azure Key Vault, update them with the new keys generated after the workspace is moved. >[!IMPORTANT] > **Microsoft Sentinel customers** > - Currently, after Microsoft Sentinel is deployed on a workspace, moving the workspace to another resource group or subscription isn't supported. > - If you've already moved the workspace, disable all active rules under **Analytics** and reenable them after five minutes. This solution should be effective in most cases, although it's unsupported and undertaken at your own risk.
-> - It could take Azure Resource Manager a few hours to complete. Solutions might be unresponsive during the operation.
->
-> **Re-create alerts:** All alerts must be re-created because the permissions are based on the workspace resource ID, which changes during a workspace move or resource name change. Alerts in workspaces created after June 1, 2019, or in workspaces that were [upgraded from the legacy Log Analytics Alert API to the scheduledQueryRules API](../alerts/alerts-log-api-switch.md) can be exported in templates and deployed after the move. You can [check if the scheduledQueryRules API is used for alerts in your workspace](../alerts/alerts-log-api-switch.md#check-switching-status-of-workspace). Alternatively, you can configure alerts manually in the target workspace.
->
-> **Update resource paths:** After a workspace move, any Azure or external resources that point to the workspace must be reviewed and updated to point to the new resource target path.
->
-> Examples:
-> - [Azure Monitor alert rules](../alerts/alerts-resource-move.md)
-> - Third-party applications
-> - Custom scripting
->
-
-<a name='verify-the-azure-active-directory-tenant'></a>
## Verify the Microsoft Entra tenant
-The workspace source and destination subscriptions must exist within the same Microsoft Entra tenant. Use Azure PowerShell to verify that both subscriptions have the same tenant ID.
+The workspace source and destination subscriptions must exist within the same Microsoft Entra tenant. Use Azure PowerShell to verify that both subscriptions have the same tenant ID.
### [Portal](#tab/azure-portal)
azure-monitor Vminsights Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-performance.md
Limitations in performance collection with VM insights:
- Available memory isn't available in all Linux versions, including Red Hat Linux (RHEL) 6 and CentOS 6. It will be available in Linux versions that use [kernel version 3.14](http://www.man7.org/linux/man-pages/man1/free.1.html) or higher. It might be available in some kernel versions between 3.0 and 3.14. - Metrics are only available for data disks on Linux virtual machines that use XFS filesystem or EXT filesystem family (EXT2, EXT3, EXT4).
+- Collecting performance metrics from network shared drives is unsupported.
## Multi-VM perspective from Azure Monitor
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
na Previously updated : 10/20/2023 Last updated : 11/01/2023
Cool access offers [performance metrics](azure-netapp-files-metrics.md#cool-acce
You can enable tiering at the volume level for a newly created capacity pool that uses the Standard service level. How you're billed is based on the following factors: * The capacity in the Standard service level
+* Unallocated capacity within the capacity pool
* The capacity in the cool tier (by enabling tiering for volumes in a Standard capacity pool) * Network transfer between the hot tier and the cool tier at the rate that is determined by the markup on top of the transaction cost (`GET` and `PUT` requests) on blob storage and private link transfer in either direction between the hot tiers.
-Billing calculation for a Standard capacity pool is at the hot-tier rate for the data that isn't tiered to the cool tier. When you enable tiering for volumes, the capacity in the cool tier will be at the rate of the cool tier, and the remaining capacity will be at the rate of the hot tier. The rate of the cool tier is lower than the hot tier's rate.
+Billing calculation for a Standard capacity pool is at the hot-tier rate for the data that isn't tiered to the cool tier; this includes unallocated capacity within the capacity pool. When you enable tiering for volumes, the capacity in the cool tier will be at the rate of the cool tier, and the remaining capacity will be at the rate of the hot tier. The rate of the cool tier is lower than the hot tier's rate.
### Examples of billing structure
When you create volumes in the capacity pool and start tiering data to the cool
* Assume that you create three volumes with 1 TiB each. You don't enable tiering at the volume level. The billing calculation is as follows:
- * 4-TiB capacity at the hot tier rate
+ * 3 TiB of allocated capacity at the hot tier rate
+ * 1 TiB of unallocated capacity at the hot tier rate
* Zero capacity at the cool tier rate * Zero network transfer between the hot tier and the cool tier at the rate determined by the markup on top of the transaction cost (`GET`, `PUT`) on blob storage and private link transfer in either direction between the hot tiers.
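A minimal sketch of this no-tiering billing case, assuming a 4-TiB pool (3 TiB of volumes plus 1 TiB of unallocated capacity) and the illustrative hot tier rate used in the cost examples later in this article:

```python
# Hedged sketch: with no volumes tiered, every GiB in the pool — allocated
# or not — bills at the hot tier rate. The rate is this article's example
# rate, not actual Azure pricing.
HOT_RATE = 0.000202  # $/GiB/hr
POOL_TIB = 4         # 3 TiB of volumes + 1 TiB unallocated

monthly_cost = POOL_TIB * 1024 * 730 * HOT_RATE  # 730 billing hours/month
print(f"${monthly_cost:.2f}")  # ~ $604.00 per month, all at the hot tier rate
```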
This section shows you examples of storage and network transfer costs with varyi
In these examples, assume: * The hot tier storage cost is $0.000202/GiB/hr. The cool tier storage cost is $0.000082/GiB/hr. * Network transfer cost (including read or write activities from the cool tier) is $0.020000/GiB.
+* You have a 5-TiB capacity pool with cool access enabled.
+* You have 1 TiB of unallocated capacity within the capacity pool.
* You have a 4-TiB volume enabled for cool access. * 3 TiB of the 4 TiB is moved to the cool tier after the coolness period. * You read or write 20% of data each month from the cool tier.
In these examples, assume:
> * The rates considered in the examples are for an example region and may be different for your intended region of deployment. > * If data is read from or written to the cool tier, it will cause the percentage of data distribution in the hot tier and cool tier to change. The calculations in this article demonstrate initial percentage distribution in the hot and cool tiers, and not after the 20% of data has been moved to or from the cool tier.
+> [!NOTE]
+> The following examples include 1 TiB of unallocated space in the capacity pool to show how unallocated space is charged when cool access is enabled. To maximize your savings, the capacity pool size should be reduced to eliminate unallocated pool capacity.
+ #### Example 1: Coolness period is set to 7 days Your storage cost for the *first month* would be: | Cost | Description | Calculation | ||||
+| Unallocated storage cost for Day 1~30 (30 days) | 1 TiB of unallocated storage | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` |
| Storage cost for Day 1~7 (7 days) | 4 TiB of active data (hot tier) | `4 TiB x 1024 x 7 days x 730/30 hrs. x $0.000202/GiB/hr. = $140.93` | | Storage cost for Day 8~30 (23 days) | 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 23 days x 730/30 hrs. x $0.000202/GiB/hr. = $115.77` <br><br> `3 TiB x 1024 x 23 days x 730/30 hrs. x $0.000082/GiB/hr. = $140.98` | | Network transfer cost | Moving inactive data to cool tier <br><br> 20% of data read/write from cool tier | `3 TiB x 1024 x $0.020000/GiB = $61.44` <br><br> `3 TiB x 1024 x 20% x $0.020000/GiB = $12.29` |
-| **First month total** || **`$471.41`** |
+| **First month total** || **`$622.41`** |
Your monthly storage cost for the *second and subsequent months* would be: | Cost | Description | Calculation | ||||
-| Storage cost for 30 days | 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` <br><br> `3 TiB x 1024 x 30 days x 730/30 hrs. x $0.000082/GiB/hr. = $183.89` |
+| Storage cost for 30 days | 1 TiB of unallocated storage <br><br> 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` <br><br> `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` <br><br> `3 TiB x 1024 x 30 days x 730/30 hrs. x $0.000082/GiB/hr. = $183.89` |
| Network transfer cost | 20% of data read/write from cool tier | `3 TiB x 1024 x 20% x $0.020000/GiB = $12.29` |
-| **Second and subsequent monthly total** || **`$347.18`** |
+| **Second and subsequent monthly total** || **`$498.18`** |
Your first six-month savings:
-* Cost without cool access: `4 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 6 months = $3,623.98`
+* Cost without cool access: `5 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 6 months = $4,529.97`
* Cost with cool access:
- `First month + Second month + … + Sixth month = $471.41 + (5x $347.18) = $2,207.29`
-* Savings using cool access: **`39.09%`**
+ `First month + Second month + … + Sixth month = $622.41 + (5x $498.18) = $3,113.31`
+* Savings using cool access: **`31.27%`**
Your first twelve-month savings:
-* Cost without cool access: `4 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 11 months = $7,247.95`
-* Cost with cool access: `First month + Second month + … + twelfth month = $471.41 + (11 x $347.18)= $4,290.36`
-* Savings using cool access: **`40.81%`**
+* Cost without cool access: `5 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 12 months = $9,059.94`
+* Cost with cool access: `First month + Second month + … + twelfth month = $622.41 + (11 x $498.18) = $6,102.39`
+* Savings using cool access: **`32.64%`**
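The Example 1 arithmetic can be reproduced with a short sketch; the rates and sizes are this article's example values, with each line item rounded to cents as in the tables above:

```python
# Hedged sketch reproducing Example 1 (7-day coolness period).
HOT = 0.000202   # $/GiB/hr, hot tier (example rate)
COOL = 0.000082  # $/GiB/hr, cool tier (example rate)
XFER = 0.020000  # $/GiB, network transfer
HRS_PER_DAY = 730 / 30  # billing hours per day

def gib(tib): return tib * 1024

def storage(tib, days, rate):
    """One storage line item, rounded to cents as in the article's tables."""
    return round(gib(tib) * days * HRS_PER_DAY * rate, 2)

first_month = (
    storage(1, 30, HOT)       # 1 TiB unallocated pool capacity
    + storage(4, 7, HOT)      # days 1-7: all 4 TiB hot
    + storage(1, 23, HOT)     # days 8-30: 1 TiB stays hot
    + storage(3, 23, COOL)    # days 8-30: 3 TiB cooled
    + round(gib(3) * XFER, 2)         # move 3 TiB to the cool tier
    + round(gib(3) * 0.20 * XFER, 2)  # 20% read/write from the cool tier
)

monthly = (  # second and subsequent months
    storage(1, 30, HOT) + storage(1, 30, HOT) + storage(3, 30, COOL)
    + round(gib(3) * 0.20 * XFER, 2)
)

six_months_with = first_month + 5 * monthly
six_months_without = round(gib(5) * HOT * 730 * 6, 2)
savings = round((1 - six_months_with / six_months_without) * 100, 2)
print(first_month, monthly, savings)  # 622.41, 498.18, 31.27% saved
```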
#### Example 2: Coolness period is set to 35 days
-All 4 TiB is active data (in hot tier) for the first month. Your storage cost for the *first month* would be:
-`4 TiB x 1024 x 730hr. x $0.000202/GiB/hr. = $604.00`
+All 5 TiB is active data (in hot tier) for the first month. Your storage cost for the *first month* would be:
+`5 TiB x 1024 x 730hr. x $0.000202/GiB/hr. = $755.00`
Your storage cost for the *second month* would be: | Cost | Description | Calculation | ||||
+| Unallocated storage cost for Day 1~30 (30 days) | 1 TiB of unallocated storage | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` |
| Storage cost for Day 1~5 (5 days) | 4 TiB of active data (hot tier) | `4 TiB x 1024 x 5 days x 730/30 hrs. x $0.000202/GiB/hr. = $100.67` | | Storage cost for Day 6~30 (25 days) | 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 25 days x 730/30 hrs. x $0.000202/GiB/hr. = $125.83` <br><br> `3 TiB x 1024 x 25 days x 730/30 hrs. x $0.000082/GiB/hr. = $153.24` | | Network transfer cost | Moving inactive data to cool tier <br><br> 20% of data read/write from cool tier | `3 TiB x 1024 x $0.020000 /GiB = $61.44` <br><br> `3 TiB x 1024 x 20% x $0.020000/GiB = $12.29` |
-| **Second month total** || **`$453.47`** |
+| **Second month total** || **`$604.47`** |
Your monthly storage cost for *third and subsequent months* would be: | Cost | Description | Calculation | ||||
-| Storage cost for 30 days | 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` <br><br> `3 TiB x 1024 x 30 days x 730/30 hrs. x $0.000082/GiB/hr. = $183.89` |
+| Storage cost for 30 days | 1 TiB of unallocated storage <br><br> 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00`<br><br> `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` <br><br> `3 TiB x 1024 x 30 days x 730/30 hrs. x $0.000082/GiB/hr. = $183.89` |
| Network transfer cost | 20% of data read/write from cool tier | `3 TiB x 1024 x 20% x $0.020000/GiB = $12.29` |
-| **Third and subsequent monthly total** || **`$347.18`** |
+| **Third and subsequent monthly total** || **`$498.18`** |
Your first six-month savings:
-* Cost without cool access: `4 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 6 months = $3,623.98`
+* Cost without cool access: `5 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 6 months = $4,529.97`
* Cost with cool access:
- `First month + Second month + … + Sixth month = $604.00 + $453.47 + (4 x $347.18) = $2,446.17`
-* Savings using cool access: **`32.50%`**
+ `First month + Second month + … + Sixth month = $755.00 + $604.47 + (4 x $498.18) = $3,352.19`
+* Savings using cool access: **`25.99%`**
Your first twelve-month savings:
-* Cost without cool access: `4 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 11 months = $7,247.95`
-* Cost with cool access: `First month + Second month + … + twelfth month = $604.00 + $453.47 + (10 x $347.18) = $4,529.23`
-* Savings using cool access: **`37.51%`**
+* Cost without cool access: `5 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 12 months = $9,059.94`
+* Cost with cool access: `First month + Second month + … + twelfth month = $755.00 + $604.47 + (10 x $498.18) = $6,341.27`
+* Savings using cool access: **`30.00%`**
#### Example 3: Coolness period is set to 63 days
-All 4 TiB is active data (in hot tier) for the first two months. Your monthly storage cost for the *first and second months* would be: `4 TiB x 1024 x 730hr. x $0.000202/GiB/hr. = $604.00`
+All 5 TiB is active data (in hot tier) for the first two months. Your monthly storage cost for the *first and second months* would be: `5 TiB x 1024 x 730hr. x $0.000202/GiB/hr. = $755.00`
Your storage cost for the *third month* would be: | Cost | Description | Calculation | ||||
+| Unallocated storage cost for Day 1~30 (30 days) | 1 TiB of unallocated storage | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` |
| Storage cost for Day 1~3 (3 days) | 4 TiB of active data (hot tier) | `4 TiB x 1024 x 3 days x 730/30 hrs. x $0.000202/GiB/hr. = $60.40` | | Storage cost for Day 4~30 (27 days) | 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 27 days x 730/30 hrs. x $0.000202/GiB/hr. = $135.90` <br><br> `3 TiB x 1024 x 27 days x 730/30 hrs. x $0.000082/GiB/hr. = $165.50` | | Network transfer cost | Moving inactive data to cool tier <br><br> 20% of data read/write from cool tier | `3 TiB x 1024 x $0.020000/GiB = $61.44` <br><br> `3 TiB x 1024 x 20% x $0.020000/GiB = $12.29` |
-| **Third month total** || **`$435.53`** |
+| **Third month total** || **`$586.52`** |
Your monthly storage cost for the *fourth and subsequent months* would be: | Cost | Description | Calculation | ||||
-| Storage cost for 30 days | 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` <br><br> `3 TiB x 1024 x 30 days x 730/30 hrs. x $0.000082/GiB/hr. = $183.89` |
+| Storage cost for 30 days | 1 TiB of unallocated storage <br><br> 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` <br><br> `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` <br><br> `3 TiB x 1024 x 30 days x 730/30 hrs. x $0.000082/GiB/hr. = $183.89` |
| Network transfer cost | 20% of data read/write from cool tier | `3 TiB x 1024 x 20% x $0.020000/GiB = $12.29` |
-| **Fourth and subsequent monthly total** || **`$347.18`** |
+| **Fourth and subsequent monthly total** || **`$498.18`** |
Your first six-month savings:
-* Cost without cool access: `4 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 6 months = $3,623.98`
+* Cost without cool access: `5 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 6 months = $4,529.97`
* Cost with cool access:
- `First month + Second month + … + Sixth month = (2 x $604.00) + $435.53 + (3 x $347.18) = $2,685.05`
-* Savings using cool access: **`25.91%`**
+ `First month + Second month + … + Sixth month = (2 x $755.00) + $586.52 + (3 x $498.18) = $3,591.06`
+* Savings using cool access: **`20.73%`**
Your first twelve-month savings:
-* Cost without cool access: `4 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 11 months = $7,247.95`
-* Cost with cool access: `First month + Second month + … + twelfth month = (2 x $604.00) + $435.53 + (9 x $347.18) = $4,768.11`
-* Savings using cool access: **`34.21%`**
+* Cost without cool access: `5 TiB x 1024 x $0.000202/GiB/hr. x 730 hrs. x 12 months = $9,059.94`
+* Cost with cool access: `First month + Second month + … + twelfth month = (2 x $755.00) + $586.52 + (9 x $498.18) = $6,580.14`
+* Savings using cool access: **`27.37%`**
> [!TIP]
azure-netapp-files Performance Linux Concurrency Session Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/performance-linux-concurrency-session-slots.md
This article helps you understand concurrency best practices for session slots a
NFSv3 does not have a mechanism to negotiate concurrency between the client and the server. The client and the server each defines its limit without consulting the other. For the best performance, you should line up the maximum number of client-side `sunrpc` slot table entries with that supported without pushback on the server. When a client overwhelms the server network stack's ability to process a workload, the server responds by decreasing the window size for the connection, which is not an ideal performance scenario.
-By default, modern Linux kernels define the per-connection `sunrpc` slot table entry size `sunrpc.max_tcp_slot_table_entries` as supporting 65,536 outstanding operations, as shown in the following table.
+By default, modern Linux kernels define the per-connection `sunrpc` slot table entry size `sunrpc.tcp_max_slot_table_entries` as supporting 65,536 outstanding operations, as shown in the following table.
| Azure NetApp Files NFSv3 server <br> Maximum execution contexts per connection | Linux client <br> Default maximum `sunrpc` slot table entries per connection | |-|-|
A concurrency level as low as 155 is sufficient to achieve 155,000 Oracle DB NFS
See [Oracle database performance on Azure NetApp Files single volumes](performance-oracle-single-volumes.md) for details.
-The `sunrpc.max_tcp_slot_table_entries` tunable is a connection-level tuning parameter. *As a best practice, set this value to 128 or less per connection, not surpassing 10,000 slots environment wide.*
+The `sunrpc.tcp_max_slot_table_entries` tunable is a connection-level tuning parameter. *As a best practice, set this value to 128 or less per connection, not surpassing 10,000 slots environment wide.*
### Examples of slot count based on concurrency recommendation Examples in this section demonstrate the slot count based on concurrency recommendation.
-#### Example 1 – One NFS client, 65,536 `sunrpc.max_tcp_slot_table_entries`, and no `nconnect` for a maximum concurrency of 128 based on the server-side limit of 128
+#### Example 1 – One NFS client, 65,536 `sunrpc.tcp_max_slot_table_entries`, and no `nconnect` for a maximum concurrency of 128 based on the server-side limit of 128
-Example 1 is based on a single client workload with the default `sunrpc.max_tcp_slot_table_entry` value of 65,536 and a single network connection, that is, no `nconnect`. In this case, a concurrency of 128 is achievable.
+Example 1 is based on a single client workload with the default `sunrpc.tcp_max_slot_table_entry` value of 65,536 and a single network connection, that is, no `nconnect`. In this case, a concurrency of 128 is achievable.
* `NFS_Server=10.10.10.10, NFS_Client=10.10.10.11` * `Connection (10.10.10.10:2049, 10.10.10.11:6543,TCP`) * The client in theory can issue no more than 65,536 requests in flight to the server per connection. * The server will accept no more than 128 requests in flight from this single connection.
-#### Example 2 – One NFS client, 128 `sunrpc.max_tcp_slot_table_entries`, and no `nconnect` for a maximum concurrency of 128
+#### Example 2 – One NFS client, 128 `sunrpc.tcp_max_slot_table_entries`, and no `nconnect` for a maximum concurrency of 128
-Example 2 is based on a single client workload with a `sunrpc.max_tcp_slot_table_entry` value of 128, but without the `nconnect` mount option. With this setting, a concurrency of 128 is achievable from a single network connection.
+Example 2 is based on a single client workload with a `sunrpc.tcp_max_slot_table_entry` value of 128, but without the `nconnect` mount option. With this setting, a concurrency of 128 is achievable from a single network connection.
* `NFS_Server=10.10.10.10, NFS_Client=10.10.10.11` * `Connection (10.10.10.10:2049, 10.10.10.11:6543,TCP) ` * The client will issue no more than 128 requests in flight to the server per connection. * The server will accept no more than 128 requests in flight from this single connection.
-#### Example 3 – One NFS client, 100 `sunrpc.max_tcp_slot_table_entries`, and `nconnect=8` for a maximum concurrency of 800
+#### Example 3 – One NFS client, 100 `sunrpc.tcp_max_slot_table_entries`, and `nconnect=8` for a maximum concurrency of 800
-Example 3 is based on a single client workload, but with a lower `sunrpc.max_tcp_slot_table_entry` value of 100. This time, the `nconnect=8` mount option used spreading the workload across 8 connection. With this setting, a concurrency of 800 is achievable spread across the 8 connections. This amount is the concurrency needed to achieve 400,000 IOPS.
+Example 3 is based on a single client workload, but with a lower `sunrpc.tcp_max_slot_table_entry` value of 100. This time, the `nconnect=8` mount option is used, spreading the workload across 8 connections. With this setting, a concurrency of 800 is achievable spread across the 8 connections. This amount is the concurrency needed to achieve 400,000 IOPS.
* `NFS_Server=10.10.10.10, NFS_Client=10.10.10.11` * `Connection 1 (10.10.10.10:2049, 10.10.10.11:6543,TCP), Connection 2 (10.10.10.10:2049, 10.10.10.11:6454,TCP)… Connection 8 (10.10.10.10:2049, 10.10.10.11:7321,TCP)`
Example 3 is based on a single client workload, but with a lower `sunrpc.max_tc
* The client will issue no more than 100 requests in flight to the server from this connection. * The server is expected to accept no more than 128 requests in flight from the client for this connection.
-#### Example 4 – 250 NFS clients, 8 `sunrpc.max_tcp_slot_table_entries`, and no `nconnect` for a maximum concurrency of 2000
+#### Example 4 – 250 NFS clients, 8 `sunrpc.tcp_max_slot_table_entries`, and no `nconnect` for a maximum concurrency of 2000
-Example 4 uses the reduced per-client `sunrpc.max_tcp_slot_table_entry` value of 8 for a 250 machine-count EDA environment. In this scenario, a concurrency of 2000 is reached environment wide, a value more than sufficient to drive 4,000 MiB/s of a backend EDA workload.
+Example 4 uses the reduced per-client `sunrpc.tcp_max_slot_table_entry` value of 8 for a 250 machine-count EDA environment. In this scenario, a concurrency of 2000 is reached environment wide, a value more than sufficient to drive 4,000 MiB/s of a backend EDA workload.
* `NFS_Server=10.10.10.10, NFS_Client1=10.10.10.11` * `Connection (10.10.10.10:2049, 10.10.10.11:6543,TCP)`
Example 4 uses the reduced per-client `sunrpc.max_tcp_slot_table_entry` value of
* The client will issue no more than 8 requests in flight to the server per connection. * The server will accept no more than 128 requests in flight from this single connection.
-When using NFSv3, *you should collectively keep the storage endpoint slot count to 10,000 or less*. It is best to set the per-connection value for `sunrpc.max_tcp_slot_table_entries` to less than 128 when an application scales out across many network connections (`nconnect` and HPC in general, and EDA in particular).
+When using NFSv3, *you should collectively keep the storage endpoint slot count to 10,000 or less*. It is best to set the per-connection value for `sunrpc.tcp_max_slot_table_entries` to less than 128 when an application scales out across many network connections (`nconnect` and HPC in general, and EDA in particular).
-### How to calculate the best `sunrpc.max_tcp_slot_table_entries`
+### How to calculate the best `sunrpc.tcp_max_slot_table_entries`
Using *Little's Law*, you can calculate the total required slot table entry count. In general, consider the following factors:
The calculation translates to a concurrency of 160:
`(160 = 16,000 × 0.010)`
-Given the need for 1,250 clients, you could safely set `sunrpc.max_tcp_slot_table_entries` to 2 per client to reach the 4,000 MiB/s. However, you might decide to build in extra headroom by setting the number per client to 4 or even 8, keeping well under the 10,000 recommended slot ceiling.
+Given the need for 1,250 clients, you could safely set `sunrpc.tcp_max_slot_table_entries` to 2 per client to reach the 4,000 MiB/s. However, you might decide to build in extra headroom by setting the number per client to 4 or even 8, keeping well under the 10,000 recommended slot ceiling.
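The Little's Law arithmetic above can be sketched in shell, using the document's example numbers (16,000 IOPS at 10 ms latency across 1,250 clients); the variable names are illustrative only:

```shell
# Little's Law: concurrency = throughput (ops/s) x latency (s).
iops=16000
latency_ms=10
concurrency=$(( iops * latency_ms / 1000 ))   # 160 slots in flight overall

# Spread across the client fleet; round up to at least 1 slot per client.
clients=1250
per_client=$(( (concurrency + clients - 1) / clients ))
echo "required concurrency: $concurrency, minimum slots per client: $per_client"
```

The computed minimum is 1 slot per client; picking 2 (or 4 to 8 for headroom), as the text suggests, still stays within the recommended 10,000-slot ceiling.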
-### How to set `sunrpc.max_tcp_slot_table_entries` on the client
+### How to set `sunrpc.tcp_max_slot_table_entries` on the client
-1. Add `sunrpc.max_tcp_slot_table_entries=<n>` to the `/etc/sysctl.conf` configuration file.
+1. Add `sunrpc.tcp_max_slot_table_entries=<n>` to the `/etc/sysctl.conf` configuration file.
During tuning, if a value lower than 128 is found optimal, replace 128 with the appropriate number.
2. Run the following command: `$ sysctl -p`
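A minimal sketch of the two steps above. It writes to a temp file so it runs without root; on a real client, the line goes in `/etc/sysctl.conf` and is applied with `sysctl -p`:

```shell
# Step 1: persist the tuned value (use your tuned number in place of 128).
# Writing to a temp file here; on a real client this would be /etc/sysctl.conf.
conf=$(mktemp)
echo "sunrpc.tcp_max_slot_table_entries=128" >> "$conf"

# Step 2: on a real client, apply the setting with: sudo sysctl -p
cat "$conf"
```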
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Now that you've attached a datastore on Azure NetApp Files-based NFS volume to y
- **Can a single Azure NetApp Files datastore be added to multiple clusters within different Azure VMware Solution SDDCs?**
- Yes, you can connect an Azure NetApp Files volume as a datastore to multiple clusters in different SDDCs. Each SDDC will need connectivity via the ExpressRoute gateway in the Azure NetApp Files virtual network.
+ Yes, you can connect an Azure NetApp Files volume as a datastore to multiple clusters in different SDDCs. Each SDDC will need connectivity via the ExpressRoute gateway in the Azure NetApp Files virtual network. Latency considerations apply.
backup Azure Backup Architecture For Sap Hana Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-architecture-for-sap-hana-backup.md
Title: Azure Backup Architecture for SAP HANA Backup
+ Title: Azure Backup architecture for SAP HANA Backup
description: Learn about Azure Backup architecture for SAP HANA backup. Previously updated : 06/20/2023 Last updated : 11/02/2023
This section provides you with an understanding about the backup process of an H
>[!Note]
>For the HANA VMs that are already backed up as individual machines, you can do the grouping only for future backups.
+### Backup architecture for database instance snapshot
+
+Azure Backup integrates Azure-managed disk full or incremental snapshots with HANA snapshot commands to deliver instant backup and recovery capabilities for HANA.
+
+**SAP HANA database instance snapshot backup**
+
+The backup architecture explains the different permissions that are required by the Azure Backup service, which resides on a HANA virtual machine (VM), to take snapshots of the managed disks and place them in a user-specified resource group that's mentioned in the policy. To do so, you can use the system-assigned managed identity of the source VM.
+++
+**SAP HANA database instance snapshot restore**
+
+The restore architecture explains the different permissions required during the restore operation. Azure Backup uses the target VM's managed identity to read disk snapshots from a user-specified resource group, create disks in a target resource group, and attach them to the target VM.
+++ ## Next steps - Learn about the supported configurations and scenarios in the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md). - Learn about how to [backup SAP HANA databases in Azure VMs](backup-azure-sap-hana-database.md). - Learn about how to [backup SAP HANA System Replication databases in Azure VMs](sap-hana-database-with-hana-system-replication-backup.md).-- Learn about how to [backup SAP HANA databases' snapshot instances in Azure VMs (preview)](sap-hana-database-instances-backup.md).
+- Learn about how to [backup SAP HANA databases' snapshot instances in Azure VMs](sap-hana-database-instances-backup.md).
backup Backup Azure Reserved Pricing Optimize Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-reserved-pricing-optimize-cost.md
An Azure Backup Storage reservation covers only the amount of data that's stored
Azure Backup Storage reserved capacity is available for backup data stored in the vault-standard tier.
-LRS, GRS, and RA-GRS redundancies are supported for reservations. For more information about redundancy options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
+LRS, GRS, RA-GRS, and ZRS redundancies are supported for reservations. For more information about redundancy options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md).
>[!Note]
>Azure Backup Storage reserved capacity isn't applicable for Protected Instance cost. It's also not applicable to vault-archive tier.
To purchase reserved capacity, follow these steps:
| Subscription | The subscription that's used to pay for the Azure Backup Storage reservation. The payment method on the selected subscription is used in charging the costs. The subscription must be one of the following types: <br><br> - **Enterprise Agreement (offer numbers: MS-AZR-0017P or MS-AZR-0148P)**: For an Enterprise subscription, the charges are deducted from the enrollment's Azure Prepayment (previously called monetary commitment) balance or charged as overage. <br><br> - **Individual subscription with pay-as-you-go rates (offer numbers: MS-AZR-0003P or MS-AZR-0023P)**: For an individual subscription with pay-as-you-go rates, the charges are billed to the credit card or invoice payment method on the subscription. <br><br> - Microsoft Customer Agreement subscriptions <br><br> - CSP subscriptions. |
| Region | The region where the reservation is in effect. |
| Vault tier | The vault tier for which the reservation is in effect. Currently, only reservations for vault-standard tier are supported. |
- | Redundancy | The redundancy option for the reservation. Options include LRS, GRS, and RA-GRS. For more information about redundancy options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md). |
+ | Redundancy | The redundancy option for the reservation. Options include LRS, GRS, RA-GRS, and ZRS. For more information about redundancy options, see [Azure Storage redundancy](../storage/common/storage-redundancy.md). |
| Billing frequency | Indicates how often the account is billed for the reservation. Options include Monthly or Upfront. |
| Size | The amount of capacity to reserve. |
| Term | One year or three years. |
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md
Title: Back up an SAP HANA database to Azure with Azure Backup description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service. Previously updated : 05/24/2023 Last updated : 11/02/2023
Refer to the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and t
### Establish network connectivity
-For all operations, an SAP HANA database running on an Azure VM requires connectivity to the Azure Backup service, Azure Storage, and Microsoft Entra ID. This can be achieved by using private endpoints or by allowing access to the required public IP addresses or FQDNs. Not allowing proper connectivity to the required Azure services may lead to failure in operations like database discovery, configuring backup, performing backups, and restoring data.
+For all operations, an SAP HANA database running on an Azure VM requires connectivity to the Azure Backup service, Azure Storage, and Microsoft Entra ID. This can be achieved by using private endpoints or by allowing access to the required public IP addresses or FQDNs. Not allowing proper connectivity to the required Azure services might lead to failure in operations like database discovery, configuring backup, performing backups, and restoring data.
The following table lists the various alternatives you can use for establishing connectivity:
The following table lists the various alternatives you can use for establishing
| Private endpoints | Allow backups over private IPs inside the virtual network <br><br> Provide granular control on the network and vault side | Incurs standard private endpoint [costs](https://azure.microsoft.com/pricing/details/private-link/) |
| NSG service tags | Easier to manage as range changes are automatically merged <br><br> No additional costs | Can be used with NSGs only <br><br> Provides access to the entire service |
| Azure Firewall FQDN tags | Easier to manage since the required FQDNs are automatically managed | Can be used with Azure Firewall only |
-| Allow access to service FQDNs/IPs | No additional costs. <br><br> Works with all network security appliances and firewalls. <br><br> You can also use service endpoints for *Storage*. However, for *Azure Backup* and *Microsoft Entra ID*, you need to assign the access to the corresponding IPs/FQDNs. | A broad set of IPs or FQDNs may be required to be accessed. |
+| Allow access to service FQDNs/IPs | No additional costs. <br><br> Works with all network security appliances and firewalls. <br><br> You can also use service endpoints for *Storage*. However, for *Azure Backup* and *Microsoft Entra ID*, you need to assign the access to the corresponding IPs/FQDNs. | A broad set of IPs or FQDNs might be required to be accessed. |
| [Virtual Network Service Endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) | Can be used for Azure Storage. <br><br> Provides large benefit to optimize performance of data plane traffic. | Can't be used for Microsoft Entra ID, Azure Backup service. |
| Network Virtual Appliance | Can be used for Azure Storage, Microsoft Entra ID, Azure Backup service. <br><br> **Data plane** <ul><li> Azure Storage: `*.blob.core.windows.net`, `*.queue.core.windows.net`, `*.blob.storage.azure.net` </li></ul> <br><br> **Management plane** <ul><li> Microsoft Entra ID: Allow access to FQDNs mentioned in sections 56 and 59 of [Microsoft 365 Common and Office Online](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). </li><li> Azure Backup service: `.backup.windowsazure.com` </li></ul> <br>Learn more about [Azure Firewall service tags](../firewall/fqdn-tags.md). | Adds overhead to data plane traffic and decreases throughput/performance. |
You can also use the following FQDNs to allow access to the required services fr
5. No restart of any service is required. The Azure Backup service will attempt to route the Microsoft Entra traffic via the proxy server mentioned in the JSON file. +
+##### Use outbound rules
+
+If the firewall or NSG settings block the `management.azure.com` domain from the Azure virtual machine, snapshot backups fail.
+
+Create the following outbound rule to allow the domain name so that database backups can run. Learn how to [create outbound rules](../machine-learning/how-to-access-azureml-behind-firewall.md).
+
+- **Source**: IP address of the VM.
+- **Destination**: Service Tag.
+- **Destination Service Tag**: `AzureResourceManager`
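The rule above can be sketched with the Azure CLI. This is a hedged sketch, not the article's own procedure: the resource group (`my-rg`), NSG name (`my-nsg`), VM IP, and rule priority are placeholders, and the command is echoed for review rather than executed.

```shell
rg=my-rg            # placeholder resource group
nsg=my-nsg          # placeholder network security group
vm_ip=10.10.10.11   # placeholder VM source IP

# Outbound allow rule to the AzureResourceManager service tag
# (covers management.azure.com) over HTTPS.
cmd="az network nsg rule create --resource-group $rg --nsg-name $nsg \
--name AllowAzureResourceManager --priority 200 --direction Outbound \
--access Allow --protocol Tcp --source-address-prefixes $vm_ip \
--destination-address-prefixes AzureResourceManager --destination-port-ranges 443"
echo "$cmd"   # review, then run it with your own values
```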
+++++++++ [!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)] ## Enable Cross Region Restore
backup Quick Sap Hana Database Instance Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-sap-hana-database-instance-restore.md
+
+ Title: Quickstart - Restore the entire SAP HANA system to a snapshot restore point
+description: In this quickstart, learn how to restore the entire SAP HANA system to a snapshot restore point.
+ms.devlang: azurecli
+ Last updated : 11/02/2023++++++
+# Quickstart: Restore the entire SAP HANA database to a snapshot restore point
+
+This quickstart describes how to restore the entire SAP HANA system to a snapshot restore point by using the Azure portal.
+
+Azure Backup now allows you to restore the SAP HANA snapshot and storage snapshot as disks by selecting **Attach**, and then mounting them to the target machine.
+
+>[!Note]
+>Currently, Azure Backup doesn't automatically restore the HANA system to the required point.
+
+For more information about the supported configurations and scenarios, see [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
+
+## Prerequisites
+
+- Ensure that backup is configured and recovery points are created before you restore. Learn more about the [configuration of backup for HANA database instance snapshots on Azure VM](sap-hana-database-instances-backup.md).
+- Ensure that you have the [required permissions for the snapshot restore](sap-hana-database-instances-restore.md#permissions-required-for-the-snapshot-restore).
+
+## Restore the database
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Troubleshoot backup of SAP HANA databases instance snapshot on Azure](sap-hana-database-instance-troubleshoot.md)
backup Sap Hana Database About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-about.md
Title: About SAP HANA database backup on Azure VMs
+ Title: About the SAP HANA database backup on Azure VMs
description: In this article, you'll learn about backing up SAP HANA databases that are running on Azure virtual machines. Previously updated : 06/25/2023 Last updated : 11/02/2023
Azure Backup now supports backing up databases that have HSR enabled. This means
Although there are multiple physical nodes (primary and secondary), the backup service now considers them a single HSR container.
-## Back up database instance snapshots (preview)
+## Back up database instance snapshots
As databases grow in size, the time it takes to restore them becomes a factor when you're dealing with streaming backups. Also, during backup, the time the database takes to generate Backint streams can grow in proportion to the churn, which can be a factor as well.
As per SAP recommendation, it's mandatory to have weekly full snapshots for all
Learn how to: - [Back up SAP HANA databases on Azure VMs](backup-azure-sap-hana-database.md).-- [Back up SAP HANA System Replication databases on Azure VMs (preview)](sap-hana-database-with-hana-system-replication-backup.md).-- [Back up SAP HANA database snapshot instances on Azure VMs (preview)](sap-hana-database-instances-backup.md).
+- [Back up SAP HANA System Replication databases on Azure VMs](sap-hana-database-with-hana-system-replication-backup.md).
+- [Back up SAP HANA database snapshot instances on Azure VMs](sap-hana-database-instances-backup.md).
- [Restore SAP HANA databases on Azure VMs](./sap-hana-db-restore.md). - [Manage SAP HANA databases that are backed up by using Azure Backup](./sap-hana-db-manage.md).
backup Sap Hana Database Instance Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instance-troubleshoot.md
Azure VM and retry the operation. For more information, see the [Azure workload
## Next steps
-Learn about [Azure Backup service to back up database instances (preview)](sap-hana-db-about.md#using-the-azure-backup-service-to-back-up-database-instances-preview).
+Learn about [Azure Backup service to back up database instances](sap-hana-db-about.md#using-the-azure-backup-service-to-back-up-database-instances).
backup Sap Hana Database Instances Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instances-backup.md
Title: Back up SAP HANA database instances on Azure VMs description: In this article, you'll learn how to back up SAP HANA database instances that are running on Azure virtual machines. Previously updated : 10/05/2022 Last updated : 11/02/2023
-# Back up SAP HANA database instance snapshots on Azure VMs (preview)
-
-Azure Backup now performs an SAP HANA storage snapshot-based backup of an entire database instance. Backup combines an Azure managed disk full or incremental snapshot with HANA snapshot commands to provide instant HANA backup and restore.
+# Back up SAP HANA database instance snapshots on Azure VMs
This article describes how to back up SAP HANA database instances that are running on Azure VMs to an Azure Backup Recovery Services vault.
-In this article, you'll learn how to:
-
->[!div class="checklist"]
->- Create and configure a Recovery Services vault.
->- Create a policy.
->- Discover database instances.
->- Configure backups.
->- Track a backup job.
+Azure Backup now performs an SAP HANA storage snapshot-based backup of an entire database instance. Backup combines an Azure managed disk full or incremental snapshot with HANA snapshot commands to provide instant HANA backup and restore.
For more information about the supported configurations and scenarios, see [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
For more information about the supported configurations and scenarios, see [SAP
According to SAP, it's mandatory to run a weekly full backup of all databases within an instance. Currently, logs are also mandatory for a database when you're creating a policy. With snapshots happening daily, we don't see a need for incremental or differential backups in the database policy. Therefore, all databases in the database instance, which is required to be protected by a snapshot, should have a database policy of only *weekly fulls + logs ONLY*, along with daily snapshots at an instance level.

>[!Important]
->Because the policy doesn't call for differential or incremental backups, we do *not* recommend that you trigger on-demand differential backups from any client.
+>- As per SAP advisory, we recommend that you configure *Database via Backint* with a *weekly fulls + log backup only* policy before configuring *DB Instance via Snapshot* backup. If *weekly fulls + logs backup only using Backint based backup* isn't enabled, snapshot backup configuration fails.
+> :::image type="content" source="./media/sap-hana-database-instances-backup/backup-goal-database-via-backint.png" alt-text="Screenshot shows the 'Database via Backint' backup goal." lightbox="./media/sap-hana-database-instances-backup/backup-goal-database-via-backint.png":::
+>- Because the policy doesn't call for differential or incremental backups, we do *not* recommend that you trigger on-demand differential backups from any client.
To summarize the backup policy:
When you're assigning permissions, consider the following:
- We recommend that you *not* change the resource groups after they're given or assigned to Azure Backup, because keeping them unchanged makes the permissions easier to handle.
-Learn about the [permissions required for snapshot restore](sap-hana-database-instances-restore.md#permissions-required-for-the-snapshot-restore).
+Learn about the [permissions required for snapshot restore](sap-hana-database-instances-restore.md#permissions-required-for-the-snapshot-restore) and the [SAP HANA instance snapshot backup architecture](azure-backup-architecture-for-sap-hana-backup.md#backup-architecture-for-database-instance-snapshot).
+
+### Establish network connectivity
+
+[Learn about](backup-azure-sap-hana-database.md#establish-network-connectivity) the network configurations required for HANA instance snapshot.
[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]
To create a policy for the SAP HANA database instance backup, follow these steps
1. Select **Add**.
-1. On the **Select policy type** pane, select **SAP HANA in Azure VM (DB Instance via snapshot) [Preview]**.
+1. On the **Select policy type** pane, select **SAP HANA in Azure VM (DB Instance via snapshot)**.
- :::image type="content" source="./media/sap-hana-database-instances-backup/select-sap-hana-instance-policy-type.png" alt-text="Screenshot that shows a list of policy types.":::
+ :::image type="content" source="./media/sap-hana-database-instances-backup/select-sap-hana-instance-policy-type.png" alt-text="Screenshot that shows a list of policy types." lightbox="./media/sap-hana-database-instances-backup/select-sap-hana-instance-policy-type.png":::
1. On the **Create policy** pane, do the following:
- :::image type="content" source="./media/sap-hana-database-instances-backup/create-policy.png" alt-text="Screenshot that shows the 'Create policy' pane for configuring backup and restore.":::
+ :::image type="content" source="./media/sap-hana-database-instances-backup/create-policy.png" alt-text="Screenshot that shows the 'Create policy' pane for configuring backup and restore." lightbox="./media/sap-hana-database-instances-backup/create-policy.png":::
- a. **Policy name**: Enter a unique policy name.
- b. **Snapshot Backup**: Set the **Time** and **Timezone** for backup in the dropdown lists. The default settings are *10:30 PM* and *(UTC) Coordinated Universal Time*.
+ 1. **Policy name**: Enter a unique policy name.
+ 1. **Snapshot Backup**: Set the **Time** and **Timezone** for backup in the dropdown lists. The default settings are *10:30 PM* and *(UTC) Coordinated Universal Time*.
- >[!Note]
- >Azure Backup currently supports **Daily** backup only.
+ >[!Note]
+ >Azure Backup currently supports **Daily** backup only.
- c. **Instant Restore**: Set the retention of recovery snapshots from *1* to *35* days. The default value is *2*.
- d. **Resource group**: Select the appropriate resource group in the drop-down list.
- e. **Managed Identity**: Select a managed identity in the dropdown list to assign permissions for taking snapshots of the managed disks and place them in the resource group that you've selected in the policy.
+ 1. **Instant Restore**: Set the retention of recovery snapshots from *1* to *35* days. The default value is *2*.
+ 1. **Resource group**: Select the appropriate resource group in the drop-down list.
+ 1. **Managed Identity**: Select a managed identity in the dropdown list to assign permissions for taking snapshots of the managed disks and place them in the resource group that you've selected in the policy.
+ You can also create a new managed identity for snapshot backup and restore. To create a managed identity and assign it to the VM with SAP HANA database, follow these steps:
+
+ 1. Select **+ Create**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/start-create-managed-identity.png" alt-text="Screenshot that shows how to create managed identity." lightbox="./media/sap-hana-database-instances-backup/start-create-managed-identity.png":::
+
+ 1. On the **Create User Assigned Managed Identity** page, choose the required *Subscription*, *Resource group*, *Instance region*, and add an *Instance name*.
+ 1. Select **Review + create**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/configure-new-managed-identity.png" alt-text="Screenshot that shows how to configure a new managed identity." lightbox="./media/sap-hana-database-instances-backup/configure-new-managed-identity.png":::
+
+ 1. Go to the *VM with SAP HANA database*, and then select **Identity** > **User assigned** tab.
+ 1. Select **User assigned managed identity**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/assign-vm-user-assigned-managed-identity.png" alt-text="Screenshot shows how to assign user-assigned managed identity to VM with SAP HANA database." lightbox="./media/sap-hana-database-instances-backup/assign-vm-user-assigned-managed-identity.png":::
+
+ 1. Select the *subscription*, *resource group*, and the *new user-assigned managed identity*.
+ 1. Select **Add**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/add-user-assigned-permission-to-vm.png" alt-text="Screenshot shows how to add the new user-assigned managed identity." lightbox="./media/sap-hana-database-instances-backup/add-user-assigned-permission-to-vm.png":::
+
+ 1. On the **Create policy** page, under **Managed Identity**, select the *newly created user-assigned managed identity* > **OK**.
+
+ :::image type="content" source="./media/sap-hana-database-instances-backup/add-new-user-assigned-managed-identity-to-backup-policy.png" alt-text="Screenshot shows how to add new user-assigned managed identity to the backup policy." lightbox="./media/sap-hana-database-instances-backup/add-new-user-assigned-managed-identity-to-backup-policy.png":::
++++ You need to manually assign the permissions for the Azure Backup service to delete the snapshots as per the policy. Other [permissions are assigned in the Azure portal](#configure-snapshot-backup). To assign the Disk Snapshot Contributor role to the Backup Management Service manually in the snapshot resource group, see [Assign Azure roles by using the Azure portal](../role-based-access-control/role-assignments-portal.md?tabs=current).
You'll also need to [create a policy for SAP HANA database backup](backup-azure-
To discover the database instance where the snapshot is present, see the [Back up SAP HANA databases in Azure VMs](backup-azure-sap-hana-database.md#discover-the-databases).
-## Configure snapshot backup
-
-Before you configure a snapshot backup in this section, [configure the backup for the database](backup-azure-sap-hana-database.md#configure-backup).
-
-Then, to configure a snapshot backup, do the following:
-
-1. In the Recovery Services vault, select **Backup**.
-
-1. Select **SAP HANA in Azure VM** as the data source type, select a Recovery Services vault to use for backup, and then select **Continue**.
-
-1. On the **Backup Goal** pane, under **Step 2: Configure Backup**, select **DB Instance via snapshot (Preview)**, and then select **Configure Backup**.
-
- :::image type="content" source="./media/sap-hana-database-instances-backup/select-db-instance-via-snapshot.png" alt-text="Screenshot that shows the 'DB Instance via snapshot' option.":::
-
-1. On the **Configure Backup** pane, in the **Backup policy** dropdown list, select the database instance policy, and then select **Add/Edit** to check the available database instances.
-
- :::image type="content" source="./media/sap-hana-database-instances-backup/add-database-instance-backup-policy.png" alt-text="Screenshot that shows where to select and add a database instance policy.":::
-
- To edit a DB instance selection, select the checkbox that corresponds to the instance name, and then select **Add/Edit**.
-
-1. On the **Select items to backup** pane, select the checkboxes next to the database instances that you want to back up, and then select **OK**.
-
- :::image type="content" source="./media/sap-hana-database-instances-backup/select-database-instance-for-backup.png" alt-text="Screenshot that shows the 'Select items to backup' pane and a list of database instances.":::
-
- When you select HANA instances for backup, the Azure portal validates for missing permissions in the system-assigned managed identity that's assigned to the policy.
-
- If the permissions aren't present, you need to select **Assign missing roles/identity** to assign all permissions.
-
- The Azure portal then automatically re-validates the permissions, and the **Backup readiness** column displays the status as *Success*.
-
-1. When the backup readiness check is successful, select **Enable backup**.
-
- :::image type="content" source="./media/sap-hana-database-instances-backup/enable-hana-database-instance-backup.png" alt-text="Screenshot that shows that the HANA database instance backup is ready to be enabled.":::
-
-## Run an on-demand backup
-
-To run an on-demand backup, do the following:
-
-1. In the Azure portal, select a Recovery Services vault.
-
-1. In the Recovery Services vault, on the left pane, select **Backup items**.
-
-1. By default, **Primary Region** is selected. Select **SAP HANA in Azure VM**.
-
-1. On the **Backup Items** pane, select the **View details** link next to the SAP HANA snapshot instance.
-
- :::image type="content" source="./media/sap-hana-database-instances-backup/hana-snapshot-view-details.png" alt-text="Screenshot that shows the 'View details' links next to the HANA database snapshot instances.":::
-
-1. Select **Backup now**.
-
- :::image type="content" source="./media/sap-hana-database-instances-backup/start-backup-hana-snapshot.png" alt-text="Screenshot that shows the 'Backup now' button for starting a backup of a HANA database snapshot instance.":::
-
-1. On the **Backup now** pane, select **OK**.
-
- :::image type="content" source="./media/sap-hana-database-instances-backup/trigger-backup-hana-snapshot.png" alt-text="Screenshot showing to trigger HANA database snapshot instance backup.":::
-
-## Track a backup job
-
-The Azure Backup service creates a job if you schedule backups or if you trigger an on-demand backup operation for tracking. To view the backup job status, do the following:
-
-1. In the Recovery Services vault, on the left pane, select **Backup Jobs**.
-
- The jobs dashboard displays the status of the jobs that were triggered in the past 24 hours. To modify the time range, select **Filter**, and then make the required changes.
-1. To review the details of a job, select the **View details** link next to the job name.
## Next steps Learn how to: -- [Restore SAP HANA database instance snapshots on Azure VMs (preview)](sap-hana-database-instances-restore.md)-- [Manage SAP HANA databases on Azure VMs (preview)](sap-hana-database-manage.md)
+- [Restore SAP HANA database instance snapshots on Azure VMs](sap-hana-database-instances-restore.md)
+- [Manage SAP HANA databases on Azure VMs](sap-hana-database-manage.md)
backup Sap Hana Database Instances Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instances-restore.md
Title: Restore SAP HANA database instances on Azure VMs description: In this article, you'll learn how to restore SAP HANA database instances on Azure virtual machines. Previously updated : 10/05/2022 Last updated : 11/02/2023
-# Restore SAP HANA database instance snapshots on Azure VMs (preview)
+# Restore SAP HANA database instance snapshots on Azure VMs
This article describes how to restore a backed-up SAP HANA database instance to another target virtual machine (VM) via snapshots.
You can restore the HANA snapshot and storage snapshot as disks by selecting **A
Here are the two workflows: - [Restore the entire HANA system (the system database and all tenant databases) to a single snapshot-based restore point](#restore-the-entire-system-to-a-snapshot-restore-point).-- [Restore the system database and all tenant databases to a different log point in time over a snapshot](#restore-the-database-to-a-different-log-point-in-time-over-a-snapshot).
+- [Restore the system database and all tenant databases to a different logpoint-in-time over a snapshot](#restore-the-database-to-a-different-logpoint-in-time-over-a-snapshot).
>[!Note] >SAP HANA recommends that you recover the entire system during the snapshot restore. This means that you would also restore the system database. If the system database is restored, the users/access information is also overwritten or updated, and subsequent attempts at recovery of tenant databases might fail after the system database recovery. The two options to resolve this issue are:
After the restore is completed, you can revoke these permissions.
>- The credentials that are used should have permissions to grant roles to other resources. The roles should be Owner or User Access Administrator, as mentioned in [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md#step-4-check-your-prerequisites). >- You can use the Azure portal to assign all the preceding permissions during the restore.
-## Restore the entire system to a snapshot restore point
-
-In the following sections, you'll learn how to restore the system to the snapshot restore point.
-
-### Select and mount the snapshot
-
-To select and mount the snapshot, do the following:
-
-1. In the Azure portal, go to the Recovery Services vault.
-
-1. On the left pane, select **Backup items**.
-
-1. Select **Primary Region**, and then select **SAP HANA in Azure VM**.
-
- :::image type="content" source="./media/sap-hana-database-instances-restore/select-vm-in-primary-region.png" alt-text="Screenshot that shows where to select the primary region option for VM selection.":::
-
-1. On the **Backup Items** page, select **View details** corresponding to the SAP HANA snapshot instance.
-
- :::image type="content" source="./media/sap-hana-database-instances-restore/select-view-details.png" alt-text="Screenshot that shows where to view the details of the HANA database snapshot.":::
-
-1. Select **Restore**.
+Learn about the [SAP HANA instance snapshot restore architecture](azure-backup-architecture-for-sap-hana-backup.md#backup-architecture-for-database-instance-snapshot).
- :::image type="content" source="./media/sap-hana-database-instances-restore/restore-hana-snapshot.png" alt-text="Screenshot that shows the 'Restore' option for the HANA database snapshot.":::
+### Establish network connectivity
-1. On the **Restore** pane, select the target VM to which the disks should be attached, the required HANA instance, and the resource group.
-
-1. On the **Restore Point** pane, choose **Select**.
-
- :::image type="content" source="./media/sap-hana-database-instances-restore/restore-system-database-restore-point.png" alt-text="Screenshot showing to select HANA snapshot recovery point.":::
-
-1. On the **Select restore point** pane, select a recovery point, and then select **OK**.
-
-1. Select the corresponding resource group and the *managed identity* to which all permissions are assigned for restore.
-
-1. Select **Validate** to check to ensure that all the permissions are assigned to the managed identity for the relevant scopes.
-
-1. If the permissions aren't assigned, select **Assign missing roles/identity**.
-
- After the roles are assigned, the Azure portal automatically re-validates the permission updates.
+[Learn about](backup-azure-sap-hana-database.md#establish-network-connectivity) the network configurations required for HANA instance snapshots.
-1. Select **Attach and mount snapshot** to attach the disks to the VM.
-
-1. Select **OK** to create disks from snapshots, attach them to the target VM, and mount them.
-
-### Restore the system database
-
-Recover the system database from the data snapshot by using HANA Studio. For more information, see the [SAP documentation](https://help.sap.com/docs/SAP_HANA_COCKPIT/afa922439b204e9caf22c78b6b69e4f2/9fd053d58cb94ac69655b4ebc41d7b05.html).
-
->[!Note]
->After you've restored the system database, you need to run the preregistration script on the target VM to update the user credentials.
-
-### Restore tenant databases
+## Restore the entire system to a snapshot restore point
-When the system database is restored, recover all tenant databases from a data snapshot by using HANA Studio. For more information, see the [HANA documentation](https://help.sap.com/docs/SAP_HANA_COCKPIT/afa922439b204e9caf22c78b6b69e4f2/b2c283094b9041e7bdc0830c06b77bf8.html).
-## Restore the database to a different log point in time over a snapshot
+## Restore the database to a different logpoint-in-time over a snapshot
-To restore the database to a different log point in time, do the following.
+To restore the database to a different logpoint-in-time, do the following.
### Select and mount the nearest snapshot
-First, identify the snapshot that's nearest to the required log point in time. Then [attach and mount that snapshot](#select-and-mount-the-snapshot) to the target VM.
+First, identify the snapshot that's nearest to the required logpoint-in-time. Then [attach and mount that snapshot](#select-and-mount-the-snapshot) to the target VM.
### Restore system database
To select and restore the required point in time for the system database, follow
1. Below the **Restore Point** box, select the **Select** link.
- :::image type="content" source="./media/sap-hana-database-instances-restore/restore-logs-over-snapshot-restore-point.png" alt-text="Screenshot that shows how to select the log restore points of the system database instance for restore.":::
+ :::image type="content" source="./media/sap-hana-database-instances-restore/restore-over-snapshot.png" alt-text="Screenshot that shows how to select the log restore points of the system database instance for restore.":::
1. On the **Select restore point** pane, select the restore point, and then select **OK**.
To restore the tenant database, do the following:
1. On the **Restore** pane, select the target VM to which the disks should be attached, the required HANA instance, and the resource group.
- :::image type="content" source="./media/sap-hana-database-instances-restore/log-over-snapshots-for-tenant-database-restore-point.png" alt-text="Screenshot that shows where to select the restore point of the log over snapshots for the tenant database.":::
+ :::image type="content" source="./media/sap-hana-database-instances-restore/restore-over-snapshot.png" alt-text="Screenshot that shows where to select the restore point of the log over snapshots for the tenant database.":::
Ensure that the target VM and target disk resource group have relevant permissions by using the PowerShell or CLI script.
The managed disk snapshots don't get transferred to the Recovery Services vault.
## Next steps - [About SAP HANA database backup on Azure VMs](sap-hana-db-about.md).-- [Manage SAP HANA database instances on Azure VMs (preview)](sap-hana-database-manage.md).
+- [Manage SAP HANA database instances on Azure VMs](sap-hana-database-manage.md).
backup Sap Hana Database Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-manage.md
This article describes common tasks for managing and monitoring SAP HANA databas
You'll learn how to monitor jobs and alerts, trigger an on-demand backup, edit policies, stop and resume database protection, and unregister a VM from backups. >[!Note]
->Support for HANA instance snapshots is in preview.
+>Support for HANA instance snapshots is now generally available.
If you haven't configured backups yet for your SAP HANA databases, see [Back up SAP HANA databases on Azure VMs](./backup-azure-sap-hana-database.md). To learn more about the supported configurations and scenarios, see [Support matrix for backup of SAP HANA databases on Azure VMs](sap-hana-backup-support-matrix.md).
backup Tutorial Configure Sap Hana Database Instance Snapshot Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-configure-sap-hana-database-instance-snapshot-backup.md
+
+ Title: Tutorial - Configure SAP HANA database instance snapshot backup
+description: In this tutorial, learn how to configure the SAP HANA database instance snapshot backup and run an on-demand backup.
+ Last updated : 11/02/2023
+# Tutorial: Configure SAP HANA database instance snapshot backup
+
+This tutorial describes how to configure SAP HANA database instance snapshot backup and run an on-demand backup by using the Azure CLI.
+
+Azure Backup now performs an SAP HANA storage snapshot-based backup of an entire database instance. It combines an Azure managed disk full or incremental snapshot with HANA snapshot commands to provide instant HANA backup and restore.
+
+For more information on the supported scenarios, see the [support matrix](./sap-hana-backup-support-matrix.md#scenario-support) for SAP HANA.
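
The managed-disk incremental snapshot described above is the storage primitive this flow relies on. As a minimal Bicep sketch (illustrative only, not part of the tutorial; the disk name `hana-data-disk-0` is a hypothetical existing disk in the resource group):

```bicep
// Hypothetical existing managed data disk that holds HANA data files.
resource hanaDataDisk 'Microsoft.Compute/disks@2022-07-02' existing = {
  name: 'hana-data-disk-0'
}

// An incremental snapshot of that disk: only the delta since the
// previous snapshot is stored, which is what keeps this cost-effective.
resource hanaDiskSnapshot 'Microsoft.Compute/snapshots@2022-07-02' = {
  name: 'hana-data-disk-0-snap'
  location: resourceGroup().location
  properties: {
    creationData: {
      createOption: 'Copy'
      sourceResourceId: hanaDataDisk.id
    }
    incremental: true
  }
}
```

In the actual backup flow, Azure Backup creates these snapshots for you after quiescing the database with HANA snapshot commands; you don't declare them yourself.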
+
+## Before you start
+
+- Ensure that you have the [permissions for the backup operation](sap-hana-database-instances-backup.md#permissions-required-for-backup).
+- [Create a Recovery Services vault](sap-hana-database-instances-backup.md#create-a-recovery-services-vault) for the backup and restore operations.
+- [Create a backup policy](sap-hana-database-instances-backup.md#create-a-policy).
++
+## Next steps
+
+- [Learn how to restore an SAP HANA database instance snapshot in Azure VM](sap-hana-database-instances-restore.md).
+- [Troubleshoot common issues with SAP HANA database backups](backup-azure-sap-hana-database-troubleshoot.md).
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup
-description: Learn about the new features in Azure Backup.
+description: Learn about the new features in the Azure Backup service.
Previously updated : 09/29/2023 Last updated : 11/02/2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- November 2023
+ - [SAP HANA instance snapshot backup support is now generally available](#sap-hana-instance-snapshot-backup-support-is-now-generally-available)
- September 2023 - [Multi-user authorization using Resource Guard for Backup vault is now generally available](#multi-user-authorization-using-resource-guard-for-backup-vault-is-now-generally-available) - [Enhanced soft delete for Azure Backup is now generally available](#enhanced-soft-delete-for-azure-backup-is-now-generally-available)
You can learn more about the new releases by bookmarking this page or by [subscr
- February 2021 - [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## SAP HANA instance snapshot backup support is now generally available
+
+Azure Backup now supports SAP HANA instance snapshot backup and enhanced restore, which provides a cost-effective backup solution using managed disk incremental snapshots. Because instant backup uses snapshots, the effect on the database is minimal.
+
+You can now take an instant snapshot of the entire HANA instance and back up logs for all databases, with a single solution. It also enables you to instantly restore the entire instance with point-in-time recovery using logs over the snapshot.
+
+>[!Note]
+>- Currently, the snapshots are stored on your storage account/operational tier and aren't stored in the Recovery Services vault.
+>- Original Location Restore (OLR) is not supported.
+>- For pricing, per SAP advisory, you must also run a *weekly full backup + logs* streaming/Backint-based backup, so the existing protected instance fee and storage cost apply. For snapshot backup, the snapshot data created by Azure Backup is saved in your storage account and incurs snapshot storage charges. In addition to streaming/Backint backup charges, you're charged per GB of data stored in your snapshots. Learn more about [snapshot pricing](https://azure.microsoft.com/pricing/details/managed-disks/) and [streaming/Backint-based backup pricing](https://azure.microsoft.com/pricing/details/backup/).
+
+For more information, see [Back up databases' instance snapshots](sap-hana-database-about.md#back-up-database-instance-snapshots).
+ ## Multi-user authorization using Resource Guard for Backup vault is now generally available Azure Backup now supports multi-user authorization (MUA) that allows you to add an additional layer of protection to critical operations on your Backup vaults. For MUA, Azure Backup uses the Azure resource, Resource Guard, to ensure critical operations are performed only with applicable authorization.
Azure Backup now supports SAP HANA instance snapshot backup that provides a cost
You can now take an instant snapshot of the entire HANA instance and backup logs for all databases, with a single solution. It also enables you to instantly restore the entire instance with point-in-time recovery using logs over the snapshot.
-For more information, see [Back up databases' instance snapshots (preview)](sap-hana-database-about.md#back-up-database-instance-snapshots-preview).
+For more information, see [Back up databases' instance snapshots](sap-hana-database-about.md#back-up-database-instance-snapshots).
## SAP HANA System Replication database backup support (preview)
bastion Configuration Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configuration-settings.md
The Developer SKU has different requirements and limitations than the other SKU
[!INCLUDE [Developer SKU regions](../../includes/bastion-developer-sku-regions.md)]
+> [!NOTE]
+> VNet peering isn't currently supported for the Developer SKU.
+ ### Specify SKU | Method | SKU Value | Links |
chaos-studio Chaos Studio Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-bicep.md
Title: Use Bicep to create an experiment in Azure Chaos Studio Preview
-description: Sample Bicep templates to create Azure Chaos Studio Preview experiments.
+ Title: Use Bicep to create an experiment in Azure Chaos Studio
+description: Sample Bicep templates to create Azure Chaos Studio experiments.
-# Use Bicep to create an experiment in Azure Chaos Studio Preview
+# Use Bicep to create an experiment in Azure Chaos Studio
[!INCLUDE [About Bicep](../../includes/resource-manager-quickstart-bicep-introduction.md)]
-This article includes a sample Bicep file to get started in Azure Chaos Studio Preview, including:
+This article includes a sample Bicep file to get started in Azure Chaos Studio, including:
* Onboarding a resource as a target (for example, a Virtual Machine) * Enabling capabilities on the target (for example, Virtual Machine Shutdown)
resource vm 'Microsoft.Compute/virtualMachines@2023-03-01' existing = {
} // Deploy the Chaos Studio target resource to the Virtual Machine
-resource chaosTarget 'Microsoft.Chaos/targets@2022-10-01-preview' = {
+resource chaosTarget 'Microsoft.Chaos/targets@2023-11-01' = {
name: 'Microsoft-VirtualMachine' location: location scope: vm
resource chaosRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-04-01
} // Deploy the Chaos Studio experiment resource
-resource chaosExperiment 'Microsoft.Chaos/experiments@2022-10-01-preview' = {
+resource chaosExperiment 'Microsoft.Chaos/experiments@2023-11-01' = {
name: experimentName location: location // Doesn't need to be the same as the Targets & Capabilities location identity: {
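
For completeness, the capability the sample enables on the target can be declared the same way, using the same GA API version as the updated resources above. A minimal sketch; the capability name `Shutdown-1.0` is assumed from the service-direct Virtual Machine Shutdown fault:

```bicep
// Sketch: enable the Virtual Machine Shutdown capability on the
// chaosTarget resource declared earlier in the sample.
// 'Shutdown-1.0' is the assumed capability name for this fault.
resource chaosCapability 'Microsoft.Chaos/targets/capabilities@2023-11-01' = {
  parent: chaosTarget
  name: 'Shutdown-1.0'
}
```

The experiment's fault action then references this capability by its URN when it defines the shutdown step.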
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Title: Azure Chaos Studio Preview fault and action library
-description: Understand the available actions you can use with Azure Chaos Studio Preview, including any prerequisites and parameters.
+ Title: Azure Chaos Studio fault and action library
+description: Understand the available actions you can use with Azure Chaos Studio, including any prerequisites and parameters.
-# Azure Chaos Studio Preview fault and action library
+# Azure Chaos Studio fault and action library
-The faults listed in this article are currently available for use. To understand which resource types are supported, see [Supported resource types and role assignments for Azure Chaos Studio Preview](./chaos-studio-fault-providers.md).
+The faults listed in this article are currently available for use. To understand which resource types are supported, see [Supported resource types and role assignments for Azure Chaos Studio](./chaos-studio-fault-providers.md).
## Time delay
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
-# Azure Chaos Studio Preview limitations and known issues
+# Azure Chaos Studio limitations and known issues
-During the public preview of Azure Chaos Studio, there are a few limitations and known issues that the team is aware of and working to resolve.
+The following are known limitations in Chaos Studio.
## Limitations -- **Supported regions** - The target resources must be in [one of the regions supported by the Azure Chaos Studio Preview](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio).-- **Resource Move not supported** - Azure Chaos Studio tracked resources (for example, Experiments) currently do not support Resource Move. Experiments can be easily copied (by copying Experiment JSON) for use in other subscriptions, resource groups, or regions. Experiments can also already target resources across regions. Extension resources (Targets and Capabilities) do support Resource Move.
+- **Supported regions** - The target resources must be in [one of the regions supported by Azure Chaos Studio](https://azure.microsoft.com/global-infrastructure/services/?products=chaos-studio).
+- **Resource Move not supported** - Azure Chaos Studio tracked resources (for example, Experiments) currently do NOT support Resource Move. Experiments can be easily copied (by copying Experiment JSON) for use in other subscriptions, resource groups, or regions. Experiments can also already target resources across regions. Extension resources (Targets and Capabilities) do support Resource Move.
- **VMs require network access to Chaos studio** - For agent-based faults, the virtual machine must have outbound network access to the Chaos Studio agent service: - Regional endpoints to allowlist are listed in [Permissions and security in Azure Chaos Studio](chaos-studio-permissions-security.md#network-security). - If you're sending telemetry data to Application Insights, the IPs in [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) are also required.
chaos-studio Chaos Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-overview.md
Title: What is Azure Chaos Studio Preview?
+ Title: What is Azure Chaos Studio?
description: Measure, understand, and build resilience to incidents by using chaos engineering to inject faults and monitor how your application responds.
-# What is Azure Chaos Studio Preview?
+# What is Azure Chaos Studio?
-[Azure Chaos Studio Preview](https://azure.microsoft.com/services/chaos-studio) is a managed service that uses chaos engineering to help you measure, understand, and improve your cloud application and service resilience. Chaos engineering is a methodology by which you inject real-world faults into your application to run controlled fault injection experiments.
+[Azure Chaos Studio](https://azure.microsoft.com/services/chaos-studio) is a managed service that uses chaos engineering to help you measure, understand, and improve your cloud application and service resilience. Chaos engineering is a methodology by which you inject real-world faults into your application to run controlled fault injection experiments.
Resilience is the capability of a system to handle and recover from disruptions. Application disruptions can cause errors and failures that can adversely affect your business or mission. Whether you're developing, migrating, or operating Azure applications, it's important to validate and improve your application's resilience.
chaos-studio Chaos Studio Permissions Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-permissions-security.md
Title: Permissions and security for Azure Chaos Studio Preview
-description: Understand how permissions work in Azure Chaos Studio Preview and how you can secure resources from accidental fault injection.
+ Title: Permissions and security for Azure Chaos Studio
+description: Understand how permissions work in Azure Chaos Studio and how you can secure resources from accidental fault injection.
Last updated 06/30/2023
-# Permissions and security in Azure Chaos Studio Preview
+# Permissions and security in Azure Chaos Studio
-Azure Chaos Studio Preview enables you to improve service resilience by systematically injecting faults into your Azure resources. Fault injection is a powerful way to improve service resilience, but it can also be dangerous. Causing failures in your application can have more impact than originally intended and open opportunities for malicious actors to infiltrate your applications.
+Azure Chaos Studio enables you to improve service resilience by systematically injecting faults into your Azure resources. Fault injection is a powerful way to improve service resilience, but it can also be dangerous. Causing failures in your application can have more impact than originally intended and open opportunities for malicious actors to infiltrate your applications.
Chaos Studio has a robust permission model that prevents faults from being run unintentionally or by a bad actor. In this article, you learn how you can secure resources that are targeted for fault injection by using Chaos Studio.
chaos-studio Chaos Studio Private Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-private-networking.md
Title: Integration of virtual network injection with Chaos Studio
-description: Learn how to use virtual network injection with Azure Chaos Studio Preview.
+description: Learn how to use virtual network injection with Azure Chaos Studio.
-# Virtual network injection in Azure Chaos Studio Preview
+# Virtual network injection in Azure Chaos Studio
Azure [Virtual Network](../virtual-network/virtual-networks-overview.md) is the fundamental building block for your private network in Azure. A virtual network enables many types of Azure resources to securely communicate with each other, the internet, and on-premises networks. A virtual network is similar to a traditional network that you operate in your own datacenter. It brings other benefits of Azure's infrastructure, such as scale, availability, and isolation.
-Virtual network injection allows an Azure Chaos Studio Preview resource provider to inject containerized workloads into your virtual network so that resources without public endpoints can be accessed via a private IP address on the virtual network. After you've configured virtual network injection for a resource in a virtual network and enabled the resource as a target, you can use it in multiple experiments. An experiment can target a mix of private and nonprivate resources if the private resources are configured according to the instructions in this article.
+Virtual network injection allows an Azure Chaos Studio resource provider to inject containerized workloads into your virtual network so that resources without public endpoints can be accessed via a private IP address on the virtual network. After you've configured virtual network injection for a resource in a virtual network and enabled the resource as a target, you can use it in multiple experiments. An experiment can target a mix of private and nonprivate resources if the private resources are configured according to the instructions in this article.
Chaos Studio also now supports running **agent-based experiments** using private endpoints. Chaos Studio supports Private Link for **both** service-direct and agent-based experiments. If you would like to use Private Link for the agent service, reach out to your CSA or the Chaos Studio help team for instructions on how to get onboarded. For Private Link for service-direct faults, read the following sections for instructions on how to use them.
chaos-studio Chaos Studio Quickstart Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-azure-portal.md
Title: Create and run a chaos experiment by using Azure Chaos Studio Preview
-description: Understand the steps to create and run an Azure Chaos Studio Preview experiment in 10 minutes.
+ Title: Create and run a chaos experiment by using Azure Chaos Studio
+description: Understand the steps to create and run an Azure Chaos Studio experiment in 10 minutes.
-# Quickstart: Create and run a chaos experiment by using Azure Chaos Studio Preview
-Get started with Azure Chaos Studio Preview by using a virtual machine (VM) shutdown service-direct experiment to make your service more resilient to that failure in real-world scenarios.
+# Quickstart: Create and run a chaos experiment by using Azure Chaos Studio
+Get started with Azure Chaos Studio by using a virtual machine (VM) shutdown service-direct experiment to make your service more resilient to that failure in real-world scenarios.
## Prerequisites - An Azure subscription. [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
Create an Azure resource and ensure that it's one of the supported [fault provid
## Enable Chaos Studio on the VM you created 1. Open the [Azure portal](https://portal.azure.com).
-1. Search for **Chaos Studio (preview)** in the search bar.
+1. Search for **Chaos Studio** in the search bar.
1. Select **Targets** and go to the VM you created. 1. Select the checkbox next to your VM. Select **Enable targets** > **Enable service-direct targets** from the dropdown menu.
chaos-studio Chaos Studio Quickstart Dns Outage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-quickstart-dns-outage.md
Title: Use Chaos Studio to replicate a DNS outage by using the NSG fault
-description: Get started with Azure Chaos Studio Preview by creating a DNS outage by using the network security group fault.
+description: Get started with Azure Chaos Studio by creating a DNS outage by using the network security group fault.
# Quickstart: Replicate a DNS outage by using the NSG fault
-The network security group (NSG) fault enables you to modify your existing NSG rules as part of a chaos experiment in Azure Chaos Studio Preview. By using this fault, you can block network traffic to your Azure resources and simulate a loss of connectivity or outages of dependent resources.
+The network security group (NSG) fault enables you to modify your existing NSG rules as part of a chaos experiment in Azure Chaos Studio. By using this fault, you can block network traffic to your Azure resources and simulate a loss of connectivity or outages of dependent resources.
In this quickstart, you create a chaos experiment that blocks all traffic to external (internet) DNS servers for 15 minutes. With this experiment, you can validate that resources connected to the Azure virtual network associated with the target NSG don't have a dependency on external DNS servers. In this way, you can validate one of the risk-threat model requirements.
First, you register a fault provider on the subscription where your NSG is hoste
1. Replace `$SUBSCRIPTION_ID` used in the prior step and execute the following command to register the `AzureNetworkSecurityGroupChaos` fault provider: ```azurecli
- az rest --method put --url "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/microsoft.chaos/chaosProviderConfigurations/AzureNetworkSecurityGroupChaos?api-version=2021-06-21-preview" --body @AzureNetworkSecurityGroupChaos.json --resource "https://management.azure.com"
+ az rest --method put --url "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/microsoft.chaos/chaosProviderConfigurations/AzureNetworkSecurityGroupChaos?api-version=2023-11-01" --body @AzureNetworkSecurityGroupChaos.json --resource "https://management.azure.com"
``` 1. (Optional) Delete the *AzureNetworkSecurityGroupChaos.json* file you previously created because it's no longer required. Close Cloud Shell.
If you're not going to continue using any faults related to NSGs:
1. Replace **$SUBSCRIPTION_ID** with the Azure subscription ID where the NSG fault provider was provisioned. Run the following command: ```azurecli
- az rest --method delete --url "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/microsoft.chaos/chaosProviderConfigurations/AzureNetworkSecurityGroupChaos?api-version=2021-06-21-preview" --resource "https://management.azure.com"
+ az rest --method delete --url "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/providers/microsoft.chaos/chaosProviderConfigurations/AzureNetworkSecurityGroupChaos?api-version=2023-11-01" --resource "https://management.azure.com"
```
chaos-studio Chaos Studio Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-region-availability.md
Title: Regional availability of Azure Chaos Studio Preview
-description: Understand how Azure Chaos Studio Preview makes chaos experiments and chaos targets available in Azure regions.
+ Title: Regional availability of Azure Chaos Studio
+description: Understand how Azure Chaos Studio makes chaos experiments and chaos targets available in Azure regions.
Last updated 4/29/2022
-# Regional availability of Azure Chaos Studio Preview
+# Regional availability of Azure Chaos Studio
-This article describes the regional availability model for Azure Chaos Studio Preview. It explains the difference between a region where experiments can be deployed and one where resources can be targeted. It also provides an overview of the Chaos Studio high-availability model.
+This article describes the regional availability model for Azure Chaos Studio. It explains the difference between a region where experiments can be deployed and one where resources can be targeted. It also provides an overview of the Chaos Studio high-availability model.
Chaos Studio is a regional Azure service, which means that the service is deployed and run within an Azure region. Chaos Studio has two regional components: the region where an experiment is deployed and the region where a resource is targeted.
chaos-studio Chaos Studio Run Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-run-experiment.md
Title: Run and manage a chaos experiment in Azure Chaos Studio Preview
-description: Learn how to start, stop, view details, and view history for a chaos experiment in Azure Chaos Studio Preview.
+ Title: Run and manage a chaos experiment in Azure Chaos Studio
+description: Learn how to start, stop, view details, and view history for a chaos experiment in Azure Chaos Studio.
-# Run and manage an experiment in Azure Chaos Studio Preview
+# Run and manage an experiment in Azure Chaos Studio
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. This article provides an overview of how to use Azure Chaos Studio Preview with a chaos experiment that you've previously created.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. This article provides an overview of how to use Azure Chaos Studio with a chaos experiment that you've previously created.
## Start an experiment 1. Open the [Azure portal](https://portal.azure.com).
-1. Search for **Chaos Studio (preview)** in the search bar.
+1. Search for **Chaos Studio** in the search bar.
1. Select **Experiments**. This experiment list view is where you can start, stop, or delete experiments in bulk. You can also create a new experiment.
chaos-studio Chaos Studio Samples Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-samples-rest-api.md
Title: Use the REST APIs to manage Azure Chaos Studio Preview experiments
-description: Run and manage a chaos experiment with Azure Chaos Studio Preview by using REST APIs.
+ Title: Use the REST APIs to manage Azure Chaos Studio experiments
+description: Run and manage a chaos experiment with Azure Chaos Studio by using REST APIs.
> [!WARNING]
> Injecting faults can affect your application or service. Be careful not to disrupt customers.
-The Azure Chaos Studio Preview API provides support for starting experiments programmatically. You can also use the Azure Resource Manager client and the Azure CLI to execute these commands from the console. The examples in this article are for the Azure CLI.
+The Azure Chaos Studio REST API provides support for starting experiments programmatically. You can also use the Azure Resource Manager client and the Azure CLI to execute these commands from the console. The examples in this article are for the Azure CLI.
> [!WARNING]
> These APIs are still under development and subject to change.
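Beyond starting experiments, the same `az rest` pattern can read experiment resources. A sketch of listing the experiments in a resource group, assuming the standard ARM collection endpoint for the `Microsoft.Chaos` provider (the subscription ID and resource group are placeholders):

```azurecli
# Hypothetical values; replace with your own.
SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP="myRG"
API_VERSION="2023-11-01"

# Standard ARM collection endpoint for Chaos Studio experiments
# (an assumption based on the resource paths used in this article).
LIST_URI="https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments?api-version=$API_VERSION"
echo "$LIST_URI"

# az rest --method get --uri "$LIST_URI"
```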
chaos-studio Chaos Studio Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-service-limits.md
Title: Azure Chaos Studio Preview service limits
+ Title: Azure Chaos Studio service limits
description: Understand the throttling and usage limits for Azure Chaos Studio.
-# Azure Chaos Studio Preview service limits
-This article provides service limits for Azure Chaos Studio Preview. For more information about Azure-wide service limits and quotas, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
+# Azure Chaos Studio service limits
+This article provides service limits for Azure Chaos Studio. For more information about Azure-wide service limits and quotas, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md).
## Experiment and target limits
chaos-studio Chaos Studio Target Selection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-target-selection.md
Title: Target selection in Azure Chaos Studio Preview
-description: Understand two different ways to select experiment targets in Azure Chaos Studio Preview.
+ Title: Target selection in Azure Chaos Studio
+description: Understand two different ways to select experiment targets in Azure Chaos Studio.
Last updated 09/25/2023
-# Target selection in Azure Chaos Studio Preview
+# Target selection in Azure Chaos Studio
Every chaos experiment is made up of a different combination of faults and targets, building up to a unique outage scenario against which to test your system's resilience. You may want to select a fixed set of targets for your chaos experiment, or provide a rule by which all matching fault-onboarded resources are included as targets in your experiment. Chaos Studio enables you to do both by providing both manual and query-based target selection.
List-based manual target selection allows you to select a fixed set of onboarded
Query-based dynamic target selection allows you to input a KQL query that will select all onboarded targets that match the query result set. Using your query, you may filter targets based on common Azure resource parameters including type, region, name, and more. Upon experiment creation time, only the query itself will be added to your chaos experiment.
-The inputted query will run and add onboarded targets that match its result set upon experiment execution time. Thus, any resources onboarded to Chaos Studio after experiment creation time that match the query result set upon experiment execution time will be targeted by your experiment. You may preview your query's result set when adding it to your experiment, but be aware that it may not match the result set at experiment execution time. An example of a possible dynamic target query is shown below.
+The inputted query will run and add onboarded targets that match its result set upon experiment execution time. Thus, any resources onboarded to Chaos Studio after experiment creation time that match the query result set upon experiment execution time will be targeted by your experiment. You may preview your query's result set when adding it to your experiment, but be aware that it may not match the result set at experiment execution time. An example of a possible dynamic target query is shown below.
[ ![Screenshot that shows the query-based dynamic target selection option in the Azure portal.](images/dynamic-target-selection-preview.png) ](images/dynamic-target-selection-preview.png#lightbox)
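You can also preview a query of this shape from the command line with Azure Resource Graph. A minimal sketch, assuming the `resource-graph` CLI extension is installed (`az extension add --name resource-graph`); the resource type and resource group filters are hypothetical:

```azurecli
# Illustrative only: a dynamic-target-style KQL query that selects
# virtual machine scale sets in a hypothetical resource group.
QUERY="Resources | where type =~ 'Microsoft.Compute/virtualMachineScaleSets' | where resourceGroup =~ 'myRG' | project id, name, location"
echo "$QUERY"

# Preview the result set with Azure Resource Graph:
# az graph query -q "$QUERY" --output table
```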
chaos-studio Chaos Studio Targets Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-targets-capabilities.md
Title: Targets and capabilities in Azure Chaos Studio Preview
-description: Understand how to control resource onboarding in Azure Chaos Studio Preview by using targets and capabilities.
+ Title: Targets and capabilities in Azure Chaos Studio
+description: Understand how to control resource onboarding in Azure Chaos Studio by using targets and capabilities.
Last updated 11/01/2021
-# Targets and capabilities in Azure Chaos Studio Preview
+# Targets and capabilities in Azure Chaos Studio
Before you can inject a fault against an Azure resource, the resource must first have corresponding targets and capabilities enabled. Targets and capabilities control which resources are enabled for fault injection and which faults can run against those resources.
-By using targets and capabilities [along with other security measures](chaos-studio-permissions-security.md), you can avoid accidental or malicious fault injection with Azure Chaos Studio Preview. For example, with targets and capabilities, you can allow the CPU pressure fault to run against your production virtual machines while preventing the kill process fault from running against them.
+By using targets and capabilities [along with other security measures](chaos-studio-permissions-security.md), you can avoid accidental or malicious fault injection with Azure Chaos Studio. For example, with targets and capabilities, you can allow the CPU pressure fault to run against your production virtual machines while preventing the kill process fault from running against them.
## Targets
An experiment can only inject faults on onboarded targets with the corresponding
For reference, a list of capability names, fault URNs, and parameters is available [in our fault library](chaos-studio-fault-library.md). You can use the HTTP response to create a capability or do a GET on an existing capability to get this information on demand. For example, to do a GET on a VM shutdown capability:

```azurecli
-az rest --method get --url "https://management.azure.com/subscriptions/fd9ccc83-faf6-4121-9aff-2a2d685ca2a2/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine/capabilities/shutdown-1.0?api-version=2021-08-11-preview"
+az rest --method get --url "https://management.azure.com/subscriptions/fd9ccc83-faf6-4121-9aff-2a2d685ca2a2/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine/capabilities/shutdown-1.0?api-version=2023-11-01"
```

Returns the following JSON:
chaos-studio Chaos Studio Tutorial Aad Outage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aad-outage-portal.md
# Use a chaos experiment template to induce an outage on an Azure Active Directory instance
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you induce an outage on an Azure Active Directory resource using a pre-populated experiment template and Azure Chaos Studio Preview.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you induce an outage on an Azure Active Directory resource using a pre-populated experiment template and Azure Chaos Studio.
## Prerequisites
You can use a chaos experiment to verify that your application is resilient to f
## Enable Chaos Studio on your network security group
-Azure Chaos Studio Preview can't inject faults against a resource until that resource is added to Chaos Studio. To add a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Network security groups have only one target type (service-direct) and one capability (set rules). Other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Other resources might have many other capabilities.
+Azure Chaos Studio can't inject faults against a resource until that resource is added to Chaos Studio. To add a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Network security groups have only one target type (service-direct) and one capability (set rules). Other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Other resources might have many other capabilities.
1. Open the [Azure portal](https://portal.azure.com).
1. Search for **Chaos Studio** in the search bar.
chaos-studio Chaos Studio Tutorial Agent Based Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-cli.md
ms.devlang: azurecli
# Create a chaos experiment that uses an agent-based fault with the Azure CLI
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a high CPU event on a Linux virtual machine (VM) by using a chaos experiment and Azure Chaos Studio Preview. Run this experiment to help you defend against an application from becoming resource starved.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a high CPU event on a Linux virtual machine (VM) by using a chaos experiment and Azure Chaos Studio. Run this experiment to help defend your application from becoming resource starved.
You can use these same steps to set up and run an experiment for any agent-based fault. An *agent-based* fault requires setup and installation of the chaos agent. A service-direct fault runs directly against an Azure resource without any need for instrumentation.
Next, set up a Microsoft-Agent target on each VM or virtual machine scale set th
1. Create the target by replacing `$RESOURCE_ID` with the resource ID of the target VM or virtual machine scale set. Replace `target.json` with the name of the JSON file you created in the previous step.

```azurecli-interactive
- az rest --method put --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview --body @target.json --query properties.agentProfileId -o tsv
+ az rest --method put --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2023-11-01 --body @target.json --query properties.agentProfileId -o tsv
```

If you receive a PowerShell parsing error, switch to a Bash terminal as recommended for this tutorial or surround the referenced JSON file in single quotes (`--body '@target.json'`).
Next, set up a Microsoft-Agent target on each VM or virtual machine scale set th
1. Create the capabilities by replacing `$RESOURCE_ID` with the resource ID of the target VM or virtual machine scale set. Replace `$CAPABILITY` with the [name of the fault capability you're enabling](chaos-studio-fault-library.md) (for example, `CPUPressure-1.0`).

```azurecli-interactive
- az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent/capabilities/$CAPABILITY?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent/capabilities/$CAPABILITY?api-version=2023-11-01" --body "{\"properties\":{}}"
```

For example, if you're enabling the CPU Pressure capability:

```azurecli-interactive
- az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-Agent/capabilities/CPUPressure-1.0?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-Agent/capabilities/CPUPressure-1.0?api-version=2023-11-01" --body "{\"properties\":{}}"
```

### Install the Chaos Studio virtual machine extension
The chaos agent is an application that runs in your VM or virtual machine scale
- Optionally, an Application Insights instrumentation key that enables the agent to send diagnostic events to Application Insights.

1. Before you begin, make sure you have the following details:
- * **agentProfileId**: The property returned when you create the target. If you don't have this property, you can run `az rest --method get --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview` and copy the `agentProfileId` property.
- * **ClientId**: The client ID of the user-assigned managed identity used in the target. If you don't have this property, you can run `az rest --method get --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2021-09-15-preview` and copy the `clientId` property.
+ * **agentProfileId**: The property returned when you create the target. If you don't have this property, you can run `az rest --method get --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2023-11-01` and copy the `agentProfileId` property.
+ * **ClientId**: The client ID of the user-assigned managed identity used in the target. If you don't have this property, you can run `az rest --method get --uri https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-Agent?api-version=2023-11-01` and copy the `clientId` property.
* **(Optionally) AppInsightsKey**: The instrumentation key for your Application Insights component, which you can find on the Application Insights page in the portal under **Essentials**.

1. Install the Chaos Studio VM extension. Replace `$VM_RESOURCE_ID` with the resource ID of your VM or replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$VMSS_NAME` with those properties for your virtual machine scale set. Replace `$AGENT_PROFILE_ID` with the agent Profile ID. Replace `$USER_IDENTITY_CLIENT_ID` with the client ID of your managed identity. Replace `$APP_INSIGHTS_KEY` with your Application Insights instrumentation key. If you aren't using Application Insights, remove that key/value pair.
After you've successfully deployed your VM, you can create your experiment.
1. Create the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure you've saved and uploaded your experiment JSON. Update `experiment.json` with your JSON filename.

```azurecli-interactive
- az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2021-09-15-preview --body @experiment.json
+ az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2023-11-01 --body @experiment.json
```

Each experiment creates a corresponding system-assigned managed identity. Note the principal ID for this identity in the response for the next step.
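One way to pull that principal ID out of the response is a small text extraction. A sketch, using a hypothetical, truncated example of the identity section an experiment resource returns:

```azurecli
# Hypothetical response fragment; your real response will contain more fields.
RESPONSE='{"identity":{"type":"SystemAssigned","principalId":"11111111-2222-3333-4444-555555555555"}}'

# Extract the principalId value with sed.
PRINCIPAL_ID=$(echo "$RESPONSE" | sed -n 's/.*"principalId":"\([^"]*\)".*/\1/p')
echo "$PRINCIPAL_ID"
```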
You're now ready to run your experiment. To see the effect, we recommend that yo
1. Start the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment.

```azurecli-interactive
- az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2021-09-15-preview
+ az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2023-11-01
```

1. The response includes a status URL that you can use to query experiment status as the experiment runs.
chaos-studio Chaos Studio Tutorial Agent Based Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-agent-based-portal.md
# Create a chaos experiment that uses an agent-based fault with the Azure portal
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a high CPU event on a Linux virtual machine (VM) by using a chaos experiment and Azure Chaos Studio Preview. Running this experiment can help you defend against an application from becoming resource starved.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a high CPU event on a Linux virtual machine (VM) by using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend your application from becoming resource starved.
You can use these same steps to set up and run an experiment for any agent-based fault. An *agent-based* fault requires setup and installation of the chaos agent. A service-direct fault runs directly against an Azure resource without any need for instrumentation.
Virtual machines have two target types. One target type enables service-direct f
> Prior to finishing the next steps, you must [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). Then you assign it to the target VM or virtual machine scale set.

1. Open the [Azure portal](https://portal.azure.com).
-1. Search for **Chaos Studio (preview)** in the search bar.
+1. Search for **Chaos Studio** in the search bar.
1. Select **Targets** and move to your VM. ![Screenshot that shows the Targets view in the Azure portal.](images/tutorial-agent-based-targets.png)
chaos-studio Chaos Studio Tutorial Aks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-cli.md
Title: Create a chaos experiment using a Chaos Mesh fault with Azure CLI
-description: Create an experiment that uses an AKS Chaos Mesh fault by using Azure Chaos Studio Preview with the Azure CLI.
+description: Create an experiment that uses an AKS Chaos Mesh fault by using Azure Chaos Studio with the Azure CLI.
Last updated 04/21/2022
ms.devlang: azurecli
# Create a chaos experiment that uses a Chaos Mesh fault with the Azure CLI
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause periodic Azure Kubernetes Service (AKS) pod failures on a namespace by using a chaos experiment and Azure Chaos Studio Preview. Running this experiment can help you defend against service unavailability when there are sporadic failures.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause periodic Azure Kubernetes Service (AKS) pod failures on a namespace by using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend against service unavailability when there are sporadic failures.
Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source chaos engineering platform for Kubernetes, to inject faults into an AKS cluster. Chaos Mesh faults are [service-direct](chaos-studio-tutorial-aks-portal.md) faults that require Chaos Mesh to be installed on the AKS cluster. You can use these same steps to set up and run an experiment for any AKS Chaos Mesh fault.
Chaos Studio can't inject faults against a resource unless that resource is adde
1. Create a target by replacing `$RESOURCE_ID` with the resource ID of the AKS cluster you're adding.

```azurecli-interactive
- az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh?api-version=2023-11-01" --body "{\"properties\":{}}"
```

1. Create the capabilities on the target by replacing `$RESOURCE_ID` with the resource ID of the AKS cluster you're adding. Replace `$CAPABILITY` with the [name of the fault capability you're enabling](chaos-studio-fault-library.md).

```azurecli-interactive
- az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh/capabilities/$CAPABILITY?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh/capabilities/$CAPABILITY?api-version=2023-11-01" --body "{\"properties\":{}}"
```

For example, if you're enabling the `PodChaos` capability:

```azurecli-interactive
- az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.ContainerService/managedClusters/myCluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh/capabilities/PodChaos-2.1?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.ContainerService/managedClusters/myCluster/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh/capabilities/PodChaos-2.1?api-version=2023-11-01" --body "{\"properties\":{}}"
```

This step must be done for each capability you want to enable on the cluster.
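Because each capability uses the same PUT pattern, a loop can enable several at once. A sketch with a placeholder resource ID; the capability names and versions are assumptions, so check the fault library for the exact ones you need:

```azurecli
# Hypothetical AKS cluster resource ID.
RESOURCE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.ContainerService/managedClusters/myCluster"

# Assumed capability names/versions for illustration only.
for CAPABILITY in "PodChaos-2.1" "NetworkChaos-2.1"; do
  URL="https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-AzureKubernetesServiceChaosMesh/capabilities/$CAPABILITY?api-version=2023-11-01"
  echo "$URL"
  # az rest --method put --url "$URL" --body "{\"properties\":{}}"
done
```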
Now you can create your experiment. A chaos experiment defines the actions you w
1. Create the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure you've saved and uploaded your experiment JSON. Update `experiment.json` with your JSON filename.

```azurecli-interactive
- az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2021-09-15-preview --body @experiment.json
+ az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2023-11-01 --body @experiment.json
```

Each experiment creates a corresponding system-assigned managed identity. Note the principal ID for this identity in the response for the next step.
You're now ready to run your experiment. To see the effect, we recommend that yo
1. Start the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment.

```azurecli-interactive
- az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2021-09-15-preview
+ az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2023-11-01
```

1. The response includes a status URL that you can use to query experiment status as the experiment runs.
chaos-studio Chaos Studio Tutorial Aks Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aks-portal.md
Title: Create an experiment using a Chaos Mesh fault with the Azure portal
-description: Create an experiment that uses an AKS Chaos Mesh fault by using Azure Chaos Studio Preview with the Azure portal.
+description: Create an experiment that uses an AKS Chaos Mesh fault by using Azure Chaos Studio with the Azure portal.
Last updated 04/21/2022
# Create a chaos experiment that uses a Chaos Mesh fault to kill AKS pods with the Azure portal
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause periodic Azure Kubernetes Service (AKS) pod failures on a namespace by using a chaos experiment and Azure Chaos Studio Preview. Running this experiment can help you defend against service unavailability when there are sporadic failures.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause periodic Azure Kubernetes Service (AKS) pod failures on a namespace by using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend against service unavailability when there are sporadic failures.
Chaos Studio uses [Chaos Mesh](https://chaos-mesh.org/), a free, open-source chaos engineering platform for Kubernetes, to inject faults into an AKS cluster. Chaos Mesh faults are [service-direct](chaos-studio-tutorial-aks-portal.md) faults that require Chaos Mesh to be installed on the AKS cluster. You can use these same steps to set up and run an experiment for any AKS Chaos Mesh fault.
You can also [use the installation instructions on the Chaos Mesh website](https
Chaos Studio can't inject faults against a resource unless that resource is added to Chaos Studio first. You add a resource to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. AKS clusters have only one target type (service-direct), but other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Each type of Chaos Mesh fault is represented as a capability like PodChaos, NetworkChaos, and IOChaos.

1. Open the [Azure portal](https://portal.azure.com).
-1. Search for **Chaos Studio (preview)** in the search bar.
+1. Search for **Chaos Studio** in the search bar.
1. Select **Targets** and go to your AKS cluster. ![Screenshot that shows the Targets view in the Azure portal.](images/tutorial-aks-targets.png)
chaos-studio Chaos Studio Tutorial Dynamic Target Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-cli.md
If you want to install and use the CLI locally, this tutorial requires Azure CLI
## Enable Chaos Studio on your Virtual Machine Scale Sets instance
-Azure Chaos Studio Preview can't inject faults against a resource unless that resource was added to Chaos Studio first. To add a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource.
+Azure Chaos Studio can't inject faults against a resource unless that resource was added to Chaos Studio first. To add a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource.
Virtual Machine Scale Sets has only one target type (`Microsoft-VirtualMachineScaleSet`) and one capability (`shutdown`). Other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Other resources also might have many other capabilities.

1. Create a [target for your virtual machine scale set](chaos-studio-fault-providers.md) resource. Replace `$RESOURCE_ID` with the resource ID of the virtual machine scale set you're adding:

```azurecli-interactive
- az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachineScaleSet?api-version=2022-10-01-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachineScaleSet?api-version=2023-11-01" --body "{\"properties\":{}}"
```

1. Create the capabilities on the virtual machine scale set target. Replace `$RESOURCE_ID` with the resource ID of the resource you're adding. Specify the `VirtualMachineScaleSet` target and the `Shutdown-2.0` capability.

```azurecli-interactive
- az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachineScaleSet/capabilities/Shutdown-2.0?api-version=2022-10-01-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachineScaleSet/capabilities/Shutdown-2.0?api-version=2023-11-01" --body "{\"properties\":{}}"
```

You've now successfully added your virtual machine scale set to Chaos Studio.
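If you want to confirm the target was created, you can do a GET on the same endpoint you used for the PUT. A minimal sketch, with a placeholder resource ID:

```azurecli
# Hypothetical virtual machine scale set resource ID.
RESOURCE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachineScaleSets/myVMSS"

# Same target endpoint as the PUT above, queried with GET.
TARGET_URL="https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachineScaleSet?api-version=2023-11-01"
echo "$TARGET_URL"

# az rest --method get --url "$TARGET_URL"
```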
Now you can create your experiment. A chaos experiment defines the actions you w
1. Create the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure that you saved and uploaded your experiment JSON. Update `experiment.json` with your JSON filename.

```azurecli-interactive
- az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2022-10-01-preview --body @experiment.json
+ az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2023-11-01 --body @experiment.json
```

Each experiment creates a corresponding system-assigned managed identity. Note the principal ID for this identity in the response for the next step.
You're now ready to run your experiment. To see the effect, check the portal to
1. Start the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment.

```azurecli-interactive
- az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2022-10-01-preview
+ az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2023-11-01
```

1. The response includes a status URL that you can use to query experiment status as the experiment runs.
chaos-studio Chaos Studio Tutorial Dynamic Target Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-dynamic-target-portal.md
You can use these same steps to set up and run an experiment for any fault that
## Enable Chaos Studio on your virtual machine scale sets
-Azure Chaos Studio Preview can't inject faults against a resource until that resource is added to Chaos Studio. To add a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource.
+Azure Chaos Studio can't inject faults against a resource until that resource is added to Chaos Studio. To add a resource to Chaos Studio, create a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource.
Virtual Machine Scale Sets has only one target type (`Microsoft-VirtualMachineScaleSet`) and one capability (`shutdown`). Other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Other resources also might have many other capabilities.
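The nesting described above — a target as a child of the Azure resource, and capabilities as children of the target — can be sketched by composing the resource IDs that the `az rest` calls in this article operate on. All values below are placeholders:

```shell
# Placeholder values for illustration; substitute your own subscription, group, and scale set.
RESOURCE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachineScaleSets/myVMSS"
TARGET_TYPE="Microsoft-VirtualMachineScaleSet"
CAPABILITY="Shutdown-2.0"

# The target is a child resource of the scale set under the Microsoft.Chaos provider...
TARGET_ID="$RESOURCE_ID/providers/Microsoft.Chaos/targets/$TARGET_TYPE"

# ...and each capability is a child resource of the target.
CAPABILITY_ID="$TARGET_ID/capabilities/$CAPABILITY"

echo "$CAPABILITY_ID"
```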
chaos-studio Chaos Studio Tutorial Service Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-cli.md
ms.devlang: azurecli
# Create a chaos experiment that uses a service-direct fault with the Azure CLI
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a multi-read, single-write Azure Cosmos DB failover by using a chaos experiment and Azure Chaos Studio Preview. Running this experiment can help you defend against data loss when a failover event occurs.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a multi-read, single-write Azure Cosmos DB failover by using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend against data loss when a failover event occurs.
You can use these same steps to set up and run an experiment for any service-direct fault. A *service-direct* fault runs directly against an Azure resource without any need for instrumentation, unlike agent-based faults, which require installation of the chaos agent.
Chaos Studio can't inject faults against a resource unless that resource was add
1. Create a target by replacing `$RESOURCE_ID` with the resource ID of the resource you're adding. Replace `$TARGET_TYPE` with the [target type you're adding](chaos-studio-fault-providers.md):

   ```azurecli-interactive
- az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/$TARGET_TYPE?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/$TARGET_TYPE?api-version=2023-11-01" --body "{\"properties\":{}}"
   ```

   For example, if you're adding a virtual machine as a service-direct target:

   ```azurecli-interactive
- az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine?api-version=2023-11-01" --body "{\"properties\":{}}"
   ```

1. Create the capabilities on the target by replacing `$RESOURCE_ID` with the resource ID of the resource you're adding. Replace `$TARGET_TYPE` with the [target type you're adding](chaos-studio-fault-providers.md). Replace `$CAPABILITY` with the [name of the fault capability you're enabling](chaos-studio-fault-library.md).

   ```azurecli-interactive
- az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/$TARGET_TYPE/capabilities/$CAPABILITY?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/$RESOURCE_ID/providers/Microsoft.Chaos/targets/$TARGET_TYPE/capabilities/$CAPABILITY?api-version=2023-11-01" --body "{\"properties\":{}}"
   ```

   For example, if you're enabling the virtual machine shutdown capability:

   ```azurecli-interactive
- az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine/capabilities/shutdown-1.0?api-version=2021-09-15-preview" --body "{\"properties\":{}}"
+ az rest --method put --url "https://management.azure.com/subscriptions/b65f2fec-d6b2-4edd-817e-9339d8c01dc4/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM/providers/Microsoft.Chaos/targets/Microsoft-VirtualMachine/capabilities/shutdown-1.0?api-version=2023-11-01" --body "{\"properties\":{}}"
   ```

You've now successfully added your Azure Cosmos DB account to Chaos Studio.
Now you can create your experiment. A chaos experiment defines the actions you w
1. Create the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment. Make sure that you've saved and uploaded your experiment JSON. Update `experiment.json` with your JSON filename.

   ```azurecli-interactive
- az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2021-09-15-preview --body @experiment.json
+ az rest --method put --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME?api-version=2023-11-01 --body @experiment.json
   ```

   Each experiment creates a corresponding system-assigned managed identity. Note the principal ID for this identity in the response for the next step.
You're now ready to run your experiment. To see the effect, we recommend that yo
1. Start the experiment by using the Azure CLI. Replace `$SUBSCRIPTION_ID`, `$RESOURCE_GROUP`, and `$EXPERIMENT_NAME` with the properties for your experiment.

   ```azurecli-interactive
- az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2021-09-15-preview
+ az rest --method post --uri https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Chaos/experiments/$EXPERIMENT_NAME/start?api-version=2023-11-01
   ```

1. The response includes a status URL that you can use to query experiment status as the experiment runs.
chaos-studio Chaos Studio Tutorial Service Direct Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-service-direct-portal.md
Title: Create an experiment using a service-direct fault with Chaos Studio
-description: Create an experiment that uses a service-direct fault with Azure Chaos Studio Preview to fail over an Azure Cosmos DB instance.
+description: Create an experiment that uses a service-direct fault with Azure Chaos Studio to fail over an Azure Cosmos DB instance.
# Create a chaos experiment that uses a service-direct fault to fail over an Azure Cosmos DB instance
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a multi-read, single-write Azure Cosmos DB failover by using a chaos experiment and Azure Chaos Studio Preview. Running this experiment can help you defend against data loss when a failover event occurs.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you cause a multi-read, single-write Azure Cosmos DB failover by using a chaos experiment and Azure Chaos Studio. Running this experiment can help you defend against data loss when a failover event occurs.
You can use these same steps to set up and run an experiment for any service-direct fault. A *service-direct* fault runs directly against an Azure resource without any need for instrumentation. Agent-based faults require installation of the chaos agent.
You can use these same steps to set up and run an experiment for any service-dir
Chaos Studio can't inject faults against a resource unless that resource is added to Chaos Studio first. You add a resource to Chaos Studio by creating a [target and capabilities](chaos-studio-targets-capabilities.md) on the resource. Azure Cosmos DB accounts have only one target type (service-direct) and one capability (failover). Other resources might have up to two target types. One target type is for service-direct faults. Another target type is for agent-based faults. Other resources might have many other capabilities.

1. Open the [Azure portal](https://portal.azure.com).
-1. Search for **Chaos Studio (preview)** in the search bar.
+1. Search for **Chaos Studio** in the search bar.
1. Select **Targets** and go to your Azure Cosmos DB account.

   ![Screenshot that shows the Targets view in the Azure portal.](images/tutorial-service-direct-targets.png)
chaos-studio Sample Policy Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-policy-targets.md
Title: Azure Policy samples for adding resources to Chaos Studio Preview
-description: Sample Azure policies to add resources to Azure Chaos Studio Preview by using targets and capabilities.
+ Title: Azure Policy samples for adding resources to Chaos Studio
+description: Sample Azure policies to add resources to Azure Chaos Studio by using targets and capabilities.
-# Azure Policy samples for adding resources to Azure Chaos Studio Preview
-This article includes sample [Azure Policy](../governance/policy/overview.md) definitions that create [targets and capabilities](chaos-studio-targets-capabilities.md) for a specific resource type. You can automatically add resources to Azure Chaos Studio Preview. First, you [deploy these samples as custom policy definitions](../governance/policy/tutorials/create-custom-policy-definition.md). Then you [assign the policy](../governance/policy/assign-policy-portal.md) to a scope.
+# Azure Policy samples for adding resources to Azure Chaos Studio
+This article includes sample [Azure Policy](../governance/policy/overview.md) definitions that create [targets and capabilities](chaos-studio-targets-capabilities.md) for a specific resource type. You can automatically add resources to Azure Chaos Studio. First, you [deploy these samples as custom policy definitions](../governance/policy/tutorials/create-custom-policy-definition.md). Then you [assign the policy](../governance/policy/assign-policy-portal.md) to a scope.
In these samples, we add service-direct targets and capabilities for each [supported resource type](chaos-studio-fault-providers.md) by using [targets and capabilities](chaos-studio-targets-capabilities.md).
In these samples, we add service-direct targets and capabilities for each [suppo
"resources": [ { "type": "Microsoft.Cache/Redis/providers/targets",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureCacheForRedis')]", "location": "[parameters('location')]", "properties": {} }, { "type": "Microsoft.Cache/Redis/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureCacheForRedis/Reboot-1.0')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
"resources": [ { "type": "Microsoft.DocumentDB/databaseAccounts/providers/targets",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-CosmosDB')]", "location": "[parameters('location')]", "properties": {} }, { "type": "Microsoft.DocumentDB/databaseAccounts/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-CosmosDB/Failover-1.0')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
"resources": [ { "type": "Microsoft.ContainerService/managedClusters/providers/targets",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh')]", "location": "[parameters('location')]", "properties": {} }, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/NetworkChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/PodChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/StressChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/IOChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/TimeChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/KernelChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/DNSChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/HTTPChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
"resources": [ { "type": "Microsoft.Network/networkSecurityGroups/providers/targets",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-NetworkSecurityGroup')]", "location": "[parameters('location')]", "properties": {} }, { "type": "Microsoft.Network/networkSecurityGroups/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-NetworkSecurityGroup/SecurityRule-1.0')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
"resources": [ { "type": "Microsoft.Compute/virtualMachines/providers/targets",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-VirtualMachine')]", "location": "[parameters('location')]", "properties": {} }, { "type": "Microsoft.Compute/virtualMachines/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-VirtualMachine/Shutdown-1.0')]", "location": "[parameters('location')]", "dependsOn": [
In these samples, we add service-direct targets and capabilities for each [suppo
"resources": [ { "type": "Microsoft.Compute/virtualMachineScaleSets/providers/targets",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-VirtualMachineScaleSet')]", "location": "[parameters('location')]", "properties": {} }, { "type": "Microsoft.Compute/virtualMachineScaleSets/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-VirtualMachineScaleSet/Shutdown-1.0')]", "location": "[parameters('location')]", "dependsOn": [
chaos-studio Sample Template Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-template-experiment.md
Title: Azure Resource Manager template samples for chaos experiments
-description: Sample Azure Resource Manager templates to create Azure Chaos Studio Preview experiments.
+description: Sample Azure Resource Manager templates to create Azure Chaos Studio experiments.
-# ARM template samples for experiments in Azure Chaos Studio Preview
-This article includes sample [Azure Resource Manager templates (ARM templates)](../azure-resource-manager/templates/syntax.md) to create a [chaos experiment](chaos-studio-chaos-experiments.md) in Azure Chaos Studio Preview. Each sample includes a template file and a parameters file with sample values to provide to the template.
+# ARM template samples for experiments in Azure Chaos Studio
+This article includes sample [Azure Resource Manager templates (ARM templates)](../azure-resource-manager/templates/syntax.md) to create a [chaos experiment](chaos-studio-chaos-experiments.md) in Azure Chaos Studio. Each sample includes a template file and a parameters file with sample values to provide to the template.
## Create an experiment (sample)
In this sample, we create a chaos experiment with a single target resource and a
"resources": [ { "type": "Microsoft.Chaos/experiments",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[parameters('experimentName')]", "location": "[parameters('location')]", "identity": {
chaos-studio Sample Template Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-template-targets.md
Title: ARM template samples for targets and capabilities in Chaos Studio
-description: Sample ARM templates to add resources to Azure Chaos Studio Preview by using targets and capabilities.
+ Title: Resource Manager template samples for targets and capabilities in Chaos Studio
+description: Sample Azure Resource Manager (ARM) templates to add resources to Azure Chaos Studio by using targets and capabilities.
-# ARM template samples for targets and capabilities in Azure Chaos Studio Preview
-This article includes sample [Azure Resource Manager templates (ARM templates)](../azure-resource-manager/templates/syntax.md) to create [targets and capabilities](chaos-studio-targets-capabilities.md) to add a resource to Azure Chaos Studio Preview. Each sample includes a template file and a parameters file with sample values to provide to the template.
+# Azure Resource Manager template samples for targets and capabilities in Azure Chaos Studio
+This article includes sample [Azure Resource Manager templates (ARM templates)](../azure-resource-manager/templates/syntax.md) to create [targets and capabilities](chaos-studio-targets-capabilities.md) to add a resource to Azure Chaos Studio. Each sample includes a template file and a parameters file with sample values to provide to the template.
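As context for the samples, each parameters file follows the standard ARM deployment-parameters shape. The sketch below uses placeholder values; `resourceName` and `location` are the parameters these sample templates reference:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "resourceName": { "value": "myCosmosDBAccount" },
    "location": { "value": "eastus" }
  }
}
```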
## Add service-direct target and capabilities (single capability)
In this sample, we add an Azure Cosmos DB instance by using [targets and capabil
"resources": [ { "type": "Microsoft.DocumentDB/databaseAccounts/providers/targets",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-CosmosDB')]", "location": "[parameters('location')]", "properties": {} }, { "type": "Microsoft.DocumentDB/databaseAccounts/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-CosmosDB/Failover-1.0')]", "location": "[parameters('location')]", "dependsOn": [
In this sample, we add an Azure Kubernetes Service cluster by using [targets and
"resources": [ { "type": "Microsoft.ContainerService/managedClusters/providers/targets",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh')]", "location": "[parameters('location')]", "properties": {} }, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/NetworkChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In this sample, we add an Azure Kubernetes Service cluster by using [targets and
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/PodChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In this sample, we add an Azure Kubernetes Service cluster by using [targets and
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/StressChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In this sample, we add an Azure Kubernetes Service cluster by using [targets and
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/IOChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In this sample, we add an Azure Kubernetes Service cluster by using [targets and
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/TimeChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In this sample, we add an Azure Kubernetes Service cluster by using [targets and
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/KernelChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In this sample, we add an Azure Kubernetes Service cluster by using [targets and
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/DNSChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
In this sample, we add an Azure Kubernetes Service cluster by using [targets and
}, { "type": "Microsoft.ContainerService/managedClusters/providers/targets/capabilities",
- "apiVersion": "2021-09-15-preview",
+ "apiVersion": "2023-11-01",
"name": "[concat(parameters('resourceName'), '/', 'Microsoft.Chaos/Microsoft-AzureKubernetesServiceChaosMesh/HTTPChaos-2.1')]", "location": "[parameters('location')]", "dependsOn": [
chaos-studio Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/troubleshooting.md
Title: Troubleshoot common Azure Chaos Studio Preview problems
-description: Learn to troubleshoot common problems when you use Azure Chaos Studio Preview.
+ Title: Troubleshoot common Azure Chaos Studio problems
+description: Learn to troubleshoot common problems when you use Azure Chaos Studio.
Last updated 11/10/2021
-# Troubleshoot issues with Azure Chaos Studio Preview
+# Troubleshoot issues with Azure Chaos Studio
-As you use Azure Chaos Studio Preview, you might occasionally encounter some problems. This article explains common problems and troubleshooting steps.
+As you use Azure Chaos Studio, you might occasionally encounter some problems. This article explains common problems and troubleshooting steps.
## General troubleshooting tips
chaos-studio Tutorial Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/tutorial-schedule.md
Title: Schedule a recurring experiment run with Chaos Studio Preview
-description: Set up a logic app that schedules a chaos experiment in Azure Chaos Studio Preview to run periodically.
+ Title: Schedule a recurring experiment run with Chaos Studio
+description: Set up a logic app that schedules a chaos experiment in Azure Chaos Studio to run periodically.
-# Tutorial: Schedule a recurring experiment with Azure Chaos Studio Preview
+# Tutorial: Schedule a recurring experiment with Azure Chaos Studio
-Azure Chaos Studio Preview lets you run chaos experiments that intentionally fail part of your application or service to verify that it's resilient against those failures. It can be useful to run these chaos experiments periodically to ensure that your application's resilience hasn't regressed or to meet compliance requirements. In this tutorial, you use a [logic app](../logic-apps/logic-apps-overview.md) to trigger an experiment to run once a day.
+Azure Chaos Studio lets you run chaos experiments that intentionally fail part of your application or service to verify that it's resilient against those failures. It can be useful to run these chaos experiments periodically to ensure that your application's resilience hasn't regressed or to meet compliance requirements. In this tutorial, you use a [logic app](../logic-apps/logic-apps-overview.md) to trigger an experiment to run once a day.
In this tutorial, you learn how to:
Now that you have a trigger, add an [action](../logic-apps/logic-apps-overview.m
| **Resource Group** | <*Resource-group-name*> | The name for the resource group where your chaos experiment is deployed. This example uses **chaosstudiodemo**. |
| **Resource Provider** | `Microsoft.Chaos` | The Chaos Studio resource provider. |
| **Short Resource Id** | `experiments/`<*Experiment-name*> | The name of your chaos experiment preceded by **experiments/**. |
- | **Client Api Version** | `2021-09-15-preview` | The Chaos Studio REST API version. |
+ | **Client Api Version** | `2023-11-01` | The Chaos Studio REST API version. |
| **Action name** | `start` | The name of the Chaos Studio experiment action. Always **start**. |

1. Save your logic app. On the designer toolbar, select **Save**.
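For reference, the once-a-day schedule configured earlier corresponds to a Recurrence trigger in the logic app's workflow definition. A minimal sketch follows; the trigger name is arbitrary:

```json
"triggers": {
  "Run_daily": {
    "type": "Recurrence",
    "recurrence": {
      "frequency": "Day",
      "interval": 1
    }
  }
}
```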
cloud-shell Vnet Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-shell/vnet-deployment.md
description: This article provides step-by-step instructions to deploy Azure Cloud Shell in a private virtual network. ms.contributor: jahelmic Previously updated : 10/10/2023 Last updated : 11/01/2023 Title: Deploy Azure Cloud Shell in a virtual network with quickstart templates
Fill in the following values:
- **Resource Group**: The name of the resource group for the Cloud Shell virtual network deployment. - **Region**: The location of the resource group. - **Virtual Network**: The name of the Cloud Shell virtual network.
+- **Subnet address ranges** - This deployment creates three subnets. You need to plan your address
+  ranges for each subnet.
+  - **Container subnet** - You need enough IP addresses to support the number of concurrent sessions
+    that you expect to use.
+  - **Relay subnet** - You need at least one IP address for the Relay subnet.
+  - **Storage subnet** - You need enough IP addresses to support the number of concurrent sessions
+    that you expect to use.
- **Azure Container Instance OID**: The ID of the Azure container instance for your resource group.
- **Azure Relay Namespace**: The name that you want to assign to the Azure Relay resource that the template creates.
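To sanity-check a subnet plan like the one above, Python's `ipaddress` module can report the usable addresses in each range. The ranges below are hypothetical; note that Azure reserves five addresses in every subnet:

```python
import ipaddress

# Hypothetical address ranges; replace with your own plan.
subnets = {
    "container": "10.0.1.0/24",  # sized for expected concurrent sessions
    "relay": "10.0.2.0/28",      # needs at least one usable address
    "storage": "10.0.3.0/24",    # sized for expected concurrent sessions
}

for name, cidr in subnets.items():
    network = ipaddress.ip_network(cidr)
    # Azure reserves five addresses per subnet (network, broadcast, and three internal).
    usable = network.num_addresses - 5
    print(f"{name}: {cidr} -> {usable} usable addresses")
```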
the [quickstart templates][07] to configure a virtual network for Cloud Shell.
[![Screenshot of Azure Container Instance Service details.][96]][96a]
-## 3. Create the virtual network by using the ARM template
+## 3. Create the required network resources by using the ARM template
Use the [Azure Cloud Shell - VNet][08] template to create Cloud Shell resources in a virtual network. The template creates three subnets under the virtual network that you created earlier. You
communication-services Room Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/rooms/room-concept.md
Here are the main scenarios where rooms are useful:
- **Rooms enable a scheduled communication experience.** Rooms help service platforms deliver meeting-style experiences while still being suitably generic for a wide variety of industry applications. Services can schedule and manage rooms for patients seeking medical advice, financial planners working with clients, and lawyers providing legal services.
- **Rooms enable an invite-only experience.** Rooms allow your services to control which users can join the room for a virtual appointment with doctors or financial consultants. This allows only a subset of users with assigned Communication Services identities to join a room call.
- **Rooms enable structured communications through roles and permissions.** Rooms allow developers to assign predefined roles to users to exercise a higher degree of control and structure in communication. Ensure only presenters can speak and share content in a large meeting or in a virtual conference.
-- **Rooms enable to perform calls using PSTN.** Rooms enable users to invite participants to a meeting by making phone calls through the public switched telephone network (PSTN).
+- **Add PSTN participants.** Invite public switched telephone network (PSTN) participants to a call using a number purchased through your subscription or via Azure direct routing to your Session Border Controller (SBC).
## When to use rooms
The tables below provide detailed capabilities mapped to the roles. At a high le
| - Render a video in multiple places (local camera or remote stream) | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> |
| - Set/Update video scaling mode | ✔️ | ✔️ | ✔️ <br>(Only Remote)</br> |
| - Render remote video stream | ✔️ | ✔️ | ✔️ |
-| **PSTN calls** | | |
+| **Add PSTN participants** | | |
| - Call participants using phone calls | ✔️ | ❌ | ❌ |

*) Only available on the web calling SDK. Not available on iOS and Android calling SDKs
communications-gateway Configure Test Numbers Zoom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/configure-test-numbers-zoom.md
+
+ Title: Set up test numbers for Zoom Phone Cloud Peering with Azure Communications Gateway
+description: Learn how to configure Azure Communications Gateway with Zoom Phone Cloud Peering numbers for testing.
++++ Last updated : 11/06/2023+
+#CustomerIntent: As someone deploying Azure Communications Gateway, I want to test my deployment so that I can be sure that calls work.
++
+# Configure test numbers for Zoom Phone Cloud Peering with Azure Communications Gateway
+
+To test Zoom Phone Cloud Peering with Azure Communications Gateway, you need test numbers. By following this article, you can set up the required user and number configuration in Zoom, on Azure Communications Gateway, and in your network. You can then start testing.
+
+## Prerequisites
+
+You must have [chosen test numbers](deploy.md#prerequisites). You need two types of test number:
+- Numbers for integration testing by your staff.
+- Numbers for service verification (continuous call testing) by your chosen communication services.
+
+You must have completed the following procedures:
+
+- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md)
+- [Deploy Azure Communications Gateway](deploy.md)
+- [Connect Azure Communications Gateway to Zoom Phone Cloud Peering](connect-zoom.md)
+
+Your organization must have integrated with Azure Communications Gateway's Provisioning API. Someone in your organization must be able to make requests using the Provisioning API during this procedure.
+
+You must be an owner or admin of a Zoom account that you want to use for testing.
+
+You must be able to contact your Zoom representative.
+
+## Configure the test numbers for integration testing on Azure Communications Gateway
+
+You must provision Azure Communications Gateway with the details of the test numbers for integration testing. This provisioning allows Azure Communications Gateway to identify the calls that should receive Zoom service.
+
+> [!IMPORTANT]
+> Do not provision the service verification numbers for Zoom. Azure Communications Gateway routes calls involving those numbers automatically. Any provisioning you do for those numbers has no effect.
+
+This step requires Azure Communications Gateway's Provisioning API. The API allows you to indicate to Azure Communications Gateway which service(s) you are supporting for each number, using _account_ and _number_ resources.
+- Account resources are descriptions of your customers (typically, an enterprise), and per-customer settings for service provisioning.
+- Number resources belong to an account. They describe numbers, the services (for example, Zoom) that the numbers make use of, and any extra per-number configuration.
+
+Use the Provisioning API for Azure Communications Gateway to:
+
+1. Provision an account to group the test numbers. Enable Zoom service for the account.
+1. Provision the details of the numbers you chose under the account. Enable each number for Zoom service.
+
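As an illustrative sketch only, the two resource types described above could be modeled as follows. All field names (`serviceDetails`, `zoomPhoneCloudPeering`, and so on) and helper names are assumptions for illustration, not the real Provisioning API schema; consult the Provisioning API documentation for the actual resource formats and request syntax.

```python
# Hypothetical sketch of account and number resources. Every field name here is
# an assumption for illustration; check the Provisioning API reference for the
# real schema before making requests.

def make_account(name: str, enable_zoom: bool = True) -> dict:
    """Describe a customer account, with per-customer service settings."""
    return {
        "name": name,
        "serviceDetails": {"zoomPhoneCloudPeering": {"enabled": enable_zoom}},
    }

def make_number(telephone_number: str, enable_zoom: bool = True) -> dict:
    """Describe a number belonging to an account, with the services it uses."""
    return {
        "telephoneNumber": telephone_number,  # E.164 format, e.g. +19075550101
        "serviceDetails": {"zoomPhoneCloudPeering": {"enabled": enable_zoom}},
    }

# Step 1: an account grouping the test numbers, with Zoom service enabled.
account = make_account("integration-testing")

# Step 2: the test numbers under that account, each enabled for Zoom service.
numbers = [make_number(n) for n in ("+19075550101", "+19075550102")]
```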
+## Configure users in Zoom with the test numbers for integration testing
+
+Upload the numbers for integration testing to Zoom. When you do this, you can optionally configure Zoom to add a header with custom contents to SIP INVITEs. You can use this header to identify the Zoom account for the number or indicate that these are test numbers. For more information on this header, see Zoom's _Zoom Phone Provider Exchange Solution Reference Guide_.
+
+Use [Zoom's instructions for managing phone numbers](https://support.zoom.us/hc/en-us/articles/360020808292-Managing-phone-numbers) to assign the numbers for integration testing to the user accounts that you will use for testing.
+
+> [!IMPORTANT]
+> Do not assign the service verification numbers to Zoom user accounts. In the next step, you will ask your Zoom representative to configure the service verification numbers for you.
+
+## Provide Zoom with the details of the service verification numbers
+
+Ask your Zoom representative to set up the resiliency and failover verification tests using the service verification numbers. Zoom must map the service verification numbers to datacenters in ascending numerical order. For example, if you allocated +19075550101 and +19075550102, Zoom must map +19075550101 to the datacenters for DID 1 and +19075550102 to the datacenters for DID 2.
+
+This ordering matches how Azure Communications Gateway routes calls for these tests, which allows Azure Communications Gateway to pass the tests.
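A minimal sketch of the ordering rule: sorting the allocated service verification numbers numerically gives the DID index each one must map to. The helper name is illustrative only, not part of any Zoom or Azure tooling.

```python
# Sketch of the ascending-order mapping described above: the numerically
# lowest service verification number maps to the datacenters for DID 1,
# the next lowest to DID 2, and so on.

def did_assignments(numbers: list) -> dict:
    """Map each E.164 number to its 1-based DID index, in ascending order."""
    ordered = sorted(numbers, key=lambda n: int(n.lstrip("+")))
    return {number: i for i, number in enumerate(ordered, start=1)}

print(did_assignments(["+19075550102", "+19075550101"]))
# {'+19075550101': 1, '+19075550102': 2}
```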
+
+## Update your network's routing configuration
+
+Update your network configuration to route calls involving all the test numbers to Azure Communications Gateway. For more information about how to route calls to Azure Communications Gateway, see [Call routing requirements](reliability-communications-gateway.md#call-routing-requirements).
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Prepare for live traffic](prepare-for-live-traffic-zoom.md)
+
communications-gateway Connect Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md
After you have deployed Azure Communications Gateway and connected it to your co
This article describes how to set up Azure Communications Gateway for Operator Connect and Teams Phone Mobile. When you have finished the steps in this article, you will be ready to [Prepare for live traffic](prepare-for-live-traffic-operator-connect.md) with Operator Connect, Teams Phone Mobile and Azure Communications Gateway.
+> [!TIP]
+> This article assumes that your Azure Communications Gateway onboarding team from Microsoft is also onboarding you to Operator Connect and/or Teams Phone Mobile. If you've chosen a different onboarding partner for Operator Connect or Teams Phone Mobile, you need to ask them to arrange changes to the Operator Connect and/or Teams Phone Mobile environments.
+
## Prerequisites

You must have carried out all the steps in [Deploy Azure Communications Gateway](deploy.md).
communications-gateway Connect Zoom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-zoom.md
+
+ Title: Connect Azure Communications Gateway to Zoom Phone Cloud Peering
+description: After deploying Azure Communications Gateway, you can configure it to connect to Zoom servers for Zoom Phone Cloud Peering.
++++ Last updated : 11/06/2023+
+ - template-how-to-pattern
++
+# Connect Azure Communications Gateway to Zoom Phone Cloud Peering
+
+After you have deployed Azure Communications Gateway and connected it to your core network, you need to connect it to Zoom.
+
+This article describes how to start setting up Azure Communications Gateway for Zoom Phone Cloud Peering. When you have finished the steps in this article, you can set up test users for test calls and prepare for live traffic.
+
+## Prerequisites
+
+You must have started the onboarding process with Zoom to become a Zoom Phone Cloud Peering provider. For more information on Cloud Peering, see [Zoom's Cloud Peering information](https://partner.zoom.us/partner-type/cloud-peering/).
+
+You must have carried out all the steps in [Deploy Azure Communications Gateway](deploy.md).
+
+Your organization must have integrated with Azure Communications Gateway's Provisioning API.
+
+You must have **Reader** access to the subscription into which Azure Communications Gateway is deployed.
+
+You must be able to contact your Zoom representative.
+
+## Ask your onboarding team for the FQDNs and IP addresses for Azure Communications Gateway
+
+Ask your onboarding team for:
+
+- All the IP addresses that Azure Communications Gateway could use to send signaling and media to Zoom.
+- The FQDNs (fully qualified domain names) that Zoom should use to contact each Azure Communications Gateway region.
+
+Your Zoom representative needs these values to configure Zoom for Azure Communications Gateway.
+
+## Ask your Zoom representative to configure Zoom
+
+Ask your Zoom representative to configure Zoom for Azure Communications Gateway using the IP addresses and FQDNs that you obtained from your onboarding team.
+
+Zoom must:
+
+- Allowlist traffic from the IP addresses for Azure Communications Gateway.
+- Route calls to the FQDNs for Azure Communications Gateway.
+
+You can choose whether Zoom should use an active-active or active-backup distribution of calls to the Azure Communications Gateway regions.
+
+> [!TIP]
+> Don't provide your Zoom representative with the FQDNs from the **Overview** page for your Azure Communications Gateway resource. Those FQDNs are for the connection to your networks.
+
+## Next step
+
+> [!div class="nextstepaction"]
+> [Configure test numbers](configure-test-numbers-zoom.md)
communications-gateway Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md
You must have completed [Prepare to deploy Azure Communications Gateway](prepare
|The Azure resource group in which to create the Azure Communications Gateway resource. |**Project details: Resource group**|
|The name for the deployment. This name can contain alphanumeric characters and `-`. It must be 3-24 characters long. |**Instance details: Name**|
|The management Azure region: the region in which your monitoring and billing data is processed. We recommend that you select a region near or colocated with the two regions for handling call traffic. |**Instance details: Region**|
- |The voice codecs to use between Azure Communications Gateway and your network. |**Call Handling: Supported codecs**|
+ |The voice codecs to use between Azure Communications Gateway and your network. We recommend that you specify codecs only if you have a strong reason to restrict them (for example, licensing of specific codecs) and you can't configure your network or endpoints not to offer those codecs. Restricting codecs can reduce overall voice quality because lower-fidelity codecs might be selected. |**Call Handling: Supported codecs**|
|Whether your Azure Communications Gateway resource should handle emergency calls as standard calls or directly route them to the Emergency Routing Service Provider (US only; only for Operator Connect or Teams Phone Mobile). |**Call Handling: Emergency call handling**|
- |A list of dial strings used for emergency calling.|**Call Handling: Emergency dial strings**|
+ |A comma-separated list of dial strings used for emergency calls. For Microsoft Teams, specify dial strings as the standard emergency number (for example `999`). For Zoom, specify dial strings in the format `+<country-code><emergency-number>` (for example `+44999`).|**Call Handling: Emergency dial strings**|
|Whether to use an autogenerated `*.commsgw.azure.com` domain name or to use a subdomain of your own domain by delegating it to Azure Communications Gateway. For more information on this choice, see [the guidance on creating a network design](prepare-to-deploy.md#create-a-network-design). | **DNS: Domain name options** |
|(Required if you choose an autogenerated domain) The scope at which the autogenerated domain name label for Azure Communications Gateway is unique. Communications Gateway resources are assigned an autogenerated domain name label that depends on the name of the resource. Selecting **Tenant** gives a resource with the same name in the same tenant but a different subscription the same label. Selecting **Subscription** gives a resource with the same name in the same subscription but a different resource group the same label. Selecting **Resource Group** gives a resource with the same name in the same resource group the same label. Selecting **No Re-use** means the label doesn't depend on the name, resource group, subscription or tenant. |**DNS: Auto-generated Domain Name Scope**|
|(Required if you choose a delegated domain) The domain to delegate to this Azure Communications Gateway deployment. | **DNS: DNS domain name** |
For Microsoft Teams Direct Routing:
|**Value**|**Field name(s) in Azure portal**|
|||
-| IP addresses or address ranges (in CIDR format) in your network that should be allowed to connect to the Provisioning API, in a comma-separated list. Use of the Provisioning API is required to provision Azure Communications Gateway with numbers for Direct Routing. | **Options common to multiple communications
+| IP addresses or address ranges (in CIDR format) in your network that should be allowed to connect to Azure Communications Gateway's Provisioning API, in a comma-separated list. Use of the Provisioning API is required to provision numbers for Direct Routing. | **Options common to multiple communications
| Whether to add a custom SIP header to messages entering your network by using Azure Communications Gateway's Provisioning API | **Options common to multiple communications
| (Only if you choose to add a custom SIP header) The name of any custom SIP header | **Options common to multiple communications
For Teams Phone Mobile:
|The number used in Teams Phone Mobile to access the Voicemail Interactive Voice Response (IVR) from native dialers.|**Teams Phone Mobile: Teams voicemail pilot number**|
| How you plan to use Mobile Control Point (MCP) to route Teams Phone Mobile calls to Microsoft Phone System. Choose from **Integrated** (to deploy MCP in Azure Communications Gateway), **On-premises** (to use an existing on-premises MCP) or **None** (if you'll use another method to route calls). |**Teams Phone Mobile: MCP**|
+For Zoom Phone Cloud Peering:
-## Collect test line and number configuration values
+|**Value**|**Field name(s) in Azure portal**|
+|||
+| The Zoom region to connect to | **Zoom: Zoom region** |
+| IP addresses or address ranges (in CIDR format) in your network that should be allowed to connect to Azure Communications Gateway's Provisioning API, in a comma-separated list. Use of the Provisioning API is required to provision numbers for Zoom Phone Cloud Peering. | **Options common to multiple communications
+| Whether to add a custom SIP header to messages entering your network by using Azure Communications Gateway's Provisioning API | **Options common to multiple communications
+| (Only if you choose to add a custom SIP header) The name of any custom SIP header | **Options common to multiple communications
-Collect all of the values in the following table for all the test lines that you want to configure for Azure Communications Gateway.
+## Collect values for service verification numbers
- |**Value**|**Field name(s) in Azure portal**|
- |||
- |A name for the test line. |**Name**|
- |The phone number for the test line, in E.164 format and including the country code. |**Phone Number**|
- |The purpose of the test line: **Manual** (for manual test calls by you and/or Microsoft staff during integration testing) or **Automated** (for automated validation with Microsoft Teams test suites - Operator Connect and Teams Phone Mobile only).|**Testing purpose**|
+Collect all of the values in the following table for all the service verification numbers required by Azure Communications Gateway.
-> [!IMPORTANT]
-> For Operator Connect and Teams Phone Mobile, you must configure at least six automated test lines. We recommend nine automated test lines (to allow simultaneous tests).
+For Operator Connect and Teams Phone Mobile:
+
+|**Value**|**Field name(s) in Azure portal**|
+|||
+|A name for the test line. |**Name**|
+|The phone number for the test line, in E.164 format and including the country code. |**Phone Number**|
+|The purpose of the test line (always **Automated**).|**Testing purpose**|
+
+For Zoom Phone Cloud Peering:
+
+|**Value**|**Field name(s) in Azure portal**|
+|||
+|The phone number for the test line, in E.164 format and including the country code. |**Phone Number**|
+
+Microsoft Teams Direct Routing doesn't require service verification numbers.
## Decide if you want tags
Use the Azure portal to create an Azure Communications Gateway resource.
1. Use the information you collected in [Collect basic information for deploying an Azure Communications Gateway](#collect-basic-information-for-deploying-an-azure-communications-gateway) to fill out the fields in the **Basics** configuration tab and then select **Next: Service Regions**.
1. Use the information you collected in [Collect configuration values for service regions](#collect-configuration-values-for-service-regions) to fill out the fields in the **Service Regions** tab and then select **Next: Communications Services**.
1. Select the communications services that you want to support in the **Communications Services** configuration tab, use the information that you collected in [Collect configuration values for each communications service](#collect-configuration-values-for-each-communications-service) to fill out the fields, and then select **Next: Test Lines**.
-1. Use the information that you collected in [Collect test line and number configuration values](#collect-test-line-and-number-configuration-values) to fill out the fields in the **Test Lines** configuration tab and then select **Next: Tags**.
+1. Use the information that you collected in [Collect values for service verification numbers](#collect-values-for-service-verification-numbers) to fill out the fields in the **Test Lines** configuration tab and then select **Next: Tags**.
+ - Don't configure numbers for integration testing.
+ - Microsoft Teams Direct Routing doesn't require service verification numbers.
1. (Optional) Configure tags for your Azure Communications Gateway resource: enter a **Name** and **Value** for each tag you want to create.
1. Select **Review + create**.
When your resource has been provisioned, you can connect Azure Communications Ga
1. Enable Bidirectional Forwarding Detection (BFD) on your on-premises edge routers to speed up link failure detection.
   - The interval must be 150 ms (or 300 ms if you can't use 150 ms).
   - With MAPS, BFD must bring up the BGP peer for each Private Network Interface (PNI).
-1. Meet any other requirements for your communications platform (for example, the *Network Connectivity Specification* for Operator Connect or Teams Phone Mobile). If you don't have access to Operator Connect or Teams Phone Mobile specifications, contact your onboarding team.
+1. Meet any other requirements for your communications platform (for example, the *Network Connectivity Specification* for Operator Connect or Teams Phone Mobile). If you need access to Operator Connect or Teams Phone Mobile specifications, contact your onboarding team.
## Configure domain delegation with Azure DNS
communications-gateway Emergency Calls Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/emergency-calls-operator-connect.md
+
+ Title: Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway
+description: Understand Azure Communications Gateway's support for emergency calling with Operator Connect and Teams Phone Mobile
++++ Last updated : 10/09/2023+++
+# Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway
+
+Azure Communications Gateway supports Operator Connect and Teams Phone Mobile subscribers making emergency calls from Microsoft Teams clients. This article describes how Azure Communications Gateway routes these calls to your network and the key facts you need to consider.
+
+## Overview of emergency calling with Azure Communications Gateway
+
+If a subscriber uses a Microsoft Teams client to make an emergency call and the subscriber's number is associated with Azure Communications Gateway, Microsoft Phone System routes the call to Azure Communications Gateway. The call has location information encoded in a PIDF-LO (Presence Information Data Format Location Object) SIP body.
+
+Unless you choose to route emergency calls directly to an Emergency Routing Service Provider (US only), Azure Communications Gateway routes emergency calls to your network with this PIDF-LO location information unaltered. It is your responsibility to ensure that these emergency calls are properly routed to an appropriate Public Safety Answering Point (PSAP). For more information on how Microsoft Teams handles emergency calls, see [the Microsoft Teams documentation on managing emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing) and the considerations for [Operator Connect](/microsoftteams/considerations-operator-connect) or [Teams Phone Mobile](/microsoftteams/considerations-teams-phone-mobile).
+
+Microsoft Teams always sends location information on SIP INVITEs for emergency calls. This information can come from several sources, all supported by Azure Communications Gateway:
+
+- [Dynamic locations](/microsoftteams/configure-dynamic-emergency-calling), based on the location of the client used to make the call.
+ - Enterprise administrators must add physical locations associated with network connectivity into the Location Information Server (LIS) in Microsoft Teams.
+ - When Microsoft Teams clients make an emergency call, they obtain their physical location based on their network location.
+- Static locations that you assign to numbers.
+ - The Operator Connect API allows you to associate numbers with locations that enterprise administrators have already configured in the Microsoft Teams Admin Center as part of uploading numbers.
+ - Azure Communications Gateway's Number Management Portal also allows you to associate numbers with locations during upload. You can also manage the locations associated with numbers after the numbers have been uploaded.
+- Static locations that your enterprise customers assign. When you upload numbers, you can choose whether enterprise administrators can modify the location information associated with each number.
+
+> [!NOTE]
+> If you are taking responsibility for assigning static locations to numbers, note that enterprise administrators must have created the locations within the Microsoft Teams Admin Center first.
+
+Azure Communications Gateway identifies emergency calls based on the dialing strings configured when you [deploy Azure Communications Gateway](deploy.md). These strings are also used by Microsoft Teams to identify emergency calls.
+
+## Emergency calling in the United States
+
+Within the United States, Microsoft Teams supports the Emergency Routing Service Providers (ERSPs) listed in the ["911 service providers" section of the list of Session Border Controllers certified for Direct Routing](/microsoftteams/direct-routing-border-controllers). Azure Communications Gateway has been certified to interoperate with these ERSPs.
+
+You must route emergency calls to one of these ERSPs. If your network doesn't support PIDF-LO SIP bodies, Azure Communications Gateway can route emergency calls directly to your chosen ERSP. You must arrange this routing with your onboarding team.
+
+## Emergency calling with Teams Phone Mobile
+
+For Teams Phone Mobile subscribers, Azure Communications Gateway routes emergency calls from Microsoft Teams clients to your network in the same way as other originating calls. The call includes location information in accordance with the [emergency call considerations for Teams Phone Mobile](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing#considerations-for-teams-phone-mobile).
+
+Your network must not route emergency calls from native dialers to Azure Communications Gateway or Microsoft Teams.
+
+## Next steps
+
+- Learn about [the key concepts in Microsoft Teams emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing).
+- Learn about [dynamic emergency calling in Microsoft Teams](/microsoftteams/configure-dynamic-emergency-calling).
communications-gateway Emergency Calls Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/emergency-calls-teams-direct-routing.md
+
+ Title: Emergency calling for Microsoft Teams Direct Routing with Azure Communications Gateway
+description: Understand Azure Communications Gateway's support for emergency calling with Microsoft Teams Direct Routing
++++ Last updated : 10/09/2023+++
+# Emergency calling for Microsoft Teams Direct Routing with Azure Communications Gateway
+
+Azure Communications Gateway supports Microsoft Teams Direct Routing subscribers making emergency calls from Microsoft Teams clients. This article describes how Azure Communications Gateway routes these calls to your network and the key facts you need to consider.
+
+## Overview of emergency calling with Azure Communications Gateway
+
+If a subscriber uses a Microsoft Teams client to make an emergency call and the subscriber's number is associated with Azure Communications Gateway, Microsoft Phone System routes the call to Azure Communications Gateway. The call has location information encoded in a PIDF-LO (Presence Information Data Format Location Object) SIP body.
+
+Azure Communications Gateway routes emergency calls to your network with this PIDF-LO location information unaltered. It is your responsibility to:
+
+- Ensure that these emergency calls are properly routed to an appropriate Public Safety Answering Point (PSAP).
+- Configure the SIP trunks to Azure Communications Gateway in your tenant to support PIDF-LO. You typically set this configuration when you [set up Direct Routing support](connect-teams-direct-routing.md#connect-your-tenant-to-azure-communications-gateway).
+
+For more information on how Microsoft Teams handles emergency calls, see [the Microsoft Teams documentation on managing emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing) and the [considerations for Direct Routing](/microsoftteams/considerations-direct-routing).
+
+## Emergency numbers and location information
+
+Azure Communications Gateway identifies emergency calls based on the dialing strings configured when you [deploy Azure Communications Gateway](deploy.md). These strings are also used by Microsoft Teams to identify emergency calls.
+
+Microsoft Teams always sends location information on SIP INVITEs for emergency calls. This information can come from:
+
+- [Dynamic locations](/microsoftteams/configure-dynamic-emergency-calling), based on the location of the client used to make the call.
+ - Enterprise administrators must add physical locations associated with network connectivity into the Location Information Server (LIS) in Microsoft Teams.
+ - When Microsoft Teams clients make an emergency call, they obtain their physical location based on their network location.
+- Static locations that your customers assign.
+
+## ELIN support for Direct Routing (preview)
+
+ELIN (Emergency Location Identifier Number) is the traditional method for signaling dynamic emergency location information for networks that don't support PIDF-LO. With Direct Routing, the Microsoft Phone System can add an ELIN (a phone number) representing the location to the message body. If ELIN support (preview) is configured, Azure Communications Gateway replaces the caller's number with this phone number when forwarding the call to your network. The Public Safety Answering Point (PSAP) can then look up this number to identify the location of the caller.
+
+> [!IMPORTANT]
+> If you require ELIN support (preview), discuss your requirements with a Microsoft representative.
+
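The substitution rule above can be sketched as follows. This is a simplified illustration of the described behavior, not Azure Communications Gateway code; the function name is hypothetical.

```python
# Simplified illustration of ELIN substitution: when Microsoft Phone System
# supplies an ELIN for an emergency call, it replaces the caller's own number
# before the call is forwarded, and the PSAP looks up the ELIN to find the
# caller's location.
from typing import Optional

def forwarded_caller_number(caller: str, elin: Optional[str]) -> str:
    """Return the calling number presented to the operator network."""
    return elin if elin is not None else caller

print(forwarded_caller_number("+12065550100", "+12065550199"))  # +12065550199
print(forwarded_caller_number("+12065550100", None))            # +12065550100
```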
+## Next steps
+
+- Learn about [the key concepts in Microsoft Teams emergency calling](/microsoftteams/what-are-emergency-locations-addresses-and-call-routing).
+- Learn about [dynamic emergency calling in Microsoft Teams](/microsoftteams/configure-dynamic-emergency-calling).
communications-gateway Emergency Calls Zoom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/emergency-calls-zoom.md
+
+ Title: Emergency calling for Zoom Phone Cloud Peering with Azure Communications Gateway
+description: Understand Azure Communications Gateway's support for emergency calling with Zoom Phone Cloud Peering
++++ Last updated : 11/06/2023+++
+# Emergency calling for Zoom Phone Cloud Peering with Azure Communications Gateway
+
+Azure Communications Gateway supports Zoom Phone subscribers making emergency calls from Zoom clients. This article describes how Azure Communications Gateway routes these calls to your network and the key facts you need to consider.
+
+## Emergency calling in the United States and Canada
+
+By default, Zoom routes emergency calls in the United States and Canada over dedicated trunks to emergency service providers. Emergency calls therefore don't reach your Azure Communications Gateway deployment or your network.
+
+If you want Zoom to route emergency calls to your network (via Azure Communications Gateway), refer to the _Zoom Phone Provider Exchange Solution Reference Guide_ and contact your Zoom representative. You must then configure Azure Communications Gateway and your network to handle emergency calls in the same way as emergency calls outside the United States and Canada.
+
+## Emergency calling outside the United States and Canada
+
+Azure Communications Gateway routes emergency calls from Zoom clients to your network in the same way as other originating calls. Zoom signals emergency numbers in the format `+<country-code><emergency-short-code>` (for example `+44999`), where the emergency short codes are as specified in [Zoom's list of special service numbers](https://support.zoom.us/hc/articles/360029961151-Special-service-numbers).
+
+You must:
+
+1. Identify the combinations of country codes and emergency short codes that you need to support.
+2. Specify these combinations (prefixed with `+`) when you [deploy Azure Communications Gateway](deploy.md#collect-basic-information-for-deploying-an-azure-communications-gateway), or by editing your existing configuration.
+3. Configure your network to treat calls to these numbers as emergency calls.
+
+If your network can't route emergency calls in the format `+<country-code><emergency-short-code>`, contact your onboarding team or raise a support request to discuss your requirements for number conversion.
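The numbered steps above amount to building a set of `+<country-code><emergency-short-code>` dial strings and matching dialed numbers against it, which can be sketched as follows. The country-code and short-code combinations below are examples only; take the authoritative short codes from Zoom's special service numbers list.

```python
# Sketch of the dial-string format described above. The combinations are
# illustrative examples, not an authoritative emergency number list.

def emergency_dial_strings(combinations):
    """Build the set of dial strings to configure, each prefixed with +."""
    return {f"+{country}{short_code}" for country, short_code in combinations}

def is_emergency(dialed: str, dial_strings: set) -> bool:
    """Treat a call as an emergency call on an exact dial-string match."""
    return dialed in dial_strings

# Step 1-2: the combinations you identified, specified with a + prefix.
strings = emergency_dial_strings([("44", "999"), ("44", "112"), ("61", "000")])

# Step 3: your network treats calls to these numbers as emergency calls.
print(is_emergency("+44999", strings))  # True
print(is_emergency("+44123", strings))  # False
```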
+
+## Next steps
+
+- Learn about [Azure Communications Gateway's interoperability with Zoom Phone Cloud Peering](interoperability-zoom.md).
communications-gateway Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/get-started.md
Previously updated : 10/09/2023 Last updated : 11/06/2023 #CustomerIntent: As someone setting up Azure Communications Gateway, I want to understand the steps I need to carry out to have live traffic through my deployment.
For Operator Connect and Teams Phone Mobile, also read:
- [Overview of interoperability of Azure Communications Gateway with Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md)
- [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).
-- [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calling-operator-connect.md)
+- [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calls-operator-connect.md).
For Microsoft Teams Direct Routing, also read:

- [Overview of interoperability of Azure Communications Gateway with Microsoft Teams Direct Routing](interoperability-teams-direct-routing.md).
-- [Emergency calling for Microsoft Teams Direct Routing with Azure Communications Gateway](emergency-calling-teams-direct-routing.md)
+- [Emergency calling for Microsoft Teams Direct Routing with Azure Communications Gateway](emergency-calls-teams-direct-routing.md).
+
+For Zoom Phone Cloud Peering, also read:
+
+- [Overview of interoperability of Azure Communications Gateway with Zoom Phone Cloud Peering](interoperability-zoom.md).
+- [Emergency calling for Zoom Phone Cloud Peering with Azure Communications Gateway](emergency-calls-zoom.md).
As part of your planning, ensure your network can support the connectivity and interoperability requirements in these articles.
Use the following procedures to deploy Azure Communications Gateway and connect
1. [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md) describes the steps you need to take before you can start creating your Azure Communications Gateway resource. You might need to refer to some of the articles listed in [Learn about and plan for Azure Communications Gateway](#learn-about-and-plan-for-azure-communications-gateway).
1. [Deploy Azure Communications Gateway](deploy.md) describes how to create your Azure Communications Gateway resource in the Azure portal and connect it to your networks.
1. [Integrate with Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md) describes how to integrate with the Provisioning API. Integrating with the API is:
- - Required for Microsoft Teams Direct Routing.
+ - Required for Microsoft Teams Direct Routing and Zoom Phone Cloud Peering.
   - Optional for Operator Connect: only required to add custom headers to messages entering your core network.
   - Not supported for Teams Phone Mobile.
Use the following procedures to integrate with Microsoft Teams Direct Routing.
1. [Connect Azure Communications Gateway to Microsoft Teams Direct Routing](connect-teams-direct-routing.md) describes how to connect Azure Communications Gateway to the Microsoft Phone System for Microsoft Teams Direct Routing.
1. [Configure a test customer for Microsoft Teams Direct Routing](configure-test-customer-teams-direct-routing.md) describes how to configure Azure Communications Gateway and Microsoft 365 with a test customer.
-1. [Configure test numbers for Microsoft Teams Direct Routing](configure-test-numbers-teams-direct-routing.md) describes how to configure Azure Communications Gateway and Microsoft 365 with a test numbers.
+1. [Configure test numbers for Microsoft Teams Direct Routing](configure-test-numbers-teams-direct-routing.md) describes how to configure Azure Communications Gateway and Microsoft 365 with test numbers.
1. [Prepare for live traffic with Microsoft Teams Direct Routing and Azure Communications Gateway](prepare-for-live-traffic-teams-direct-routing.md) describes how to test your deployment and launch your service.
+Use the following procedures to integrate with Zoom Phone Cloud Peering.
+
+1. [Connect Azure Communications Gateway to Zoom Phone Cloud Peering](connect-zoom.md) describes how to connect Azure Communications Gateway to Zoom servers.
+1. [Configure test numbers for Zoom Phone Cloud Peering](configure-test-numbers-zoom.md) describes how to configure Azure Communications Gateway and Zoom with test numbers.
+1. [Prepare for live traffic with Zoom Phone Cloud Peering and Azure Communications Gateway](prepare-for-live-traffic-zoom.md) describes how to test your deployment and launch your service.
+
## Next steps

- Learn about [your network and Azure Communications Gateway](role-in-network.md)
communications-gateway Integrate With Provisioning Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/integrate-with-provisioning-api.md
Whether you need to integrate with the REST API depends on your chosen communica
|Communications service |Provisioning API integration |Purpose |
||||
-|Microsoft Teams Direct Routing |Required |- Configure the subdomain associated with each Direct Routing customer<br>- Generate DNS records specific to each customer (as required by the Microsoft 365 environment).<br>- Indicate that numbers are enabled for Direct Routing.<br>- (Optional) Configure a custom header for messages to your network|
+|Microsoft Teams Direct Routing |Required |- Configure the subdomain associated with each Direct Routing customer<br>- Generate DNS records specific to each customer (as required by the Microsoft 365 environment)<br>- Indicate that numbers are enabled for Direct Routing.<br>- (Optional) Configure a custom header for messages to your network|
|Operator Connect|Optional|(Optional) Configure a custom header for messages to your network|
|Teams Phone Mobile|Not supported|N/A|
+|Zoom Phone Cloud Peering |Required |- Indicate that numbers are enabled for Zoom<br>- (Optional) Configure a custom header for messages to your network|
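To make "indicate that numbers are enabled" concrete, the following sketch builds a provisioning-style record for a number. The field names and structure here are invented placeholders, not the real API schema; consult the Azure Communications Gateway Provisioning API reference for the actual request format.

```python
import json

# Purely illustrative payload shape - every field name below is a placeholder,
# not the real Provisioning API schema.
number_record = {
    "telephoneNumber": "+14255550100",                          # hypothetical field
    "accountName": "contoso",                                   # hypothetical field
    "serviceDetails": {"zoomPhoneCloudPeering": {"enabled": True}},  # hypothetical field
}

print(json.dumps(number_record, indent=2))
```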
## Prerequisites
The following steps summarize the Azure configuration you need.
- [Connect to Operator Connect or Teams Phone Mobile](connect-operator-connect.md)
- [Connect to Microsoft Teams Direct Routing](connect-teams-direct-routing.md)
+- [Connect to Zoom Phone Cloud Peering](connect-zoom.md)
communications-gateway Interoperability Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-operator-connect.md
Azure Communications Gateway can manipulate signaling and media to meet the requ
Azure Communications Gateway sits at the edge of your fixed line and mobile networks. It connects these networks to the Microsoft Phone System, allowing you to support Operator Connect (for fixed line networks) and Teams Phone Mobile (for mobile networks). The following diagram shows where Azure Communications Gateway sits in your network.
- Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System, a softswitch in a fixed line deployment and a mobile IMS core. Azure Communications Gateway contains certified SBC function and the MCP application server for anchoring mobile calls.
+ Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System, a fixed line deployment and a mobile IMS core. Azure Communications Gateway contains SBC function, the MCP application server for anchoring Teams Phone Mobile calls and a provisioning API.
:::image-end:::
-Calls flow from endpoints in your networks through Azure Communications Gateway and the Microsoft Phone System into Microsoft Teams clients.
+Calls flow from Microsoft Teams clients through the Microsoft Phone System and Azure Communications Gateway into your network.
## Compliance with Certified SBC specifications
The Microsoft Phone System typically requires SRTP for media. Azure Communicatio
### Media handling for calls
-You must select the codecs that you want to support when you deploy Azure Communications Gateway. If the Microsoft Phone System doesn't support these codecs, Azure Communications Gateway can perform transcoding (converting between codecs) on your behalf.
+You must select the codecs that you want to support when you deploy Azure Communications Gateway.
Operator Connect and Teams Phone Mobile require core networks to support ringback tones (ringing tones) during call transfer. Core networks must also support comfort noise. If your core networks can't meet these requirements, Azure Communications Gateway can inject media into calls.
communications-gateway Interoperability Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-teams-direct-routing.md
In this article, you learn:
Azure Communications Gateway sits at the edge of your fixed line network. It connects this network to the Microsoft Phone System, allowing you to support Microsoft Teams Direct Routing. The following diagram shows where Azure Communications Gateway sits in your network. Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System and a fixed operator network over SIP and RTP. Azure Communications Gateway and the Microsoft Phone System connect multiple customers to the operator network. Azure Communications Gateway also has a provisioning API to which a BSS client in the operator's management network must connect. Azure Communications Gateway contains certified SBC function. :::image-end:::
-Calls flow from endpoints in your networks through Azure Communications Gateway and the Microsoft Phone System into Microsoft Teams clients.
+Calls flow from Microsoft Teams clients through the Microsoft Phone System and Azure Communications Gateway into your network.
## Compliance with Certified SBC specifications
The Microsoft Phone System typically requires SRTP for media. Azure Communicatio
### Media handling for calls
-You must select the codecs that you want to support when you deploy Azure Communications Gateway. If the Microsoft Phone System doesn't support these codecs, Azure Communications Gateway can perform transcoding (converting between codecs) on your behalf.
+You must select the codecs that you want to support when you deploy Azure Communications Gateway.
Microsoft Teams Direct Routing requires core networks to support ringback tones (ringing tones) during call transfer. Core networks must also support comfort noise. If your core networks can't meet these requirements, Azure Communications Gateway can inject media into calls.
communications-gateway Interoperability Zoom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-zoom.md
+
+ Title: Overview of Zoom Phone Cloud Peering with Azure Communications Gateway
+description: Understand how Azure Communications Gateway fits into your fixed and mobile networks and into the Zoom Phone Cloud Peering program.
++++ Last updated : 11/06/2023+++
+# Overview of interoperability of Azure Communications Gateway with Zoom Phone Cloud Peering
+
+Azure Communications Gateway can manipulate signaling and media to meet the requirements of your networks and the Zoom Phone Cloud Peering program. This article provides an overview of the interoperability features that Azure Communications Gateway offers for Zoom Phone Cloud Peering.
+
+> [!IMPORTANT]
+> You must be a telecommunications operator or service provider to use Azure Communications Gateway.
+
+## Role and position in the network
+
+Azure Communications Gateway sits at the edge of your fixed line and mobile networks. It connects these networks to Zoom servers, allowing you to support the Zoom Phone Cloud Peering program. The following diagram shows where Azure Communications Gateway sits in your network.
++
+ Architecture diagram showing Azure Communications Gateway connecting to Zoom servers and a fixed operator network over SIP and RTP. Azure Communications Gateway and Zoom Phone Cloud Peering connect multiple customers to the operator network. Azure Communications Gateway also has a provisioning API to which a BSS client in the operator's management network must connect. Azure Communications Gateway contains certified SBC function.
++
+You provide a trunk towards Zoom (via Azure Communications Gateway) for your customers. Calls flow from Zoom clients through the Zoom servers and Azure Communications Gateway into your network.
++
+Azure Communications Gateway does not support Premises Peering (where each customer has an eSBC) for Zoom Phone.
+
+## SIP signaling
+
+Azure Communications Gateway automatically interworks calls to support the requirements of the Zoom Phone Cloud Peering program, including:
+
+- Early media
+- 180 responses without SDP
+- 183 responses with SDP
+- Strict rules on normalizing headers used to route calls
+- Conversion of various headers to P-Asserted-Identity headers
+
+You can arrange more interworking function as part of your initial network design or at any time by raising a support request for Azure Communications Gateway. For example, you might need extra interworking configuration for:
+
+- Advanced SIP header or SDP message manipulation
+- Support for reliable provisional messages (100rel)
+- Interworking away from inband DTMF tones
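As an illustration of the header conversion mentioned above, the following sketch shows the kind of rule involved in deriving a P-Asserted-Identity header from another identity header. This isn't Azure Communications Gateway's actual implementation (which is managed for you); it only conveys the idea.

```python
# Hypothetical sketch of one interworking rule: if a message has no
# P-Asserted-Identity header, derive one from Remote-Party-ID or From.
def derive_p_asserted_identity(headers: dict) -> dict:
    """Add a P-Asserted-Identity header if one isn't already present."""
    if "P-Asserted-Identity" in headers:
        return headers
    source = headers.get("Remote-Party-ID") or headers.get("From")
    if source:
        # Keep only the name-addr part, dropping any header parameters (e.g. ;tag=).
        headers["P-Asserted-Identity"] = source.split(";")[0]
    return headers

msg = {"From": '"Alice" <sip:+14255550100@operator.example>;tag=abc123'}
print(derive_p_asserted_identity(msg)["P-Asserted-Identity"])
# '"Alice" <sip:+14255550100@operator.example>'
```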
+
+## SRTP media
+
+The Zoom Phone Cloud Peering program requires SRTP for media. Azure Communications Gateway supports both RTP and SRTP, and can interwork between them. Azure Communications Gateway offers further media manipulation features to allow your networks to interoperate with the Zoom servers.
+
+### Media handling for calls
+
+Azure Communications Gateway can use Opus, G.722 and G.711 towards Zoom servers, with a packetization time of 20ms. You must select the codecs that you want to support when you deploy Azure Communications Gateway.
+
+If your network can't support a packetization time of 20ms, you must contact your onboarding team or raise a support request to discuss your requirements for transrating (changing packetization time).
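For reference, the arithmetic behind a 20ms packetization time works out as follows for G.711, the simplest of the codecs listed above (Opus is variable-bitrate, so its payload size varies):

```python
# Packet sizing for G.711 at the 20 ms packetization time used towards Zoom servers.
SAMPLE_RATE_HZ = 8000  # G.711 narrowband sampling rate
PTIME_MS = 20          # packetization time

samples_per_packet = SAMPLE_RATE_HZ * PTIME_MS // 1000
g711_payload_bytes = samples_per_packet  # G.711 encodes one byte per sample
packets_per_second = 1000 // PTIME_MS

print(samples_per_packet)   # 160 samples per packet
print(g711_payload_bytes)   # 160-byte RTP payload
print(packets_per_second)   # 50 packets per second
```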
+
+### Media interworking options
+
+Azure Communications Gateway offers multiple media interworking options. For example, you might need to:
+
+- Control bandwidth allocation
+- Prioritize specific media traffic for Quality of Service
+
+For full details of the media interworking features available in Azure Communications Gateway, raise a support request.
+
+## Next steps
+
+- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
+- Learn about [requesting changes to Azure Communications Gateway](request-changes.md).
communications-gateway Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/onboarding.md
Title: Onboarding to Microsoft Teams Phone with Azure Communications Gateway
-description: Understand the Included Benefits and your other options for onboarding
+ Title: Onboarding for Azure Communications Gateway
+description: Understand the Included Benefits and your other options for onboarding to Azure Communications Gateway for Microsoft or Zoom connectivity
Previously updated : 07/27/2023 Last updated : 11/06/2023

# Onboarding with Included Benefits for Azure Communications Gateway
-To launch Operator Connect and/or Teams Phone Mobile, you need an onboarding partner. Launching requires changes to the Operator Connect or Teams Phone Mobile environments and your onboarding partner manages the integration process and coordinates with Microsoft Teams on your behalf. They can also help you design and set up your network for success.
-
-We provide a customer success program and onboarding service called _Included Benefits_ for operators deploying Azure Communications Gateway. We work with your team to enable rapid and effective solution design and deployment. The program includes tailored guidance from Azure for Operators engineers, using proven practices and architectural guides.
+Azure Communications Gateway includes a project team that helps you design and set up your network for success. This service includes a customer success program and onboarding service called _Included Benefits_. We work with your team to enable rapid and effective solution design and deployment. The program includes tailored guidance from Azure for Operators engineers, using proven practices and architectural guides. If you're not eligible for Included Benefits or you require more support, discuss your requirements with your Microsoft sales representative.
+
+The Operator Connect and Teams Phone Mobile programs also require an onboarding partner who manages the necessary changes to the Operator Connect or Teams Phone Mobile environments and coordinates with Microsoft Teams on your behalf. The Azure Communications Gateway Included Benefits project team fulfills this role, but you can choose a different onboarding partner to coordinate with Microsoft Teams on your behalf.
## Eligibility for Included Benefits and alternatives
Included Benefits is available to operator customers who:
- Have an active paid Azure subscription. - Have a defined project using Azure Communications Gateway with intent to deploy. A defined project has an executive sponsor, committed customer/partner resources, established success metrics, and clear timelines for start and end of the project.-- Are located in a country/region supported by Azure Communications Gateway. Engagements are in English (although we may offer engagements in your local language, depending on the availability of our teams).
+- Are located in a country/region supported by Azure Communications Gateway. Engagements are in English (although we might offer engagements in your local language, depending on the availability of our teams).
There's no cost to you for the Included Benefits program.
communications-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/overview.md
Title: What is Azure Communications Gateway?
-description: Azure Communications Gateway provides telecoms operators with the capabilities and network functions required to connect their network to Microsoft Teams.
+description: Azure Communications Gateway allows telecoms operators to interoperate with Operator Connect, Teams Phone Mobile, Microsoft Teams Direct Routing and Zoom Phone.
Previously updated : 10/09/2023 Last updated : 11/06/2023

# What is Azure Communications Gateway?
-Azure Communications Gateway enables Microsoft Teams calling through the Operator Connect, Teams Phone Mobile and Microsoft Teams Direct Routing programs. It provides Voice and IT integration with Microsoft Teams across both fixed and mobile networks. It's certified as part of the Operator Connect Accelerator program.
+Azure Communications Gateway enables Microsoft Teams calling through the Operator Connect, Teams Phone Mobile and Microsoft Teams Direct Routing programs and Zoom calling through the Zoom Phone Cloud Peering program. It provides Voice and IT integration with these communications services across both fixed and mobile networks. It's certified as part of the Operator Connect Accelerator program.
[!INCLUDE [communications-gateway-tsp-restriction](includes/communications-gateway-tsp-restriction.md)]
- Diagram that shows how Azure Communications Gateway connects to the Microsoft Phone System and to your fixed and mobile networks. Microsoft Teams clients connect to the Microsoft Phone system. Your fixed network connects to PSTN endpoints. Your mobile network connects to Teams Phone Mobile users.
+ Diagram that shows how Azure Communications Gateway connects to the Microsoft Phone System, Zoom Phone and to your fixed and mobile networks. Microsoft Teams clients connect to Microsoft Phone System. Zoom clients connect to Zoom Phone. Your fixed network connects to PSTN endpoints. Your mobile network connects to Teams Phone Mobile users. Azure Communications Gateway connects Microsoft Phone System, Zoom Phone and your fixed and mobile networks.
:::image-end:::
-Azure Communications Gateway provides advanced SIP, RTP and HTTP interoperability functions (including Teams Certified SBC function) so that you can integrate with Operator Connect, Teams Phone Mobile or Microsoft Teams Direct Routing quickly, reliably and in a secure manner.
+Azure Communications Gateway provides advanced SIP, RTP and HTTP interoperability functions (including SBC function certified by Microsoft Teams and Zoom) so that you can integrate with your chosen communications services quickly, reliably and in a secure manner.
As part of Microsoft Azure, the network elements in Azure Communications Gateway are fully managed and include an availability SLA. This full management simplifies network operations integration and accelerates the timeline for adding new network functions into production.

## Architecture
-Azure Communications Gateway acts as the edge of your network, ensuring compliance with the requirements of the Operator Connect and Teams Phone Mobile programs.
+Azure Communications Gateway acts as the edge of your network. This position allows it to interwork between your network and your chosen communications services and meet the requirements of your chosen programs.
+To ensure availability, Azure Communications Gateway is deployed into two Azure Regions within a given Geography, as shown in the following diagram. It supports both active-active and primary-backup geographic redundancy models to fit with your network design.
-To ensure availability, Azure Communications Gateway is deployed into two Azure Regions within a given Geography. It supports both active-active and primary-backup geographic redundancy models to fit with your network design.
-For more information about the networking requirements, see [Your network and Azure Communications Gateway](role-in-network.md) and [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
+For more information about the networking and call routing requirements, see [Your network and Azure Communications Gateway](role-in-network.md#network-requirements) and [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).
Traffic from all enterprises shares a single SIP trunk, using a multitenant format. This multitenant format ensures the solution is suitable for both the SMB and Enterprise markets.
Traffic from all enterprises shares a single SIP trunk, using a multitenant form
## Voice features
-Azure Communications Gateway supports the SIP and RTP requirements for Teams Certified SBCs. It can transform call flows to suit your network with minimal disruption to existing infrastructure.
+Azure Communications Gateway supports the SIP and RTP requirements for certified SBCs for Microsoft Teams and Zoom Phone. It can transform call flows to suit your network with minimal disruption to existing infrastructure.
Azure Communications Gateway's voice features include:

-- **Voice interworking** - Azure Communications Gateway can resolve interoperability issues between your network and Microsoft Teams. Its position on the edge of your network reduces disruption to your networks, especially in complex scenarios like Teams Phone Mobile where Teams Phone System is the call control element. Azure Communications Gateway includes powerful interworking features, for example:
+- **Voice interworking** - Azure Communications Gateway can resolve interoperability issues between your network and communications services. Its position on the edge of your network reduces disruption to your networks, especially in complex scenarios like Teams Phone Mobile where Teams Phone System is the call control element. Azure Communications Gateway includes powerful interworking features, for example:
  - 100rel and early media interworking
  - Downstream call forking with codec changes
Azure Communications Gateway's voice features include:
  - Media transcoding
  - Ringback injection
- **Call control integration for Teams Phone Mobile** - Azure Communications Gateway includes an optional IMS Application Server called Mobile Control Point (MCP). MCP ensures calls are only routed to the Microsoft Phone System when a user is eligible for Teams Phone Mobile services. This process minimizes the changes you need in your mobile network to route calls into Microsoft Teams. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).
-- **Optional direct peering to Emergency Routing Service Providers for Operator Connect and Teams Phone Mobile (US only)** - If your network can't transmit Emergency location information in PIDF-LO (Presence Information Data Format Location Object) SIP bodies, Azure Communications Gateway can connect directly to your chosen Teams-certified Emergency Routing Service Provider (ERSP) instead. See [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calling-operator-connect.md).
+- **Optional direct peering to Emergency Routing Service Providers for Operator Connect and Teams Phone Mobile (US only)** - If your network can't transmit Emergency location information in PIDF-LO (Presence Information Data Format Location Object) SIP bodies, Azure Communications Gateway can connect directly to your chosen Teams-certified Emergency Routing Service Provider (ERSP) instead. See [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calls-operator-connect.md).
## Provisioning and API integration for Operator Connect and Teams Phone Mobile
The Number Management Portal is available as part of the optional API Bridge fea
Azure Communications Gateway also automatically integrates with Operator Connect APIs to upload call duration data to Microsoft Teams. For more information, see [Providing call duration data to Microsoft Teams](interoperability-operator-connect.md#providing-call-duration-data-to-microsoft-teams).
-## Multitenant support and caller ID screening for Direct Routing
+## Multitenant support and caller ID screening for Microsoft Teams Direct Routing
Microsoft Teams Direct Routing's multitenant model for carrier telecommunications operators requires inbound messages to Microsoft Teams to indicate the Microsoft tenant associated with your customers. Azure Communications Gateway automatically updates the SIP signaling to indicate the correct tenant, using information that you provision onto Azure Communications Gateway. This process removes the need for your core network to map between numbers and customer tenants. For more information, see [Identifying the customer tenant for Microsoft Phone System](interoperability-teams-direct-routing.md#identifying-the-customer-tenant-for-microsoft-phone-system).
communications-gateway Plan And Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/plan-and-manage-costs.md
This article describes how you're charged for Azure Communications Gateway and h
After you've started using Azure Communications Gateway, you can use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to identify areas where you might want to act.
-Costs for Azure Communications Gateway are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure Communications Gateway, you're billed for all Azure services and resources used in your Azure subscription. This billing includes third-party services.
+Costs for Azure Communications Gateway are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure Communications Gateway, your Azure bill includes all services and resources used in your Azure subscription, including third-party Azure services.
## Prerequisites
When you deploy Azure Communications Gateway, you're charged for how you use the
- A "Fixed Network Service Fee" or a "Mobile Network Service Fee" meter.
  - This meter is charged hourly and includes the use of 999 users for testing and early adoption.
- - Operator Connect and Microsoft Teams Direct Routing are fixed networks.
+ - Operator Connect, Microsoft Teams Direct Routing and Zoom Phone Cloud Peering are fixed networks.
  - Teams Phone Mobile is a mobile network.
  - If your deployment includes fixed networks and mobile networks, you're charged the Mobile Network Service Fee.
- A series of tiered per-user meters that charge based on the number of users that are assigned to the deployment. These per-user fees are based on the maximum number of users during your billing cycle, excluding the 999 users included in the service availability fee.
For example, if you have 28,000 users assigned to the deployment each month, you
* 3000 users in the 25000+ tier

> [!NOTE]
-> A Microsoft Teams Direct Routing user is any telephone number configured with Direct Routing on Azure Communications Gateway. Billing for the user starts as soon as you have configured the number.
+> A Microsoft Teams Direct Routing or Zoom Phone Cloud Peering user is any telephone number configured with Direct Routing service or Zoom service on Azure Communications Gateway. Billing for the user starts as soon as you have configured the number.
>
> An Operator Connect or Teams Phone Mobile user is any telephone number that meets all the following criteria.
>
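The tier split in the 28,000-user example above can be sketched as follows. Only the 25,000 boundary comes from this article; the lower boundaries and all prices are placeholders, so check the Azure price list for the real tiers.

```python
# Splitting a user count across billing tiers. The 25,000 boundary matches the
# example in this article; any other boundaries you add here are hypothetical.
TIER_BOUNDARIES = [0, 25000]

def users_per_tier(total_users: int, boundaries: list) -> list:
    """Return the number of users falling into each tier."""
    counts = []
    for i, lower in enumerate(boundaries):
        upper = boundaries[i + 1] if i + 1 < len(boundaries) else total_users
        counts.append(max(0, min(total_users, upper) - lower))
    return counts

print(users_per_tier(28000, TIER_BOUNDARIES))  # [25000, 3000]
```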
You must pay for Azure networking costs, because these costs aren't included in
- If you're connecting to the public internet with ExpressRoute Microsoft Peering, you must purchase ExpressRoute circuits with a specified bandwidth and data billing model.
- If you're connecting into Azure as a next hop, you might need to pay virtual network peering costs.
+You must also pay for any costs charged by the communications services to which you're connecting. These costs don't appear on your Azure bill, and you need to pay them to the communications service yourself.
+
### Costs if you cancel or change your deployment

If you cancel Azure Communications Gateway, your final bill or invoice includes charges on service fee meters for the part of the billing cycle before you cancel. Per-user meters charge for the entire billing cycle.
If you have multiple Azure Communications Gateway deployments and you move users
### Using Azure Prepayment with Azure Communications Gateway
-You can pay for Azure Communications Gateway charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+You can pay for Azure Communications Gateway charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third-party products and services including those from the Azure Marketplace.
## Monitor costs
communications-gateway Prepare For Live Traffic Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic-operator-connect.md
Before you can launch your Operator Connect or Teams Phone Mobile service, you a
- Test your service.
- Prepare for launch.
-In this article, you learn about the steps you and your onboarding team must take.
+In this article, you learn about the steps that you and your onboarding team must take.
> [!TIP]
-> In many cases, your onboarding team is from Microsoft, provided through the [Included Benefits](onboarding.md) or through a separate arrangement.
+> This article assumes that your Azure Communications Gateway onboarding team from Microsoft is also onboarding you to Operator Connect and/or Teams Phone Mobile. If you've chosen a different onboarding partner for Operator Connect or Teams Phone Mobile, you need to ask them to arrange changes to the Operator Connect and/or Teams Phone Mobile environments.
> [!IMPORTANT]
> Some steps can require days or weeks to complete. For example, you'll need to wait at least seven days for automated testing of your deployment and schedule your launch date at least two weeks in advance. We recommend that you read through these steps in advance to work out a timeline.
communications-gateway Prepare For Live Traffic Teams Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic-teams-direct-routing.md
Before you can launch your Microsoft Teams Direct Routing service, you and your
- Test your service.
- Prepare for launch.
-In this article, you learn about the steps you and your onboarding team must take.
-
-> [!TIP]
-> In many cases, your onboarding team is from Microsoft, provided through the [Included Benefits](onboarding.md) or through a separate arrangement.
+In this article, you learn about the steps that you and your Azure Communications Gateway onboarding team must take.
> [!IMPORTANT]
> Some steps can require days or weeks to complete. We recommend that you read through these steps in advance to work out a timeline.
communications-gateway Prepare For Live Traffic Zoom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic-zoom.md
+
+ Title: Prepare for Zoom Phone Cloud Peering live traffic with Azure Communications Gateway
+description: After deploying Azure Communications Gateway, you and your onboarding team must carry out further integration work before you can launch your Zoom Phone Cloud Peering service.
+Last updated : 11/06/2023
+# Prepare for live traffic with Zoom Phone Cloud Peering and Azure Communications Gateway
+
+Before you can launch your Zoom Phone Cloud Peering service, you and your onboarding team must:
+
+- Test your service.
+- Prepare for launch.
+
+In this article, you learn about the steps that you and your Azure Communications Gateway onboarding team must take.
+
+> [!IMPORTANT]
+> Some steps can require days or weeks to complete. We recommend that you read through these steps in advance to work out a timeline.
+
+## Prerequisites
+
+You must have completed the following procedures:
+
+- [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md)
+- [Deploy Azure Communications Gateway](deploy.md)
+- [Connect Azure Communications Gateway to Zoom Phone Cloud Peering](connect-zoom.md)
+- [Configure test numbers for Zoom Phone Cloud Peering](configure-test-numbers-zoom.md)
+
+You must be able to contact your Zoom representative.
+
+## Carry out integration testing and request changes
+
+Network integration includes identifying SIP interoperability requirements and configuring devices to meet these requirements. For example, this process often includes interworking header formats and/or the signaling and media flows used for call hold and session refresh.
+
+You must test typical call flows for your network. Your onboarding team will provide an example test plan that we recommend you follow. Your test plan should include call flow, failover, and connectivity testing.
+
+- If you decide that you need changes to Azure Communications Gateway, ask your onboarding team. Microsoft must make the changes for you.
+- If you need changes to the configuration of devices in your core network, you must make those changes.
+- If you need changes to Zoom configuration, you must arrange those changes with Zoom.
+
+## Run connectivity tests and provide proof to Zoom
+
+Before you can launch, Zoom requires proof that your network is properly connected to their servers. The testing you need to carry out is described in Zoom's _Zoom Phone Provider Exchange Solution Reference Guide_ or other documentation provided by your Zoom representative.
+
+You must capture the signaling in your network and provide the proof to your Zoom representative.
+
+## Test raising a ticket
+
+You must test that you can raise tickets in the Azure portal to report problems with Azure Communications Gateway. See [Get support or request changes for Azure Communications Gateway](request-changes.md).
+
+> [!NOTE]
+> If we think a problem is caused by traffic from Zoom servers, we might ask you to raise a separate support request with Zoom. Ensure you also know how to raise a support request with Zoom.
+
+## Learn about monitoring Azure Communications Gateway
+
+Your staff can use a selection of key metrics to monitor Azure Communications Gateway. These metrics are available to anyone with the Reader role on the subscription for Azure Communications Gateway. See [Monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
+
+## Schedule launch
+
+Your launch date is the date that you'll be able to start selling Zoom Phone Cloud Peering service. You must arrange this date with your Zoom representative.
+
+## Next steps
+
+- Learn about [getting support and requesting changes for Azure Communications Gateway](request-changes.md).
+- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
communications-gateway Prepare To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-to-deploy.md
Previously updated : 10/09/2023 Last updated : 11/06/2023 # Prepare to deploy Azure Communications Gateway
The following sections describe the information you need to collect and the deci
## Arrange onboarding
-You need an onboarding partner to deploy Azure Communications Gateway. If you're not eligible for onboarding to Microsoft Teams through Azure Communications Gateway's [Included Benefits](onboarding.md) or you haven't arranged alternative onboarding with Microsoft through a separate arrangement, you need to arrange an onboarding partner yourself.
+You need a Microsoft onboarding team to deploy Azure Communications Gateway. Azure Communications Gateway includes an onboarding program called [Included Benefits](onboarding.md). If you're not eligible for Included Benefits or you require more support, discuss your requirements with your Microsoft sales representative.
+
+The Operator Connect and Teams Phone Mobile programs also require an onboarding partner who manages the necessary changes to the Operator Connect or Teams Phone Mobile environments and coordinates with Microsoft Teams on your behalf. The Azure Communications Gateway Included Benefits project team fulfills this role, but you can choose a different onboarding partner to coordinate with Microsoft Teams on your behalf.
## Ensure you have a suitable support plan
If you want to use ExpressRoute Microsoft Peering, consult with your onboarding
Ensure your network is set up as shown in the following diagram and has been configured in accordance with any network connectivity specifications that you've been issued for your chosen communications services. You must have two Azure Regions with cross-connect functionality. For more information on the reliability design for Azure Communications Gateway, see [Reliability in Azure Communications Gateway](reliability-communications-gateway.md).

You must decide whether you want Azure Communications Gateway to have an autogenerated `*.commsgw.azure.com` domain name or a subdomain of a domain you already own, using [domain delegation with Azure DNS](../dns/dns-domain-delegation.md). Domain delegation provides topology hiding and might increase customer trust, but requires giving us full control over the subdomain that you delegate. For Microsoft Teams Direct Routing, choose domain delegation if you don't want customers to see an `*.commsgw.azure.com` domain name in their Microsoft 365 admin centers.
For Teams Phone Mobile, you must decide how your network should determine whethe
For more information on these options, see [Call control integration for Teams Phone Mobile](interoperability-operator-connect.md#call-control-integration-for-teams-phone-mobile) and [Mobile Control Point in Azure Communications Gateway](mobile-control-point.md).
-If you plan to route emergency calls through Azure Communications Gateway for Operator Connect or Teams Phone Mobile, read [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calling-operator-connect.md) to learn about your options.
+If you plan to route emergency calls through Azure Communications Gateway, read about emergency calling with your chosen communications service:
+
+- [Microsoft Teams Direct Routing](emergency-calls-teams-direct-routing.md)
+- [Operator Connect and Teams Phone Mobile](emergency-calls-operator-connect.md)
+- [Zoom Phone Cloud Peering](emergency-calls-zoom.md)
## Configure MAPS or ExpressRoute
communications-gateway Reliability Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/reliability-communications-gateway.md
- subject-reliability - references_regions Previously updated : 10/09/2023 Last updated : 11/06/2023 # Reliability in Azure Communications Gateway
Each Azure Communications Gateway deployment consists of three separate regions:
## Service regions
-Service regions contain the voice and API infrastructure used for handling traffic between Microsoft Phone System and your network. Each instance of Azure Communications Gateway consists of two service regions that are deployed in an active-active mode. This geo-redundancy is mandated by the Operator Connect and Teams Phone Mobile programs. Fast failover between the service regions is provided at the infrastructure/IP level and at the application (SIP/RTP/HTTP) level.
+Service regions contain the voice and API infrastructure used for handling traffic between your network and your chosen communications services. Each instance of Azure Communications Gateway consists of two service regions that are deployed in an active-active mode (as required by the Operator Connect and Teams Phone Mobile programs). Fast failover between the service regions is provided at the infrastructure/IP level and at the application (SIP/RTP/HTTP) level.
> [!TIP]
> You must always have two service regions, even if one of the service regions chosen is in a single-region Azure Geography (for example, Qatar). If you choose a single-region Azure Geography, choose a second Azure region in a different Azure Geography.
-These service regions are identical in operation and provide resiliency to both Zone and Regional failures. Each service region can carry 100% of the traffic using the Azure Communications Gateway instance. As such, end-users should still be able to make and receive calls successfully during any Zone or Regional downtime.
+These service regions are identical in operation and provide resiliency to both Zone and Regional failures. Each service region can carry 100% of the traffic using the Azure Communications Gateway instance. As such, end users should still be able to make and receive calls successfully during any Zone or Regional downtime.
### Call routing requirements
-Azure Communications Gateway offers a 'successful redial' redundancy model: calls handled by failing peers are terminated, but new calls are routed to healthy peers. This model mirrors the redundancy model provided by Microsoft Teams itself.
+Azure Communications Gateway offers a 'successful redial' redundancy model: calls handled by failing peers are terminated, but new calls are routed to healthy peers. This model mirrors the redundancy model provided by Microsoft Teams.
We expect your network to have two geographically redundant sites. Each site should be paired with an Azure Communications Gateway region. The redundancy model relies on cross-connectivity between your network and Azure Communications Gateway service regions.
We expect your network to have two geographically redundant sites. Each site sho
Diagram of two operator sites (operator site A and operator site B) and two service regions (service region A and service region B). Operator site A has a primary route to service region A and a secondary route to service region B. Operator site B has a primary route to service region B and a secondary route to service region A.
:::image-end:::
-Each Azure Communications Gateway service region provides an SRV record. This record contains all the SIP peers providing SBC functionality (for routing calls to Microsoft Phone System) within the region.
+Each Azure Communications Gateway service region provides an SRV record. This record contains all the SIP peers providing SBC functionality (for routing calls to communications services) within the region.
If your Azure Communications Gateway includes Mobile Control Point (MCP), each service region provides an extra SRV record for MCP. Each per-region MCP record contains MCP within the region at top priority and MCP in the other region at a lower priority.
Each site in your network must:
> - If the SRV lookup returns multiple targets, use the weight and priority of each target to select a single target.
> - Send new calls to available Azure Communications Gateway peers.
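Selecting a single target from an SRV lookup by priority and weight follows the standard DNS SRV rules (RFC 2782): prefer the lowest priority value, then choose among equal-priority targets with weighted randomness. The following is a minimal, illustrative sketch of that selection logic, not Azure Communications Gateway's actual implementation; the hostnames in the usage example are made up.

```python
import random

def select_srv_target(records, rng=random.random):
    """Pick one SRV target per RFC 2782: lowest priority value wins,
    then a weighted-random choice among targets sharing that priority.
    `records` is a list of (priority, weight, host) tuples."""
    if not records:
        raise ValueError("no SRV targets")
    lowest = min(priority for priority, _, _ in records)
    candidates = [(w, h) for p, w, h in records if p == lowest]
    total = sum(w for w, _ in candidates)
    if total == 0:
        # All weights zero: any equal-priority target is acceptable.
        return candidates[0][1]
    threshold = rng() * total
    running = 0.0
    for weight, host in candidates:
        running += weight
        if threshold < running:
            return host
    return candidates[-1][1]

# Illustrative usage: priority 10 targets are tried before priority 20.
records = [(20, 0, "backup.example.com"),
           (10, 1, "peer-a.example.com"),
           (10, 3, "peer-b.example.com")]
chosen = select_srv_target(records)
```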
-When your network routes calls to Microsoft Phone System (through Azure Communications Gateway's SIP peers), it must:
+When your network routes calls to Azure Communications Gateway's SIP peers for SBC function, it must:
> [!div class="checklist"]
> - Use SIP OPTIONS (or a combination of OPTIONS and SIP traffic) to monitor the availability of the Azure Communications Gateway SIP peers.
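A SIP OPTIONS probe is a small request with the mandatory SIP headers and an empty body; the peer's response (or lack of one) indicates availability. The sketch below only builds the message text, as an assumption of what a minimal probe looks like per RFC 3261 — a real SIP stack also handles transport, Via branch uniqueness, and authentication. All hostnames are illustrative.

```python
def build_sip_options(local_host, peer_host, call_id, cseq=1):
    """Build a minimal SIP OPTIONS request for availability probing.
    Only the RFC 3261 mandatory headers are included; tags and the
    branch parameter here are simplistic placeholders."""
    return (
        f"OPTIONS sip:{peer_host} SIP/2.0\r\n"
        f"Via: SIP/2.0/TLS {local_host};branch=z9hG4bK-probe-{call_id}\r\n"
        f"From: <sip:probe@{local_host}>;tag=probe\r\n"
        f"To: <sip:{peer_host}>\r\n"
        f"Call-ID: {call_id}@{local_host}\r\n"
        f"CSeq: {cseq} OPTIONS\r\n"
        f"Max-Forwards: 70\r\n"
        f"Content-Length: 0\r\n"
        f"\r\n"
    )

# Illustrative usage with made-up hosts.
message = build_sip_options("sbc1.operator.example", "acg.example.com", "probe-001")
```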
Monitoring services might be temporarily unavailable until service has been rest
## Choosing management and service regions
-A single deployment of Azure Communications Gateway is designed to handle your Operator Connect and Teams Phone Mobile traffic within a geographic area. Both service regions should be deployed within the same geographic area (for example North America) to ensure that latency on voice calls remain within the limits required by the Operator Connect and Teams Phone Mobile programs. Consider the following points when you choose your service region locations:
+A single deployment of Azure Communications Gateway is designed to handle your traffic within a geographic area. Deploy both service regions within the same geographic area (for example, North America). This model ensures that latency on voice calls remains within the limits required by the Operator Connect and Teams Phone Mobile programs.
+
+Consider the following points when you choose your service region locations:
- Select from the list of available Azure regions. You can see the Azure regions that can be selected as service regions on the [Products by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/) page.
- Choose regions near to your own premises and the peering locations between your network and Microsoft to reduce call latency.
Management regions can be colocated with service regions. We recommend choosing
## Service-level agreements
-The reliability design described in this document is implemented by Microsoft and isn't configurable. For more information on the Azure Communications Gateway service-level agreements (SLAs), see the Azure Communications Gateway SLA.
+The reliability design described in this document is implemented by Microsoft and isn't configurable. For more information on the Azure Communications Gateway service-level agreements (SLAs), see the [Azure Communications Gateway SLA](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services).
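An availability SLA is expressed as a percentage; translating that percentage into allowed downtime per period is simple arithmetic. The sketch below is purely illustrative — the actual availability figures for Azure Communications Gateway are defined in the linked SLA document.

```python
def allowed_downtime_minutes(sla_percent, days=30):
    """Convert an availability percentage into the maximum downtime
    per period (default: a 30-day month)."""
    total_minutes = days * 24 * 60
    return (1 - sla_percent / 100) * total_minutes

# For example, a 99.9% availability target over a 30-day month
# corresponds to roughly 43.2 minutes of allowed downtime.
monthly_budget = allowed_downtime_minutes(99.9)
```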
## Next steps
communications-gateway Request Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/request-changes.md
Last updated 01/08/2023
# Get support or request changes to your Azure Communications Gateway
-If you notice problems with Azure Communications Gateway or you need Microsoft to make changes, you can raise a support request (also known as a support ticket). This article provides an overview of how to raise support requests for Azure Communications Gateway. For more detailed information on raising support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+If you notice problems with Azure Communications Gateway or you need Microsoft to make changes, you can raise a support request (also known as a support ticket) in the Azure portal.
-Azure provides unlimited support for subscription management, which includes billing, quota adjustments, and account transfers. For technical support, you need a support plan, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier).
+When you raise a request, we'll investigate. If we think the problem is caused by traffic from Zoom servers, we might ask you to raise a separate support request with Zoom.
+
+This article provides an overview of how to raise support requests for Azure Communications Gateway. For more detailed information on raising support requests, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
## Prerequisites
+We strongly recommend a Microsoft support plan that includes technical support, such as [Microsoft Unified Support](https://www.microsoft.com/en-us/unifiedsupport/overview) or [Premier Support](https://www.microsoft.com/en-us/unifiedsupport/premier).
+
+You must have an **Owner**, **Contributor**, or **Support Request Contributor** role in your Azure Communications Gateway subscription, or a custom role with [Microsoft.Support/*](../role-based-access-control/resource-provider-operations.md#microsoftsupport) at the subscription level.
+
+## Confirm that you need to raise an Azure Communications Gateway support request
+ Perform initial troubleshooting to help determine if you should raise an issue with Azure Communications Gateway or a different component. Raising issues for the correct component helps resolve your issues faster.

Raise an issue with Azure Communications Gateway if you experience an issue with:
Raise an issue with Azure Communications Gateway if you experience an issue with
- The Number Management Portal.
- Your Azure bill relating to Azure Communications Gateway.
-You must have an **Owner**, **Contributor**, or **Support Request Contributor** role in your Azure Communications Gateway subscription, or a custom role with [Microsoft.Support/*](../role-based-access-control/resource-provider-operations.md#microsoftsupport) at the subscription level.
+If you're providing Zoom service, you'll need to raise a separate support request with Zoom for any changes that you need to your Zoom configuration.
-## 1. Generate a support request in the Azure portal
+## Create a support request in the Azure portal
1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
1. Select the question mark icon in the top menu bar.
1. Select the **Help + support** button.
1. Select **Create a support request**.
-## 2. Enter a description of the problem or the change
+## Enter a description of the problem or the change
1. Concisely describe your problem or the change you need in the **Summary** box.
1. Select an **Issue type** from the drop-down menu.
You must have an **Owner**, **Contributor**, or **Support Request Contributor**
1. From the new **Problem subtype** drop-down menu, select the problem subtype that most accurately describes your issue. If the problem type you selected only has one subtype, the subtype is automatically selected.
1. Select **Next**.
-## 3. Assess the recommended solutions
+## Assess the recommended solutions
Based on the information you provided, we might show you recommended solutions you can use to try to resolve the problem. In some cases, we might even run a quick diagnostic. Solutions are written by Azure engineers and will solve most common problems.

If you're still unable to resolve the issue, continue creating your support request by selecting **Return to support request** then selecting **Next**.
-## 4. Enter additional details
+## Enter additional details
In this section, we collect more details about the problem or the change and how to contact you. Providing thorough and detailed information in this step helps us route your support request to the right engineer. For more information, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
-## 5. Review and create your support request
+## Review and create your support request
Before creating your request, review the details and diagnostics that you'll send to support. If you want to change your request or the files you've uploaded, select **Previous** to return to any tab. When you're happy with your request, select **Create**.
communications-gateway Role In Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/role-in-network.md
Previously updated : 10/09/2023 Last updated : 11/06/2023 # Your network and Azure Communications Gateway
-Azure Communications Gateway sits at the edge of your network. This position allows it to manipulate signaling and media to meet the requirements of your networks and your chosen communications services. Azure Communications Gateway includes many interoperability settings by default, and you can arrange further interoperability configuration.
+Azure Communications Gateway sits at the edge of your network. This position allows it to manipulate signaling and media to meet the requirements of your networks and your chosen communications services (for example, Microsoft Operator Connect or Zoom Phone Cloud Peering). Azure Communications Gateway includes many interoperability settings by default, and you can arrange further interoperability configuration.
> [!TIP]
> This section provides a brief overview of Azure Communications Gateway's interoperability features. For detailed information about interoperability with a specific communications service, see:
> - [Interoperability of Azure Communications Gateway with Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md).
> - [Interoperability of Azure Communications Gateway with Microsoft Teams Direct Routing](interoperability-teams-direct-routing.md).
+> - [Overview of interoperability of Azure Communications Gateway with Zoom Phone Cloud Peering](interoperability-zoom.md)
## Role and position in the network

Azure Communications Gateway sits at the edge of your fixed line and mobile networks. It connects these networks to one or more communications services. The following diagram shows where Azure Communications Gateway sits in your network.
- Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System, a softswitch in a fixed line deployment and a mobile IMS core. Azure Communications Gateway contains certified SBC function and the MCP application server for anchoring mobile calls.
+ Architecture diagram showing Azure Communications Gateway connecting to the Microsoft Phone System and Zoom Phone Cloud Peering, a fixed line deployment and a mobile IMS core. Azure Communications Gateway contains SBC function, the MCP application server for anchoring Teams Phone Mobile calls and a provisioning API.
:::image-end:::

Azure Communications Gateway provides all the features of a traditional session border controller (SBC). These features include:
Connectivity between your networks and Azure Communications Gateway must meet an
[!INCLUDE [communications-gateway-maps-or-expressroute](includes/communications-gateway-maps-or-expressroute.md)]
+The following diagram shows an operator network using MAPS or ExpressRoute (as recommended) to connect to Azure Communications Gateway.
++

For more information on how to route calls between Azure Communications Gateway and your network, see [Call routing requirements](reliability-communications-gateway.md#call-routing-requirements).

## SIP signaling support
You can arrange more interworking function as part of your initial network desig
Azure Communications Gateway supports both RTP and SRTP, and can interwork between them. Azure Communications Gateway offers other media manipulation features to allow your networks to interoperate with your chosen communications services. For example, you can use Azure Communications Gateway for:

-- Transcoding (converting) between codecs supported by your network and codecs supported by your chosen communications service.
- Changing how RTCP is handled
- Controlling bandwidth allocation
- Prioritizing specific media traffic for Quality of Service
For full details of the media interworking features available in Azure Communica
- Learn about [interoperability for Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md) - Learn about [interoperability for Microsoft Teams Direct Routing](interoperability-teams-direct-routing.md)
+- Learn about [interoperability for Zoom Phone Cloud Peering](interoperability-zoom.md)
- Learn about [onboarding and Included Benefits](onboarding.md)
-- Learn about [planning an Azure Communications Gateway deployment](get-started.md)
+- Learn about [planning an Azure Communications Gateway deployment](get-started.md)
communications-gateway Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/security.md
Previously updated : 10/09/2023 Last updated : 11/06/2023
Azure Communications Gateway uses mutual TLS for SIP, meaning that both the clie
You must manage the certificates that your network presents to Azure Communications Gateway. By default, Azure Communications Gateway supports the DigiCert Global Root G2 certificate and the Baltimore CyberTrust Root certificate as root certificate authority (CA) certificates. If the certificate that your network presents to Azure Communications Gateway uses a different root CA certificate, you must provide this certificate to your onboarding team when you [connect Azure Communications Gateway to your networks](deploy.md#connect-azure-communications-gateway-to-your-networks).
-We manage the certificate that Azure Communications Gateway uses to connect to your network and Microsoft Phone System. Azure Communications Gateway's certificate uses the DigiCert Global Root G2 certificate as the root CA certificate. If your network doesn't already support this certificate as a root CA certificate, you must download and install this certificate when you [connect Azure Communications Gateway to your networks](deploy.md#connect-azure-communications-gateway-to-your-networks).
+We manage the certificate that Azure Communications Gateway uses to connect to your network, Microsoft Phone System and Zoom servers. Azure Communications Gateway's certificate uses the DigiCert Global Root G2 certificate as the root CA certificate. If your network doesn't already support this certificate as a root CA certificate, you must download and install this certificate when you [connect Azure Communications Gateway to your networks](deploy.md#connect-azure-communications-gateway-to-your-networks).
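Trusting a specific root CA means your TLS client validates the peer's certificate chain against that root rather than (or in addition to) the system trust store. As a hedged illustration of the concept — not Azure Communications Gateway's own configuration mechanism — here's how a Python client could pin trust to a downloaded root CA PEM such as DigiCert Global Root G2; the file path is a placeholder.

```python
import ssl

def make_verified_context(root_ca_path=None):
    """Build a TLS client context that requires certificate verification.
    If `root_ca_path` is given (for example, a downloaded DigiCert
    Global Root G2 PEM file), trust is limited to chains rooted there;
    otherwise the system trust store is used."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    if root_ca_path:
        ctx.load_verify_locations(cafile=root_ca_path)
    else:
        ctx.load_default_certs()
    return ctx

context = make_verified_context()
```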
### Cipher suites for SIP and RTP
communications-gateway Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/whats-new.md
Previously updated : 09/06/2023 Last updated : 11/06/2023 # What's new in Azure Communications Gateway? This article covers new features and improvements for Azure Communications Gateway.
+## November 2023
+
+### Support for Zoom Phone Cloud Peering
+
+From November 2023, Azure Communications Gateway supports providing PSTN connectivity to Zoom with Zoom Phone Cloud Peering. You can provide Zoom Phone calling services to many customers, each with many users, with minimal disruption to your existing network.
+
+For more information about Zoom Phone Cloud Peering with Azure Communications Gateway, see [Overview of interoperability of Azure Communications Gateway with Zoom Phone Cloud Peering](interoperability-zoom.md). For an overview of deploying and configuring Azure Communications Gateway for Zoom, see [Get started with Azure Communications Gateway](get-started.md).
+
## October 2023

### Support for multitenant Microsoft Teams Direct Routing
container-apps Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/services.md
As you move from development to production, you can move from an add-on to a man
The following table shows you which service to use in development, and which service to use in production.
-| Functionality | dev service | Production managed service |
+| Functionality | Add-on | Production managed service |
|---|---|---|
| Cache | Open-source Redis | Azure Cache for Redis |
| Database | N/A | Azure Cosmos DB |
For more information on the service commands and arguments, see the
## Limitations

-- dev services are in public preview.
-- Any container app created before May 23, 2023 isn't eligible to use dev services.
-- dev services come with minimal guarantees. For instance, they're automatically restarted if they crash, however there's no formal quality of service or high-availability guarantees associated with them. For production workloads, use Azure-managed services.
+- Add-ons are in public preview.
+- Any container app created before May 23, 2023 isn't eligible to use add-ons.
+- Add-ons come with minimal guarantees. For instance, they're automatically restarted if they crash, however there's no formal quality of service or high-availability guarantees associated with them. For production workloads, use Azure-managed services.
## Next steps
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Previously updated : 11/30/2022 Last updated : 10/31/2023 adobe-target: true
-# Welcome to Azure Cosmos DB
+# Azure Cosmos DB - Unified AI Database
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table, PostgreSQL](includes/appliesto-nosql-mongodb-cassandra-gremlin-table-postgresql.md)]

Today's applications are required to be highly responsive and always online. To achieve low latency and high availability, instances of these applications need to be deployed in datacenters that are close to their users. Applications need to respond in real time to large changes in usage at peak hours, store ever increasing volumes of data, and make this data available to users in milliseconds.
-Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development. Azure Cosmos DB offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
+Recently, the surge of AI-powered applications created another layer of complexity, because many of these applications currently integrate a multitude of data stores. For example, some teams built applications that simultaneously connect to MongoDB, Postgres, and Gremlin. These databases differ in implementation workflow and operational performances, posing extra complexity for scaling applications. Azure Cosmos DB can simplify and expedite your development by being the single AI database for your applications. Azure Cosmos DB accommodates all your operational data models, including relational, document, vector, key-value, graph, and table.
-Use Retrieval Augmented Generation (RAG) to bring the most semantically relevant data to enrich your AI-powered applications built with Azure OpenAI models like GPT-3.5 and GPT-4. For more information, see [Retrieval Augmented Generation (RAG) with Azure Cosmos DB](vector-search.md#retrieval-augmented-generation).
+Azure Cosmos DB is a fully managed NoSQL and relational database for modern app development, including AI, digital commerce, Internet of Things, booking management, and other types of solutions. Azure Cosmos DB offers single-digit millisecond response times, automatic and instant scalability, along with guaranteed speed at any scale. Business continuity is assured with [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) availability and enterprise-grade security.
App development is faster and more productive thanks to:

-- Turnkey multi region data distribution anywhere in the world
+- Turnkey multi-region data distribution anywhere in the world
- Open source APIs
- SDKs for popular languages.
-- Retrieval Augmented Generation that brings your data to Azure OpenAI to
+- AI database functionalities like native vector search or seamless integration with Azure AI Services to support Retrieval Augmented Generation
As a fully managed service, Azure Cosmos DB takes database administration off your hands with automatic management, updates and patching. It also handles capacity management with cost-effective serverless and automatic scaling options that respond to application needs to match capacity with demand.
You can [Try Azure Cosmos DB for Free](https://azure.microsoft.com/try/cosmosdb/).
## Key Benefits
+Here are some key benefits of using Azure Cosmos DB.
+
### Guaranteed speed at any scale

Gain unparalleled [SLA-backed](https://azure.microsoft.com/support/legal/sla/cosmos-db) speed and throughput, fast global access, and instant elasticity.
### Simplified application development
-Build fast with open source APIs, multiple SDKs, schemaless data and no-ETL analytics over operational data.
+Build fast with open-source APIs, multiple SDKs, schemaless data and no-ETL analytics over operational data.
- Deeply integrated with key Azure services used in modern (cloud-native) app development including Azure Functions, IoT Hub, AKS (Azure Kubernetes Service), App Service, and more. - Choose from multiple database APIs including the native API for NoSQL, MongoDB, PostgreSQL, Apache Cassandra, Apache Gremlin, and Table.
+- Use Azure Cosmos DB as your unified AI database for data models like relational, document, vector, key-value, graph, and table.
- Build apps on API for NoSQL using the languages of your choice with SDKs for .NET, Java, Node.js and Python. Or your choice of drivers for any of the other database APIs. - Change feed makes it easy to track and manage changes to database containers and create triggered events with Azure Functions. - Azure Cosmos DB's schema-less service automatically indexes all your data, regardless of the data model, to deliver blazing fast queries.
End-to-end database management, with serverless and automatic scaling matching y
### Azure Synapse Link for Azure Cosmos DB
-[Azure Synapse Link for Azure Cosmos DB](synapse-link.md) is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables near real time analytics over operational data in Azure Cosmos DB. Azure Synapse Link creates a tight seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
+[Azure Synapse Link for Azure Cosmos DB](synapse-link.md) is a cloud-native hybrid transactional and analytical processing (HTAP) capability that enables near-real-time analytics over operational data in Azure Cosmos DB. Azure Synapse Link creates a tight, seamless integration between Azure Cosmos DB and Azure Synapse Analytics.
- Reduced analytics complexity with No ETL jobs to manage. - Near real-time insights into your operational data.
- Analytics for locally available, globally distributed, multi-region writes. - Native integration with Azure Synapse Analytics.
-## Solutions that benefit from Azure Cosmos DB
-
-[Web, mobile, gaming, and IoT applications](use-cases.md) that handle massive amounts of data, reads, and writes at a [global scale](distribute-data-globally.md) with near-real response times benefit from Azure Cosmos DB. Azure Cosmos DB's [guaranteed high availability](https://azure.microsoft.com/support/legal/sl#web-and-mobile-applications).
+## Azure Cosmos DB is more than an AI database
-## Next steps
+Beyond being an AI database, Azure Cosmos DB should also be your go-to database for web, mobile, gaming, and IoT applications. Azure Cosmos DB is well positioned for solutions that handle massive amounts of data, reads, and writes at a global scale with near-real-time response times. Azure Cosmos DB's guaranteed high availability, high throughput, low latency, and tunable consistency are huge advantages when building these types of applications. Learn how Azure Cosmos DB can be used to build IoT and telematics, retail and marketing, gaming, and web and mobile applications.
-Get started with Azure Cosmos DB with one of our quickstarts:
+## Related content
- Learn [how to choose an API](choose-api.md) in Azure Cosmos DB-- [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-dotnet.md)-- [Get started with Azure Cosmos DB for MongoDB](mongodb/create-mongodb-nodejs.md)-- [Get started with Azure Cosmos DB for Apache Cassandra](cassandr)-- [Get started with Azure Cosmos DB for Apache Gremlin](gremlin/quickstart-dotnet.md)-- [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)-- [Get started with Azure Cosmos DB for PostgreSQL](postgresql/quickstart-app-stacks-python.md)-- [A whitepaper on next-gen app development with Azure Cosmos DB](https://azure.microsoft.com/resources/microsoft-azure-cosmos-db-flexible-reliable-cloud-nosql-at-any-scale/)-- Trying to do capacity planning for a migration to Azure Cosmos DB?
- - If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](convert-vcore-to-request-unit.md)
- - If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-
-> [!div class="nextstepaction"]
-> [Try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/)
+ - [Get started with Azure Cosmos DB for NoSQL](nosql/quickstart-dotnet.md)
+ - [Get started with Azure Cosmos DB for MongoDB](mongodb/create-mongodb-nodejs.md)
+ - [Get started with Azure Cosmos DB for Apache Cassandra](cassandr)
+ - [Get started with Azure Cosmos DB for Apache Gremlin](gremlin/quickstart-dotnet.md)
+ - [Get started with Azure Cosmos DB for Table](table/quickstart-dotnet.md)
+ - [Get started with Azure Cosmos DB for PostgreSQL](postgresql/quickstart-app-stacks-python.md)
cosmos-db Computed Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/computed-properties.md
The constraints on computed property names are:
Queries in the computed property definition must be valid syntactically and semantically, otherwise the create or update operation fails. Queries should evaluate to a deterministic value for all items in a container. Queries may evaluate to undefined or null for some items, and computed properties with undefined or null values behave the same as persisted properties with undefined or null values when used in queries.
-The constraints on computed property query definitions are:
+The limitations on computed property query definitions are:
-- Queries must specify a FROM clause that represents the root item reference. Examples of supported FROM clauses are `FROM c`, `FROM root c`, and `FROM MyContainer c`.
+- Queries must specify a FROM clause that represents the root item reference. Examples of supported FROM clauses are: `FROM c`, `FROM root c`, and `FROM MyContainer c`.
- Queries must use a VALUE clause in the projection.-- Queries can't use any of the following clauses: WHERE, GROUP BY, ORDER BY, TOP, DISTINCT, OFFSET LIMIT, EXISTS, ALL, and NONE.
+- Queries can't include a JOIN.
+- Queries can't use nondeterministic scalar expressions. Examples of nondeterministic scalar expressions are: GetCurrentDateTime, GetCurrentTimeStamp, GetCurrentTicks, and RAND.
+- Queries can't use any of the following clauses: WHERE, GROUP BY, ORDER BY, TOP, DISTINCT, OFFSET LIMIT, EXISTS, ALL, LAST, FIRST, and NONE.
- Queries can't include a scalar subquery.-- Aggregate functions, spatial functions, nondeterministic functions, and user defined functions aren't supported.
+- Aggregate functions, spatial functions, nondeterministic functions, and user defined functions (UDFs) aren't supported.
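As an illustration of the limitations above, a definition can be sanity-checked on the client before submitting it. This Python sketch is hypothetical: the property name `cp_lowerName`, the `looks_valid` helper, and the naive substring checks are illustrative assumptions, not part of the service API (the service itself validates definitions on create or update).

```python
# Keywords forbidden in computed property queries, per the limitations above.
FORBIDDEN_CLAUSES = ("WHERE", "GROUP BY", "ORDER BY", "TOP", "DISTINCT",
                     "OFFSET", "EXISTS", "ALL", "LAST", "FIRST", "NONE", "JOIN")

# A computed property definition is a name plus a deterministic query
# over the root item reference, projected with VALUE.
computed_property = {
    "name": "cp_lowerName",
    "query": "SELECT VALUE LOWER(c.name) FROM c",
}

def looks_valid(definition: dict) -> bool:
    """Rough client-side check; naive substring matching, not a real parser."""
    query = definition["query"].upper()
    if "FROM" not in query or "VALUE" not in query:
        return False  # must have a root FROM clause and a VALUE projection
    return not any(clause in query for clause in FORBIDDEN_CLAUSES)

print(looks_valid(computed_property))  # True for the sample definition
```

A query such as `SELECT VALUE c.id FROM c WHERE c.id != ''` would be rejected by this check, mirroring the service-side rule that WHERE clauses aren't allowed.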
## Create computed properties
There are a few considerations for indexing computed properties, including:
> [!NOTE] > All computed properties are defined at the top level of the item. The path is always `/<computed property name>`.
+> [!TIP]
+> Every time you update container properties, the old values are overwritten. If you have existing computed properties and want to add new ones, be sure that you add both new and existing computed properties to the collection.
+
+> [!NOTE]
+> When the definition of an indexed computed property is modified, it's not automatically reindexed. To index the modified computed property, you'll first need to drop the computed property from the index. Then after the reindexing is completed, add the computed property back to the index policy.
+>
+> If you want to delete a computed property, you'll first need to remove it from the index policy.
+
+
### Add a single index for computed properties

To add a single index for a computed property named `cp_myComputedProperty`:
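One possible shape for such an index, sketched here as a Python dict: the surrounding policy values (`indexingMode`, the wildcard path, and the `_etag` exclusion) are assumptions based on common Cosmos DB indexing-policy conventions rather than values taken from this article. Recall from the note above that a computed property's path is always `/<computed property name>`.

```python
# Hypothetical indexing-policy fragment adding a computed property
# to the included paths alongside the usual wildcard path.
indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [
        {"path": "/*"},
        # Computed properties are defined at the top level of the item.
        {"path": "/cp_myComputedProperty/?"},
    ],
    "excludedPaths": [
        {"path": "/\"_etag\"/?"},
    ],
}

print(indexing_policy["includedPaths"][1]["path"])  # /cp_myComputedProperty/?
```

This fragment would be supplied as the container's indexing policy when creating or updating the container with your SDK of choice.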
Adding computed properties to a container doesn't consume RUs. Write operations
## Related content - [Manage indexing policies](../how-to-manage-indexing-policy.md)-- [Model document data](../../modeling-data.md)
+- [Model document data](../../modeling-data.md)
cosmos-db Vector Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md
+
+ Title: Vector database
+
+description: Use Retrieval Augmented Generation (RAG) and vector search to ground your Azure OpenAI models with data stored in Azure Cosmos DB as a vector database.
++++ Last updated : 10/31/2023++
+# Using Azure Cosmos DB as a vector database
++
+You have likely considered augmenting your applications with Large Language Models (LLMs) that can access your own data store through Retrieval Augmented Generation (RAG). This approach allows you to:
+
+- Generate contextually relevant and accurate responses to user prompts from AI models
+- Overcome ChatGPT, GPT-3.5, or GPT-4's token limits
+- Reduce the costs from frequent fine-tuning on updated data
+
+Some RAG implementation tutorials demonstrate integrating vector databases. Instead of adding a separate vector database to your existing tech stack, you can achieve the same outcome using Azure Cosmos DB with Azure OpenAI Service and optionally Azure Cognitive Search when working with multi-modal data.
+
+Here are some solutions:
+
+| | Description |
+| | |
+| **[Azure Cosmos DB for NoSQL with Azure Cognitive Search](#azure-cosmos-db-for-nosql-and-azure-cognitive-search)**. | Augment your Azure Cosmos DB data with semantic and vector search capabilities of Azure Cognitive Search. |
+| **[Azure Cosmos DB for MongoDB vCore](#azure-cosmos-db-for-mongodb-vcore)**. | Featuring native support for vector search, store your application data and vector embeddings together in a single MongoDB-compatible service. |
+| **[Azure Cosmos DB for PostgreSQL](#azure-cosmos-db-for-postgresql)**. | Offering native support for vector search, you can store your data and vectors together in a scalable PostgreSQL offering. |
+
+## Related concepts
+
+You might first want to ensure that you understand the following concepts:
+
+- Grounding LLMs
+- Retrieval Augmented Generation (RAG)
+- Embeddings
+- Vector search
+- Prompt engineering
+
+RAG harnesses LLMs and external knowledge to effectively handle custom data or domain-specific knowledge. It involves extracting pertinent information from a custom data source and integrating it into the model request through prompt engineering.
+
+A robust mechanism is necessary to identify the most relevant data from the custom source that can be passed to the LLM. This mechanism allows you to optimize for the LLM's limit on the number of tokens per request. This limitation is where embeddings play a crucial role. By converting the data in your database into embeddings and storing them as vectors for future use, you gain the advantage of capturing the semantic meaning of the text, going beyond mere keywords to comprehend the context.
+
+Prior to sending a request to the LLM, the user input/query/request is also transformed into an embedding, and vector search techniques are employed to locate the most similar embeddings within the database. This technique enables the identification of the most relevant data records in the database. These retrieved records are then supplied as input to the LLM request using prompt engineering.
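The flow described in the two paragraphs above can be sketched end to end. This is a toy, in-memory illustration: `embed` is a hypothetical stand-in for a real embedding model (such as an Azure OpenAI embeddings deployment), and the documents are made up.

```python
import math

def embed(text: str) -> list[float]:
    # Hypothetical embedding: a character-frequency vector over a-z.
    # A real implementation would call an embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity: the closer to 1.0, the more similar the vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Embed documents once and store the vectors alongside the data.
documents = ["shipping policy for returns", "gpu pricing tiers", "return window is 30 days"]
index = [(doc, embed(doc)) for doc in documents]

# At request time, embed the user query and rank stored vectors by similarity.
query_vec = embed("how do I return an item?")
best = max(index, key=lambda pair: cosine(query_vec, pair[1]))[0]
# `best` would then be supplied to the LLM request via prompt engineering.
```

In practice the similarity search runs inside the database or search service rather than in application code, but the shape of the computation is the same.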
+
+Here are multiple ways to implement RAG on your data stored in Azure Cosmos DB, thus achieving the same outcome as using a vector database.
+
+## Azure Cosmos DB for NoSQL and Azure Cognitive Search
+
+Implement RAG patterns with Azure Cosmos DB for NoSQL and Azure Cognitive Search. This approach enables powerful integration of your data residing in Azure Cosmos DB for NoSQL into your AI-oriented applications. Azure Cognitive Search empowers you to efficiently index and query high-dimensional vector data, allowing you to use Azure Cosmos DB for NoSQL for the same purpose as a vector database.
+
+### Code samples
+
+- [.NET RAG Pattern retail reference solution for NoSQL](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore)
+- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch)
+- [.NET tutorial - recipe chatbot w/ Semantic Kernel](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch_SemanticKernel)
+- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-NoSQL_CognitiveSearch)
+
+## Azure Cosmos DB for MongoDB vCore
+
+Use the native vector search feature in Azure Cosmos DB for MongoDB vCore, which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the need to migrate your data to costlier alternative vector databases and provides seamless integration with your AI-driven applications.
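As a sketch of what such a query can look like, the following Python dict builds a `$search` aggregation stage in the `cosmosSearch` shape used by vCore vector search. The field name `vectorContent`, the query vector, and the value of `k` are hypothetical, and the exact operator shape should be confirmed against the vCore vector search documentation linked below.

```python
# Query embedding produced by an embedding model (illustrative values).
query_embedding = [0.01, -0.02, 0.03]

# Hypothetical vector search stage for a MongoDB vCore aggregation pipeline.
vector_search_stage = {
    "$search": {
        "cosmosSearch": {
            "vector": query_embedding,
            "path": "vectorContent",  # document field holding the stored embedding
            "k": 5,                   # number of nearest neighbors to return
        },
        "returnStoredSource": True,
    }
}

# With pymongo, this stage would run as:
#   results = collection.aggregate([vector_search_stage])
```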
+
+### Code samples
+
+- [.NET RAG Pattern retail reference solution](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore)
+- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore)
+- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore)
+
+## Azure Cosmos DB for PostgreSQL
+
+Use the native vector search feature in Azure Cosmos DB for PostgreSQL, which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the need to migrate your data to costlier alternative vector databases and provides seamless integration with your AI-driven applications.
+
+### Code samples
+
+- Python: [Python notebook tutorial - food review chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-PostgreSQL_CognitiveSearch)
+
+## Related content
+
+- [Vector search with Azure Cognitive Search](../search/vector-search-overview.md)
+- [Vector search with Azure Cosmos DB for MongoDB vCore](mongodb/vcore/vector-search.md)
+- [Vector search with Azure Cosmos DB PostgreSQL](postgresql/howto-use-pgvector.md)
+- Learn more about [Azure OpenAI embeddings](../ai-services/openai/concepts/understand-embeddings.md)
+- Learn how to [generate embeddings using Azure OpenAI](../ai-services/openai/tutorials/embeddings.md)
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-search.md
- Title: Vector search using Azure Cosmos DB-
-description: Use Retrieval Augmented Generation (RAG) and vector search to ground your Azure OpenAI models with data stored in Azure Cosmos DB.
---- Previously updated : 09/20/2023--
-# Vector search with data in Azure Cosmos DB
--
-The Large Language Models (LLMs) in Azure OpenAI are incredibly powerful tools that can take your AI-powered applications to the next level. The utility of LLMs can increase significantly when the models can have access to the right data, at the right time, from your application's data store. This process is known as Retrieval Augmented Generation (RAG) and there are many ways to do this today with Azure Cosmos DB.
-
-In this article, we review key concepts for RAG and then provide links to tutorials and sample code that demonstrate some of most powerful RAG patterns using *vector search* to bring the most semantically relevant data to your LLMs. These tutorials can help you become comfortable with using your Azure Cosmos DB data in Azure OpenAI models.
-
-To jump right into tutorials and sample code for RAG patterns with Azure Cosmos DB, use the following links:
-
-| | Description |
-| | |
-| **[Azure Cosmos DB for NoSQL with Azure Cognitive Search](#azure-cosmos-db-for-nosql-and-azure-cognitive-search)**. | Augment your Azure Cosmos DB data with semantic and vector search capabilities of Azure Cognitive Search. |
-| **[Azure Cosmos DB for Mongo DB vCore](#azure-cosmos-db-for-mongodb-vcore)**. | Featuring native support for vector search, store your application data and vector embeddings together in a single MongoDB-compatible service. |
-| **[Azure Cosmos DB for PostgreSQL](#azure-cosmos-db-for-postgresql)**. | Offering native support vector search, you can store your data and vectors together in a scalable PostgreSQL offering. |
-
-## Key concepts
-
-This section includes key concepts that are critical to implementing RAG with Azure Cosmos DB and Azure OpenAI.
-
-### Retrieval Augmented Generation (RAG) <a id="retrieval-augmented-generation"></a>
-
-RAG involves the process of retrieving supplementary data to provide the LLM with the ability to use this data when it generates responses. When presented with a user's question or prompt, RAG aims to select the most pertinent and current domain-specific knowledge from external sources, such as articles or documents. This retrieved information serves as a valuable reference for the model when generating its response. For example, a simple RAG pattern using Azure Cosmos DB for NoSQL could be:
-
-1. Insert data into an Azure Cosmos DB for NoSQL database and collection.
-2. Create embeddings from a data property using an Azure OpenAI Embeddings model
-3. Link the Azure Cosmos DB for NoSQL to Azure Cognitive Search (for vector indexing/search)
-4. Create a vector index over the embeddings properties.
-5. Create a function to perform vector similarity search based on a user prompt.
-6. Perform question answering over the data using an Azure OpenAI Completions model
-
-The RAG pattern, with prompt engineering, serves the purpose of enhancing response quality by offering more contextual information to the model. RAG enables the model to apply a broader knowledge base by incorporating relevant external sources into the generation process, resulting in more comprehensive and informed responses. For more information on "grounding" LLMs, see [grounding LLMs - Microsoft Community Hub](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857)
-
-### Prompts and prompt engineering
-
-A prompt refers to a specific text or information that can serve as an instruction to an LLM, or as contextual data that the LLM can build upon. A prompt can take various forms, such as a question, a statement, or even a code snippet. Prompts can serve as:
--- **Instructions** provide directives to the LLM-- **Primary content**: gives information to the LLM for processing-- **Examples**: help condition the model to a particular task or process-- **Cues**: direct the LLM's output in the right direction-- **Supporting content**: represents supplemental information the LLM can use to generate output-
-The process of creating good prompts for a scenario is called *prompt engineering*. For more information about prompts and best practices for prompt engineering, see [Azure OpenAI Service - Azure OpenAI | Microsoft Learn](../ai-services/openai/concepts/prompt-engineering.md).
-
-### Tokens
-
-Tokens are small chunks of text generated by splitting the input text into smaller segments. These segments can either be words or groups of characters, varying in length from a single character to an entire word. For instance, the word `hamburger` would be divided into tokens such as `ham`, `bur`, and `ger` while a short and common word like `pear` would be considered a single token.
-
-In Azure OpenAI, input text provided to the API is turned into tokens (tokenized). The number of tokens processed in each API request depends on factors such as the length of the input, output, and request parameters. The quantity of tokens being processed also impacts the response time and throughput of the models. There are limits to the amount tokens each model can take in a single request/response from Azure OpenAI. [Learn more about Azure OpenAI Service quotas and limits here](../ai-services/openai/quotas-limits.md)
-
-### Vectors
-
-Vectors are ordered arrays of numbers (typically floats) that can represent information about some data. For example, an image can be represented as a vector of pixel values, or a string of text can be represented as a vector or ASCII values. The process for turning data into a vector is called *vectorization*.
-
-### Embeddings
-
-Embeddings are vectors that represent important features of data. Embeddings are often learned by using a deep learning model, and machine learning and AI models utilized them as features. Embeddings can also capture semantic similarity between similar concepts. For example, in generating an embedding for the words `person` and `human`, we would expect their embeddings (vector representation) to be similar in value since the words are also semantically similar.
-
- Azure OpenAI features models for creating embeddings from text data. The service breaks text out into tokens and generates embeddings using models pretrained by OpenAI. [Learn more about creating embeddings with Azure OpenAI here.](../ai-services/openai//concepts/understand-embeddings.md)
-
-### Vector search
-
-Vector search refers to the process of finding all vectors in a dataset that are semantically similar to a specific query vector. Therefore, a query vector for the word `human`, and I search the entire dictionary for semantically similar words, I would expect to find the word `person` as a close match. This closeness, or distance, is measured using a similarity metric such as cosine similarity. The more similar the vectors are, the smaller the distance between them.
-
-Consider a scenario where you have a query over millions of document and you want to find the most similar document in your data. You can create embeddings for your data and the query document using Azure OpenAI. Then, you can perform a vector search to find the most similar documents from your dataset. However, performing a vector search across a few examples is trivial. Performing this same search across thousands or millions of data points becomes challenging. There are also trade-offs between exhaustive search and approximate nearest neighbor (ANN) search methods including latency, throughput, accuracy, and cost, all of which can depend on the requirements of your application.
-
-Adding Azure Cosmos DB vector search capabilities to Azure OpenAI Service enables you to store long term memory and chat history to improve your Large Language Model (LLM) solution. Vector search allows you to efficiently query back the most relevant context to personalize Azure OpenAI prompts in a token-efficient manner. Storing vector embeddings alongside the data in an integrated solution minimizes the need to manage data synchronization and accelerates your time-to-market for AI app development.
-
-## Using Azure Cosmos DB data with Azure OpenAI
-
-The RAG pattern harnesses external knowledge and models to effectively handle custom data or domain-specific knowledge. It involves extracting pertinent information from an external data source and integrating it into the model request through prompt engineering.
-
-A robust mechanism is necessary to identify the most relevant data from the external source that can be passed to the model considering the limitation of a restricted number of tokens per request. This limitation is where embeddings play a crucial role. By converting the data in our database into embeddings and storing them as vectors for future use, we apply the advantage of capturing the semantic meaning of the text, going beyond mere keywords to comprehend the context.
-
-Prior to sending a request to Azure OpenAI, the user input/query/request is also transformed into an embedding, and vector search techniques are employed to locate the most similar embeddings within the database. This technique enables the identification of the most relevant data records in the database. These retrieved records are then supplied as input to the model request using prompt engineering.
-
-There are multiple ways to use RAG and vector search with your data stored in Azure Cosmos DB.
--
-## Azure Cosmos DB for NoSQL and Azure Cognitive Search
-
-Implement RAG-patterns with Azure Cosmos DB for NoSQL and Azure Cognitive Search. This approach enables powerful integration of your data residing in Azure Cosmos DB for NoSQL into your AI-oriented applications. Azure Cognitive Search empowers you to efficiently index, and query high-dimensional vector data, which is stored in Azure Cosmos DB for NoSQL.
-
-### Code samples
--- [.NET RAG Pattern retail reference solution](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore)-- [.NET samples - Hackathon project](https://github.com/Azure/Build-Modern-AI-Apps-Hackathon)-- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch)-- [.NET tutorial - recipe chatbot w/ Semantic Kernel](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch_SemanticKernel)-- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-NoSQL_CognitiveSearch)-
-## Azure Cosmos DB for MongoDB vCore
-
-RAG can be applied using the native vector search feature in Azure Cosmos DB for MongoDB vCore, facilitating a smooth merger of your AI-centric applications with your stored data in Azure Cosmos DB. The use of vector search offers an efficient way to store, index, and search high-dimensional vector data directly within Azure Cosmos DB for MongoDB vCore alongside other application data. This approach removes the necessity of migrating your data to costlier alternatives for vector search.
-
-### Code samples
--- [.NET RAG Pattern retail reference solution](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore)-- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore)-- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore)-
-## Azure Cosmos DB for PostgreSQL
-
-You can employ RAG by utilizing native vector search within Azure Cosmos DB for PostgreSQL. This strategy provides a seamless integration of your AI-driven applications, including the ones developed using Azure OpenAI embeddings, with your data housed in Azure Cosmos DB. By taking advantage of vector search, you can effectively store, index, and execute queries on high-dimensional vector data directly within Azure Cosmos DB for PostgreSQL along with the rest of your data.
-
-### Code samples
--- Python: [Python notebook tutorial - food review chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-PostgreSQL_CognitiveSearch)--
-## Next steps
---- [Vector search with Azure Cognitive Search](../search/vector-search-overview.md)-- [Vector search with Azure Cosmos DB for MongoDB vCore](mongodb/vcore/vector-search.md)-- [Vector search with Azure Cosmos DB PostgreSQL](postgresql/howto-use-pgvector.md)-- Learn more about [Azure OpenAI embeddings](../ai-services/openai/concepts/understand-embeddings.md)-- Learn how to [generate embeddings using Azure OpenAI](../ai-services/openai/tutorials/embeddings.md)-----
data-factory Connector Microsoft Fabric Lakehouse Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse-files.md
Title: Copy data in Microsoft Fabric Lakehouse Files (Preview)
+ Title: Copy and transform data in Microsoft Fabric Lakehouse Files (Preview)
-description: Learn how to copy data to and from Microsoft Fabric Lakehouse Files (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines.
+description: Learn how to copy and transform data to and from Microsoft Fabric Lakehouse Files (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines.
Previously updated : 09/28/2023 Last updated : 11/01/2023
-# Copy data in Microsoft Fabric Lakehouse Files (Preview) using Azure Data Factory or Azure Synapse Analytics
+# Copy and transform data in Microsoft Fabric Lakehouse Files (Preview) using Azure Data Factory or Azure Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use Copy Activity to copy data from and to Microsoft Fabric Lakehouse Files (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+The Microsoft Fabric Lakehouse serves as a data architecture platform designed to store, manage, and analyze both structured and unstructured data within a single location. This article outlines how to use Copy Activity to copy data from and to Microsoft Fabric Lakehouse Files (Preview) and use Data Flow to transform data in Microsoft Fabric Lakehouse Files (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
> [!IMPORTANT] > This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
This Microsoft Fabric Lakehouse Files connector is supported for the following c
| Supported capabilities|IR | Managed private endpoint| || --| --| |[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ |
<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
This section describes the resulting behavior of the copy operation for differen
| false |flattenHierarchy | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;autogenerated name for File2<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. | | false |mergeFiles | Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File2<br/>&nbsp;&nbsp;&nbsp;&nbsp;Subfolder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File3<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File4<br/>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;File5 | The target Folder1 is created with the following structure: <br/><br/>Folder1<br/>&nbsp;&nbsp;&nbsp;&nbsp;File1 + File2 contents are merged into one file with an autogenerated file name. autogenerated name for File1<br/><br/>Subfolder1 with File3, File4, and File5 isn't picked up. |
+## Mapping data flow properties
+When transforming data in mapping data flow, you can read and write to files in Microsoft Fabric Lakehouse. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows.
+
+### Microsoft Fabric Lakehouse Files as a source type
+
+Microsoft Fabric Lakehouse Files connector supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Delimited text format](format-delimited-text.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+
+### Microsoft Fabric Lakehouse Files as a sink type
+
+Microsoft Fabric Lakehouse Files connector supports the following file formats. Refer to each article for format-based settings.
+
+- [Avro format](format-avro.md)
+- [Delimited text format](format-delimited-text.md)
+- [JSON format](format-json.md)
+- [ORC format](format-orc.md)
+- [Parquet format](format-parquet.md)
+
## Next steps

For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Connector Microsoft Fabric Lakehouse Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-lakehouse-table.md
Title: Copy data in Microsoft Fabric Lakehouse Table (Preview)
+ Title: Copy and Transform data in Microsoft Fabric Lakehouse Table (Preview)
-description: Learn how to copy data to and from Microsoft Fabric Lakehouse Table (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines.
+description: Learn how to copy and transform data to and from Microsoft Fabric Lakehouse Table (Preview) using Azure Data Factory or Azure Synapse Analytics pipelines.
Previously updated : 09/28/2023 Last updated : 11/01/2023
-# Copy data in Microsoft Fabric Lakehouse Table (Preview) using Azure Data Factory or Azure Synapse Analytics
+# Copy and Transform data in Microsoft Fabric Lakehouse Table (Preview) using Azure Data Factory or Azure Synapse Analytics
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-This article outlines how to use Copy Activity to copy data from and to Microsoft Fabric Lakehouse Table (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+The Microsoft Fabric Lakehouse serves as a data architecture platform designed to store, manage, and analyze both structured and unstructured data in a single location. This article outlines how to use Copy Activity to copy data from and to Microsoft Fabric Lakehouse Table (Preview) and use Data Flow to transform data in Microsoft Fabric Lakehouse Table (Preview). To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
> [!IMPORTANT]
> This connector is currently in preview. You can try it out and give us feedback. If you want to take a dependency on preview connectors in your solution, please contact [Azure support](https://azure.microsoft.com/support/).
This Microsoft Fabric Lakehouse Table connector is supported for the following capabilities:
| Supported capabilities|IR | Managed private endpoint|
|--|--|--|
|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
+|[Mapping data flow](concepts-data-flow-overview.md) (source/sink)|&#9312; |✓ |
<small>*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*</small>
To copy data from Microsoft Fabric Lakehouse Table, set the **type** property in
} ] ```
+## Mapping data flow properties
+
+When transforming data in mapping data flow, you can read and write to tables in Microsoft Fabric Lakehouse. For more information, see the [source transformation](data-flow-source.md) and [sink transformation](data-flow-sink.md) in mapping data flows.
+
+### Microsoft Fabric Lakehouse Table as a source type
+
+There are no configurable properties under source options.
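+Even with no configurable options, the source still appears as a stream in the generated data flow script. The following is a minimal sketch that mirrors the script conventions used by the sink example later in this article; the stream name `LakehouseTableSource` is a placeholder:
+
+```
+source(allowSchemaDrift: true,
+    validateSchema: false) ~> LakehouseTableSource
+```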
+
+### Microsoft Fabric Lakehouse Table as a sink type
+
+The following properties are supported in the Mapping Data Flows **sink** section:
+
+| Name | Description | Required | Allowed values | Data flow script property |
+| - | -- | -- | -- | - |
+| Update method | When you select "Allow insert" alone or when you write to a new delta table, the target receives all incoming rows regardless of the Row policies set. If your data contains rows of other Row policies, they need to be excluded using a preceding Filter transform. <br><br> When all Update methods are selected a Merge is performed, where rows are inserted/deleted/upserted/updated as per the Row Policies set using a preceding Alter Row transform. | yes | `true` or `false` | insertable <br> deletable <br> upsertable <br> updateable |
+| Optimized Write | Achieve higher throughput for the write operation by optimizing the internal shuffle in Spark executors. As a result, you might notice fewer partitions and files that are of a larger size. | no | `true` or `false` | optimizedWrite: true |
+| Auto Compact | After any write operation has completed, Spark automatically executes the `OPTIMIZE` command to reorganize the data, resulting in more partitions if necessary, for better read performance in the future. | no | `true` or `false` | autoCompact: true |
+| Merge Schema | The merge schema option allows schema evolution: any columns that are present in the current incoming stream but not in the target Delta table are automatically added to its schema. This option is supported across all update methods. | no | `true` or `false` | mergeSchema: true |
+
+**Example: Microsoft Fabric Lakehouse Table sink**
+
+```
+sink(allowSchemaDrift: true,
+    validateSchema: false,
+    input(
+        CustomerID as string,
+        NameStyle as string,
+        Title as string,
+        FirstName as string,
+        MiddleName as string,
+        LastName as string,
+        Suffix as string,
+        CompanyName as string,
+        SalesPerson as string,
+        EmailAddress as string,
+        Phone as string,
+        PasswordHash as string,
+        PasswordSalt as string,
+        rowguid as string,
+        ModifiedDate as string
+    ),
+    deletable:false,
+    insertable:true,
+    updateable:false,
+    upsertable:false,
+    optimizedWrite: true,
+    mergeSchema: true,
+    autoCompact: true,
+    skipDuplicateMapInputs: true,
+    skipDuplicateMapOutputs: true) ~> CustomerTable
+
+```
+ ## Next steps
data-factory Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-overview.md
Previously updated : 06/05/2023 Last updated : 10/31/2023
data-manager-for-agri Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md
Previously updated : 08/23/2023 Last updated : 11/1/2023
Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st
[!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)]
+## October 2023
+
+### Azure portal experience enhancement:
+We released a new user-friendly experience for installing the ISV solutions that are available to Azure Data Manager for Agriculture users. You can now go to your Azure Data Manager for Agriculture instance in the Azure portal to view and install available solutions in a seamless experience. Today, the available ISV solutions are from Bayer AgPowered services; you can see the marketplace listing [here](https://azuremarketplace.microsoft.com/marketplace/apps?search=bayer&page=1). You can learn more about installing ISV solutions [here](how-to-set-up-isv-solution.md).
## July 2023

### Weather API update:
-We deprecated the old weather APIs from API version 2023-07-01. The old weather APIs have been replaced with new simple yet powerful provider agnostic weather APIs. Have a look at the API documentation [here](/rest/api/data-manager-for-agri/#weather).
+We deprecated the old weather APIs from API version 2023-07-01. The old weather APIs are replaced with new simple yet powerful provider agnostic weather APIs. Have a look at the API documentation [here](/rest/api/data-manager-for-agri/#weather).
### New farm operations connector:
-We've added support for Climate FieldView as a built-in data source. You can now auto sync planting, application and harvest activity files from FieldView accounts directly into Azure Data Manager for Agriculture. Learn more about this [here](concepts-farm-operations-data.md).
+We added support for Climate FieldView as a built-in data source. You can now auto sync planting, application and harvest activity files from FieldView accounts directly into Azure Data Manager for Agriculture. Learn more about this [here](concepts-farm-operations-data.md).
### Common Data Model now with geo-spatial support:
-WeΓÇÖve updated our data model to improve flexibility. The boundary object has been deprecated in favor of a geometry property that is now supported in nearly all data objects. This change brings consistency to how space is handled across hierarchy, activity and observation themes. It allows for more flexible integration when ingesting data from a provider with strict hierarchy requirements. You can now sync data that may not perfectly align with an existing hierarchy definition and resolve the conflicts with spatial overlap queries. Learn more [here](concepts-hierarchy-model.md).
+We updated our data model to improve flexibility. The boundary object has been deprecated in favor of a geometry property that is now supported in nearly all data objects. This change brings consistency to how space is handled across hierarchy, activity and observation themes. It allows for more flexible integration when ingesting data from a provider with strict hierarchy requirements. You can now sync data that might not perfectly align with an existing hierarchy definition and resolve the conflicts with spatial overlap queries. Learn more [here](concepts-hierarchy-model.md).
## June 2023
In Azure Data Manager for Agriculture Preview, you can monitor how and when your
You can connect to Azure Data Manager for Agriculture service from your virtual network via a private endpoint. You can then limit access to your Azure Data Manager for Agriculture Preview instance over these private IP addresses. [Private Links](how-to-set-up-private-links.md) are now available for your use. ### BYOL for satellite imagery
-To support scalable ingestion of geometry-clipped imagery, we've partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. Read more about our satellite connector [here](concepts-ingest-satellite-imagery.md).
+To support scalable ingestion of geometry-clipped imagery, we partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. Read more about our satellite connector [here](concepts-ingest-satellite-imagery.md).
## March 2023
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
Title: Support for the Defender for Containers plan
-description: Review support requirements for the Defender for Containers plan in Microsoft Defender for Cloud.
+ Title: Containers support matrix in Defender for Cloud
+description: Review support requirements for container capabilities in Microsoft Defender for Cloud.
Last updated 09/06/2023
-# Defender for Containers support
+# Containers support matrix in Defender for Cloud
-This article summarizes support information for the [Defender for Containers plan](defender-for-containers-introduction.md) in Microsoft Defender for Cloud.
+This article summarizes support information for Container capabilities in Microsoft Defender for Cloud.
> [!NOTE]
> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

## Azure (AKS)
-| Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Pricing Tier | Azure clouds availability |
+| Domain - Feature | Supported Resources | Linux release state | Windows release state | Agentless/Agent-based | Plans | Azure clouds availability |
|--|--|--|--|--|--|--|
-| [Agentless discovery for Kubernetes](defender-for-containers-introduction.md#agentless-discovery-for-kubernetes) | ACR, AKS | GA | GA | Agentless | Defender for Containers or Defender CSPM | Azure commercial clouds |
-| Compliance-Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Microsoft Azure operated by 21Vianet |
-| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) - registry scan [OS packages](#registries-and-images-support-for-azurepowered-by-qualys) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) (powered by Qualys) -registry scan [language packages](#registries-and-images-support-for-azurepowered-by-qualys) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| [Vulnerability assessment (powered by Qualys) - running images](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) | AKS | GA | Preview | Defender agent | Defender for Containers | Commercial clouds |
-| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - registry scan | ACR, Private ACR | Preview | | Agentless | Defender for Containers | Commercial clouds |
-| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) powered by MDVM - running images | AKS | Preview | | Defender agent | Defender for Containers | Commercial clouds |
-| [Hardening (control plane)](defender-for-containers-architecture.md) | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Compliance - Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Microsoft Azure operated by 21Vianet |
+| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) - registry scan (powered by Qualys) [OS packages](#registries-and-images-support-for-azurepowered-by-qualys) | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| [Vulnerability assessment](defender-for-containers-vulnerability-assessment-azure.md) - registry scan (powered by Qualys) [language packages](#registries-and-images-support-for-azurepowered-by-qualys) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| [Vulnerability assessment - running images (powered by Qualys)](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) | AKS | GA | Preview | Defender agent | Defender for Containers | Commercial clouds |
+| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) - registry scan (powered by MDVM)| ACR, Private ACR | Preview | | Agentless | Defender for Containers | Commercial clouds |
+| [Vulnerability assessment](agentless-container-registry-vulnerability-assessment.md) - running images (powered by MDVM) | AKS | Preview | | Defender agent | Defender for Containers | Commercial clouds |
+| [Hardening (control plane)](defender-for-containers-architecture.md) | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
| [Hardening (Kubernetes data plane)](kubernetes-workload-protections.md) | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
| [Runtime threat detection](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters) (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Runtime threat detection (workload) | AKS | GA | - | Defender agent | Defender for Containers | Commercial clouds |
-| Discovery/provisioning-Unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Discovery/provisioning-Collecting control plane threat data | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Discovery/provisioning-Defender agent auto provisioning | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Discovery/provisioning-Azure Policy for Kubernetes auto provisioning | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+|Runtime threat detection (workload) | AKS | GA | - | Defender agent | Defender for Containers | Commercial clouds |
+| [Discovery/provisioning - Agentless discovery for Kubernetes](defender-for-containers-introduction.md#agentless-discovery-for-kubernetes) | ACR, AKS | GA | GA | Agentless | Defender for Containers or Defender CSPM | Azure commercial clouds |
+| Discovery/provisioning - Discovery of Unprotected clusters | AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Discovery/provisioning - Collecting control plane threat data | AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Discovery/provisioning - Defender agent auto provisioning | AKS | GA | - | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
+| Discovery/provisioning - Azure Policy for Kubernetes auto provisioning | AKS | GA | - | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
### Registries and images support for Azure - powered by Qualys
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
| Vulnerability Assessment | Registry scan | ECR | Preview | - | Agentless | Defender for Containers |
| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - |
-| Hardening | Kubernetes data plane recommendations | EKS | Preview | - | Azure Policy for Kubernetes | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | EKS | GA| - | Azure Policy for Kubernetes | Defender for Containers |
| Runtime protection| Threat detection (control plane)| EKS | Preview | Preview | Agentless | Defender for Containers |
| Runtime protection| Threat detection (workload) | EKS | Preview | - | Defender agent | Defender for Containers |
| Discovery and provisioning | Discovery of unprotected clusters | EKS | Preview | - | Agentless | Free |
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Private link restrictions
-
-Defender for Containers relies on the Defender agent for several features. The Defender agent doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workstation. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
--
-Allowing data ingestion to occur only through Private Link Scope on your workspace Network Isolation settings, can result in communication failures and partial converge of the Defender for Containers feature set.
-
-Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
-
### Outbound proxy support

Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
Outbound proxy without authentication and outbound proxy with basic authenticati
| Vulnerability Assessment | Registry scan | - | - | - | - | - |
| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | GKE | GA | GA | Agentless | Free |
-| Hardening | Kubernetes data plane recommendations | GKE | Preview | - | Azure Policy for Kubernetes | Defender for Containers |
+| Hardening |Kubernetes data plane recommendations | GKE | GA| - | Azure Policy for Kubernetes | Defender for Containers |
| Runtime protection| Threat detection (control plane)| GKE | Preview | Preview | Agentless | Defender for Containers |
| Runtime protection| Threat detection (workload) | GKE | Preview | - | Defender agent | Defender for Containers |
| Discovery and provisioning | Discovery of unprotected clusters | GKE | Preview | - | Agentless | Free |
Outbound proxy without authentication and outbound proxy with basic authenticati
> [!NOTE]
> For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
-### Private link restrictions
-
-Defender for Containers relies on the Defender agent for several features. The Defender agent doesn't support the ability to ingest data through Private Link. You can disable public access for ingestion, so that only machines that are configured to send traffic through Azure Monitor Private Link can send data to that workstation. You can configure a private link by navigating to **`your workspace`** > **Network Isolation** and setting the Virtual networks access configurations to **No**.
--
-Allowing data ingestion to occur only through Private Link Scope on your workspace Network Isolation settings, can result in communication failures and partial converge of the Defender for Containers feature set.
-
-Learn how to [use Azure Private Link to connect networks to Azure Monitor](../azure-monitor/logs/private-link-security.md).
-
### Outbound proxy support

Outbound proxy without authentication and outbound proxy with basic authentication are supported. Outbound proxy that expects trusted certificates is currently not supported.
Outbound proxy without authentication and outbound proxy with basic authenticati
| Vulnerability Assessment | Registry scan - [language specific packages](#registries-and-images-supporton-premises) | ACR, Private ACR | Preview | - | Agentless | Defender for Containers |
| Vulnerability Assessment | View vulnerabilities for running images | - | - | - | - | - |
| Hardening | Control plane recommendations | - | - | - | - | - |
-| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | Preview | - | Azure Policy for Kubernetes | Defender for Containers |
+| Hardening | Kubernetes data plane recommendations | Arc enabled K8s clusters | GA| - | Azure Policy for Kubernetes | Defender for Containers |
| Runtime protection| Threat detection (control plane)| Arc enabled K8s clusters | Preview | Preview | Defender agent | Defender for Containers |
| Runtime protection for [supported OS](#registries-and-images-supporton-premises) | Threat detection (workload)| Arc enabled K8s clusters | Preview | - | Defender agent | Defender for Containers |
| Discovery and provisioning | Discovery of unprotected clusters | Arc enabled K8s clusters | Preview | - | Agentless | Free |
Outbound proxy without authentication and outbound proxy with basic authenticati
- Learn how [Defender for Cloud manages and safeguards data](data-security.md).
- Review the [platforms that support Defender for Cloud](security-center-os-coverage.md).
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change |
|--|--|--|
+| [Consolidation of Defender for Cloud's Service Level 2 names](#consolidation-of-defender-for-clouds-service-level-2-names) | November 1, 2023 | December 2023 |
| [General availability of Containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries](#general-availability-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-for-containers-and-defender-for-container-registries) | October 30, 2023 | November 15, 2023 |
| [Changes to how Microsoft Defender for Cloud's costs are presented in Microsoft Cost Management](#changes-to-how-microsoft-defender-for-clouds-costs-are-presented-in-microsoft-cost-management) | October 25, 2023 | November 2023 |
| [Four alerts are set to be deprecated](#four-alerts-are-set-to-be-deprecated) | October 23, 2023 | November 23, 2023 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 |
| [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## Consolidation of Defender for Cloud's Service Level 2 names
+
+**Announcement date: November 1, 2023**
+
+**Estimated date for change: December 2023**
+
+We're consolidating the legacy Service Level 2 names for all Defender for Cloud plans into a single new Service Level 2 name, **Microsoft Defender for Cloud**.
+
+Today, there are four Service Level 2 names: Azure Defender, Advanced Threat Protection, Advanced Data Security, and Security Center. The various meters for Microsoft Defender for Cloud are grouped across these separate Service Level 2 names, creating complexities when using Cost Management + Billing, invoicing, and other Azure billing-related tools.
+
+The change will simplify the process of reviewing Defender for Cloud charges and provide better clarity in cost analysis.
+
+To ensure a smooth transition, we've taken measures to maintain the consistency of the Product/Service name, SKU, and Meter IDs. Impacted customers will receive an informational Azure Service Notification to communicate the changes. No action is necessary from customers.
+
+The change is planned to go into effect on December 1, 2023.
+
+| OLD Service Level 2 name | NEW Service Level 2 name | Service Tier - Service Level 4 (No change) |
+|--|--|--|
+|Advanced Data Security |Microsoft Defender for Cloud|Defender for SQL|
+|Advanced Threat Protection|Microsoft Defender for Cloud|Defender for Container Registries |
+|Advanced Threat Protection|Microsoft Defender for Cloud|Defender for DNS |
+|Advanced Threat Protection|Microsoft Defender for Cloud|Defender for Key Vault|
+|Advanced Threat Protection|Microsoft Defender for Cloud|Defender for Kubernetes|
+|Advanced Threat Protection|Microsoft Defender for Cloud|Defender for MySQL|
+|Advanced Threat Protection|Microsoft Defender for Cloud|Defender for PostgreSQL|
+|Advanced Threat Protection|Microsoft Defender for Cloud|Defender for Resource Manager|
+|Advanced Threat Protection|Microsoft Defender for Cloud|Defender for Storage|
+|Azure Defender |Microsoft Defender for Cloud|Defender for External Attack Surface Management|
+|Azure Defender |Microsoft Defender for Cloud|Defender for Azure Cosmos DB|
+|Azure Defender |Microsoft Defender for Cloud|Defender for Containers|
+|Azure Defender |Microsoft Defender for Cloud|Defender for MariaDB|
+|Security Center |Microsoft Defender for Cloud|Defender for App Service|
+|Security Center |Microsoft Defender for Cloud|Defender for Servers|
+|Security Center |Microsoft Defender for Cloud|Defender CSPM |
## General availability of Containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries

**Announcement date: October 30, 2023**
dev-box Overview What Is Microsoft Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md
Title: What is Microsoft Dev Box?
-description: Learn how Microsoft Dev Box gives self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations.
+description: Microsoft Dev Box provides self-service access to ready-to-code cloud-based workstations. Dev Box supports developer productivity, integrating with tools like Visual Studio.
adobe-target: true
# What is Microsoft Dev Box?
-Microsoft Dev Box gives you self-service access to high-performance, preconfigured, and ready-to-code cloud-based workstations called dev boxes. You can set up dev boxes with tools, source code, and prebuilt binaries that are specific to a project, so developers can immediately start work. If you're a developer, you can use dev boxes in your day-to-day workflows.
+Microsoft Dev Box gives developers self-service access to ready-to-code cloud workstations called dev boxes. You can configure dev boxes with tools, source code, and prebuilt binaries that are specific to a project, so developers can immediately start work. You can create your own customized image, or use a preconfigured image from Azure Marketplace, complete with Visual Studio already installed.
-The Dev Box service was designed with three organizational roles in mind: platform engineers, developer team leads, and developers.
+If you're a developer, you can use multiple dev boxes in your day-to-day workflows. You can access your dev boxes through a remote desktop client, or through a web browser, like any virtual desktop.
+
+The Dev Box service was designed with three organizational roles in mind: platform engineers, development team leads, and developers.
:::image type="content" source="media/overview-what-is-microsoft-dev-box/dev-box-roles.png" alt-text="Diagram that shows roles and responsibilities for dev boxes." border="false":::
Dev Box service configuration begins with the creation of a dev center, which re
Azure network connections enable dev boxes to communicate with your organization's network. The network connection provides a link between the dev center and your organization's virtual networks. In the network connection, you define how a dev box joins Microsoft Entra ID. Use a Microsoft Entra join to connect exclusively to cloud-based resources, or use a Microsoft Entra hybrid join to connect to on-premises resources and cloud-based resources.
-Dev box definitions define the configuration of the dev boxes that are available to users. You can use an image from Azure Marketplace, like the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. Or you can create your own custom image and store it in [Azure Compute Gallery](how-to-configure-azure-compute-gallery.md). Specify a SKU with compute and storage to complete the dev box definition.
+Dev box definitions define the configuration of the dev boxes that are available to users. You can use an image from Azure Marketplace, like the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2 | Hibernate supported** image. Or you can create your own custom image and store it in [Azure Compute Gallery](how-to-configure-azure-compute-gallery.md). Lastly, specify a SKU with compute and storage to complete the dev box definition.
Dev Box projects are the point of access for development teams. You assign the Dev Box User role to a project to give a developer access to the dev box pools that are associated with the project.
education-hub Navigate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/navigate-costs.md
+
+ Title: Track usage and create budgets in Azure for Students
+description: Describes how to track usage and create budgets in Microsoft Cost Management.
++++ Last updated : 10/31/2023++
+# How to track usage and create budgets in Azure for Students
+
+Azure for Students provides $100 in Azure credit to be used for up to one year, and you can renew each year you're an active student to receive an additional $100. However, students need to learn how to manage and conserve their credit effectively to keep their services running throughout the year.
+
+## Track your usage in the Education Hub
+
+Using the Azure Education Hub, you can keep track of your usage while on Azure for Students. The Overview page contains details about your Azure for Students subscription, such as monthly and aggregate usage and a countdown until your next renewal.
++
+Additionally, you can select 'View cost details', which takes you to Microsoft Cost Management (MCM). With MCM, you can explore your services and their accumulated usage in more detail.
++
+## Create budgets to help conserve your Azure for Students credit
+
+[![Budget](https://markdown-videos-api.jorgenkh.no/url?url=https%3A%2F%2Fyoutu.be%2FUrkHiUx19Po)](https://youtu.be/UrkHiUx19Po)
+
+Read more in the tutorial [Create and Manage Budgets](https://learn.microsoft.com/azure/cost-management-billing/costs/tutorial-acm-create-budgets).
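The budget described above can also be sketched from the Azure CLI. This is an illustrative sketch, not part of the original article: the budget name, amount, and dates below are hypothetical, and `az consumption budget create` requires an authenticated Azure CLI session, so the call is skipped when `az` isn't installed.

```shell
# Hypothetical values: alert when half of the $100 Azure for Students credit is spent.
BUDGET_NAME="students-credit-budget"
AMOUNT=50                      # threshold in USD (illustrative)
START_DATE=$(date +%Y-%m-01)   # budgets start on the first day of a month

# The az call only runs when the Azure CLI is available and you're signed in.
if command -v az >/dev/null 2>&1; then
  az consumption budget create \
    --budget-name "$BUDGET_NAME" \
    --amount "$AMOUNT" \
    --category cost \
    --time-grain monthly \
    --start-date "$START_DATE" \
    --end-date "2025-12-31"
fi
```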
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Set up a course, allocate credit, and invite students](create-assignment-allocate-credit.md)
+
frontdoor Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/domain.md
Previously updated : 03/10/2023 Last updated : 10/31/2023
When you add a domain to your Azure Front Door profile, you configure two record
## Domain validation
-All domains added to Azure Front Door must be validated. Validation helps to protect you from accidental misconfiguration, and also helps to protect other people from domain spoofing. In some situation, domains can be *pre-validated* by another Azure service. Otherwise, you need to follow the Azure Front Door domain validation process to prove your ownership of the domain name.
+All domains added to Azure Front Door must be validated. Validation helps to protect you from accidental misconfiguration, and also helps to protect other people from domain spoofing. In some situations, domains can be *prevalidated* by another Azure service. Otherwise, you need to follow the Azure Front Door domain validation process to prove your ownership of the domain name.
-* **Azure pre-validated domains** are domains that have been validated by another supported Azure service. If you onboard and validate a domain to another Azure service, and then configure Azure Front Door later, you might work with a pre-validated domain. You don't need to validate the domain through Azure Front Door when you use this type of domain.
+* **Azure pre-validated domains** are domains that have been validated by another supported Azure service. If you onboard and validate a domain to another Azure service, and then configure Azure Front Door later, you might work with a prevalidated domain. You don't need to validate the domain through Azure Front Door when you use this type of domain.
> [!NOTE] > Azure Front Door currently only accepts pre-validated domains that have been configured with [Azure Static Web Apps](https://azure.microsoft.com/products/app-service/static/).
The following table lists the validation states that a domain might show.
|--|--| | Submitting | The custom domain is being created. <br /><br /> Wait until the domain resource is ready. | | Pending | The DNS TXT record value has been generated, and Azure Front Door is ready for you to add the DNS TXT record. <br /><br /> Add the DNS TXT record to your DNS provider and wait for the validation to complete. If the status remains **Pending** even after the TXT record has been updated with the DNS provider, select **Regenerate** to refresh the TXT record then add the TXT record to your DNS provider again. |
-| Pending re-validation | The managed certificate is less than 45 days from expiring. <br /><br /> If you have a CNAME record already pointing to the Azure Front Door endpoint, no action is required for certificate renewal. If the custom domain is pointed to another CNAME record, select the **Pending re-validation** status, and then select **Regenerate** on the *Validate the custom domain* page. Lastly, select **Add** if you're using Azure DNS or manually add the TXT record with your own DNS providerΓÇÖs DNS management. |
-| Refreshing validation token | A domain goes into a *Refreshing Validation Token* state for a brief period after the **Regenerate** button is selected. Once a new TXT record value is issued, the state will change to **Pending**. <br /> No action is required. |
+| Pending revalidation | The managed certificate is less than 45 days from expiring. <br /><br /> If you have a CNAME record already pointing to the Azure Front Door endpoint, no action is required for certificate renewal. If the custom domain is pointed to another CNAME record, select the **Pending revalidation** status, and then select **Regenerate** on the *Validate the custom domain* page. Lastly, select **Add** if you're using Azure DNS or manually add the TXT record with your own DNS provider's DNS management. |
+| Refreshing validation token | A domain goes into a *Refreshing Validation Token* state for a brief period after the **Regenerate** button is selected. Once a new TXT record value is issued, the state changes to **Pending**. <br /> No action is required. |
| Approved | The domain has been successfully validated, and Azure Front Door can accept traffic that uses this domain. <br /><br /> No action is required. | | Rejected | The certificate provider/authority has rejected the issuance for the managed certificate. For example, the domain name might be invalid. <br /><br /> Select the **Rejected** link and then select **Regenerate** on the *Validate the custom domain* page, as shown in the screenshots below this table. Then, select **Add** to add the TXT record in the DNS provider. | | Timeout | The TXT record wasn't added to your DNS provider within seven days, or an invalid DNS TXT record was added. <br /><br /> Select the **Timeout** link and then select **Regenerate** on the *Validate the custom domain* page. Then select **Add** to add a new TXT record to the DNS provider. Ensure that you use the updated value. |
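If your DNS zone happens to be hosted in Azure DNS, adding the validation TXT record described in the states above can be sketched from the CLI. The resource group, zone, and token below are illustrative assumptions (use the value Azure Front Door generates for you), and the call is skipped when the Azure CLI isn't installed:

```shell
# Illustrative names; Front Door validation records use a "_dnsauth" prefix.
ZONE="contoso.com"
SUBDOMAIN="www"
RECORD_SET="_dnsauth.$SUBDOMAIN"
TXT_VALUE="_placeholder-validation-token"   # replace with the generated TXT value

if command -v az >/dev/null 2>&1; then
  az network dns record-set txt add-record \
    --resource-group "my-dns-rg" \
    --zone-name "$ZONE" \
    --record-set-name "$RECORD_SET" \
    --value "$TXT_VALUE"
fi
```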
Azure Front Door can automatically manage TLS certificates for subdomains and ap
The process of generating, issuing, and installing a managed TLS certificate can take from several minutes to an hour to complete, and occasionally it can take longer.
+> [!NOTE]
+> Azure Front Door (Standard and Premium) managed certificates are automatically rotated if the domain CNAME record points directly to a Front Door endpoint or points indirectly to a Traffic Manager endpoint. Otherwise, you need to revalidate the domain ownership to rotate the certificates.
+ #### Domain types The following table summarizes the features available with managed TLS certificates when you use different types of domains:
Sometimes, you might need to provide your own TLS certificates. Common scenarios
* You need to use the same TLS certificate on multiple systems. * You use [wildcard domains](front-door-wildcard-domain.md). Azure Front Door doesn't provide managed certificates for wildcard domains.
+> [!NOTE]
+> * As of September 2023, Azure Front Door supports Bring Your Own Certificates (BYOC) for domain ownership validation. Front Door approves the domain ownership if the Common Name (CN) or Subject Alternative Name (SAN) of the certificate matches the custom domain. If you select Azure managed certificate, the domain validation uses the DNS TXT record.
+> * For custom domains created before BYOC-based validation whose domain validation status isn't **Approved**, you need to trigger the auto approval of the domain ownership validation by selecting the **Validation State** and then selecting the **Revalidate** button in the portal. If you use the command-line tool, you can trigger domain validation by sending an empty PATCH request to the domain API.
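The empty PATCH request mentioned in the note can be sketched with `az rest`. The subscription ID, resource names, and API version below are placeholders and assumptions (not from the article), and the call is skipped when the Azure CLI isn't installed:

```shell
# Placeholder identifiers; substitute your own subscription and resource names.
SUB="00000000-0000-0000-0000-000000000000"
RG="my-frontdoor-rg"
PROFILE="my-profile"
DOMAIN="my-custom-domain"
# Assumed API version for Microsoft.Cdn custom domains.
URL="https://management.azure.com/subscriptions/$SUB/resourceGroups/$RG/providers/Microsoft.Cdn/profiles/$PROFILE/customDomains/$DOMAIN?api-version=2023-05-01"

# Sending an empty body triggers revalidation of the domain.
if command -v az >/dev/null 2>&1; then
  az rest --method patch --url "$URL" --body '{}'
fi
```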
+ #### Certificate requirements To use your certificate with Azure Front Door, it must meet the following requirements: -- **Complete certificate chain:** When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. The root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected.
+- **Complete certificate chain:** When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a nonallowed CA, your request is rejected. The root CA must be part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If a certificate without a complete chain is presented, the requests that involve that certificate aren't guaranteed to work as expected.
- **Common name:** The common name (CN) of the certificate must match the domain configured in Azure Front Door. - **Algorithm:** Azure Front Door doesn't support certificates with elliptic curve (EC) cryptography algorithms. - **File (content) type:** Your certificate must be uploaded to your key vault from a PFX file, which uses the `application/x-pkcs12` content type.
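One way to check the common-name requirement locally is with `openssl`. The snippet below is illustrative only: it generates a throwaway self-signed certificate (never usable with Front Door) purely to demonstrate reading the CN back and comparing it with the domain, and it's skipped when `openssl` isn't installed:

```shell
DOMAIN="www.contoso.com"   # illustrative custom domain

if command -v openssl >/dev/null 2>&1; then
  # Throwaway self-signed certificate for demonstration only.
  openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
    -subj "/CN=$DOMAIN" 2>/dev/null

  # Read the CN from the certificate and compare it with the domain.
  CN=$(openssl x509 -in /tmp/demo-cert.pem -noout -subject -nameopt RFC2253 \
       | sed 's/^subject=[ ]*CN=//')
  [ "$CN" = "$DOMAIN" ] && echo "CN matches $DOMAIN"
fi
```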
You can change a domain between using an Azure Front Door-managed certificate an
* It might take up to an hour for the new certificate to be deployed when you switch between certificate types. * If your domain state is *Approved*, switching the certificate type between a user-managed and a managed certificate won't cause any downtime.
-* When switching to a managed certificate, Azure Front Door continues to use the previous certificate until the domain ownership is re-validated and the domain state becomes *Approved*.
-* If you switch from BYOC to managed certificate, domain re-validation is required. If you switch from managed certificate to BYOC, you're not required to re-validate the domain.
+* When switching to a managed certificate, Azure Front Door continues to use the previous certificate until the domain ownership is revalidated and the domain state becomes *Approved*.
+* If you switch from BYOC to managed certificate, domain revalidation is required. If you switch from managed certificate to BYOC, you're not required to revalidate the domain.
### Certificate renewal
However, Azure Front Door won't automatically rotate certificates in the followi
* The custom domain uses an A record. We recommend you always use a CNAME record to point to Azure Front Door. * The custom domain is an [apex domain](apex-domain.md) and uses CNAME flattening.
-If one of the scenarios above applies to your custom domain, then 45 days before the managed certificate expires, the domain validation state becomes *Pending Revalidation*. The *Pending Revalidation* state indicates that you need to create a new DNS TXT record to revalidate your domain ownership.
+If one of the scenarios above applies to your custom domain, then 45 days before the managed certificate expires, the domain validation state becomes *Pending Revalidation*. The *Pending Revalidation* state indicates that you need to create a new DNS TXT record to revalidate your domain ownership.
> [!NOTE] > DNS TXT records expire after seven days. If you previously added a domain validation TXT record to your DNS server, you need to replace it with a new TXT record. Ensure you use the new value, otherwise the domain validation process will fail.
If your domain can't be validated, the domain validation state becomes *Rejected
For more information on the domain validation states, see [Domain validation states](#domain-validation-states).
-#### Renew Azure-managed certificates for domains pre-validated by other Azure services
+#### Renew Azure-managed certificates for domains prevalidated by other Azure services
Azure-managed certificates are automatically rotated by the Azure service that validates the domain.
frontdoor End To End Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/end-to-end-tls.md
For your own custom TLS/SSL certificate:
1. If a specific version is selected, autorotation isn't supported. You'll have to reselect the new version manually to rotate the certificate. It takes up to 24 hours for the new version of the certificate/secret to be deployed.
+ > [!NOTE]
+ > Azure Front Door (Standard and Premium) managed certificates are automatically rotated if the domain CNAME record points directly to a Front Door endpoint or points indirectly to a Traffic Manager endpoint. Otherwise, you need to revalidate the domain ownership to rotate the certificates.
+ You'll need to ensure that the service principal for Front Door has access to the key vault. Refer to how to grant access to your key vault. The updated certificate rollout operation by Azure Front Door won't cause any production downtime, as long as the subject name or subject alternate name (SAN) for the certificate hasn't changed. ## Supported cipher suites
frontdoor How To Configure Https Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-configure-https-custom-domain.md
Previously updated : 02/07/2023 Last updated : 10/31/2023 #Customer intent: As a website owner, I want to add a custom domain to my Front Door configuration so that my users can use my custom domain to access my content.
key-vault Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/private-link.md
az network private-endpoint create --resource-group {RG} --vnet-name {vNet NAME}
``` > [!NOTE]
-> If you delete this HSM the private endpiont will stop working. If your recover (undelete) this HSM later, you must re-create a new private endpoint.
+> If you delete this HSM, the private endpoint will stop working. If you recover (undelete) this HSM later, you must create a new private endpoint.
### Create a Private Endpoint (Manually Request Approval) ```azurecli
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation-dual.md
az keyvault secret set --name storageKey --vault-name vaultrotation-kv --value <
```azurepowershell $tomorrowDate = (Get-Date).AddDays(+1).ToString('yyyy-MM-ddTHH:mm:ssZ')
-$secretVaule = ConvertTo-SecureString -String '<key1Value>' -AsPlainText -Force
+$secretValue = ConvertTo-SecureString -String '<key1Value>' -AsPlainText -Force
$tags = @{ CredentialId='key1' ProviderAddress='<storageAccountResourceId>' ValidityPeriodDays='60' }
-Set-AzKeyVaultSecret -Name storageKey -VaultName vaultrotation-kv -SecretValue $secretVaule -Tag $tags -Expires $tomorrowDate
+Set-AzKeyVaultSecret -Name storageKey -VaultName vaultrotation-kv -SecretValue $secretValue -Tag $tags -Expires $tomorrowDate
```
az keyvault secret set --name storageKey2 --vault-name vaultrotation-kv --value
```azurepowershell $tomorrowDate = (get-date).AddDays(+1).ToString("yyyy-MM-ddTHH:mm:ssZ")
-$secretVaule = ConvertTo-SecureString -String '<key1Value>' -AsPlainText -Force
+$secretValue = ConvertTo-SecureString -String '<key1Value>' -AsPlainText -Force
$tags = @{ CredentialId='key2'; ProviderAddress='<storageAccountResourceId>'; ValidityPeriodDays='60' }
-Set-AzKeyVaultSecret -Name storageKey2 -VaultName vaultrotation-kv -SecretValue $secretVaule -Tag $tags -Expires $tomorrowDate
+Set-AzKeyVaultSecret -Name storageKey2 -VaultName vaultrotation-kv -SecretValue $secretValue -Tag $tags -Expires $tomorrowDate
```
lighthouse Managed Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/managed-applications.md
This table illustrates some high-level differences that may impact whether you m
|Typical user |Service providers or enterprises managing multiple tenants |Independent Software Vendors (ISVs) | |Scope of cross-tenant access |Subscription(s) or resource group(s) |Resource group (scoped to a single application) | |Purchasable in Azure Marketplace |No (offers can be published to Azure Marketplace, but customers are billed separately) |Yes |
-|IP protection |Yes (IP can remain in the service provider's tenant) |Yes (by design, resource group is locked to customers) |
+|IP protection |Yes (IP can remain in the service provider's tenant) |Yes (If the ISV chooses to restrict customer access with deny assignments, the managed resource group is locked to customers) |
|Deny assignments |No |Yes | ### Azure Lighthouse
Azure Lighthouse is typically used when a service provider will perform manageme
### Azure managed applications
-[Azure managed applications](../../azure-resource-manager/managed-applications/overview.md) allow a service provider or ISV to offer cloud solutions that are easy for customers to deploy and use in their own subscriptions.
+[Azure managed applications](../../azure-resource-manager/managed-applications/overview.md) allow an ISV/publisher to offer cloud solutions that are easy for customers to deploy and use in their own subscriptions.
-In a managed application, the resources used by the application are bundled together and deployed to a resource group that's managed by the publisher. This resource group is present in the customer's subscription, but an identity in the publisher's tenant has access to it. The ISV continues to manage and maintain the managed application, while the customer does not have direct access to work in its resource group, or any access to its resources.
+In a managed application, the resources used by the application are bundled together and deployed to a resource group that can be managed by the ISV/publisher. This 'managed resource group' is present in the customer's subscription, but identities in the publisher's tenant can have access to it. When publishing an offer in Microsoft Partner Center, the publisher can choose whether to enable or disable its own management access. In addition, the publisher can restrict customer access (using deny assignments), or grant the customer full access.
Managed applications support [customized Azure portal experiences](../../azure-resource-manager/managed-applications/concepts-view-definition.md) and [integration with custom providers](../../azure-resource-manager/managed-applications/tutorial-create-managed-app-with-custom-provider.md). These options can be used to deliver a more customized and integrated experience, making it easier for customers to perform some management tasks themselves.
Customers might also be interested in managed applications from multiple service
- Learn about [Azure managed applications](../../azure-resource-manager/managed-applications/overview.md). - Learn how to [onboard a subscription to Azure Lighthouse](../how-to/onboard-customer.md).-- Learn about [ISV scenarios with Azure Lighthouse](isv-scenarios.md).
+- Learn about [ISV scenarios with Azure Lighthouse](isv-scenarios.md).
load-balancer Tutorial Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-portal.md
In this tutorial, you learn how to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An existing public standard SKU Azure Load Balancer. For more information on creating a load balancer, see **[Create a public load balancer using the Azure portal](quickstart-load-balancer-standard-public-portal.md)**.
- - For the purposes of this tutorial, the load balancer in the examples is named **myLoadBalancer**.
+ - For the purposes of this tutorial, the load balancer in the examples is named **load-balancer**.
+- A virtual machine or network virtual appliance for testing.
## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com).
-## Create virtual network
-
-A virtual network is needed for the resources that are in the backend pool of the gateway load balancer.
-
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
-
-2. In **Virtual networks**, select **+ Create**.
-
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. </br> In **Name** enter **TutorGwLB-rg**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **East US** |
-
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-
-5. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
-
-6. Under **Subnet name**, select the word **default**.
-
-7. In **Edit subnet**, enter this information:
-
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **myBackendSubnet** |
- | Subnet address range | Enter **10.1.0.0/24** |
-
-8. Select **Save**.
-
-9. Select the **Security** tab.
-
-10. Under **BastionHost**, select **Enable**. Enter this information:
-
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/26** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
--
-11. Select the **Review + create** tab or select the **Review + create** button.
-
-12. Select **Create**.
-
-> [!IMPORTANT]
-
-> [!INCLUDE [Pricing](../../includes/bastion-pricing.md)]
-
->
## Create NSG
-Use the following example to create a network security group. You'll configure the NSG rules needed for network traffic in the virtual network created previously.
+Use the following example to create a network security group. You configure the NSG rules needed for network traffic in the virtual network created previously.
1. In the search box at the top of the portal, enter **Network Security**. Select **Network security groups** in the search results.
-2. Select **+ Create**.
+1. Select **+ Create**.
-3. In the **Basics** tab of **Create network security group**, enter, or select the following information:
+1. In the **Basics** tab of **Create network security group**, enter or select the following information:
| Setting | Value | | - | -- | | **Project details** | | | Subscription | Select your subscription. |
- | Resource group | Select **TutorGwLB-rg** |
+ | Resource group | Select **load-balancer-rg** |
| **Instance details** | |
- | Name | Enter **myNSG**. |
+ | Name | Enter **lb-nsg**. |
| Region | Select **East US**. |
-4. Select the **Review + create** tab or select the **Review + create** button.
+1. Select the **Review + create** tab or select the **Review + create** button.
-5. Select **Create**.
+1. Select **Create**.
-6. In the search box at the top of the portal, enter **Network Security**. Select **Network security groups** in the search results.
+1. In the search box at the top of the portal, enter **Network Security**. Select **Network security groups** in the search results.
-7. Select **myNSG**.
+1. Select **lb-nsg**.
-8. Select **Inbound security rules** in **Settings** in **myNSG**.
+1. Select **Inbound security rules** in **Settings** in **lb-nsg**.
-9. Select **+ Add**.
+1. Select **+ Add**.
-10. In **Add inbound security rule**, enter or select the following information.
+1. In **Add inbound security rule**, enter or select the following information.
| Setting | Value | | - | -- |
Use the following example to create a network security group. You'll configure t
| Protocol | Select **Any**. | | Action | Leave the default of **Allow**. | | Priority | Enter **100**. |
- | Name | Enter **myNSGRule-AllowAll-All** |
+ | Name | Enter **lb-nsg-Rule-AllowAll-All** |
-11. Select **Add**.
+1. Select **Add**.
-12. Select **Outbound security rules** in **Settings**.
+1. Select **Outbound security rules** in **Settings**.
-13. Select **+ Add**.
+1. Select **+ Add**.
-14. In **Add outbound security rule**, enter or select the following information.
+1. In **Add outbound security rule**, enter or select the following information.
| Setting | Value | | - | -- |
Use the following example to create a network security group. You'll configure t
| Protocol | Select **TCP**. | | Action | Leave the default of **Allow**. | | Priority | Enter **100**. |
- | Name | Enter **myNSGRule-AllowAll-TCP-Out** |
+ | Name | Enter **lb-nsg-Rule-AllowAll-TCP-Out** |
-15. Select **Add**.
+1. Select **Add**.
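The NSG steps above can also be sketched with the Azure CLI. This is an illustrative equivalent, not part of the original tutorial, reusing the tutorial's naming scheme; the calls are skipped when the Azure CLI isn't installed:

```shell
# Names follow this tutorial's convention (illustrative).
RG="load-balancer-rg"
NSG="lb-nsg"

if command -v az >/dev/null 2>&1; then
  # Create the network security group.
  az network nsg create --resource-group "$RG" --name "$NSG" --location eastus

  # Inbound rule: allow all traffic (matches the portal steps above).
  az network nsg rule create --resource-group "$RG" --nsg-name "$NSG" \
    --name lb-nsg-Rule-AllowAll-All --priority 100 \
    --direction Inbound --access Allow --protocol '*'

  # Outbound rule: allow all TCP traffic.
  az network nsg rule create --resource-group "$RG" --nsg-name "$NSG" \
    --name lb-nsg-Rule-AllowAll-TCP-Out --priority 100 \
    --direction Outbound --access Allow --protocol Tcp
fi
```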
Select this NSG when creating the NVAs for your deployment. ## Create Gateway Load Balancer
-In this section, you'll create the configuration and deploy the gateway load balancer.
+In this section, you create the configuration and deploy the gateway load balancer.
1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-2. In the **Load balancer** page, select **Create**.
+1. In the **Load balancer** page, select **Create**.
-3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+1. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
| Setting | Value | | | | | **Project details** | | | Subscription | Select your subscription. |
- | Resource group | Select **TutorGwLB-rg**. |
+ | Resource group | Select **load-balancer-rg**. |
| **Instance details** | |
- | Name | Enter **myLoadBalancer-gw** |
+ | Name | Enter **gateway-load-balancer** |
| Region | Select **(US) East US**. | | Type | Select **Internal**. | | SKU | Select **Gateway**. | :::image type="content" source="./media/tutorial-gateway-portal/create-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true":::
-4. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-
-6. Enter **MyFrontEnd** in **Name**.
-
-7. Select **myBackendSubnet** in **Subnet**.
+1. Select **Next: Frontend IP configuration** at the bottom of the page.
-8. Select **Dynamic** for **Assignment**.
+1. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+1. In **Add frontend IP configuration**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **lb-frontend-IP**. |
+ | Virtual network | Select **lb-vnet**. |
+ | Subnet | Select **backend-subnet**. |
+ | Assignment | Select **Dynamic** |
-9. Select **Add**.
+1. Select **Add**.
-10. Select **Next: Backend pools** at the bottom of the page.
+1. Select **Next: Backend pools** at the bottom of the page.
-11. In the **Backend pools** tab, select **+ Add a backend pool**.
+1. In the **Backend pools** tab, select **+ Add a backend pool**.
-12. In **Add backend pool**, enter or select the following information.
+1. In **Add backend pool**, enter or select the following information.
| Setting | Value | | - | -- |
- | Name | Enter **myBackendPool**. |
+ | Name | Enter **lb-backend-pool**. |
| Backend Pool Configuration | Select **NIC**. | | IP Version | Select **IPv4**. | | **Gateway load balancer configuration** | |
In this section, you'll create the configuration and deploy the gateway load bal
| External port | Leave the default of **10801**. | | External identifier | Leave the default of **801**. |
-13. Select **Add**.
+1. Select **Add**.
-14. Select the **Next: Inbound rules** button at the bottom of the page.
+1. Select the **Next: Inbound rules** button at the bottom of the page.
-15. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+1. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
-16. In **Add load balancing rule**, enter or select the following information:
+1. In **Add load balancing rule**, enter or select the following information:
| Setting | Value | | - | -- |
- | Name | Enter **myLBRule** |
+ | Name | Enter **lb-rule** |
| IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **MyFrontend**. |
- | Backend pool | Select **myBackendPool**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Frontend IP address | Select **lb-frontend-IP**. |
+ | Backend pool | Select **lb-backend-pool**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **lb-health-probe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **Save**. |
| Session persistence | Select **None**. |
+ | Enable TCP reset | Leave the default of unchecked. |
+ | Enable floating IP | Leave the default of unchecked. |
:::image type="content" source="./media/tutorial-gateway-portal/add-load-balancing-rule.png" alt-text="Screenshot of create load-balancing rule." border="true":::
-17. Select **Add**.
+1. Select **Save**.
-18. Select the blue **Review + create** button at the bottom of the page.
+1. Select the blue **Review + create** button at the bottom of the page.
-19. Select **Create**.
+1. Select **Create**.
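The portal steps in this section can be sketched with the Azure CLI as well. This is an illustrative equivalent, not part of the original tutorial: it reuses the tutorial's resource names, leaves the gateway tunnel-interface settings at their CLI defaults (the portal table configures them explicitly), and the call is skipped when the Azure CLI isn't installed.

```shell
# Names from this tutorial (illustrative).
RG="load-balancer-rg"
LB_NAME="gateway-load-balancer"

if command -v az >/dev/null 2>&1; then
  # Create an internal load balancer with the Gateway SKU.
  az network lb create \
    --resource-group "$RG" \
    --name "$LB_NAME" \
    --sku Gateway \
    --vnet-name lb-vnet \
    --subnet backend-subnet \
    --frontend-ip-name lb-frontend-IP \
    --backend-pool-name lb-backend-pool
fi
```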
## Add network virtual appliances to the gateway load balancer backend pool
Deploy NVAs through the Azure Marketplace. Once deployed, add the NVA virtual ma
In this example, you'll chain the frontend of a standard load balancer to the gateway load balancer.
-You'll add the frontend to the frontend IP of an existing load balancer in your subscription.
+You add the frontend to the frontend IP of an existing load balancer in your subscription.
1. In the search box in the Azure portal, enter **Load balancer**. In the search results, select **Load balancers**.
-2. In **Load balancers**, select **myLoadBalancer** or your existing load balancer name.
+2. In **Load balancers**, select **load-balancer** or your existing load balancer name.
3. In the load balancer page, select **Frontend IP configuration** in **Settings**.
-4. Select the frontend IP of the load balancer. In this example, the name of the frontend is **myFrontendIP**.
+4. Select the frontend IP of the load balancer. In this example, the name of the frontend is **lb-frontend-IP**.
:::image type="content" source="./media/tutorial-gateway-portal/frontend-ip.png" alt-text="Screenshot of frontend IP configuration." border="true":::
-5. Select **myFrontendIP (10.1.0.4)** in the pull-down box next to **Gateway load balancer**.
+5. Select **lb-frontend-IP (10.1.0.4)** in the pull-down box next to **Gateway load balancer**.
6. Select **Save**.
You'll add the frontend to the frontend IP of an existing load balancer in your
Alternatively, you can chain a VM's NIC IP configuration to the gateway load balancer.
-You'll add the gateway load balancer's frontend to an existing VM's NIC IP configuration.
+You add the gateway load balancer's frontend to an existing VM's NIC IP configuration.
> [!IMPORTANT] > A virtual machine must have a public IP address assigned before attempting to chain the NIC configuration to the frontend of the gateway load balancer.
You'll add the gateway load balancer's frontend to an existing VM's NIC IP confi
5. In the network interface page, select **IP configurations** in **Settings**.
-6. Select **myFrontend** in **Gateway Load balancer**.
+6. Select **lb-frontend-IP** in **Gateway Load balancer**.
:::image type="content" source="./media/tutorial-gateway-portal/vm-nic-gw-lb.png" alt-text="Screenshot of nic IP configuration." border="true":::
You'll add the gateway load balancer's frontend to an existing VM's NIC IP confi
## Clean up resources
-When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the resource group **TutorGwLB-rg** that contains the resources and then select **Delete**.
+When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the resource group **load-balancer-rg** that contains the resources and then select **Delete**.
## Next steps
machine-learning How To Change Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-change-storage-access-key.md
Previously updated : 10/20/2022 Last updated : 11/01/2023 monikerRange: 'azureml-api-2 || azureml-api-1'
To update Azure Machine Learning to use the new key, use the following steps:
:::moniker range="azureml-api-2" ```python
- from azure.ai.ml.entities import AzureBlobDatastore
+ from azure.ai.ml.entities import AzureBlobDatastore, AccountKeyConfiguration
from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+
+ subscription_id = '<SUBSCRIPTION_ID>'
+ resource_group = '<RESOURCE_GROUP>'
+ workspace_name = '<AZUREML_WORKSPACE_NAME>'
+
+ ml_client = MLClient(credential=DefaultAzureCredential(),
+ subscription_id=subscription_id,
+ resource_group_name=resource_group,
+ workspace_name=workspace_name)
blob_datastore1 = AzureBlobDatastore( name="your datastore name",
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
We are excited to inform you that we have introduced new 20 vCores options under
"Host Memory Percent" metric will provide more accurate calculations of memory usage. It will now reflect the actual memory consumed by the server, excluding re-usable memory from the calculation. This improvement ensures that you have a more precise understanding of your server's memory utilization. After the completion of the [scheduled maintenance window](./concepts-maintenance.md), existing servers will benefit from this enhancement.
- **Known Issues**
-When attempting to modify the User assigned managed identity and Key identifier in a single request while changing the CMK settings, the operation gets struck. We are working on the upcoming deployment for the permanent solution to address this issue, in the meantime, please ensure that you perform the two operations of updating the User Assigned Managed Identity and Key identifier in separate requests. The sequence of these operations is not critical, as long as the user-assigned identities have the necessary access to both Key Vault
+ - When attempting to modify the user-assigned managed identity and key identifier in a single request while changing the CMK settings, the operation gets stuck. We're working on a permanent fix in an upcoming deployment. In the meantime, make sure that you perform the two operations of updating the user-assigned managed identity and the key identifier in separate requests. The sequence of these operations isn't critical, as long as the user-assigned identities have the necessary access to both Key Vault
+ - We have identified a known issue where customers are unable to initialize a new Custom Maintenance Window (CMW) configuration while creating or updating their MySQL server using ARM/CLI/RestAPI. Currently, the CMW configuration can only be initially set up through the Azure portal. Subsequent modifications to the CMW can then be made during server updates. We are actively working to resolve this limitation. As a workaround, customers can manually set up a CMW for their MySQL server via the Azure portal before making any further changes through ARM/CLI/RestAPI.
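The first known issue above boils down to request ordering: send the user-assigned managed identity update and the key identifier update as two separate requests rather than one combined request. A sketch of that call pattern, where `update_server` is a purely hypothetical stand-in for whatever client you use (ARM deployment, CLI, or REST) and the payload keys are illustrative, not the real ARM schema:

```python
def apply_cmk_update(update_server, identity_resource_id: str, key_identifier: str) -> None:
    """Apply a CMK configuration change as two separate requests.

    `update_server` is a hypothetical single-server-update call; the only
    point illustrated is that the two changes are not combined into one
    request. Order doesn't matter as long as the identity has access to
    the key vault.
    """
    # Request 1: update the user-assigned managed identity.
    update_server({"identity": {"userAssignedIdentity": identity_resource_id}})
    # Request 2: update the key identifier, in its own request.
    update_server({"dataEncryption": {"keyIdentifier": key_identifier}})
```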
+ ## September 2023
nat-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-overview.md
NAT Gateway provides dynamic SNAT port functionality to automatically scale outb
*Figure: Azure NAT Gateway* Azure NAT Gateway provides outbound connectivity for many Azure resources, including:
-* Azure virtual machine (VM) instances in a private subnet
+* Azure virtual machines or virtual machine scale sets in a private subnet
* [Azure Kubernetes Services (AKS) clusters](/azure/aks/nat-gateway) * [Azure Function Apps](/azure/azure-functions/functions-how-to-use-nat-gateway) * [Azure Firewall subnet](/azure/firewall/integrate-with-nat-gateway)
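The scaling behind the dynamic SNAT functionality is easy to quantify: per the NAT Gateway documentation, each public IP address attached to a NAT gateway contributes 64,512 SNAT ports, and a NAT gateway can use up to 16 public IP addresses. A quick sketch of the arithmetic:

```python
# Figures from the NAT Gateway documentation: 64,512 SNAT ports per
# public IP address, up to 16 public IP addresses per NAT gateway.
SNAT_PORTS_PER_IP = 64_512
MAX_PUBLIC_IPS = 16

def total_snat_ports(public_ip_count: int) -> int:
    """Total SNAT port inventory for a NAT gateway with the given IP count."""
    if not 1 <= public_ip_count <= MAX_PUBLIC_IPS:
        raise ValueError("a NAT gateway supports 1 to 16 public IP addresses")
    return public_ip_count * SNAT_PORTS_PER_IP
```

A fully scaled-out NAT gateway with 16 public IPs therefore offers just over one million SNAT ports.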
A NAT gateway doesn't affect the network bandwidth of your compute resources. Le
* NAT gateway is the recommended method for outbound connectivity. * To migrate outbound access to a NAT gateway from default outbound access or load balancer outbound rules, see [Migrate outbound access to Azure NAT Gateway](./tutorial-migrate-outbound-nat.md).
+>[!NOTE]
+>On September 30, 2025, [default outbound access](/azure/virtual-network/ip-services/default-outbound-access#when-is-default-outbound-access-provided) for new deployments will be retired. We recommend using an explicit form of outbound connectivity instead, such as NAT gateway.
+ * Outbound connectivity with NAT gateway is defined at a per subnet level. NAT gateway replaces the default Internet destination of a subnet. * No traffic routing configurations are required to use NAT gateway.
Virtual appliance UDR / VPN Gateway / ExpressRoute >> NAT gateway >> Instance-le
* Basic SKU resources, such as basic load balancer or basic public IPs, aren't compatible with NAT gateway. NAT gateway can't be used with subnets where basic SKU resources exist. Basic load balancer and basic public IP can be upgraded to standard to work with a NAT gateway:
- * Upgrade a load balancer from basic to standard, see [Upgrade a public basic Azure Load Balancer](../load-balancer/upgrade-basic-standard.md).
+ * Upgrade a load balancer from basic to standard, see [Upgrade a public basic Azure Load Balancer](/azure/load-balancer/upgrade-basic-standard-with-powershell).
* Upgrade a public IP from basic to standard, see [Upgrade a public IP address](../virtual-network/ip-services/public-ip-upgrade-portal.md).
network-watcher Connection Monitor Connected Machine Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-connected-machine-agent.md
Title: Install the Azure Connected Machine agent for connection monitor
-description: This article describes how to install Azure Connected Machine agent
+description: Learn how to install the Azure Connected Machine agent using an installation script to use the Azure Network Watcher connection monitor.
-Previously updated : 10/27/2022
-#Customer intent: I need to monitor a connection by using Azure Monitor Agent.
+Last updated : 10/31/2023
+#CustomerIntent: As an Azure administrator, I need to install the Azure Connected Machine agent so I can monitor a connection using the Connection Monitor.
# Install the Azure Connected Machine agent to enable Azure Arc
This article describes how to install the Azure Connected Machine agent.
## Prerequisites * An Azure account with an active subscription. If you don't already have an account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Administrator permissions to install and configure the Connected Machine agent. On Linux, you install and configure it by using the root account, and on Windows, you use an account that's a member of the Local Administrators group.
+* Administrator permissions to install and configure the Connected Machine agent. On Linux, you install and configure it using the root account, and on Windows, you use an account that's a member of the Local Administrators group.
* Register the Microsoft.HybridCompute, Microsoft.GuestConfiguration, and Microsoft.HybridConnectivity resource providers on your subscription. You can [register these resource providers](../azure-arc/servers/prerequisites.md#azure-resource-providers) either ahead of time or as you're completing the steps in this article. * Review the [agent prerequisites](../azure-arc/servers/prerequisites.md), and ensure that: * Your target machine is running a supported [operating system](../azure-arc/servers/prerequisites.md#supported-operating-systems).
Use the Azure portal to create a script that automates the downloading and insta
1. In the **Download or copy the following script** section, review the script. If you want to make any changes, use the **Previous** button to go back and update your selections. Otherwise, select **Download** to save the script file.
-## Install the agent by using the script
+## Install the agent using the script
-After you've generated the script, the next step is to run it on the server that you want to onboard to Azure Arc. The script will download the Connected Machine agent from the Microsoft Download Center, install the agent on the server, create the Azure Arc-enabled server resource, and associate it with the agent.
+After you've generated the script, the next step is to run it on the server that you want to onboard to Azure Arc. The script downloads the Connected Machine agent from the Microsoft Download Center, installs the agent on the server, creates the Azure Arc-enabled server resource, and associates it with the agent.
Follow the steps corresponding to the operating system of your server.
Refer to the linked document to discover the required steps to install the [Azur
You can enable Azure Arc-enabled servers for one or more Windows machines in your environment manually, or you can use the Windows Admin Center to deploy the Azure Connected Machine agent and register your on-premises servers without having to perform any steps outside of this tool. For more information about installing the Azure Arc agent via Windows Admin Center, see [Connect hybrid machines to Azure from Windows Admin Center](../azure-arc/servers/onboard-windows-admin-center.md).
-## Next steps
+## Next step
-- [Install Azure Monitor Agent](connection-monitor-install-azure-monitor-agent.md)
+> [!div class="nextstepaction"]
+> [Install Azure Monitor Agent](connection-monitor-install-azure-monitor-agent.md)
network-watcher Network Watcher Packet Capture Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-cli.md
> - [Azure portal](network-watcher-packet-capture-manage-portal.md) > - [PowerShell](network-watcher-packet-capture-manage-powershell.md) > - [Azure CLI](network-watcher-packet-capture-manage-cli.md)
-> - [Azure REST API](network-watcher-packet-capture-manage-rest.md)
Network Watcher packet capture allows you to create capture sessions to track traffic to and from a virtual machine. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, and debugging client-server communications. The ability to trigger packet captures remotely eases the burden of running a packet capture manually on the desired machine, which saves valuable time.
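Whichever tool you use (portal, PowerShell, or CLI), the management operations ultimately target the same Network Watcher REST endpoint. A small Python helper showing the shape of the packet-capture status URL, with the `querystatus` path and api-version taken from the REST examples in this digest (the resource names passed in are illustrative):

```python
def packet_capture_status_url(subscription_id: str, resource_group: str,
                              network_watcher: str, capture_name: str,
                              api_version: str = "2016-12-01") -> str:
    """Build the management-plane URL for querying a packet capture's status."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.Network"
        f"/networkWatchers/{network_watcher}"
        f"/packetCaptures/{capture_name}"
        f"/querystatus?api-version={api_version}"
    )
```

A POST to this URL (with a bearer token) returns the capture-status JSON shown later in this digest.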
network-watcher Network Watcher Packet Capture Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-portal.md
> - [Azure portal](network-watcher-packet-capture-manage-portal.md) > - [PowerShell](network-watcher-packet-capture-manage-powershell.md) > - [Azure CLI](network-watcher-packet-capture-manage-cli.md)
-> - [Azure REST API](network-watcher-packet-capture-manage-rest.md)
Network Watcher packet capture allows you to create capture sessions to track traffic to and from a virtual machine. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, and debugging client-server communication. Being able to trigger packet captures remotely eases the burden of running a packet capture manually on a desired virtual machine, which saves valuable time.
network-watcher Network Watcher Packet Capture Manage Powershell Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell-vmss.md
> [!div class="op_single_selector"] > - [Azure portal](network-watcher-packet-capture-manage-portal-vmss.md)
-> - [Azure REST API](network-watcher-packet-capture-manage-rest-vmss.md)
> - [PowerShell](network-watcher-packet-capture-manage-powershell-vmss.md)
network-watcher Network Watcher Packet Capture Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell.md
> - [Azure portal](network-watcher-packet-capture-manage-portal.md) > - [PowerShell](network-watcher-packet-capture-manage-powershell.md) > - [Azure CLI](network-watcher-packet-capture-manage-cli.md)
-> - [Azure REST API](network-watcher-packet-capture-manage-rest.md)
Network Watcher packet capture allows you to create capture sessions to track traffic to and from a virtual machine. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, and debugging client-server communications. The ability to trigger packet captures remotely eases the burden of running a packet capture manually on the desired machine, which saves valuable time.
network-watcher Network Watcher Packet Capture Manage Rest Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest-vmss.md
- Title: Manage packet captures in virtual machine scale sets - REST API
-description: Learn how to manage packet captures in virtual machine scale sets with the packet capture feature of Network Watcher using Azure REST API.
- Previously updated : 10/04/2022
-# Manage packet captures in virtual machine scale sets with Azure Network Watcher using Azure REST API
-
-> [!div class="op_single_selector"]
-> - [Azure portal](network-watcher-packet-capture-manage-portal-vmss.md)
-> - [PowerShell](network-watcher-packet-capture-manage-powershell-vmss.md)
-> - [Azure REST API](network-watcher-packet-capture-manage-rest-vmss.md)
-
-Network Watcher packet capture allows you to create capture sessions to track traffic to and from virtual machine scale set instances. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, and debugging client-server communication. Being able to trigger packet captures remotely eases the burden of running a packet capture manually on a desired virtual machine, which saves valuable time.
-
-This article takes you through the different management tasks that are currently available for packet capture.
-
-- [**Get a packet capture**](#get-a-packet-capture)
-- [**List all packet captures**](#list-all-packet-captures)
-- [**Query the status of a packet capture**](#query-packet-capture-status)
-- [**Start a packet capture**](#start-packet-capture)
-- [**Stop a packet capture**](#stop-packet-capture)
-- [**Delete a packet capture**](#delete-packet-capture)
-> [!Note]
-> Currently, Azure Kubernetes Service (AKS) is not supported for Packet Capture.
--
-## Before you begin
-
-ARMclient is used to call the REST API using PowerShell. ARMClient is found on chocolatey at [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient)
-
-This scenario assumes you've already followed the steps in [Create a Network Watcher](network-watcher-create.md) to create a Network Watcher.
-
-> Packet capture requires a virtual machine extension `AzureNetworkWatcherExtension`. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md).
-
-## Log in with ARMClient
-
-```powershell
-armclient login
-```
-
-## Retrieve a virtual machine
-
-Run the following script to return a virtual machine. This information is needed for starting a packet capture.
-
-The following code needs variables:
-
-- **subscriptionId** - The subscription ID can also be retrieved with the **Get-AzSubscription** cmdlet.
-- **resourceGroupName** - The name of a resource group that contains virtual machines.
-```powershell
-$subscriptionId = "<subscription id>"
-$resourceGroupName = "<resource group name>"
-
-# Get a list of all VM scale sets under a resource group
-armclient get https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets?api-version=2022-03-01
-
-# Display information about a virtual machine scale set
-armclient GET https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.Compute/virtualMachineScaleSets/{vmScaleSetName}?api-version=2022-03-01
-```
--
-## Get a packet capture
-
-The following example gets the status of a single packet capture
-
-```powershell
-$subscriptionId = "<subscription id>"
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$packetCaptureName = "TestPacketCapture5"
-armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}/querystatus?api-version=2016-12-01"
-```
-
-The following responses are examples of a typical response returned when querying the status of a packet capture.
-
-```json
-{
- "name": "TestPacketCapture5",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture6",
- "captureStartTime": "2016-12-06T17:20:01.5671279Z",
- "packetCaptureStatus": "Running",
- "packetCaptureError": []
-}
-```
-
-```json
-{
- "name": "TestPacketCapture5",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture6",
- "captureStartTime": "2016-12-06T17:20:01.5671279Z",
- "packetCaptureStatus": "Stopped",
- "stopReason": "TimeExceeded",
- "packetCaptureError": []
-}
-```
-
-## List all packet captures
-
-The following example gets all packet capture sessions in a region.
-
-```powershell
-$subscriptionId = "<subscription id>"
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-armclient get "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures?api-version=2016-12-01"
-```
-
-The following response is an example of a typical response returned when getting all packet captures
-
-```json
-{
- "value": [
- {
- "name": "TestPacketCapture6",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture6",
- "etag": "W/\"091762e1-c23f-448b-89d5-37cf56e4c045\"",
- "properties": {
- "provisioningState": "Succeeded",
- "target": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Compute/virtualMachines/ContosoVM",
- "bytesToCapturePerPacket": 0,
- "totalBytesPerSession": 1073741824,
- "timeLimitInSeconds": 60,
- "storageLocation": {
- "storageId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Storage/storageAccounts/contosoexamplergdiag374",
- "storagePath": "https://contosoexamplergdiag374.blob.core.windows.net/network-watcher-logs/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/contosoexamplerg/providers/microsoft.compute/virtualmachines/contosovm/2016/12/06/packetcap
-ture_17_19_53_056.cap",
- "filePath": "c:\\temp\\packetcapture.cap"
- },
- "filters": [
- {
- "protocol": "Any",
- "localIPAddress": "",
- "localPort": "",
- "remoteIPAddress": "",
- "remotePort": ""
- }
- ]
- }
- },
- {
- "name": "TestPacketCapture7",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture7",
- "etag": "W/\"091762e1-c23f-448b-89d5-37cf56e4c045\"",
- "properties": {
- "provisioningState": "Failed",
- "target": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Compute/virtualMachines/ContosoVM",
- "bytesToCapturePerPacket": 0,
- "totalBytesPerSession": 1073741824,
- "timeLimitInSeconds": 60,
- "storageLocation": {
- "storageId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Storage/storageAccounts/contosoexamplergdiag374",
- "storagePath": "https://contosoexamplergdiag374.blob.core.windows.net/network-watcher-logs/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/contosoexamplerg/providers/microsoft.compute/virtualmachines/contosovm/2016/12/06/packetcap
-ture_17_23_15_364.cap",
- "filePath": "c:\\temp\\packetcapture.cap"
- },
- "filters": [
- {
- "protocol": "Any",
- "localIPAddress": "",
- "localPort": "",
- "remoteIPAddress": "",
- "remotePort": ""
- }
- ]
- }
- }
- ]
-}
-```
-
-## Query packet capture status
-
-The following example queries the status of a single packet capture session.
-
-```powershell
-$subscriptionId = "<subscription id>"
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$packetCaptureName = "TestPacketCapture5"
-armclient get "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}/querystatus?api-version=2016-12-01"
-```
-
-The following response is an example of a typical response returned when querying the status of a packet capture.
-
-```json
-{
- "name": "vm1PacketCapture",
- "id": "/subscriptions/{guid}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/packetCaptures/{packetCaptureName}",
- "captureStartTime" : "9/7/2016 12:35:24PM",
- "packetCaptureStatus" : "Stopped",
- "stopReason" : "TimeExceeded",
- "packetCaptureError" : [ ]
-}
-```
-
-## Start packet capture
-
-The following example creates a packet capture on a virtual machine. The example is parameterized to allow for flexibility in creating an example.
-
-```powershell
-$subscriptionId = '<subscription id>'
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$packetCaptureName = "TestPacketCapture5"
-$storageaccountname = "contosoexamplergdiag374"
-$vmssName = "ContosoVMSS"
-$targetType = "AzureVMSS"
-$bytestoCaptureperPacket = "0"
-$bytesPerSession = "1073741824"
-$captureTimeinSeconds = "60"
-$localIP = ""
-$localPort = "" # Examples are: 80, or 80-120
-$remoteIP = ""
-$remotePort = "" # Examples are: 80, or 80-120
-$protocol = "" # Valid values are TCP, UDP and Any.
-$targetUri = "" # Example: /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.compute/virtualMachineScaleSet/$vmssName
-$storageId = "" #Example "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Storage/storageAccounts/contosoexamplergdiag374"
-$storagePath = "" # Example: "https://mytestaccountname.blob.core.windows.net/capture/vm1Capture.cap"
-$localFilePath = "c:\\temp\\packetcapture.cap" # Example: "d:\capture\vm1Capture.cap"
-
-$requestBody = @"
-{
- 'properties': {
- 'target': '/${targetUri}',
- 'targetType': '/${targetType}',
- 'bytesToCapturePerPacket': '${bytestoCaptureperPacket}',
- 'totalBytesPerSession': '${bytesPerSession}',
- 'scope': {
- 'include': [ "1", "2" ],
- 'exclude': [ "3", "4" ],
- },
- 'timeLimitinSeconds': '${captureTimeinSeconds}',
- 'storageLocation': {
- 'storageId': '${storageId}',
- 'storagePath': '${storagePath}',
- 'filePath': '${localFilePath}'
- },
- 'filters': [
- {
- 'protocol': '${protocol}',
- 'localIPAddress': '${localIP}',
- 'localPort': '${localPort}',
- 'remoteIPAddress': '${remoteIP}',
- 'remotePort': '${remotePort}'
- }
- ]
- }
-}
-"@
-
-armclient PUT "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}?api-version=2016-07-01" $requestbody
-
-```
-
-## Stop packet capture
-
-The following example stops a packet capture on a virtual machine. The example is parameterized to allow for flexibility in creating an example.
-
-```powershell
-$subscriptionId = '<subscription id>'
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$packetCaptureName = "TestPacketCapture5"
-armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}/stop?api-version=2016-12-01"
-```
-
-## Delete packet capture
-
-The following example deletes a packet capture on a virtual machine. The example is parameterized to allow for flexibility in creating an example.
-
-```powershell
-$subscriptionId = '<subscription id>'
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$packetCaptureName = "TestPacketCapture5"
-
-armclient delete "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}?api-version=2016-12-01"
-```
-
-> [!NOTE]
-> Deleting a packet capture does not delete the capture file in the storage account.
-
-## Next steps
-
-For instructions on downloading files from Azure storage accounts, see [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md). Another tool that can be used is [Storage Explorer](https://storageexplorer.com/).
network-watcher Network Watcher Packet Capture Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest.md
- Title: Manage packet captures in VMs with Azure Network Watcher - REST API
-description: Learn how to manage packet captures in virtual machines with the packet capture feature of Network Watcher using Azure REST API.
- Previously updated : 05/28/2021
-# Manage packet captures with Azure Network Watcher using Azure REST API
-
-> [!div class="op_single_selector"]
-> - [Azure portal](network-watcher-packet-capture-manage-portal.md)
-> - [PowerShell](network-watcher-packet-capture-manage-powershell.md)
-> - [Azure CLI](network-watcher-packet-capture-manage-cli.md)
-> - [Azure REST API](network-watcher-packet-capture-manage-rest.md)
-
-Network Watcher packet capture allows you to create capture sessions to track traffic to and from a virtual machine. Filters are provided for the capture session to ensure you capture only the traffic you want. Packet capture helps to diagnose network anomalies both reactively and proactively. Other uses include gathering network statistics, gaining information on network intrusions, and debugging client-server communications. The ability to trigger packet captures remotely eases the burden of running a packet capture manually on the desired machine, which saves valuable time.
-
-This article takes you through the different management tasks that are currently available for packet capture.
-
-- [**Get a packet capture**](#get-a-packet-capture)
-- [**List all packet captures**](#list-all-packet-captures)
-- [**Query the status of a packet capture**](#query-packet-capture-status)
-- [**Start a packet capture**](#start-packet-capture)
-- [**Stop a packet capture**](#stop-packet-capture)
-- [**Delete a packet capture**](#delete-packet-capture)
-
-## Before you begin
-
-In this scenario, you call the Network Watcher REST API to manage packet captures. ARMclient is used to call the REST API using PowerShell. ARMClient is found on Chocolatey at [ARMClient on Chocolatey](https://chocolatey.org/packages/ARMClient)
-
-This scenario assumes you have already followed the steps in [Create a Network Watcher](network-watcher-create.md) to create a Network Watcher.
-
-> Packet capture requires a virtual machine extension `AzureNetworkWatcherExtension`. For installing the extension on a Windows VM visit [Azure Network Watcher Agent virtual machine extension for Windows](../virtual-machines/extensions/network-watcher-windows.md) and for Linux VM visit [Azure Network Watcher Agent virtual machine extension for Linux](../virtual-machines/extensions/network-watcher-linux.md).
-
-## Log in with ARMClient
-
-```powershell
-armclient login
-```
-
-## Retrieve a virtual machine
-
-Run the following script to return a virtual machine. This information is needed for starting a packet capture.
-
-The following code needs variables:
-
-- **subscriptionId** - The subscription ID can also be retrieved with the **Get-AzSubscription** cmdlet.
-- **resourceGroupName** - The name of a resource group that contains virtual machines.
-```powershell
-$subscriptionId = "<subscription id>"
-$resourceGroupName = "<resource group name>"
-
-armclient get https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Compute/virtualMachines?api-version=2015-05-01-preview
-```
-
-From the following output, the id of the virtual machine is used in the next example.
-
-```json
-...
-,
- "type": "Microsoft.Compute/virtualMachines",
- "location": "westcentralus",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Compute
-/virtualMachines/ContosoVM",
- "name": "ContosoVM"
- }
- ]
-}
-```
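If you're scripting against this output, the virtual machine's resource id can be extracted programmatically. The following Python sketch is a hypothetical illustration that parses a trimmed copy of the response above:

```python
import json

# Trimmed copy of the armclient response shown above (illustrative only).
response = json.loads("""
{
  "value": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "location": "westcentralus",
      "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Compute/virtualMachines/ContosoVM",
      "name": "ContosoVM"
    }
  ]
}
""")

# Map VM name -> resource id; the id is the packet capture 'target'.
vm_ids = {vm["name"]: vm["id"] for vm in response["value"]}
print(vm_ids["ContosoVM"])
```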
--
-## Get a packet capture
-
-The following example gets the status of a single packet capture.
-
-```powershell
-$subscriptionId = "<subscription id>"
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$packetCaptureName = "TestPacketCapture5"
-armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}/querystatus?api-version=2016-12-01"
-```
-
-The following responses are examples of typical responses returned when querying the status of a packet capture.
-
-```json
-{
- "name": "TestPacketCapture5",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture6",
- "captureStartTime": "2016-12-06T17:20:01.5671279Z",
- "packetCaptureStatus": "Running",
- "packetCaptureError": []
-}
-```
-
-```json
-{
- "name": "TestPacketCapture5",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture6",
- "captureStartTime": "2016-12-06T17:20:01.5671279Z",
- "packetCaptureStatus": "Stopped",
- "stopReason": "TimeExceeded",
- "packetCaptureError": []
-}
-```
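A script polling this operation can branch on the fields shown above. The following Python sketch is a hypothetical illustration (it parses a trimmed copy of the second example response rather than calling the API):

```python
import json

# Trimmed copy of the example status response above (illustrative only).
status = json.loads("""
{
  "name": "TestPacketCapture5",
  "captureStartTime": "2016-12-06T17:20:01.5671279Z",
  "packetCaptureStatus": "Stopped",
  "stopReason": "TimeExceeded",
  "packetCaptureError": []
}
""")

# A capture that stopped on its own reports why in 'stopReason';
# 'packetCaptureError' is non-empty only when something went wrong.
if status["packetCaptureStatus"] == "Running":
    summary = "capture still running"
elif status["packetCaptureError"]:
    summary = "capture failed: " + ", ".join(status["packetCaptureError"])
else:
    summary = f"capture stopped ({status['stopReason']})"
print(summary)  # capture stopped (TimeExceeded)
```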
-
-## List all packet captures
-
-The following example gets all packet capture sessions in a region.
-
-```powershell
-$subscriptionId = "<subscription id>"
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-armclient get "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures?api-version=2016-12-01"
-```
-
-The following response is an example of a typical response returned when getting all packet captures.
-
-```json
-{
- "value": [
- {
- "name": "TestPacketCapture6",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture6",
- "etag": "W/\"091762e1-c23f-448b-89d5-37cf56e4c045\"",
- "properties": {
- "provisioningState": "Succeeded",
- "target": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Compute/virtualMachines/ContosoVM",
- "bytesToCapturePerPacket": 0,
- "totalBytesPerSession": 1073741824,
- "timeLimitInSeconds": 60,
- "storageLocation": {
- "storageId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Storage/storageAccounts/contosoexamplergdiag374",
- "storagePath": "https://contosoexamplergdiag374.blob.core.windows.net/network-watcher-logs/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/contosoexamplerg/providers/microsoft.compute/virtualmachines/contosovm/2016/12/06/packetcap
-ture_17_19_53_056.cap",
- "filePath": "c:\\temp\\packetcapture.cap"
- },
- "filters": [
- {
- "protocol": "Any",
- "localIPAddress": "",
- "localPort": "",
- "remoteIPAddress": "",
- "remotePort": ""
- }
- ]
- }
- },
- {
- "name": "TestPacketCapture7",
- "id": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/NetworkWatcherRG/providers/Microsoft.Network/networkWatchers/NetworkWatcher_westcentralus/packetCaptures/TestPacketCapture7",
- "etag": "W/\"091762e1-c23f-448b-89d5-37cf56e4c045\"",
- "properties": {
- "provisioningState": "Failed",
- "target": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Compute/virtualMachines/ContosoVM",
- "bytesToCapturePerPacket": 0,
- "totalBytesPerSession": 1073741824,
- "timeLimitInSeconds": 60,
- "storageLocation": {
- "storageId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Storage/storageAccounts/contosoexamplergdiag374",
- "storagePath": "https://contosoexamplergdiag374.blob.core.windows.net/network-watcher-logs/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/contosoexamplerg/providers/microsoft.compute/virtualmachines/contosovm/2016/12/06/packetcap
-ture_17_23_15_364.cap",
- "filePath": "c:\\temp\\packetcapture.cap"
- },
- "filters": [
- {
- "protocol": "Any",
- "localIPAddress": "",
- "localPort": "",
- "remoteIPAddress": "",
- "remotePort": ""
- }
- ]
- }
- }
- ]
-}
-```
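When many capture sessions exist, it can help to filter the list by `provisioningState`. A minimal Python sketch over a trimmed copy of the response above (illustrative only, not calling the API):

```python
import json

# Trimmed copy of the list response above: one succeeded and one failed session.
sessions = json.loads("""
{
  "value": [
    {"name": "TestPacketCapture6", "properties": {"provisioningState": "Succeeded"}},
    {"name": "TestPacketCapture7", "properties": {"provisioningState": "Failed"}}
  ]
}
""")

# Collect the names of sessions whose provisioning failed.
failed = [s["name"] for s in sessions["value"]
          if s["properties"]["provisioningState"] == "Failed"]
print(failed)  # ['TestPacketCapture7']
```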
-
-## Query packet capture status
-
-The following example queries the status of a single packet capture session.
-
-```powershell
-$subscriptionId = "<subscription id>"
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$packetCaptureName = "TestPacketCapture5"
-armclient get "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}/querystatus?api-version=2016-12-01"
-```
-
-The following response is an example of a typical response returned when querying the status of a packet capture.
-
-```json
-{
- "name": "vm1PacketCapture",
- "id": "/subscriptions/{guid}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/networkWatchers/{networkWatcherName}/packetCaptures/{packetCaptureName}",
- "captureStartTime" : "9/7/2016 12:35:24PM",
- "packetCaptureStatus" : "Stopped",
- "stopReason" : "TimeExceeded",
- "packetCaptureError" : [ ]
-}
-```
-
-## Start packet capture
-
-The following example creates a packet capture on a virtual machine. The example is parameterized for flexibility.
-
-```powershell
-$subscriptionId = '<subscription id>'
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$packetCaptureName = "TestPacketCapture5"
-$storageaccountname = "contosoexamplergdiag374"
-$vmName = "ContosoVM"
-$bytestoCaptureperPacket = "0"
-$bytesPerSession = "1073741824"
-$captureTimeinSeconds = "60"
-$localIP = ""
-$localPort = "" # Examples are: 80, or 80-120
-$remoteIP = ""
-$remotePort = "" # Examples are: 80, or 80-120
-$protocol = "" # Valid values are TCP, UDP and Any.
-$targetUri = "" # Example: /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Compute/virtualMachines/$vmName
-$storageId = "" #Example "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/ContosoExampleRG/providers/Microsoft.Storage/storageAccounts/contosoexamplergdiag374"
-$storagePath = "" # Example: "https://mytestaccountname.blob.core.windows.net/capture/vm1Capture.cap"
-$localFilePath = "c:\\temp\\packetcapture.cap" # Example: "d:\capture\vm1Capture.cap"
-
-$requestBody = @"
-{
- 'properties': {
- 'target': '${targetUri}',
- 'bytesToCapturePerPacket': '${bytestoCaptureperPacket}',
- 'totalBytesPerSession': '${bytesPerSession}',
- 'timeLimitInSeconds': '${captureTimeinSeconds}',
- 'storageLocation': {
- 'storageId': '${storageId}',
- 'storagePath': '${storagePath}',
- 'filePath': '${localFilePath}'
- },
- 'filters': [
- {
- 'protocol': '${protocol}',
- 'localIPAddress': '${localIP}',
- 'localPort': '${localPort}',
- 'remoteIPAddress': '${remoteIP}',
- 'remotePort': '${remotePort}'
- }
- ]
- }
-}
-"@
-
-armclient PUT "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}?api-version=2016-07-01" $requestBody
-```
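The request body is plain JSON, so it can also be built with a JSON serializer rather than string interpolation, which avoids quoting mistakes. A hypothetical Python sketch follows; the placeholder values are illustrative, and the property names match the responses shown earlier (for example, `timeLimitInSeconds`):

```python
import json

# Illustrative placeholder values; substitute your own resource ids.
body = {
    "properties": {
        "target": "/subscriptions/<sub id>/resourceGroups/<rg>/providers"
                  "/Microsoft.Compute/virtualMachines/<vm name>",
        "bytesToCapturePerPacket": 0,        # 0 captures the entire packet
        "totalBytesPerSession": 1073741824,  # 1 GiB session cap
        "timeLimitInSeconds": 60,
        "storageLocation": {
            "storageId": "<storage account resource id>",
            "storagePath": "<blob URL for the .cap file>",
            "filePath": "c:\\temp\\packetcapture.cap",
        },
        "filters": [
            {
                "protocol": "TCP",
                "localIPAddress": "",
                "localPort": "80",   # single port or a range like 80-120
                "remoteIPAddress": "",
                "remotePort": "",
            }
        ],
    },
}

# Serialize to the JSON request body sent with the PUT call above.
request_body = json.dumps(body, indent=2)
```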
-
-## Stop packet capture
-
-The following example stops a packet capture on a virtual machine. The example is parameterized for flexibility.
-
-```powershell
-$subscriptionId = '<subscription id>'
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$packetCaptureName = "TestPacketCapture5"
-armclient post "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}/stop?api-version=2016-12-01"
-```
-
-## Delete packet capture
-
-The following example deletes a packet capture on a virtual machine. The example is parameterized for flexibility.
-
-```powershell
-$subscriptionId = '<subscription id>'
-$resourceGroupName = "NetworkWatcherRG"
-$networkWatcherName = "NetworkWatcher_westcentralus"
-$packetCaptureName = "TestPacketCapture5"
-
-armclient delete "https://management.azure.com/subscriptions/${subscriptionId}/ResourceGroups/${resourceGroupName}/providers/Microsoft.Network/networkWatchers/${networkWatcherName}/packetCaptures/${packetCaptureName}?api-version=2016-12-01"
-```
-
-> [!NOTE]
-> Deleting a packet capture does not delete the capture file in the storage account.
-
-## Next steps
-
-For instructions on downloading files from Azure storage accounts, see [Get started with Azure Blob storage using .NET](../storage/blobs/storage-quickstart-blobs-dotnet.md). You can also use [Storage Explorer](https://storageexplorer.com/) to manage capture files.
-
-Learn how to automate packet captures with virtual machine alerts by viewing [Create an alert triggered packet capture](network-watcher-alert-triggered-packet-capture.md).
operator-insights Concept Data Quality Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-quality-monitoring.md
+
+ Title: Data quality and quality monitoring
+description: This article helps you understand how data quality and quality monitoring work in Azure Operator Insights.
+ Last updated: 10/24/2023
+# Data quality and quality monitoring
+
+Every Data Product running on the Azure Operator Insights platform has built-in support for data quality monitoring. Data quality is crucial because it ensures accurate, reliable, and trustworthy information for decision-making. It prevents costly mistakes, builds credibility with customers and regulators, and enables personalized experiences.
+
+The Azure Operator Insights platform monitors data quality when data is ingested into Data Product input storage (the first AOI Data Product Storage block in the following image) and after data is processed and made available to customers (the AOI Data Product Compute block in the following image).
++
+## Quality dimensions
+
+Data quality dimensions are the various aspects or characteristics that define the quality of data. Azure Operator Insights supports the following dimensions:
+
+- Accuracy - Refers to how well the data reflects reality, for example, correct names, addresses and up-to-date data. High data accuracy allows you to produce analytics that can be trusted and leads to correct reporting and confident decision-making.
+- Completeness - Refers to whether all the data required for a particular use is present and available to be used. Completeness applies not only at the data item level but also at the record level. Completeness helps to understand if missing data will affect the reliability of insights from the data.
+- Uniqueness - Refers to the absence of duplicates in a dataset.
+- Consistency - Refers to whether the same data element agrees across different sources and over time. Consistency ensures that data is uniform and can be compared across different sources.
+- Timeliness - Refers to whether the data is up-to-date and available when needed. Timeliness ensures that data is relevant and useful for decision-making.
+- Validity - Refers to whether the data conforms to a defined set of rules or constraints.
+
+## Metrics
+
+All data quality dimensions are covered by quality metrics produced by the Azure Operator Insights platform. There are two types of quality metrics:
+
+- Basic - Standard set of checks across all data products.
+- Custom - Custom set of checks, allowing each data product to implement checks specific to its product.
+
+The basic quality metrics produced by the platform are listed in the table below.
+
+| **Metric** | **Dimension** | **Data Source** |
+|---|---|---|
+| Number of ingested rows | Timeliness | Ingested |
+| Number of rows containing null for required columns | Completeness | Ingested |
+| Number of rows failed validation against schema | Validity | Ingested |
+| Number of filtered out rows | Completeness | Ingested |
+| Number of processed rows | Timeliness | Processed |
+| Number of incomplete rows, which don't contain required data | Completeness | Processed |
+| Number of duplicated rows | Uniqueness | Processed |
+| Percentiles for overall lag between record generation and available for querying | Timeliness | Processed |
+| Percentiles for lag between record generation and ingested into input storage | Timeliness | Processed |
+| Percentiles for lag between data ingested and processed | Timeliness | Processed |
+| Percentiles for lag between data processed and available for querying | Timeliness | Processed |
+| Ages for materialized views | Timeliness | Processed |
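As an illustration of how a basic metric can be turned into a dimension score, here's a hypothetical completeness calculation; the row counts below are invented for the sketch, not real platform output:

```python
# Hypothetical counts, standing in for the "Number of ingested rows" and
# "Number of rows containing null for required columns" metrics above.
ingested_rows = 10_000
null_required_rows = 250

# Share of ingested rows with all required columns populated.
completeness_pct = 100 * (ingested_rows - null_required_rows) / ingested_rows
print(f"{completeness_pct:.1f}%")  # 97.5%
```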
+
+The custom data quality metrics are implemented on a per-data-product basis. These metrics cover the accuracy and consistency dimensions. Each data product's documentation describes the custom quality metrics available.
+
+## Monitoring
+
+All Azure Operator Insights Data Products are deployed with a dashboard showing quality metrics. You can use the dashboard to monitor the quality of your data.
+
+All data quality metrics are saved to the Data Product ADX tables. To explore the data quality metrics, you can use the standard Data Product KQL endpoint and then extend the dashboard if necessary.
operator-insights Concept Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-types.md
+
+ Title: Data types - Azure Operator Insights
+description: This article provides an overview of the data types used by Azure Operator Insights Data Products
+ Last updated: 10/25/2023
+#CustomerIntent: As a Data Product user, I want to understand the concept of Data Types so that I can use Data Product(s) effectively.
++
+# Data types overview
+
+A Data Product ingests data from one or more sources, digests and enriches this data, and presents this data to provide domain-specific insights and to support further data analysis.
+
+A data type is used to refer to individual data sources. The data types can be from outside the Data Product, such as from a network element. The data types can also be created within the Data Product itself by aggregating or enriching the data from other data types.
+
+Data Product operators can choose which data types to use and the data retention period for each data type.
+
+## Data type contents
+
+Each data type contains data from a specific source. For a foundational data product, the primary sources are typically network elements within the subject domain. For example, the Mobile Content Cloud (MCC) Data Product includes the *edr* data type that handles Event Data Records from the MCC and the *pmstats* data type that contains MCC performance management data (performance statistics).
+
+Data types can also be derived by aggregating or enriching the data from other data types. The MCC Data Product includes an *edr-sanitized* data type generated by the Data Product itself. This data type provides the same information as the *edr* data type but with PII data suppressed to support operators' compliance with privacy legislation.
+
+## Data type settings
+
+Data types are presented as child resources of the Data Product within the Azure portal as shown in the Data Types page. Relevant settings can be controlled independently for each individual data type.
++
+- Data Product operators can turn off individual data types to avoid incurring processing and storage costs associated with a data type that isn't valuable for their specific use cases.
+- Data Product operators can configure different data retention periods for each data type as shown in the Data Retention page. For example, data types containing PII are typically configured with a shorter retention period to comply with privacy legislation.
+
+ :::image type="content" source="media/concept-data-types/data-types-data-retention.png" alt-text="Screenshot of Data Types Data Retention portal page.":::
operator-insights Concept Data Visualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-visualization.md
+
+ Title: Data visualization in Azure Operator Insights Data Products
+description: This article outlines how data is stored and visualized in Azure Operator Insights Data Products.
+ Last updated: 10/23/2023
+#CustomerIntent: As a Data Product user, I want to understand data visualization in the Data Product so that I can access my data.
++
+# Data visualization in Data Products overview
+
+The Azure Operator Insights Data Product is an Azure service that handles processing and enrichment of data. A set of dashboards is deployed with the Data Product, but users can also query and visualize the data.
+
+## Data Explorer
+
+Enriched and processed data is stored in the Data Product and is made available for querying with the Consumption URL, which you can connect to in the [Azure Data Explorer web UI](https://dataexplorer.azure.com/). Permissions are governed by role-based access control.
+
+The Data Product exposes a database, which contains a set of tables and materialized views. You can query this data in the Data Explorer GUI using [Kusto Query Language](/azure/data-explorer/kusto/query/).
+
+## Enrichment and aggregation
+
+The Data Product enriches the raw data by combining data from different tables together. This enriched data is then aggregated in materialized views that summarize the data over various dimensions.
+
+The data is enriched and aggregated after it has been ingested into the raw tables. As a result, there is a slight delay between the arrival of the raw data and the arrival of the enriched data.
+
+The Data Product has metrics that monitor the quality of the raw and enriched data. For more information, see [Data quality and data monitoring](concept-data-quality-monitoring.md).
+
+## Visualizations
+
+Dashboards are deployed with the Data Product. These dashboards include a set of visualizations organized according to different KPIs in the data, which can be filtered on a range of dimensions. For example, visualizations provided in the Mobile Content Cloud (MCC) Data Product include upload/download speeds and data volumes.
+
+For information on accessing and using the built-in dashboards, see [Use Data Product dashboards](dashboards-use.md).
+
+You can also create your own visualizations, either by using the KQL [render](/azure/data-explorer/kusto/query/renderoperator?pivots=azuredataexplorer) operator in the [Azure Data Explorer web UI](https://dataexplorer.azure.com/) or by creating dashboards following the guidance in [Visualize data with Azure Data Explorer dashboards](/azure/data-explorer/azure-data-explorer-dashboards).
+
+## Querying
+
+On top of the dashboards provided as part of the Data Product, the data can be directly queried in the Azure Data Explorer web UI. See [Query data in the Data Product](data-query.md) for information on accessing and querying the data.
+
+## Related content
+
+- To get started with creating a Data Product, see [Create an Azure Operator Insights Data Product](data-product-create.md)
+- For information on querying the data in your Data Product, see [Query data in the Data Product](data-query.md)
+- For information on accessing the dashboards in your Data Product, see [Use Data Product dashboards](dashboards-use.md)
operator-insights Concept Mcc Data Product https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-mcc-data-product.md
+
+ Title: Mobile Content Cloud (MCC) Data Product - Azure Operator Insights
+description: This article provides an overview of the MCC Data Product for Azure Operator Insights
+ Last updated: 10/25/2023
+#CustomerIntent: As an MCC operator, I want to understand the capabilities of the MCC Data Product so that I can use it to provide insights to my network.
++
+# Mobile Content Cloud (MCC) Data Product overview
+
+The MCC Data Product supports data analysis and insight for operators of the Affirmed Networks Mobile Content Cloud (MCC). It ingests Event Data Records (EDRs) and performance management data (performance statistics) from MCC network elements. It then digests and enriches this data to provide a range of visualizations for the operator and to provide access to the underlying enriched data for operator data scientists.
+
+## Background
+
+The Affirmed Networks Mobile Content Cloud (MCC) is a virtualized Evolved Packet Core (vEPC) that can provide the following functionality.
+
+- Serving Gateway (SGW) routes and forwards user data packets between the RAN and the core network.
+- Packet Data Network Gateway (PGW) provides interconnect between the core network and external IP networks.
+- Gi-LAN Gateway (GIGW) provides subscriber-aware or subscriber-unaware value-added services (VAS) without enabling MCC gateway services, allowing operators to take advantage of VAS while still using their incumbent gateway.
+- Gateway GPRS support node (GGSN) provides interworking between the GPRS network and external packet switched networks.
+- Serving GPRS support node and MME (SGSN/MME) is responsible for the delivery of data packets to and from the mobile stations within its geographical service area.
+- Control and User Plane Separation (CUPS), an LTE enhancement that separates control and user plane function to allow independent scaling of functions.
+
+The data produced by the MCC varies according to the functionality, which leads to variation in the data digested and in the enrichments and visualizations that are relevant.
+
+## Data types
+
+The following data types are provided as part of the MCC Data Product.
+
+- *edr* contains data from the Event Data Records (EDRs) written by the MCC network elements. EDRs record each significant event arising during calls or sessions handled by the MCC. They provide a comprehensive record of what happened, allowing operators to explore both individual problems and more general patterns.
+- *pmstats* contains performance management data reported by the MCC management node, giving insight into the performance characteristics of the MCC network elements.
+- *edr-sanitized* contains data from the *edr* data type but with personally identifiable information (PII) information suppressed. This data can be used to support data analysis without giving access to PII.
+
+## Related content
+
+- [Data Quality Monitoring](concept-data-quality-monitoring.md)
+- [Azure Operator Insights Data Types](concept-data-types.md)
+- [Affirmed Networks MCC documentation](https://manuals.metaswitch.com/MCC)
+
+ > [!NOTE]
+ > Affirmed Networks login credentials are required to access the MCC product documentation.
operator-insights Dashboards Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/dashboards-use.md
+
+ Title: Use Azure Operator Insights Data Product dashboards
+description: This article outlines how to access and use dashboards in the Azure Operator Insights Data Product.
+ Last updated: 10/24/2023
+#CustomerIntent: As a Data Product user, I want to access dashboards so that I can view my data.
++
+# Use Data Product dashboards to visualize data
+
+This article covers accessing and using the dashboards in the Azure Operator Insights Data Product.
+
+## Prerequisites
+
+A deployed Data Product, see [Create an Azure Operator Insights Data Product](data-product-create.md).
+
+## Get access to the dashboards
+
+Access to the dashboards is controlled by role-based access control (RBAC).
+
+1. In the Azure portal, select the Data Product resource and open the Permissions pane. You must have the `Reader` role. If you do not, contact an owner of the resource to grant you `Reader` permissions.
+1. In the Overview pane of the Data Product, open the link to the dashboards.
+1. Select any dashboard to open it and view the visualizations.
+
+## Filter data
+
+Each dashboard is split into pages with a set of filters at the top of the page.
+
+- View different pages in the dashboard by selecting the tabs on the left.
+- Filter data by using the drop-down or free text fields at the top of the page.
+ You can enter multiple values in the free text fields by separating the inputs with a comma and no spaces, for example: `London,Paris`.
+
+Some tiles report `UNDETECTED` for any filters with an empty entry. You can't filter these undetected entries.
+
+## Exploring the queries
+
+Each tile in a dashboard runs a query against the data. To edit and run a tile's query manually, open it in the query editor.
+
+1. Select the ellipsis in the top right corner of the tile, and select **Explore Query**.
+1. Your query opens in a new tab in the query editor. If the query is all on one line, right-click the query block and select **Format Document**.
+1. Select **Run** or press *Shift + Enter* to run the query.
+
+## Editing the dashboards
+
+Users with Edit permissions on dashboards can make changes.
+
+1. In the dashboard, change the state from **Viewing** to **Editing** in the top left of the screen.
+1. Select **Add** to add new tiles, or select the pencil to edit existing tiles.
+
+## Related content
+
+- For more information on dashboards and how to create your own, see [Visualize data with Azure Data Explorer dashboards](/azure/data-explorer/azure-data-explorer-dashboards)
+- For general information on data querying in the Data Product, see [Query data in the Data Product](data-query.md)
operator-insights Data Product Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/data-product-create.md
+
+ Title: Create an Azure Operator Insights Data Product
+description: In this article, learn how to create an Azure Operator Insights Data Product resource.
+ Last updated: 10/16/2023
+# Create an Azure Operator Insights Data Product
+
+In this article, you learn how to create an Azure Operator Insights Data Product instance.
+
+> [!NOTE]
+> Access is currently only available by request. More information is included in the application form. We appreciate your patience as we work to enable broader access to Azure Operator Insights Data Product. Apply for access by [filling out this form](https://aka.ms/AAn1mi6).
+
+## Prerequisites
+
+- An Azure subscription for which the user account must be assigned the Contributor role. If needed, create a [free subscription](https://azure.microsoft.com/free/) before you begin.
+- Access granted to Azure Operator Insights for the subscription. Apply for access by [completing this form](https://aka.ms/AAn1mi6).
+- (Optional) If you plan to integrate Data Product with Microsoft Purview, you must have an active Purview account. Make note of the Purview collection ID when you [set up Microsoft Purview with a Data Product](purview-setup.md).
+
+### For CMK-based data encryption or Microsoft Purview
+
+If you're using CMK-based data encryption or Microsoft Purview, you must set up Azure Key Vault and user-assigned managed identity (UAMI) as prerequisites.
+
+#### Set up Azure Key Vault
+
+An Azure Key Vault resource is used to store your customer-managed key (CMK) for data encryption. The Data Product uses this key to encrypt your data over and above the standard storage encryption. You need Subscription or Resource group Owner permissions to perform this step.
+
+1. [Create an Azure Key Vault resource](../key-vault/general/quick-create-portal.md) in the same subscription and resource group where you intend to deploy the Data Product resource.
+1. Provide your user account with the Key Vault Administrator role on the Azure Key Vault resource. This is done via the **Access Control (IAM)** tab on the Azure Key Vault resource.
+1. Navigate to the Key Vault resource and select **Keys**. Select **Generate/Import**.
+1. Enter a name for the key and select **Create**.
+1. Select the newly created key and select the current version of the key.
+1. Copy the Key Identifier URI to your clipboard to use when creating the Data Product.
+
+#### Set up user-assigned managed identity
+
+1. [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azp#create-a-user-assigned-managed-identity) using Microsoft Entra ID for CMK-based encryption. The Data Product also uses the user-assigned managed identity (UAMI) to interact with the Microsoft Purview account.
+1. Navigate to the Azure Key Vault resource that you created earlier and assign the **Key Vault Administrator** role to the UAMI.
++
+## Create an Azure Operator Insights Data Product resource in the Azure portal
+
+You create the Azure Operator Insights Data Product resource.
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+1. In the search bar, search for Operator Insights and select **Azure Operator Insights - Data Products**.
+1. On the Azure Operator Insights - Data Products page, select **Create**.
+1. On the Basics tab of the **Create a Data Product** page:
+ 1. Select your subscription.
+ 1. Select the resource group you previously created for the Key Vault resource.
+ 1. Under the Instance details, complete the following fields:
+ - Name - Enter the name for your Data Product resource. The name must start with a lowercase letter and can contain only lowercase letters and numbers.
+ - Publisher - Select Microsoft.
+ - Product - Select MCC.
+ - Version - Select the version.
+
+ Select **Next**.
+
+1. In the Advanced tab of the **Create a Data Product** page:
+ 1. Enable Purview if you're integrating with Microsoft Purview.
+ Select the subscription for your Purview account, select your Purview account, and enter the Purview collection ID.
+ 1. Enable Customer managed key if you're using CMK for data encryption.
+ 1. Select the user-assigned managed identity that you set up as a prerequisite.
+ 1. Carefully paste the Key Identifier URI that was created when you set up Azure Key Vault as a prerequisite.
+
+1. To add one or more owners for the Data Product (owners also appear in Microsoft Purview), select **Add owner**, enter the email address, and select **Add owners**.
+1. In the Tags tab of the **Create a Data Product** page, select or enter the name/value pair used to categorize your data product resource.
+1. Select **Review + create**.
+1. Select **Create**. Your Data Product instance is created in about 20-25 minutes. During this time, all the underlying components are provisioned. After provisioning completes, you can start ingesting data, exploring the sample dashboards, and running queries.
+
+## Deploy Sample Insights
+
+Once your Data Product instance is created, you can deploy a sample insights dashboard that works against the sample data included with the Data Product instance.
+
+1. Navigate to your Data Product resource on the Azure portal and select the Permissions tab on the Security section.
+1. Select **Add Reader**. Enter the email address of the user to add to the Data Product reader role.
+
+> [!NOTE]
+> The reader role is required for you to have access to the insights consumption URL.
+
+3. Download the sample JSON template file from the Data Product overview page by selecting the link shown after the text "Sample Dashboard". Alternatively, [download the sample JSON template file here](https://aka.ms/aoidashboard).
+1. Copy the consumption URL from the Data Product overview screen into the clipboard.
+1. Open a web browser, paste in the URL and select enter.
+1. When the URL loads, select on the Dashboards option on the left navigation pane.
+1. Select the **New Dashboard** drop down and select **Import dashboard from file**. Browse to select the JSON file downloaded previously, provide a name for the dashboard and select **Create**.
+1. Select the three dots (...) at the top right corner of the consumption URL page and select **Data Sources**.
+1. Select the pencil icon next to the Data source name in order to edit the data source.
+1. Under the Cluster URI section, replace the URL with your Data Product consumption URL and select connect.
+1. In the Database drop-down, select your Database. Typically, the database name is the same as your Data Product instance name. Select **Apply**.
+
+> [!NOTE]
+> These dashboards are based on synthetic data and may not have complete or representative examples of the real-world experience.
+
+## Explore sample data using Kusto
+
+The consumption URL also allows you to write your own Kusto query to get insights from the data.
+
+1. On the Overview page, copy the consumption URL and paste it in a new browser tab to see the database and list of tables.
+1. Use the ADX query pane to write Kusto queries. For example:
+
+ ```
+ enriched_flow_events_sample
+ | summarize Application_count=count() by flowRecord_dpiStringInfo_application
+ | order by Application_count desc
+ | take 10
+ ```
+
+    ```
+    enriched_flow_events_sample
+    | summarize SumDLOctets = sum(flowRecord_dataStats_downLinkOctets) by bin(eventTimeFlow, 1h)
+    | render columnchart
+    ```
+
+## Delete Azure resources
+
+When you finish exploring the Azure Operator Insights Data Product, delete the resources you've created to avoid unnecessary Azure costs.
+
+1. On the **Home** page of the Azure portal, select **Resource groups**.
+1. Select the resource group for your Azure Operator Insights Data Product and verify that it contains the Azure Operator Insights Data Product instance.
+1. At the top of the Overview page for your resource group, select **Delete resource group**.
+1. Enter the resource group name to confirm the deletion, and select **Delete**.
operator-insights Data Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/data-query.md
+
+ Title: Query data in the Azure Operator Insights Data Product
+description: This article outlines how to access and query the data in the Azure Operator Insights Data Product.
++++ Last updated : 10/22/2023+
+#CustomerIntent: As a consumer of the Data Product, I want to query data that has been collected so that I can visualise the data and gain customised insights.
++
+# Query data in the Data Product
+
+This article outlines how to access and query your data.
+
+The Azure Operator Insights Data Product stores enriched and processed data, which is available for querying with the Consumption URL.
+
+## Prerequisites
+
+A deployed Data Product. See [Create an Azure Operator Insights Data Product](data-product-create.md).
+
+## Get access to the ADX cluster
+
+Access to the data is controlled by role-based access control (RBAC).
+
+1. In the Azure portal, select the Data Product resource and open the Permissions pane. You must have the `Reader` role. If you do not, contact an owner of the resource to grant you `Reader` permissions.
+1. In the Overview pane, copy the Consumption URL.
+1. Open the [Azure Data Explorer web UI](https://dataexplorer.azure.com/) and select **Add** > **Connection**.
+1. Paste your Consumption URL in the connection box and select **Add**.
+
+For more information, see [Add a cluster connection in the Azure Data Explorer web UI](/azure/data-explorer/add-cluster-connection).
+
+## Perform a query
+
+Now that you have access to your data, confirm you can run a query.
+
+1. In the [Azure Data Explorer web UI](https://dataexplorer.azure.com/), expand the drop-down for the Data Product Consumption URL for which you added a connection.
+1. Double-click on the database you want to run your queries against. This database is set as the context in the banner above the query editor.
+1. In the query editor, run one of the following simple queries to check access to the data.
+
+```kql
+// Lists all available tables in the database.
+.show tables
+
+// Returns the schema of the named table. Replace $TableName with the name of table in the database.
+$TableName
+| getschema
+
+// Take the first entry of the table. Replace $TableName with the name of table in the database.
+$TableName
+| take 1
+```
+
+With access to the data, you can run queries to gain insights or you can visualize and analyze your data. These queries are written in [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/).
+
+Aggregated data in the Data Product is stored in [materialized views](/azure/data-explorer/kusto/management/materialized-views/materialized-view-overview). These views can be queried like tables, or by using the [materialized_view() function](/azure/data-explorer/kusto/query/materialized-view-function). Queries against materialized views perform best when you use the `materialized_view()` function.
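+
+For example, assuming a materialized view named `enriched_flow_events_agg` (an illustrative name; the views available depend on your Data Product), a query through the function looks like this:
+
+```kql
+materialized_view("enriched_flow_events_agg")
+| where eventTimeFlow > ago(1d)
+| take 10
+```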
+
+## Related content
+
+- For information on using the query editor, see [Writing Queries for Data Explorer](/azure/data-explorer/web-ui-kql)
+- For information on KQL, see [Kusto Query Language Reference](/azure/data-explorer/kusto/query/)
+- For information on accessing the dashboards in your Data Product, see [Use Data Product dashboards](dashboards-use.md)
operator-insights Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/managed-identity.md
+
+ Title: Managed identity for Azure Operator Insights
+description: This article helps you understand managed identity and how it works in Azure Operator Insights.
++++ Last updated : 10/18/2023++
+# Managed identity for Azure Operator Insights
+
+This article helps you understand managed identity (formerly known as Managed Service Identity/MSI) and how it works in Azure Operator Insights.
+
+## Overview
+
+Managed identities eliminate the need to manage credentials. They provide an identity for the service instance when connecting to resources that support Microsoft Entra ID (formerly Azure Active Directory) authentication. For example, the service can use a managed identity to access resources like [Azure Key Vault](../key-vault/general/overview.md), where data admins can securely store credentials, or to access storage accounts. The service uses the managed identity to obtain Microsoft Entra ID tokens.
+
+There are two types of supported managed identities:
+
+- **System-assigned:** You can enable a managed identity directly on a service instance. When you enable a system-assigned managed identity during the creation of the service, an identity is created in Microsoft Entra ID that is tied to that service instance's lifecycle. By design, only that Azure resource can use this identity to request tokens, so when the resource is deleted, Azure automatically deletes the identity for you.
+
+- **User-assigned:** You can also create a managed identity as a standalone Azure resource. You can [create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md). In user-assigned managed identities, the identity is managed separately from the resources that use it.
+
+Managed identity provides the following benefits:
+
+- [Store credentials in Azure Key Vault](../data-factory/store-credentials-in-key-vault.md), in which case managed identity is used for Azure Key Vault authentication.
+
+- Access data stores or compute services using managed identity authentication, including Azure Blob storage, Azure Data Explorer, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, Azure SQL Managed Instance, Azure Synapse Analytics, REST, Databricks activity, Web activity, and more.
+
+- Managed identity is also used to encrypt/decrypt data and metadata using the customer-managed key stored in Azure Key Vault, providing double encryption.
+
+## System-assigned managed identity
+
+>[!NOTE]
+> System-assigned managed identity is not currently supported with Azure Operator Insights Data Product Resource.
+
+## User-assigned managed identity
+
+You can create, delete, and manage user-assigned managed identities in Microsoft Entra ID (formerly Azure Active Directory). For more information, see [Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md).
+
+Once you have created a user-assigned managed identity, you must supply the credentials during or after [Azure Operator Insights Data Product Resource creation](../data-factory/credentials.md).
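+
+For reference, a user-assigned managed identity is itself a simple ARM resource of the standard `Microsoft.ManagedIdentity/userAssignedIdentities` type. The following fragment is a sketch; the resource name and API version shown are illustrative:
+
+```json
+{
+  "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
+  "apiVersion": "2023-01-31",
+  "name": "aoi-data-product-identity",
+  "location": "uksouth"
+}
+```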
+
+## Related content
+
+See [Store credential in Azure Key Vault](../data-factory/store-credentials-in-key-vault.md) for information about when and how to use managed identity.
+
+See [Managed Identities for Azure Resources Overview](../active-directory/managed-identities-azure-resources/overview.md) for more background on managed identities for Azure resources, on which managed identity in Azure Operator Insights is based.
operator-insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/overview.md
+
+ Title: What is Azure Operator Insights?
+description: Azure Operator Insights is an Azure service for monitoring and analyzing data from multiple sources
++++ Last updated : 10/26/2023++
+# What is Azure Operator Insights?
+
+Azure Operator Insights is a fully managed service that enables the collection and analysis of massive quantities of network data gathered from complex multi-part or multi-vendor network functions. It delivers statistical, machine learning, and AI-based insights for operator-specific workloads to help operators understand the health of their networks and the quality of their subscribers' experiences in near real-time.
+
+Azure Operator Insights accelerates time to business value by eliminating the time-consuming task of assembling off-the-shelf cloud components (a "chemistry set"). It reduces the load on ultra-lean operator platform and data engineering teams by making the following turnkey:
+
+- High scale ingestion to handle large amounts of network data from operator data sources.
+- Pipelines managed for all operators, leading to economies of scale dropping the price.
+- Operator privacy module.
+- Operator compliance including handling retention policies.
+- Common data model with open standards such as parquet and delta lake for easy integration with other Microsoft and third-party services.
+- High speed analytics to enable fast data exploration and correlation between different data sets produced by disaggregated 5G multi-vendor networks.
+
+The result is that the operator has a lower total cost of ownership and deeper insights into their network than equivalent on-premises or cloud chemistry-set platforms.
+
+## How do I get access to Azure Operator Insights?
+
+Access is currently limited by request. More information is included in the application form. We appreciate your patience as we work to enable broader access to Azure Operator Insights Data Product. Apply for access by [filling out this form](https://aka.ms/AAn1mi6).
operator-insights Purview Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/purview-setup.md
+
+ Title: Use Microsoft Purview with an Azure Operator Insights Data Product
+description: In this article, learn how to set up Microsoft Purview to explore an Azure Operator Insights Data Product.
++++ Last updated : 10/24/2023++
+# Use Microsoft Purview with an Azure Operator Insights Data Product
+
+This article outlines how to set up Microsoft Purview to explore an Azure Operator Insights Data Product.
+
+Data governance is about managing data as a strategic asset, ensuring that there are controls in place around data, its content, structure, use, and safety. Microsoft Purview (formerly Azure Purview) is responsible for implementing data governance and allows you to monitor, organize, govern, and manage your entire data estate.
+
+When it comes to Azure Operator Insights, Microsoft Purview provides simple overviews and catalogs of all Data Product resources. To integrate Microsoft Purview into your Data Product solution, provide your Microsoft Purview account and chosen collection when creating an Azure Operator Insights Data Product in the Azure portal.
+
+The Microsoft Purview account and collection is populated with catalog details of your Data Product during the resource creation or resource upgrade process.
+
+## Prerequisites
+
+- You are in the process of creating or upgrading an Azure Operator Insights Data Product.
+
+- If you don't have an existing Microsoft Purview account, [create a Purview account](../purview/create-microsoft-purview-portal.md) in the Azure portal.
+
+## Access and set up your Microsoft Purview account
+
+You can access your Purview account through the Azure portal by going to `https://web.purview.azure.com` and selecting your Microsoft Entra ID and account name. Or by going to `https://web.purview.azure.com/resource/<yourpurviewaccountname>`.
+
+To begin to catalog a data product in this account, [create a collection](../purview/how-to-create-and-manage-collections.md) to hold the Data Product.
+
+Assign roles to your users with role-based access control (RBAC). There are multiple roles that can be assigned, and assignments can be done at the account root and collection level. For more information, see how to [add roles and restrict access through collections](../purview/how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
+
+[Using the Microsoft Purview governance portal](../purview/use-microsoft-purview-governance-portal.md) explains how to use the user interface and navigate the service. Microsoft Purview includes options to scan in data sources, but scanning isn't required for integrating Azure Operator Insights Data Products with Microsoft Purview. When you complete this procedure, all Azure services and assets are automatically populated in your Purview catalog.
+
+## Connect Microsoft Purview to your Data Product
+
+When creating an Azure Operator Insights Data Product, select the **Advanced** tab and enable Purview.
++
+Select **Select Purview Account** to provide the required values to populate a Purview collection with data product details.
+- **Purview account name** - When you select your subscription, all Purview accounts in that subscription are available. Select the account you created.
+- **Purview collection ID** - The five-character ID visible in the URL of the Purview collection. To find the ID, select your collection and the collection ID is the five characters following `?collection=` in the URL. In the following example, the Investment collection has the collection ID *50h55*.
++
+### Data Product representation in Microsoft Purview
+
+A Data Product is made up of many Azure services and data assets, which are represented as assets of the appropriate types inside the Microsoft Purview governance portal. The following asset types are represented.
+
+#### Data Product
+
+An overall representation of the AOI Data Product
+
+| **Additional fields** | **Description** |
+|--|--|
+| Description | Brief description of the Data Product |
+| Owners | A list of owners of this Data Product |
+| Azure Region | The region where the Data Product is deployed |
+| Docs | A link to documents that explain the data |
+
+#### AOI Data Lake
+
+Also known as Azure Data Lake Storage (ADLS)
+
+| **Additional fields** | **Description** |
+|--|-|
+| DFS Endpoint Address | Provides access to Parquet files in AOI Data Lake |
+
+#### AOI Database
+
+Also known as Azure Data Explorer (ADX)
+
+| **Additional fields** | **Description** |
+|--|-|
+| KQL Endpoint Address | Provides access to AOI tables for exploration using KQL |
+
+#### AOI Table
+
+ADX Tables and Materialized Views
+
+| **Additional fields** | **Description** |
+|--|-|
+| Description | Brief description of each table and view |
+| Schema | Contains the table columns and their details |
+
+#### AOI Parquet details
+
+Each ADX Table is an equivalent Parquet file type
+
+| **Additional fields** | **Description** |
+|--|-|
+| Path | Top-level path for the Parquet file type: container/dataset\_name |
+| Description | Identical to the equivalent AOI Table |
+| Schema | Identical to the equivalent AOI Table |
+
+#### AOI Column
+
+The columns belong to AOI Tables and the equivalent AOI Parquet details
+
+| **Additional fields** | **Description** |
+|--||
+| Type | The data type of this column |
+| Description | Brief description for this column |
+| Schema | Identical to the equivalent AOI Table |
+
+There are relationships between assets where necessary. For example, a Data Product can have many AOI Databases and one AOI Data Lake related to it.
+
+## Explore your Data Product with Microsoft Purview
+
+When the Data Product creation process is complete, you can see the catalog details of your Data Product in the collection. Select **Data map > Collections** from the left pane and select your collection.
++
+> [!NOTE]
+> The Microsoft Purview integration with Azure Operator Insights Data Products only features the Data catalog and Data map of the Purview portal.
+
+Select **Assets** to view the data product catalog, which lists all assets of your data product.
++
+You can filter assets by the data source type. For each asset, you can display its properties, a list of owners (if applicable), and the related assets.
++
+
+### Asset properties and endpoints
+
+When looking at individual assets, select the **Properties** tab to display properties and related assets for that asset.
++
+You can use the Properties tab to find endpoints in AOI Database and AOI Tables.
+
+### Related assets
+
+Select the **Related** tab of an asset to display a visual representation of the existing relationships, summarized and grouped by the asset types.
++
+Select an asset type (such as aoi\_database as shown in the example) to view a list of related assets.
+
+### Exploring schemas
+
+The AOI Table and AOI Parquet Details have schemas. Select the **Schema** tab to display the details of each column.
++
+## Related content
+
+[Use the Microsoft Purview governance portal](../purview/use-microsoft-purview-governance-portal.md)
operator-nexus Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/overview.md
As a platform, Azure Operator Nexus is designed for telco network functions and
### Azure Operator Service Manager
-Azure Operator Service Manager is a service that allows network equipment providers (NEPs) to publish their NFs in Azure Marketplace. Operators can deploy the NFs by using familiar Azure APIs.
+[Azure Operator Service Manager](../operator-service-manager/azure-operator-service-manager-overview.md) is a service that allows network equipment providers (NEPs) to publish their NFs in Azure Marketplace. Operators can deploy the NFs by using familiar Azure APIs.
Operator Service Manager provides a framework for NEPs and Microsoft to test and validate the basic functionality of the NFs. The validation includes lifecycle management of an NF on Azure Operator Nexus.
operator-service-manager Azure Operator Service Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/azure-operator-service-manager-overview.md
+
+ Title: What is Azure Operator Service Manager?
+description: Learn about Azure Operator Service Manager, an Azure Service for the management of Network Services for telecom operators.
++ Last updated : 10/18/2023+++
+# What is Azure Operator Service Manager?
+
+Azure Operator Service Manager is an Azure service designed to assist telecom operators in managing their network services. It provides management capabilities for multi-vendor applications across hybrid cloud sites, encompassing Azure regions, edge platforms, and Arc-connected sites. Azure Operator Service Manager caters to the needs of telecom operators who are in the process of migrating their workloads to Azure and Arc-connected cloud environments.
+
+## Orchestrate operator services across Azure platforms
+
+As part of the Azure for Operators program, Azure Operator Service Manager transforms the traditional operator service management experience into a modern cloud service. With support for Azure for Operators platforms like Azure Operator Nexus and services like Azure Operator 5G Core, operators can simplify complex service deployments. This simplification ensures carrier-grade service reliability while accelerating both service innovation and service monetization.
++
+## Technical overview
+
+Managing complex network services efficiently and reliably can be a challenge. Azure Operator Service Manager's role-based approach introduces curated experiences for publishers, designers, and operators. It uses Network Service Design (NSD) artifacts to onboard service requirements, and configuration group schemas and values to define run-time inputs. The following diagram illustrates the Azure Operator Service Manager (AOSM) deployment workflow.
++
+## Product features
+
+### Support Azure Operator Nexus platform and service catalog
+
+Manage the lifecycle of next generation services on the Azure Operator Nexus platform. Lifecycle management includes self-service support for telecom operators to onboard and deploy third-party network services, along with catalog offerings from industry leaders such as Ericsson and Nokia and Microsoft's own Azure for Operators services.
+
+### Unified service orchestration
+
+Consolidate software and configuration management tasks into a single set of end-to-end Azure operations to seamlessly compose, deploy, and update complex multi-vendor multi-region services. One true Azure interface provides access to all operator service management needs.
+
+### Simplify service creation
+
+Model network services using Azure Resource Manager (ARM), just like any other Azure resources. Reduce the number of parameters needed to create operator-centric services and drive run-time operations via traditional Azure interfaces, such as portal, CLI, API or SDK.
+
+### Reliably deploy Telco grade network function software
+
+Operators can easily automate repeat configuration changes, reducing the effort required to ensure service consistency and enhancing the reliability of service deployments.
+
+### Secure software distribution supply chain
+
+World-class software distribution security addresses operator concerns about the threat of bad actors. Modern custody management ensures that what a publisher has onboarded is what an operator deploys.
+
+### Consistent service updates
+
+Updating services becomes straightforward. Operators can recall the last service template, modify service parameters, and request a new service deployment. Using convergence to reach the desired state makes service updates seamless. Furthermore, if necessary, operators can easily clean up and delete service instances.
+
+## Business impact
+
+### Accelerate service velocity
+
+Leverage Azure Operator Service Manager's approach to service composition, deployment, and updates to realize up to a 3x acceleration in service velocity. This acceleration allows operators to increase the frequency of service updates and be first to market with new services.
+
+### Optimize capital expenses
+
+Ease the path to Azure cloud savings with on-demand placement of service resources. Operators can realize up to a 40% reduction in capital expenses by breaking the traditional cycle of advanced capacity purchasing.
+
+### Reduce energy expense
+
+Steer service placement to the greenest hardware, reducing service operating expenses by up to 20%. Use of greenest hardware also helps shrink the overall corporate carbon footprint.
+
+## Conclusion
+
+By unifying service management, facilitating reliable deployments, supporting global workflows, and ensuring service consistency, operators can achieve accelerated service velocity, improved service reliability, and optimize service cost. Harness the power of Microsoft Azure to drive network services forward.
+
+## Service Level Agreement
+
+SLA (Service Level Agreement) information can be found in the [Service Level Agreements SLA for Online Services](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1).
+
+## Get access to Azure Operator Service Manager (AOSM) for your Azure subscription
+
+Contact your Microsoft account team to register your Azure subscription for access to Azure Operator Service Manager (AOSM) or express your interest through the [partner registration form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR7lMzG3q6a5Hta4AIflS-llUMlNRVVZFS00xOUNRM01DNkhENURXU1o2TS4u).
operator-service-manager Best Practices Onboard Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/best-practices-onboard-deploy.md
+
+ Title: Best practices for Azure Operator Service Manager
+description: Understand Best Practices for Azure Operator Service Manager to onboard and deploy a Network Function (NF).
++ Last updated : 09/11/2023++++
+# Azure Operator Service Manager Best Practices to onboard and deploy a Network Function (NF)
+
+Microsoft has developed many proven practices for managing Network Functions (NFs) using Azure Operator Service Manager (AOSM). This article provides guidelines that NF vendors, telco operators and their System Integrators (SIs) can follow to optimize the design. Keep these practices in mind when onboarding and deploying your NFs.
+
+## Technical overview
+
+- Onboard an MVP first.
+
+- You can add config detail in subsequent versions.
+
+- Structure your artifacts to align with planned use if possible.
+
+- Separate globally defaulted artifacts from those artifacts you want to vary by site.
+
+- Achieve maximum benefit from Azure Operator Service Manager (AOSM) by considering service composition of multiple NFs with a reduced, templated config set that matches the needs of your network vs exposing hundreds of config settings you don't use.
+
+- Think early on about how you want to separate infrastructure (for example, clusters) or artifact stores and access between suppliers, in particular within a single service. Make your set of publisher resources match this model.
+
+- Sites are a logical concept. It's natural that many users equate them to a physical edge site. There are use cases where multiple sites share a physical location (canary vs prod resources).
+
+- Remember that Azure Operator Service Manager (AOSM) provides various APIs making it simple to combine with ADO or other pipeline tools, if desired.
+
+## Publisher recommendations and considerations
+
+- We recommend you create a single publisher per NF supplier.
+
+- Consider relying on the versionState (Active/Preview) of NFDVs and NSDVs to distinguish between those used in production vs the ones used for testing/development purposes. You can query the versionState on the NFDV and NSDV resources to determine which ones are Active and so immutable. For more information, see [Publisher Tenants, subscriptions, regions and preview management](publisher-resource-preview-management.md).
+
+- Consider using an agreed-upon naming convention and governance techniques to help address any remaining gaps.
+
+## Network Function Definition Group and Version considerations
+
+The Network Function Definition Version (NFDV) is the smallest component you're able to reuse independently across multiple services. All components of an NFDV are always deployed together. These components are called networkFunctionApplications.
+
+For Containerized Network Function Definition Versions (CNF NFDVs), the networkFunctionApplications list can only contain helm packages. It's reasonable to include multiple helm packages if they're always deployed and deleted together.
+
+For Virtualized Network Function Definition Versions (VNF NFDVs), the networkFunctionApplications list must contain one VhdImageFile and one ARM template. It's unusual to include more than one VhdImageFile and more than one ARM template. Unless you have a strong reason not to, the ARM template should deploy a single VM. The Service Designer should include numerous copies of the Network Function Definition (NFD) within the Network Service Design (NSD) if you want to deploy multiple VMs. The ARM template (for both AzureCore and Nexus) can only deploy ARM resources from the following Resource Providers:
+
+- Microsoft.Compute
+
+- Microsoft.Network
+
+- Microsoft.NetworkCloud
+
+- Microsoft.Storage
+
+- Microsoft.NetworkFabric
+
+- Microsoft.Authorization
+
+- Microsoft.ManagedIdentity
+
+A single Network Function Definition Group (NFDG) can have multiple NFDVs.
+
+NFDVs should reference fixed images and charts. An update to an image version or chart means an update to the NFDV major or minor version. For a Containerized Network Function (CNF) each helm chart should contain fixed image repositories and tags that aren't customizable by deployParameters.
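+
+For example, a CNF Helm chart can pin its image coordinates in the values file rather than exposing them as deploy parameters. The registry path and tag below are illustrative:
+
+```yaml
+# values.yaml: image repository and tag are fixed, not templated via deployParameters
+image:
+  repository: myacr.azurecr.io/ausf
+  tag: "1.4.2"
+  pullPolicy: IfNotPresent
+```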
+
+### Common use cases that trigger Network Function Design Version (NFDV) minor or major version update
+
+- Updating the CGS/CGV for an existing release, which triggers a change to the deployParametersMappingRuleProfile.
+
+- Updating values that are hard coded in the NFDV.
+
+- Marking components inactive to prevent them from being deployed via `applicationEnablement: 'Disabled'`.
+
+- New NF release (charts, images, etc.)
+
+## Network Service Design Group and Version considerations
+
+An NSD is a composite of one or more NFD and any infrastructure components deployed at the same time. An SNS refers to a single NSD. It's recommended that the NSD includes any infrastructure required (NAKS/AKS clusters, virtual machines, etc.) and then deploys the NFs required on top. Such design guarantees consistent and repeatable deployment of entire site from a single SNS PUT.
+
+An example of an NSD is:
+
+- Authentication Server Function (AUSF) NF
+- Unified Data Management (UDM) NF
+- Admin VM supporting AUSF/UDM
+- Unity Cloud (UC) Session Management Function (SMF) NF
+- Nexus Azure Kubernetes Service (NAKS) cluster which AUSF, UDM, and SMF are deployed to
+
+These five components form a single NSD. A single NSD can have multiple NSDVs. The collection of all NSDVs for a given NSD is known as an NSDG.
+
+### Common use cases that trigger Network Service Design Version (NSDV) minor or major version update
+
+- Create or delete CGS.
+
+- Changes in the NF ARM template associated with one of the NFs being deployed.
+
- Changes in the infrastructure ARM template, for example, AKS/NAKS or VM.
+
+Changes in an NFDV shouldn't trigger an NSDV update. The versions of an NFD should be exposed within the CGS, so operators can control them using CGVs.
+
+## Azure Operator Service Manager (AOSM) CLI extension and Network Service Design considerations
+
+The Azure Operator Service Manager (AOSM) CLI extension assists publishing of NFDs and NSDs. Use this tool as the starting point for creating new NFD and NSD.
+
+Currently NSDs created by the Azure Operator Service Manager (AOSM) CLI extension don't include infrastructure components. Best practice is to use the CLI to create the initial files and then edit them to incorporate infrastructure components before publishing.
+
+## Configuration Group Schema (CGS) considerations
+
+It's recommended to always start with a single CGS for the entire NF. If there are site-specific or instance-specific parameters, it's still recommended to keep them in a single CGS. Splitting into multiple CGSs is recommended when there are multiple components (rarely NFs; more commonly, infrastructure) or configurations that are shared across multiple NFs. The number of CGSs defines the number of CGVs.
+
+### Scenario
+
+- FluentD, Kibana, and Splunk (common third-party components) are always deployed for all NFs within an NSD. We recommend grouping these components into a single NFDG.
+
+- NSD has multiple NFs that all share a few configurations (deployment location, publisher name, and a few chart configurations).
+
+In this scenario, we recommend using a single global CGS to expose the common NFs' and third-party components' configurations. NF-specific CGSs can be defined as needed.
+
+### Choose exposed parameters
+
+General recommendations when it comes to exposing parameters via CGS:
+
+- CGS should only have parameters that are used by NFs (day 0/N configuration) or shared components.
+
+- Parameters that are rarely configured should have default values defined.
+
+- When multiple CGSs are used, we recommend there's little to no overlap between the parameters. If overlap is required, make sure the parameter names are clearly distinguishable between the CGSs.
+
+- Consider exposing via CGS the configuration values that can be defined via API (AKS, Azure Operator Nexus, Azure Operator Service Manager (AOSM)), as opposed to defining those values in CloudInit files.
+
+- A single User Assigned Managed Identity should be used in all the Network Function ARM templates and should be exposed via CGS.
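+
+Following these recommendations, a CGS fragment (property names are hypothetical) might pair a commonly set parameter with a rarely changed one that carries a default:
+
+```json
+{
+  "type": "object",
+  "properties": {
+    "deploymentLocation": {
+      "type": "string",
+      "description": "Hypothetical shared parameter used by all NFs in the NSD."
+    },
+    "logLevel": {
+      "type": "string",
+      "description": "Hypothetical rarely configured parameter; the default applies when it is omitted from the CGV.",
+      "default": "info"
+    }
+  },
+  "required": ["deploymentLocation"]
+}
+```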
+
+## Site Network Service (SNS)
+
+It's recommended to have a single SNS for the entire site, including the infrastructure.
+
+It's recommended that every SNS is deployed with a User Assigned Managed Identity (UAMI) rather than a System Assigned Managed Identity. This UAMI should have permissions to access the NFDV, and needs to have the role of Managed Identity Operator on itself. It's usual for Network Service Designs to also require this UAMI to be provided as a Configuration Group Value, which is ultimately passed through and used to deploy the Network Function. For more information, see [Create and assign a User Assigned Managed Identity](how-to-create-user-assigned-managed-identity.md).
+
+## Azure Operator Service Manager (AOSM) resource mapping per use case
+
+### Scenario - single Network Function (NF)
+
+An NF with one or two application components deployed to a K8s cluster.
+
+Azure Operator Service Manager (AOSM) resources breakdown:
+
+- NFDG: If components can be used independently then two NFDGs, one per component. If components are always deployed together, then a single NFDG.
+
+- NFDV: As needed based on the use cases mentioned in Common use cases that trigger NFDV minor or major version update.
+
+- NSDG: Single; combines the NFs and the K8s cluster definitions.
+
+- NSDV: As needed based on the use cases mentioned in Common use cases that trigger NSDV minor or major version update.
+
+- CGS: Single; we recommend that CGS has subsections for each component and infrastructure being deployed for easier management, and includes the versions for NFDs.
+
+- CGV: Single; based on the number of CGS.
+
+- SNS: Single per NSDV.
+
+### Scenario - multiple Network Functions (NFs)
+
+Multiple NFs with some shared and independent components deployed to a shared K8s cluster.
+
+Azure Operator Service Manager (AOSM) resources breakdown:
+
+- NFDG:
+ - NFDG for all shared components.
+ - NFDG for every independent component and/or NF.
+- NFDV: Multiple per NFDG, as needed based on the use cases mentioned in Common use cases that trigger NFDV minor or major version update.
+- NSDG: Single; combines all NFs, shared and independent components, and infrastructure (K8s cluster and/or any supporting VMs).
+- NSDV: As needed based on use cases mentioned in Common use cases that trigger NSDV minor or major version update.
+- CGS:
+ - Single global for all components that have shared configuration values.
+ - NF CGS per NF including the version of the NFD.
+ - Depending on the total number of parameters, you can consider combining all the CGSs into a single CGS.
+- CGV: Equal to the number of CGSs.
+- SNS: Single per NSDV.
+
+## Next steps
+
+- [Quickstart: Complete the prerequisites to deploy a Containerized Network Function in Azure Operator Service Manager](quickstart-containerized-network-function-prerequisites.md)
+
+- [Quickstart: Complete the prerequisites to deploy a Virtualized Network Function in Azure Operator Service Manager](quickstart-virtualized-network-function-prerequisites.md)
operator-service-manager Designer Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/designer-best-practices.md
+
+ Title: Best practices for Azure Operator Service Manager - Designer
+description: Understand best practices for Azure Operator Service Manager - Designer.
+ Last updated: 09/11/2023
+# Azure Operator Service Manager Best Practices for Designer
+
+Content under development.
operator-service-manager Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/glossary.md
+
+ Title: Glossary for Azure Operator Service Manager
+description: Learn about terminology specific to Azure Operator Service Manager.
+ Last updated: 08/15/2023
+# Glossary: Azure Operator Service Manager
+This article contains terms used throughout the Azure Operator Service Manager documentation.
+
+## A
+
+### Active
+Active refers to a versionState of a Network Function Definition or Network Service Design indicating that it's ready for use. An Active resource is immutable and can be deployed cross-subscription.
+
+### Artifact
+An artifact refers to a deployable component, such as a container image, Helm chart, Virtual Machine image or ARM template that is used in the deployment process. Artifacts are essential building blocks for deploying applications and services, providing the necessary files required for successful deployment.
+
+### Artifact Manifest
+The Artifact Manifest contains a dictionary of artifacts, in a given artifact store, together with type and version. A Publisher only uploads artifacts that are listed in the manifest. The Artifact Manifest also handles credential generation for push access to the Artifact Store. The Artifact Manifest verifies if all the artifacts referenced by NSDVersions or NFDVersions are uploaded.
+
+### Artifact Store
+The Artifact Store serves as the Azure Resource Manager (ARM) resource responsible for housing all the resources needed to create Network Functions and Network Services. The Artifact Store acts as a centralized repository for various components, including:
+- Helm charts.
+- Docker images for Containerized Network Functions (CNF).
+- Virtual Machine (VM) images for Virtualized Network Functions (VNF).
+- Other assets like ARM templates required throughout the Network Function creation process.
+
+It ensures proper storage and prevents accidental mismanagement like deleting images still in use by operators.
+
+Artifact Store comes in two flavors:
+1. Azure Storage Account for storing VM images.
+1. Azure Container Registry for storing all other artifact types.
+
+### Azure CLI
+Azure CLI (Command-Line Interface) is a command-line tool provided by Azure that enables you to manage Azure resources and services. With Azure CLI, you can interact with Azure through commands automating tasks and managing resources efficiently.
+
+### Azure Cloud Shell
+Azure Cloud Shell is an interactive, browser-based shell environment provided by Azure. Azure Cloud Shell allows you to manage Azure resources using the CLI or PowerShell, without the need to install any additional software. Azure Cloud Shell provides a convenient and accessible way to work with Azure resources from anywhere.
+
+### Azure Container Registry (ACR)
+Azure Container Registry (ACR) is a managed, private registry service provided by Azure for storing and managing container images.
+
+### Azure Operator Service Manager (AOSM)
+Azure Operator Service Manager is a service provided by Azure for managing and operating network functions and services. Azure Operator Service Manager (AOSM) provides a centralized platform for operators to deploy, monitor, and manage network functions. Azure Operator Service Manager (AOSM) simplifies the management and operation of complex network infrastructures.
+
+### Azure portal
+Azure portal is a web-based interface provided by Azure for managing and monitoring Azure resources and services. Azure portal provides a unified and intuitive user experience, allowing users to easily navigate and interact with their Azure resources.
+
+## B
+
+### Bicep
+Bicep is a domain-specific language (DSL) provided by Azure for deploying Azure resources using declarative syntax. Bicep templates are easier to write than Azure Resource Manager (ARM) templates. It's possible to convert Bicep templates to Azure Resource Manager (ARM) and vice-versa.
+
+## C
+
+### Configuration Group Schema (CGS)
+Configuration Groups are partitions of the Site Network Service (SNS) configuration defined by each Network Service Design Version (NSDV). The Configuration Group Schema (CGS) is a JSON schema defining the format of these inputs. The Service Designer creates one or more Configuration Group Schemas (CGSs) in the process of creating a Network Service Design Version (NSDV).
+
+### Configuration Group Values (CGV)
+Configuration Group Values (CGV) are JSON blobs that define the input parameters for the Site Network Service (SNS). There are one or more Configuration Group Values (CGVs) associated with each Site Network Service (SNS). The contents of a Configuration Group Value (CGV) must adhere to the Configuration Group Schema (CGS) associated with the Network Service Design Version (NSDV) selected for the Site Network Service (SNS).
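+
+As a sketch (the parameter names are invented for this example), a CGV is a JSON blob whose shape must match the corresponding CGS:
+
+```json
+{
+  "deploymentLocation": "eastus",
+  "nfdVersion": "1.0.0"
+}
+```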
+
+### Containerized Network Function (CNF)
+A Containerized Network Function (CNF) is a network function described by Helm charts and delivered as container images. Azure Operator Service Manager (AOSM) supports CNFs on Arc-enabled Kubernetes clusters and AKS.
+
+### Contributor Role
+Contributor Role is a role in Azure that grants users permissions to manage and make changes to Azure resources within a subscription. Users with the Contributor Role have the ability to create, modify, and delete resources, providing them with the necessary permissions to effectively manage Azure resources.
+
+### Custom Location ID
+Custom Location ID is a unique identifier used to specify a custom location for deploying resources in Azure. Custom Location ID allows users to define and deploy resources in specific locations not predefined by Azure, providing flexibility and customization options.
+
+## D
+
+### Designer
+
+See *Service Designer*.
+
+### Docker
+Docker is an open-source platform that allows you to automate the deployment and management of applications within containers. Containerized Network Function (CNF) images are docker images and Helm charts describe the deployment of these images.
+
+## H
+
+### Helm
+Helm is a package manager for Kubernetes that uses preconfigured packages called charts.
+
+### Helm chart
+A Helm chart is a collection of files that describe a set of Kubernetes resources and their configurations, allowing for easy application deployment and management. Helm charts provide a templated approach to defining and deploying applications in Kubernetes.
+
+### Helm package
+A Helm package is a compressed archive file that contains all the files and metadata required to deploy an application using Helm.
+
+## I
+
+### Immutable
+Immutable refers to a state or condition that can't be changed or modified. In the context of Azure resources, immutability ensures that the resource's configuration and state remain unchanged, providing stability and consistency in the deployment and management of resources.
+
+## J
+
+### JSON
+JSON (JavaScript Object Notation) is a lightweight data interchange format that is easy for humans to read and write and easy for machines to parse and generate. JSON is commonly used for representing structured data, making it a popular choice for configuration files and data exchange between systems.
+
+## L
+
+### Linux
+Linux is an open-source operating system that is widely used in server environments and supports various software applications. Linux provides a stable and secure platform for running applications, making it a popular choice for cloud-based deployments.
+
+## M
+
+### Managed Identity
+Managed Identity is a feature in Azure that provides an identity for a resource to authenticate and access other Azure resources securely. Managed Identity eliminates the need for managing credentials and simplifies the authentication process, ensuring secure and seamless access to Azure resources.
+
+### Managed Identity Operator Role
+Managed Identity Operator Role is a role in Azure that grants permissions to manage and operate resources using a managed identity. Users with the Managed Identity Operator Role have the ability to manage and operate resources using the associated managed identity, ensuring secure and controlled access to resources.
+
+## N
+
+### Network Function (NF)
+Network Functions (NFs) come in two flavors: Containerized Network Function (CNFs) and Virtualized Network Functions (VNFs). Network Functions are units of function that can be combined together into a service.
+
+### Network Function Definition (NFD) / Network Function Definition Version (NFDV)
+Network Function Definitions (NFDs) have multiple versions known as Network Function Definition Versions (NFDVs). A NetworkFunctionDefinitionVersion is a template for deploying a particular version of a network function. Network Function Definitions and Network Function Definition Versions become immutable once set to 'Active'. The Network Function (NF) Publisher onboards a NetworkFunctionDefinitionVersion resource by providing binaries, configuration, and mapping rules.
+
+The collection of all Network Function Definition Version (NFDVs) for a given Network Function (NF) is known as a Network Function Definition Group (NFDG).
+
+### Network Function Virtualization Infrastructure (NFVI)
+A Network Function Virtualization Infrastructure (NFVI) represents a location where a Network Function (NF) can be instantiated, such as a Custom location of an Arc-enabled Kubernetes cluster or an Azure region.
+
+The name of the Network Function Virtualization Infrastructure (NFVI) defined in a Network Service Design (NSD) must match that of the Site used when deploying a Site Network Service (SNS).
+
+### Network Function Manager (NFM)
+Network Function Manager (NFM) is an Azure service responsible for managing and operating network functions in Azure. Azure Operator Service Manager uses Network Function Manager (NFM); NFM is opaque to the Publisher, Designer and Operator.
+
+### Network Service Design (NSD) / Network Service Design Group (NSDG) / Network Service Design Version (NSDV)
+A Network Service Design (NSD) describes a network service of a specific type, created and uploaded by the Designer. A Network Service Design (NSD) is a composite of one or more Network Function Definitions (NFD) and any infrastructure components deployed at the same time. Network Service Designs (NSDs) have multiple versions (NSDVs). The Network Service Design Versions (NSDVs) include mapping rules, references to Config Group Schemas (CGS), resource element templates and Site information.
+
+The collection of all Network Service Design Versions (NSDVs) for a given Network Service Design (NSD) is known as a Network Service Design Group (NSDG).
+
+### Nginx Container (NC)
+Nginx Container (NC) refers to a container that runs the Nginx web server, which is commonly used for serving web content. In the Azure Operator Service Manager (AOSM) Quickstart guides, Nginx is used as an example of a Containerized Network Function (CNF).
+
+## O
+
+### Operator
+
+See *Service Operator*.
+
+## P
+
+### Publisher
+The Network Function (NF) Publisher is a person or organization that creates and publishes Network Functions (NFs) to Azure Operator Service Manager (AOSM).
+
+The Publisher *resource* enables the onboarding of Network Functions (NFs) to Azure Operator Service Manager (AOSM) and the definition of Network Services composed from those Network Functions (NFs). The Publisher includes child resources:
+- NSDVersions
+- NFDVersions
+- Config Group Schemas
+- Artifact Store
+
+You can upload container images and VHDs to the Artifact Store through the Publisher.
+
+### Publisher Offering Location
+Publisher Offering Location refers to the specific location or region where the publisher resource is deployed.
+
+## R
+
+### RBAC
+RBAC (Role-Based Access Control) is a security model in Azure that defines and manages access to resources based on assigned roles. RBAC allows administrators to grant specific permissions to users or groups, ensuring secure and controlled access to Azure resources.
+
+### Resource Group
+A Resource Group is a logical container in Azure that holds related resources for easier management, security, and billing. Resource Group provides a way to organize and manage resources, allowing for efficient management and control of Azure resources.
+
+### Resource ID
+Resource ID is a unique identifier assigned to each resource in Azure, used to reference and access the resource. Resource IDs provide a way to uniquely identify and locate resources within Azure, ensuring accurate and reliable resource management.
+
+### Resources
+Resources refer to the various components, services, or entities that are provisioned and managed within Azure. Resources can include virtual machines, storage accounts, databases, and other services that are used to build and operate applications and infrastructure in Azure.
+
+## S
+
+### SAS URL
+SAS URL (Shared Access Signature URL) is a URL that provides temporary access to a specific Azure resource or storage container. SAS URLs allow users to grant time-limited access to resources, ensuring secure and controlled access to Azure resources.
+
+### Service Account
+A Service Account is an account or identity used by an application or service to authenticate and access resources in Azure. Service accounts provide a way to securely manage and control access to resources, ensuring that only authorized applications or services can access sensitive data or perform specific actions.
+
+### Service Designer
+Service Designer is a person or organization who creates a Network Service Design.
+
+### Service Operator
+Network Service Operator is a person or organization responsible for operating and managing network services in Azure. They create Configuration Group Values (CGV), Sites and Site Network Services (SNS).
+
+### Service Port Configuration
+Service Port Configuration refers to the configuration settings for the ports used by a network service. Service Port Configuration includes details such as the port numbers, protocols, and other settings required for the proper operation and communication of the network service.
+
+### Site
+A *Site* refers to a logical location for the instantiation and management of network services. A Site can represent either a single Azure region (a data center location within the Azure cloud) or an on-premises facility. A Site serves as the fundamental unit for making updates, where all changes are independently applied to individual sites.
+
+### Site Network Service (SNS)
+A Site Network Service (SNS) consists of a collection of Network Functions (NFs) along with Azure infrastructure all working together to deliver a cohesive unit of service. A Site Network Service (SNS) is instantiated by selecting a Network Service Design Version (NSDV) and supplying parameters in the form of Configuration Group Values (CGVs) and a Site.
+
+### SSH
+SSH (Secure Shell) is a cryptographic network protocol used for secure remote access to systems and secure file transfers. SSH provides a secure and encrypted connection between a client and a server, ensuring the confidentiality and integrity of data transmitted over the network.
+
+### Subscription
+A Subscription is a billing and management container in Azure that holds resources and services used by an organization. Subscriptions provide a way to organize and manage resources, allowing for efficient billing, access control, and management of Azure resources.
+
+## T
+
+### Tenant
+A Tenant refers to an organization or entity that owns and manages a Microsoft Entra ID instance. Tenants provide a way to manage and control access to Azure resources, ensuring secure and controlled access for users and applications.
+
+## U
+
+### User Assigned Identity
+User Assigned Identity is an Azure feature that allows you to assign an identity to a specific user or application for authentication and access control. User Assigned Identities provide a way to manage and control access to resources, ensuring secure and controlled access for users and applications.
+
+## V
+
+### Virtualized Network Function (VNF)
+A Virtualized Network Function (VNF) is a Network Function (NF) described by an Azure Resource Manager (ARM) template and delivered as a VHD. Azure Operator Service Manager (AOSM) supports Virtualized Network Functions (VNFs) deployed on Azure Core and Operator Nexus.
operator-service-manager Helm Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/helm-requirements.md
+
+ Title: About Helm package requirements for Azure Operator Service Manager
+description: Learn about the Helm package requirements for Azure Operator Service Manager.
+ Last updated: 09/07/2023
+# Helm package requirements
+Helm is a package manager for Kubernetes that helps you manage Kubernetes applications. Helm packages are called charts, and they consist of a few YAML configuration files and some templates that are rendered into Kubernetes manifest files. Charts are reusable by anyone for any environment, which reduces complexity and duplication.
+
+## Registry URL path and imagepullsecrets requirements
+When developing a Helm package, it's common to keep the container registry server URL in the values file, which is useful for moving artifacts between the container registries of different environments. Azure Operator Service Manager (AOSM) uses the Network Function Manager (NFM) service to deploy Containerized Network Functions (CNFs). The Network Function Manager (NFM) injects the container registry server location and imagePullSecrets into the Helm values during Network Function (NF) deployment. An imagePullSecret is an authorization token, also known as a secret, that stores Docker credentials used for accessing a registry. For example, if you need to deploy an application via a Kubernetes Deployment, you can define it like the following example:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx-deployment
+ labels:
+ app: nginx
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ {{- if .Values.global.imagePullSecrets }}
+ imagePullSecrets: {{ toYaml .Values.global.imagePullSecrets | nindent 8 }}
+ {{- end }}
+ containers:
+ - name: contosoapp
+        image: "{{ .Values.global.registryPath }}/contosoapp:1.14.2"
+        ports:
+        - containerPort: 80
+```
+
+`values.schema.json` is a file that allows you to easily set value requirements and constraints in a single location for Helm charts. In this file, define registryPath and imagePullSecrets as required properties.
+
+```json
+{
+  "$schema": "http://json-schema.org/draft-07/schema#",
+  "title": "StarterSchema",
+  "type": "object",
+  "required": ["global"],
+  "properties": {
+    "global": {
+      "type": "object",
+      "properties": {
+        "registryPath": { "type": "string" },
+        "imagePullSecrets": { "type": "string" }
+      },
+      "required": ["registryPath", "imagePullSecrets"]
+    }
+  }
+}
+```
+
+The NFDVersion request payload provides the following values in registryValuesPaths and imagePullSecretsValuesPaths:
+
+```json
+"registryValuesPaths": [ "global.registryPath" ],
+"imagePullSecretsValuesPaths": [ "global.imagePullSecrets" ],
+```
+
+During an NF deployment, the Network Function Operator (NFO) sets the registryPath to the correct Azure Container Registry (ACR) server location. For example, the NFO runs the following equivalent command:
+
+```shell
+$ helm install --set "global.registryPath=<registryURL>" --set "global.imagePullSecrets[0].name=<secretName>" releasename ./releasepackage
+```
+
+> [!NOTE]
+> The registryPath is set without any prefix such as https:// or oci://. If a prefix is required in the helm package, publishers need to define this in the package.
+
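+
+For illustration, after the NFO supplies those values, the rendered Deployment spec would contain something like the following (placeholders kept from the earlier examples):
+
+```yaml
+    spec:
+      imagePullSecrets:
+      - name: <secretName>
+      containers:
+      - name: contosoapp
+        image: <registryURL>/contosoapp:1.14.2
+```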
+`values.yaml` is a YAML file that contains the default values for a Helm chart. In the values.yaml file, two variables must be present: imagePullSecrets and registryPath. Each is described in the following table.
+
+```yaml
+global:
+  imagePullSecrets: []
+  registryPath: ""
+```
+
+| Name | Type | Description |
+| :--- | :--- | :--- |
+| imagePullSecrets | Array | imagePullSecrets are an array of secret names used to pull container images |
+| registryPath | String | registryPath is the `AzureContainerRegistry` server location |
+
+imagePullSecrets and registryPath must be provided in the create NFDVersion onboarding step.
+
+An NFO running in the cluster populates these two variables (imagePullSecrets and registryPath) during a Helm release using the `helm install --set` command.
+
+For more information, see: [pull-image-private-registry](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry)
+
+## Immutability restrictions
+Immutability restrictions prevent changes to a file or directory. For example, an immutable file can't be changed or renamed, and a file that allows append operations can't be deleted, modified, or renamed.
+
+### Avoid use of mutable tags
+Users should avoid using mutable tags such as `latest`, `dev`, or `stable`. For example, if deployment.yaml used `latest` for `.Values.image.tag`, the deployment would fail.
+
+```yaml
+  image: "{{ .Values.global.registryPath }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+```
+
+### Avoid references to external registry
+Users should avoid references to an external registry. For example, if deployment.yaml uses a hardcoded registry path or external registry references, it fails validation.
+
+```yaml
+ image: http://myURL/{{ .Values.image.repository }}:{{ .Values.image.tag}}
+```
+
+## Recommendations
+Splitting the declaration and usage of Custom Resource Definitions (CRDs), and performing manual validations, are recommended practices. Each is described in the following sections.
+
+### Split CRD declaration and usage
+We recommend splitting the declaration and usage of CRDs into separate Helm charts to support updates. For detailed information, see [method-2-separate-charts](https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#method-2-separate-charts).
+
+### Manual validations
+Review the images and container specs created to ensure the images have the registryURL prefix and the imagePullSecrets are populated with the secretName.
+
+```shell
+ helm template --set "global.imagePullSecrets[0].name=<secretName>" --set "global.registry.url=<registryURL>" <release-name> <chart-name> --dry-run
+```
+
+OR
+
+```shell
+ helm install --set "global.imagePullSecrets[0].name=<secretName>" --set "global.registry.url=<registryURL>" <release-name> <chart-name> --dry-run
+ kubectl create secret docker-registry <secretName> --docker-server=<registryURL> --docker-username=<regusername> --docker-password=<regpassword>
+```
+### Static image repository and tags
+Each Helm chart should contain a static image repository and tag. Users should set the image repository and tag to static values, either by:
+- Hard-coding them in the image line, or
+- Setting the values in values.yaml and not exposing them in the Network Function Definition Version (NFDV).
+
+A Network Function Definition Version (NFDV) should map to a static set of Helm charts and images. The charts and images are only updated by publishing a new Network Function Definition Version (NFDV).
+
+```yaml
+  image: "{{ .Values.global.registryPath }}/contosoapp:1.14.2"
+```
+or
+
+```yaml
+  image: "{{ .Values.global.registryPath }}/{{ .Values.image.repository }}:{{ .Values.image.tag }}"
+```
+
+with values.yaml containing:
+
+```yaml
+image:
+  repository: contosoapp
+  tag: 1.14.2
+```
operator-service-manager How To Assign Custom Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/how-to-assign-custom-role.md
+
+ Title: How to assign custom role in Azure Operator Service Manager
+description: Learn how to assign a custom role in Azure Operator Service Manager.
+ Last updated: 10/19/2023
+# Assign a custom role
+
+In this how-to guide, you learn how to assign a custom role for Service Operators to Azure Operator Service Manager Publisher resources. The permissions in this role are required for deploying a Site Network Service.
+
+## Prerequisites
+
+- You must have created a custom role via [Create a custom role](how-to-create-custom-role.md). This article assumes that you named the custom role 'Custom Role - AOSM Service Operator access to Publisher'.
+
+- To perform the tasks in this article, you need either the 'Owner' or 'User Access Administrator' role in your chosen scope.
+
+- You must have identified the users who you want to perform the Service Operator role and deploy Site Network Services.
+
+## Choose scope(s) for assigning custom role
+
+The publisher resources that you need to assign the custom role to are:
+
+- The Network Function Definition Versions (NFDVs).
+
+- The Network Service Design Versions (NSDVs).
+
+- The Configuration Group Schemas (CGSs) for the Network Service Design (NSD).
+
+You must decide if you want to assign the custom role individually to each resource, or to a parent resource such as the publisher resource group.
+
+Applying to a parent resource grants access over all child resources. For example, applying to the whole publisher resource group gives the operator access to:
+
+- All the Network Function Definition Groups and Versions.
+
+- All the Network Service Design Groups and Versions.
+
+- All the Configuration Group Schemas.
+
+The custom role limits access to the following permissions:
+
+- Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/**use**/**action**
+
+- Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/**read**
+
+- Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/**use**/**action**
+
+- Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/**read**
+
+- Microsoft.HybridNetwork/Publishers/ConfigurationGroupSchemas/**read**
+
+> [!NOTE]
+> Do not provide write or delete access to any of these publisher resources.
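+
+For reference, a role definition carrying exactly these permissions could look like the following sketch (the role name matches this article's assumed name, and the assignable scope is a placeholder you replace with your publisher resource group):
+
+```json
+{
+  "Name": "Custom Role - AOSM Service Operator access to Publisher",
+  "IsCustom": true,
+  "Description": "Read and use access to AOSM publisher resources.",
+  "Actions": [
+    "Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/use/action",
+    "Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/read",
+    "Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/use/action",
+    "Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/read",
+    "Microsoft.HybridNetwork/Publishers/ConfigurationGroupSchemas/read"
+  ],
+  "AssignableScopes": [
+    "/subscriptions/<subscription-id>/resourceGroups/<publisher-resource-group>"
+  ]
+}
+```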
+
+## Assign the custom role
+
+1. Access the Azure portal and open your chosen scope (Publisher Resource Group or individual resources).
+
+2. In the side menu of this item, select **Access Control (IAM)**.
+
+3. Choose **Add Role Assignment**.
+
+ :::image type="content" source="media/how-to-assign-custom-role-resource-group.png" alt-text="Screenshot showing the publisher resource group access control page.":::
+4. Under **Job function roles**, find your custom role in the list, then select **Next**.
+
+ :::image type="content" source="media/how-to-assign-custom-role-add-assignment.png" alt-text="Screenshot showing the add role assignment screen.":::
+5. Select **User, group, or service principal**, select **+ Select members**, then find and select the users you want to have access. Choose **Select**.
+
+ :::image type="content" source="media/how-to-assign-custom-role-add-members.png" alt-text="Screenshot showing the select members screen.":::
+
+6. Select **Review + assign**.
+
+## Repeat the role assignment
+
+Repeat the tasks in this article for all your chosen scopes.
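If you prefer scripting over the portal, the same assignment can be made with the standard `az role assignment create` command. The hypothetical helper below only *prints* the command for each chosen scope so you can review it before running it yourself; the assignee and scope values are placeholders:

```azurecli
ROLE="Custom Role - AOSM Service Operator access to Publisher"
ASSIGNEE="operator@example.com"   # placeholder: user principal name or object ID

# Print (rather than run) the assignment command for one scope
print_assignment() {
  echo "az role assignment create --assignee \"${ASSIGNEE}\" --role \"${ROLE}\" --scope \"$1\""
}

# Repeat for every chosen scope (placeholder scope shown)
for SCOPE in \
  "/subscriptions/<subscription-id>/resourceGroups/my-publisher-rg"; do
  print_assignment "${SCOPE}"
done
```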
operator-service-manager How To Create Custom Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/how-to-create-custom-role.md
+
+ Title: How to create custom role in Azure Operator Service Manager
+description: Learn how to create a custom role in Azure Operator Service Manager.
+ Last updated : 10/19/2023
+# Create a custom role
+
+In this how-to guide, you learn how to create a custom role for Service Operators. A custom role provides the necessary permissions to access Azure Operator Service Manager (AOSM) Publisher resources when deploying a Site Network Service (SNS).
+
+## Prerequisites
+
+Contact your Microsoft account team to register your Azure subscription for access to Azure Operator Service Manager (AOSM) or express your interest through the [partner registration form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR7lMzG3q6a5Hta4AIflS-llUMlNRVVZFS00xOUNRM01DNkhENURXU1o2TS4u).
+
+## Permissions/Actions required by the custom role
+
+- Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/**use**/**action**
+
+- Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/**read**
+
+- Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/**use**/**action**
+
+- Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/**read**
+
+- Microsoft.HybridNetwork/Publishers/ConfigurationGroupSchemas/**read**
+
+## Decide the scope
+
+Decide the scope that you want the role to be assignable to:
+
+- If the publisher resources are in a single resource group, you can use the assignable scope of that resource group.
+
+- If the publisher resources are spread across multiple resource groups within a single subscription, you must use the assignable scope of that subscription.
+
+- If the publisher resources are spread across multiple subscriptions, you must create a custom role assignable to each of these subscriptions.
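For the multiple-subscription case, the role definition's `assignableScopes` array can list each subscription. An illustrative fragment (subscription IDs are placeholders):

```json
"assignableScopes": [
    "/subscriptions/<subscription-id-1>",
    "/subscriptions/<subscription-id-2>"
]
```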
+
+## Create custom role using Bicep
+
+Create a custom role using Bicep. For more information, see [Create or update Azure custom roles using Bicep](/azure/role-based-access-control/custom-roles-bicep?tabs=CLI).
+
+As an example, you can use the following sample as the main.bicep template. This sample creates the role with subscription-wide assignable scope.
+
+```bicep
+targetScope = 'subscription'
+
+@description('Array of actions for the roleDefinition')
+param actions array = [
+ 'Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/use/action'
+ 'Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/read'
+ 'Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/use/action'
+ 'Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/read'
+ 'Microsoft.HybridNetwork/Publishers/ConfigurationGroupSchemas/read'
+]
+
+@description('Array of notActions for the roleDefinition')
+param notActions array = []
+
+@description('Friendly name of the role definition')
+param roleName string = 'Custom Role - AOSM Service Operator access to Publisher'
+
+@description('Detailed description of the role definition')
+param roleDescription string = 'Provides read and use access to AOSM Publisher resources'
+
+var roleDefName = guid(subscription().id, string(actions), string(notActions))
+
+resource roleDef 'Microsoft.Authorization/roleDefinitions@2022-04-01' = {
+ name: roleDefName
+ properties: {
+ roleName: roleName
+ description: roleDescription
+ type: 'customRole'
+ permissions: [
+ {
+ actions: actions
+ notActions: notActions
+ }
+ ]
+ assignableScopes: [
+ subscription().id
+ ]
+ }
+}
+```
+Deploy the template in the same subscription as the Publisher resources.
+
+```azurecli
+az login
+
+az account set --subscription <publisher subscription>
+
+az deployment sub create --location <location> --name customRole --template-file main.bicep
+```
+
+## Create a custom role using the Azure portal
+
+Create a custom role using the Azure portal. For more information, see [Create or update Azure custom roles using the Azure portal](/azure/role-based-access-control/custom-roles-portal).
+
+If you prefer, you can specify most of your custom role values in a JSON file.
+
+Sample JSON:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2018-05-01/subscriptionDeploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "metadata": {
+ "_generator": {
+ "name": "bicep",
+ "version": "0.22.6.54827",
+ "templateHash": "14238097231376848271"
+ }
+ },
+ "parameters": {
+ "actions": {
+ "type": "array",
+ "defaultValue": [
+ "Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/use/action",
+ "Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/read",
+ "Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/use/action",
+ "Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/read",
+ "Microsoft.HybridNetwork/Publishers/ConfigurationGroupSchemas/read"
+ ],
+ "metadata": {
+ "description": "Array of actions for the roleDefinition"
+ }
+ },
+ "notActions": {
+ "type": "array",
+ "defaultValue": [],
+ "metadata": {
+ "description": "Array of notActions for the roleDefinition"
+ }
+ },
+ "roleName": {
+ "type": "string",
+ "defaultValue": "Custom Role - AOSM Service Operator Role",
+ "metadata": {
+ "description": "Friendly name of the role definition"
+ }
+ },
+ "roleDescription": {
+ "type": "string",
+ "defaultValue": "Role Definition for AOSM Service Operator Role",
+ "metadata": {
+ "description": "Detailed description of the role definition"
+ }
+ }
+ },
+ "variables": {
+ "roleDefName": "[guid(subscription().id, string(parameters('actions')), string(parameters('notActions')))]"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Authorization/roleDefinitions",
+ "apiVersion": "2022-04-01",
+ "name": "[variables('roleDefName')]",
+ "properties": {
+ "roleName": "[parameters('roleName')]",
+ "description": "[parameters('roleDescription')]",
+ "type": "customRole",
+ "permissions": [
+ {
+ "actions": "[parameters('actions')]",
+ "notActions": "[parameters('notActions')]"
+ }
+ ],
+ "assignableScopes": [
+ "[subscription().id]"
+ ]
+ }
+ }
+ ]
+}
+```
+
+## Next steps
+
+- [Assign a custom role](how-to-assign-custom-role.md)
operator-service-manager How To Create Site Network Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/how-to-create-site-network-service.md
+
+ Title: How to create site network service for Azure Operator Service Manager
+description: Learn how to create site network service in Azure Operator Service Manager.
+ Last updated : 09/11/2023
+# Create site network service in Azure Operator Service Manager
+
+In this how-to guide, you learn how to create a Site Network Service (SNS) using the Azure portal. A Site Network Service (SNS) is a collection of network functions, along with Azure infrastructure, that come together to offer a service. The set of Network Functions (NFs) and infrastructure that make up that service is defined by the Network Service Design Version (NSDV).
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription.
+- A Resource Group over which you have the Contributor role.
+- You have completed [Create a site in Azure Operator Service Manager](how-to-create-site.md).
+- The Network Service Design (NSD) you plan to use is published within the same tenant where you intend to deploy the Site Network Service (SNS).
+- You have collaborated with the Network Service Design Version (NSDV) designer to identify the required details that must be included in the Configuration Group Values (CGVs) for this specific Site Network Service (SNS).
+- Verify that any prerequisites specific to the Network Service Design (NSD) are correctly deployed. The documentation from the Network Service Design (NSD) designer contains the essential details of these prerequisites.
+
+## Create the Site Network Service
+
+1. In the Azure portal, select **Create resource**.
+1. In the search bar, search for *Site Network Service* and then select **Create**.
+
+ :::image type="content" source="media/how-to-create-site-network-service-search-site-network-services.png" alt-text="Diagram showing the Azure portal Create resource page and search for Site Network Service." lightbox="media/how-to-create-site-network-service-search-site-network-services.png":::
+
+1. On the **Basics** tab, enter or select the information shown in the table. Accept the default values for the remaining settings.
+
+ |Setting|Value|
+ |||
+ |Subscription| Select your subscription.|
+ |Resource group| Select your resource group.|
+ |Name| Enter the name for Site Network Service.|
+ |Region| Select the location.|
+ |Site| Select the name of the Site.|
+ |Managed Identity Type | This setting relies on the Network Service Design Version (NSDV). Consult your Network Service Design (NSD) designer for guidance. |
+
+ :::image type="content" source="media/how-to-create-site-network-service-basics-tab.png" alt-text="Screenshot showing the Basics tab with the mandatory fields." lightbox="media/how-to-create-site-network-service-basics-tab.png":::
+
+## Choose a Network Service Design
+
+1. On the *Choose a Network Service Design* tab, select the **Publisher**, **Network Service Design Resource**, and **Network Service Design Version** that you published earlier.
+
+ > [!NOTE]
+ > Consult the documentation from your Network Service Design (NSD) Publisher or directly contact them to obtain the Publisher Offering Location, Publisher, Network Service Design Resource and Network Service Design version.
+
+ :::image type="content" source="media/how-to-create-site-network-service-choose-design-tab.png" alt-text="Screenshot showing the Choose a Network Service Design tab with the mandatory fields." lightbox="media/how-to-create-site-network-service-choose-design-tab.png":::
+
+1. On the *Set initial configuration* tab, select a Configuration Group Value resource for each schema listed in the selected Network Service Design.
+
+ :::image type="content" source="media/how-to-create-site-network-service-set-initial-configuration-tab.png" alt-text="Screenshot showing the Set initial configuration tab and Create New tab." lightbox="media/how-to-create-site-network-service-set-initial-configuration-tab.png":::
+
+1. Select **Create New** on the *Set initial configuration* page.
+1. Enter the name for the Configuration Group into the **Configuration Group name** field.
+1. Enter the configuration into the *Editor* panel.
+
+### Editor panel
+
+To configure settings in the *editor* panel, your input must be in JSON format:
+
+- Begin by entering a pair of curly brackets '{}'.
+- A red squiggle appears underneath them.
+- Hover your mouse cursor over the red squiggle to reveal the fields that require input.
+- More red squiggles might appear for any remaining errors. Follow the same process to address these issues.
+- Once all errors have been resolved, select **Create Configuration**.
+
+ :::image type="content" source="media/how-to-create-site-network-service-editor-panel-set-config.png" alt-text="Screenshot showing the editor panel with a sample error to correct." lightbox="media/how-to-create-site-network-service-editor-panel-set-config.png":::
+
+> [!NOTE]
+> Consult the documentation from your Network Service Design (NSD) Publisher or directly contact them to obtain the Configuration Group Value.
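As a purely illustrative example: if your NSD's Configuration Group Schema required only a single `location` field, the finished configuration could be as simple as the following. The actual field names and structure depend entirely on the schema your designer published.

```json
{
    "location": "eastus"
}
```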
+
+## Review and create
+
+1. Select **Review + create** and then **Create**.
+1. Select the link under *Current State -> Resources*. The link takes you to the *Managed Resource Group* created by the Azure Operator Service Manager (AOSM).
operator-service-manager How To Create Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/how-to-create-site.md
+
+ Title: How to create a site in Azure Operator Service Manager
+description: Learn how to create a site in Azure Operator Service Manager.
+ Last updated : 09/11/2023
+# Create a site in Azure Operator Service Manager
+
+In this how-to guide, you learn how to create a site. A *site* refers to a specific location, which can be either a single Azure region (a data center location within the Azure cloud) or an on-premises facility, associated with the instantiation and management of network services.
+
+## Prerequisites
+
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription.
+- It's important to receive guidance from the designer regarding the specific NFVIs (Network Function Virtualization Infrastructure) to incorporate into your site.
+
+## Create site
+
+1. Sign into the Azure portal.
+1. Search for **Sites** then select **Create**.
+1. On the **Basics** tab, enter the *Subscription*, *Resource group*, *Name* and *Region*. You can accept the default values for the remaining settings.
+
+ :::image type="content" source="media/how-to-create-site-basics-tab.png" alt-text="Screenshot of the Basics tab showing mandatory fields Subscription, Resource group, Name and Region." lightbox="media/how-to-create-site-basics-tab.png":::
+
+> [!NOTE]
+> The site must be in the same region as the prerequisite resources.
+
+## Add Network Function Virtualization Infrastructure (NFVI)
+
+Use the information in the table to add each Network Function Virtualization Infrastructure (NFVI).
+
+|Setting|Value|
+|||
+| NFVI Name| Enter the name specified by the designer in the NSDV.|
+| NFVI Type| *Azure Core*, *Azure Operator Distributed Services* or *Unknown*. This NFVI type value must match the NFVI type specified by the designer in the NSDV.|
+| NFVI Location | The Azure region for the site.|
+1. On the **Add the NFVIs** tab, enter the information from the table, then select *Add NFVI* to add each Network Function Virtualization Infrastructure (NFVI) you want to deploy your network service on.
+
+ :::image type="content" source="media/how-to-create-site-add-network-function-virtual-infrastructure.png" alt-text="Screenshot showing Add the NFVIs tab and fields NFVI name, NFVI type and NFVI location." lightbox="media/how-to-create-site-add-network-function-virtual-infrastructure.png":::
+
+ > [!NOTE]
+ > Consult the documentation from your NSD Designer or directly contact them to obtain the list of NFVIs.
+
+1. Select **Review + create** and then **Create**.
+
+## Next steps
+
+- [Create site network service in Azure Operator Service Manager](how-to-create-site-network-service.md)
operator-service-manager How To Create User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/how-to-create-user-assigned-managed-identity.md
+
+ Title: How to create and assign User Assigned Managed Identity in Azure Operator Service Manager
+description: Learn how to create and assign a User Assigned Managed Identity in Azure Operator Service Manager.
+ Last updated : 10/19/2023
+# Create and assign a User Assigned Managed Identity
+
+In this how-to guide, you learn how to:
+- Create a User Assigned Managed Identity (UAMI) for your Site Network Service (SNS).
+- Assign that User Assigned Managed Identity permissions.
+
+The requirement for a User Assigned Managed Identity and the required permissions depend on the Network Service Design (NSD) and must have been communicated to you by the Network Service Designer.
+
+## Prerequisites
+
+- You must have created a custom role via [Create a custom role](how-to-create-custom-role.md). This article assumes that you named the custom role 'Custom Role - AOSM Service Operator access to Publisher.'
+
+- Your Network Service Designer must have told you which other permissions your Managed Identity requires and which Network Function Definition Version (NFDV) your SNS uses.
+
+- To perform this task, you need either the 'Owner' or 'User Access Administrator' role over the Network Function Definition Version resource from your chosen Publisher. You also need a Resource Group over which you have the 'Owner' or 'User Access Administrator' role, in order to create the Managed Identity and assign it permissions.
+
+## Create a User Assigned Managed Identity
+
+Create a User Assigned Managed Identity. For details, refer to [Create a User Assigned Managed Identity for your SNS](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities?pivots=identity-mi-methods-azp).
+
+## Assign custom role
+
+Assign a custom role to your User Assigned Managed Identity.
+
+### Choose scope for assigning custom role
+
+The publisher resources that you need to assign the custom role to are:
+
+- The Network Function Definition Version(s)
+
+You must decide if you want to assign the custom role individually to this NFDV, or to a parent resource such as the publisher resource group or Network Function Definition Group.
+
+Applying to a parent resource grants access over all child resources. For example, applying to the whole publisher resource group gives the managed identity access to:
+- All the Network Function Definition Groups and Versions.
+
+- All the Network Service Design Groups and Versions.
+
+- All the Configuration Group Schemas.
+
+The custom role grants only the permissions shown here:
+
+- Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/**use**/**action**
+
+- Microsoft.HybridNetwork/Publishers/NetworkFunctionDefinitionGroups/NetworkFunctionDefinitionVersions/**read**
+
+- Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/**use**/**action**
+
+- Microsoft.HybridNetwork/Publishers/NetworkServiceDesignGroups/NetworkServiceDesignVersions/**read**
+
+- Microsoft.HybridNetwork/Publishers/ConfigurationGroupSchemas/**read**
+
+> [!NOTE]
+> Do not provide write or delete access to any of these publisher resources.
+
+### Assign custom role
+
+1. Access the Azure portal and open your chosen scope (Publisher Resource Group or Network Function Definition Version).
+
+2. In the side menu of this item, select **Access Control (IAM)**.
+
+3. Choose **Add Role Assignment**.
+
+ :::image type="content" source="media/how-to-assign-custom-role-resource-group.png" alt-text="Screenshot showing the publisher resource group access control page.":::
+
+4. Under **Job function roles**, find your custom role in the list, then select **Next**.
+
+ :::image type="content" source="media/how-to-assign-custom-role-add-assignment.png" alt-text="Screenshot showing the add role assignment screen.":::
+
+5. Select **Managed identity**, select **+ Select members**, then find and choose your new managed identity. Choose **Select**.
+
+ :::image type="content" source="media/how-to-custom-assign-user-access-managed-identity.png" alt-text="Screenshot showing the add role assignment and select managed identities.":::
+6. Select **Review + assign**.
+
+### Repeat the role assignment
+
+Repeat the role assignment tasks for all of your chosen scopes.
+
+## Assign Managed Identity Operator role to the Managed Identity itself
+
+1. Go to the Azure portal and search for **Managed Identities**.
+1. Select your identity (for example, *identity-for-nginx-sns*) from the list of **Managed Identities**.
+1. On the side menu, select **Access Control (IAM)**.
+1. Choose **Add Role Assignment**.
+
+1. Select the **Managed Identity Operator** role.
+
+ :::image type="content" source="media/managed-identity-operator-role-virtual-network-function.png" alt-text="Screenshot showing the Managed Identity Operator role.":::
+
+1. Select **Managed identity**.
+1. Select **+ Select members** and navigate to the user-assigned managed identity and proceed with the assignment.
+
+ :::image type="content" source="media/managed-identity-user-assigned-ubuntu.png" alt-text="Screenshot showing the Add role assignment screen with Managed identity selected.":::
+
+Completion of all the tasks outlined in this article ensures that the Site Network Service (SNS) has the necessary permissions to function effectively within the specified Azure environment.
+
+## Assign other required permissions to the Managed Identity
+
+Repeat this process to assign any other permissions to the Managed Identity that your Network Service Designer identified.
operator-service-manager How To Delete Operator Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/how-to-delete-operator-resources.md
+
+ Title: How to delete operator resources in Azure Operator Service Manager
+description: Learn how to delete operator services
+ Last updated : 09/11/2023
+# Delete operator resources in Azure Operator Service Manager
+
+In this how-to guide, you learn how to delete operator resources, which include Site Network Services (SNSs), Configuration Group Values, and Sites. The order in which operator resources are deleted is critical. Start by deleting the Site Network Service (SNS), followed by the Configuration Group Values, and lastly the Sites. Follow this process before deleting any of the Publisher or Designer resources referenced by the Operator.
+
+## Prerequisites
+
+- You must already have a site, in your deployment, that you want to delete.
+- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create the site(s).
+
+## Delete Site Network Service
+
+1. Search for the Site Network Service (SNS) within Azure portal.
+
+ :::image type="content" source="media/how-to-delete-operator-resources-search-for-site-network-services.png" alt-text="Screenshot showing Azure portal and search for Site Network Services." lightbox="media/how-to-delete-operator-resources-search-for-site-network-services.png":::
+
+1. Select the Site Network Service (SNS) within the Azure portal you wish to delete.
+
+    :::image type="content" source="media/how-to-delete-operator-resource.png" alt-text="Screenshot showing the Site Network Service selected for deletion." lightbox="media/how-to-delete-operator-resource.png":::
+
+
+1. Under the Overview section, take note of the *Site* and the *resource group* within the **Properties**.
+
+ :::image type="content" source="media/how-to-delete-operator-resource-site-resource-group.png" alt-text="Screenshot showing the Site and resource group within the properties section." lightbox="media/how-to-delete-operator-resource-site-resource-group.png":::
+
+1. Under the **Overview** section, take note of the *Configuration Group Value* and the *resource group* within **Desired configuration**.
+
+ :::image type="content" source="media/how-to-delete-operator-resource-config-group-value.png" alt-text="Screenshot showing the Configuration Group Value and Site information in the desired configuration tab." lightbox="media/how-to-delete-operator-resource-config-group-value.png":::
+
+1. Once you have listed the resources, select **Delete** against the Site Network Service (SNS).
+
+ :::image type="content" source="media/how-to-delete-operator-resource-delete.png" alt-text="Screenshot showing the Site Network Service to delete." lightbox="media/how-to-delete-operator-resource-delete.png":::
+
+1. Follow the prompts to confirm and complete the deletion.
+
+ :::image type="content" source="media/how-to-delete-operator-resource-confirm-prompt.png" alt-text="Diagram showing the Confirmation prompt with a warning message.":::
+
+> [!NOTE]
+> Deleting a Site Network Service (SNS) can be time consuming. Deletions can take from 5 minutes to over an hour.
+
+### Troubleshoot deletion errors
+
+While deleting a Site Network Service (SNS) is a straightforward task, here are some troubleshooting tips to consider if issues are encountered:
+
+1. Check the error message: If the error message mentions "nested resources," delete the Site Network Service (SNS) again.
+1. Examine the managed resource group: To track the progress of the deletion, navigate to the managed resource group as outlined in [Create site network service in Azure Operator Service Manager](how-to-create-site-network-service.md). Eventually, all resources associated with the Site Network Service (SNS) are deleted.
+
+## Delete Configuration Group Values
+
+1. Navigate to the Azure portal and search for the **Resource Group** in which the Configuration Group Value was deployed.
+
+ :::image type="content" source="media/how-to-delete-operator-resources-search-for-resource-groups.png" alt-text="Screenshot showing the Azure portal and search for Resource Groups.":::
+
+ :::image type="content" source="media/how-to-delete-operator-resources-resource-groups.png" alt-text="Screenshot showing the Resource Group in which the Configuration Group Value was deployed." lightbox="media/how-to-delete-operator-resources-resource-groups.png":::
+
+1. Select the specific **Configuration Group Value(s)** you wish to delete.
+1. Select **Delete**.
+
+ :::image type="content" source="media/how-to-delete-operator-resource-config-group-value.png" alt-text="Screenshot showing the selected Configuration Group Values to be deleted." lightbox="media/how-to-delete-operator-resource-config-group-value.png":::
+
+1. Follow the prompts to confirm and complete the deletion.
+
+## Delete Sites
+
+1. Navigate to the Azure portal and search for the Resource Group in which the Site was deployed.
+1. Select the specific **Site** you wish to delete.
+1. Select **Delete**.
+
+ :::image type="content" source="media/how-to-delete-operator-resource-delete-site.png" alt-text="Screenshot showing the Site selected for deletion." lightbox="media/how-to-delete-operator-resource-delete-site.png":::
+
+1. Follow the prompts to confirm and complete the deletion.
operator-service-manager How To Use Azure Operator Service Manager Cli Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/how-to-use-azure-operator-service-manager-cli-extension.md
+
+ Title: How to use Azure Operator Service Manager CLI extension
+description: Learn how to use Azure Operator Service Manager CLI extension.
+ Last updated : 10/17/2023
+# Use Azure Operator Service Manager (AOSM) CLI extension
+
+In this how-to guide, Network Function Publishers and Service Designers learn how to use the Azure CLI extension to get started with Network Function Definitions (NFDs) and Network Service Designs (NSDs).
+
+The `az aosm` CLI extension is intended to provide support for publishing Azure Operator Service Manager designs and definitions. The CLI extension aids in the process of publishing Network Function Definitions (NFDs) and Network Service Designs (NSDs) to use with Azure Operator Service Manager.
+
+## Prerequisites
+
+Contact your Microsoft account team to register your Azure subscription for access to Azure Operator Service Manager (AOSM) or express your interest through the [partner registration form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR7lMzG3q6a5Hta4AIflS-llUMlNRVVZFS00xOUNRM01DNkhENURXU1o2TS4u).
+
+### Download and install Azure CLI
+
+Use the Bash environment in the Azure cloud shell. For more information, see [Start the Cloud Shell](/azure/cloud-shell/quickstart?tabs=azurecli) to use Bash environment in Azure Cloud Shell.
+
+If you prefer to run CLI reference commands locally, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+
+If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+
+If you're using a local installation, sign in to the Azure CLI using the `az login` command and complete the prompts displayed in your terminal to finish authentication. For more sign-in options, refer to [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+### Install Azure Operator Service Manager (AOSM) CLI extension
+
+Install the Azure Operator Service Manager (AOSM) CLI extension using this command:
+
+```azurecli
+az extension add --name aosm
+```
+1. Run `az version` to see the version and dependent libraries that are installed.
+1. Run `az upgrade` to upgrade to the current version of Azure CLI.
+
+### Register and verify required resource providers
+
+Before you begin using Azure Operator Service Manager, register the required resource providers by executing the following commands. This registration process can take up to 5 minutes.
+
+```azurecli
+# Register Resource Provider
+az provider register --namespace Microsoft.HybridNetwork
+az provider register --namespace Microsoft.ContainerRegistry
+```
+Verify the registration status of the resource providers. Execute the following commands.
+
+```azurecli
+# Query the Resource Provider
+az provider show -n Microsoft.HybridNetwork --query "{RegistrationState: registrationState, ProviderName: namespace}"
+az provider show -n Microsoft.ContainerRegistry --query "{RegistrationState: registrationState, ProviderName: namespace}"
+```
+
+> [!NOTE]
+> It may take a few minutes for the resource provider registration to complete. Once the registration is successful, you can proceed with using the Azure Operator Service Manager (AOSM).
+
+### Containerized Network Function (CNF) requirements
+
+If you're using Containerized Network Functions (CNFs), ensure that the following packages are installed on the machine from which you're executing the CLI:
+
+- **Install Helm**, refer to [Install Helm CLI](https://helm.sh/docs/intro/install/).
+- **Install Docker** in some circumstances. Refer to [Install the Docker Engine](https://docs.docker.com/engine/install/). Docker is only needed if the source image is in your local Docker repository, or if you don't have the subscription-wide permissions required to push charts and images.
+
+## Permissions
+
+An Azure account with an active subscription is required. If you don't have an Azure subscription, follow the instructions here [Start free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) to create an account before you begin.
+
+You need the Contributor role over this subscription in order to create a Resource Group, or the Contributor role over an existing Resource Group.
+
+### Permissions for publishing CNFs
+
+If you're sourcing the CNF images from an existing ACR, you need `Reader`/`AcrPull` permissions on that ACR and, ideally, the `Contributor` and `AcrPush` roles (or a custom role that allows the `importImage` action and `AcrPush`) over the whole subscription so that you can import images into the new Artifact Store. With these permissions, you don't need Docker installed locally, and the image copy is quick.
+
+If you don't have the subscription-wide permissions, you can run the `az aosm nfd publish` command with the `--no-subscription-permissions` flag to pull the image to your local machine and then push it to the Artifact Store using manifest credentials scoped only to the store. This option requires Docker to be installed locally.
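+
+For example, a publish invocation using store-scoped credentials might look like the following. The config file name is illustrative, and the flags are described in the commands section later in this article:
+
+```azurecli
+az aosm nfd publish --definition-type cnf --config-file input.json --no-subscription-permissions
+```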
+
+## Azure Operator Service Manager (AOSM) CLI extension overview
+
+Network Function Publishers and Service Designers use the Azure CLI extension to help with the publishing of Network Function Definitions (NFDs) and Network Service Designs (NSDs).
+
+As explained in [Roles and Interfaces](roles-interfaces.md), a Network Function Publisher has various responsibilities. The CLI extension assists with the items shown in bold:
+
+- Create the network function.
+- **Encode that in a Network Function Design (NFD)**.
+- **Determine the deployment parameters to expose to the Service Designer**.
+- **Onboard the Network Function Design (NFD) to Azure Operator Service Manager (AOSM)**.
+- **Upload the associated artifacts**.
+- Validate the Network Function Design (NFD).
+
+A Service Designer also has various responsibilities, of which the CLI extension assists with the items in bold:
+
+- Choose which Network Function Definitions are included in the Service Design.
+- **Encode that into a Network Service Design**.
+- Combine Azure infrastructure into the Network Service Design.
+- **Determine how to parametrize the service by defining one or more Configuration Group Schemas (CGSs)**.
+- **Determine how inputs from the Service Operator map down to parameters required by the Network Function Definitions** and the Azure infrastructure.
+- **Onboard the Network Service Design (NSD) to Azure Operator Service Manager (AOSM)**.
+- **Upload the associated artifacts**.
+- Validate the Network Service Design (NSD).
+
+## Workflow summary
+
+A generic workflow of using the CLI extension is:
+
+1. Find the prerequisite items you require for your use-case.
+
+1. Run a `generate-config` command to output an example JSON config file for subsequent commands.
+
+1. Fill in the config file.
+
+1. Run a `build` command to output one or more bicep templates for your Network Function Definition or Network Service Design.
+
+1. Review the output of the build command, edit the output as necessary for your requirements.
+
+1. Run a `publish` command to:
+ * Create all prerequisite resources such as Resource Group, Publisher, Artifact Stores, Groups.
+ * Deploy those bicep templates.
+ * Upload artifacts to the artifact stores.
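+
+Put together, the workflow for a CNF definition might look like the following sketch. The config file name is illustrative, and each command is described in detail later in this article:
+
+```azurecli
+az aosm nfd generate-config --definition-type cnf   # output an example input.json
+# ...fill in input.json...
+az aosm nfd build --config-file input.json --definition-type cnf
+# ...review and, if necessary, edit the generated bicep templates...
+az aosm nfd publish --config-file input.json --definition-type cnf
+```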
+
+## VNF start point
+
+For VNFs, you need a single ARM template that creates the Azure resources for your VNF, for example a Virtual Machine, disks, and NICs. The ARM template must be stored on the machine from which you're executing the CLI.
+
+For Virtualized Network Function Definition Versions (VNF NFDVs), the networkFunctionApplications list must contain one VhdImageFile and one ARM template. It's unusual to include more than one VhdImageFile or more than one ARM template. Unless you have a strong reason not to, the ARM template should deploy a single VM. If you want to deploy multiple VMs, the Service Designer should include multiple copies of the Network Function Definition (NFD) within the Network Service Design (NSD). The ARM template (for both AzureCore and Nexus) can only deploy ARM resources from the following Resource Providers:
+
+- Microsoft.Compute
+
+- Microsoft.Network
+
+- Microsoft.NetworkCloud
+
+- Microsoft.Storage
+
+- Microsoft.NetworkFabric
+
+- Microsoft.Authorization
+
+- Microsoft.ManagedIdentity
+
+You also need a VHD image that is used for the VNF Virtual Machine. The VHD image can be stored on the machine from which you're executing the CLI, or in Azure blob storage accessible via a SAS URI.
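+
+As a sketch, a minimal ARM template for a VNF that deploys a single VM has the following shape. Names and API versions are placeholders, and the resource `properties` (hardware profile, OS profile, network profile, and so on) are elided for brevity, so treat this as a skeleton rather than a deployable template:
+
+```json
+{
+  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+  "contentVersion": "1.0.0.0",
+  "parameters": {
+    "vmName": { "type": "string" },
+    "location": { "type": "string", "defaultValue": "[resourceGroup().location]" }
+  },
+  "resources": [
+    {
+      "type": "Microsoft.Network/networkInterfaces",
+      "apiVersion": "2023-04-01",
+      "name": "[concat(parameters('vmName'), '-nic')]",
+      "location": "[parameters('location')]",
+      "properties": {}
+    },
+    {
+      "type": "Microsoft.Compute/virtualMachines",
+      "apiVersion": "2023-03-01",
+      "name": "[parameters('vmName')]",
+      "location": "[parameters('location')]",
+      "dependsOn": [
+        "[resourceId('Microsoft.Network/networkInterfaces', concat(parameters('vmName'), '-nic'))]"
+      ],
+      "properties": {}
+    }
+  ]
+}
+```
+
+Note that both resource types fall within the allowed Resource Providers listed above (Microsoft.Network and Microsoft.Compute).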
+
+## CNF start point
+
+For deployments of Containerized Network Functions (CNFs), it's crucial to have the following stored on the machine from which you're executing the CLI:
+
+- **Helm Packages with Schema** - These packages should be present on your local storage and referenced within the `input.json` configuration file. When following this quickstart, you download the required helm package.
+- **Sample Configuration File** - Generate an example configuration file for defining a CNF deployment. Issue the following command to generate an `input.json` file that you need to populate with your specific configuration.
+
+ ```azurecli
+ az aosm nfd generate-config
+ ```
+
+- **Images for your CNF** - Here are the options:
+ - A reference to an existing Azure Container Registry that contains the images for your CNF. Currently, only one ACR and namespace are supported per CNF. The images to be copied from this ACR are populated automatically based on the helm package schema. You must have Reader/AcrPull permissions on this ACR. To use this option, fill in `source_registry` and optionally `source_registry_namespace` in the input.json file.
+ - The image name of the source docker image from local machine. This image name is for a limited use case where the CNF only requires a single docker image that exists in the local docker repository. To use this option, fill in `source_local_docker_image` in the input.json file. Requires docker to be installed. This quickstart guides you through downloading an nginx docker image to use for this option.
+- **Optional: Mapping File (path_to_mappings)**: Optionally, you can provide a file on disk, referenced by `path_to_mappings` in `input.json`. This file should mirror `values.yaml`, with your selected values replaced by deployment parameters; doing so exposes them as parameters to the CNF. Alternatively, you can leave this field blank in `input.json` and the CLI generates the file; by default in this case, every value within `values.yaml` is exposed as a deployment parameter. You can also use the `--interactive` CLI argument to make choices interactively. This quickstart guides you through creation of this file.
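+
+As an illustration of the mappings file, suppose `values.yaml` contains a `service_port` value that you want to expose. The mappings file mirrors the structure of `values.yaml` but replaces the value with a deployment parameter placeholder. The `{deployParameters.…}` syntax here is a sketch of how the generated files express such mappings:
+
+```yaml
+# values.yaml (original)
+service_port: 80
+
+# mappings file (value replaced by a deployment parameter)
+service_port: "{deployParameters.service_port}"
+```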
+
+When configuring the `input.json` file, ensure that you list the Helm packages in the order they should be deployed. For instance, if package "A" must be deployed before package "B," your `input.json` should resemble the following structure:
+
+```json
+"helm_packages": [
+ {
+ "name": "A",
+ "path_to_chart": "Path to package A",
+ "path_to_mappings": "Path to package A mappings",
+ "depends_on": [
+ "Names of the Helm packages this package depends on"
+ ]
+ },
+ {
+ "name": "B",
+ "path_to_chart": "Path to package B",
+ "path_to_mappings": "Path to package B mappings",
+ "depends_on": [
+ "Names of the Helm packages this package depends on"
+ ]
+ }
+]
+```
+Following these guidelines ensures a well-organized and structured approach to deploying Containerized Network Functions (CNFs) with Helm packages and associated configurations.
+
+## NSD start point
+For NSDs, you need to know the details of the Network Function Definitions (NFDs) to incorporate into your design:
+- the NFD Publisher resource group
+- the NFD Publisher name and scope
+- the name of the Network Function Definition Group
+- the location, type and version of the Network Function Definition Version
+
+You can use the `az aosm nfd` commands to create all of these resources.
+
+## Azure Operator Service Manager (AOSM) commands
+
+Use these commands before you begin:
+
+1. `az login` used to sign in to the Azure CLI.
+
+1. `az account set --subscription <subscription>` used to choose the subscription you want to work on.
+
+### NFD commands
+
+Get help on command arguments:
+
+- `az aosm -h`
+
+- `az aosm nfd -h`
+
+- `az aosm nfd build -h`
+
+### Definition type commands
+
+All these commands take a `--definition-type` argument of `vnf` or `cnf`.
+
+Create an example config file for building a definition:
+
+- `az aosm nfd generate-config`
+
+This command outputs a file called `input.json`, which must be filled in. Once the config file is filled in, the following commands can be run.
+
+Build an NFD definition locally:
+
+- `az aosm nfd build --config-file input.json`
+
+More options on building an NFD definition locally:
+
+- Choose which of the VNF ARM template parameters you want to expose as NFD deploymentParameters, with the option of interactively choosing each one:
+
+  - `az aosm nfd build --config-file input.json --definition-type vnf --order-params`
+  - `az aosm nfd build --config-file input.json --definition-type vnf --order-params --interactive`
+
+Choose which of the CNF Helm values parameters you want to expose as NFD deploymentParameters:
+
+- `az aosm nfd build --config-file input.json --definition-type cnf --interactive`
+
+Publish a prebuilt definition:
+
+- `az aosm nfd publish --config-file input.json`
+
+Delete a published definition:
+
+- `az aosm nfd delete --config-file input.json`
+
+Delete a published definition and the publisher, artifact stores and NFD group:
+
+- `az aosm nfd delete --config-file input.json --clean`
+
+### NSD commands
+
+Get help on command arguments:
+
+- `az aosm -h`
+
+- `az aosm nsd -h`
+
+- `az aosm nsd build -h`
+
+Create an example config file for building a definition:
+
+- `az aosm nsd generate-config`
+
+This command outputs a file called `input.json`, which must be filled in. Once the config file is filled in, the following commands can be run.
+
+Build an NSD locally:
+
+- `az aosm nsd build --config-file input.json`
+
+Publish a prebuilt design:
+
+- `az aosm nsd publish --config-file input.json`
+
+Delete a published design:
+
+- `az aosm nsd delete --config-file input.json`
+
+Delete a published design and the publisher, artifact stores and NSD group:
+
+- `az aosm nsd delete --config-file input.json --clean`
+
+## Edit the build output before publishing
+
+The `az aosm` CLI extension is intended to provide support for publishing Azure Operator Service Manager designs and definitions. It provides the building blocks for creating complex custom designs and definitions. You can edit the files output by the `build` command before running the `publish` command, to add more complex or custom features.
+
+The full API reference for Azure Operator Service Manager is here: [Azure Hybrid Network REST API](/rest/api/hybridnetwork/).
+
+The following sections describe some common ways that you can use to edit the built files before publishing.
+
+### Network Function Definitions (NFDs)
+
+- Change the `versionState` of the `networkfunctiondefinitionversions` resource from `Preview` to `Active`. Active NFDVs are immutable whereas Preview NFDVs are mutable and in draft state.
+- For CNFs, change the `releaseNamespace` of the `helmMappingRuleProfile` to change the kubernetes namespace that the chart is deployed to.
+
+### Network Service Designs (NSDs)
+
+- Add Azure Infrastructure to your Network Service Design (NSD). Adding Azure infrastructure to your NSD can involve:
+ * Writing ARM templates to deploy the infrastructure.
+  * Adding Configuration Group Schemas (CGSs) for these ARM templates.
+ * Adding `ResourceElementTemplates` (RETs) of type `ArmResourceDefinition` to your NSD. The RETs look the same as `NetworkFunctionDefinition` RETs apart from the `type` field.
+ * Adding the infrastructure ARM templates to the `artifact_manifest.bicep` file.
+ * Editing the `configMappings` files to incorporate any outputs from the infrastructure templates as inputs to the `NetworkFunctionDefinition` ResourceElementTemplates. For example: `"customLocationId": "{outputparameters('infraretname').infraretname.customLocationId.value}"`
+ * Editing the `dependsOnProfile` for the `NetworkFunctionDefinition` ResourceElementTemplates (RETs) to ensure that infrastructure RETs are deployed before NF RETs.
+- Change the `versionState` of the `networkservicedesignversions` resource from `Preview` to `Active`. Active NSDs are immutable whereas Preview NSDs are mutable and in draft state.
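+
+For instance, building on the inline `configMappings` example above, a fragment that feeds an infrastructure RET output into an NF RET might look like the following (the `infraretname` identifier is illustrative):
+
+```json
+{
+  "customLocationId": "{outputparameters('infraretname').infraretname.customLocationId.value}"
+}
+```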
operator-service-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/overview.md
- Title: About Azure Operator Service Manager
-description: Learn about Azure Operator Service Manager, an Azure Service for the management of Network Services for telecom operators.
-- Previously updated : 04/09/2023---
-# About Azure Operator Service Manager
-
-Azure Operator Service Manager is an Azure service designed to assist telecom operators in managing their network services. It provides streamlined management capabilities for intricate, multi-part, multi-vendor applications across numerous hybrid cloud sites, encompassing Azure regions, edge platforms, and Arc-connected sites. Initially, Azure Operator Service Manager caters to the needs of telecom operators who are in the process of migrating their workloads to Azure and Arc-connected cloud environments.
-
-Azure Operator Service Manager expands and improves the Network Function Manager by incorporating technology and ideas from Azure for Operators' on-premises management tools. Its purpose is to manage the convergence of comprehensive, multi-vendor service solutions on a per-site basis. It uses a declarative software and configuration model for the system.
-
-## Product features
-
-Azure Operator Service Manager provides an Azure-native abstraction for modeling and realizing a distributed network service using extra resource types in Azure Resource Manager (ARM) through our cloud service. A network service is represented as a network graph comprising multiple network functions, with appropriate policies controlling the data plane to meet each telecom operator's operational needs. Creation of templates of configuration schemas allows for per-site variation that is often required in such deployments.
-
-## Benefits
-
-Azure Operator Service Manager provides the following benefits:
-- Provides a single management experience for all Azure for operators solutions in Azure or connected clouds.
-- Provides blast-radius limitations and disconnected mode support to enable five-nines operation of these services.
-- Enables real telecom DevOps working, eliminating the need for NF-specific maintenance windows.
-
-## Get access to Azure Operator Service Manager
-
-Azure Operator Service Manager is currently in public preview. To get started, contact us at [aosmpartner@microsoft.com](mailto:aosmpartner@microsoft.com?subject=Azure%20Operator%20Service%20Manager%20preview%20request&Body=Hello%2C%0A%0AI%20would%20like%20to%20request%20access%20to%20the%20Azure%20Operator%20Service%20Manager%20preview%20documentation.%0A%0AMy%20GitHub%20username%20is%3A%20%0A%0AThank%20you%21), provide your GitHub username, and request access to our preview documentation.
operator-service-manager Publisher Resource Preview Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/publisher-resource-preview-management.md
+
+ Title: Publisher resource preview management
+description: Learn about publisher resource preview management.
++ Last updated : 09/11/2023++++
+# Publisher Tenants, subscriptions, regions and preview management
+
+This article introduces the Publisher Resource Preview Management feature.
+
+## Overview
+
+The Azure Network Function Manager (NFM) Publisher API offers partners a seamless Azure Marketplace experience for onboarding Network Functions (NF) and Network Service Designs (NSDs).
+
+The Publisher API introduces features that enable Network Function (NF) Publishers and Service Designers to manage Network Function Definitions (NFDs) and Network Service Designs (NSDs) in various modes. These modes empower partners to exercise control over NFD and NSD usage, allowing them to target specific subscriptions, all subscriptions, or to deprecate an NFDVersion or NSDVersion if there are regressions. This article delves into the specifics of these different modes.
+
+The Publisher Resource Preview Management feature in Azure Network Function Manager empowers partners to seamlessly manage Network Function Definitions and their versions. With the ability to control deployment states, access privileges, and version management, partners can ensure a smooth experience for their customers while maintaining the quality and stability of their offerings.
+
+## Tenants, subscriptions and regions
+
+Do my publisher and Site Network Service (SNS) resources need to be in the same tenant, subscription or region?
+
+- Publisher Network Service Design Version (**NSDV**) and Network Function Definition Version (**NFDV**) resources must be in the same Azure tenant as Site Network Services (**SNS**) resources.
+
+- Network Service Design Version (**NSDV**) and Network Function Definition Version (**NFDV**) versionState are key for cross-subscription.
+  - Preview = Site Network Service (**SNS**) is deployable in the same subscription as the Network Service Design Version/Network Function Definition Version (**NSDV/NFDV**).
+ - Active = Site Network Service (**SNS**) is deployable in any *subscription*.
+- Publisher resources can be in different Azure Core or Nexus regions from Site Network Service (**SNS**) resources.
+
+- Publisher names must be unique within a region.
+
+- Site Network Service (**SNS**) can reference Configuration Group Values (**CGVs**) from any region, but can only reference Site resources from the same region.
+
+- Configuration Group Values (**CGVs**) can reference a Configuration Group Schema (**CGS**) in any region.
+
+- Network Functions:
+ * Can reference NFDVersion from any region.
+ * Must reference Azure Stack Edge from the same region, if hosted on Azure Stack Edge.
+ * The ARM template within a Virtual Network Function must deploy resources to the same region as the Network Function.
+ * CNFs can reference customLocation from any region.
+
+## Network Function Definition and Network Service Design version states
+
+The following table provides Network Function Definition (NFD) and Network Service Design (NSD) version state information.
+
+|State |Description |Users |Is Immutable |
+|||||
+|**Preview** | Default state upon NFDVersion or NSDVersion creation; indicates pending testing. | Same subscription as Publisher. | No |
+|**Active** | Signifies readiness for customer usage. Artifacts must be immutable with artifactManifestState Uploaded. | Access based on RBAC, any subscription in same tenant. | Yes |
+|**Deprecated** | Implies regression found; prevents new deployments from this version. | Can't be deployed. | Yes |
+
+## Artifact Manifest state machine
+
+ - Uploading means the state is mutable and the artifacts within the manifest can be altered.
+
+ - Uploaded means the state is immutable and the artifacts within the manifest can't be altered.
+
+Immutable artifacts are tested artifacts that can't be modified or overwritten. Use of immutable artifacts with Azure Operator Service Manager ensures consistency, reliability and security of its artifacts across different environments and platforms. Network Function Definition Versions and Network Service Design Versions with versionState Active are enforced to deploy immutable artifacts.
+
+
+### Update Artifact Manifest state
+
+### HTTP Method: POST URL
+
+```http
+https://management.azure.com/{artifactManifestResourceId}/updateState?api-version=2023-09-01
+```
+
+Where *artifactManifestResourceId* is the full resource ID of the Artifact Manifest resource.
+
+ ### Request body
+
+```json
+{
+ "artifactManifestState": "Uploaded"
+}
+```
+
+### Submit POST
+
+Submit the POST using `az rest` in the Azure CLI.
+
+```azurecli
+az rest --method post --uri {artifactManifestResourceId}/updateState?api-version=2023-09-01 --body "{\"artifactManifestState\": \"Uploaded\"}"
+```
+
+Where *{artifactManifestResourceId}* is the full resource ID of the Artifact Manifest resource.
+
+Then issue the get command to check that the artifactManifestState change is complete.
+
+```azurecli
+ az rest --method get --uri {artifactManifestResourceId}?api-version=2023-09-01
+```
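+
+To inspect just the state field, you can add a JMESPath query to the GET. This assumes the state is surfaced under `properties`, as is usual for ARM resources:
+
+```azurecli
+az rest --method get --uri {artifactManifestResourceId}?api-version=2023-09-01 --query "properties.artifactManifestState" --output tsv
+```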
+
+## Network Function Definition and Network Service Design state machine
+
+- Preview is the default state.
+- Deprecated is intended as a terminal state, but the transition can be reversed.
+
+## Update Network Function definition version state
+
+Use the following API to update the state of a Network Function Definition Version (NFDV).
+
+### HTTP Method: POST URL
+
+```http
+https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.HybridNetwork/publishers/{publisherName}/networkfunctiondefinitiongroups/{networkfunctiondefinitiongroups}/networkfunctiondefinitionversions/{networkfunctiondefinitionversions}/updateState?api-version=2023-09-01
+```
+
+### URI parameters
+
+The following table describes the parameters used with the preceding URL.
+
+|Name |Description |
+|||
+|subscriptionId | The subscription ID. |
+|resourceGroupName | The name of the resource group. |
+|publisherName | The name of the publisher. |
+|networkfunctiondefinitiongroups | The name of the network function definition group. |
+|networkfunctiondefinitionversions | The network function definition version. |
+|api-version | The API version to use for this operation. |
++
+### Request body
+
+```json
+{
+ "versionState": "Active | Deprecated"
+}
+```
+
+### Submit POST
+
+Submit the POST using `az rest` in the Azure CLI.
+
+```azurecli
+ az rest --method post --uri {nfdvresourceId}/updateState?api-version=2023-09-01 --body "{\"versionState\": \"Active\"}"
+```
+Where *{nfdvresourceId}* is the full resource ID of the Network Function Definition Version.
+
+Then issue the get command to check that the versionState change is complete.
+
+```azurecli
+ az rest --method get --uri {nfdvresourceId}?api-version=2023-09-01
+```
+
+## Update Network Service Design Version (NSDV) version state
+
+Use the following API to update the state of a Network Service Design Version (NSDV).
+
+### HTTP Method: POST URL
+
+```http
+https://management.azure.com/subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/Microsoft.HybridNetwork/publishers/{publisherName}/networkservicedesigngroups/{nsdName}/networkservicedesignversions/{nsdVersion}/updateState?api-version=2023-09-01
+```
+
+### URI parameters
+
+The following table describes the parameters used with the preceding URL.
+
+|Name |Description |
+|||
+|subscriptionId | The subscription ID. |
+|resourceGroupName | The name of the resource group. |
+|publisherName | The name of the publisher. |
+|nsdName | The name of the network service design. |
+|nsdVersion | The network service design version. |
+|api-version | The API version to use for this operation. |
++
+### Request body
+
+```json
+{
+ "versionState": "Active | Deprecated"
+}
+```
+
+### Submit POST
+
+Submit the POST using `az rest` in the Azure CLI.
+
+```azurecli
+az rest --method post --uri {nsdvresourceId}/updateState?api-version=2023-09-01 --body "{\"versionState\": \"Active\"}"
+```
+Where *{nsdvresourceId}* is the full resource ID of the Network Service Design Version.
+
+Then issue the get command to check that the versionState change is complete.
+
+```azurecli
+ az rest --method get --uri {nsdvresourceId}?api-version=2023-09-01
+```
operator-service-manager Quickstart Containerized Network Function Create Site Network Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-containerized-network-function-create-site-network-service.md
+
+ Title: Create a Containerized Network Function (CNF) Site Network Service with Nginx
+description: Learn how to create a Containerized Network Function (CNF) Site Network Service (SNS) with Nginx.
++ Last updated : 09/07/2023++++
+# Quickstart: Create a Containerized Network Function (CNF) Site Network Service (SNS) with Nginx
+
+ This article walks you through the process of creating a Site Network Service (SNS) using the Azure portal. A Site Network Service is an essential part of a Network Service Instance and is associated with a specific site. Each Site Network Service instance references a particular version of a Network Service Design (NSD).
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Complete the [Quickstart: Complete the prerequisites to deploy a Containerized Network Function in Azure Operator Service Manager](quickstart-containerized-network-function-prerequisites.md)
+- Complete the [Quickstart: Create a Containerized Network Functions site with Nginx](quickstart-containerized-network-function-create-site.md)
+
+## Create the site network service
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+1. Select **Create a resource**.
+1. Search for **Site network service**, then select **Create**.
+
+ :::image type="content" source="media/create-site-network-service-main.png" alt-text="Screenshot shows the Marketplace screen with site network service in the search bar. Options beneath the search bar include Site Network Service are shown.":::
+1. On the **Basics** tab, enter or select the information in the table and accept the default values for the remaining settings.
++
+ |Setting |Value |
+ |||
+ |Subscription | Select your subscription. |
+ |Resource group | Select resource group **operator-rg** you created when creating the *Site*. |
+ |Name | Enter **nginx-sns**. |
+ |Region | Select the location you used for your prerequisite resources. |
+ |Site | Enter **nginx-site**. |
+ |Managed Identity Type | Select **User Assigned**. |
+ |User Assigned Identity | Select **identity-for-nginx**. |
+
+
+ :::image type="content" source="media/create-site-network-service-basic-containerized.png" alt-text="Screenshot showing the basics tab to input project, instance and identity details.":::
+
+1. Select **Next: Choose a Network Site Design >**.
+1. On this screen, select the **Publisher**, **Network Service Design Resource**, and the **Network Service Design Version** you published earlier.
+    > [!NOTE]
+    > Be sure to select the same Publisher Offering Location you defined in the Network Service Design quickstart (nginx-nsdg_NFVI).
+
+
+ :::image type="content" source="media/create-site-network-service-network-service-design.png" alt-text="Screenshot shows the Choose a Network Service Design tab where you choose the details of the initial Network Service Design version.":::
+
+1. Select **Next: Set initial configuration >**.
+1. Select **Create New** and enter *nginx-sns-cgvs* in the **Name** field.
+
+ :::image type="content" source="media/create-site-network-service-configuration.png" alt-text="Screenshot showing the Initial Configuration screen including the dialog box that appears when you select the Create New option. ":::
+1. In the resulting editor panel, enter the following configuration:
+
+ ```json
+ {
+ "nginx-nfdg": {
+ "deploymentParameters": {
+ "service_port": 5222,
+ "serviceAccount_create": false
+ },
+ "customLocationId": "<resource id of your custom location>",
+ "nginx_nfdg_nfd_version": "1.0.0"
+ },
+      "managedIdentity": "<managed-identity-resource-id>"
+ }
+ ```
+
+    > [!TIP]
+    > Refer to the Retrieve Custom Location section for the `customLocationId` configuration group value. For more information, see [Quickstart: Prerequisites for Operator and Containerized Network Function (CNF)](quickstart-containerized-network-function-operator.md).
+
+1. Select **Review + Create**, then select **Create**.
+1. Wait for the deployment to reach the **Succeeded** state. This status indicates that your CNF is up and running.
+1. Access your CNF by navigating to the **Site Network Service Object** in the Azure portal. Select the **Current State -> Resources** to view the managed resource group created by Azure Operator Service Manager (AOSM).
+
+ :::image type="content" source="media/site-network-service-preview.png" alt-text="Screenshot shows an overview of the site network service created.":::
+
+You have successfully created a Site Network Service for a Nginx Container as a CNF in Azure. You can now manage and monitor your CNF through the Azure portal.
operator-service-manager Quickstart Containerized Network Function Create Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-containerized-network-function-create-site.md
+
+ Title: Create a Containerized Network Functions (CNF) Site with Nginx
+description: Learn how to create a Containerized Network Functions (CNF) site with Nginx.
++ Last updated : 09/08/2023++++
+# Quickstart: Create a Containerized Network Functions site with Nginx
+
+This article helps you create a Containerized Network Functions (CNF) site using the Azure portal. A site is the collection of assets that represent one or more instances of nodes in a network service that should be discussed and managed in a similar manner.
+
+A site can represent:
+- A physical location such as a DC or rack(s).
+- A node in the network that needs to be upgraded separately (early or late) versus other nodes.
+- Resources serving a particular class of customer.
+
+Sites can be within a single Azure region or an on-premises location. If collocated, they can span multiple NFVIs (such as multiple K8s clusters in a single Azure region).
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Complete the [Quickstart: Design a Network Service Design for Nginx Container as CNF.](quickstart-containerized-network-function-network-design.md)
+- Complete the [Quickstart: Prerequisites for Operator and Containerized Network Function (CNF)](quickstart-containerized-network-function-operator.md).
+
+## Create a site
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+1. Select **Create a resource**.
+1. Search for **Sites**, then select **Create**.
+1. On the **Basics tab**, enter or select your **Subscription**, **Resource group**, and the **Name** and **Region** of your instance.
+
+ :::image type="content" source="media/create-site-basics-tab.png" alt-text="Screenshot showing the Basic tab to enter Project details and Instance details for your site.":::
+ > [!NOTE]
+ > The site must be located in the same region as the prerequisite resources.
+1. Add the Network Function Virtualization Infrastructure (NFVIs).
++
+ |Setting |Value |
+ |||
+ |NFVI Name | Enter nginx-nsdg_NFVI. |
+ |NFVI Type | Select Azure Core. |
+ |NFVI Location | Select the location you used for your prerequisite resource. |
+
+ :::image type="content" source="media/create-site-add-nfvis.png" alt-text="Screenshot showing the Add the NFVIs table to enter the name, type and location of the NFVIs.":::
+
+ > [!NOTE]
+ > This example features a single Network Function Virtual Infrastructure (NFVI) named nginx-nsdg_NFVI. If you modified the nsd_name in the input.json file while publishing the NSD, the NFVI name should be <nsd_name>_NFVI. Ensure that the NFVI type is set to Azure Core and that the NFVI location matches the location of the prerequisite resources.
+
+1. Select **Review + create**, then select **Create**.
+
+## Next steps
+
+- [Quickstart: Create a Containerized Network Function (CNF) Site Network Service (SNS) with Nginx](quickstart-containerized-network-function-create-site-network-service.md)
operator-service-manager Quickstart Containerized Network Function Network Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-containerized-network-function-network-design.md
+
+ Title: Design a Containerized Network Function (CNF) with Nginx
+description: Learn how to design a Containerized Network Function (CNF) with Nginx.
+ Last updated : 09/07/2023
+# Quickstart: Design a Containerized Network Function (CNF) Network Service Design with Nginx
+
+ This quickstart describes how to use the `az aosm` Azure CLI extension to create and publish a basic Network Service Design.
+
+## Prerequisites
+
+- An Azure account with an active subscription is required. If you don't have an Azure subscription, follow the instructions here [Start free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) to create an account before you begin.
+- Complete the [Quickstart: Publish Nginx container as Containerized Network Function (CNF)](quickstart-publish-containerized-network-function-definition.md).
+
+## Create input file
+
+Create an input file for publishing the Network Service Design. Execute the following command to generate the input configuration file for the Network Service Design (NSD).
+
+```azurecli
+az aosm nsd generate-config
+```
+
+Execution of the preceding command generates an input.json file.
+
+> [!NOTE]
+> Edit the input.json file, replacing its values with those shown in the sample, and save the file as **input-cnf-nsd.json**.
+
+Here's a sample **input-cnf-nsd.json**:
+
+```json
+{
+ "publisher_name": "nginx-publisher",
+ "publisher_resource_group_name": "nginx-publisher-rg",
+ "acr_artifact_store_name": "nginx-nsd-acr",
+ "location": "uksouth",
+ "network_functions": [
+ {
+ "publisher": "nginx-publisher",
+ "publisher_resource_group": "nginx-publisher-rg",
+ "name": "nginx-nfdg",
+ "version": "1.0.0",
+ "publisher_offering_location": "uksouth",
+ "type": "cnf",
+ "multiple_instances": false
+ }
+ ],
+ "nsd_name": "nginx-nsdg",
+ "nsd_version": "1.0.0",
+ "nsdv_description": "Deploys a basic NGINX CNF"
+}
+```
+
+- **publisher_name** - Name of the Publisher resource you want your definition published to. Created if it doesn't already exist.
+- **publisher_resource_group_name** - Resource group for the Publisher resource. Created if it doesn't already exist. For this quickstart, it's recommended you use the same Resource Group that you used when publishing the Network Function Definition.
+- **acr_artifact_store_name** - Name of the ACR Artifact Store resource. Created if it doesn't already exist.
+- **location** - The Azure location to use when creating resources.
+- **network_functions**:
+ - *publisher* - The name of the publisher that this NFDV is published under.
+ - *publisher_resource_group* - The resource group that the publisher is hosted in.
+ - *name* - The name of the existing Network Function Definition Group to deploy using this NSD.
+ - *version* - The version of the existing Network Function Definition to base this NSD on. This NSD is able to deploy any NFDV with deployment parameters compatible with this version.
+ - *publisher_offering_location* - The region that the NFDV is published to.
+ - *type* - Type of Network Function. Valid values are cnf or vnf.
+ - *multiple_instances* - Valid values are true or false. Controls whether the NSD should allow arbitrary numbers of this type of NF. If set to false only a single instance is allowed. Only supported on VNFs. For CNFs this value must be set to false.
+- **nsd_name** - The Network Service Design Group name. The collection of Network Service Design versions. Created if it doesn't already exist.
+- **nsd_version** - The version of the NSD being created. In the format of A.B.C.
+- **nsdv_description** - The description of the NSDV.
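+
+Before building, you can sanity-check that the `nsd_version` value follows the A.B.C format (a hedged sketch using only standard shell tools; the version value shown is the quickstart default):
+
+```bash
+# Check that nsd_version follows the A.B.C format expected by the NSD.
+nsd_version="1.0.0"
+if echo "${nsd_version}" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'; then
+  echo "valid nsd_version"
+else
+  echo "invalid nsd_version"
+fi
+```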
+
+## Build the Network Service Design (NSD)
+
+Initiate the build process for the Network Service Definition (NSD) using the following command:
+
+```azurecli
+az aosm nsd build -f input-cnf-nsd.json
+```
+After the build process completes, review the generated files to gain insights into the NSD's architecture and structure.
+
+These files are created:
+
+|Files |Description |
+|||
+|**artifact_manifest.bicep** | A bicep template for creating the Publisher and artifact stores. |
+|**configMappings** | Converts the config group value inputs to the deployment parameters required for each NF. |
+|**nsd_definition.bicep** | A bicep template for creating the NSDV itself. |
+|**schemas** | Defines the inputs required in the config group values for this NSDV. |
+|**nginx-nfdg_nf.bicep** | A bicep template for deploying the NF. Uploaded to the artifact store. |
+
+## Publish the Network Service Design (NSD)
+
+To publish the Network Service Design (NSD) and its associated artifacts, issue the following command:
+
+```azurecli
+az aosm nsd publish -f input-cnf-nsd.json
+```
+When the Publish process is complete, navigate to your Publisher Resource Group to observe and review the resources and artifacts that were produced.
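+
+If you prefer the CLI to the portal, you can list what was produced (a sketch; the resource group name assumes the quickstart defaults from the input file):
+
+```azurecli
+az resource list --resource-group nginx-publisher-rg --output table
+```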
+
+## Next steps
+
+- [Quickstart: Prerequisites for Operator and Containerized Network Function (CNF)](quickstart-containerized-network-function-operator.md)
operator-service-manager Quickstart Containerized Network Function Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-containerized-network-function-operator.md
+
+ Title: Prerequisites for Operator and Containerized Network Function (CNF)
+description: Install the necessary prerequisites for Operator and Containerized Network Function (CNF).
+ Last updated : 09/07/2023
+# Quickstart: Prerequisites for Operator and Containerized Network Function (CNF)
+
+ This quickstart contains the prerequisite tasks for Operator and Containerized Network Function (CNF). While it's possible to automate these tasks within your NSD (Network Service Definition), in this quickstart, the actions are performed manually.
+
+> [!NOTE]
+> The tasks presented in this article may require some time to complete.
+
+## Permissions
+
+To complete these prerequisites for Operator and Containerized Network Function, you need an Azure subscription where you have the *Contributor* role (to create a Resource Group) and the ability to attain the *Owner* or *User Access Administrator* role over that Resource Group. Alternatively, you need an existing Resource Group where you have the *Owner* or *User Access Administrator* role.
+
+You also need the *Owner* or *User Access Administrator* role in the Network Function Definition Publisher Resource Group. The Network Function Definition Publisher Resource Group was created in [Quickstart: Publish Nginx container as Containerized Network Function (CNF)](quickstart-publish-containerized-network-function-definition.md) and named nginx-publisher-rg in the input.json file.
+
+## Set environment variables
+
+Adapt the environment variable settings and references as needed for your particular environment. For example, in Windows PowerShell, you would set the environment variables as follows:
+
+```powershell
+$env:ARC_RG=<my rg>
+```
+
+To reference an environment variable in PowerShell, use `$env:ARC_RG`. In the Bash environment, set the following variables:
+
+```bash
+export resourceGroup=<replace with resourcegroup name>
+export location=<region>
+export clusterName=<replace with clustername>
+export customlocationId=${clusterName}-custom-location
+export extensionId=${clusterName}-extension
+```
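+
+Optionally, guard against unset variables before continuing (a sketch using POSIX parameter expansion; the messages are illustrative):
+
+```bash
+# Fail fast with a clear message if a required variable is empty or unset.
+: "${resourceGroup:?set resourceGroup before continuing}"
+: "${location:?set location before continuing}"
+echo "environment looks good"
+```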
+
+## Create Resource Group
+
+Create a Resource Group to host your Azure Kubernetes Service (AKS) cluster.
+
+```azurecli
+az account set --subscription <subscription>
+az group create -n ${resourceGroup} -l ${location}
+```
+
+## Provision Azure Kubernetes Service (AKS) cluster
+
+Follow the instructions here [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md) to create the Azure Kubernetes Service (AKS) cluster within the previously created Resource Group.
+
+> [!NOTE]
+> Ensure that `agentCount` is set to 1. Only one node is required at this time.
+
+```azurecli
+az aks create -g ${resourceGroup} -n ${clusterName} --node-count 1 --generate-ssh-keys
+```
+
+## Enable Azure Arc
+
+Enable Azure Arc for the Azure Kubernetes Service (AKS) cluster. Follow the prerequisites outlined in
+[Create and manage custom locations on Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/custom-locations.md).
+
+## Retrieve the config file for AKS cluster
+```azurecli
+az aks get-credentials --resource-group ${resourceGroup} --name ${clusterName}
+```
+
+## Create a connected cluster
+
+Create the cluster:
+
+```azurecli
+az connectedk8s connect --name ${clusterName} --resource-group ${resourceGroup}
+```
+
+## Register your subscription
+Register your subscription to the Microsoft.ExtendedLocation resource provider:
+
+```azurecli
+az provider register --namespace Microsoft.ExtendedLocation
+```
+
+### Enable custom locations
+
+Enable custom locations on the cluster:
+
+```azurecli
+az connectedk8s enable-features -n ${clusterName} -g ${resourceGroup} --features cluster-connect custom-locations
+```
+
+### Connect cluster
+
+Connect the cluster:
+
+```azurecli
+az connectedk8s connect --name ${clusterName} -g ${resourceGroup} --location $location
+```
+
+### Create extension
+
+Create an extension:
+
+```azurecli
+az k8s-extension create -g ${resourceGroup} --cluster-name ${clusterName} --cluster-type connectedClusters --name ${extensionId} --extension-type microsoft.azure.hybridnetwork --release-train preview --scope cluster
+```
+
+### Create custom location
+
+Create a custom location:
+
+```azurecli
+export ConnectedClusterResourceId=$(az connectedk8s show --resource-group ${resourceGroup} --name ${clusterName} --query id -o tsv)
+export ClusterExtensionResourceId=$(az k8s-extension show -c $clusterName -n $extensionId -t connectedClusters -g ${resourceGroup} --query id -o tsv)
+az customlocation create -g ${resourceGroup} -n ${customlocationId} --namespace "azurehybridnetwork" --host-resource-id $ConnectedClusterResourceId --cluster-extension-ids $ClusterExtensionResourceId
+```
+
+## Retrieve custom location value
+
+Retrieve the Custom location value. You need this information to fill in the Configuration Group values for your Site Network Service (SNS).
+
+Search for the name of the Custom location (customLocationId) in the Azure portal, then select **Properties**. Locate the full Resource ID under the **Essentials** information area, in the field named **ID**.
+
+> [!TIP]
+> The full Resource ID has a format of: /subscriptions/{subscriptionId}/resourcegroups/{resourceGroupName}/providers/microsoft.extendedlocation/customlocation/{customLocationName}
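+
+You can also retrieve the same Resource ID directly from the CLI (a sketch; assumes the `resourceGroup` and `customlocationId` variables set earlier in this quickstart):
+
+```azurecli
+az customlocation show --resource-group ${resourceGroup} --name ${customlocationId} --query id --output tsv
+```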
+
+## Create User Assigned Managed Identity for the Site Network Service
+
+1. Save the following Bicep script locally as *prerequisites.bicep*.
+
+    ```bicep
+ param location string = resourceGroup().location
+ param identityName string = 'identity-for-nginx-sns'
+
+
+ resource managedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
+ name: identityName
+ location: location
+ }
+ output managedIdentityId string = managedIdentity.id
+ ```
+
+1. Start the deployment of the User Assigned Managed Identity by issuing the following command.
+
+ ```azurecli
+ az deployment group create --name prerequisites --resource-group ${resourceGroup} --template-file prerequisites.bicep
+ ```
+
+    The deployment creates a managed identity named *identity-for-nginx-sns*.
+
+## Retrieve Resource ID for managed identity
+
+1. Run the following command to find the resource ID of the created managed identity.
+
+ ```azurecli
+ az deployment group list -g ${resourceGroup} | jq -r --arg Deployment prerequisites '.[] | select(.name == $Deployment).properties.outputs.managedIdentityId.value'
+ ```
+
+1. Copy and save the output, which is the resource identity. You need this output when you create the Site Network Service.
+
+## Update Site Network Service (SNS) permissions
+
+To perform these tasks, you need either the 'Owner' or 'User Access Administrator' role in both the operator and the Network Function Definition Publisher Resource Groups. You created the operator Resource Group in prior tasks. The Network Function Definition Publisher Resource Group was created in [Quickstart: Publish Nginx container as Containerized Network Function (CNF)](quickstart-publish-containerized-network-function-definition.md) and named nginx-publisher-rg in the input.json file.
+
+In prior steps, you created a Managed Identity named *identity-for-nginx-sns* inside your operator Resource Group. This identity plays a crucial role in deploying the Site Network Service (SNS). Follow the steps in the next sections to grant the identity the *Contributor* role over the Publisher Resource Group and the *Managed Identity Operator* role over itself. Through this identity, the Site Network Service (SNS) attains the required permissions.
+
+### Grant Contributor role over publisher Resource Group to Managed Identity
+
+1. Access the Azure portal and open the Publisher Resource Group created when publishing the Network Function Definition.
+
+1. In the side menu of the Resource Group, select **Access Control (IAM)**.
+
+1. Choose Add **Role Assignment**.
+
+ :::image type="content" source="media/add-role-assignment-publisher-resource-group-containerized.png" alt-text="Screenshot showing the publisher resource group add role assignment.":::
+
+1. Under the **Privileged administrator roles** category, pick *Contributor*, then proceed with **Next**.
+
+ :::image type="content" source="media/privileged-admin-roles-contributor-resource-group.png" alt-text="Screenshot showing the privileged administrator role with contributor selected.":::
+
+1. Select **Managed identity**.
+
+1. Choose **+ Select members** then find and choose the user-assigned managed identity **identity-for-nginx-sns**.
+
+ :::image type="content" source="media/how-to-create-user-assigned-managed-identity-select-members.png" alt-text="Screenshot showing the select managed identities with user assigned managed identity.":::
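+
+The same assignment can also be scripted (a sketch; it assumes the quickstart's default names and that you have permission to create role assignments):
+
+```azurecli
+principalId=$(az identity show --resource-group ${resourceGroup} --name identity-for-nginx-sns --query principalId --output tsv)
+scope=$(az group show --name nginx-publisher-rg --query id --output tsv)
+az role assignment create --assignee-object-id ${principalId} --assignee-principal-type ServicePrincipal --role Contributor --scope ${scope}
+```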
+
+### Grant Managed Identity Operator role to itself
+
+1. Go to the Azure portal and search for **Managed Identities**.
+
+1. Select *identity-for-nginx-sns* from the list of **Managed Identities**.
+
+1. On the side menu, select **Access Control (IAM)**.
+
+1. Choose **Add Role Assignment**.
+
+ :::image type="content" source="media/how-to-create-user-assigned-managed-identity-operator.png" alt-text="Screenshot showing identity for nginx sns add role assignment.":::
+
+1. Select the **Managed Identity Operator** role then proceed with **Next**.
+
+ :::image type="content" source="media/add-role-assignment-managed-identity-operator-containerized.png" alt-text="Screenshot showing add role assignment with managed identity operator selected.":::
+
+1. Select **Managed identity**.
+
+1. Select **+ Select members** and navigate to the user-assigned managed identity called *identity-for-nginx-sns* and proceed with the assignment.
+
+ :::image type="content" source="media/how-to-create-user-assigned-managed-identity-select-members.png" alt-text="Screenshot showing the select managed identities with user assigned managed identity.":::
+
+1. Select **Review and assign**.
+
+Completion of all the tasks outlined in these articles ensures that the Site Network Service (SNS) has the necessary permissions to function effectively within the specified Azure environment.
+
+## Next steps
+
+- [Quickstart: Create a Containerized Network Functions (CNF) Site with Nginx](quickstart-containerized-network-function-create-site.md)
operator-service-manager Quickstart Containerized Network Function Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-containerized-network-function-prerequisites.md
+
+ Title: Prerequisites for Using Azure Operator Service Manager
+description: Use this Quickstart to install and configure the necessary prerequisites for Azure Operator Service Manager
+ Last updated : 09/08/2023
+# Quickstart: Complete the prerequisites to deploy a Containerized Network Function in Azure Operator Service Manager
+
+In this Quickstart, you complete the tasks necessary prior to using the Azure Operator Service Manager (AOSM).
+
+## Prerequisites
+
+Contact your Microsoft account team to register your Azure subscription for access to Azure Operator Service Manager (AOSM) or express your interest through the [partner registration form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR7lMzG3q6a5Hta4AIflS-llUMlNRVVZFS00xOUNRM01DNkhENURXU1o2TS4u).
+
+## Download and install Azure CLI
+
+Use the Bash environment in Azure Cloud Shell. For more information, see [Start the Cloud Shell](/azure/cloud-shell/quickstart?tabs=azurecli).
+
+If you prefer to run CLI reference commands locally, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
+
+If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+
+If you're using a local installation, sign into the Azure CLI using the `az login` command and complete the prompts displayed in your terminal to finish authentication. For more sign-in options, refer to [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
+
+### Install Azure Operator Service Manager (AOSM) CLI extension
+
+Install the Azure Operator Service Manager (AOSM) CLI extension using this command:
+
+```azurecli
+az extension add --name aosm
+```
+1. Run `az version` to see the version and dependent libraries that are installed.
+1. Run `az upgrade` to upgrade to the current version of Azure CLI.
+
+## Register and verify required resource providers
+
+Before you begin using Azure Operator Service Manager, register the required resource providers by executing the following commands. This registration process can take up to 5 minutes.
+
+```azurecli
+# Register Resource Provider
+az provider register --namespace Microsoft.HybridNetwork
+az provider register --namespace Microsoft.ContainerRegistry
+```
+
+Verify the registration status of the resource providers. Execute the following commands.
+
+```azurecli
+# Query the Resource Provider
+az provider show -n Microsoft.HybridNetwork --query "{RegistrationState: registrationState, ProviderName: namespace}"
+az provider show -n Microsoft.ContainerRegistry --query "{RegistrationState: registrationState, ProviderName: namespace}"
+```
+
+> [!NOTE]
+> It may take a few minutes for the resource provider registration to complete. Once the registration is successful, you can proceed with using the Azure Operator Service Manager (AOSM).
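+
+If you script this step, a small helper can decide when it's safe to proceed (a sketch; the `az` call is shown commented out as an assumption about your environment):
+
+```bash
+# Return success only when a provider's registrationState reads Registered.
+is_registered() {
+  [ "$1" = "Registered" ]
+}
+
+# Example usage (uncomment the az call to poll a real provider):
+# state=$(az provider show -n Microsoft.HybridNetwork --query registrationState -o tsv)
+state="Registered"
+if is_registered "${state}"; then
+  echo "ready to proceed"
+fi
+```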
+
+## Requirements for Containerized Network Function (CNF)
+
+For those utilizing Containerized Network Functions, it's essential to ensure that the following packages are installed on the machine from which you're executing the CLI:
+
+- **Install docker**, refer to [Install the Docker Engine](https://docs.docker.com/engine/install/).
+- **Install Helm**, refer to [Install Helm CLI](https://helm.sh/docs/intro/install/).
+
+### Configure Containerized Network Function (CNF) deployment
+
+For deployments of Containerized Network Functions (CNFs), it's crucial to have the following stored on the machine from which you're executing the CLI:
+
+- **Helm Packages with Schema** - These packages should be present on your local storage and referenced within the `input.json` configuration file. When following this quickstart, you download the required helm package.
+- **Creating a Sample Configuration File** - Generate an example configuration file for defining a CNF deployment. Issue this command to generate an `input.json` file that you need to populate with your specific configuration.
+
+ ```azurecli
+ az aosm nfd generate-config --definition-type cnf
+ ```
+
+- **Images for your CNF** - Here are the options:
+ - A reference to an existing Azure Container Registry that contains the images for your CNF. Currently, only one ACR and namespace are supported per CNF. The images to be copied from this ACR are populated automatically based on the helm package schema. You must have Reader/AcrPull permissions on this ACR. To use this option, fill in `source_registry` and optionally `source_registry_namespace` in the input.json file.
+ - The image name of the source docker image from local machine. This image name is for a limited use case where the CNF only requires a single docker image that exists in the local docker repository. To use this option, fill in `source_local_docker_image` in the input.json file. Requires docker to be installed. This quickstart guides you through downloading an nginx docker image to use for this option.
+- **Optional: Mapping File (path_to_mappings)**: Optionally, you can provide a file (on disk) named path_to_mappings. This file should mirror `values.yaml`, with your selected values replaced by deployment parameters. Doing so exposes them as parameters to the CNF. Or, you can leave this blank in `input.json` and the CLI generates the file. By default in this case, every value within `values.yaml` is exposed as a deployment parameter. Alternatively use the `--interactive` CLI argument to interactively make choices. This quickstart guides you through creation of this file.
+
+When configuring the `input.json` file, ensure that you list the Helm packages in the order they should be deployed. For instance, if package "A" must be deployed before package "B," your `input.json` should resemble the following structure:
+
+```json
+"helm_packages": [
+ {
+ "name": "A",
+ "path_to_chart": "Path to package A",
+ "path_to_mappings": "Path to package A mappings",
+ "depends_on": [
+ "Names of the Helm packages this package depends on"
+ ]
+ },
+ {
+ "name": "B",
+ "path_to_chart": "Path to package B",
+ "path_to_mappings": "Path to package B mappings",
+ "depends_on": [
+ "Names of the Helm packages this package depends on"
+ ]
+ }
+]
+```
+
+Following these guidelines ensures a well-organized and structured approach to deploying Containerized Network Functions (CNFs) with Helm packages and associated configurations.
+
+### Download nginx image to local docker repo
+
+For this quickstart, you download the nginx docker image to your local repository. The Azure Operator Service Manager (AOSM) Azure CLI extension pushes the image from there to the Azure Operator Service Manager (AOSM) Artifact Store ACR. The CLI extension also supports copying the image from an existing ACR. Copying the image is the expected default use case, but creating an ACR to copy from would slow down this quickstart, so that method isn't used here.
+
+Issue the following command: `docker pull nginx:stable`
+
+### Download sample Helm chart
+
+Download the sample Helm chart from here [Sample Helm chart](https://download.microsoft.com/download/8/3/d/83dd3dd3-7208-41c1-bd46-f616fb712084/nginxdemo-0.1.0.tgz) for use with this quickstart.
+
+## Dive into Helm charts
+
+This section introduces you to a basic Helm chart that sets up nginx and configures it to listen on a specified port. The Helm chart furnished in this section already incorporates a `values.schema.json` file.
+
+### Sample values.schema.json file
+
+```json
+{
+ "$schema": "http://json-schema.org/draft-07/schema",
+ "additionalProperties": true,
+ "properties": {
+ "affinity": {
+ "additionalProperties": false,
+ "properties": {},
+ "type": "object"
+ },
+ "fullnameOverride": {
+ "type": "string"
+ },
+ "image": {
+ "additionalProperties": false,
+ "properties": {
+ "pullPolicy": {
+ "type": "string"
+ },
+ "repository": {
+ "type": "string"
+ },
+ "tag": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "imagePullSecrets": {
+ "items": {
+ "anyOf": []
+ },
+ "type": "array"
+ },
+ "ingress": {
+ "additionalProperties": false,
+ "properties": {
+ "annotations": {
+ "additionalProperties": false,
+ "properties": {},
+ "type": "object"
+ },
+ "enabled": {
+ "type": "boolean"
+ },
+ "hosts": {
+ "items": {
+ "anyOf": [
+ {
+ "additionalProperties": false,
+ "properties": {
+ "host": {
+ "type": "string"
+ },
+ "paths": {
+ "items": {
+ "anyOf": []
+ },
+ "type": "array"
+ }
+ },
+ "type": "object"
+ }
+ ]
+ },
+ "type": "array"
+ },
+ "tls": {
+ "items": {
+ "anyOf": []
+ },
+ "type": "array"
+ }
+ },
+ "type": "object"
+ },
+ "nameOverride": {
+ "type": "string"
+ },
+ "nodeSelector": {
+ "additionalProperties": false,
+ "properties": {},
+ "type": "object"
+ },
+ "podSecurityContext": {
+ "additionalProperties": false,
+ "properties": {},
+ "type": "object"
+ },
+ "replicaCount": {
+ "type": "integer"
+ },
+ "resources": {
+ "additionalProperties": false,
+ "properties": {},
+ "type": "object"
+ },
+ "securityContext": {
+ "additionalProperties": false,
+ "properties": {},
+ "type": "object"
+ },
+ "service": {
+ "additionalProperties": false,
+ "properties": {
+ "port": {
+ "type": "integer"
+ },
+ "type": {
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "serviceAccount": {
+ "additionalProperties": false,
+ "properties": {
+ "create": {
+ "type": "boolean"
+ },
+ "name": {
+ "type": "null"
+ }
+ },
+ "type": "object"
+ },
+ "tolerations": {
+ "items": {
+ "anyOf": []
+ },
+ "type": "array"
+ }
+ },
+ "type": "object"
+}
+```
+
+Although this article doesn't delve into the intricacies of Helm, a few elements worth highlighting include:
+
+- **Service Port Configuration:** The `values.yaml` file presets the service port to 80.
+
+### Sample values.yaml file
+
+```yml
+# Default values for nginxdemo.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+
+replicaCount: 1
+
+
+image:
+
+ # Repository gets overwritten by AOSM to the Artifact Store ACR, however we've hard-coded the image name and tag in deployment.yaml
+
+ repository: overwriteme
+ tag: stable
+ pullPolicy: IfNotPresent
+
+imagePullSecrets: []
+nameOverride: ""
+fullnameOverride: ""
+
+
+serviceAccount:
+
+ # Specifies whether a service account should be created
+
+ create: false
+
+ # The name of the service account to use.
+ # If not set and create is true, a name is generated using the fullname template
+
+ name:
+
+
+podSecurityContext:
+
+ {}
+
+ # fsGroup: 2000
+
+
+securityContext:
+
+ {}
+ # capabilities:
+ # drop:
+ # - ALL
+ # readOnlyRootFilesystem: true
+ # runAsNonRoot: true
+ # runAsUser: 1000
+
+
+service:
+
+ type: ClusterIP
+ port: 80
+
+
+ingress:
+
+ enabled: false
+ annotations:
+
+ {}
+
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+
+ hosts:
+
+ - host: chart-example.local
+ paths: []
+
+
+ tls: []
+
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+
+
+resources:
+
+ {}
+
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+
+nodeSelector: {}
+
+
+tolerations: []
+
+
+affinity: {}
+```
+
+- **Port References:** This port is used in multiple locations:
+
+  - Within `service.yaml` as `{{ .Values.service.port }}`
+
+### Sample service.yaml file
+
+```yml
+apiVersion: v1
+kind: Service
+
+metadata:
+ name: {{ include "nginxdemo.fullname" . }}
+ labels:
+
+{{ include "nginxdemo.labels" . | indent 4 }}
+
+spec:
+ type: {{ .Values.service.type }}
+ ports:
+ - port: {{ .Values.service.port }}
+ targetPort: http
+ protocol: TCP
+ name: http
+
+ selector:
+ app.kubernetes.io/name: {{ include "nginxdemo.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+```
+
+- In `nginx_config_map.yaml`, represented as `{{ .Values.service.port }}`. This file corresponds to `/etc/nginx/conf.d/default.conf`, with a mapping established using a config map in `deployment.yaml`.
+
+### Sample nginx_config_map.yaml file
+
+```yml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: nginx-config
+# This writes the nginx config file to the ConfigMap and deployment.yaml mounts it as a volume
+# to the right place.
+
+data:
+ default.conf: |
+ log_format client '$remote_addr - $remote_user $request_time $upstream_response_time '
+ '[$time_local] "$request" $status $body_bytes_sent $request_body "$http_referer" '
+ '"$http_user_agent" "$http_x_forwarded_for"';
+
+ server {
+ listen 80;
+ listen {{ .Values.service.port }};
+ listen [::]:80;
+ server_name localhost;
+
+ access_log /var/log/nginx/host.access.log client;
+
+ location / {
+ root /usr/share/nginx/html;
+ index https://docsupdatetracker.net/index.html index.htm;
+ error_page 405 =200 $uri;
+ }
+
+ #error_page 404 /404.html;
+ # redirect server error pages to the static page /50x.html
+ #
+ error_page 500 502 503 504 /50x.html;
+ location = /50x.html {
+ root /usr/share/nginx/html;
+ }
+
+ location = /cnf/test {
+ error_page 405 =200 $uri;
+ }
+
+
+ location = /post_thing {
+ # turn off logging here to avoid double logging
+ access_log off;
+ error_page 405 =200 $uri;
+ }
+ }
+```
+
+**Deployment Configuration:** The `deployment.yaml` file showcases specific lines pertinent to `imagePullSecrets` and `image`. Be sure to observe their structured format, as Azure Operator Service Manager (AOSM) furnishes the necessary values for these fields during deployment. For more information, see [Helm package requirements](helm-requirements.md).
+
+**Sample deployment.yaml file**
+
+```yml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: {{ include "nginxdemo.fullname" . }}
+ labels:
+{{ include "nginxdemo.labels" . | indent 4 }}
+
+spec:
+ replicas: {{ .Values.replicaCount }}
+ selector:
+ matchLabels:
+ app.kubernetes.io/name: {{ include "nginxdemo.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: {{ include "nginxdemo.name" . }}
+ app.kubernetes.io/instance: {{ .Release.Name }}
+
+ spec:
+ # Copied from sas
+ imagePullSecrets: {{ mustToPrettyJson (ternary (list ) .Values.imagePullSecrets (kindIs "invalid" .Values.imagePullSecrets)) }}
+ serviceAccountName: {{ template "nginxdemo.serviceAccountName" . }}
+ securityContext:
+ {{- toYaml .Values.podSecurityContext | nindent 8 }}
+ containers:
+ - name: {{ .Chart.Name }}
+ securityContext:
+ {{- toYaml .Values.securityContext | nindent 12 }}
+ # Want this to evaluate to acr-name.azurecr.io/nginx:stable (or specific version)
+ # docker tag nginx:stable acr-name.azurecr.io/nginx:stable
+ # docker push acr-name.azurecr.io/nginx:stable
+ # Image hard coded to that put in the Artifact Store ACR for this CNF POC
+ image: "{{ .Values.image.repository }}/nginx:stable"
+ imagePullPolicy: {{ .Values.image.pullPolicy }}
+ ports:
+ - name: http
+ containerPort: 80
+ protocol: TCP
+ livenessProbe:
+ httpGet:
+ path: /
+ port: http
+ readinessProbe:
+ httpGet:
+ path: /
+ port: http
+ resources:
+ {{- toYaml .Values.resources | nindent 12 }}
+ # Gets the nginx config from the configMap - see nginx_config_map.yaml
+ volumeMounts:
+ - name: nginx-config-volume
+ mountPath: /etc/nginx/conf.d/default.conf
+ subPath: default.conf
+ volumes:
+ - name: nginx-config-volume
+ configMap:
+ name: nginx-config
+ {{- with .Values.nodeSelector }}
+ nodeSelector:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.affinity }}
+ affinity:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+ {{- with .Values.tolerations }}
+ tolerations:
+ {{- toYaml . | nindent 8 }}
+ {{- end }}
+```
+
+## Next steps
+
+- [Quickstart: Publish Nginx container as Containerized Network Function (CNF)](quickstart-publish-containerized-network-function-definition.md)
operator-service-manager Quickstart Publish Containerized Network Function Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-publish-containerized-network-function-definition.md
+
+ Title: Publish a network function definition
+description: Learn how to publish a network function definition.
+Last updated : 09/07/2023
+# Quickstart: Publish Nginx container as Containerized Network Function (CNF)
+
+This quickstart describes how to use the `az aosm` Azure CLI extension to create and publish a basic Network Function Definition. Its purpose is to demonstrate the workflow for publishing Azure Operator Service Manager (AOSM) resources. The basic concepts presented here are meant to prepare users to build more complex services.
+
+## Prerequisites
+
+- An Azure account with an active subscription is required. If you don't have an Azure subscription, follow the instructions here [Start free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) to create an account before you begin.
+
+- The Contributor and AcrPush roles over this subscription in order to create a Resource Group, or an existing Resource Group where you have the Contributor role.
+
+- Complete the [Quickstart: Complete the prerequisites to deploy a Containerized Network Function in Azure Operator Service Manager](quickstart-containerized-network-function-prerequisites.md).
+
+## Create input file
+
+Create an input file for publishing the Network Function Definition. Execute the following command to generate the input configuration file for the Network Function Definition (NFD).
+
+```azurecli
+az aosm nfd generate-config --definition-type cnf
+```
+Execution of the preceding command generates an input.json file.
+
+> [!NOTE]
+> Edit the input.json file. Replace it with the values shown in the following sample. Save the file as **input-cnf-nfd.json**.
+
+> [!NOTE]
+> For this quickstart, we use source_local_docker_image. For other CNFs you create in the future, you have the option of using a reference to an existing Azure Container Registry that contains the images for your CNF. Currently, only one ACR and namespace is supported per CNF. The images to be copied from this ACR are populated automatically based on the helm package schema. To use this option in the future, fill in `source_registry` and optionally `source_registry_namespace` in the input.json file. You must have Reader/AcrPull permissions on this ACR.
+
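For reference, if you later use the existing-ACR option described in the note above, the `images` section of the input file might instead look like the following sketch. The registry and namespace names here are placeholders, not values used in this quickstart:

```json
"images": {
    "source_registry": "<source-acr-name>.azurecr.io",
    "source_registry_namespace": "<namespace>"
}
```
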
+Here's sample input-cnf-nfd.json file:
+
+```json
+{
+ "publisher_name": "nginx-publisher",
+ "publisher_resource_group_name": "nginx-publisher-rg",
+ "nf_name": "nginx",
+ "version": "1.0.0",
+ "acr_artifact_store_name": "nginx-nsd-acr",
+ "location": "uksouth",
+ "images": {
+ "source_local_docker_image": "nginx:stable"
+ },
+ "helm_packages": [
+ {
+ "name": "nginxdemo",
+ "path_to_chart": "../nginxdemo-0.1.0.tgz",
+ "path_to_mappings": "",
+ "depends_on": []
+ }
+ ]
+}
+```
+- **publisher_name** - Name of the Publisher resource you want your definition published to. Created if it doesn't already exist.
+- **publisher_resource_group_name** - Resource group for the Publisher resource. Created if it doesn't already exist.
+- **acr_artifact_store_name** - Name of the ACR Artifact Store resource. Created if it doesn't already exist.
+- **location** - The Azure location to use when creating resources.
+- **nf_name** - The name of the NF definition.
+- **version** - The version of the NF definition in A.B.C format.
+- **images**:
+ - *source_local_docker_image* - Optional. The image name of the source docker image from your local machine. For limited use case where the CNF only requires a single docker image that exists in the local docker repository.
+- **helm_packages**:
+ - *name* - The name of the Helm package.
+ - *path_to_chart* - The file path of Helm Chart on the local disk. Accepts .tgz, .tar or .tar.gz. Use Linux slash (/) file separator even if running on Windows. The path should be an absolute path or the path relative to the location of the `input.json` file.
+ - *path_to_mappings* - The file path (absolute or relative to `input.json`) of value mappings on the local disk where chosen values are replaced with deploymentParameter placeholders. Accepts .yaml or .yml. If left as a blank string, a value mappings file is generated with every value mapped to a deployment parameter. Use a blank string and `--interactive` on the build command to interactively choose which values to map.
+ - *depends_on* - Names of the Helm packages this package depends on. Leave as an empty array if there are no dependencies.
+
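Before running the build, it can be worth sanity-checking the version field locally, since the CLI expects the A.B.C format described above. This is a plain-shell sketch, not part of the `az aosm` tooling:

```shell
# Sketch: confirm the NFD version from the input file uses the required
# A.B.C format before building. "1.0.0" is the value from the sample
# input-cnf-nfd.json above.
VERSION="1.0.0"
if printf '%s' "$VERSION" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "version ok"
else
  echo "version invalid"
fi
```
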
+## Build the Network Function Definition (NFD)
+
+To construct the Network Function Definition (NFD), initiate the build process in the interactive mode. This mode allows you to selectively expose values from `values.yaml` as deploymentParameters.
+
+```azurecli
+az aosm nfd build -f input-cnf-nfd.json --definition-type cnf --interactive
+```
+During the interactive process, you can respond with 'n' (no) for all the options except the following two:
+
+- To expose the parameter `serviceAccount_create`, respond with 'y' (yes)
+- To expose the parameter `service_port`, respond with 'y' (yes)
+
+Once the build is complete, examine the generated files to gain a better understanding of the Network Function Definition (NFD) structure. These files are created:
+
+|File |Description |
+|||
+|configMappings | Maps the deployment parameters for the Network Function Definition Version (NFDV) to the values required for the helm chart. |
+|generatedValuesMappings | The yaml output of interactive mode that created configMappings. Edit and rerun the command if necessary. |
+|schemas | Defines the deployment parameters required to create a Network Function (NF) from this Network Function Definition Version (NFDV). |
+|cnfartifactmanifests.bicep | Bicep template for creating the artifact manifest. |
+|cnfdefinition.bicep | Bicep template for creating the Network Function Definition Version (NFDV) itself. |
+
+If errors were made during your interactive choices, there are two options to correct them:
+
+1. Rerun the command with the correct selections.
+1. Manually adjust the generated value mappings within the `generatedValuesMappings` folder. Then edit `path_to_mappings` in your input file (**input-cnf-nfd.json**) to reference the modified file path.
+
+## Publish the Network Function Definition and upload artifacts
+
+Execute the following command to publish the Network Function Definition (NFD) and upload the associated artifacts:
+
+```azurecli
+az aosm nfd publish -f input-cnf-nfd.json --definition-type cnf
+```
+When the command completes, inspect the resources within your Publisher Resource Group to review the created components and artifacts.
+
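As a quick check from the CLI, you can list what was created. This is a sketch that assumes the publisher resource group name from the sample input file above:

```azurecli
az resource list --resource-group nginx-publisher-rg --output table
```
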
+## Next steps
+
+- [Quickstart: Design a Containerized Network Function (CNF) Network Service Design with Nginx](quickstart-containerized-network-function-network-design.md)
operator-service-manager Quickstart Publish Virtualized Network Function Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-publish-virtualized-network-function-definition.md
+
+ Title: Publish an Ubuntu Virtual Machine (VM) as Virtual Network Function (VNF)
+description: Learn how to publish an Ubuntu Virtual Machine (VM) as Virtual Network Function (VNF)
+Last updated : 10/19/2023
+# Quickstart: Publish Ubuntu Virtual Machine (VM) as Virtual Network Function (VNF)
+
+This quickstart describes how to use the `az aosm` Azure CLI extension to create and publish a basic Network Function Definition. Its purpose is to demonstrate the workflow for publishing Azure Operator Service Manager (AOSM) resources. The basic concepts presented here are meant to prepare users to build more complex services.
+
+## Prerequisites
+
+- An Azure account with an active subscription is required. If you don't have an Azure subscription, follow the instructions here [Start free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) to create an account before you begin.
+
+- The Contributor role over this subscription in order to create a Resource Group, or an existing Resource Group where you have the Contributor role.
+
+- It's also assumed that you followed the prerequisites in [Quickstart: Complete the prerequisites to deploy a Virtualized Network Function in Azure Operator Service Manager](quickstart-virtualized-network-function-prerequisites.md)
+
+## Create input file
+
+Execute the following command to generate the input configuration file for the Network Function Definition (NFD).
+
+```azurecli
+az aosm nfd generate-config --definition-type vnf
+```
+
+Once you execute this command, an input.json file is generated.
+
+> [!NOTE]
+> Edit the input.json file, replacing its contents with the values shown in the sample. Save the file as **input-vnf-nfd.json**.
+
+Here's sample input-vnf-nfd.json file:
+
+```json
+{
+    "publisher_name": "ubuntu-publisher",
+    "publisher_resource_group_name": "ubuntu-publisher-rg",
+    "nf_name": "ubuntu-vm",
+    "version": "1.0.0",
+    "acr_artifact_store_name": "ubuntu-acr",
+    "location": "uksouth",
+    "blob_artifact_store_name": "ubuntu-blob-store",
+    "image_name_parameter": "imageName",
+    "arm_template": {
+        "file_path": "ubuntu-template.json",
+        "version": "1.0.0"
+    },
+    "vhd": {
+        "file_path": "livecd.ubuntu-cpc.azure.vhd",
+        "version": "1-0-0"
+    }
+}
+```
+
+| Variable | Description |
+|||
+|**publisher_name** | Name of the Publisher resource you want your definition published to. Created if it doesn't exist. |
+|**publisher_resource_group_name** | Resource group for the Publisher resource. Created if it doesn't exist. |
+|**acr_artifact_store_name** | Name of the ACR Artifact Store resource. Created if it doesn't exist. |
+|**location** | Azure location to use when creating resources. |
+|**nf_name** | Name of NF definition. |
+|**version** | Version of the NF definition in A.B.C format. |
+|**blob_artifact_store_name** | Name of the storage account Artifact Store resource. Created if it doesn't exist. |
+|**image_name_parameter** | The parameter name in the VM ARM template that specifies the name of the image to use for the VM. |
+|**arm_template** | *artifact_name*: Name of the artifact. |
+| | *file_path*: Optional. File path of the artifact you wish to upload from your local disk. Delete if not required. Relative paths are relative to the configuration file. On Windows, escape any backslash with another backslash. |
+| | *version*: Version of the artifact. For ARM templates, the version must be in format A.B.C. |
+|**vhd** | *artifact_name*: Name of the artifact. |
+| | *file_path*: Optional. File path of the artifact you wish to upload from your local disk. Delete if not required. Relative paths are relative to the configuration file. On Windows, escape any backslash with another backslash. |
+| | *blob_sas_url*: Optional. SAS URL of the blob artifact you wish to copy to your Artifact Store. Delete if not required. |
+| | *version*: Version of the artifact. For VHDs, the version must be in format A-B-C. |
+
+> [!IMPORTANT]
+> Each variable described in the previous table must be unique. For instance, the resource group name cannot already exist, and publisher and artifact store names must be unique in the region.
+
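The two version formats in the table are easy to mix up: ARM template versions use dots (A.B.C) while VHD versions use hyphens (A-B-C). A quick local sanity check, sketched in plain shell and not part of the `az aosm` tooling:

```shell
# Sketch: mirror the two version conventions from the table above.
ARM_VERSION="1.0.0"   # arm_template version from the sample input file
VHD_VERSION="1-0-0"   # vhd version from the sample input file
printf '%s' "$ARM_VERSION" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$' && echo "arm version ok"
printf '%s' "$VHD_VERSION" | grep -Eq '^[0-9]+-[0-9]+-[0-9]+$' && echo "vhd version ok"
```
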
+## Build the Network Function Definition (NFD)
+
+To construct the Network Function Definition (NFD), initiate the build process.
+
+```azurecli
+az aosm nfd build -f input-vnf-nfd.json --definition-type vnf
+```
+
+Once the build is complete, examine the generated files to better understand the Network Function Definition (NFD) structure.
+
+These files are created in a subdirectory called **nfd-bicep-ubuntu-template**:
+
+| File | Description |
+|||
+|**configMappings** |Directory containing files that map the deployment parameters for the Network Function Definition Version (NFDV) to the parameters required for the Virtual Machine (VM) ARM template.
+|**schemas** | Directory containing files that define the deployment parameters required to create a Network Function (NF) from this Network Function Definition Version (NFDV).
+|**vnfartifactmanifests.bicep** |Bicep template for creating the artifact manifests.
+|**Vnfdefinition.bicep** |Bicep template for creating the Network Function Definition Version (NFDV) itself.
+
+> [!NOTE]
+> If you made any errors, the only way to correct them is to rerun the command with the correct selections.
+
+## Publish the Network Function Definition and upload artifacts
+
+Execute the following command to publish the Network Function Definition (NFD) and upload the associated artifacts:
+
+```azurecli
+az aosm nfd publish -f input-vnf-nfd.json --definition-type vnf
+```
+
+When the command completes, inspect the resources within your Publisher Resource Group to observe the created components and artifacts.
+
+These resources are created:
+
+|Resource Name | Resource Type |
+|||
+|**ubuntu-vm-nfdg** | Network Function Definition.
+|**1.0.0** |Network Function Definition Version.
+|**ubuntu-publisher** |Publisher.
+|**ubuntu-vm-acr-manifest-1-0-0** |Publisher Artifact Manifest.
+|**ubuntu-vm-sa-manifest-1-0-0** |Publisher Artifact Manifest.
+|**ubuntu-acr** |Publisher Artifact Store.
+|**ubuntu-blob-store** |Publisher Artifact Store.
+
+> [!NOTE]
+> The creation of the artifact stores takes about 10 minutes. If the resource already exists, the process is considerably faster.
+
+## Next steps
+
+- [Quickstart: Design a Virtualized Network Function (VNF) Network Service Design](quickstart-virtualized-network-function-network-design.md)
operator-service-manager Quickstart Virtualized Network Function Create Site Network Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-virtualized-network-function-create-site-network-service.md
+
+ Title: Create a Site Network Service for Ubuntu Virtual Machine (VM) as Virtual Network Function (VNF)
+description: Learn how to create a Site Network Service (SNS) for Ubuntu Virtual Machine (VM) as Virtual Network Function (VNF)
+Last updated : 09/26/2023
+# Quickstart: Create a Site Network Service (SNS) for Ubuntu Virtual Machine (VM) as Virtualized Network Function (VNF)
+
+This quickstart describes the process of creating a Site Network Service (SNS) using the Azure portal. The Site Network Service (SNS) is an essential part of a Network Service Instance and is associated with a specific site. Each Site Network Service (SNS) instance references a particular version of a Network Service Design (NSD).
+
+## Prerequisites
+
+An Azure account with an active subscription is required. If you don't have an Azure subscription, follow the instructions here [Start free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) to create an account before you begin.
+
+This quickstart assumes you followed the prerequisites in these quickstarts:
+
+- [Quickstart: Prerequisites for Operator and Virtualized Network Function (VNF)](quickstart-virtualized-network-function-operator.md)
+- [Quickstart: Create a Virtualized Network Functions (VNF) Site](quickstart-virtualized-network-function-create-site.md)
+
+## Create Site Network Service (SNS)
+
+### Create resource
+
+1. In the Azure portal, enter *Site Network Services* into the search bar and select **Site Network Service** from the results.
+1. Select **+ Create**.
+
+ :::image type="content" source="media/create-site-network-service-virtual-network-function.png" alt-text="Screenshot showing the Create a resource page search for and select Site Network Service.":::
+
+1. In the **Basics** tab, enter or select the following information. Accept the defaults for the remaining settings.
+
+ |Setting|Value|
+ |||
+ |**Subscription**| Select your subscription.|
+ |**Resource group**| Select *operatorresourcegroup*.|
+ |**Name**| Enter *ubuntu-sns*.|
+ |**Region**| Select the location you used for your prerequisite resources.|
+ |**Site**| Enter *ubuntu-vm-site*.|
+ |**Managed Identity Type** | User Assigned. |
+ |**User Assigned Identity** |Select **identity-for-ubuntu-vm-sns**.|
+
+ :::image type="content" source="media/basics-tab-virtual-network-function.png" alt-text="Screenshot showing the Basics page where the details for the Site Network Service are input.":::
+
+### Choose Network Service Design
+
+1. On the **Choose a Network Service Design** page, select the Publisher, Network Service Design Resource and Network Service Design Version that you published earlier.
+
+ |Setting|Value|
+ |||
+ |**Publisher Offering Location**| Select **UK South**|
+ |**Publisher**| Select **ubuntu-publisher**|
+ |**Network Service Design resource**| Select **ubuntu-nsdg**|
+ |**Network Service Design version**| Select **1.0.0**|
+
+
+ :::image type="content" source="media/choose-network-service-design-virtual-network-function.png" alt-text="Screenshot showing the Choose a Network Service Design tab and Network Service Design resource.":::
+
+1. Select **Next**.
+
+### Set initial configuration
+
+1. From the **Set initial configuration** tab, choose **Create New**.
+1. Enter ubuntu-sns-cgvs into the name field.
+
+ :::image type="content" source="media/review-create-virtual-network-function.png" alt-text="Screenshot showing the Set initial configuration tab, then Review and Create.":::
+
+1. Copy and paste the following JSON file into the ubuntu-sns-cgvs dialog that appears. Edit the placeholders to contain your virtual network ID, your managed identity, and your SSH public key values.
+
+ ```json
+ {
+ "ubuntu-vm-nfdg": {
+ "deploymentParameters": {
+ "location": "uksouth",
+ "subnetName": "ubuntu-vm-subnet",
+ "virtualNetworkId": "/subscriptions/<subscription_id>/resourceGroups/<pre-requisites resource group>/providers/Microsoft.Network/virtualNetworks/ubuntu-vm-vnet",
+ "sshPublicKeyAdmin": "<Your public ssh key>",
+ "ubuntuVmName": "myUbuntuVm"
+ },
+ "ubuntu_vm_nfdg_nfd_version": "1.0.0"
+ },
+      "managedIdentity": "<managed-identity-resource-id>"
+ }
+ ```
+
+1. Refer to [Quickstart: Prerequisites for Operator and Virtualized Network Function (VNF)](quickstart-virtualized-network-function-operator.md) in the **Resource ID for the managed identity** section to see how to retrieve the managedIdentity resource ID.
+
+    Additionally, the sshPublicKeyAdmin value can be listed by executing `cat ~/.ssh/id_rsa.pub` or `cat ~/.ssh/id_dsa.pub`, or can be created by following [Generate new keys and Get public keys](/azure/virtual-machines/ssh-keys-portal).
+
+1. Select **Review + create**.
+1. Select **Create**.
+
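If you don't already have an SSH key pair for the sshPublicKeyAdmin value, one way to create one non-interactively is sketched below; the key path is an example only, not a requirement of the service:

```shell
# Sketch: generate a fresh RSA key pair with no passphrase and print the
# public half. Adjust KEY_PATH to a location you want to keep.
KEY_PATH="$(mktemp -d)/id_rsa_demo"
ssh-keygen -t rsa -b 2048 -f "$KEY_PATH" -N "" -q
cat "${KEY_PATH}.pub"
```

The printed public key is the value to paste into the sshPublicKeyAdmin placeholder.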
+### Wait for deployment
+
+Wait for the deployment to reach the 'Succeeded' state. After completion, your Virtual Network Function (VNF) should be up and running.
+
+### Access your Virtual Network Function (VNF)
+
+1. To access your Virtual Network Function (VNF), go to the Site Network Service object in the Azure portal.
+1. Select the link under **Current State -> Resources**. The link takes you to the managed resource group created by Azure Operator Service Manager.
+
+Congratulations! You have successfully created a Site Network Service for Ubuntu Virtual Machine (VM) as a Virtual Network Function (VNF) in Azure. You can now manage and monitor your Virtual Network Function (VNF) through the Azure portal.
operator-service-manager Quickstart Virtualized Network Function Create Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-virtualized-network-function-create-site.md
+
+ Title: Create a Virtualized Network Functions (VNF) Site
+description: Learn how to create a Virtualized Network Functions (VNF) Site
+Last updated : 09/14/2023
+# Quickstart: Create a virtualized network functions (VNF) site for an Ubuntu virtual machine
+
+This article shows you how to create a Site using the Azure portal. A site is the collection of assets that represent one or more instances of nodes in a network service that should be managed in a similar manner.
+
+A site can represent:
+
+- A physical location such as a datacenter or rack(s).
+- A node in the network that needs to be upgraded separately (early or late) versus other nodes.
+- Resources serving a particular class of customer.
+
+Sites can be within a single Azure region or an on-premises location. If collocated, they can span multiple NFVIs (such as multiple K8s clusters in a single Azure region).
+
+> [!IMPORTANT]
+> You must create a site prior to creating a site network service.
+
+## Prerequisites
+
+- An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- Complete the [Quickstart: Prerequisites for Operator and Virtualized Network Function (VNF)](quickstart-virtualized-network-function-operator.md)
+
+## Create a site
+
+1. Sign in to the [Azure portal](https://ms.portal.azure.com/).
+1. Prior to creating a site, navigate to the resource group you created in the [Quickstart: Design a Network Service Design (NSD) for Ubuntu Virtual Machine (VM) as a Virtualized Network Function (VNF)](quickstart-virtualized-network-function-network-design.md) and select **Network Service Design Version**.
+
+ :::image type="content" source="media/network-service-design-version.png" alt-text="Screenshot showing the network service design version used in creating your site.":::
+
+1. Select **NFVI from site**, select **View value as JSON**, and locate the "name" of the NFVI. Note this information; you'll need it in a later step.
+
+ :::image type="content" source="media/network-service-design-version-name.png" alt-text="Screenshot showing the Add the NFVIs table to enter the name, type and location of the NFVIs.":::
+
+1. Type Sites into the search and select **Sites** from the results.
+1. Select **+Create**.
+1. On the **Basics tab**, enter or select the following information:
+
+ |Setting |Value |
+ |||
+ |**Subscription** | Select the **Subscription**. |
+ |**Resource Group** | Select **OperatorResourceGroup**. |
+ |**Name** | Enter *ubuntu-vm-site*. This name should be unique in your subscription to avoid confusion when you create the SNS later. |
+ |**Region** | Select the location you used for your prerequisite resource. |
+
+ :::image type="content" source="media/create-site-basic-virtual-network-function.png" alt-text="Screenshot showing the Basic tab to enter Project details and Instance details for your site.":::
+
+> [!NOTE]
+> The site must be located in the same region as the [prerequisite resources](quickstart-virtualized-network-function-prerequisites.md).
+
+7. Navigate to the **Add NFVI** tab of the **Create site** screen and select **+ Add NFVI**.
+1. Enter the NFVI name you noted earlier. Select **NFVI type** as "Azure Core", set **NFVI location** to the location of your resources, and select **Add NFVI**.
+
+ :::image type="content" source="media/create-site-add-ubuntu.png" alt-text="Screenshot showing the NFVI tab where you enter the name, type and location of the NFVI.":::
+
+1. Select **Review + create**, then select **Create**.
+
+## Next steps
+
+- Complete [Quickstart: Create a Virtualized Network Function (VNF) Site Network Service (SNS) for Ubuntu Virtual Machine (VM)](quickstart-virtualized-network-function-create-site-network-service.md).
operator-service-manager Quickstart Virtualized Network Function Network Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-virtualized-network-function-network-design.md
+
+ Title: Design a Virtualized Network Function (VNF) for Ubuntu
+description: Learn how to design a Virtualized Network Function (VNF) for Ubuntu.
+Last updated : 10/19/2023
+# Quickstart: Design a Network Service Design (NSD) for Ubuntu Virtual Machine (VM) as a Virtualized Network Function (VNF)
+
+This quickstart describes how to use the `az aosm` Azure CLI extension to create and publish a basic Network Service Design.
+
+## Prerequisites
+
+An Azure account with an active subscription is required. If you don't have an Azure subscription, follow the instructions here [Start free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) to create an account before you begin.
+
+It's also assumed that you followed the prerequisites in [Quickstart: Publish Ubuntu Virtual Machine (VM) as Virtual Network Function (VNF)](quickstart-publish-virtualized-network-function-definition.md).
+
+## Create input file
+
+Create an input file for publishing the Network Service Design. Execute the following command to generate the input configuration file for the Network Service Design (NSD).
+
+```azurecli
+az aosm nsd generate-config
+```
+
+Once you execute this command, an input.json file is generated.
+
+> [!NOTE]
+> Edit the input.json file, replacing it with the values shown in the sample. Save the file as **input-vnf-nsd.json**.
+
+```json
+{
+ "location": "uksouth",
+ "publisher_name": "ubuntu-publisher",
+ "publisher_resource_group_name": "ubuntu-publisher-rg",
+ "acr_artifact_store_name": "ubuntu-acr",
+ "network_functions": [
+ {
+ "name": "ubuntu-vm-nfdg",
+ "version": "1.0.0",
+ "publisher_offering_location": "uksouth",
+ "type": "vnf",
+ "multiple_instances": false,
+ "publisher": "ubuntu-publisher",
+ "publisher_resource_group": "ubuntu-publisher-rg"
+ }
+ ],
+ "nsd_name": "ubuntu-nsdg",
+ "nsd_version": "1.0.0",
+ "nsdv_description": "Plain ubuntu VM"
+}
+```
+
+|Variable |Description |
+|||
+|**publisher_name** | Name of the Publisher resource you want your definition published to. Created if it doesn't exist. |
+|**publisher_resource_group_name** | Resource group for the Publisher resource. Created if it doesn't exist. |
+|**acr_artifact_store_name** | Name of the ACR Artifact Store resource. Created if it doesn't exist. |
+|**location** | Azure location to use when creating resources. |
+|**network-functions** | *publisher*: The name of the publisher that this NFDV is published under. |
+| | *publisher_resource_group*: The resource group that the publisher is hosted in. |
+| | *name*: The name of the existing Network Function Definition Group to deploy using this NSD. |
+| | *version*: The version of the existing Network Function Definition to base this NSD on. This NSD is able to deploy any NFDV with deployment parameters compatible with this version. |
+| | *publisher_offering_location*: The region that the NFDV is published to. |
+| | *type*: Type of Network Function. Valid values are cnf or vnf. |
+| | *multiple_instances*: Valid values are true or false. Controls whether the NSD should allow arbitrary numbers of this type of NF. If set to false, only a single instance is allowed. Only supported on VNFs. For CNFs, set to false. |
+|**nsd_name** | Network Service Design Group Name. The collection of Network Service Design Versions. Created if it doesn't exist. |
+|**nsd_version** | Version of the NSD to be created. The format should be A.B.C. |
+|**nsdv_description** | Description of the NSDV. |
+
+## Build the Network Service Design (NSD)
+
+Initiate the build process for the Network Service Design (NSD) using the following command:
+
+```azurecli
+az aosm nsd build -f input-vnf-nsd.json
+```
+After the build process completes, review the following generated files to gain insights into the NSD's architecture and structure.
+
+These files are created in a subdirectory called **nsd-bicep-templates**:
+
+|Files |Description |
+|||
+|**artifact_manifest.bicep** | A bicep template for creating the Publisher and artifact stores. |
+|**configMappings** | A directory containing files that convert the config group values inputs to the deployment parameters required for each NF. |
+|**nsd_definition.bicep** | A bicep template for creating the NSDV itself. |
+|**schemas** | A directory containing files that define the inputs required in the config group values for this NSDV. |
+|**ubuntu-vm-nfdg_nf.bicep** | A bicep template for deploying the NF. Uploaded to the artifact store. |
+
+## Publish the Network Service Design (NSD)
+
+To publish the Network Service Design (NSD) and its associated artifacts, issue the following command:
+
+```azurecli
+az aosm nsd publish -f input-vnf-nsd.json
+```
+When the publish process is complete, navigate to your Publisher Resource Group to review the resources and artifacts that were produced.
+
+These resources are created:
+
+|Resource Name |Resource Type |
+|||
+|**ubuntu-nsdg** | The Network Service Design. |
+|**1.0.0 (ubuntu-nsdg/1.0.0)** | The Network Service Design Version. |
+|**ubuntu-vm-nfdg-nf-acr-manifest-1-0-0** |Publisher Artifact Manifest.
+|**ubuntu_nsdg_ConfigGroupSchema** | The Configuration Group Schema. |
+
+## Next steps
+
+- [Quickstart: Prerequisites for Operator and Virtualized Network Function (VNF)](quickstart-virtualized-network-function-operator.md)
operator-service-manager Quickstart Virtualized Network Function Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-virtualized-network-function-operator.md
+
+ Title: Prerequisites for Operator and Virtualized Network Function (VNF)
+description: Install the necessary prerequisites for Operator and Virtualized Network Function (VNF).
+Last updated : 10/19/2023
+# Quickstart: Prerequisites for Operator and Virtualized Network Function (VNF)
+
+This quickstart contains the prerequisite tasks for Operator and Virtualized Network Function (VNF). While it's possible to automate these tasks within your NSD (Network Service Definition), in this quickstart, the actions are performed manually.
+
+## Deploy prerequisites for Virtual Machine (VM)
+
+1. Follow the steps in [Create resource groups](../azure-resource-manager/management/manage-resource-groups-cli.md) to create a resource group for the prerequisites in the same region as your Publisher resources. Start by signing in to the Azure CLI.
+
+ ```azurecli
+ az login
+ ```
+1. Select the active subscription using the subscription ID.
+
+ ```azurecli
+ az account set --subscription "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+ ```
+1. Create the Resource Group.
+
+ ```azurecli
+ az group create --name OperatorResourceGroup --location uksouth
+ ```
+
+ > [!NOTE]
+ > The Resource Group you create here is used for further deployment.
+
+1. Save the following Bicep script locally as *prerequisites.bicep*.
+
+    ```bicep
+ param location string = resourceGroup().location
+ param vnetName string = 'ubuntu-vm-vnet'
+ param vnetAddressPrefixes string
+ param subnetName string = 'ubuntu-vm-subnet'
+ param subnetAddressPrefix string
+ param identityName string = 'identity-for-ubuntu-vm-sns'
+
+ resource networkSecurityGroup 'Microsoft.Network/networkSecurityGroups@2022-05-01' ={
+ name: '${vnetName}-nsg'
+ location: location
+ }
+
+ resource virtualNetwork 'Microsoft.Network/virtualNetworks@2019-11-01' = {
+ name: vnetName
+ location: location
+ properties: {
+
+ addressSpace: {
+ addressPrefixes: [vnetAddressPrefixes]
+ }
+ subnets: [
+ {
+ name: subnetName
+ properties: {
+ addressPrefix: subnetAddressPrefix
+ networkSecurityGroup: {
+ id:networkSecurityGroup.id
+ }
+ }
+ }
+ ]
+ }
+ }
+
+ resource managedIdentity 'Microsoft.ManagedIdentity/userAssignedIdentities@2018-11-30' = {
+ name: identityName
+ location: location
+ }
+
+ output managedIdentityId string = managedIdentity.id
+ output vnetId string = virtualNetwork.id
+ ```
+
+1. Save the following json template locally as *prerequisites.parameters.json*.
+
+ ```json
+ {
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vnetAddressPrefixes": {
+ "value": "10.0.0.0/24"
+ },
+ "subnetAddressPrefix": {
+ "value": "10.0.0.0/28"
+ }
+ }
+ }
+ ```
+
+1. Ensure the scripts are saved locally.
+
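Optionally, before deploying you can ask Azure to validate the template and parameters without creating anything, using the standard `az deployment group validate` command against the files saved in the previous steps:

```azurecli
az deployment group validate --resource-group OperatorResourceGroup --template-file prerequisites.bicep --parameters prerequisites.parameters.json
```
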
+## Deploy Virtual Network
+
+1. Start the deployment of the Virtual Network. Issue the following command:
+
+ ```azurecli
+ az deployment group create --name prerequisites --resource-group OperatorResourceGroup --template-file prerequisites.bicep --parameters prerequisites.parameters.json
+ ```
+1. The command creates a Virtual Network, a Network Security Group, and a Managed Identity.
+
+
+
+## Locate Resource ID for managed identity
+
+1. **Sign in to the Azure portal**: Open a web browser and sign in to the Azure portal (https://portal.azure.com/) using your Azure account credentials.
+1. **Navigate to All Services**: Under *Identity*, select *Managed identities*.
+1. **Locate the Managed Identity**: In the list of managed identities, find and select the one named **identity-for-ubuntu-vm-sns**. You should now be on the overview page for that managed identity.
+1. **Locate ID**: Select the properties section of the managed identity. You should see various information about the identity. Look for the **ID** field.
+1. **Copy to clipboard**: Select the **Copy** button or icon next to the Resource ID.
+1. **Save copied Resource ID**: Save the copied Resource ID as this information is required for the **Config Group Values** when creating the Site Network Service.
+
+ :::image type="content" source="media/identity-for-ubuntu-vm-sns.png" alt-text="Screenshot showing Managed Identity Properties and ID under Essentials." lightbox="media/identity-for-ubuntu-vm-sns.png":::
+
+## Locate Resource ID for Virtual Network
+
+1. **Sign in to the Azure portal**: Open a web browser and sign in to the Azure portal (https://portal.azure.com/) using your Azure account credentials.
+1. **Navigate to Virtual Networks**: In the left-hand navigation pane, select *Virtual networks*.
+1. **Search for Virtual Networks**: In the list of virtual networks, you can either scroll through the list or use the search bar to find the *ubuntu-vm-vnet* virtual network.
+1. **Access Virtual Network**: Select the name of the *ubuntu-vm-vnet* virtual network. You should now be on the overview page for that virtual network.
+1. **Locate ID**: Select the properties section of the Virtual Network. You should see various information about the virtual network. Look for the **Resource ID** field.
+1. **Copy to clipboard**: Select the **Copy** button or icon next to the Resource ID to copy it to your clipboard.
+1. **Save copied Resource ID**: Save the copied Resource ID as this information is required for the **Config Group Values** when creating the Site Network Service.
+
+ :::image type="content" source="media/resource-id-ubuntu-vm-vnet.png" alt-text="Screenshot showing Virtual network Properties and the Resource ID.":::
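Azure resource IDs follow a fixed segment layout (`/subscriptions/{id}/resourceGroups/{rg}/providers/{namespace}/{type}/{name}`), so you can confirm that a copied ID points at the expected resource with plain shell string handling. This is an optional local check; the subscription ID below is a placeholder, so paste the value you copied:

```bash
# Illustrative resource ID; substitute the value copied from the portal
resource_id="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/OperatorResourceGroup/providers/Microsoft.Network/virtualNetworks/ubuntu-vm-vnet"

# Fields split on '/': field 5 = resource group, field 8 = resource type, last = name
rg=$(echo "$resource_id" | cut -d'/' -f5)
rtype=$(echo "$resource_id" | cut -d'/' -f8)
name="${resource_id##*/}"

echo "resource group: $rg"
echo "type: $rtype"
echo "name: $name"
```

If the printed resource group, type, or name doesn't match what you expect (*OperatorResourceGroup*, *virtualNetworks*, *ubuntu-vm-vnet*), you copied the wrong ID.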
+
+## Update Site Network Service (SNS) permissions
+
+To perform this task, you need either the 'Owner' or 'User Access Administrator' role in the respective Resource Group.
+In prior steps, you created a Managed Identity named *identity-for-ubuntu-vm-sns* inside your reference resource group. This identity plays a crucial role in deploying the Site Network Service (SNS). Grant the identity 'Contributor' permissions for the relevant resources. These permissions facilitate the deployment of the Virtual Network Function and the connection of the Virtual Machine (VM) to the Virtual Network (VNET). Through this identity, the Site Network Service (SNS) attains the required permissions.
+
+### Grant Contributor role over Virtual Network to Managed Identity
+
+1. Access the Azure portal and open the Resource Group created earlier, in this case *OperatorResourceGroup*.
+1. Locate and select the Virtual Network named **ubuntu-vm-vnet**.
+1. In the side menu of the Virtual Network, select **Access Control (IAM)**.
+1. Choose **Add Role Assignment**.
+
+ :::image type="content" source="media/add-role-assignment-ubuntu-vm-vnet.png" alt-text="Screenshot showing Virtual Access control (IAM) area to Add role assignment.":::
+
+1. Under the **Privileged administrator roles** category, select *Contributor*, then proceed with **Next**.
+
+ :::image type="content" source="media/privileged-admin-roles-contributor-ubuntu.png" alt-text="Screenshot showing the Add role assignment window and Contributor with description.":::
+
+1. Select **Managed identity**.
+1. Choose **+ Select members** then find and choose the user-assigned managed identity **identity-for-ubuntu-vm-sns**.
+1. Select **Review and assign**.
+
+ :::image type="content" source="media/managed-identity-select-members-ubuntu.png" alt-text="Screenshot showing Managed identity and + Select members.":::
+
+### Grant Contributor role over publisher Resource Group to Managed Identity
+
+1. Access the Azure portal and open the Publisher Resource Group created when publishing the Network Function Definition, in this case *ubuntu-publisher-rg*.
+
+1. In the side menu of the Resource Group, select **Access Control (IAM)**.
+1. Choose **Add Role Assignment**.
+
+ :::image type="content" source="media/how-to-assign-custom-role-resource-group.png" alt-text="Screenshot showing the ubuntu publisher resource screen where you add role assignment.":::
+
+
+1. Under the **Privileged administrator roles** category, select *Contributor*, then proceed with **Next**.
+
+ :::image type="content" source="media/privileged-admin-roles-contributor-resource-group.png" alt-text="Screenshot show privileged administrator roles with owner of contributor.":::
+
+1. Select **Managed identity**.
+1. Choose **+ Select members** then find and choose the user-assigned managed identity **identity-for-ubuntu-vm-sns**.
+1. Select **Review and assign**.
+
+ :::image type="content" source="media/managed-identity-resource-group-select-members-ubuntu.png" alt-text="Screenshot showing the add role assignment screen with review + assign highlighted.":::
+
+### Grant Managed Identity Operator role to itself
+
+1. Go to the Azure portal and search for **Managed Identities**.
+1. Select *identity-for-ubuntu-vm-sns* from the list of **Managed Identities**.
+1. On the side menu, select **Access Control (IAM)**.
+1. Choose **Add Role Assignment**.
+
+ :::image type="content" source="media/quickstart-virtual-network-function-operator-add-role-assignment-screen.png" alt-text="Screenshot showing the identity for ubuntu VM SNS add role assignment.":::
+
+1. Select the **Managed Identity Operator** role then proceed with **Next**.
+
+ :::image type="content" source="media/managed-identity-operator-role-virtual-network-function.png" alt-text="Screenshot showing the Managed Identity Operator role.":::
+
+1. Select **Managed identity**.
+1. Select **+ Select members** and navigate to the user-assigned managed identity called *identity-for-ubuntu-vm-sns* and proceed with the assignment.
+
+ :::image type="content" source="media/managed-identity-user-assigned-ubuntu.png" alt-text="Screenshot showing the Add role assignment screen with Managed identity selected.":::
+
+1. Select **Review and assign**.
+
+Completion of all the tasks outlined in this article ensures that the Site Network Service (SNS) has the necessary permissions to function effectively within the specified Azure environment.
+
+## Next steps
+
+- [Quickstart: Create a Virtualized Network Functions (VNF) Site](quickstart-virtualized-network-function-create-site.md).
operator-service-manager Quickstart Virtualized Network Function Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-virtualized-network-function-prerequisites.md
++
+ Title: Prerequisites for Using Azure Operator Service Manager as Virtual Network Function (VNF)
+description: Use this Quickstart to install and configure the necessary prerequisites for Azure Operator Service Manager as Virtual Network Function (VNF)
++++ Last updated : 10/19/2023++
+# Quickstart: Complete the prerequisites to deploy a Virtualized Network Function in Azure Operator Service Manager
+
+Before you begin using Azure Operator Service Manager, ensure you have registered the required resource providers and installed the necessary tools to interact with the service.
+
+## Prerequisites
+
+Contact your Microsoft account team to register your Azure subscription for access to Azure Operator Service Manager (AOSM) or express your interest through the [partner registration form](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR7lMzG3q6a5Hta4AIflS-llUMlNRVVZFS00xOUNRM01DNkhENURXU1o2TS4u).
+
+## Download and install Azure CLI
+
+You can use the Bash environment in Azure Cloud Shell. For more information, see [Quickstart for Bash in Azure Cloud Shell](../cloud-shell/quickstart.md).
+
+If you prefer to run CLI reference commands locally, install the Azure CLI using [How to install the Azure CLI](/cli/azure/install-azure-cli).
+
+If your machine runs Windows or macOS, consider running the Azure CLI in a Docker container. For more information, see [How to run the Azure CLI in a Docker container](/cli/azure/run-azure-cli-docker).
+
+For a local installation, sign in to the Azure CLI using the `az login` command.
+
+To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli).
+
+## Sign in with Azure CLI
+
+To sign in with Azure CLI, issue the following command.
+
+```azurecli
+az login
+```
+
+## Select subscription
+
+To change the active subscription using the subscription ID, issue the following command.
+
+```azurecli
+az account set --subscription "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+```
+## Install Azure Operator Service Manager (AOSM) CLI extension
+
+To install the Azure Operator Service Manager CLI extension, issue the following command.
+
+```azurecli
+az extension add --name aosm
+```
+
+Run `az version` to determine the version and dependent libraries installed. Upgrade to the latest version by running the `az upgrade` command.
+
+## Register required resource providers
+
+Before using Azure Operator Service Manager, you must register the required resource providers by executing the following commands. The registration process can take up to five minutes.
+
+```azurecli
+# Register Resource Provider
+az provider register --namespace Microsoft.HybridNetwork
+az provider register --namespace Microsoft.ContainerRegistry
+```
+## Verify registration status
+
+To verify the registration status of the resource providers, you can run the following commands:
+
+```azurecli
+# Query the Resource Provider
+az provider show -n Microsoft.HybridNetwork --query "{RegistrationState: registrationState, ProviderName: namespace}"
+az provider show -n Microsoft.ContainerRegistry --query "{RegistrationState: registrationState, ProviderName: namespace}"
+```
+
+Upon success, the following output displays:
+
+```output
+{
+ "ProviderName": "Microsoft.HybridNetwork",
+ "RegistrationState": "Registered"
+}
+{
+ "ProviderName": "Microsoft.ContainerRegistry",
+ "RegistrationState": "Registered"
+}
+```
+
+> [!NOTE]
+> It can take a few minutes for the resource provider registration to complete. Once the registration is successful, you can begin using the Network Function Manager (NFM) or Azure Operator Service Manager.
+
+## Virtual Network Function (VNF) requirements
+
+### Download and extract Ubuntu image
+
+If you already possess the Ubuntu image accessible through a SAS URL in Azure blob storage, you can save time by omitting this step. Keep in mind that the Ubuntu image is sizable, around 650 MB, so the transfer process can take a while.
+
+```bash
+# Download the Ubuntu image
+wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64-azure.vhd.tar.gz
+
+# Extract the downloaded image
+tar -xzvf jammy-server-cloudimg-amd64-azure.vhd.tar.gz
+```
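Ubuntu publishes a `SHA256SUMS` file alongside its cloud images, so you can verify the tarball before extracting it. The commands below demonstrate the flow against a small stand-in file; run the same `sha256sum -c` check against the real download and the published checksum (assumes GNU coreutils; on macOS use `shasum -a 256` instead):

```bash
# Stand-in file so the flow can be shown without the 650 MB download
printf 'demo' > /tmp/image-demo.bin

# Compute the local checksum (first field of sha256sum output)
expected=$(sha256sum /tmp/image-demo.bin | cut -d' ' -f1)

# sha256sum -c reads "HASH  FILENAME" lines and prints "<file>: OK" on a match
echo "$expected  /tmp/image-demo.bin" | sha256sum -c -
```

For the real image, replace the stand-in path with *jammy-server-cloudimg-amd64-azure.vhd.tar.gz* and take the expected hash from the published `SHA256SUMS` file rather than computing it locally.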
+
+### Virtual Machine (VM) ARM template
++
+The following sample ARM template for Ubuntu Virtual Machine (VM) is used in this quickstart.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "metadata": {
+ "_generator": {
+ "name": "bicep",
+ "version": "0.21.1.54444",
+ "templateHash": "2626436546580286549"
+ }
+ },
+ "parameters": {
+ "location": {
+ "type": "string",
+ "defaultValue": "[resourceGroup().location]"
+ },
+ "subnetName": {
+ "type": "string"
+ },
+ "ubuntuVmName": {
+ "type": "string",
+ "defaultValue": "ubuntu-vm"
+ },
+ "virtualNetworkId": {
+ "type": "string"
+ },
+ "sshPublicKeyAdmin": {
+ "type": "string"
+ },
+ "imageName": {
+ "type": "string"
+ }
+ },
+ "variables": {
+ "imageResourceGroup": "[resourceGroup().name]",
+ "subscriptionId": "[subscription().subscriptionId]",
+ "vmSizeSku": "Standard_D2s_v3"
+ },
+ "resources": [
+ {
+ "type": "Microsoft.Network/networkInterfaces",
+ "apiVersion": "2021-05-01",
+ "name": "[format('{0}_nic', parameters('ubuntuVmName'))]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "ipConfigurations": [
+ {
+ "name": "ipconfig1",
+ "properties": {
+ "subnet": {
+ "id": "[format('{0}/subnets/{1}', parameters('virtualNetworkId'), parameters('subnetName'))]"
+ },
+ "primary": true,
+ "privateIPAddressVersion": "IPv4"
+ }
+ }
+ ]
+ }
+ },
+ {
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2021-07-01",
+ "name": "[parameters('ubuntuVmName')]",
+ "location": "[parameters('location')]",
+ "properties": {
+ "hardwareProfile": {
+ "vmSize": "[variables('vmSizeSku')]"
+ },
+ "storageProfile": {
+ "imageReference": {
+ "id": "[extensionResourceId(format('/subscriptions/{0}/resourceGroups/{1}', variables('subscriptionId'), variables('imageResourceGroup')), 'Microsoft.Compute/images', parameters('imageName'))]"
+ },
+ "osDisk": {
+ "osType": "Linux",
+ "name": "[format('{0}_disk', parameters('ubuntuVmName'))]",
+ "createOption": "FromImage",
+ "caching": "ReadWrite",
+ "writeAcceleratorEnabled": false,
+ "managedDisk": "[json('{\"storageAccountType\": \"Premium_LRS\"}')]",
+ "deleteOption": "Delete",
+ "diskSizeGB": 30
+ }
+ },
+ "osProfile": {
+ "computerName": "[parameters('ubuntuVmName')]",
+ "adminUsername": "azureuser",
+ "linuxConfiguration": {
+ "disablePasswordAuthentication": true,
+ "ssh": {
+ "publicKeys": [
+ {
+ "path": "/home/azureuser/.ssh/authorized_keys",
+ "keyData": "[parameters('sshPublicKeyAdmin')]"
+ }
+ ]
+ },
+ "provisionVMAgent": true,
+ "patchSettings": {
+ "patchMode": "ImageDefault",
+ "assessmentMode": "ImageDefault"
+ }
+ },
+ "secrets": [],
+ "allowExtensionOperations": true
+ },
+ "networkProfile": {
+ "networkInterfaces": [
+ {
+ "id": "[resourceId('Microsoft.Network/networkInterfaces', format('{0}_nic', parameters('ubuntuVmName')))]"
+ }
+ ]
+ }
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Network/networkInterfaces', format('{0}_nic', parameters('ubuntuVmName')))]"
+ ]
+ }
+ ]
+}
+```
+
+Save the preceding JSON file as *ubuntu-template.json* on your local machine.
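Before using the saved template in a deployment, you can catch JSON syntax errors locally with Python's built-in `json.tool` module (assumes `python3` is installed). The sketch below validates a minimal stand-in; point the same command at *ubuntu-template.json* on your machine:

```bash
# Minimal stand-in template; substitute ubuntu-template.json locally
cat > /tmp/template-check.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": []
}
EOF

# json.tool exits non-zero (and prints the error) on malformed JSON
python3 -m json.tool /tmp/template-check.json > /dev/null && echo "valid JSON"
```

This catches only syntax errors; ARM-specific problems (bad parameter references, unknown resource types) surface later at deployment time.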
++
+## Next steps
+
+- [Quickstart: Publish Ubuntu Virtual Machine (VM) as Virtual Network Function (VNF)](quickstart-publish-virtualized-network-function-definition.md)
operator-service-manager Roles Interfaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/roles-interfaces.md
+
+ Title: Roles and Interfaces for Azure Operator Service Manager
+description: Learn about the various roles and interfaces for Azure Operator Service Manager.
++ Last updated : 09/07/2023++++
+# Roles and Interfaces
+
+Azure Operator Service Manager (AOSM) provides three distinct interfaces catering to three roles:
+
+- Network Function Publisher
+- Network Service Designer
+- Network Service Operator
+
+In practice, a single person can perform multiple of these roles if necessary.
++
+## Network Function (NF) Publisher - Role 1
+
+The Network Function (NF) Publisher creates and publishes network functions to Azure Operator Service Manager (AOSM). Publisher responsibilities include:
+- Create the network function.
+- Encode that in a Network Function Definition (NFD).
+- Determine the deployment parameters to expose to the Service Designer.
+- Onboard the Network Function Definition (NFD) to Azure Operator Service Manager (AOSM).
+- Upload the associated artifacts.
+- Validate the Network Function Definition (NFD).
+
+The term *Publisher* is used synonymously with Network Function (NF) Publisher. The Network Function (NF) Publisher is responsible for creating/updating these Azure Operator Service Manager (AOSM) resources:
+- Publisher
+- Artifact Store
+- Artifact Manifest
+- Network Function Definition Group
+- Network Function Definition Version
+
+## Service Designer - Role 2
+
+The Service Designer is responsible for building a Network Service Design (NSD). The Service Designer takes a collection of Network Function Definitions (NFDs) from various Network Function (NF) Publishers. After collecting the Network Function Definitions (NFDs), the Service Designer combines them with Azure infrastructure to create a cohesive service. The Service Designer determines how to parametrize the service by defining one or more Configuration Group Schemas (CGSs). The Configuration Group Schemas (CGSs) define the inputs that the Service Operator must supply in the Configuration Group Values (CGVs).
+
+The Service Designer determines how inputs from the Service Operator map down to parameters required by the Network Function (NF) Publishers and the Azure infrastructure.
+
+As part of creating the Network Service Design (NSD), the Service Designer must consider the upgrade and scaling requirements of the service.
+
+The Service Designer is responsible for creating/updating the following Azure Operator Service Manager (AOSM) objects:
+
+- Publisher
+- Artifact Store
+- Artifact Manifest
+- Network Service Design Group
+- Network Service Design Version
+- Configuration Group Schema
+
+## Service Operator - Role 3
+
+The Service Operator is the person who runs the service on a day-to-day basis. The Service Operator's duties include creating, modifying, and monitoring these objects:
+- Site
+- Site Network Service (SNS)
+- Configuration Group Values (CGV)
+
+The process to create a Site Network Service consists of:
+- Selecting a Network Service Design Version (NSDV) for the new service.
+- Applying parameters using inputs in the form of a Site and one or more Configuration Group Values (CGVs).
+
+The Service Designer determines the exact format of these inputs.
+
+A Service Operator is responsible for creating/updating the following Azure Operator Service Manager (AOSM) objects:
+- Site
+- Configuration Group Values
+- Site Network Service
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
One advantage of running your workload in Azure is global reach. The flexible se
| France Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| France South | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Germany West Central | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
+| Japan East | :heavy_check_mark: (v3/v4 only) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Japan West | :heavy_check_mark: (v3/v4 only) | :x: | :heavy_check_mark: | :heavy_check_mark: |
| Jio India West | :heavy_check_mark: (v3 only) | :x: | :heavy_check_mark: | :x: |
| Korea Central | :heavy_check_mark: | :heavy_check_mark: ** | :heavy_check_mark: | :heavy_check_mark: |
postgresql Concepts Data Access And Security Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-data-access-and-security-private-link.md
Configure [VNet peering](../../virtual-network/tutorial-connect-virtual-networks
### Connecting from an Azure VM in VNet-to-VNet environment
-Configure [VNet-to-VNet VPN gateway connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to a Azure Database for PostgreSQL - Single server from an Azure VM in a different region or subscription.
+Configure [VNet-to-VNet VPN gateway connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md) to establish connectivity to an Azure Database for PostgreSQL - Single server from an Azure VM in a different region or subscription.
### Connecting from an on-premises environment over VPN
private-5g-core Provision Sims Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-azure-portal.md
Prepare the files using the information you collected for your SIMs in [Collect
- If you don't want to assign a SIM policy to a SIM, you can delete the `simPolicyId` parameter for that SIM.

   ```json
- {ΓÇ»
- ΓÇ» "version": 1,ΓÇ»
- ΓÇ» "azureKey": 1,ΓÇ»
- ΓÇ» "vendorKeyFingerprint": "A5719BCEFD6A2021A11D7649942ECC14",
- ΓÇ» "encryptedTransportKey": "0EBAE5E2D31A1BE48495F6DCA65983EEAE6BA6B75A92040562EAD84079BF701CBD3BB1602DB74E85921184820B78A02EC709951195DC87E44481FDB6B826DF775E29B7073644EA66649A14B6CA6B0EE75DE8B4A8D0D5186319E37FBF165A691E607CFF8B65F3E5E9D448049704DE4EA047101ADA4554A543F405B447B8DB687C0B7624E62515445F3E887B3328AA555540D9959752C985490586EF06681501A89594E28F98BF66F179FE3F1D2EE13C69BC42C30A8D3DC6898B8160FC66CDDEE164760F27B68A07BA4C4AE5AFFEA45EE8342E1CA8470150ED6AF4215CEF173418E60E2B1DF4A8C2AE6F0C9A291F5D185ECAD0D94D48EFD06570A0C1AE27D5EC20",ΓÇ»
- ΓÇ» "signedTransportKey": "83515CC47C8890F62D4A0D16DE58C2F2DCFD827C317047693A46B9CA1F9EBC33CCDB8CABE04A275D65E180813CCFF43FC2DA95E19E2B9FF2588AE0914418DC9CB5506EB7AEADB272F5DAB9F0B1CCFAD62B95C91D4F4680A350F56D2A7F8EC863F4D61E1D7A38746AEE6C6391D619CCFCFA2B6D554671D91A26484CD6E120D84917FBF69D3B56B2AA8F2B36AF88492F1A7E267594B6C1596B81A81079540EC3F31869294BFEB225DFB171DE557B8C05D7C963E047E3AF36D1387FEDA28E55E411E5FB6AED178FB9C92D674D71AF8FEB6462F509E6423D4EBE0EC84E4135AA6C7A36F849A14A6A70E7188E08278D515BD95A549645E9D595D1DEC13E1A68B9CB67",ΓÇ»
- ΓÇ» "sims": [ΓÇ»
-     { 
-       "name": "SIM 1", 
-       "properties": { 
-         "deviceType": "Sensor", 
-        "integratedCircuitCardIdentifier": "8922345678901234567", 
-         "internationalMobileSubscriberIdentity": "001019990010002", 
-         "encryptedCredentials": "3ED205BE2DD7F0E467283EC55F9E8F3588B68DC98811BE671070C65EFDE0CCCAD18C8B663231C80FB478F753A6B09142D06982421261679B7BB112D36473EA7EF973DCF7F634124B58DD945FE61D4B16978438CB33E64D3AA58B5C38A0D97030B5F95B16E308D919EB932ACCD36CB8C2838C497B3B38A60E3DD385", 
-         "simPolicy": { 
-           "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/simPolicies/MySimPolicy" 
-         }, 
-         "staticIpConfiguration": [
- {
- "attachedDataNetwork": {
- "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/TestPacketCoreCP/packetCoreDataPlanes/TestPacketCoreDP/attachedDataNetworks/TestAttachedDataNetwork"
- },
- "slice": {
- "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/slices/testSlice"
- },
- "staticIp": {
- "ipv4Address": "2.4.0.1"
- }
-           } 
-         ] 
-       } 
-     }
- {ΓÇ»
-       "name": "SIM 2", 
-       "properties": { 
-         "deviceType": "Cellphone", 
-        "integratedCircuitCardIdentifier": "1234545678907456123", 
-         "internationalMobileSubscriberIdentity": "001019990010003", 
-         "encryptedCredentials": "3ED205BE2DD7F0E467283EC55F9E8F3588B68DC98811BE671070C65EFDE0CCCAD18C8B663231C80FB478F753A6B09142D06982421261679B7BB112D36473EA7EF973DCF7F634124B58DD945FE61D4B16978438CB33E64D3AA58B5C38A0D97030B5F95B16E308D919EB932ACCD36CB8C2838C497B3B38A60E3DD385", 
-         "simPolicy": { 
-           "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/simPolicies/MySimPolicy" 
-         }, 
-         "staticIpConfiguration": [
- {
- "attachedDataNetwork": {
- "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/TestPacketCoreCP/packetCoreDataPlanes/TestPacketCoreDP/attachedDataNetworks/TestAttachedDataNetwork"
- },
- "slice": {
- "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/slices/testSlice"
- },
- "staticIp": {
- "ipv4Address": "2.4.0.2"
- }
-           } 
-         ] 
-       } 
-     } 
- ΓÇ» ]ΓÇ»
- }ΓÇ»
+ {
+ "version": 1,
+ "azureKeyIdentifier": 1,
+ "vendorKeyFingerprint": "A5719BCEFD6A2021A11D7649942ECC14",
+ "encryptedTransportKey": "0EBAE5E2D31A1BE48495F6DCA65983EEAE6BA6B75A92040562EAD84079BF701CBD3BB1602DB74E85921184820B78A02EC709951195DC87E44481FDB6B826DF775E29B7073644EA66649A14B6CA6B0EE75DE8B4A8D0D5186319E37FBF165A691E607CFF8B65F3E5E9D448049704DE4EA047101ADA4554A543F405B447B8DB687C0B7624E62515445F3E887B3328AA555540D9959752C985490586EF06681501A89594E28F98BF66F179FE3F1D2EE13C69BC42C30A8D3DC6898B8160FC66CDDEE164760F27B68A07BA4C4AE5AFFEA45EE8342E1CA8470150ED6AF4215CEF173418E60E2B1DF4A8C2AE6F0C9A291F5D185ECAD0D94D48EFD06570A0C1AE27D5EC20",
+ "signedTransportKey": "83515CC47C8890F62D4A0D16DE58C2F2DCFD827C317047693A46B9CA1F9EBC33CCDB8CABE04A275D65E180813CCFF43FC2DA95E19E2B9FF2588AE0914418DC9CB5506EB7AEADB272F5DAB9F0B1CCFAD62B95C91D4F4680A350F56D2A7F8EC863F4D61E1D7A38746AEE6C6391D619CCFCFA2B6D554671D91A26484CD6E120D84917FBF69D3B56B2AA8F2B36AF88492F1A7E267594B6C1596B81A81079540EC3F31869294BFEB225DFB171DE557B8C05D7C963E047E3AF36D1387FEDA28E55E411E5FB6AED178FB9C92D674D71AF8FEB6462F509E6423D4EBE0EC84E4135AA6C7A36F849A14A6A70E7188E08278D515BD95A549645E9D595D1DEC13E1A68B9CB67",
+ "sims": [
+ {
+ "name": "SIM 1",
+ "properties": {
+ "deviceType": "Sensor",
+ "integratedCircuitCardIdentifier": "8922345678901234567",
+ "internationalMobileSubscriberIdentity": "001019990010002",
+ "encryptedCredentials": "3ED205BE2DD7F0E467283EC55F9E8F3588B68DC98811BE671070C65EFDE0CCCAD18C8B663231C80FB478F753A6B09142D06982421261679B7BB112D36473EA7EF973DCF7F634124B58DD945FE61D4B16978438CB33E64D3AA58B5C38A0D97030B5F95B16E308D919EB932ACCD36CB8C2838C497B3B38A60E3DD385",
+ "simPolicy": {
+ "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/simPolicies/MySimPolicy"
+ },
+ "staticIpConfiguration": [
+ {
+ "attachedDataNetwork": {
+ "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/TestPacketCoreCP/packetCoreDataPlanes/TestPacketCoreDP/attachedDataNetworks/TestAttachedDataNetwork"
+ },
+ "slice": {
+ "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/slices/testSlice"
+ },
+ "staticIp": {
+ "ipv4Address": "2.4.0.1"
+ }
+ }
+ ]
+ }
+ },
+ {
+ "name": "SIM 2",
+ "properties": {
+ "deviceType": "Cellphone",
+ "integratedCircuitCardIdentifier": "1234545678907456123",
+ "internationalMobileSubscriberIdentity": "001019990010003",
+ "encryptedCredentials": "3ED205BE2DD7F0E467283EC55F9E8F3588B68DC98811BE671070C65EFDE0CCCAD18C8B663231C80FB478F753A6B09142D06982421261679B7BB112D36473EA7EF973DCF7F634124B58DD945FE61D4B16978438CB33E64D3AA58B5C38A0D97030B5F95B16E308D919EB932ACCD36CB8C2838C497B3B38A60E3DD385",
+ "simPolicy": {
+ "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/simPolicies/MySimPolicy"
+ },
+ "staticIpConfiguration": [
+ {
+ "attachedDataNetwork": {
+ "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/TestPacketCoreCP/packetCoreDataPlanes/TestPacketCoreDP/attachedDataNetworks/TestAttachedDataNetwork"
+ },
+ "slice": {
+ "id": "/subscriptions/subid/resourceGroups/rg1/providers/Microsoft.MobileNetwork/mobileNetworks/testMobileNetwork/slices/testSlice"
+ },
+ "staticIp": {
+ "ipv4Address": "2.4.0.2"
+ }
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```

## Begin provisioning the SIMs in the Azure portal
sap Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/extensibility.md
configuration_settings = {
sapsys_gid = "400" }
+```
+ ## Adding custom repositories (Linux)
sap Quickstart Register System Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/quickstart-register-system-powershell.md
To register an existing SAP system in Azure Center for SAP solutions:
| | |
| East US | East US |
| East US 2 | East US 2 |
- | South Central US | East US 2 |
- | Central US | East US 2|
- | West US 2 | West US 3 |
+ | North Central US | South Central US |
+ | South Central US | South Central US |
+ | West Central US | South Central US |
+ | Central US | South Central US |
+ | West US | West US 3 |
+ | West US 2 | West US 2 |
| West US 3 | West US 3 |
| West Europe | West Europe |
| North Europe | North Europe |
To register an existing SAP system in Azure Center for SAP solutions:
| East Asia | East Asia |
| Southeast Asia | East Asia |
| Central India | Central India |
+ | Canada Central | Canada Central |
+ | Brazil South | Brazil South |
+ | UK South | UK South |
+ | Germany West Central | Germany West Central |
+ | Sweden Central | Sweden Central |
- **Environment** is used to specify the type of SAP environment you are registering. Valid values are *NonProd* and *Prod*.
- **SapProduct** is used to specify the type of SAP product you are registering. Valid values are *S4HANA*, *ECC*, *Other*.
- **ManagedResourceGroupName** is used to specify the name of the managed resource group which is deployed by ACSS service in your Subscription. This RG is unique for each SAP system (SID) you register. If you do not specify the name, ACSS service sets a name with this naming convention 'mrg-{SID}-{random string}'.
- **ManagedRgStorageAccountName** is used to specify the name of the Storage Account which is deployed into the managed resource group. This storage account is unique for each SAP system (SID) you register. ACSS service sets a default name using '{SID}{random string}' naming convention.
-2. Once you trigger the registration process, you can view its status by getting the status of the Virtual Instance for SAP solutions resource that gets deployed as part of the registration process.
+3. Once you trigger the registration process, you can view its status by getting the status of the Virtual Instance for SAP solutions resource that gets deployed as part of the registration process.
```powershell
Get-AzWorkloadsSapVirtualInstance -ResourceGroupName TestRG -Name L46
```
security Threat Modeling Tool Releases 73308291 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73308291.md
+
+ Title: Microsoft Threat Modeling Tool release 08/30/2023 - Azure
+description: Documenting the release notes for the threat modeling tool release 7.3.30829.1.
++++ Last updated : 08/30/2023++
+# Threat Modeling Tool update release 7.3.30829.1 - 08/30/2023
+
+Version 7.3.30829.1 of the Microsoft Threat Modeling Tool (TMT) was released on August 30, 2023 and contains the following changes:
+
+- Accessibility fixes
+
+## Known issues
+
+### Errors related to TMT7.application file deserialization
+
+#### Issue
+
+Some customers have reported receiving the following error message when downloading the Threat Modeling Tool:
+
+```
+The threat model file '$PATH\TMT7.application' could not be deserialized. File is not an actual threat model or the threat model may be corrupted.
+```
+
+This error occurs because some browsers don't natively support ClickOnce installation. In those cases, the ClickOnce application file is downloaded to the user's hard drive.
+
+#### Workaround
+
+This error will continue to appear if the Threat Modeling Tool is launched by double-clicking the TMT7.application file. However, after you bypass the error, the tool functions normally. Rather than launching the Threat Modeling Tool by double-clicking the TMT7.application file, use the shortcuts created in the Windows menu during installation to start it.
+
+## System requirements
+
+- Supported Operating Systems
+ - [Microsoft Windows 10 Anniversary Update](https://blogs.windows.com/windowsexperience/2016/08/02/how-to-get-the-windows-10-anniversary-update/#HTkoK5Zdv0g2F2Zq.97) or later
+- .NET Version Required
+ - [.NET 4.7.1](https://go.microsoft.com/fwlink/?LinkId=863262) or later
+- Additional Requirements
+ - An Internet connection is required to receive updates to the tool as well as templates.
+
+## Documentation and feedback
+
+- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
+
+## Next steps
+
+Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
security Threat Modeling Tool Releases 73309251 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73309251.md
+
+ Title: Microsoft Threat Modeling Tool release 09/25/2023 - Azure
+description: Documenting the release notes for the threat modeling tool release 7.3.30925.1.
++++ Last updated : 09/25/2023++
+# Threat Modeling Tool update release 7.3.30925.1 - 09/25/2023
+
+Version 7.3.30925.1 of the Microsoft Threat Modeling Tool (TMT) was released on September 25 2023 and contains the following changes:
+
+- Accessibility fixes
+
+## Known issues
+
+### Errors related to TMT7.application file deserialization
+
+#### Issue
+
+Some customers have reported receiving the following error message when downloading the Threat Modeling Tool:
+
+```
+The threat model file '$PATH\TMT7.application' could not be deserialized. File is not an actual threat model or the threat model may be corrupted.
+```
+
+This error occurs because some browsers don't natively support ClickOnce installation. In those cases the ClickOnce application file is downloaded to the user's hard drive.
+
+#### Workaround
+
+This error will continue to appear if the Threat Modeling Tool is launched by double-clicking the TMT7.application file. However, after you bypass the error, the tool functions normally. Rather than launching the Threat Modeling Tool by double-clicking the TMT7.application file, use the shortcuts created in the Windows menu during installation to start it.
+
+## System requirements
+
+- Supported Operating Systems
+ - [Microsoft Windows 10 Anniversary Update](https://blogs.windows.com/windowsexperience/2016/08/02/how-to-get-the-windows-10-anniversary-update/#HTkoK5Zdv0g2F2Zq.97) or later
+- .NET Version Required
+ - [.NET 4.7.1](https://go.microsoft.com/fwlink/?LinkId=863262) or later
+- Additional Requirements
+ - An Internet connection is required to receive updates to the tool as well as templates.
+
+## Documentation and feedback
+
+- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
+
+## Next steps
+
+Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
security Threat Modeling Tool Releases 73310263 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases-73310263.md
+
+ Title: Microsoft Threat Modeling Tool release 10/26/2023 - Azure
+description: Documenting the release notes for the threat modeling tool release 7.3.31026.3.
++++ Last updated : 10/26/2023++
+# Threat Modeling Tool update release 7.3.31026.3 - 10/26/2023
+
+Version 7.3.31026.3 of the Microsoft Threat Modeling Tool (TMT) was released on October 26 2023 and contains the following changes:
+
+- Bug fixes
+- Accessibility fixes
+
+## Known issues
+
+### Errors related to TMT7.application file deserialization
+
+#### Issue
+
+Some customers have reported receiving the following error message when downloading the Threat Modeling Tool:
+
+```
+The threat model file '$PATH\TMT7.application' could not be deserialized. File is not an actual threat model or the threat model may be corrupted.
+```
+
+This error occurs because some browsers don't natively support ClickOnce installation. In those cases the ClickOnce application file is downloaded to the user's hard drive.
+
+#### Workaround
+
+This error will continue to appear if the Threat Modeling Tool is launched by double-clicking the TMT7.application file. However, after you bypass the error, the tool functions normally. Rather than launching the Threat Modeling Tool by double-clicking the TMT7.application file, use the shortcuts created in the Windows menu during installation to start it.
+
+## System requirements
+
+- Supported Operating Systems
+ - [Microsoft Windows 10 Anniversary Update](https://blogs.windows.com/windowsexperience/2016/08/02/how-to-get-the-windows-10-anniversary-update/#HTkoK5Zdv0g2F2Zq.97) or later
+- .NET Version Required
+ - [.NET 4.7.1](https://go.microsoft.com/fwlink/?LinkId=863262) or later
+- Additional Requirements
+ - An Internet connection is required to receive updates to the tool as well as templates.
+
+## Documentation and feedback
+
+- Documentation for the Threat Modeling Tool is located on [docs.microsoft.com](./threat-modeling-tool.md), and includes information [about using the tool](./threat-modeling-tool-getting-started.md).
+
+## Next steps
+
+Download the latest version of the [Microsoft Threat Modeling Tool](https://aka.ms/threatmodelingtool).
security Threat Modeling Tool Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/threat-modeling-tool-releases.md
The Microsoft Threat Modeling Tool is currently released as a free [click-to-dow
## Release Notes
+- [Microsoft Threat Modeling Tool GA Release Version 7.3.31026.3](threat-modeling-tool-releases-73310263.md) - October 26 2023
+- [Microsoft Threat Modeling Tool GA Release Version 7.3.30925.1](threat-modeling-tool-releases-73309251.md) - September 25 2023
+- [Microsoft Threat Modeling Tool GA Release Version 7.3.30829.1](threat-modeling-tool-releases-73308291.md) - August 30 2023
- [Microsoft Threat Modeling Tool GA Release Version 7.3.30630.5](threat-modeling-tool-releases-73306305.md) - June 30 2023
- [Microsoft Threat Modeling Tool GA Release Version 7.3.21108.2](threat-modeling-tool-releases-73211082.md) - November 8 2022
- [Microsoft Threat Modeling Tool GA Release Version 7.3.20927.9](threat-modeling-tool-releases-73209279.md) - September 27 2022
security Data Encryption Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/data-encryption-best-practices.md
To help protect data in the cloud, you need to account for the possible states i
- At rest: This includes all information storage objects, containers, and types that exist statically on physical media, whether magnetic or optical disk.
- In transit: When data is being transferred between components, locations, or programs, it's in transit. Examples are transfer over the network, across a service bus (from on-premises to cloud and vice-versa, including hybrid connections such as ExpressRoute), or during an input/output process.
+- In use: When data is being processed, confidential computing VMs based on specialized AMD and Intel chipsets keep the data encrypted in memory by using hardware-managed keys.
## Choose a key management solution
Following are best practices specific to using Azure VPN Gateway, SSL/TLS, and H
Organizations that fail to protect data in transit are more susceptible to [man-in-the-middle attacks](/previous-versions/office/skype-server-2010/gg195821(v=ocs.14)), [eavesdropping](/previous-versions/office/skype-server-2010/gg195641(v=ocs.14)), and session hijacking. These attacks can be the first step in gaining access to confidential data.
+## Protect data in use
+
+**Lessen the need for trust**
+Running workloads on the cloud requires trust. You place this trust in the various providers that enable different components of your application.
+- App software vendors: Trust software by deploying on-premises, using open-source, or by building in-house application software.
+- Hardware vendors: Trust hardware by using on-premises hardware or in-house hardware.
+- Infrastructure providers: Trust cloud providers or manage your own on-premises data centers.
+
+**Reducing the attack surface**
+The Trusted Computing Base (TCB) refers to all of a system's hardware, firmware, and software components that provide a secure environment. The components inside the TCB are considered "critical." If one component inside the TCB is compromised, the entire system's security may be jeopardized. A lower TCB means higher security. There's less risk of exposure to various vulnerabilities, malware, attacks, and malicious people.
+
+Azure confidential computing can help you:
+
+- Prevent unauthorized access: Run sensitive data in the cloud. Trust that Azure provides the best data protection possible, with little to no change from what gets done today.
+- Meet regulatory compliance: Migrate to the cloud and keep full control of data to satisfy government regulations for protecting personal information and secure organizational IP.
+- Ensure secure and untrusted collaboration: Tackle industry-wide work-scale problems by combining data across organizations, even competitors, to unlock broad data analytics and deeper insights.
+- Isolate processing: Offer a new wave of products that remove liability on private data with blind processing. User data can't even be retrieved by the service provider.
+
+Learn more about [Confidential computing](/azure/confidential-computing/).
+ ## Secure email, documents, and sensitive data

You want to control and secure email, documents, and sensitive data that you share outside your company. [Azure Information Protection](/azure/information-protection/what-is-information-protection) is a cloud-based solution that helps an organization to classify, label, and protect its documents and emails. This can be done automatically by administrators who define rules and conditions, manually by users, or a combination where users get recommendations.
security Encryption Atrest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-atrest.md
All Azure Storage services (Blob storage, Queue storage, Table storage, and Azur
Azure SQL Database currently supports encryption at rest for Microsoft-managed service side and client-side encryption scenarios.
-Support for server encryption is currently provided through the SQL feature called Transparent Data Encryption. Once an Azure SQL Database customer enables TDE key are automatically created and managed for them. Encryption at rest can be enabled at the database and server levels. As of June 2017, [Transparent Data Encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption) is enabled by default on newly created databases. Azure SQL Database supports RSA 2048-bit customer-managed keys in Azure Key Vault. For more information, see [Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Data Warehouse](/sql/relational-databases/security/encryption/transparent-data-encryption-byok-azure-sql).
+Support for server encryption is currently provided through the SQL feature called Transparent Data Encryption. Once an Azure SQL Database customer enables TDE, keys are automatically created and managed for them. Encryption at rest can be enabled at the database and server levels. As of June 2017, [Transparent Data Encryption (TDE)](/sql/relational-databases/security/encryption/transparent-data-encryption) is enabled by default on newly created databases. Azure SQL Database supports RSA 2048-bit customer-managed keys in Azure Key Vault. For more information, see [Transparent Data Encryption with Bring Your Own Key support for Azure SQL Database and Data Warehouse](/sql/relational-databases/security/encryption/transparent-data-encryption-byok-azure-sql).
-Client-side encryption of Azure SQL Database data is supported through the [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) feature. Always Encrypted uses a key that created and stored by the client. Customers can store the master key in a Windows certificate store, Azure Key Vault, or a local Hardware Security Module. Using SQL Server Management Studio, SQL users choose what key they'd like to use to encrypt which column.
+Client-side encryption of Azure SQL Database data is supported through the [Always Encrypted](/sql/relational-databases/security/encryption/always-encrypted-database-engine) feature. Always Encrypted uses a key that is created and stored by the client. Customers can store the master key in a Windows certificate store, Azure Key Vault, or a local Hardware Security Module. Using SQL Server Management Studio, SQL users choose what key they'd like to use to encrypt which column.
## Conclusion
service-connector How To Integrate Cosmos Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-cassandra.md
# Integrate Azure Cosmos DB for Cassandra with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB for Apache Cassandra using Service Connector. You might still be able to connect to the Azure Cosmos DB for Cassandra in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types for the Azure Cosmos DB for Apache Cassandra using Service Connector. You might still be able to connect to the Azure Cosmos DB for Cassandra in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection and sample code showing how to use them. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
## Supported compute services
service-connector How To Integrate Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md
Previously updated : 08/11/2022 Last updated : 10/24/2023 # Integrate Azure Table Storage with Service Connector
-This page shows the supported authentication types and client types of Azure Table Storage using Service Connector. You might still be able to connect to Azure Table Storage in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows supported authentication methods and clients, and shows sample code you can use to connect Azure Table Storage to other cloud services using Service Connector. You might still be able to connect to Azure Table Storage in other programming languages without using Service Connector. For more information about naming conventions, check the [Service Connector internals](concept-service-connector-internals.md#configuration-naming-convention) article.
## Supported compute service
Supported authentication and clients for App Service, Container Apps and Azure S
| Node.js |![yes icon](./media/green-check.png)|![yes icon](./media/green-check.png)| ![yes icon](./media/green-check.png) |![yes icon](./media/green-check.png)|
| Python |![yes icon](./media/green-check.png)|![yes icon](./media/green-check.png)| ![yes icon](./media/green-check.png) |![yes icon](./media/green-check.png)|
-## Default environment variable names or application properties
+## Default environment variable names or application properties and sample code
-Use the connection details below to connect compute services to Azure Table Storage. For each example below, replace the placeholder texts `<account-name>` and `<account-key>` with your own account name and account key.
-
-### Secret / connection string
-
-| Default environment variable name | Description | Example value |
-|-||-|
-| AZURE_STORAGETABLE_CONNECTIONSTRING | Table storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
+Use the connection details below to connect compute services to Azure Table Storage. This page also shows default environment variable names and values you get when you create the service connection, as well as sample code.
### System-assigned managed identity
Use the connection details below to connect compute services to Azure Table Stor
|-||-|
| AZURE_STORAGETABLE_RESOURCEENDPOINT | Table storage endpoint | `https://<storage-account-name>.table.core.windows.net/` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Table Storage using a system-assigned managed identity.
### User-assigned managed identity
Use the connection details below to connect compute services to Azure Table Stor
| AZURE_STORAGETABLE_RESOURCEENDPOINT | Table storage endpoint | `https://<storage-account-name>.table.core.windows.net/` |
| AZURE_STORAGETABLE_CLIENTID | Your client ID | `<client-ID>` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Table Storage using a user-assigned managed identity.
+
+### Connection string
+
+| Default environment variable name | Description | Example value |
+|-||-|
+| AZURE_STORAGETABLE_CONNECTIONSTRING | Table storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
+
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Table Storage using a connection string.
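As a hedged illustration (not the official SDK approach), the connection string format shown above can be split into its components with a few lines of Python. The account name and key values below are placeholders:

```python
def parse_connection_string(conn_str: str) -> dict:
    # Azure Storage connection strings are semicolon-separated key=value
    # pairs; partition on the first '=' so base64 keys ending in '=' survive.
    parts = {}
    for segment in conn_str.split(";"):
        if segment:
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts

conn = ("DefaultEndpointsProtocol=https;AccountName=myaccount;"
        "AccountKey=placeholderkey==;EndpointSuffix=core.windows.net")
settings = parse_connection_string(conn)

# Derive the Table service endpoint from the parsed pieces.
table_endpoint = f"https://{settings['AccountName']}.table.{settings['EndpointSuffix']}/"
print(table_endpoint)  # https://myaccount.table.core.windows.net/
```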
+ ### Service principal

| Default environment variable name | Description | Example value |
Use the connection details below to connect compute services to Azure Table Stor
| AZURE_STORAGETABLE_CLIENTSECRET | Your client secret | `<client-secret>` |
| AZURE_STORAGETABLE_TENANTID | Your tenant ID | `<tenant-ID>` |
+#### Sample code
+
+Refer to the steps and code below to connect to Azure Table Storage using a service principal.
## Next steps
site-recovery Vmware Azure Deploy Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-deploy-configuration-server.md
Previously updated : 05/27/2021 Last updated : 11/01/2023 # Deploy a configuration server
static-web-apps Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/get-started-portal.md
After you sign in with GitHub, enter the repository information.
> [!NOTE]
> If you don't see any repositories:
-> - You may need to authorize Azure Static Web Apps in GitHub. Browse to your GitHub repository and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**.
+> - You may need to authorize Azure Static Web Apps in GitHub. Browse to your GitHub profile and go to **Settings > Applications > Authorized OAuth Apps**, select **Azure Static Web Apps**, and then select **Grant**.
> - You may need to authorize Azure Static Web Apps in your Azure DevOps organization. You must be an owner of the organization to grant the permissions. Request third-party application access via OAuth. For more information, see [Authorize access to REST APIs with OAuth 2.0](/azure/devops/integrate/get-started/authentication/oauth).

::: zone-end
storage Object Replication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/object-replication-overview.md
Here's a breakdown of the costs. To find the price of each cost component, see [
|Storage cost of the blob and each blob version<sup>1</sup>|Transaction cost to read the blob and blob versions<sup>2</sup>|
|Cost to add a change feed record|Transaction cost to write the blob and blob versions<sup>2</sup>|
||Storage cost of the blob and each blob version<sup>1</sup>|
-||Cost of network egress<sup>2</sup>|
+||Cost of network egress<sup>3</sup>|
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
You can use many different SFTP clients to securely connect and then transfer fi
SFTP support for Azure Blob Storage currently limits its cryptographic algorithm support based on security considerations. We strongly recommend that customers utilize [Microsoft Security Development Lifecycle (SDL) approved algorithms](/security/sdl/cryptographic-recommendations) to securely access their data.
-At this time, in accordance with the Microsoft Security SDL, we don't plan on supporting the following: `ssh-dss`, `diffie-hellman-group14-sha1`, `diffie-hellman-group1-sha1`, `hmac-sha1`, `hmac-sha1-96`. Algorithm support is subject to change in the future.
+At this time, in accordance with the Microsoft Security SDL, we don't plan on supporting the following: `ssh-dss`, `diffie-hellman-group14-sha1`, `diffie-hellman-group1-sha1`, `diffie-hellman-group-exchange-sha1`, `hmac-sha1`, `hmac-sha1-96`. Algorithm support is subject to change in the future.
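As a hypothetical client-side sanity check (not part of any Azure tooling), the disallowed algorithms listed above can be filtered out of a client's proposal list before negotiation:

```python
# Algorithms the service declines, per the list above.
UNSUPPORTED_ALGORITHMS = {
    "ssh-dss",
    "diffie-hellman-group14-sha1",
    "diffie-hellman-group1-sha1",
    "diffie-hellman-group-exchange-sha1",
    "hmac-sha1",
    "hmac-sha1-96",
}

def filter_supported(proposed: list[str]) -> list[str]:
    # Keep the client's preference order, dropping disallowed entries.
    return [algo for algo in proposed if algo not in UNSUPPORTED_ALGORITHMS]

print(filter_supported(["ssh-ed25519", "ssh-dss", "hmac-sha2-256", "hmac-sha1"]))
# ['ssh-ed25519', 'hmac-sha2-256']
```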
## Connecting with SFTP
storage Storage Ref Azcopy Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-ref-azcopy-copy.md
description: This article provides reference information for the azcopy copy com
Previously updated : 11/08/2022 Last updated : 10/31/2023
Copies source data to a destination location. The supported directions are:
- local <-> Azure Files (Share/directory SAS authentication) - local <-> Azure Data Lake Storage Gen2 (SAS, OAuth, or SharedKey authentication) - Azure Blob (SAS or public) -> Azure Blob (SAS or OAuth authentication)-- Azure Blob (SAS or OAuth authentication) -> Azure Blob (SAS or OAuth authentication) - See [Guidelines](./storage-use-azcopy-blobs-copy.md#guidelines).
+- Azure Data Lake Storage Gen2 (SAS or public) -> Azure Data Lake Storage Gen2 (SAS or OAuth authentication)
+- Azure Blob (SAS or OAuth authentication) <-> Azure Blob (SAS or OAuth authentication) - See [Guidelines](./storage-use-azcopy-blobs-copy.md#guidelines).
+- Azure Data Lake Storage Gen2 (SAS or OAuth authentication) <-> Azure Data Lake Storage Gen2 (SAS or OAuth authentication)
+- Azure Data Lake Storage Gen2 (SAS or OAuth authentication) <-> Azure Blob (SAS or OAuth authentication)
- Azure Blob (SAS or public) -> Azure Files (SAS)
- Azure Files (SAS) -> Azure Files (SAS)
- Azure Files (SAS) -> Azure Blob (SAS or OAuth authentication)
preserve full properties, AzCopy needs to send one more request per object or fi
`--output-type` (string) Format of the command's output. The choices include: text, json. The default value is 'text'. (default "text")
-`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Microsoft Entra login tokens may be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
+`--trusted-microsoft-suffixes` (string) Specifies additional domain suffixes where Microsoft Entra login tokens can be sent. The default is '*.core.windows.net;*.core.chinacloudapi.cn;*.core.cloudapi.de;*.core.usgovcloudapi.net;*.storage.azure.net'. Any listed here are added to the default. For security, you should only put Microsoft Azure domains here. Separate multiple entries with semi-colons.
## See also
storage Storage Use Azcopy Blobs Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-copy.md
description: This article contains a collection of AzCopy example commands that
Previously updated : 11/08/2022 Last updated : 10/31/2023
Apply the following guidelines to your AzCopy commands.
- If you copy to a premium block blob storage account, omit the access tier of a blob from the copy operation by setting the `s2s-preserve-access-tier` to `false` (For example: `--s2s-preserve-access-tier=false`). Premium block blob storage accounts don't support access tiers. -- If you copy to or from an account that has a hierarchical namespace, use `blob.core.windows.net` instead of `dfs.core.windows.net` in the URL syntax. [Multi-protocol access on Data Lake Storage](../blobs/data-lake-storage-multi-protocol-access.md) enables you to use `blob.core.windows.net`, and it's the only supported syntax for account to account copy scenarios.- - You can increase the throughput of copy operations by setting the value of the `AZCOPY_CONCURRENCY_VALUE` environment variable. To learn more, see [Increase Concurrency](storage-use-azcopy-optimize.md#increase-concurrency). - If the source blobs have index tags, and you want to retain those tags, you'll have to reapply them to the destination blobs. For information about how to set index tags, see the [Copy blobs to another storage account with index tags](#copy-between-accounts-and-add-index-tags) section of this article.
Copy a blob to another storage account by using the [azcopy copy](storage-ref-az
**Syntax**
-`azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>/<blob-path>'`
+`azcopy copy 'https://<source-storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<blob-path>' 'https://<destination-storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<blob-path>'`
**Example**
Copy a blob to another storage account by using the [azcopy copy](storage-ref-az
azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myTextFile.txt' 'https://mydestinationaccount.blob.core.windows.net/mycontainer/myTextFile.txt' ```
+**Example (Data Lake Storage endpoints)**
+
+```azcopy
+azcopy copy 'https://mysourceaccount.dfs.core.windows.net/mycontainer/myTextFile.txt' 'https://mydestinationaccount.dfs.core.windows.net/mycontainer/myTextFile.txt'
+```
+ The copy operation is synchronous. All files have been copied when the command returns.

## Copy a directory
Copy a directory to another storage account by using the [azcopy copy](storage-r
**Syntax**
-`azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>/<directory-path>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>' --recursive`
+`azcopy copy 'https://<source-storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<directory-path>' 'https://<destination-storage-account-name>.<blob or dfs>.core.windows.net/<container-name>' --recursive`
**Example**
Copy a directory to another storage account by using the [azcopy copy](storage-r
azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer/myBlobDirectory' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive ```
+**Example (Data Lake Storage endpoints)**
+
+```azcopy
+azcopy copy 'https://mysourceaccount.dfs.core.windows.net/mycontainer/myBlobDirectory' 'https://mydestinationaccount.dfs.core.windows.net/mycontainer' --recursive
+```
+ The copy operation is synchronous. All files have been copied when the command returns. ## Copy a container
Copy a container to another storage account by using the [azcopy copy](storage-r
**Syntax**
-`azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/<container-name>' 'https://<destination-storage-account-name>.blob.core.windows.net/<container-name>' --recursive`
+`azcopy copy 'https://<source-storage-account-name>.<blob or dfs>.core.windows.net/<container-name>' 'https://<destination-storage-account-name>.<blob or dfs>.core.windows.net/<container-name>' --recursive`
**Example**
Copy a container to another storage account by using the [azcopy copy](storage-r
azcopy copy 'https://mysourceaccount.blob.core.windows.net/mycontainer' 'https://mydestinationaccount.blob.core.windows.net/mycontainer' --recursive ```
+**Example (Data Lake Storage endpoints)**
+
+```azcopy
+azcopy copy 'https://mysourceaccount.dfs.core.windows.net/mycontainer' 'https://mydestinationaccount.dfs.core.windows.net/mycontainer' --recursive
+```
+ The copy operation is synchronous. All files have been copied when the command returns. ## Copy containers, directories, and blobs
Copy all containers, directories, and blobs to another storage account by using
**Syntax**
-`azcopy copy 'https://<source-storage-account-name>.blob.core.windows.net/' 'https://<destination-storage-account-name>.blob.core.windows.net/' --recursive`
+`azcopy copy 'https://<source-storage-account-name>.<blob or dfs>.core.windows.net/' 'https://<destination-storage-account-name>.<blob or dfs>.core.windows.net/' --recursive`
**Example**
Copy all containers, directories, and blobs to another storage account by using
azcopy copy 'https://mysourceaccount.blob.core.windows.net/' 'https://mydestinationaccount.blob.core.windows.net' --recursive ```
+**Example (Data Lake Storage endpoints)**
+
+```azcopy
+azcopy copy 'https://mysourceaccount.dfs.core.windows.net/' 'https://mydestinationaccount.dfs.core.windows.net' --recursive
+```
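The `blob`/`dfs` host names used interchangeably in the examples above differ only in one label, so a URL for one endpoint can be rewritten for the other mechanically. A hedged helper sketch:

```python
def to_dfs_endpoint(url: str) -> str:
    # Rewrite a Blob endpoint as its Data Lake Storage Gen2 (dfs) twin.
    return url.replace(".blob.core.windows.net", ".dfs.core.windows.net")

def to_blob_endpoint(url: str) -> str:
    # Rewrite a dfs endpoint back to the Blob endpoint.
    return url.replace(".dfs.core.windows.net", ".blob.core.windows.net")

src = "https://mysourceaccount.blob.core.windows.net/mycontainer/myTextFile.txt"
print(to_dfs_endpoint(src))
# https://mysourceaccount.dfs.core.windows.net/mycontainer/myTextFile.txt
```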
+ The copy operation is synchronous. All files have been copied when the command returns.

<a id="copy-between-accounts-and-add-index-tags"></a>
storage Storage Use Azcopy Blobs Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-download.md
Download a blob by using the [azcopy copy](storage-ref-azcopy-copy.md) command.
azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt' 'C:\myDirectory\myTextFile.txt' ```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy azcopy copy 'https://mystorageaccount.dfs.core.windows.net/mycontainer/myTextFile.txt' 'C:\myDirectory\myTextFile.txt'
Download a directory by using the [azcopy copy](storage-ref-azcopy-copy.md) comm
azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/myBlobDirectory' 'C:\myDirectory' --recursive ```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy azcopy copy 'https://mystorageaccount.dfs.core.windows.net/mycontainer/myBlobDirectory' 'C:\myDirectory' --recursive
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-pa
azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/FileDirectory' 'C:\myDirectory' --include-path 'photos;documents\myFile.txt' --recursive ```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy azcopy copy 'https://mystorageaccount.dfs.core.windows.net/mycontainer/FileDirectory' 'C:\myDirectory' --include-path 'photos;documents\myFile.txt'--recursive
The following examples download files that were modified on or after the specified date and time.
```azcopy
azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/FileDirectory/*' 'C:\myDirectory' --include-after '2020-08-19T15:04:00Z'
```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy
azcopy copy 'https://mystorageaccount.dfs.core.windows.net/mycontainer/FileDirectory/*' 'C:\myDirectory' --include-after '2020-08-19T15:04:00Z'
```
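`--include-after` expects an ISO 8601 date/time, with a trailing `Z` to indicate UTC, as in the examples above. A small Python sketch (an illustration of the expected format, not AzCopy code) that builds such a value for "files modified in the last N hours":

```python
from datetime import datetime, timedelta, timezone

def include_after_value(hours_back: int) -> str:
    """Return a UTC timestamp such as 2020-08-19T15:04:00Z,
    formatted like the --include-after examples above."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours_back)
    return cutoff.strftime("%Y-%m-%dT%H:%M:%SZ")

print(include_after_value(24))  # e.g. '2024-06-01T12:00:00Z' (format only)
```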
You can download a [blob snapshot](../blobs/snapshots-overview.md) by referencing the snapshot's **DateTime** value in the blob URL.
```azcopy
azcopy copy 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt?sharesnapshot=2020-09-23T08:21:07.0000000Z' 'C:\myDirectory\myTextFile.txt'
```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy
azcopy copy 'https://mystorageaccount.dfs.core.windows.net/mycontainer/myTextFile.txt?sharesnapshot=2020-09-23T08:21:07.0000000Z' 'C:\myDirectory\myTextFile.txt'
```
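The snapshot to download is selected entirely through the query string of the blob URL. A hypothetical Python helper (not part of AzCopy) that appends the snapshot timestamp, using the parameter name shown in the examples above:

```python
def with_snapshot(blob_url: str, snapshot_time: str) -> str:
    """Append the snapshot timestamp as a query parameter, using
    the parameter name shown in the examples above."""
    separator = "&" if "?" in blob_url else "?"
    return f"{blob_url}{separator}sharesnapshot={snapshot_time}"

print(with_snapshot(
    "https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt",
    "2020-09-23T08:21:07.0000000Z",
))
```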
storage Storage Use Azcopy Blobs Upload https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azcopy-blobs-upload.md
You can use the [azcopy make](storage-ref-azcopy-make.md) command to create a container.
```azcopy
azcopy make 'https://mystorageaccount.blob.core.windows.net/mycontainer'
```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy
azcopy make 'https://mystorageaccount.dfs.core.windows.net/mycontainer'
```
Upload a file by using the [azcopy copy](storage-ref-azcopy-copy.md) command.
```azcopy
azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myTextFile.txt'
```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy
azcopy copy 'C:\myDirectory\myTextFile.txt' 'https://mystorageaccount.dfs.core.windows.net/mycontainer/myTextFile.txt'
```
This example copies a directory (and all of the files in that directory) to a blob container.
```azcopy
azcopy copy 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --recursive
```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy
azcopy copy 'C:\myDirectory' 'https://mystorageaccount.dfs.core.windows.net/mycontainer' --recursive
```
To copy to a directory within the container, just specify the name of that directory in the command string.
```azcopy
azcopy copy 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myBlobDirectory' --recursive
```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy
azcopy copy 'C:\myDirectory' 'https://mystorageaccount.dfs.core.windows.net/mycontainer/myBlobDirectory' --recursive
```
Upload the contents of a directory by using the [azcopy copy](storage-ref-azcopy-copy.md) command.
```azcopy
azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.blob.core.windows.net/mycontainer/myBlobDirectory'
```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy
azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.dfs.core.windows.net/mycontainer/myBlobDirectory'
```
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-path` option.
```azcopy
azcopy copy 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --include-path 'photos;documents\myFile.txt' --recursive
```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy
azcopy copy 'C:\myDirectory' 'https://mystorageaccount.dfs.core.windows.net/mycontainer' --include-path 'photos;documents\myFile.txt' --recursive
```
Use the [azcopy copy](storage-ref-azcopy-copy.md) command with the `--include-pattern` option.
```azcopy
azcopy copy 'C:\myDirectory' 'https://mystorageaccount.blob.core.windows.net/mycontainer' --include-pattern 'myFile*.txt;*.pdf*'
```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy
azcopy copy 'C:\myDirectory' 'https://mystorageaccount.dfs.core.windows.net/mycontainer' --include-pattern 'myFile*.txt;*.pdf*'
```
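`--include-pattern` accepts a semicolon-separated list of wildcard patterns, and a file is uploaded if it matches any of them. The matching is glob-style; a rough Python sketch of the same idea using `fnmatch` (illustrative only, not AzCopy's actual implementation):

```python
from fnmatch import fnmatch

def matches_include_pattern(filename: str, patterns: str) -> bool:
    """Return True if the file name matches any pattern in the
    semicolon-separated list, e.g. 'myFile*.txt;*.pdf*'."""
    return any(fnmatch(filename, p) for p in patterns.split(";"))

print(matches_include_pattern("myFile01.txt", "myFile*.txt;*.pdf*"))  # True
print(matches_include_pattern("notes.docx", "myFile*.txt;*.pdf*"))    # False
```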
The following examples upload files that were modified on or after the specified date and time.
```azcopy
azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.blob.core.windows.net/mycontainer/FileDirectory' --include-after '2020-08-19T15:04:00Z'
```
-**Example (hierarchical namespace)**
+**Example (Data Lake Storage endpoint)**
```azcopy
azcopy copy 'C:\myDirectory\*' 'https://mystorageaccount.dfs.core.windows.net/mycontainer/FileDirectory' --include-after '2020-08-19T15:04:00Z'
```
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/partner-overview.md
This article highlights Microsoft partner companies that deliver a network attached storage (NAS) or storage area network (SAN) solution. The solution can be on-premises, in Azure, or a hybrid solution that uses Azure Storage as a cost-effective tier. These solutions can enable customers to use the same solution in any of their environments.
-## Verified partners
+## Validated partners
| Partner | Description | Website/product link |
| - | -- | -- |
update-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/overview.md
| Actions | Permission | Scope |
For more information, see the [list of supported operating systems and VM images](support-matrix.md#supported-operating-systems).
-Currently, Update Manager has the following limitations regarding operating system support:
+- [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) - Azure Update Manager now supports scheduled patching and periodic assessment for VMs including the VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery in preview.
+
+Currently, Update Manager has the following limitation regarding operating system support:
- Marketplace images other than the [list of supported Marketplace OS images](../virtual-machines/automatic-vm-guest-patching.md#supported-os-images) are currently not supported.
-For the preceding limitations, we recommend that you use [Automation Update Management](../automation/update-management/overview.md) until support is available in Update Manager. To learn more, see [Supported operating systems](support-matrix.md#supported-operating-systems).
+For the preceding limitation, we recommend that you use [Automation Update Management](../automation/update-management/overview.md) until support is available in Update Manager. To learn more, see [Supported operating systems](support-matrix.md#supported-operating-systems).
## VM extensions
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
The following table summarizes identity scenarios that Azure Virtual Desktop currently supports:
| Microsoft Entra ID + Microsoft Entra Domain Services | Joined to Microsoft Entra ID | In Microsoft Entra ID and Microsoft Entra Domain Services, synchronized |
| Microsoft Entra-only | Joined to Microsoft Entra ID | In Microsoft Entra ID |
-To use [FSLogix Profile Container](/fslogix/configure-profile-container-tutorial) when joining your session hosts to Microsoft Entra ID, you need to [store profiles on Azure Files](create-profile-container-azure-ad.md) and your user accounts must be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md). You must create these accounts in AD DS and synchronize them to Microsoft Entra ID. To learn more about deploying FSLogix Profile Container with different identity scenarios, see the following articles:
+To use [FSLogix Profile Container](/fslogix/configure-profile-container-tutorial) when joining your session hosts to Microsoft Entra ID, you need to [store profiles on Azure Files](create-profile-container-azure-ad.md) or [Azure NetApp Files](create-fslogix-profile-container.md) and your user accounts must be [hybrid identities](../active-directory/hybrid/whatis-hybrid-identity.md). You must create these accounts in AD DS and synchronize them to Microsoft Entra ID. To learn more about deploying FSLogix Profile Container with different identity scenarios, see the following articles:
- [Set up FSLogix Profile Container with Azure Files and Active Directory Domain Services or Microsoft Entra Domain Services](fslogix-profile-container-configure-azure-files-active-directory.md).
- [Set up FSLogix Profile Container with Azure Files and Microsoft Entra ID](create-profile-container-azure-ad.md).
+- [Set up FSLogix Profile Container with Azure NetApp Files](create-fslogix-profile-container.md).
> [!IMPORTANT]
> The user account must exist in the Microsoft Entra tenant you use for Azure Virtual Desktop. Azure Virtual Desktop doesn't support [B2B](../active-directory/external-identities/what-is-b2b.md), [B2C](../active-directory-b2c/overview.md), or personal Microsoft accounts.
virtual-desktop Safe Url List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md
The following table lists optional URLs that your session host virtual machines might need to access:
|--|--|--|
| `login.windows.net` | 443 | Sign in to Microsoft Online Services and Microsoft 365 |
| `*.events.data.microsoft.com` | 443 | Telemetry Service |
-| `www.msftconnecttest.com` | 443 | Detects if the session host is connected to the internet |
+| `www.msftconnecttest.com` | 80 | Detects if the session host is connected to the internet |
| `*.prod.do.dsp.mp.microsoft.com` | 443 | Windows Update |
| `*.sfx.ms` | 443 | Updates for OneDrive client software |
-| `*.digicert.com` | 443 | Certificate revocation check |
+| `*.digicert.com` | 80 | Certificate revocation check |
| `*.azure-dns.com` | 443 | Azure DNS resolution |
| `*.azure-dns.net` | 443 | Azure DNS resolution |
The following table lists optional URLs that your session host virtual machines might need to access:
| Address | Outbound TCP port | Purpose |
|--|--|--|
| `*.events.data.microsoft.com` | 443 | Telemetry Service |
-| `www.msftconnecttest.com` | 443 | Detects if the session host is connected to the internet |
+| `www.msftconnecttest.com` | 80 | Detects if the session host is connected to the internet |
| `*.prod.do.dsp.mp.microsoft.com` | 443 | Windows Update |
| `oneclient.sfx.ms` | 443 | Updates for OneDrive client software |
-| `*.digicert.com` | 443 | Certificate revocation check |
+| `*.digicert.com` | 80 | Certificate revocation check |
| `*.azure-dns.com` | 443 | Azure DNS resolution |
| `*.azure-dns.net` | 443 | Azure DNS resolution |
virtual-desktop Set Up Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-mfa.md
Title: Enforce Microsoft Entra multifactor authentication for Azure Virtual Desktop
description: How to enforce Microsoft Entra multifactor authentication for Azure Virtual Desktop using Conditional Access to help make it more secure.
Previously updated : 08/24/2022
Last updated : 10/27/2023
> [!IMPORTANT]
> If you're visiting this page from the Azure Virtual Desktop (classic) documentation, make sure to [return to the Azure Virtual Desktop (classic) documentation](./virtual-desktop-fall-2019/tenant-setup-azure-active-directory.md) once you're finished.
-Users can sign into Azure Virtual Desktop from anywhere using different devices and clients. However, there are certain measures you should take to help keep yourself and your users safe. Using Microsoft Entra multifactor authentication (MFA) with Azure Virtual Desktop prompts users during the sign-in process for another form of identification in addition to their username and password. You can enforce MFA for Azure Virtual Desktop using Conditional Access, and can also configure whether it applies to the web client, mobile apps, desktop clients, or all clients.
+Users can sign into Azure Virtual Desktop from anywhere using different devices and clients. However, there are certain measures you should take to help keep your environment and your users safe. Using Microsoft Entra multifactor authentication (MFA) with Azure Virtual Desktop prompts users during the sign-in process for another form of identification in addition to their username and password. You can enforce MFA for Azure Virtual Desktop using Conditional Access, and can also configure whether it applies to the web client, mobile apps, desktop clients, or all clients.
How often a user is prompted to reauthenticate depends on [Microsoft Entra session lifetime configuration settings](../active-directory/authentication/concepts-azure-multi-factor-authentication-prompts-session-lifetime.md#azure-ad-session-lifetime-configuration-settings). For example, if their Windows client device is registered with Microsoft Entra ID, it will receive a [Primary Refresh Token](../active-directory/devices/concept-primary-refresh-token.md) (PRT) to use for single sign-on (SSO) across applications. Once issued, a PRT is valid for 14 days and is continuously renewed as long as the user actively uses the device.
Learn how to enforce MFA for Azure Virtual Desktop and optionally configure sign-in frequency.
Here's what you'll need to get started:

- Assign users a license that includes [Microsoft Entra ID P1 or P2](../active-directory/authentication/concept-mfa-licensing.md).
-- An [Microsoft Entra group](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md) with your Azure Virtual Desktop users assigned as group members.
+- A [Microsoft Entra group](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md) with your Azure Virtual Desktop users assigned as group members.
- Enable Microsoft Entra multifactor authentication for your users. For more information about how to do that, see [Enable Microsoft Entra multifactor authentication](../active-directory/authentication/tutorial-enable-azure-mfa.md).

## Create a Conditional Access policy
Here's what you'll need to get started:
Here's how to create a Conditional Access policy that requires multifactor authentication when connecting to Azure Virtual Desktop:

1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator, security administrator, or Conditional Access administrator.
-1. In the search bar, type *Microsoft Entra ID* and select the matching service entry.
-1. Browse to **Security** > **Conditional Access**.
-1. Select **New policy** > **Create new policy**.
+1. In the search bar, type *Microsoft Entra Conditional Access* and select the matching service entry.
+1. From the overview, select **Create new policy**.
1. Give your policy a name. We recommend that organizations create a meaningful standard for the names of their policies.
-1. Under **Assignments**, select **Users or workload entities**.
-1. Under the **Include** tab, select **Select users and groups** and tick **Users and groups**. On the right, search for and choose the group that contains your Azure Virtual Desktop users as group members.
-1. Select **Select**.
-1. Under **Assignments**, select **Cloud apps or actions**.
-1. Under the **Include** tab, select **Select apps**.
-1. On the right, select one of the following apps based on which version of Azure Virtual Desktop you're using.
-
- - If you're using Azure Virtual Desktop (based on Azure Resource Manager), you can configure MFA on two different apps:
-
- - **Azure Virtual Desktop** (app ID 9cdead84-a844-4324-93f2-b2e6bb768d07), which applies when the user subscribes to a feed and authenticates to the Azure Virtual Desktop Gateway during a connection.
+1. Under **Assignments** > **Users**, select **0 users and groups selected**.
+1. Under the **Include** tab, select **Select users and groups** and check **Users and groups**, then under **Select**, select **0 users and groups selected**.
+1. On the new pane that opens, search for and choose the group that contains your Azure Virtual Desktop users as group members, then select **Select**.
+1. Under **Assignments** > **Target resources**, select **No target resources selected**.
+1. Under the **Include** tab, select **Select apps**, then under **Select**, select **None**.
+1. On the new pane that opens, search for and select the necessary apps based on the resources you are trying to protect.
+
+ - If you're using Azure Virtual Desktop (based on Azure Resource Manager), you can configure MFA on three different apps:
+
+ - **Azure Virtual Desktop** (app ID 9cdead84-a844-4324-93f2-b2e6bb768d07), which applies when the user subscribes to Azure Virtual Desktop, authenticates to the Azure Virtual Desktop Gateway during a connection, and when diagnostics information is sent to the service from the user's local device.
> [!TIP]
> The app name was previously *Windows Virtual Desktop*. If you registered the *Microsoft.DesktopVirtualization* resource provider before the display name changed, the application will be named **Windows Virtual Desktop** with the same app ID as above.
- - **Microsoft Remote Desktop** (app ID a4a365df-50f1-4397-bc59-1a1564b8bb9c), which applies when the user authenticates to the session host when [single sign-on](configure-single-sign-on.md) is enabled.
+ - **Microsoft Remote Desktop** (app ID a4a365df-50f1-4397-bc59-1a1564b8bb9c) and **Windows Cloud Login** (app ID 270efc09-cd0d-444b-a71f-39af4910ec45). These apply when the user authenticates to the session host when [single sign-on](configure-single-sign-on.md) is enabled. It's recommended to match conditional access policies between these apps and the Azure Virtual Desktop app above, except for the [sign-in frequency](#configure-sign-in-frequency).
+
+ > [!IMPORTANT]
+ > The clients used to access Azure Virtual Desktop use the **Microsoft Remote Desktop** Entra ID app to authenticate to the session host today. An upcoming change will transition the authentication to the **Windows Cloud Login** Entra ID app. To ensure a smooth transition, you need to add both Entra ID apps to your CA policies.
- - If you're using Azure Virtual Desktop (classic), choose these apps:
+ - If you're using Azure Virtual Desktop (classic), choose these apps:
- - **Windows Virtual Desktop** (app ID 5a0aa725-4958-4b0c-80a9-34562e23f3b7)
- - **Windows Virtual Desktop Client** (app ID fa4345a4-a730-4230-84a8-7d9651b86739), which will let you set policies on the web client
+ - **Windows Virtual Desktop** (app ID 5a0aa725-4958-4b0c-80a9-34562e23f3b7).
+ - **Windows Virtual Desktop Client** (app ID fa4345a4-a730-4230-84a8-7d9651b86739), which will let you set policies on the web client.
> [!TIP]
> If you're using Azure Virtual Desktop (classic) and if the Conditional Access policy blocks all access excluding Azure Virtual Desktop app IDs, you can fix this by also adding the **Azure Virtual Desktop** (app ID 9cdead84-a844-4324-93f2-b2e6bb768d07) to the policy. Not adding this app ID will block feed discovery of Azure Virtual Desktop (classic) resources.
Here's how to create a Conditional Access policy that requires multifactor authentication when connecting to Azure Virtual Desktop:
> [!IMPORTANT]
> Don't select the app called Azure Virtual Desktop Azure Resource Manager Provider (app ID 50e95039-b200-4007-bc97-8d5790743a63). This app is only used for retrieving the user feed and shouldn't have multifactor authentication.
-1. Once you've selected your app, select **Select**.
+1. Once you've selected your apps, select **Select**.
> [!div class="mx-imgBorder"]
> ![A screenshot of the Conditional Access Cloud apps or actions page. The Azure Virtual Desktop app is shown.](media/cloud-apps-enterprise.png)
-1. Under **Assignments**, select **Conditions** > **Client apps**. On the right, for **Configure**, select **Yes**, and then select the client apps this policy will apply to:
+1. Under **Assignments** > **Conditions**, select **0 conditions selected**.
+1. Under **Client apps**, select **Not configured**.
+1. On the new pane that opens, for **Configure**, select **Yes**.
+1. Select the client apps this policy will apply to:
- - Select both check boxes if you want to apply the policy to all clients.
- Select **Browser** if you want the policy to apply to the web client.
- Select **Mobile apps and desktop clients** if you want to apply the policy to other clients.
+ - Select both check boxes if you want to apply the policy to all clients.
- Deselect values for legacy authentication clients.

> [!div class="mx-imgBorder"]
> ![A screenshot of the Conditional Access Client apps page. The user has selected the mobile apps and desktop clients, and browser check boxes.](media/conditional-access-client-apps.png)

1. Once you've selected the client apps this policy will apply to, select **Done**.
-1. Under **Assignments**, select **Access controls** > **Grant**, select **Grant access**, **Require multifactor authentication**, and then select **Select**.
+1. Under **Access controls** > **Grant**, select **0 controls selected**.
+1. On the new pane that opens, select **Grant access**.
+1. Check **Require multifactor authentication**, and then select **Select**.
1. At the bottom of the page, set **Enable policy** to **On** and select **Create**.

> [!NOTE]
> When you use the web client to sign in to Azure Virtual Desktop through your browser, the log will list the client app ID as a85cf173-4192-42f8-81fa-777a763e6e2c (Azure Virtual Desktop client). This is because the client app is internally linked to the server app ID where the conditional access policy was set.

> [!TIP]
-> Some users may see a prompt titled *Stay signed in to all your apps* if the Windows device they're using is not already registered with Microsoft Entra ID. If they deselect **Allow my organization to manage my device** and select **No, sign in to this app only**, this may reappear frequently.
+> Some users may see a prompt titled *Stay signed in to all your apps* if the Windows device they're using is not already registered with Microsoft Entra ID. If they deselect **Allow my organization to manage my device** and select **No, sign in to this app only**, they may be prompted for authentication more frequently.
## Configure sign-in frequency

To optionally configure the time period before a user is asked to sign in again:

1. Open the policy you created previously.
-1. Under **Assignments**, select **Access controls** > **Session**. On the right, select **Sign-in frequency**. Set the value for the time period before a user is asked to sign-in again, and then select **Select**. For example, setting the value to **1** and the unit to **Hours**, will require multifactor authentication if a connection is launched over an hour after the last one.
+1. Under **Access controls** > **Session**, select **0 controls selected**.
+1. On the new pane that opens, select **Sign-in frequency**.
+1. Select **Periodic reauthentication**.
+1. Set the value for the time period before a user is asked to sign in again, and then select **Select**. For example, setting the value to **1** and the unit to **Hours** will require multifactor authentication if a connection is launched over an hour after the last one.
1. At the bottom of the page, under **Enable policy** select **Save**.
+> [!NOTE]
+> - If [single sign-on](configure-single-sign-on.md) is enabled, it's recommended to configure the sign-in frequency only on the **Microsoft Remote Desktop** and **Windows Cloud Login** Entra ID apps and not the **Azure Virtual Desktop** Entra ID app. This will ensure that feed refresh and diagnostics upload continue working in the background as expected.
+> - Without single sign-on, sign-in frequency can be configured on the **Azure Virtual Desktop** Entra ID app.
+ <a name='azure-ad-joined-session-host-vms'></a>

## Microsoft Entra joined session host VMs
virtual-machines Oms Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-linux.md
The following table provides a mapping of the version of the Log Analytics VM extension to the Log Analytics Agent bundle version:
| Log Analytics Linux VM extension version | Log Analytics Agent bundle version |
|--|--|
+| 1.17.0 | [1.17.0](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.17.0-0) |
| 1.16.0 | [1.16.0](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.16.0-0) |
| 1.14.23 | [1.14.23](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.23-0) |
| 1.14.20 | [1.14.20](https://github.com/microsoft/OMS-Agent-for-Linux/releases/tag/OMSAgent_v1.14.20-0) |
virtual-machines Virtual Machines Create Restore Points https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points.md
Title: Using Virtual Machine Restore Points
description: Using Virtual Machine Restore Points
Currently, restore points can only be created in one VM at a time, that is, you
- Restore points for Virtual Machine Scale Sets in Uniform orchestration mode are not supported.
- Movement of Virtual Machines (VM) between Resource Groups (RG) or Subscriptions is not supported when the VM has restore points. Moving the VM between Resource Groups or Subscriptions will not update the source VM reference in the restore point and will cause a mismatch of ARM IDs between the actual VM and the restore points.

> [!Note]
- > Public preview of cross-region creation and copying of VM restore points is available, with the following limitations:
+ > Public preview of cross-region copying of VM restore points is available, with the following limitations:
> - Private links are not supported when copying restore points across regions or creating restore points in a region other than the source VM.
- > - Customer-managed key encrypted restore points, when copied to a target region or created directly in the target region are created as platform-managed key encrypted restore points.
+ > - Customer-managed key encrypted restore points, when copied to a target region are created as platform-managed key encrypted restore points.
## Troubleshoot VM restore points

Most common restore point failures are attributed to communication issues with the VM agent and extension, and can be resolved by following the troubleshooting steps listed in the [troubleshooting](restore-point-troubleshooting.md) article.
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
NAT gateway is the recommended approach for explicit outbound connectivity.
## Next steps
-For more information on outbound connections in Azure and Azure Virtual Network NAT (NAT gateway), see:
+For more information on outbound connections in Azure and Azure NAT Gateway, see:
* [Source Network Address Translation (SNAT) for outbound connections](../../load-balancer/load-balancer-outbound-connections.md).
-* [What is Azure Virtual Network NAT?](../../nat-gateway/nat-overview.md)
+* [What is Azure NAT Gateway?](../../nat-gateway/nat-overview.md)
-* [Azure Virtual Network NAT FAQ](../../nat-gateway/faq.yml)
+* [Azure NAT Gateway FAQ](../../nat-gateway/faq.yml)