Updates from: 03/05/2024 02:10:10
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Previously updated : 01/11/2024 Last updated : 03/01/2024 -+
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Microsoft Entra ID](../active-directory/fundamentals/whats-new.md), [Azure AD B2C developer release notes](custom-policy-developer-notes.md) and [What's new in Microsoft Entra External ID](/entra/external-id/whats-new-docs).
+## February 2024
+
+### New articles
+
+- [Enable CAPTCHA in Azure Active Directory B2C](add-captcha.md)
+- [Define a CAPTCHA technical profile in an Azure Active Directory B2C custom policy](captcha-technical-profile.md)
+- [Verify CAPTCHA challenge string using CAPTCHA display control](display-control-captcha.md)
+
+### Updated articles
+
+- [Enable custom domains in Azure Active Directory B2C](custom-domain.md) - Updated steps to block the default B2C domain
+- [Manage Azure AD B2C custom policies with Microsoft Graph PowerShell](manage-custom-policies-powershell.md) - Microsoft Graph PowerShell updates
+- [Localization string IDs](localization-string-ids.md) - CAPTCHA updates
+- [Page layout versions](page-layout.md) - CAPTCHA updates
+ ## January 2024 ### Updated articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Set up sign-up and sign-in with a Google account using Azure Active Directory B2C](identity-provider-google.md) - Editorial updates - [Localization string IDs](localization-string-ids.md) - Updated the localization string IDs
-## November 2023
-
-### Updated articles
--- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md) - Editorial updates-- [Enrich tokens with claims from external sources using API connectors](add-api-connector-token-enrichment.md) - Editorial updates-- [Enable custom domains for Azure Active Directory B2C](custom-domain.md) - Editorial updates-- [Set up sign-in for multitenant Microsoft Entra ID using custom policies in Azure Active Directory B2C](identity-provider-azure-ad-multi-tenant.md) - Editorial updates-- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md) - Editorial updates-- [Enable multifactor authentication in Azure Active Directory B2C](multi-factor-authentication.md) - Editorial updates-- [What is Azure Active Directory B2C?](overview.md) - Editorial updates-- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md) - Editorial updates-- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md) - Editorial updates-- [User flows and custom policies overview](user-flow-overview.md) - Editorial updates-- [OAuth 2.0 authorization code flow in Azure Active Directory B2C](authorization-code-flow.md) - Editorial updates-- [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md) - Editorial updates-- [Define a Microsoft Entra multifactor authentication technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md) - Editorial updates----
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
The pages collection is a list of pages within the document. Each page is repres
| | | | |Images (JPEG/JPG, PNG, BMP, HEIF) | Each image = 1 page unit | Total images | |PDF | Each page in the PDF = 1 page unit | Total pages in the PDF |
-|TIFF | Each image in the TIFF = 1 page unit | Total images in the PDF |
+|TIFF | Each image in the TIFF = 1 page unit | Total images in the TIFF |
|Word (DOCX) | Up to 3,000 characters = 1 page unit, embedded or linked images not supported | Total pages of up to 3,000 characters each | |Excel (XLSX) | Each worksheet = 1 page unit, embedded or linked images not supported | Total worksheets | |PowerPoint (PPTX) | Each slide = 1 page unit, embedded or linked images not supported | Total slides |
ai-services Api Version Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md
# Azure OpenAI API preview lifecycle
-This article is to help you understand the support lifecycle for the Azure OpenAI API previews.
+This article helps you understand the support lifecycle for the Azure OpenAI API previews. New preview APIs target a monthly release cadence. After April 2, 2024, the latest three preview APIs remain supported, while older preview APIs are no longer supported.
+
+> [!NOTE]
+> The `2023-06-01-preview` API will remain supported at this time, as `DALL-E 2` is only available in this API version. `DALL-E 3` is supported in the latest API releases.
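To make version pinning concrete, here's a minimal, hedged sketch using the `openai` Python package's `AzureOpenAI` client; the endpoint, key, and deployment name are placeholders, and `2024-02-15-preview` is simply the latest preview listed in this article.

```python
# Minimal sketch: pin an explicit preview api-version so moving off a retiring
# preview is a deliberate one-line change. Endpoint, key, and deployment name
# are placeholders, not values from this article.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource-name>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-15-preview",  # latest preview at the time of writing
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the deployment name, not the base model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```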
## Latest preview API release
ai-services Assistants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/assistants.md
Title: Azure OpenAI Service Assistant API concepts
description: Learn about the concepts behind the Azure OpenAI Assistants API. Previously updated : 02/05/2023 Last updated : 03/04/2024+
ai-services Provisioned Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-throughput.md
We use a variation of the leaky bucket algorithm to maintain utilization below 1
#### How many concurrent calls can I have on my deployment?
-The number of concurrent calls you can have at one time is dependent on each call's shape. The service will continue to accept calls until the utilization is above 100%. To determine the approximate number of concurrent calls you can model out the maximum requests per minute for a particular call shape in the [capacity calculator](https://oai.azure.com/portal/calculator). If `max_tokens` is empty, you can assume a value of 1000
+The number of concurrent calls you can achieve depends on each call's shape (prompt size, `max_tokens` parameter, and so on). The service continues to accept calls until utilization reaches 100%. To determine the approximate number of concurrent calls, you can model the maximum requests per minute for a particular call shape in the [capacity calculator](https://oai.azure.com/portal/calculator). If the system generates fewer than the requested number of sampling tokens (for example, `max_tokens`), it accepts more requests.
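As a rough illustration of the modeling described above (the capacity calculator remains the authoritative tool), the following sketch applies Little's law to estimate concurrency from a requests-per-minute figure and an assumed per-call duration; both inputs are hypothetical.

```python
# Illustrative back-of-the-envelope estimate only; use the capacity calculator
# for real sizing. Assumes steady traffic and, per the guidance above, a
# max_tokens value of 1000 when none is specified.
def approx_concurrent_calls(max_requests_per_minute: float,
                            avg_call_seconds: float) -> float:
    """Little's law: concurrency ~= arrival rate * time in system."""
    return (max_requests_per_minute / 60.0) * avg_call_seconds

# Hypothetical example: 300 requests/minute, ~6 seconds per call end to end.
print(approx_concurrent_calls(300, 6))  # ~30 concurrent calls
```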
## Next steps
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json) - `2023-03-15-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) - `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) - `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) **Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** -- `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json) - `2023-03-15-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) - `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) - `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) **Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
- `2023-03-15-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json) - `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) - `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) - `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)-- `2023-12-01-preview` (required for Vision scenarios) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview)
+- `2023-12-01-preview` (retiring April 2, 2024; this version or later is required for Vision scenarios) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. | **Supported versions**-- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json) - `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json) - `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
#### Example request
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** -- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
POST https://{your-resource-name}.openai.azure.com/openai/images/generations:sub
**Supported versions** -- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)-- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
**Request body**
GET https://{your-resource-name}.openai.azure.com/openai/operations/images/{oper
**Supported versions** -- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
#### Example request
DELETE https://{your-resource-name}.openai.azure.com/openai/operations/images/{o
**Supported versions** -- `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
#### Example request
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** - `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) **Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** - `2023-09-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)-- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2023-12-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json) **Request body**
ai-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md
This table lists some of the key configuration parameters for pronunciation asse
|--|-| | `ReferenceText` | The text that the pronunciation is evaluated against.<br/><br/>The `ReferenceText` parameter is optional. Set the reference text if you want to run a [scripted assessment](#scripted-assessment-results) for the reading language learning scenario. Don't set the reference text if you want to run an [unscripted assessment](#unscripted-assessment-results).<br/><br/>For pricing differences between scripted and unscripted assessment, see [Pricing](./pronunciation-assessment-tool.md#pricing). | | `GradingSystem` | The point system for score calibration. `FivePoint` gives a 0-5 floating point score. `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. |
-| `Granularity` | Determines the lowest level of evaluation granularity. Returns scores for levels greater than or equal to the minimal value. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph. It depends on your input reference text. Default: `Phoneme`.|
+| `Granularity` | Determines the lowest level of evaluation granularity. Returns scores for levels greater than or equal to the minimal value. Accepted values are `Phoneme`, which shows the score on the full-text, word, syllable, and phoneme levels; `Word`, which shows the score on the full-text and word levels; or `FullText`, which shows the score on the full-text level only. The provided full reference text can be a word, sentence, or paragraph, depending on your input reference text. Default: `Phoneme`.|
| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. Enabling miscue is optional. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Values are `False` and `True`. Default: `False`. To enable miscue calculation, set the `EnableMiscue` to `True`. You can refer to the code snippet below the table. | | `ScenarioId` | A GUID for a customized point system. |
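The article's own snippet isn't reproduced in this digest, so here's a hedged Python sketch of how these parameters map onto the Speech SDK's `PronunciationAssessmentConfig`; the key, region, audio file, and reference text are placeholders.

```python
# Sketch: scripted assessment with miscue calculation enabled. The key, region,
# audio file, and reference text are placeholders.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")

pronunciation_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Good morning.",                                              # ReferenceText
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,   # GradingSystem
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,            # Granularity
    enable_miscue=True,                                                          # EnableMiscue
)

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pronunciation_config.apply_to(recognizer)

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)
print(assessment.accuracy_score, assessment.fluency_score)
```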
ai-studio Configure Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-managed-network.md
The Azure AI managed VNet feature is free. However, you're charged for the follo
## Limitations
+* Azure AI Studio currently doesn't support bring-your-own virtual networks; it supports only managed VNet isolation.
* Azure AI services provisioned with Azure AI and Azure AI Search attached with Azure AI should be public. * The "Add your data" feature in the Azure AI Studio playground doesn't support private storage account. * Once you enable managed VNet isolation of your Azure AI, you can't disable it.
ai-studio Create Azure Ai Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-azure-ai-resource.md
At Azure AI hub resource creation, select between the networking isolation modes
At Azure AI hub resource creation in the Azure portal, creation of associated Azure AI services, Storage account, Key vault, Application insights, and Container registry is given. These resources are found on the Resources tab during creation.
-To connect to Azure AI services (Azure OpenAI, Azure AI Search, and Azure AI Content Safety) or storage accounts in Azure AI Studio, create a private endpoint in your virtual network. Ensure the PNA flag is disabled when creating the private endpoint connection. For more about Azure AI services connections, follow documentation [here](../../ai-services/cognitive-services-virtual-networks.md). You can optionally bring your own (BYO) search, but this requires a private endpoint connection from your virtual network.
+To connect to Azure AI services (Azure OpenAI, Azure AI Search, and Azure AI Content Safety) or storage accounts in Azure AI Studio, create a private endpoint in your virtual network. Ensure the PNA (Public Network Access) flag is disabled when creating the private endpoint connection. For more about Azure AI services connections, follow documentation [here](../../ai-services/cognitive-services-virtual-networks.md). You can optionally bring your own (BYO) search, but this requires a private endpoint connection from your virtual network.
### Encryption Projects that use the same Azure AI hub resource, share their encryption configuration. Encryption mode can be set only at the time of Azure AI hub resource creation between Microsoft-managed keys and Customer-managed keys.
ai-studio Create Manage Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-manage-compute.md
To create a compute instance in Azure AI Studio:
- **Assign a managed identity**: You can attach system assigned or user assigned managed identities to grant access to resources. The name of the created system managed identity will be in the format `/workspace-name/computes/compute-instance-name` in your Microsoft Entra ID. - **Enable SSH access**: Enter credentials for an administrator user account that will be created on each compute node. These can be used to SSH to the compute nodes. Note that disabling SSH prevents SSH access from the public internet. When a private virtual network is used, users can still SSH from within the virtual network.
- - **Enable virtual network**:
- - If you're using an Azure Virtual Network, specify the Resource group, Virtual network, and Subnet to create the compute instance inside an Azure Virtual Network. You can also select No public IP to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these network requirements for virtual network setup.
- - If you're using a managed virtual network, the compute instance is created inside the managed virtual network. You can also select No public IP to prevent the creation of a public IP address. For more information, see managed compute with a managed network.
1. On the **Applications** page you can add custom applications to use on your compute instance, such as RStudio or Posit Workbench. Then select **Next**. 1. On the **Tags** page you can add additional information to categorize the resources you create. Then select **Review + Create** or **Next** to review your settings.
ai-studio Sdk Generative Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/sdk-generative-overview.md
Detailed `DEBUG` level logging, including request/response bodies and unredacted
The Azure AI Generative Python SDK includes a telemetry feature that collects usage and failure data about the SDK and sends it to Microsoft when you use the SDK in a Jupyter Notebook only. Telemetry won't be collected for any use of the Python SDK outside of a Jupyter Notebook.
-Telemetry data helps the SDK team understand how the SDK is used so it can be improved and the information about failures helps the team resolve problems and fix bugs. The SDK telemetry feature is enabled by default for Jupyter Notebook usage and can't be enabled for non-Jupyter scenarios. To opt out of the telemetry feature in a Jupyter scenario, set the environment variable `"AZURE_AI_GENERATIVE_ENABLE_LOGGING"` to `"False"`.
+Telemetry data helps the SDK team understand how the SDK is used so it can be improved, and the information about failures helps the team resolve problems and fix bugs. The SDK telemetry feature is enabled by default for Jupyter Notebook usage and can't be enabled for non-Jupyter scenarios.
+To opt out of the telemetry feature in a Jupyter scenario:
+- When using the `azure-ai-generative` package, set both of the following environment variables to `"False"`: `"AZURE_AI_GENERATIVE_ENABLE_LOGGING"` and `"AZURE_AI_RESOURCES_ENABLE_LOGGING"`. Both environment variables need to be set to `"False"` because `azure-ai-generative` depends on `azure-ai-resources`.
+- When using the `azure-ai-resources` package, set the environment variable `"AZURE_AI_RESOURCES_ENABLE_LOGGING"` to `"False"` (a sketch follows this list).
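A minimal sketch of the opt-out, assuming you set the variables at the top of the notebook before importing the SDK; setting them in the environment that launches Jupyter works as well.

```python
# Set the opt-out variables before importing the SDK so they take effect for
# the whole notebook session. Both are needed for azure-ai-generative because
# it depends on azure-ai-resources.
import os

os.environ["AZURE_AI_GENERATIVE_ENABLE_LOGGING"] = "False"
os.environ["AZURE_AI_RESOURCES_ENABLE_LOGGING"] = "False"
```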
## Next steps
aks Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-cost.md
Title: Optimize Costs in Azure Kubernetes Service (AKS)
-description: Recommendations for optimizing costs in Azure Kubernetes Service (AKS).
+description: Recommendations and best practices for optimizing costs in Azure Kubernetes Service (AKS).
-
- - ignite-2023
Previously updated : 04/13/2023 Last updated : 02/21/2024 # Optimize costs in Azure Kubernetes Service (AKS)
-Cost optimization is about understanding your different configuration options and recommended best practices to reduce unnecessary expenses and improve operational efficiencies. Before you use this article, you should see the [cost optimization section](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service#cost-optimization) in the Azure Well-Architected Framework.
+Cost optimization is about maximizing the value of resources while minimizing unnecessary expenses within your cloud environment. This process involves identifying cost-effective configuration options and implementing best practices to improve operational efficiency. An AKS environment can be optimized to minimize cost while taking into account performance and reliability requirements.
-When discussing cost optimization with Azure Kubernetes Service, it's important to distinguish between *cost of cluster resources* and *cost of workload resources*. Cluster resources are a shared responsibility between the cluster admin and their resource provider, while workload resources are the domain of a developer. Azure Kubernetes Service has considerations and recommendations for both of these roles.
+In this article, you learn about:
+> [!div class="checklist"]
+> * Strategic infrastructure selection
+> * Dynamic rightsizing and autoscaling
+> * Leveraging Azure discounts for substantial savings
+> * Holistic monitoring and FinOps practices
-## Design checklist
-> [!div class="checklist"]
-> - **Cluster architecture:** Use appropriate VM SKU per node pool and reserved instances where long-term capacity is expected.
-> - **Cluster and workload architectures:** Use appropriate managed disk tier and size.
-> - **Cluster architecture:** Review performance metrics, starting with CPU, memory, storage, and network, to identify cost optimization opportunities by cluster, nodes, and namespace.
-> - **Cluster and workload architecture:** Use autoscale features to scale in when workloads are less active.
-
-## Recommendations
-
-Explore the following table of recommendations to optimize your AKS configuration for cost.
-
-| Recommendation | Benefit |
-|-|--|
-|**Cluster architecture**: Utilize AKS cluster pre-set configurations. |From the Azure portal, the **cluster preset configurations** option helps offload this initial challenge by providing a set of recommended configurations that are cost-conscious and performant regardless of environment. Mission critical applications might require more sophisticated VM instances, while small development and test clusters might benefit from the lighter-weight, preset options where availability, Azure Monitor, Azure Policy, and other features are turned off by default. The **Dev/Test** and **Cost-optimized** pre-sets help remove unnecessary added costs.|
-|**Cluster architecture:** Consider using [ephemeral OS disks](concepts-storage.md#ephemeral-os-disk).|Ephemeral OS disks provide lower read/write latency, along with faster node scaling and cluster upgrades. Containers aren't designed to have local state persisted to the managed OS disk, and this behavior offers limited value to AKS. AKS defaults to an ephemeral OS disk if you chose the right VM series and the OS disk can fit in the VM cache or temporary storage SSD.|
-|**Cluster and workload architectures:** Use the [Start and Stop feature](start-stop-cluster.md) in Azure Kubernetes Services (AKS).|The AKS Stop and Start cluster feature allows AKS customers to pause an AKS cluster, saving time and cost. The stop and start feature keeps cluster configurations in place and customers can pick up where they left off without reconfiguring the clusters.|
-|**Workload architecture:** Consider using [Azure Spot VMs](spot-node-pool.md) for workloads that can handle interruptions, early terminations, and evictions.|For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates for you to schedule on a spot node pool. Using spot VMs for nodes with your AKS cluster allows you to take advantage of unused capacity in Azure at a significant cost savings.|
-|**Cluster architecture:** Enforce [resource quotas](operator-best-practices-scheduler.md) at the namespace level.|Resource quotas provide a way to reserve and limit resources across a development team or project. These quotas are defined on a namespace and can be used to set quotas on compute resources, storage resources, and object counts. When you define resource quotas, all pods created in the namespace must provide limits or requests in their pod specifications.|
-|**Cluster architecture:** Sign up for [Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md). | If you properly planned for capacity, your workload is predictable and exists for an extended period of time, sign up for [Azure Reserved Instances](../virtual-machines/prepay-reserved-vm-instances.md) to further reduce your resource costs.|
-|**Cluster architecture:** Use Kubernetes [Resource Quotas](operator-best-practices-scheduler.md#enforce-resource-quotas). | Resource quotas can be used to limit resource consumption for each namespace in your cluster, and by extension resource utilization for the Azure service.|
-|**Cluster and workload architectures:** Cost management using monitoring and observability tools. | OpenCost on AKS introduces a new community-driven [specification](https://github.com/opencost/opencost/blob/develop/spec/opencost-specv01.md) and implementation to bring greater visibility into current and historic Kubernetes spend and resource allocation. OpenCost, born out of [Kubecost](https://www.kubecost.com/), is an open-source, vendor-neutral [CNCF sandbox project](https://www.cncf.io/sandbox-projects/) that recently became a [FinOps Certified Solution](https://www.finops.org/partner-certifications/#finops-certified-solution). Customer specific prices are now included using the [Azure Consumption Price Sheet API](/rest/api/consumption/price-sheet), ensuring accurate cost reporting that accounts for consumption and savings plan discounts. For out-of-cluster analysis or to ingest allocation data into an existing BI pipeline, you can export a CSV with daily infrastructure cost breakdown by Kubernetes constructs (namespace, controller, service, pod, job and more) to your Azure Storage Account or local storage with minimal configuration. CSV also includes resource utilization metrics for CPU, GPU, memory, load balancers, and persistent volumes. For in-cluster visualization, OpenCost UI enables real-time cost drill down by Kubernetes constructs. Alternatively, directly query the OpenCost API to access cost allocation data. For more information on Azure specific integration, see [OpenCost docs](https://www.opencost.io/docs).|
-|**Cluster architecture:** Improve cluster operations efficiency.|Managing multiple clusters increases operational overhead for engineers. [AKS auto upgrade](auto-upgrade-cluster.md) and [AKS Node Auto-Repair](node-auto-repair.md) helps improve day-2 operations. Learn more about [best practices for AKS Operators](operator-best-practices-cluster-isolation.md).|
+## Prepare the application environment
-## Next steps
+### Evaluate SKU family
+It's important to evaluate the resource requirements of your application prior to deployment. Small development workloads have different infrastructure needs than large production-ready workloads. A combination of CPU, memory, and networking capacity configurations heavily influences the cost-effectiveness of a SKU, so consider the following VM types:
+
+- [**Azure Spot Virtual Machines**](/azure/virtual-machines/spot-vms) - [Spot node pools](./spot-node-pool.md) are backed by Azure Spot Virtual machine scale sets and deployed to a single fault domain with no high availability or SLA guarantees. Spot VMs allow you to take advantage of unutilized Azure capacity with significant discounts (up to 90% as compared to pay-as-you-go prices). If Azure needs capacity back, the Azure infrastructure evicts the Spot nodes. _Best for dev/test environments, workloads that can handle interruptions such as batch processing jobs, and workloads with flexible execution time._
+- [**Ampere Altra Arm-based processors (ARM64)**](https://azure.microsoft.com/blog/now-in-preview-azure-virtual-machines-with-ampere-altra-armbased-processors/) - ARM64 VMs are power-efficient and cost-effective but don't compromise on performance. With [ARM64 node pool support in AKS](./create-node-pools.md#arm64-node-pools), you can create ARM64 Ubuntu agent nodes and even mix Intel and ARM architecture nodes within a cluster. These ARM VMs are engineered to efficiently run dynamic, scalable workloads and can deliver up to 50% better price-performance than comparable x86-based VMs for scale-out workloads. _Best for web or application servers, open-source databases, cloud-native applications, gaming servers, and more._
+- [**GPU optimized SKUs**](/azure/virtual-machines/sizes) - Depending on the nature of your workload, consider using compute optimized, memory optimized, storage optimized, or even graphical processing unit (GPU) optimized VM SKUs. GPU VM sizes are specialized VMs that are available with single, multiple, and fractional GPUs. _[GPU-enabled Linux node pools on AKS](./gpu-cluster.md) are best for compute-intensive workloads like graphics rendering, large model training and inferencing._
+
+> [!NOTE]
+> The cost of compute varies across regions. When picking a less expensive region to run workloads, be conscious of the potential impact of latency as well as data transfer costs. To learn more about VM SKUs and their characteristics, see [Sizes for virtual machines in Azure](/azure/virtual-machines/sizes).
++
+### Use cluster preset configurations
+Picking the right VM SKU, regions, number of nodes, and other configuration options can be difficult upfront. [Cluster preset configurations](./quotas-skus-regions.md#cluster-configuration-presets-in-the-azure-portal) in the Azure portal offload this initial challenge by providing recommended configurations for different application environments that are cost-conscious and performant. The **Dev/Test** preset is best for developing new workloads or testing existing workloads. The **Production Economy** preset is best for serving production traffic in a cost-conscious way if your workloads can tolerate interruptions. Noncritical features are off by default, and the preset values can be modified at any time.
+
+### Consider multitenancy
+AKS offers flexibility in how you run multitenant clusters and isolate resources. For friendly multitenancy, clusters and infrastructure can be shared across teams and business units through [_logical isolation_](./operator-best-practices-cluster-isolation.md#logically-isolated-clusters). Kubernetes [Namespaces](./concepts-clusters-workloads.md#namespaces) form the logical isolation boundary for workloads and resources. Sharing infrastructure reduces cluster management overhead while also improving resource utilization and pod density within the cluster. To learn more about multitenancy on AKS and to determine if it's right for your organizational needs, see [AKS considerations for multitenancy](/azure/architecture/guide/multitenant/service/aks) and [Design clusters for multitenancy](./operator-best-practices-cluster-isolation.md#design-clusters-for-multi-tenancy).
+
+> [!WARNING]
+> Kubernetes environments aren't entirely safe for hostile multitenancy. If any tenant on the shared infrastructure can't be trusted, additional planning is needed to prevent tenants from impacting the security of other services.
+>
+> Consider [_physical isolation_](./operator-best-practices-cluster-isolation.md#physically-isolated-clusters) boundaries. In this model, teams or workloads are assigned to their own cluster. The tradeoff is added management and financial overhead.
+
+## Build cloud native applications
+
+### Make your container as lean as possible
+Making a container lean means optimizing the size and resource footprint of the containerized application. Check that your base image is minimal and only contains the necessary dependencies. Remove any unnecessary libraries and packages. A smaller container image accelerates deployment times and increases scaling operation efficiency. Going one step further, [Artifact Streaming on AKS](./artifact-streaming.md) allows you to stream container images from Azure Container Registry (ACR). It pulls only the necessary layer for initial pod startup, reducing the pull time for larger images from minutes to seconds.
+
+### Enforce resource quotas
+[Resource quotas](./operator-best-practices-scheduler.md#enforce-resource-quotas) provide a way to reserve and limit resources across a development team or project. Quotas are defined on a namespace and can be set on compute resources, storage resources, and object counts. When you define resource quotas, individual namespaces are prevented from consuming more resources than allocated. This is particularly important for multitenant clusters where teams share infrastructure.
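A hedged sketch of a namespace-level quota, using the official `kubernetes` Python client for consistency with the other examples in this digest; the namespace name and limits are illustrative, and the same object is more commonly applied as a YAML manifest with `kubectl`.

```python
# Sketch: reserve and cap a team's namespace with a ResourceQuota. Names and
# limits are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-a-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "4",
            "requests.memory": "8Gi",
            "limits.cpu": "8",
            "limits.memory": "16Gi",
            "pods": "20",
        }
    ),
)

client.CoreV1Api().create_namespaced_resource_quota(namespace="team-a", body=quota)
```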
+
+### Use cluster start stop
+Small development and test clusters, when left unattended, can realize large amounts of unnecessary spending. Turn off clusters that don't need to run at all times using [cluster start and stop](./start-stop-cluster.md?tabs=azure-cli). Doing so shuts down all system and user node pools so you aren't paying for extra compute. All objects and cluster state will be maintained when you start the cluster again.
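As a hedged illustration, the following sketch stops and later restarts a cluster with the `azure-mgmt-containerservice` Python SDK; the subscription, resource group, and cluster names are placeholders, and `az aks stop` / `az aks start` are equivalent CLI alternatives.

```python
# Sketch: pause a dev/test cluster outside working hours and resume it later.
# Subscription, resource group, and cluster names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

aks = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

# Stop the cluster: system and user node pools are shut down.
aks.managed_clusters.begin_stop("<resource-group>", "<cluster-name>").result()

# Start it again later: objects and cluster state are preserved.
aks.managed_clusters.begin_start("<resource-group>", "<cluster-name>").result()
```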
+
+### Use capacity reservations
+Capacity reservations allow you to reserve compute capacity in an Azure region or Availability Zone for any duration of time. Reserved capacity will be available for immediate use until the reservation is deleted. [Associating an existing capacity reservation group to a node pool](./manage-node-pools.md#associate-capacity-reservation-groups-to-node-pools) guarantees allocated capacity for your node pool and helps you avoid potential on-demand pricing spikes during periods of high compute demand.
++
+## Monitor your environment and spend
+
+### Increase visibility with Microsoft Cost Management
+[Microsoft Cost Management](/azure/cost-management-billing/cost-management-billing-overview) offers a broad set of capabilities to help with cloud budgeting, forecasting, and visibility for costs both inside and outside of the cluster. Proper visibility is essential for deciphering spending trends, identifying optimization opportunities, and increasing accountability amongst application developers and platform teams. Enable the [AKS Cost Analysis add-on](./cost-analysis.md) for granular cluster cost breakdown by Kubernetes constructs along with Azure Compute, Network, and Storage categories.
+
+### Azure Monitor
+If you're ingesting metric data via Container insights, we recommend migrating to managed Prometheus metrics, which offers a significant cost reduction. You can [disable Container insights metrics using the data collection rule (DCR)](/azure/azure-monitor/containers/container-insights-data-collection-dcr?tabs=portal) and deploy the [managed Prometheus add-on](./network-observability-managed-cli.md?tabs=non-cilium#azure-managed-prometheus-and-grafana), which supports configuration via Azure Resource Manager, Azure CLI, Azure portal, and Terraform.
+
+If you rely on log ingestion, we also recommend using the Basic Logs API to reduce Log Analytics costs. To learn more, see [Azure Monitor best practices](/azure/azure-monitor/best-practices-containers#cost-optimization) and [managing costs for Container insights](/azure/azure-monitor/containers/container-insights-cost).
-- [Azure Advisor recommendations](../advisor/advisor-cost-recommendations.md) for cost can highlight the over-provisioned services and ways to lower cost.-- Consider enabling [AKS cost analysis](./cost-analysis.md) to get granular insight into costs associated with Kubernetes resources across your clusters and namespaces. After you enable cost analysis, you can [explore and analyze costs](../cost-management-billing/costs/quick-acm-cost-analysis.md).+
+## Optimize workloads through autoscaling
+
+### Enable Application Autoscaling
+#### Vertical Pod Autoscaling
+Requests and limits that are significantly higher than actual usage can result in overprovisioned workloads and wasted resources. In contrast, requests and limits that are too low can result in throttling and workload issues due to lack of memory. [Vertical Pod Autoscaler (VPA)](./vertical-pod-autoscaler.md) allows you to fine-tune CPU and memory resources required by your pods. VPA provides recommended values for CPU and memory requests and limits based on historical container usage, which you can set manually or update automatically. _Best for applications with fluctuating resource demands._
+
+#### Horizontal Pod Autoscaling
+[Horizontal Pod Autoscaler (HPA)](./concepts-scale.md#horizontal-pod-autoscaler) dynamically scales the number of pod replicas based on an observed metric such as CPU or memory utilization. During periods of high demand, HPA scales out, adding more pod replicas to distribute the workload. During periods of low demand, HPA scales in, reducing the number of replicas to conserve resources. _Best for applications with predictable resource demands._
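As a hedged example of the scale-out behavior described above, this sketch creates a CPU-based HPA with the `kubernetes` Python client; the deployment name, namespace, and thresholds are illustrative, and the object is usually applied as a YAML manifest instead.

```python
# Sketch: keep 2-10 replicas of a deployment, scaling out above 70% average CPU.
# Deployment name, namespace, and thresholds are illustrative only.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```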
+
+> [!WARNING]
+> You shouldn't use the VPA in conjunction with the HPA on the same CPU or memory metrics. This combination can lead to conflicts, as both autoscalers attempt to respond to changes in demand using the same metrics. However, you can use the VPA for CPU or memory in conjunction with the HPA for custom metrics to prevent overlap and ensure that each autoscaler focuses on distinct aspects of workload scaling.
+
+#### Kubernetes Event-driven Autoscaling
+The [Kubernetes Event-driven Autoscaler (KEDA) add-on](./keda-about.md) provides additional flexibility to scale based on various event-driven metrics that align with your application behavior. For example, for a web application, KEDA can monitor incoming HTTP request traffic and adjust the number of pod replicas to ensure the application remains responsive. For processing jobs, KEDA can scale the application based on message queue length. Managed support is provided for all [Azure Scalers](https://keda.sh/docs/2.13/scalers/).
+
+### Enable Infrastructure Autoscaling
+#### Cluster Autoscaling
+To keep up with application demand, [Cluster Autoscaler](./cluster-autoscaler-overview.md) watches for pods that can't be scheduled due to resource constraints and scales the number of nodes in the node pool accordingly. When nodes don't have running pods, Cluster Autoscaler scales down the number of nodes. Note that Cluster Autoscaler profile settings apply to all autoscaler-enabled node pools in the cluster. To learn more, see [Cluster Autoscaler best practices and considerations](./cluster-autoscaler-overview.md#best-practices-and-considerations).
+
+#### Node Autoprovisioning
+Complicated workloads may require several node pools with different VM size configurations to accommodate CPU and memory requirements. Accurately selecting and managing several node pool configurations adds complexity and operational overhead. [Node Autoprovision (NAP)](./node-autoprovision.md?tabs=azure-cli) simplifies the SKU selection process and decides, based on pending pod resource requirements, the optimal VM configuration to run workloads in the most efficient and cost-effective manner.
+
+> [!NOTE]
+> Refer to [Performance and scaling for small to medium workloads in Azure Kubernetes Service (AKS)](./best-practices-performance-scale.md) and [Performance and scaling best practices for large workloads in Azure Kubernetes Service (AKS)](./best-practices-performance-scale-large.md) for additional scaling best practices.
++
+## Save with Azure discounts
+
+### Azure Reservations
+If your workload is predictable and exists for an extended period of time, consider purchasing an [Azure Reservation](/azure/cost-management-billing/reservations/save-compute-costs-reservations) to further reduce your resource costs. Azure Reservations operate on a one-year or three-year term, offering up to 72% discount as compared to pay-as-you-go prices for compute. Reservations automatically apply to matching resources. _Best for workloads that are committed to running in the same SKUs and regions over an extended period of time._
+
+### Azure Savings Plan
+If you have consistent spend but your use of disparate resources across SKUs and regions makes Azure Reservations infeasible, consider purchasing an [Azure Savings Plan](/azure/cost-management-billing/savings-plan/savings-plan-compute-overview). Like Azure Reservations, Azure Savings Plans operate on a one-year or three-year term and automatically apply to any resources within benefit scope. You commit to spend a fixed hourly amount on compute resources irrespective of SKU or region. _Best for workloads that utilize different resources and/or different datacenter regions._
+
+### Azure Hybrid Benefit
+[Azure Hybrid Benefit for Azure Kubernetes Service (AKS)](./azure-hybrid-benefit.md) allows you to maximize your on-premises licenses at no additional cost. Use any qualifying on-premises licenses that also have an active Software Assurance (SA) or a qualifying subscription to get Windows VMs on Azure at a reduced cost.
++
+## Embrace FinOps to build a cost saving culture
+[Financial operations (FinOps)](https://www.finops.org/introduction/what-is-finops/) is a discipline that combines financial accountability with cloud management and optimization. It focuses on driving alignment between finance, operations, and engineering teams to understand and control cloud costs. The FinOps Foundation has released several notable projects:
+- [FinOps Framework](https://finops.org/framework) - an operating model for how to practice and implement FinOps.
+- [FOCUS Specification](https://focus.finops.org/) - a technical specification and open standard for cloud usage, cost, and billing data across all major cloud provider services.
++
+## Next steps
+Cost optimization is an ongoing and iterative effort. Learn more by reviewing the following recommendations and architecture guidance:
+* [Microsoft Azure Well-Architected Framework for AKS - Cost Optimization Design Principles](/azure/architecture/framework/services/compute/azure-kubernetes-service/azure-kubernetes-service#cost-optimization)
+* [Baseline Architecture Guide for AKS](/azure/architecture/reference-architectures/containers/aks/baseline-aks)
+* [Optimize Compute Costs on AKS](/training/modules/aks-optimize-compute-costs/)
+* [AKS Cost Optimization Techniques](https://techcommunity.microsoft.com/t5/apps-on-azure-blog/azure-kubernetes-service-aks-cost-optimization-techniques/ba-p/3652908)
+* [What is FinOps?](/azure/cost-management-billing/finops/)
aks Concepts Sustainable Software Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-sustainable-software-engineering.md
A service mesh deploys extra containers for communication, typically in a [sidec
Sending and storing all logs from all possible sources (workloads, services, diagnostics, and platform activity) can increase storage and network traffic, which impacts costs and carbon emissions.
-* Make sure you're collecting and retaining only the necessary log data to support your requirements. [Configure data collection rules for your AKS workloads](../azure-monitor/containers/container-insights-data-collection-configmap.md#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](/azure/architecture/framework/services/monitoring/log-analytics/cost-optimization).
+* Make sure you're collecting and retaining only the necessary log data to support your requirements. [Configure data collection rules for your AKS workloads](../azure-monitor/containers/container-insights-data-collection-configmap.md#data-collection-settings) and implement design considerations for [optimizing your Log Analytics costs](../azure-monitor/best-practices-cost.md).
### Cache static data
aks Use Oidc Issuer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-oidc-issuer.md
Title: Create an OpenID Connect provider for your Azure Kubernetes Service (AKS)
description: Learn how to configure the OpenID Connect (OIDC) provider for a cluster in Azure Kubernetes Service (AKS) Previously updated : 11/10/2023 Last updated : 03/04/2024 # Create an OpenID Connect provider on Azure Kubernetes Service (AKS)
In this article, you learn how to create, update, and manage the OIDC Issuer for
> [!IMPORTANT] > After enabling OIDC issuer on the cluster, it's not supported to disable it.
+> [!IMPORTANT]
+> The token needs to be refreshed periodically. If you use the [SDK][sdk], rotation is automatic; otherwise, you need to refresh the token manually every 24 hours.
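As a hedged illustration of the SDK-managed rotation mentioned in the note, this sketch uses the `azure-identity` Python package's `WorkloadIdentityCredential`, which reads the projected service account token file and refreshes access tokens before they expire; the scope is a placeholder.

```python
# Sketch: let the client library handle token refresh. WorkloadIdentityCredential
# reads AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_FEDERATED_TOKEN_FILE injected
# by the workload identity webhook; the scope below is a placeholder.
from azure.identity import WorkloadIdentityCredential

credential = WorkloadIdentityCredential()
access_token = credential.get_token("https://management.azure.com/.default")
print(access_token.expires_on)  # later calls transparently return a refreshed token
```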
+ ## Prerequisites * The Azure CLI version 2.42.0 or higher. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
During key rotation, there's one other key present in the discovery document.
<!-- LINKS - internal --> [open-id-connect-overview]: ../active-directory/fundamentals/auth-oidc.md
+[sdk]: workload-identity-overview.md#azure-identity-client-libraries
[azure-cli-install]: /cli/azure/install-azure-cli [az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-update]: /cli/azure/aks#az-aks-update
app-service Configure Common https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-common.md
At runtime, connection strings are available as environment variables, prefixed
* Custom: `CUSTOMCONNSTR_` * PostgreSQL: `POSTGRESQLCONNSTR_`
+>[!Note]
+> .NET apps targeting PostgreSQL should set the connection string to **Custom** as a workaround for a [known issue in .NET EnvironmentVariablesConfigurationProvider](https://github.com/dotnet/runtime/issues/36123).
+>
+ For example, a MySQL connection string named *connectionstring1* can be accessed as the environment variable `MYSQLCONNSTR_connectionString1`. For language-stack specific steps, see: - [ASP.NET Core](configure-language-dotnetcore.md#access-environment-variables)
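As a minimal sketch (not from the article; the connection string names are hypothetical), the following C# snippet reads these prefixed environment variables at runtime:

```csharp
using System;

class Program
{
    static void Main()
    {
        // App Service surfaces connection strings as prefixed environment variables.
        // A MySQL connection string named "connectionString1" becomes MYSQLCONNSTR_connectionString1.
        string mySql = Environment.GetEnvironmentVariable("MYSQLCONNSTR_connectionString1");

        // Per the note above, a PostgreSQL connection string saved with the Custom type
        // is surfaced with the CUSTOMCONNSTR_ prefix instead (the name "postgresdb" is hypothetical).
        string postgres = Environment.GetEnvironmentVariable("CUSTOMCONNSTR_postgresdb");

        Console.WriteLine(mySql ?? "MYSQLCONNSTR_connectionString1 is not set");
        Console.WriteLine(postgres ?? "CUSTOMCONNSTR_postgresdb is not set");
    }
}
```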
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
adobe-target-experience: Experience B
adobe-target-content: ./quickstart-dotnetcore-uiex -+ ai-usage: ai-assisted
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Last updated 11/30/2023 -+ zone_pivot_groups: app-service-portal-azd
azure-functions Migrate Cosmos Db Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-cosmos-db-version-3-version-4.md
description: This article shows you how to upgrade your existing function apps u
Previously updated : 10/05/2023 Last updated : 03/04/2024 zone_pivot_groups: programming-languages-set-functions-lang-workers
namespace CosmosDBSamples
} ```
+> [!NOTE]
+> If your scenario relied on the dynamic nature of the `Document` type to identify different schemas and types of events, you can use a base abstract type with the common properties across your types, or dynamic types like `JObject` that allow you to access properties like `Document` did.
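As a hedged illustration of that guidance (the type and property names here are hypothetical, not from the article), one option is a shared abstract base type for the common properties, with `JObject` as the dynamic fallback:

```csharp
using Newtonsoft.Json.Linq;

// Hypothetical base type holding the properties shared by all event documents.
public abstract class EventBase
{
    public string id { get; set; }
    public string eventType { get; set; }
}

// One concrete schema; add one class per event type you expect.
public class OrderCreated : EventBase
{
    public decimal total { get; set; }
}

public static class EventInspector
{
    // Alternatively, bind to JObject and inspect the document shape at run time,
    // similar to how the dynamic Document type was used before.
    public static void Process(JObject document)
    {
        if ((string)document["eventType"] == "OrderCreated")
        {
            decimal? total = (decimal?)document["total"];
            // Handle the OrderCreated event here.
        }
    }
}
```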
+ ::: zone-end ::: zone pivot="programming-language-javascript,programming-language-python,programming-language-java,programming-language-powershell"
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend always updating to the latest version, or opting in to the
| Release Date | Release notes | Windows | Linux | |:|:|:|:| | February 2024 | **Windows**<ul><li>Fix memory leak in IIS log collection</li><li>Fix json parsing with unicode characters for some ingestion endpoints</li><li>Allow Client installer to run on AVD DevBox partner</li><li>Enable TLS 1.3 on supported Windows versions</li><li>Enable Agent Side Aggregation for Private Preview</li><li>Update MetricsExtension package to 2.2024.202.2043</li><li>Update AzureSecurityPack.Geneva package to 4.31</li></ul>**Linux**<ul><li></li></ul> | 1.24.0 | Coming soon |
-| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 won't install on Arc enabled servers. **This issue was fixed in 1.29.6**</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Syslog time zones incorrect: AMA now uses machine current time when AMA receives an event to populate the TimeGenerated field. The previous behavior parsed the time zone from the Syslog event which caused incorrect times if a device sent an event from a time zone different than the AMA collector machine.</li><li>Reduced noise generated by AMAs' use of semanage when SELinux is enabled"</li></ul> | 1.23.0 | 1.29.5, 1.29.6 |
+| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 won't install on Arc enabled servers. **This issue was fixed in 1.29.6**</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Reduced noise generated by AMAs' use of semanage when SELinux is enabled"</li></ul> | 1.23.0 | 1.29.5, 1.29.6 |
| December 2023 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.4 won't install on Arc enabled servers. Fix is coming in 1.29.6.</li><li>Multiple IIS subscriptions causes a memory leak. feature reverted in 1.23.0.</ul>**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4| | October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multitenant mode</li><li>AMA installer won't install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11| | September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (also known as GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None |
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based resources:
> - Are available in all commercial regions and [Azure US Government](../../azure-government/index.yml). > - Don't require changing instrumentation keys after migration from a classic resource. - > [!IMPORTANT]
-> * On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation.
+> * On February 29, 2024, Continuous Export was retired as part of the classic Application Insights resource retirement.
> > * [Workspace-based Application Insights resources](./create-workspace-resource.md) are not compatible with continuous export. We recommend migrating to [diagnostic settings](../essentials/diagnostic-settings.md) on classic Application Insights resources before transitioning to a workspace-based Application Insights. This ensures continuity and compatibility of your diagnostic settings. >
Update-AzApplicationInsights -Name "aiName" -ResourceGroupName "rgName" -Ingesti
### Azure Resource Manager templates
-This section provides templates.
+This section provides templates.
+
+ > [!CAUTION]
+ > Ensure that you have removed all Continuous Export settings from your resource before running the migration templates. See [Prerequisites](#prerequisites).
#### Template file
This section provides answers to common questions.
Microsoft will begin an automatic phased migration of classic resources to workspace-based resources in May 2024; the migration will span several months. We can't provide approximate dates that specific resources, subscriptions, or regions will be migrated.
-We strongly encourage manual migration to workspace-based resources, which is initiated by selecting the deprecation notice banner in the classic Application Insights resource Overview pane of the Azure portal. This process typically involves a single step of choosing which Log Analytics workspace will be used to store your application data. If you use continuous export, you'll need to additionally migrate to diagnostic settings or disable the feature first.
+We strongly encourage manual migration to workspace-based resources, which is initiated by selecting the retirement notice banner in the classic Application Insights resource Overview pane of the Azure portal. This process typically involves a single step of choosing which Log Analytics workspace will be used to store your application data. If you use continuous export, you'll need to additionally migrate to diagnostic settings or disable the feature first.
If you don't wish to have your classic resource automatically migrated to a workspace-based resource, you may delete or manually migrate the resource.
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
# Sampling overrides (preview) - Azure Monitor Application Insights for Java > [!NOTE]
-> The sampling overrides feature is in preview, starting from 3.0.3.
+> The sampling overrides feature is generally available (GA) starting from version 3.5.0.
Sampling overrides allow you to override the [default sampling percentage](./java-standalone-config.md#sampling), for example:
To begin, create a configuration file named *applicationinsights.json*. Save it
"connectionString": "...", "sampling": { "percentage": 10
- },
- "preview": {
- "sampling": {
- "overrides": [
- {
- "telemetryType": "request",
- "attributes": [
- ...
- ],
- "percentage": 0
- },
- {
- "telemetryType": "request",
- "attributes": [
- ...
- ],
- "percentage": 100
- }
- ]
- }
+ "overrides": [
+ {
+ "telemetryType": "request",
+ "attributes": [
+ ...
+ ],
+ "percentage": 0
+ },
+ {
+ "telemetryType": "request",
+ "attributes": [
+ ...
+ ],
+ "percentage": 100
+ }
+ ]
} } ```
This example also suppresses collecting any downstream spans (dependencies) that
```json { "connectionString": "...",
- "preview": {
- "sampling": {
- "overrides": [
- {
- "telemetryType": "request",
- "attributes": [
- {
- "key": "http.url",
- "value": "https?://[^/]+/health-check",
- "matchType": "regexp"
- }
- ],
- "percentage": 0
- }
- ]
- }
+ "sampling": {
+ "overrides": [
+ {
+ "telemetryType": "request",
+ "attributes": [
+ {
+ "key": "url.path",
+ "value": "/health-check",
+ "matchType": "strict"
+ }
+ ],
+ "percentage": 0
+ }
+ ]
} } ```
This example suppresses collecting telemetry for all `GET my-noisy-key` redis ca
```json { "connectionString": "...",
- "preview": {
- "sampling": {
- "overrides": [
- {
- "telemetryType": "dependency",
- "attributes": [
- {
- "key": "db.system",
- "value": "redis",
- "matchType": "strict"
- },
- {
- "key": "db.statement",
- "value": "GET my-noisy-key",
- "matchType": "strict"
- }
- ],
- "percentage": 0
- }
- ]
- }
+ "sampling": {
+ "overrides": [
+ {
+ "telemetryType": "dependency",
+ "attributes": [
+ {
+ "key": "db.system",
+ "value": "redis",
+ "matchType": "strict"
+ },
+ {
+ "key": "db.statement",
+ "value": "GET my-noisy-key",
+ "matchType": "strict"
+ }
+ ],
+ "percentage": 0
+ }
+ ]
} } ```
those are also collected for all '/login' requests.
"sampling": { "percentage": 10 },
- "preview": {
- "sampling": {
- "overrides": [
- {
- "telemetryType": "request",
- "attributes": [
- {
- "key": "http.url",
- "value": "https?://[^/]+/login",
- "matchType": "regexp"
- }
- ],
- "percentage": 100
- }
- ]
- }
+ "sampling": {
+ "overrides": [
+ {
+ "telemetryType": "request",
+ "attributes": [
+ {
+ "key": "url.path",
+ "value": "/login",
+ "matchType": "strict"
+ }
+ ],
+ "percentage": 100
+ }
+ ]
} } ```
azure-monitor Java Standalone Telemetry Processors Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors-examples.md
This section shows how to include spans for an attribute processor. The processo
A match requires the span name to be equal to `spanA` or `spanB`. These spans match the include properties, and the processor actions are applied:
-* Span1 Name: 'spanA' Attributes: {env: dev, test_request: 123, credit_card: 1234}
-* Span2 Name: 'spanB' Attributes: {env: dev, test_request: false}
-* Span3 Name: 'spanA' Attributes: {env: 1, test_request: dev, credit_card: 1234}
+* `Span1` Name: 'spanA' Attributes: {env: dev, test_request: 123, credit_card: 1234}
+* `Span2` Name: 'spanB' Attributes: {env: dev, test_request: false}
+* `Span3` Name: 'spanA' Attributes: {env: 1, test_request: dev, credit_card: 1234}
This span doesn't match the include properties, and the processor actions aren't applied: * `Span4` Name: 'spanC' Attributes: {env: dev, test_request: false}
This section demonstrates how to exclude spans for an attribute processor. This
A match requires the span name to be equal to `spanA` or `spanB`. The following spans match the exclude properties, and the processor actions aren't applied:
-* Span1 Name: 'spanA' Attributes: {env: dev, test_request: 123, credit_card: 1234}
-* Span2 Name: 'spanB' Attributes: {env: dev, test_request: false}
-* Span3 Name: 'spanA' Attributes: {env: 1, test_request: dev, credit_card: 1234}
+* `Span1` Name: 'spanA' Attributes: {env: dev, test_request: 123, credit_card: 1234}
+* `Span2` Name: 'spanB' Attributes: {env: dev, test_request: false}
+* `Span3` Name: 'spanA' Attributes: {env: 1, test_request: dev, credit_card: 1234}
This span doesn't match the exclude properties, and the processor actions are applied: * `Span4` Name: 'spanC' Attributes: {env: dev, test_request: false}
A match requires the following conditions to be met:
* The span must have an attribute that has key `test_request`. The following spans match the exclude properties, and the processor actions aren't applied.
-* Span1 Name: 'spanB' Attributes: {env: dev, test_request: 123, credit_card: 1234}
-* Span2 Name: 'spanA' Attributes: {env: dev, test_request: false}
+* `Span1` Name: 'spanB' Attributes: {env: dev, test_request: 123, credit_card: 1234}
+* `Span2` Name: 'spanA' Attributes: {env: dev, test_request: false}
The following span doesn't match the exclude properties, and the processor actions are applied:
-* Span3 Name: 'spanB' Attributes: {env: 1, test_request: dev, credit_card: 1234}
+* `Span3` Name: 'spanB' Attributes: {env: 1, test_request: dev, credit_card: 1234}
* `Span4` Name: 'spanC' Attributes: {env: dev, dev_request: false}
properties indicate which spans should be processed. The exclude properties filt
In the following configuration, these spans match the properties, and processor actions are applied:
-* Span1 Name: 'spanB' Attributes: {env: production, test_request: 123, credit_card: 1234, redact_trace: "false"}
-* Span2 Name: 'spanA' Attributes: {env: staging, test_request: false, redact_trace: true}
+* `Span1` Name: 'spanB' Attributes: {env: production, test_request: 123, credit_card: 1234, redact_trace: "false"}
+* `Span2` Name: 'spanA' Attributes: {env: staging, test_request: false, redact_trace: true}
These spans don't match the include properties, and processor actions aren't applied:
-* Span3 Name: 'spanB' Attributes: {env: production, test_request: true, credit_card: 1234, redact_trace: false}
+* `Span3` Name: 'spanB' Attributes: {env: production, test_request: true, credit_card: 1234, redact_trace: false}
* `Span4` Name: 'spanC' Attributes: {env: dev, test_request: false} ```json
The following sample shows how to hash existing attribute values.
### Extract The following sample shows how to use a regular expression (regex) to create new attributes based on the value of another attribute.
-For example, given `http.url = http://example.com/path?queryParam1=value1,queryParam2=value2`, the following attributes are inserted:
+For example, given `url.full = http://example.com/path?queryParam1=value1,queryParam2=value2`, the following attributes are inserted:
* httpProtocol: `http` * httpDomain: `example.com` * httpPath: `path` * httpQueryParams: `queryParam1=value1,queryParam2=value2`
-* http.url: *no* change
+* url.full: *no* change
```json {
For example, given `http.url = http://example.com/path?queryParam1=value1,queryP
"type": "attribute", "actions": [ {
- "key": "http.url",
+ "key": "url.path",
"pattern": "^(?<httpProtocol>.*):\\/\\/(?<httpDomain>.*)\\/(?<httpPath>.*)(\\?|\\&)(?<httpQueryParams>.*)", "action": "extract" }
For example, given `http.url = http://example.com/path?queryParam1=value1,queryP
### Mask
-For example, given `http.url = https://example.com/user/12345622` is updated to `http.url = https://example.com/user/****` using either of the below configurations.
+For example, `url.full = https://example.com/user/12345622` is updated to `url.full = https://example.com/user/****` using either of the following configurations.
First configuration example:
First configuration example:
"type": "attribute", "actions": [ {
- "key": "http.url",
+ "key": "url.path",
"pattern": "user\\/\\d+", "replace": "user\\/****", "action": "mask"
Second configuration example with regular expression group name:
"type": "attribute", "actions": [ {
- "key": "http.url",
+ "key": "url.path",
"pattern": "^(?<userGroupName>[a-zA-Z.:\/]+)\d+", "replace": "${userGroupName}**", "action": "mask"
Second configuration example with regular expression group name:
} } ```
-### Non-string typed attributes samples
+### Nonstring typed attributes samples
-Starting 3.4.19 GA, telemetry processors support non-string typed attributes:
+Starting from 3.4.19 GA, telemetry processors support nonstring typed attributes:
`boolean`, `double`, `long`, `boolean-array`, `double-array`, `long-array`, and `string-array`. When `attributes.type` isn't provided in the JSON, it defaults to `string`.
The following sample inserts the new attribute `{"newAttributeKeyStrict": "newAt
```
-Additionally, non-string typed attributes support `regexp`.
+Additionally, nonstring typed attributes support `regexp`.
The following sample inserts the new attribute `{"newAttributeKeyRegexp": "newAttributeValueRegexp"}` into spans and logs where the attribute `longRegexpAttributeKey` matches the value from `400` to `499`.
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
This section lists some common span attributes that telemetry processors can use
| Attribute | Type | Description | ||||
-| `http.method` | string | HTTP request method.|
-| `http.url` | string | Full HTTP request URL in the form `scheme://host[:port]/path?query[#fragment]`. The fragment isn't usually transmitted over HTTP. But if the fragment is known, it should be included.|
-| `http.status_code` | number | [HTTP response status code](https://tools.ietf.org/html/rfc7231#section-6).|
-| `http.flavor` | string | Type of HTTP protocol. |
-| `http.user_agent` | string | Value of the [HTTP User-Agent](https://tools.ietf.org/html/rfc7231#section-5.5.3) header sent by the client. |
+| `http.request.method` (used to be `http.method`) | string | HTTP request method.|
+| `url.full` (client span) or `url.path` (server span) (used to be `http.url`) | string | Full HTTP request URL in the form `scheme://host[:port]/path?query[#fragment]`. The fragment isn't usually transmitted over HTTP. But if the fragment is known, it should be included.|
+| `http.response.status_code` (used to be `http.status_code`) | number | [HTTP response status code](https://tools.ietf.org/html/rfc7231#section-6).|
+| `network.protocol.version` (used to be `http.flavor`) | string | Type of HTTP protocol. |
+| `user_agent.original` (used to be `http.user_agent`) | string | Value of the [HTTP User-Agent](https://tools.ietf.org/html/rfc7231#section-5.5.3) header sent by the client. |
### JDBC spans
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
The telemetry processors perform the following actions (in order):
which means it applies to all telemetry that has attributes (currently `requests` and `dependencies`, but soon also `traces`).
- It matches any telemetry that has attributes named `http.method` and `http.url`.
+ It matches any telemetry that has attributes named `http.request.method` and `url.path`.
- Then it extracts the path portion of the `http.url` attribute into a new attribute named `tempName`.
+ Then it extracts the `url.path` attribute into a new attribute named `tempPath`.
2. The second telemetry processor is a span processor (has type `span`), which means it applies to `requests` and `dependencies`.
The telemetry processors perform the following actions (in order):
"include": { "matchType": "strict", "attributes": [
- { "key": "http.method" },
- { "key": "http.url" }
+ { "key": "http.request.method" },
+ { "key": "url.path" }
] }, "actions": [ {
- "key": "http.url",
+ "key": "url.path",
"pattern": "https?://[^/]+(?<tempPath>/[^?]*)", "action": "extract" }
The telemetry processors perform the following actions (in order):
] }, "name": {
- "fromAttributes": [ "http.method", "tempPath" ],
+ "fromAttributes": [ "http.request.method", "tempPath" ],
"separator": " " } },
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
Use a custom processor:
from azure.monitor.opentelemetry import configure_azure_monitor from opentelemetry import trace
+# Create a SpanEnrichingProcessor instance.
+span_enrich_processor = SpanEnrichingProcessor()
+ # Configure OpenTelemetry to use Azure Monitor with the specified connection string. # Replace `<your-connection-string>` with the connection string to your Azure Monitor Application Insights resource. configure_azure_monitor( connection_string="<your-connection-string>",
+ # Configure the custom span processors to include span enrich processor.
+ span_processors=[span_enrich_processor],
)
-# Create a SpanEnrichingProcessor instance.
-span_enrich_processor = SpanEnrichingProcessor()
-
-# Add the span enrich processor to the current TracerProvider.
-trace.get_tracer_provider().add_span_processor(span_enrich_processor)
... ```
-Add `SpanEnrichingProcessor.py` to your project with the following code:
+Add `SpanEnrichingProcessor` to your project with the following code:
```python # Import the SpanProcessor class from the opentelemetry.sdk.trace module.
Use the add [custom property example](#add-a-custom-property-to-a-span), but rep
# Replace `<your-connection-string>` with the connection string to your Azure Monitor Application Insights resource. configure_azure_monitor( connection_string="<your-connection-string>",
+ # Configure the custom span processors to include span filter processor.
+ span_processors=[span_filter_processor],
)
-
- # Add a SpanFilteringProcessor to the tracer provider.
- trace.get_tracer_provider().add_span_processor(SpanFilteringProcessor())
+ ... ```
- Add `SpanFilteringProcessor.py` to your project with the following code:
+ Add `SpanFilteringProcessor` to your project with the following code:
```python # Import the necessary libraries.
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Follow the steps in this section to instrument your application with OpenTelemet
### [Python](#tab/python) -- Python Application using Python 3.7+
+- Python Application using Python 3.8+
Download the [applicationinsights-agent-3.5.0.jar](https://github.com/microsoft/
> [!WARNING] > > If you are upgrading from an earlier 3.x version, you may be impacted by changing defaults or slight differences in the data we collect. For more information, see the migration section in the release notes.
+> [3.5.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.5.0),
> [3.4.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.4.0), > [3.3.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.3.0), > [3.2.0](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0), and
azure-monitor Vminsights Enable Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md
The following table shows the installation methods available for enabling VM Ins
| [Azure portal](vminsights-enable-portal.md) | Enable individual machines with the Azure portal. | | [Azure Policy](vminsights-enable-policy.md) | Create policy to automatically enable when a supported machine is created. | | [Azure Resource Manager templates](../vm/vminsights-enable-resource-manager.md) | Enable multiple machines by using any of the supported methods to deploy a Resource Manager template, such as the Azure CLI and PowerShell. |
-| [PowerShell](vminsights-enable-powershell.md) | Use a PowerShell script to enable multiple machines. Currently only supported for Log Analytics agent. |
+| [PowerShell](vminsights-enable-powershell.md) | Use a PowerShell script to enable multiple machines. |
| [Manual install](vminsights-enable-hybrid.md) | Virtual machines or physical computers on-premises or in other cloud environments.| ### Supported Azure Arc machines
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
The following table describes resource limits for Azure NetApp Files:
| Maximum number of files in a single directory | *Approximately* 4 million. <br> See [Determine if a directory is approaching the limit size](#directory-limit). | No | | Maximum number of files [`maxfiles`](#maxfiles) per volume | 106,255,630 | Yes | | Maximum number of export policy rules per volume | 5 | No |
-| Maximum number of quota rules per volume | 100 | Yes |
+| Maximum number of quota rules per volume | 100 | No |
| Minimum assigned throughput for a manual QoS volume | 1 MiB/s | No | | Maximum assigned throughput for a manual QoS volume | 4,500 MiB/s | No | | Number of cross-region replication data protection volumes (destination volumes) | 50 | Yes |
azure-netapp-files Configure Network Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-network-features.md
The edit network features option is available in [all regions that support Stand
> ``` > [!NOTE]
-> You can also revert the option from *Standard* back to *Basic* network features. However, before performing the revert operation, you need to submit a waitlist request through the **[Azure NetApp Files standard networking features (edit volumes) Request Form](https://aka.ms/anfeditnetworkfeaturespreview)**. The revert capability can take approximately one week to be enabled after you submit the waitlist request. You can check the status of the registration by using the following command:
+> You can also revert the option from *Standard* back to *Basic* network features. However, before performing the revert operation, you need to submit a waitlist request through the **[Azure NetApp Files standard networking features (edit volumes) Request Form](https://aka.ms/anfeditnetworkfeatures)**. The revert capability can take approximately one week to be enabled after you submit the waitlist request. You can check the status of the registration by using the following command:
> > ```azurepowershell-interactive > Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFStdToBasicNetworkFeaturesRevert
azure-netapp-files Convert Nfsv3 Nfsv41 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/convert-nfsv3-nfsv41.md
Converting a volume between NFSv3 and NFSv4.1 does not require that you create a
> * Before conversion, you need to unmount the volume from all clients. This operation might require shutdown of your applications that access the volume. > * After successful volume conversion, you need to reconfigure each of the clients that access the volume before you can remount the volume. >
-> If you convert from NFSv4.1 to NFSv3, all advanced NFSv4.1 features such as Access Control Lists (ACLs) and file locking will become unavailable.
+> If you convert from NFSv4.1 to NFSv3, all advanced NFSv4.1 features such as Access Control Lists (ACLs) and file locking become unavailable.
## Considerations
Converting a volume between NFSv3 and NFSv4.1 does not require that you create a
* You cannot convert a single-protocol NFS volume to a dual-protocol volume, or the other way around. * You cannot convert a destination volume in a cross-region replication relationship. * Converting an NFSv4.1 volume to NFSv3 will cause all advanced NFSv4.1 features such as ACLs and file locking to become unavailable.
-* Converting a volume from NFSv3 to NFSv4.1 will cause the `.snapshot` directory to be hidden from NFSv4.1 clients. The directory will still be accessible.
-* Converting a volume from NFSv4.1 to NFSv3 will cause the `.snapshot` directory to be visible. You can modify the properties of the volume to [hide the snapshot path](snapshots-edit-hide-path.md).
+* Converting a volume from NFSv3 to NFSv4.1 causes the `.snapshot` directory to be hidden from NFSv4.1 clients. The directory remains accessible.
+* Converting a volume from NFSv4.1 to NFSv3 causes the `.snapshot` directory to be visible. You can modify the properties of the volume to [hide the snapshot path](snapshots-edit-hide-path.md).
## Register the option
In this example, you have an existing NFSv4.1 volume that you want to convert to
This section shows you how to convert the NFSv4.1 volume to NFSv3. > [!IMPORTANT]
-> Converting a volume from NFSv4.1 to NFSv3 will result in all NFSv4.1 features such as ACLs and file locking to become unavailable.
+> Converting a volume from NFSv4.1 to NFSv3 results in all NFSv4.1 features, such as ACLs and file locking, becoming unavailable.
1. Before converting the volume: 1. Unmount it from the clients in preparation. See [Mount or unmount a volume](azure-netapp-files-mount-unmount-volumes-for-virtual-machines.md).
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
If you're using a custom RBAC role or the [built-in Contributor role](../role-ba
* `Microsoft.NetApp/locations/{location}/quotaLimits` * `Microsoft.NetApp/locations/{location}/quotaLimits/{quotaLimitName}` * `Microsoft.NetApp/locations/{location}/regionInfo`
+* `Microsoft.NetApp/locations/{location}/regionInfos`
* `Microsoft.NetApp/locations/{location}/queryNetworkSiblingSet` * `Microsoft.NetApp/locations/{location}/updateNetworkSiblingSet`
azure-netapp-files Manage Default Individual User Group Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-default-individual-user-group-quotas.md
Quota rules only come into effect on the CRR/CZR destination volume after the re
* A quota rule is specific to a volume and is applied to an existing volume. * Deleting a volume results in deleting all the associated quota rules for that volume.
-* You can create a maximum number of 100 quota rules for a volume. You can [request limit increase](azure-netapp-files-resource-limits.md#request-limit-increase) through the portal.
+* You can create a maximum of 100 quota rules for a volume.
* Azure NetApp Files doesn't support individual group quota and default group quota for SMB and dual protocol volumes. * Group quotas track the consumption of disk space for files owned by a particular group. A file can only be owned by exactly one group. * Auxiliary groups only help in permission checks. You can't use auxiliary groups to restrict the quota (disk space) for a file.
azure-portal Azure Portal Add Remove Sort Favorites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-add-remove-sort-favorites.md
Title: Manage favorites in Azure portal description: Learn how to add or remove services from the Favorites list. Previously updated : 09/27/2023 Last updated : 03/04/2024
In this example, we'll add **Cost Management + Billing** to the **Favorites** li
1. **Cost Management + Billing** is now added as the last item in your **Favorites** list.
+## Rearrange your favorite services
+
+When you add a new service to your **Favorites** list, it appears as the last item in the list. To move it to a different position, select the new service, then drag and drop it to the desired location.
+
+You can continue to drag and drop any service in your **Favorites** list to place them in the order you choose.
+ ## Remove an item from Favorites You can remove items directly from the **Favorites** list.
You can remove items directly from the **Favorites** list.
:::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-remove.png" alt-text="Screenshot showing how to remove a service from Favorites in the Azure portal.":::
-2. On the information card, select the star so that it changes from filled to unfilled. The service is removed from the **Favorites** list.
+2. On the information card, select the star so that it changes from filled to unfilled.
+
+The service is then removed from your **Favorites** list.
## Next steps
+- Learn how to [manage your settings and preferences in the Azure portal](set-preferences.md).
- To create a project-focused workspace, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).-- Explore the [Azure portal how-to video series](https://www.youtube.com/playlist?list=PLLasX02E8BPBKgXP4oflOL29TtqTzwhxR).+
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
Last updated 11/07/2023
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# What are the resource providers for Azure services
azure-resource-manager Delete Resource Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/delete-resource-group.md
Last updated 09/27/2023
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Azure Resource Manager resource group and resource deletion
azure-resource-manager Lock Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/lock-resources.md
Last updated 01/02/2024
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Lock your resources to protect your infrastructure
azure-resource-manager Manage Resources Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/manage-resources-python.md
Last updated 04/21/2023
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Manage Azure resources by using Python
azure-resource-manager Move Resource Group And Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-resource-group-and-subscription.md
Last updated 04/24/2023
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Move Azure resources to a new resource group or subscription
azure-resource-manager Resource Providers And Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-providers-and-types.md
Last updated 07/14/2023
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Azure resource providers and types
azure-resource-manager Tag Resources Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources-python.md
Last updated 01/27/2024
content_well_notification: - AI-contribution
+ai-usage: ai-assisted
# Apply tags with Python
azure-vmware Backup Azure Netapp Files Datastores Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/backup-azure-netapp-files-datastores-vms.md
Before you back up your Azure NetApp Files datastores, you must add your Azure a
### Prerequisites
-* Cloud Backup for Virtual Machines requires outbound internet access from your Azure VMware Solution SDDC. For more information, see [Internet connectivity design considerations](../azure-vmware/concepts-design-public-internet-access.md).
+* Cloud Backup for Virtual Machines uses the Azure REST API to collect information about your Azure NetApp Files datastores and create Azure NetApp Files snapshots. To interact with the [Azure REST API](/rest/api/azure/), the Cloud Backup for Virtual Machines virtual appliance requires outbound internet access from your Azure VMware Solution SDDC via HTTPS. For more information, see [Internet connectivity design considerations](../azure-vmware/concepts-design-public-internet-access.md).
* You must have sufficient permissions to [Create a Microsoft Entra app and service principal](../active-directory/develop/howto-create-service-principal-portal.md) within your Microsoft Entra tenant and assign to the application a role in your Azure subscription. You can use the built-in role of "contributor" or you can create a custom role with only the required permissions:
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters
description: Understand the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters. Previously updated : 3/1/2024 Last updated : 3/4/2024
azure-vmware Remove Arc Enabled Azure Vmware Solution Vsphere Resources From Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure.md
During onboarding, to create a connection between your VMware vCenter and Azure,
As a last step, run the following command:
-`az rest --method delete --url` [URL](https://management.azure.com/subscriptions/%3Csubscrption-id%3E/resourcegroups/%3Cresource-group-name%3E/providers/Microsoft.AVS/privateClouds/%3Cprivate-cloud-name%3E/addons/arc?api-version=2022-05-01%22)
+
+```azurecli-interactive
+az rest --method delete --url "https://management.azure.com/subscriptions/<subscription-id>/resourcegroups/<resource-group-name>/providers/Microsoft.AVS/privateClouds/<private-cloud-name>/addons/arc?api-version=2022-05-01"
+```
+ Once that step is done, Arc no longer works on the Azure VMware Solution private cloud. When you delete Arc resources from vCenter Server, it doesn't affect the Azure VMware Solution private cloud for the customer.
azure-web-pubsub Howto Web Pubsub Tunnel Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-web-pubsub-tunnel-tool.md
SET WebPubSubConnectionString=<your connection string>
:::image type="content" alt-text="Screenshot of starting the test WebSocket connection and send message." source="media\howto-web-pubsub-tunnel-tool\overview-client.png" ::: :::image type="content" alt-text="Screenshot of showing the traffic inspection." source="media\howto-web-pubsub-tunnel-tool\overview-tunnel.png" :::+
+## Under the hood
+
+How does the tunnel tool work? Under the hood, it starts a tunnel connection to the Web PubSub service. A tunnel connection is a persistent WebSocket connection to the `/server/tunnel` endpoint and is considered one kind of server connection. You can also use ACL rules in the service to block such connections from connecting.
backup Backup Azure Private Endpoints Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-private-endpoints-concept.md
Title: Private endpoints for Azure Backup - Overview
description: This article explains about the concept of private endpoints for Azure Backup that helps to perform backups while maintaining the security of your resources. Previously updated : 08/14/2023 Last updated : 03/04/2024
In addition to these connections, when the workload extension or MARS agent is i
| | | | | Azure Backup | `*.backup.windowsazure.com` | 443 | | Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | 443 |
-| Microsoft Entra ID | `*.australiacentral.r.login.microsoft.com` <br><br> Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). | 443 <br><br> As applicable |
+| Microsoft Entra ID | `*.login.microsoft.com` <br><br> Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). | 443 <br><br> As applicable |
When the workload extension or MARS agent is installed for Recovery Services vault with private endpoint, the following endpoints are communicated:
When the workload extension or MARS agent is installed for Recovery Services vau
| | | | | Azure Backup | `*.privatelink.<geo>.backup.windowsazure.com` | 443 | | Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | 443 |
-| Microsoft Entra ID | `*.australiacentral.r.login.microsoft.com` <br><br> Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). | 443 <br><br> As applicable |
+| Microsoft Entra ID | `*.login.microsoft.com` <br><br> Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). | 443 <br><br> As applicable |
>[!Note] >In the above text, `<geo>` refers to the region code (for example, **eus** for East US and **ne** for North Europe). Refer to the following lists for regions codes:
backup Backup Azure Sap Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sap-hana-database.md
Title: Back up an SAP HANA database to Azure with Azure Backup description: In this article, learn how to back up an SAP HANA database to Azure virtual machines with the Azure Backup service. Previously updated : 11/29/2023 Last updated : 03/04/2024
You can also use the following FQDNs to allow access to the required services fr
| -- | | - | | Azure Backup | `*.backup.windowsazure.com` | 443 | | Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | 443 |
-| Azure AD | `*.australiacentral.r.login.microsoft.com` <br><br> Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online) | 443 <br><br> As applicable |
+| Azure AD | `*.login.microsoft.com` <br><br> Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online) | 443 <br><br> As applicable |
#### Use an HTTP proxy server to route traffic
backup Backup Sql Server Database Azure Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-sql-server-database-azure-vms.md
Title: Back up multiple SQL Server VMs from the vault description: In this article, learn how to back up SQL Server databases on Azure virtual machines with Azure Backup from the Recovery Services vault Previously updated : 01/24/2024 Last updated : 03/04/2024
You can also use the following FQDNs to allow access to the required services fr
| -- | | | Azure Backup | `*.backup.windowsazure.com` | 443 | Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` | 443
-| Azure AD | `*.australiacentral.r.login.microsoft.com` <br><br> Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online) | 443 <br><br> As applicable
+| Azure AD | `*.login.microsoft.com` <br><br> Allow access to FQDNs under sections 56 and 59 according to [this article](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online) | 443 <br><br> As applicable
#### Allow connectivity for servers behind internal load balancers
backup Private Endpoints Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/private-endpoints-overview.md
Title: Private endpoints overview description: Understand the use of private endpoints for Azure Backup and the scenarios where using private endpoints helps maintain the security of your resources. Previously updated : 08/14/2023 Last updated : 03/04/2024
In addition to these connections when the workload extension or MARS agent is in
| | | | | Azure Backup | `*.backup.windowsazure.com` | 443 | | Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` <br><br> `*.storage.azure.net` | 443 |
-| Microsoft Entra ID | `*.australiacentral.r.login.microsoft.com` <br><br> [Allow access to FQDNs under sections 56 and 59](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). | 443 <br><br> As applicable |
+| Microsoft Entra ID | `*.login.microsoft.com` <br><br> [Allow access to FQDNs under sections 56 and 59](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). | 443 <br><br> As applicable |
When the workload extension or MARS agent is installed for Recovery Services vault with private endpoint, the following endpoints are hit:
When the workload extension or MARS agent is installed for Recovery Services vau
| | | | | Azure Backup | `*.privatelink.<geo>.backup.windowsazure.com` | 443 | | Azure Storage | `*.blob.core.windows.net` <br><br> `*.queue.core.windows.net` <br><br> `*.blob.storage.azure.net` <br><br> `*.storage.azure.net` | 443 |
-| Microsoft Entra ID |`*.australiacentral.r.login.microsoft.com` <br><br> [Allow access to FQDNs under sections 56 and 59](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). | 443 <br><br> As applicable |
+| Microsoft Entra ID |`*.login.microsoft.com` <br><br> [Allow access to FQDNs under sections 56 and 59](/microsoft-365/enterprise/urls-and-ip-address-ranges?view=o365-worldwide&preserve-view=true#microsoft-365-common-and-office-online). | 443 <br><br> As applicable |
>[!Note] >In the above text, `<geo>` refers to the region code (for example, **eus** for East US and **ne** for North Europe). Refer to the following lists for regions codes:
backup Quick Sap Hana Database Instance Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-sap-hana-database-instance-restore.md
For more information about the supported configurations and scenarios, see [SAP
## Restore the database ## Next steps
backup Sap Hana Database Instances Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instances-backup.md
You'll also need to [create a policy for SAP HANA database backup](backup-azure-
To discover the database instance where the snapshot is present, see the [Back up SAP HANA databases in Azure VMs](backup-azure-sap-hana-database.md#discover-the-databases). ## Next steps
backup Sap Hana Database Instances Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-instances-restore.md
Learn about the [SAP HANA instance snapshot restore architecture](azure-backup-a
## Restore the entire system to a snapshot restore point ## Restore the database to a different logpoint-in-time over a snapshot
backup Tutorial Configure Sap Hana Database Instance Snapshot Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-configure-sap-hana-database-instance-snapshot-backup.md
For more information on the supported scenarios, see the [support matrix](./sap-
- [Create a Recovery Services vault](sap-hana-database-instances-backup.md#create-a-recovery-services-vault) for the backup and restore operations. - [Create a backup policy](sap-hana-database-instances-backup.md#create-a-policy). ## Next steps
bastion Bastion Create Host Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-create-host-powershell.md
This section helps you create a virtual network, subnets, and deploy Azure Basti
1. Configure and set the Azure Bastion subnet for your virtual network. This subnet is reserved exclusively for Azure Bastion resources. You must create this subnet using the name value **AzureBastionSubnet**. This value lets Azure know which subnet to deploy the Bastion resources to. The example in the following section helps you add an Azure Bastion subnet to an existing VNet.
- [!INCLUDE [Important about BastionSubnet size.](../../includes/bastion-subnet-size.md)]
+ [!INCLUDE [Important about BastionSubnet size](../../includes/bastion-subnet-size.md)]
Set the variable.
bastion Create Host Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/create-host-cli.md
This section helps you deploy Azure Bastion using Azure CLI.
1. Use [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create) to create the subnet to which Bastion will be deployed. The subnet you create must be named **AzureBastionSubnet**. This subnet is reserved exclusively for Azure Bastion resources. If you don't have a subnet with the naming value **AzureBastionSubnet**, Bastion won't deploy.
- [!INCLUDE [Note about BastionSubnet size.](../../includes/bastion-subnet-size.md)]
+ [!INCLUDE [Note about BastionSubnet size](../../includes/bastion-subnet-size.md)]
```azurecli-interactive az network vnet subnet create --name AzureBastionSubnet --resource-group TestRG1 --vnet-name VNet1 --address-prefix 10.1.1.0/26
batch Batch Upgrade Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-upgrade-policy.md
+
+ Title: Provision a pool with Auto OS Upgrade
+description: Learn how to create a Batch pool with Auto OS Upgrade so that customers can have control over their OS upgrade strategy to ensure safe, workload-aware OS upgrade deployments.
+ Last updated : 02/29/2024+++
+# Create an Azure Batch pool with Automatic Operating System (OS) Upgrade
+
+> [!IMPORTANT]
+> - Support for pools with Auto OS Upgrade in Azure Batch is currently in public preview, and is currently controlled by an account-level feature flag. If you want to use this feature, please start a [support request](../azure-portal/supportability/how-to-create-azure-support-request.md) and provide your batch account to request its activation.
+> - This preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> - For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+When you create an Azure Batch pool, you can provision the pool with nodes that have Auto OS Upgrade enabled. This article explains how to set up a Batch pool with Auto OS Upgrade.
+
+## Why use Auto OS Upgrade?
+
+Auto OS Upgrade is used to implement an automatic operating system upgrade strategy and control within Azure Batch Pools. Here are some reasons for using Auto OS Upgrade:
+
+- **Security.** Auto OS Upgrade ensures timely patching of vulnerabilities and security issues within the operating system image, to enhance the security of compute resources. It helps prevent potential security vulnerabilities from posing a threat to applications and data.
+- **Minimized Availability Disruption.** Auto OS Upgrade minimizes the availability disruption of compute nodes during OS upgrades. This is achieved through task-scheduling-aware upgrade deferral and support for rolling upgrades, ensuring that workloads experience minimal disruption.
+- **Flexibility.** Auto OS Upgrade allows you to configure your automatic operating system upgrade strategy, including percentage-based upgrade coordination and rollback support. This means you can customize your upgrade strategy to meet your specific performance and availability requirements.
+- **Control.** Auto OS Upgrade provides you with control over your operating system upgrade strategy to ensure secure, workload-aware upgrade deployments. You can tailor your policy configurations to meet the specific needs of your organization.
+
+In summary, the use of Auto OS Upgrade helps improve security, minimize availability disruptions, and provide both greater control and flexibility for your workloads.
+
+## How does Auto OS Upgrade work?
+
+When upgrading images, VMs in an Azure Batch pool follow roughly the same workflow as VirtualMachineScaleSets. To learn more about the detailed steps involved in the Auto OS Upgrade process for VirtualMachineScaleSets, see the [VirtualMachineScaleSet page](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#how-does-automatic-os-image-upgrade-work).
+
+However, if *automaticOSUpgradePolicy.osRollingUpgradeDeferral* is set to 'true' and an upgrade becomes available when a batch node is actively running tasks, the upgrade will be delayed until all tasks have been completed on the node.
+
+> [!Note]
+> If a pool has *osRollingUpgradeDeferral* enabled, its nodes are displayed in the *upgradingos* state during the upgrade process. Note that the *upgradingos* state is only shown when you use API version 2024-02-01 or later. If you're using an older API version to call *GetTVM/ListTVM*, the node will be in a *rebooting* state when upgrading.
+
+## Supported OS images
+Only certain OS platform images are currently supported for automatic upgrade. For the detailed list of supported images, see the [VirtualMachineScaleSet page](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md#supported-os-images).
+
+## Requirements
+
+* The version property of the image must be set to **latest**.
+* For Batch Management API, use API version 2024-02-01 or higher. For Batch Service API, use API version 2024-02-01.19.0 or higher.
+* Ensure that external resources specified in the pool are available and updated. Examples include SAS URI for bootstrapping payload in VM extension properties, payload in storage account, reference to secrets in the model, and more.
+* If you are using the property *virtualMachineConfiguration.windowsConfiguration.enableAutomaticUpdates*, this property must be set to 'false' in the pool definition. The enableAutomaticUpdates property enables in-VM patching where "Windows Update" applies operating system patches without replacing the OS disk. With automatic OS image upgrades enabled, an extra patching process through Windows Update isn't required.
+
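+The following fragment is a minimal sketch, drawn from the full REST API example later in this article, that shows an image reference pinned to **latest** together with *enableAutomaticUpdates* set to false:
+
+```json
+"virtualMachineConfiguration": {
+  "imageReference": {
+    "publisher": "MicrosoftWindowsServer",
+    "offer": "WindowsServer",
+    "sku": "2019-datacenter-smalldisk",
+    "version": "latest"
+  },
+  "nodeAgentSKUId": "batch.node.windows amd64",
+  "windowsConfiguration": {
+    "enableAutomaticUpdates": false
+  }
+}
+```
+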
+### Additional requirements for custom images
+
+* When a new version of the image is published and replicated to the region of that pool, the VMs will be upgraded to the latest version of the Azure Compute Gallery image. If the new image isn't replicated to the region where the pool is deployed, the VM instances won't be upgraded to the latest version. Regional image replication allows you to control the rollout of the new image for your VMs.
+* The new image version shouldn't be excluded from the latest version for that gallery image. Image versions excluded from the gallery image's latest version won't be rolled out through automatic OS image upgrade.
+
+## Configure Auto OS Upgrade
+
+If you intend to implement Auto OS Upgrades within a pool, it's essential to configure the **UpgradePolicy** field during the pool creation process. To configure automatic OS image upgrades, make sure that the *automaticOSUpgradePolicy.enableAutomaticOSUpgrade* property is set to 'true' in the pool definition.
+
+> [!Note]
+> **Upgrade Policy mode** and **Automatic OS Upgrade Policy** are separate settings that control different aspects of the scale set provisioned by Azure Batch. The Upgrade Policy mode determines what happens to existing instances in the scale set. The Automatic OS Upgrade Policy enableAutomaticOSUpgrade is specific to the OS image; it tracks changes the image publisher has made and determines what happens when there's an update to the image.
+
+> [!IMPORTANT]
+> If you're using [user subscription](batch-account-create-portal.md#additional-configuration-for-user-subscription-mode), the feature **Microsoft.Compute/RollingUpgradeDeferral** must be registered on your subscription. You can't use *osRollingUpgradeDeferral* unless this feature is registered. To enable it, [manually register](../azure-resource-manager/management/preview-features.md) the feature on your subscription.
+
+### REST API
+The following example shows how to create a pool with Auto OS Upgrade by using the REST API:
+
+```http
+PUT https://management.azure.com/subscriptions/<subscriptionid>/resourceGroups/<resourcegroupName>/providers/Microsoft.Batch/batchAccounts/<batchaccountname>/pools/<poolname>?api-version=2024-02-01
+```
+
+Request Body
+
+```json
+{
+ "name": "test1",
+ "type": "Microsoft.Batch/batchAccounts/pools",
+ "parameters": {
+ "properties": {
+ "vmSize": "Standard_d4s_v3",
+ "deploymentConfiguration": {
+ "virtualMachineConfiguration": {
+ "imageReference": {
+ "publisher": "MicrosoftWindowsServer",
+ "offer": "WindowsServer",
+ "sku": "2019-datacenter-smalldisk",
+ "version": "latest"
+ },
+ "nodePlacementConfiguration": {
+ "policy": "Zonal"
+ },
+ "nodeAgentSKUId": "batch.node.windows amd64",
+ "windowsConfiguration": {
+ "enableAutomaticUpdates": false
+ }
+ }
+ },
+ "scaleSettings": {
+ "fixedScale": {
+ "targetDedicatedNodes": 2,
+ "targetLowPriorityNodes": 0
+ }
+ },
+ "upgradePolicy": {
+ "mode": "Automatic",
+ "automaticOSUpgradePolicy": {
+ "disableAutomaticRollback": true,
+ "enableAutomaticOSUpgrade": true,
+ "useRollingUpgradePolicy": true,
+ "osRollingUpgradeDeferral": true
+ },
+ "rollingUpgradePolicy": {
+ "enableCrossZoneUpgrade": true,
+ "maxBatchInstancePercent": 20,
+ "maxUnhealthyInstancePercent": 20,
+ "maxUnhealthyUpgradedInstancePercent": 20,
+ "pauseTimeBetweenBatches": "PT0S",
+ "prioritizeUnhealthyInstances": false,
+ "rollbackFailedInstancesOnPolicyBreach": false
+ }
+ }
+ }
+ }
+}
+```
+
+### SDK (C#)
+The following code snippet shows an example of how to use the [Batch .NET](https://www.nuget.org/packages/Microsoft.Azure.Batch/) client library to create a pool with Auto OS Upgrade in C#. For more details about Batch .NET, see the [reference documentation](/dotnet/api/microsoft.azure.batch).
+
+```csharp
+public async Task CreateUpgradePolicyPool()
+{
+ // Authenticate
+ var clientId = Environment.GetEnvironmentVariable("CLIENT_ID");
+ var clientSecret = Environment.GetEnvironmentVariable("CLIENT_SECRET");
+ var tenantId = Environment.GetEnvironmentVariable("TENANT_ID");
+ var subscriptionId = Environment.GetEnvironmentVariable("SUBSCRIPTION_ID");
+ ClientSecretCredential credential = new ClientSecretCredential(tenantId, clientId, clientSecret);
+ ArmClient client = new ArmClient(credential, subscriptionId);
+
+ // Get an existing Batch account
+ string resourceGroupName = "testrg";
+ string accountName = "testaccount";
+ ResourceIdentifier batchAccountResourceId = BatchAccountResource.CreateResourceIdentifier(subscriptionId, resourceGroupName, accountName);
+ BatchAccountResource batchAccount = client.GetBatchAccountResource(batchAccountResourceId);
+
+ // get the collection of this BatchAccountPoolResource
+ BatchAccountPoolCollection collection = batchAccount.GetBatchAccountPools();
+
+ // Define the pool
+ string poolName = "testpool";
+ BatchAccountPoolData data = new BatchAccountPoolData()
+ {
+ VmSize = "Standard_d4s_v3",
+ DeploymentConfiguration = new BatchDeploymentConfiguration()
+ {
+ VmConfiguration = new BatchVmConfiguration(new BatchImageReference()
+ {
+ Publisher = "MicrosoftWindowsServer",
+ Offer = "WindowsServer",
+ Sku = "2019-datacenter-smalldisk",
+ Version = "latest",
+ },
+ nodeAgentSkuId: "batch.node.windows amd64")
+ {
+ NodePlacementPolicy = BatchNodePlacementPolicyType.Zonal,
+ IsAutomaticUpdateEnabled = false
+ },
+ },
+ ScaleSettings = new BatchAccountPoolScaleSettings()
+ {
+ FixedScale = new BatchAccountFixedScaleSettings()
+ {
+ TargetDedicatedNodes = 2,
+ TargetLowPriorityNodes = 0,
+ },
+ },
+ UpgradePolicy = new UpgradePolicy()
+ {
+ Mode = UpgradeMode.Automatic,
+ AutomaticOSUpgradePolicy = new AutomaticOSUpgradePolicy()
+ {
+ DisableAutomaticRollback = true,
+ EnableAutomaticOSUpgrade = true,
+ UseRollingUpgradePolicy = true,
+ OSRollingUpgradeDeferral = true
+ },
+ RollingUpgradePolicy = new RollingUpgradePolicy()
+ {
+ EnableCrossZoneUpgrade = true,
+ MaxBatchInstancePercent = 20,
+ MaxUnhealthyInstancePercent = 20,
+ MaxUnhealthyUpgradedInstancePercent = 20,
+ PauseTimeBetweenBatches = "PT0S",
+ PrioritizeUnhealthyInstances = false,
+ RollbackFailedInstancesOnPolicyBreach = false,
+ }
+ }
+ };
+
+ ArmOperation<BatchAccountPoolResource> lro = await collection.CreateOrUpdateAsync(WaitUntil.Completed, poolName, data);
+ BatchAccountPoolResource result = lro.Value;
+
+ // the variable result is a resource, you could call other operations on this instance as well
+ // but just for demo, we get its data from this resource instance
+ BatchAccountPoolData resourceData = result.Data;
+ // for demo we just print out the id
+ Console.WriteLine($"Succeeded on id: {resourceData.Id}");
+}
+```
+
+## FAQs
+
+- How can I enable Auto OS Upgrade?
+
+ Start a [support request](../azure-portal/supportability/how-to-create-azure-support-request.md) and provide your Batch account details to request activation.
+
+- Will my tasks be disrupted if I enable Auto OS Upgrade?
+
+ Tasks won't be disrupted when *automaticOSUpgradePolicy.osRollingUpgradeDeferral* is set to 'true'. In that case, the upgrade is postponed until the node becomes idle. Otherwise, the node upgrades when it receives a new OS version, regardless of whether it's currently running a task. So we strongly advise enabling *automaticOSUpgradePolicy.osRollingUpgradeDeferral*.
+
+## Next steps
+
+- Learn how to use a [managed image](batch-custom-images.md) to create a pool.
+- Learn how to use the [Azure Compute Gallery](batch-sig-images.md) to create a pool.
container-apps Java Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-overview.md
Previously updated : 02/27/2024 Last updated : 03/04/2024
When you use Container Apps for your containerized Java applications, you get:
- **Deployment options**: Azure Container Apps integrates with [Buildpacks](https://buildpacks.io), which allows you to deploy directly from a Maven build, via artifact files, or with your own Dockerfile. -- **Automatic memory fitting**: Container Apps optimizes how the Java Virtual Machines (JVM) [manages memory](java-memory-fit.md), making the most possible memory available to your Java applications.
+- **Automatic memory fitting**: Container Apps optimizes how the Java Virtual Machine (JVM) [manages memory](java-memory-fit.md), making the most possible memory available to your Java applications.
- **Build environment variables**: You can configure [custom key-value pairs](java-build-environment-variables.md) to control the Java image build from source code.
container-instances Container Instances Container Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-container-groups.md
Learn how to deploy a multi-container container group with an Azure Resource Man
<!-- LINKS - Internal --> [resource-manager template]: container-instances-multi-container-group.md [yaml-file]: container-instances-multi-container-yaml.md
-[region-availability]: container-instances-region-availability.md
+[region-availability]: container-instances-resource-and-quota-limits.md
[resource-requests]: /rest/api/container-instances/2022-09-01/container-groups/create-or-update#resourcerequests [resource-limits]: /rest/api/container-instances/2022-09-01/container-groups/create-or-update#resourcelimits [resource-requirements]: /rest/api/container-instances/2022-09-01/container-groups/create-or-update#resourcerequirements
container-instances Container Instances Resource And Quota Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-resource-and-quota-limits.md
# Resource availability & quota limits for ACI
-This article details the availability and quota limits of Azure Container Instances compute, memory, and storage resources in Azure regions and by target operating system. For a general list of available regions for Azure Container Instances, see [available regions](https://azure.microsoft.com/regions/services/). For product feature availability in Azure regions, see [Region availability](container-instances-region-availability.md).
+This article details the availability and quota limits of Azure Container Instances compute, memory, and storage resources in Azure regions and by target operating system. For a general list of available regions for Azure Container Instances, see [available regions](https://azure.microsoft.com/regions/services/).
Values presented are the maximum resources available per deployment of a [container group](container-instances-container-groups.md). Values are current at time of publication.
The following maximum resources are available to a container group deployed usin
## GPU Container Resources (Preview)
+> [!IMPORTANT]
+> K80 and P100 GPU SKUs are retiring by August 31st, 2023. This is due to the retirement of the underlying VMs used: [NC Series](../virtual-machines/nc-series-retirement.md) and [NCv2 Series](../virtual-machines/ncv2-series-retirement.md). Although V100 SKUs will be available, it's recommended to use Azure Kubernetes Service instead. GPU resources are not fully supported and should not be used for production workloads. Use the following resources to migrate to AKS today: [How to Migrate to AKS](../aks/aks-migration.md).
+ > [!NOTE] > Not all limit increase requests are guaranteed to be approved. > Deployments with GPU resources are not supported in an Azure virtual network deployment and are only available on Linux container groups.
container-instances Container Instances Spot Containers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-spot-containers-overview.md
Spot containers offer the best of both worlds by combining the simplicity of ACI
This feature is designed for customers who need to run interruptible workloads with no strict availability requirements. Azure Container Instances Spot Containers support both Linux and Windows containers, providing flexibility for different operating system environments.
-This article provides background about the feature, limitations, and resources. To see the availability of Spot containers in Azure regions, see [Resource and region availability](container-instances-region-availability.md).
+This article provides background about the feature, limitations, and resources. To see the availability of Spot containers in Azure regions, see [Resource and quota limits](container-instances-resource-and-quota-limits.md).
> [!NOTE] > Spot containers with Azure Container Instances is in preview and is not recommended for production scenarios.
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-virtual-network-concepts.md
Last updated 06/17/2022
This article provides background about virtual network scenarios, limitations, and resources. For deployment examples using the Azure CLI, see [Deploy container instances into an Azure virtual network](container-instances-vnet.md). > [!IMPORTANT]
-> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [Regions and resource availability](container-instances-region-availability.md).
+> Container group deployment to a virtual network is generally available for Linux and Windows containers, in most regions where Azure Container Instances is available. For details, see [Resource availability and quota limits](container-instances-resource-and-quota-limits.md).
## Scenarios
data-factory Concept Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concept-managed-airflow.md
Managed Airflow in Azure Data Factory offers a range of powerful features, inclu
- **Microsoft Entra integration**ΓÇ»- You can enable [Microsoft Entra RBAC](concepts-roles-permissions.md) against your Airflow environment for a single sign-on experience that is secured by Microsoft Entra ID. - **Managed Virtual Network integration**ΓÇ»(coming soon) - You can access your data source via private endpoints or on-premises using ADF Managed Virtual Network that provides extra network isolation. - **Metadata encryption**ΓÇ»- Managed Airflow automatically encrypts metadata using Azure-managed keys to ensure your environment is secure by default. It also supports double encryption with a [Customer-Managed Key (CMK)](enable-customer-managed-key.md). -- **Azure Monitoring and alerting**ΓÇ»- All the logs generated by Managed Airflow is exported to Azure Monitor. It also provides metrics to track critical conditions and help you notify if the need be.
+- **Azure Monitoring and alerting** - All the logs generated by Managed Airflow are exported to Azure Monitor. It also provides metrics to track critical conditions and help notify you when needed.
## Architecture :::image type="content" source="media/concept-managed-airflow/architecture.png" lightbox="media/concept-managed-airflow/architecture.png" alt-text="Screenshot shows architecture in Managed Airflow.":::
data-factory Connector Microsoft Fabric Warehouse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-fabric-warehouse.md
+
+ Title: Copy and transform data in Microsoft Fabric Warehouse
+
+description: Learn how to copy and transform data to and from Microsoft Fabric Warehouse using Azure Data Factory or Azure Synapse Analytics pipelines.
++++++ Last updated : 02/23/2024++
+# Copy and transform data in Microsoft Fabric Warehouse using Azure Data Factory or Azure Synapse Analytics
++
+This article outlines how to use Copy Activity to copy data from and to Microsoft Fabric Warehouse. To learn more, read the introductory article for [Azure Data Factory](introduction.md) or [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md).
+
+## Supported capabilities
+
+This Microsoft Fabric Warehouse connector is supported for the following capabilities:
+
+| Supported capabilities|IR | Managed private endpoint|
+|--| --| --|
+|[Copy activity](copy-activity-overview.md) (source/sink)|&#9312; &#9313;|✓ |
+|[Lookup activity](control-flow-lookup-activity.md)|&#9312; &#9313;|✓ |
+|[GetMetadata activity](control-flow-get-metadata-activity.md)|&#9312; &#9313;|✓ |
+|[Script activity](transform-data-using-script.md)|&#9312; &#9313;|✓ |
+|[Stored procedure activity](transform-data-using-stored-procedure.md)|&#9312; &#9313;|✓ |
+
+*&#9312; Azure integration runtime &#9313; Self-hosted integration runtime*
+
+## Get started
++
+## Create a Microsoft Fabric Warehouse linked service using UI
+
+Use the following steps to create a Microsoft Fabric Warehouse linked service in the Azure portal UI.
+
+1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then select New:
+
+ # [Azure Data Factory](#tab/data-factory)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI.":::
+
+ # [Azure Synapse](#tab/synapse-analytics)
+
+ :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI.":::
+
+2. Search for Warehouse and select the connector.
+
+ :::image type="content" source="media/connector-microsoft-fabric-warehouse/microsoft-fabric-warehouse-connector.png" alt-text="Screenshot showing select Microsoft Fabric Warehouse connector.":::
+
+1. Configure the service details, test the connection, and create the new linked service.
+
+ :::image type="content" source="media/connector-microsoft-fabric-warehouse/configure-microsoft-fabric-warehouse-linked-service.png" alt-text="Screenshot of configuration for Microsoft Fabric Warehouse linked service.":::
++
+## Connector configuration details
+
+The following sections provide details about properties that are used to define Data Factory entities specific to Microsoft Fabric Warehouse.
+
+## Linked service properties
+
+The Microsoft Fabric Warehouse connector supports the following authentication types. See the corresponding sections for details:
+
+- [Service principal authentication](#service-principal-authentication)
+
+### Service principal authentication
+
+To use service principal authentication, follow these steps.
+
+1. [Register an application with the Microsoft Identity platform](../active-directory/develop/quickstart-register-app.md) and [add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret). Afterwards, make note of these values, which you use to define the linked service:
+
+ - Application (client) ID, which is the service principal ID in the linked service.
+ - Client secret value, which is the service principal key in the linked service.
+ - Tenant ID
+
+2. Grant the service principal at least the **Contributor** role in your Microsoft Fabric workspace. Follow these steps:
+ 1. Go to your Microsoft Fabric workspace, select **Manage access** on the top bar. Then select **Add people or groups**.
+
+ :::image type="content" source="media/connector-microsoft-fabric-warehouse/fabric-workspace-manage-access.png" alt-text="Screenshot shows selecting Fabric workspace Manage access.":::
+
+ :::image type="content" source="media/connector-microsoft-fabric-warehouse/manage-access-pane.png" alt-text=" Screenshot shows Fabric workspace Manage access pane.":::
+
+ 1. In the **Add people** pane, enter your service principal name, and select your service principal from the drop-down list.
+
+ 1. Specify the role as **Contributor** or higher (Admin, Member), then select **Add**.
+
+ :::image type="content" source="media/connector-microsoft-fabric-warehouse/select-workspace-role.png" alt-text="Screenshot shows adding Fabric workspace role.":::
+
+ 1. Your service principal is displayed on the **Manage access** pane.
+
+These properties are supported for the linked service:
+
+| Property | Description | Required |
+|:--- |:--- |:--- |
+| type | The type property must be set to **Warehouse**. |Yes |
+| endpoint | The endpoint of Microsoft Fabric Warehouse server. | Yes |
+| workspaceId | The Microsoft Fabric workspace ID. | Yes |
+| artifactId | The Microsoft Fabric Warehouse object ID. | Yes |
+| tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
+| servicePrincipalId | Specify the application's client ID. | Yes |
+| servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are **ServicePrincipalKey** and **ServicePrincipalCert**. | Yes |
+| servicePrincipalCredential | The service principal credential. <br/> When you use **ServicePrincipalKey** as the credential type, specify the application's client secret value. Mark this field as **SecureString** to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). <br/> When you use **ServicePrincipalCert** as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is **PKCS #12**.| Yes |
+| connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
+
+**Example: using service principal key authentication**
+
+You can also store the service principal key in Azure Key Vault.
+
+```json
+{
+ "name": "MicrosoftFabricWarehouseLinkedService",
+ "properties": {
+ "type": "Warehouse",
+ "typeProperties": {
+ "endpoint": "<Microsoft Fabric Warehouse server endpoint>",
+ "workspaceId": "<Microsoft Fabric workspace ID>",
+ "artifactId": "<Microsoft Fabric Warehouse object ID>",
+ "tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>",
+ "servicePrincipalId": "<service principal id>",
+ "servicePrincipalCredentialType": "ServicePrincipalKey",
+ "servicePrincipalCredential": {
+ "type": "SecureString",
+ "value": "<service principal key>"
+ }
+ },
+ "connectVia": {
+ "referenceName": "<name of Integration Runtime>",
+ "type": "IntegrationRuntimeReference"
+ }
+ }
+}
+```
+
+## Dataset properties
+
+For a full list of sections and properties available for defining datasets, see the [Datasets](concepts-datasets-linked-services.md) article.
+
+The following properties are supported for Microsoft Fabric Warehouse dataset:
+
+| Property | Description | Required |
+| :-- | :-- | :-- |
+| type | The **type** property of the dataset must be set to **WarehouseTable**. | Yes |
+| schema | Name of the schema. |No for source, Yes for sink |
+| table | Name of the table/view. |No for source, Yes for sink |
+
+### Dataset properties example
+
+```json
+{
+ "name": "FabricWarehouseTableDataset",
+ "properties": {
+ "type": "WarehouseTable",
+ "linkedServiceName": {
+ "referenceName": "<Microsoft Fabric Warehouse linked service name>",
+ "type": "LinkedServiceReference"
+ },
+ "schema": [ < physical schema, optional, retrievable during authoring >
+ ],
+ "typeProperties": {
+ "schema": "<schema_name>",
+ "table": "<table_name>"
+ }
+ }
+}
+```
+
+## Copy activity properties
+
+For a full list of sections and properties available for defining activities, see [Copy activity configurations](copy-activity-overview.md#configuration) and [Pipelines and activities](concepts-pipelines-activities.md). This section provides a list of properties supported by the Microsoft Fabric Warehouse source and sink.
+
+### Microsoft Fabric Warehouse as the source
+
+>[!TIP]
+>To load data from Microsoft Fabric Warehouse efficiently by using data partitioning, learn more from [Parallel copy from Microsoft Fabric Warehouse](#parallel-copy-from-microsoft-fabric-warehouse).
+
+To copy data from Microsoft Fabric Warehouse, set the **type** property in the Copy Activity source to **WarehouseSource**. The following properties are supported in the Copy Activity **source** section:
+
+| Property | Description | Required |
+| :--- | :-- | :- |
+| type | The **type** property of the Copy Activity source must be set to **WarehouseSource**. | Yes |
+| sqlReaderQuery | Use the custom SQL query to read data. Example: `select * from MyTable`. | No |
+| sqlReaderStoredProcedureName | The name of the stored procedure that reads data from the source table. The last SQL statement must be a SELECT statement in the stored procedure. | No |
+| storedProcedureParameters | Parameters for the stored procedure.<br/>Allowed values are name or value pairs. Names and casing of parameters must match the names and casing of the stored procedure parameters. | No |
+| queryTimeout | Specifies the timeout for query command execution. Default is 120 minutes. | No |
+| isolationLevel | Specifies the transaction locking behavior for the SQL source. The allowed value is **Snapshot**. If not specified, the database's default isolation level is used. For more information, see [system.data.isolationlevel](/dotnet/api/system.data.isolationlevel). | No |
+| partitionOptions | Specifies the data partitioning options used to load data from Microsoft Fabric Warehouse. <br>Allowed values are: **None** (default), and **DynamicRange**.<br>When a partition option is enabled (that is, not `None`), the degree of parallelism to concurrently load data from a Microsoft Fabric Warehouse is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. | No |
+| partitionSettings | Specify the group of the settings for data partitioning. <br>Apply when the partition option isn't `None`. | No |
+| ***Under `partitionSettings`:*** | | |
+| partitionColumnName | Specify the name of the source column **in integer or date/datetime type** (`int`, `smallint`, `bigint`, `date`, `datetime2`) that will be used by range partitioning for parallel copy. If not specified, the index or the primary key of the table is detected automatically and used as the partition column.<br>Apply when the partition option is `DynamicRange`. If you use a query to retrieve the source data, hook `?AdfDynamicRangePartitionCondition` in the WHERE clause. For an example, see the [Parallel copy from Microsoft Fabric Warehouse](#parallel-copy-from-microsoft-fabric-warehouse) section. | No |
+| partitionUpperBound | The maximum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in the table. All rows in the table or query result will be partitioned and copied. If not specified, the copy activity auto-detects the value. <br>Apply when the partition option is `DynamicRange`. For an example, see the [Parallel copy from Microsoft Fabric Warehouse](#parallel-copy-from-microsoft-fabric-warehouse) section. | No |
+| partitionLowerBound | The minimum value of the partition column for partition range splitting. This value is used to decide the partition stride, not for filtering the rows in the table. All rows in the table or query result will be partitioned and copied. If not specified, the copy activity auto-detects the value.<br>Apply when the partition option is `DynamicRange`. For an example, see the [Parallel copy from Microsoft Fabric Warehouse](#parallel-copy-from-microsoft-fabric-warehouse) section. | No |
++
+>[!Note]
+>When using a stored procedure in the source to retrieve data, note that if your stored procedure is designed to return a different schema when a different parameter value is passed in, you might encounter a failure or see an unexpected result when importing the schema from the UI or when copying data to Microsoft Fabric Warehouse with automatic table creation.
+
+#### Example: using SQL query
+
+```json
+"activities":[
+ {
+ "name": "CopyFromMicrosoftFabricWarehouse",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<Microsoft Fabric Warehouse input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "WarehouseSource",
+ "sqlReaderQuery": "SELECT * FROM MyTable"
+ },
+ "sink": {
+ "type": "<sink type>"
+ }
+ }
+ }
+]
+```
+
+#### Example: using stored procedure
+
+```json
+"activities":[
+ {
+ "name": "CopyFromMicrosoftFabricWarehouse",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<Microsoft Fabric Warehouse input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "WarehouseSource",
+ "sqlReaderStoredProcedureName": "CopyTestSrcStoredProcedureWithParameters",
+ "storedProcedureParameters": {
+ "stringData": { "value": "str3" },
+ "identifier": { "value": "$$Text.Format('{0:yyyy}', <datetime parameter>)", "type": "Int"}
+ }
+ },
+ "sink": {
+ "type": "<sink type>"
+ }
+ }
+ }
+]
+```
+
+#### Sample stored procedure:
+
+```sql
+CREATE PROCEDURE CopyTestSrcStoredProcedureWithParameters
+(
+ @stringData varchar(20),
+ @identifier int
+)
+AS
+SET NOCOUNT ON;
+BEGIN
+ select *
+ from dbo.UnitTestSrcTable
+ where dbo.UnitTestSrcTable.stringData != @stringData
+ and dbo.UnitTestSrcTable.identifier != @identifier
+END
+GO
+```
+
+### Microsoft Fabric Warehouse as a sink type
+
+Azure Data Factory and Synapse pipelines support [Use COPY statement](#use-copy-statement) to load data into Microsoft Fabric Warehouse.
+
+To copy data to Microsoft Fabric Warehouse, set the sink type in Copy Activity to **WarehouseSink**. The following properties are supported in the Copy Activity **sink** section:
+
+| Property | Description | Required |
+| :- | :-- | :-- |
+| type | The **type** property of the Copy Activity sink must be set to **WarehouseSink**. | Yes |
+| allowCopyCommand| Indicates whether to use [COPY statement](/sql/t-sql/statements/copy-into-transact-sql?source=recommendations&view=fabric&preserve-view=true) to load data into Microsoft Fabric Warehouse. <br/><br/>See [Use COPY statement to load data into Microsoft Fabric Warehouse](#use-copy-statement) section for constraints and details.<br/><br/>The allowed value is **True**. | Yes |
+| copyCommandSettings | A group of properties that can be specified when `allowCopyCommand` property is set to TRUE. | No |
+| writeBatchTimeout| This property specifies the wait time for the insert, upsert and stored procedure operation to complete before it times out.<br/><br/>Allowed values are timespan values. An example is "00:30:00" for 30 minutes. If no value is specified, the timeout defaults to "00:30:00".| No |
+| preCopyScript | Specify a SQL query for Copy Activity to run before writing data into Microsoft Fabric Warehouse in each run. Use this property to clean up the preloaded data. | No |
+| tableOption | Specifies whether to [automatically create the sink table](copy-activity-overview.md#auto-create-sink-tables) if not exists based on the source schema. Allowed values are: `none` (default), `autoCreate`. |No |
+| disableMetricsCollection | The service collects metrics for copy performance optimization and recommendations, which introduce additional master DB access. If you are concerned with this behavior, specify `true` to turn it off. | No (default is `false`) |
++
+#### Example: Microsoft Fabric Warehouse sink
+
+```json
+"activities":[
+ {
+ "name": "CopyToMicrosoftFabricWarehouse",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "<input dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "<Microsoft Fabric Warehouse output dataset name>",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "<source type>"
+ },
+ "sink": {
+ "type": "WarehouseSink",
+ "allowCopyCommand": true,
+ "tableOption": "autoCreate",
+ "disableMetricsCollection": false
+ }
+ }
+ }
+]
+```
+
+## Parallel copy from Microsoft Fabric Warehouse
+
+The Microsoft Fabric Warehouse connector in copy activity provides built-in data partitioning to copy data in parallel. You can find data partitioning options on the **Source** tab of the copy activity.
++
+When you enable partitioned copy, copy activity runs parallel queries against your Microsoft Fabric Warehouse source to load data by partitions. The parallel degree is controlled by the [`parallelCopies`](copy-activity-performance-features.md#parallel-copy) setting on the copy activity. For example, if you set `parallelCopies` to four, the service concurrently generates and runs four queries based on your specified partition option and settings, and each query retrieves a portion of data from your Microsoft Fabric Warehouse.
+
+We suggest that you enable parallel copy with data partitioning, especially when you load a large amount of data from your Microsoft Fabric Warehouse. The following are suggested configurations for different scenarios. When copying data into a file-based data store, it's recommended to write to a folder as multiple files (only specify the folder name), in which case the performance is better than writing to a single file.
+
+| Scenario | Suggested settings |
+| --- | --- |
+| Full load from large table, while with an integer or datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Partition column** (optional): Specify the column used to partition data. If not specified, the index or primary key column is used.<br/>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in table, and all rows in the table will be partitioned and copied. If not specified, copy activity auto detect the values.<br><br>For example, if your partition column "ID" has values range from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4 partitions - IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. |
+| Load a large amount of data by using a custom query, while with an integer or date/datetime column for data partitioning. | **Partition options**: Dynamic range partition.<br>**Query**: `SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`.<br>**Partition column**: Specify the column used to partition data.<br>**Partition upper bound** and **partition lower bound** (optional): Specify if you want to determine the partition stride. This is not for filtering the rows in table, and all rows in the query result will be partitioned and copied. If not specified, copy activity auto detect the value.<br><br>During execution, the service replaces `?AdfRangePartitionColumnName` with the actual column name and value ranges for each partition, and sends to Microsoft Fabric Warehouse. <br>For example, if your partition column "ID" has values range from 1 to 100, and you set the lower bound as 20 and the upper bound as 80, with parallel copy as 4, the service retrieves data by 4 partitions- IDs in range <=20, [21, 50], [51, 80], and >=81, respectively. <br><br>Here are more sample queries for different scenarios:<br> 1. Query the whole table: <br>`SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition`<br> 2. Query from a table with column selection and additional where-clause filters: <br>`SELECT <column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 3. Query with subqueries: <br>`SELECT <column_list> FROM (<your_sub_query>) AS T WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>`<br> 4. Query with partition in subquery: <br>`SELECT <column_list> FROM (SELECT <your_sub_query_column_list> FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition) AS T`|
+
+Best practices to load data with partition option:
+
+- Choose a distinctive column as the partition column (like a primary key or unique key) to avoid data skew.
+- If you use the Azure integration runtime to copy data, you can set larger "[Data Integration Units (DIU)](copy-activity-performance-features.md#data-integration-units)" (>4) to utilize more computing resources. Check the applicable scenarios there.
+- "[Degree of copy parallelism](copy-activity-performance-features.md#parallel-copy)" controls the partition numbers; setting this number too large can sometimes hurt performance. We recommend setting this number to (DIU or number of self-hosted IR nodes) * (2 to 4).
+- Note that Microsoft Fabric Warehouse can execute a maximum of 32 queries at a time; setting "Degree of copy parallelism" too large might cause a Warehouse throttling issue.
+
+**Example: query with dynamic range partition**
+
+```json
+"source": {
+ "type": "WarehouseSource",
+ "query":ΓÇ»"SELECT * FROM <TableName> WHERE ?AdfDynamicRangePartitionCondition AND <your_additional_where_clause>",
+ "partitionOption": "DynamicRange",
+ "partitionSettings": {
+ "partitionColumnName": "<partition_column_name>",
+ "partitionUpperBound": "<upper_value_of_partition_column (optional) to decide the partition stride, not as data filter>",
+ "partitionLowerBound": "<lower_value_of_partition_column (optional) to decide the partition stride, not as data filter>"
+ }
+}
+```
+## <a name="use-copy-statement"></a> Use COPY statement to load data into Microsoft Fabric Warehouse
+
+Using the [COPY statement](/sql/t-sql/statements/copy-into-transact-sql?source=recommendations&view=fabric&preserve-view=true) is a simple and flexible way to load data into Microsoft Fabric Warehouse with high throughput. To learn more, see [Bulk load data using the COPY statement](../synapse-analytics/sql-data-warehouse/quickstart-bulk-load-copy-tsql.md).
++
+- If your source data is in **Azure Blob or Azure Data Lake Storage Gen2**, and the **format is COPY statement compatible**, you can use copy activity to directly invoke COPY statement to let Microsoft Fabric Warehouse pull the data from source. For details, see **[Direct copy by using COPY statement](#direct-copy-by-using-copy-statement)**.
+- If your source data store and format isn't originally supported by COPY statement, use the **[Staged copy by using COPY statement](#staged-copy-by-using-copy-statement)** feature instead. The staged copy feature also provides you with better throughput. It automatically converts the data into COPY statement compatible format, stores the data in Azure Blob storage, then calls COPY statement to load data into Microsoft Fabric Warehouse.
+
+>[!TIP]
+>When using COPY statement with Azure Integration Runtime, effective [Data Integration Units (DIU)](copy-activity-performance-features.md#data-integration-units) is always 2. Tuning the DIU doesn't impact the performance.
+
+### Direct copy by using COPY statement
+
+The Microsoft Fabric Warehouse COPY statement directly supports Azure Blob, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2. If your source data meets the criteria described in this section, use the COPY statement to copy directly from the source data store to Microsoft Fabric Warehouse. Otherwise, use [Staged copy by using COPY statement](#staged-copy-by-using-copy-statement). The service checks the settings and fails the copy activity run if the criteria aren't met.
+
+- The **source linked service and format** are with the following types and authentication methods:
+
+ | Supported source data store type | Supported format | Supported source authentication type |
+ | :-- | -- | :-- |
+ | [Azure Blob](connector-azure-blob-storage.md) | [Delimited text](format-delimited-text.md) | Account key authentication, shared access signature authentication|
+ | &nbsp; | [Parquet](format-parquet.md) | Account key authentication, shared access signature authentication |
+ | [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md) | [Delimited text](format-delimited-text.md)<br/>[Parquet](format-parquet.md) | Account key authentication, shared access signature authentication |
+
+- Format settings are with the following:
+
+ - For **Parquet**: `compression` can be **no compression**, **Snappy**, or **``GZip``**.
+ - For **Delimited text** (a sketch of a compatible dataset follows this list):
+ - `rowDelimiter` is explicitly set as a **single character** or "**\r\n**"; the default value is not supported.
+ - `nullValue` is left as default or set to **empty string** ("").
+ - `encodingName` is left as default or set to **utf-8 or utf-16**.
+ - `escapeChar` must be same as `quoteChar`, and is not empty.
+ - `skipLineCount` is left as default or set to 0.
+ - `compression` can be **no compression** or **``GZip``**.
+
+- If your source is a folder, `recursive` in the copy activity must be set to true, and `wildcardFilename` needs to be `*` or `*.*`.
+
+- `wildcardFolderPath` , `wildcardFilename` (other than `*`or `*.*`), `modifiedDateTimeStart`, `modifiedDateTimeEnd`, `prefix`, `enablePartitionDiscovery` and `additionalColumns` are not specified.
+
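+As an illustration of the constraints above, the following dataset is a hedged sketch of a delimited text source that satisfies the direct-copy criteria; the linked service name, container, and folder path are placeholders:
+
+```json
+{
+    "name": "CopyCompatibleDelimitedTextDataset",
+    "properties": {
+        "type": "DelimitedText",
+        "linkedServiceName": {
+            "referenceName": "<Azure Blob Storage linked service name>",
+            "type": "LinkedServiceReference"
+        },
+        "typeProperties": {
+            "location": {
+                "type": "AzureBlobStorageLocation",
+                "container": "<container name>",
+                "folderPath": "<folder path>"
+            },
+            "columnDelimiter": ",",
+            "rowDelimiter": "\r\n",
+            "quoteChar": "\"",
+            "escapeChar": "\"",
+            "firstRowAsHeader": true,
+            "nullValue": "",
+            "encodingName": "UTF-8",
+            "compressionCodec": "gzip"
+        }
+    }
+}
+```
+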
+The following COPY statement settings are supported under `allowCopyCommand` in copy activity:
+
+| Property | Description | Required |
+| :- | :-- | :-- |
+| defaultValues | Specifies the default values for each target column in Microsoft Fabric Warehouse. The default values in the property overwrite the DEFAULT constraint set in the data warehouse, and identity column cannot have a default value. | No |
+| additionalOptions | Additional options that will be passed to a Microsoft Fabric Warehouse COPY statement directly in "With" clause in [COPY statement](/sql/t-sql/statements/copy-into-transact-sql?source=recommendations&view=fabric&preserve-view=true). Quote the value as needed to align with the COPY statement requirements. | No |
+
+```json
+"activities":[
+ {
+ "name": "CopyFromAzureBlobToMicrosoftFabricWarehouseViaCOPY",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "ParquetDataset",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "MicrosoftFabricWarehouseDataset",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "ParquetSource",
+ "storeSettings":{
+ "type": "AzureBlobStorageReadSettings",
+ "recursive": true
+ }
+ },
+ "sink": {
+ "type": "WarehouseSink",
+ "allowCopyCommand": true,
+ "copyCommandSettings":ΓÇ»{
+ "defaultValues":ΓÇ»[
+ {
+ "columnName":ΓÇ»"col_string",
+ "defaultValue":ΓÇ»"DefaultStringValue"
+ }
+ ],
+ "additionalOptions":ΓÇ»{
+ "MAXERRORS":ΓÇ»"10000",
+ "DATEFORMAT":ΓÇ»"'ymd'"
+ }
+ }
+ },
+ "enableSkipIncompatibleRow": true
+ }
+ }
+]
+```
+
+### Staged copy by using COPY statement
+
+When your source data is not natively compatible with COPY statement, enable data copying via an interim staging Azure Blob or Azure Data Lake Storage Gen2 (it can't be Azure Premium Storage). In this case, the service automatically converts the data to meet the data format requirements of COPY statement. Then it invokes COPY statement to load data into Microsoft Fabric Warehouse. Finally, it cleans up your temporary data from the storage. See [Staged copy](copy-activity-performance-features.md#staged-copy) for details about copying data via a staging.
+
+To use this feature, create an [Azure Blob Storage linked service](connector-azure-blob-storage.md#linked-service-properties) or [Azure Data Lake Storage Gen2 linked service](connector-azure-data-lake-storage.md#linked-service-properties) with **account key or system-managed identity authentication** that refers to the Azure storage account as the interim storage.
+
+>[!IMPORTANT]
+>- When you use managed identity authentication for your staging linked service, learn the needed configurations for [Azure Blob](connector-azure-blob-storage.md#managed-identity) and [Azure Data Lake Storage Gen2](connector-azure-data-lake-storage.md#managed-identity) respectively.
+>- If your staging Azure Storage is configured with VNet service endpoint, you must use managed identity authentication with "allow trusted Microsoft service" enabled on storage account, refer to [Impact of using VNet Service Endpoints with Azure storage](/azure/azure-sql/database/vnet-service-endpoint-rule-overview#impact-of-using-virtual-network-service-endpoints-with-azure-storage).
+
+>[!IMPORTANT]
+>If your staging Azure Storage is configured with Managed Private Endpoint and has the storage firewall enabled, you must use managed identity authentication and grant Storage Blob Data Reader permissions to the Synapse SQL Server to ensure it can access the staged files during the COPY statement load.
+
+```json
+"activities":[
+ {
+ "name": "CopyFromSQLServerToMicrosoftFabricWarehouseViaCOPYstatement",
+ "type": "Copy",
+ "inputs": [
+ {
+ "referenceName": "SQLServerDataset",
+ "type": "DatasetReference"
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "MicrosoftFabricWarehouseDataset",
+ "type": "DatasetReference"
+ }
+ ],
+ "typeProperties": {
+ "source": {
+ "type": "SqlSource",
+ },
+ "sink": {
+ "type": "WarehouseSink",
+ "allowCopyCommand": true
+ },
+ "stagingSettings": {
+ "linkedServiceName": {
+ "referenceName": "MyStagingStorage",
+ "type": "LinkedServiceReference"
+ }
+ }
+ }
+ }
+]
+```
+
+## Lookup activity properties
+
+To learn more about the properties, see [Lookup activity](control-flow-lookup-activity.md). A minimal example against a Fabric Warehouse table follows.
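+
+The following is a hedged sketch (the dataset name and query are placeholders) of a Lookup activity that reads from a Fabric Warehouse table:
+
+```json
+{
+    "name": "LookupFromFabricWarehouse",
+    "type": "Lookup",
+    "typeProperties": {
+        "source": {
+            "type": "WarehouseSource",
+            "sqlReaderQuery": "SELECT COUNT(*) AS RowCount FROM MyTable"
+        },
+        "dataset": {
+            "referenceName": "<Microsoft Fabric Warehouse dataset name>",
+            "type": "DatasetReference"
+        },
+        "firstRowOnly": true
+    }
+}
+```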
+
+## GetMetadata activity properties
+
+To learn more about the properties, see [GetMetadata activity](control-flow-get-metadata-activity.md). A minimal example follows.
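+
+The following is a hedged sketch (the dataset name is a placeholder, and the available `fieldList` values depend on the dataset) of a GetMetadata activity against a Fabric Warehouse table:
+
+```json
+{
+    "name": "GetWarehouseTableMetadata",
+    "type": "GetMetadata",
+    "typeProperties": {
+        "dataset": {
+            "referenceName": "<Microsoft Fabric Warehouse dataset name>",
+            "type": "DatasetReference"
+        },
+        "fieldList": [ "structure", "columnCount", "exists" ]
+    }
+}
+```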
+
+## Data type mapping for Microsoft Fabric Warehouse
+
+When you copy data from Microsoft Fabric Warehouse, the following mappings are used from Microsoft Fabric Warehouse data types to interim data types within the service internally. To learn about how the copy activity maps the source schema and data type to the sink, see [Schema and data type mappings](copy-activity-schema-and-type-mapping.md).
+
+| Microsoft Fabric Warehouse data type | Data Factory interim data type |
+| :--- | :-- |
+| bigint | Int64 |
+| binary | Byte[] |
+| bit | Boolean |
+| char | String, Char[] |
+| date | DateTime |
+| datetime2 | DateTime |
+| Decimal | Decimal |
+| FILESTREAM attribute (varbinary(max)) | Byte[] |
+| Float | Double |
+| int | Int32 |
+| numeric | Decimal |
+| real | Single |
+| smallint | Int16 |
+| time | TimeSpan |
+| uniqueidentifier | Guid |
+| varbinary | Byte[] |
+| varchar | String, Char[] |
++
+## Next steps
+
+For a list of data stores supported as sources and sinks by the copy activity, see [Supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
data-factory Continuous Integration Delivery Manual Promotion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/continuous-integration-delivery-manual-promotion.md
Use the steps below to promote a Resource Manager template to each environment f
:::image type="content" source="media/continuous-integration-delivery/custom-deployment-edit-template.png" alt-text="Edit template":::
-1. In the settings section, enter the configuration values, like linked service credentials, required for the deployment. When you're done, select **Review + create** to deploy the Resource Manager template.
+1. In the **Custom deployment** section, enter the target subscription, region, and other details required for the deployment. When you're done, select **Review + create** to deploy the Resource Manager template.
:::image type="content" source="media/continuous-integration-delivery/continuous-integration-image5.png" alt-text="Settings section":::
data-factory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new-archive.md
This archive page retains updates from older months.
Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly updates.
+## June 2023
+
+### Continuous integration and continuous deployment
+
+npm package now supports pre-downloaded bundle for building ARM templates. If your firewall setting is blocking direct download for your npm package, you can now pre-load the package upfront, and let npm package consume local version instead. This is a super boost for your CI/CD pipeline in a firewalled environment.
+
+### Region expansion
+
+Azure Data Factory is now available in Sweden Central. You can co-locate your ETL workflow in this new region if you are utilizing the region for storing and managing your modern data warehouse. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/continued-region-expansion-azure-data-factory-just-became/ba-p/3857249)
+
+### Data movement
+
+Securing outbound traffic with Azure Data Factory's outbound network rules is now supported. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/securing-outbound-traffic-with-azure-data-factory-s-outbound/ba-p/3844032)
+
+### Connectors
+
+The Amazon S3 connector is now supported as a sink destination using Mapping Data Flows. [Learn more](connector-amazon-simple-storage-service.md)
+
+### Data flow
+
+We introduce optional Source settings for DelimitedText and JSON sources in top-level CDC resource. The top-level CDC resource in data factory now supports optional source configurations for Delimited and JSON sources. You can now select the column/row delimiters for delimited sources and set the document type for JSON sources. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-optional-source-settings-for-delimitedtext-and-json/ba-p/3824274)
+ ## May 2023 ### Data Factory in Microsoft Fabric
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly. For older months' update
Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos.
+## February 2024
+
+### Data movement
+
+We added native UI support of parameterization for the following linked
+ ## January 2024 ### Data movement
Merge schema option in delta sink now supports schema evolution in Mapping Data
Documentation search now included in the Azure Data Factory search toolbar. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/documentation-search-now-embedded-in-azure-data-factory/ba-p/3873890)
-## June 2023
-
-### Continuous integration and continuous deployment
-
-npm package now supports pre-downloaded bundle for building ARM templates. If your firewall setting is blocking direct download for your npm package, you can now pre-load the package upfront, and let npm package consume local version instead. This is a super boost for your CI/CD pipeline in a firewalled environment.
-
-### Region expansion
-
-Azure Data Factory is now available in Sweden Central. You can co-locate your ETL workflow in this new region if you are utilizing the region for storing and managing your modern data warehouse. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/continued-region-expansion-azure-data-factory-just-became/ba-p/3857249)
-
-### Data movement
-
-Securing outbound traffic with Azure Data Factory's outbound network rules is now supported. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/securing-outbound-traffic-with-azure-data-factory-s-outbound/ba-p/3844032)
-
-### Connectors
-
-The Amazon S3 connector is now supported as a sink destination using Mapping Data Flows. [Learn more](connector-amazon-simple-storage-service.md)
-
-### Data flow
-
-We introduce optional Source settings for DelimitedText and JSON sources in top-level CDC resource. The top-level CDC resource in data factory now supports optional source configurations for Delimited and JSON sources. You can now select the column/row delimiters for delimited sources and set the document type for JSON sources. [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/introducing-optional-source-settings-for-delimitedtext-and-json/ba-p/3824274)
- ## Related content - [What's new archive](whats-new-archive.md)
data-manager-for-agri Concepts Ingest Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-weather-data.md
Run the install command through Azure Resource Manager ARM Client tool. The comm
armclient PUT /subscriptions/<subscriptionid>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<farmbeats-resource-name>/extensions/<extensionid>?api-version=2020-05-12-preview '{}' ``` > [!NOTE]
-> All values within < > is to be replaced with your respective environment values.
+> All values within < > are to be replaced with your respective environment values. The extension ID supported today is 'IBM.TWC'.
> ### Sample output
data-manager-for-agri Concepts Llm Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-llm-apis.md
Our LLM capability enables seamless selection of APIs mapped to farm operations
## Prerequisites - An instance of [Azure Data Manager for Agriculture](quickstart-install-data-manager-for-agriculture.md)-- An instance of [Azure Open AI](../ai-services/openai/how-to/create-resource.md) created in your Azure subscription.
+- An instance of [Azure OpenAI](../ai-services/openai/how-to/create-resource.md) created in your Azure subscription.
- You need [Azure Key Vault](../key-vault/general/quick-create-portal.md) - You need [Azure Container Registry](../container-registry/container-registry-get-started-portal.md)
These use cases help input providers to plan equipment, seeds, applications and
## Next steps * Fill this onboarding [**form**](https://forms.office.com/r/W4X381q2rd) to get started with testing our LLM feature.
-* View our Azure Data Manager for Agriculture APIs [here](/rest/api/data-manager-for-agri).
+* View our Azure Data Manager for Agriculture APIs [here](/rest/api/data-manager-for-agri).
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Title: Configure Microsoft Defender for Cloud to automatically assess machines for vulnerabilities
-description: Use Microsoft Defender for Cloud to ensure your machines have a vulnerability assessment solution
+ Title: Automatically assess machines for vulnerabilities
+description: Use Microsoft Defender for Cloud to automatically ensure your machines have a vulnerability assessment solution
Last updated 04/24/2023
To assess your machines for vulnerabilities, you can use one of the following so
Learn more in [View and remediate findings from vulnerability assessment solutions on your machines](remediate-vulnerability-findings-vm.md).
-## Next steps
+## Next step
> [!div class="nextstepaction"] > [Remediate the discovered vulnerabilities](remediate-vulnerability-findings-vm.md)-
-Defender for Cloud also offers vulnerability assessment for your:
--- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)-- [Vulnerability assessments for AWS with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-aws.md)
defender-for-cloud Common Questions Microsoft Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/common-questions-microsoft-defender-vulnerability-management.md
Title: Common questions about the Microsoft Defender Vulnerability Management solution
+ Title: Microsoft Defender Vulnerability Management FAQ
description: Answers to common questions on the new Container VA offering powered by Microsoft Defender Vulnerability Management Last updated 11/30/2023
No. Each unique image is billed once according to the pricing of the Defender pl
Vulnerability assessment for container images in the registry is agentless. Vulnerability assessment for runtime supports both agentless and agent-based deployment. This approach allows us to provide maximum visibility when vulnerability assessment is enabled, while providing improved refresh rate for image inventory on clusters running our agent.
-## Is there any difference in supported environments between the Qualys and Microsoft Defender Vulnerability Management powered offerings?
-
-Both offerings support registry scan for ACR and ECR as well as runtime vulnerability assessment for AKS and EKS.
- ## How complicated is it to enable container vulnerability assessment powered by Microsoft Defender Vulnerability Management? The Microsoft Defender Vulnerability Management powered offering is already enabled by default in all supported plans. For instructions on how to re-enable Microsoft Defender Vulnerability Management with a single click if you previously disabled this offering, see [Enabling vulnerability assessments powered by Microsoft Defender Vulnerability Management](enable-vulnerability-assessment.md).
The Microsoft Defender Vulnerability Management powered offering is already enab
In Azure, new images are typically scanned in a few minutes, and it might take up to an hour in rare cases. In AWS, new images are typically scanned within a few hours, and might take up to a day in rare cases.
-## Is there any difference between scanning criteria for the Qualys and Microsoft Defender Vulnerability Management offerings?
-
-Container vulnerability assessment powered by Microsoft Defender Vulnerability Management for Azure supports all scan triggers supported by Qualys, and in addition also supports scanning of all images pushed in the last 90 days to a registry. For more information, see [scanning triggers for Microsoft Defender Vulnerability Management for Azure](agentless-vulnerability-assessment-azure.md#scan-triggers). Container vulnerability assessment powered by Microsoft Defender Vulnerability Management for AWS supports a subset of the scanning criteria. For more information, see [scanning triggers for Microsoft Defender Vulnerability Management for AWS](agentless-vulnerability-assessment-aws.md#scan-triggers).
-
-## Is there a difference in rescan period between the Qualys and Microsoft Defender Vulnerability Management offerings?
-
-Vulnerability assessments performed using the Qualys scanner are refreshed weekly.
-Vulnerability assessments performed using the Microsoft Defender Vulnerability Management scanner are refreshed daily. For Defender for Container Registries (deprecated), rescan period is once every 7 days for vulnerability assessments performed by both the Qualys and Microsoft Defender Vulnerability Management scanner.
-
-## Is there any difference between the OS and language packages covered by the Qualys and Microsoft Defender Vulnerability Management offerings?
-
-Container vulnerability assessment powered by Microsoft Defender Vulnerability Management supports all OS packages and language packages supported by Qualys except FreeBSD. In addition, the offering powered by Microsoft Defender Vulnerability Management also provides support for Red Hat Enterprise version 8 and 9, CentOS versions 8 and 9, Oracle Linux 9, openSUSE Tumbleweed, Debian 12, Fedora 36 and 37, and CBL-Mariner 1 and 2.
-There's no difference for coverage of language specific packages between the Qualys and Microsoft Defender Vulnerability Management powered offerings.
--- [Full list of supported packages and their versions for Microsoft Defender Vulnerability Management](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-microsoft-defender-vulnerability-management)--- [Full list of supported packages and their versions for Qualys](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated)- ## Are there any other capabilities that are unique to the Microsoft Defender Vulnerability Management powered offering? - Each reported vulnerability is enriched with real-world exploitability insights, helping customers prioritize remediation of vulnerabilities with known exploit methods and exploitability tools. Exploit sources include CISA KEV, Exploit DB, Microsoft Security Response Center, and more.
defender-for-cloud Concept Data Security Posture Prepare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md
Title: Support and prerequisites for data-aware security posture
-description: Learn about the requirements for data-aware security posture in Microsoft Defender for Cloud
+description: Learn about the requirements for data-aware security posture in Microsoft Defender for Cloud.
Previously updated : 01/28/2024 Last updated : 03/04/2024
The table summarizes support for data-aware posture management.
|**Support** | **Details**| | | |
-|What Azure data resources can I discover? | **Object storage:**<br /><br />[Block blob](../storage/blobs/storage-blobs-introduction.md) storage accounts in Azure Storage v1/v2<br/><br/> Azure Data Lake Storage Gen2<br/><br/>Storage accounts behind private networks are supported.<br/><br/> Storage accounts encrypted with a customer-managed server-side key are supported.<br/><br/> Accounts aren't supported if any of these settings are enabled: [Public network access is disabled](../storage/common/storage-network-security.md#change-the-default-network-access-rule); Storage account is defined as [Azure DNS Zone](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466); The storage account endpoint has a [custom domain mapped to it](../storage/blobs/storage-custom-domain-name.md).<br /><br /><br />**Databases**<br /><br />Azure SQL Databases |
-|What AWS data resources can I discover? | **Object storage:**<br /><br />AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.<br /><br />**Databases**<br /><br />- Amazon Aurora<br />- Amazon RDS for PostgreSQL<br />- Amazon RDS for MySQL<br />- Amazon RDS for MariaDB<br />- Amazon RDS for SQL Server (non-custom)<br />- Amazon RDS for Oracle Database (non-custom, SE2 Edition only) <br /><br />Prerequisites and limitations: <br />- Automated backups need to be enabled. <br />- The IAM role created for the scanning purposes (DefenderForCloud-DataSecurityPostureDB by default) needs to have permissions to the KMS key used for the encryption of the RDS instance. <br />- You can't share a DB snapshot that uses an option group with permanent or persistent options, except for Oracle DB instances that have the **Timezone** or **OLS** option (or both). [Learn more](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html) |
+|What Azure data resources can I discover? | **Object storage:**<br /><br />[Block blob](../storage/blobs/storage-blobs-introduction.md) storage accounts in Azure Storage v1/v2<br/><br/> Azure Data Lake Storage Gen2<br/><br/>Storage accounts behind private networks are supported.<br/><br/> Storage accounts encrypted with a customer-managed server-side key are supported.<br/><br/> Accounts aren't supported if any of these settings are enabled: Storage account is defined as [Azure DNS Zone](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466); The storage account endpoint has a [custom domain mapped to it](../storage/blobs/storage-custom-domain-name.md).<br /><br /><br />**Databases**<br /><br />Azure SQL Databases |
+|What AWS data resources can I discover? | **Object storage:**<br /><br />AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.<br /><br />**Databases**<br /><br />- Amazon Aurora<br />- Amazon RDS for PostgreSQL<br />- Amazon RDS for MySQL<br />- Amazon RDS for MariaDB<br />- Amazon RDS for SQL Server (noncustom)<br />- Amazon RDS for Oracle Database (noncustom, SE2 Edition only) <br /><br />Prerequisites and limitations: <br />- Automated backups need to be enabled. <br />- The IAM role created for the scanning purposes (DefenderForCloud-DataSecurityPostureDB by default) needs to have permissions to the KMS key used for the encryption of the RDS instance. <br />- You can't share a DB snapshot that uses an option group with permanent or persistent options, except for Oracle DB instances that have the **Timezone** or **OLS** option (or both). [Learn more](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html) |
|What GCP data resources can I discover? | GCP storage buckets<br/> Standard Class<br/> Geo: region, dual region, multi region | |What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> `Microsoft.Authorization/roleAssignments/*` (read, write, delete) **and** `Microsoft.Security/pricings/*` (read, write, delete) **and** `Microsoft.Security/pricings/SecurityOperators` (read, write)<br/><br/> Amazon S3 buckets and RDS instances: AWS account permission to run Cloud Formation (to create a role). <br/><br/>GCP storage buckets: Google account permission to run script (to create a role). | |What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .gz, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt, .xml, .parquet, .avro, .orc.| |What Azure regions are supported? | You can discover Azure storage accounts in:<br/><br/> Asia East; Asia South East; Australia Central; Australia Central 2; Australia East; Australia South East; Brazil South; Brazil Southeast; Canada Central; Canada East; Europe North; Europe West; France Central; France South; Germany North; Germany West Central; India Central; India South; Japan East; Japan West; Jio India West; Korea Central; Korea South; Norway East; Norway West; South Africa North; South Africa West; Sweden Central; Switzerland North; Switzerland West; UAE North; UK South; UK West; US Central; US East; US East 2; US North Central; US South Central; US West; US West 2; US West 3; US West Central; <br/><br/> You can discover Azure SQL Databases in any region where Defender CSPM and Azure SQL Databases are supported. |
-|What AWS regions are supported? | S3:<br /><br />Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Montreal); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); Europe (Stockholm); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California): US West (Oregon).<br/><br/><br />RDS:<br /><br/>Africa (Capetown); Asia Pacific (Hong Kong SAR); Asia Pacific (Hyderabad); Asia Pacific (Melbourne); Asia Pacific (Mumbai); Asia Pacific (Osaka); Asia Pacific (Seoul); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); Europe (Stockholm); Europe (Zurich); Middle East (UAE); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California): US West (Oregon).<br /><br /> Discovery is done locally within the region. |
+|What AWS regions are supported? | S3:<br /><br />Asia Pacific (Mumbai); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Montreal); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); Europe (Stockholm); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br/><br/><br />RDS:<br /><br/>Africa (Cape Town); Asia Pacific (Hong Kong SAR); Asia Pacific (Hyderabad); Asia Pacific (Melbourne); Asia Pacific (Mumbai); Asia Pacific (Osaka); Asia Pacific (Seoul); Asia Pacific (Singapore); Asia Pacific (Sydney); Asia Pacific (Tokyo); Canada (Central); Europe (Frankfurt); Europe (Ireland); Europe (London); Europe (Paris); Europe (Stockholm); Europe (Zurich); Middle East (UAE); South America (São Paulo); US East (Ohio); US East (N. Virginia); US West (N. California); US West (Oregon).<br /><br /> Discovery is done locally within the region. |
|What GCP regions are supported? | europe-west1, us-east1, us-west1, us-central1, us-east4, asia-south1, northamerica-northeast1| |Do I need to install an agent? | No, discovery requires no agent installation. | |What's the cost? | The feature is included with the Defender CSPM and Defender for Storage plans, and doesn't incur extra costs except for the respective plan costs. |
For object storage:
For databases: - Databases are scanned on a weekly basis.-- For newly enabled subscriptions, results will appear within 24 hours.
+- For newly enabled subscriptions, results appear within 24 hours.
### Discovering AWS S3 buckets
To protect AWS resources in Defender for Cloud, set up an AWS connector using a
- Use all KMS keys only for RDS on source account - Create & full control on all KMS keys with tag prefix *DefenderForDatabases* - Create alias for KMS keys-- KMS keys are created once for each region that contains RDS instances. The creation of a KMS key may incur a minimal additional cost, according to AWS KMS pricing.
+- KMS keys are created once for each region that contains RDS instances. The creation of a KMS key may incur a minimal extra cost, according to AWS KMS pricing.
### Discovering GCP storage buckets
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Title: Use Azure Monitor gallery workbooks with Defender for Cloud data
+ Title: Azure Monitor workbooks with Defender for Cloud data
description: Learn how to create rich, interactive reports for your Microsoft Defender for Cloud data by using workbooks from the integrated Azure Monitor workbooks gallery.
Defender for Cloud includes vulnerability scanners for your machines, containers
Learn more about using these scanners: - [Find vulnerabilities with Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md)-- [Find vulnerabilities with the integrated Qualys scanner](deploy-vulnerability-assessment-vm.md)-- [Scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md) - [Scan your SQL resources for vulnerabilities](defender-for-sql-on-machines-vulnerability-assessment.md) Findings for each resource type are reported in separate recommendations:
The DevOps Security workbook provides a customizable visual report of your DevOp
:::image type="content" source="media/custom-dashboards-azure-workbooks/devops-workbook.png" alt-text="Screenshot that shows a sample results page after you select the DevOps workbook." lightbox="media/custom-dashboards-azure-workbooks/devops-workbook.png"::: > [!NOTE]
-> To use this workbork, your environment must have a [GitHub connector](quickstart-onboard-github.md), [GitLab connector](quickstart-onboard-gitlab.md), or [Azure DevOps connector](quickstart-onboard-devops.md).
+> To use this workbook, your environment must have a [GitHub connector](quickstart-onboard-github.md), [GitLab connector](quickstart-onboard-gitlab.md), or [Azure DevOps connector](quickstart-onboard-devops.md).
To deploy the workbook:
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
To protect the Azure Resource Manager based registries in your subscription, ena
Defender for Cloud identifies Azure Resource Manager based ACR registries in your subscription and seamlessly provides Azure-native vulnerability assessment and management for your registry's images.
-**Microsoft Defender for container registries** includes a vulnerability scanner to scan the images in your Azure Resource Manager-based Azure Container Registry registries and provide deeper visibility into your images' vulnerabilities. The integrated scanner is powered by Qualys, the industry-leading vulnerability scanning vendor.
+**Microsoft Defender for container registries** includes a vulnerability scanner to scan the images in your Azure Resource Manager-based Azure Container Registry registries and provide deeper visibility into your images' vulnerabilities.
-When issues are found ΓÇô by Qualys or Defender for Cloud ΓÇô you'll get notified in the workload protection dashboard. For every vulnerability, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue. For details of Defender for Cloud's recommendations for containers, see the [reference list of recommendations](recommendations-reference.md#container-recommendations).
+When issues are found, you'll get notified in the workload protection dashboard. For every vulnerability, Defender for Cloud provides actionable recommendations, along with a severity classification, and guidance for how to remediate the issue. For details of Defender for Cloud's recommendations for containers, see the [reference list of recommendations](recommendations-reference.md#container-recommendations).
Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. Defender for Cloud provides details of each reported vulnerability and a severity classification. Additionally, it gives guidance for how to remediate the specific vulnerabilities found on each image.
Below is a high-level diagram of the components and benefits of protecting your
### How does Defender for Cloud scan an image?
-Defender for Cloud pulls the image from the registry and runs it in an isolated sandbox with the Qualys scanner. The scanner extracts a list of known vulnerabilities.
+Defender for Cloud pulls the image from the registry and runs it in an isolated sandbox with the scanner. The scanner extracts a list of known vulnerabilities.
Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts.
If you connect unsupported registries to your Azure subscription, Defender for C
Yes. If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't impact your secure score or generate unwanted noise.
-[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](defender-for-containers-vulnerability-assessment-azure.md#disable-specific-findings).
+[Learn about creating rules to disable findings from the integrated vulnerability assessment tool](disable-vulnerability-findings-containers.md).
### Why is Defender for Cloud alerting me to vulnerabilities about an image that isn't in my registry?
Defender for Cloud provides vulnerability assessments for every image pushed or
## Next steps > [!div class="nextstepaction"]
-> [Scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
+> [Scan your images for vulnerabilities](agentless-vulnerability-assessment-azure.md)
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
You can check out the following blogs:
Now that you enabled Defender for Containers, you can: -- [Scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
+- [Scan your ACR images for vulnerabilities](agentless-vulnerability-assessment-azure.md)
- [Scan your AWS images for vulnerabilities with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-aws.md) - [Scan your GCP images for vulnerabilities with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-gcp.md) - Check out [common questions](faq-defender-for-containers.yml) about Defender for Containers.
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Defender for Containers scans the container images in Azure Container Registry (
Vulnerability information powered by Microsoft Defender Vulnerability Management is added to the [cloud security graph](concept-attack-path.md#what-is-cloud-security-graph) for contextual risk, calculation of attack paths, and hunting capabilities.
-> [!NOTE]
-> The Qualys offering is only available to customers who onboarded to Defender for Containers before November 15, 2023.
-
-There are two solutions for vulnerability assessment in Azure, one powered by Microsoft Defender Vulnerability Management and one powered by Qualys.
- Learn more about: - [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md)
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
- Title: Vulnerability assessment for Azure powered by Qualys (Deprecated)
-description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities.
-- Previously updated : 01/10/2024----
-# Vulnerability assessment for Azure powered by Qualys (Deprecated)
-
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
-
-> [!IMPORTANT]
->
-> The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is now on a retirement path completing on **March 1st, 2024**. If you are currently using container vulnerability assessment powered by Qualys, start planning your transition to [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md).
->
-> - For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, see [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
->
-> - For more information about migrating to our new container vulnerability assessment offering powered by Microsoft Defender Vulnerability Management, see [Transition from Qualys to Microsoft Defender Vulnerability Management](transition-to-defender-vulnerability-management.md).
->
-> - For common questions about the transition to Microsoft Defender Vulnerability Management, see [Common questions about the Microsoft Defender Vulnerability Management solution](common-questions-microsoft-defender-vulnerability-management.md).
-
-Vulnerability assessment for Azure, powered by Qualys, is an out-of-box solution that empowers security teams to easily discover and remediate vulnerabilities in Linux container images, with zero configuration for onboarding, and without deployment of any agents.
-
-> [!NOTE]
->
-> This feature supports scanning of images in the Azure Container Registry (ACR) only. If you want to find vulnerabilities stored in other container registries, you can import the images into ACR, after which the imported images are scanned by the built-in vulnerability assessment solution. Learn how to [import container images to a container registry](/azure/container-registry/container-registry-import-images).
-
-In every subscription where this capability is enabled, all images stored in ACR (existing and new) are automatically scanned for vulnerabilities without any extra configuration of users or registries. Recommendations with vulnerability reports are provided for all images in ACR as well as images that are currently running in AKS that were pulled from an ACR registry. Images are scanned shortly after being added to a registry, and rescanned for new vulnerabilities once every week.
-
-Container vulnerability assessment powered by Qualys has the following capabilities:
--- **Scanning OS packages** - container vulnerability assessment can scan vulnerabilities in packages installed by the OS package manager in Linux. See the [full list of the supported OS and their versions](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated).--- **Language specific packages** ΓÇô support for language specific packages and files, and their dependencies installed or copied without the OS package manager. See the [full list of supported languages](support-matrix-defender-for-containers.md#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated).--- **Image scanning in Azure Private Link** - Azure container vulnerability assessment provides the ability to scan images in container registries that are accessible via Azure Private Links. This capability requires access to trusted services and authentication with the registry. Learn how to [allow access by trusted services](/azure/container-registry/allow-access-trusted-services).--- **Reporting** - Container Vulnerability Assessment for Azure powered by Qualys provides vulnerability reports using the following recommendations:-
- | Recommendation | Description | Assessment Key |
- |--|--|--|
- | [Azure registry container images should have vulnerabilities resolved (powered by Qualys)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainerRegistryRecommendationDetailsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648)| Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers security posture and protect them from attacks. | dbd0cb49-b563-45e7-9724-889e799fa648 |
- | [Azure running container images should have vulnerabilities resolved - (powered by Qualys)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c)ΓÇ»| Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers security posture and protect them from attacks. | 41503391-efa5-47ee-9282-4eff6131462c |
--- **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via the ARG](review-security-recommendations.md).--- **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get).-- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](disable-vulnerability-findings-containers.md).-- **Support for disabling vulnerability findings** - Learn how to [disable vulnerability assessment findings on Container registry images](defender-for-containers-vulnerability-assessment-azure.md#disable-specific-findings).-
-## Scan triggers
--- **One-time triggering**
- - Each image pushed/imported to a container registry is scanned shortly after being pushed to a registry. In most cases, the scan is completed within a few minutes, but sometimes it might take up to an hour.
- - Each image pulled from a container registry is scanned if it wasn't scanned in the last seven days.
-- **Continuous rescan triggering** ΓÇô Continuous rescan is required to ensure images that have been previously scanned for vulnerabilities are rescanned to update their vulnerability reports in case a new vulnerability is published.
- - **Rescan** is performed once every 7 days for:
- - images pulled in the last 30 days
- - images currently running on the Kubernetes clusters monitored by the Defender agent
-
-## Prerequisites
-
-Before you can scan your ACR images, you must enable the [Defender for Containers](defender-for-containers-enable.md) plan on your subscription.
-
-For a list of the types of images and container registries supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=azure-aks#registries-and-images).
-
-## View and remediate findings
-
-1. To view the findings, open the **Recommendations** page. If issues are found, you'll see the recommendation [Azure registry container images should have vulnerabilities resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/container-registry-images-name-line.png" alt-text="Screenshot showing the recommendation line." lightbox="media/defender-for-containers-vulnerability-assessment-azure/container-registry-images-name-line.png":::
-
-1. Select the recommendation.
-
- The recommendation details page opens with additional information. This information includes the list of registries with vulnerable images ("Affected resources") and the remediation steps.
-
-1. Select a specific registry to see the repositories in it that have vulnerable repositories.
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/container-registry-images-unhealthy-registry.png" alt-text="Screenshot showing where to select a specific registry." lightbox="media/defender-for-containers-vulnerability-assessment-azure/container-registry-images-unhealthy-registry.png":::
-
- The registry details page opens with the list of affected repositories.
-
-1. Select a specific repository to see the repositories in it that have vulnerable images.
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/container-registry-details.png" alt-text="Screenshot showing select specific image to see vulnerabilities." lightbox="media/defender-for-containers-vulnerability-assessment-azure/container-registry-details.png":::
-
- The repository details page opens. It lists the vulnerable images together with an assessment of the severity of the findings.
-
-1. Select a specific image to see the vulnerabilities.
-
- ![Select images.](media/monitor-container-security/acr-finding-select-image.png)
-
- The list of findings for the selected image opens.
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/list-of-findings.png" alt-text="Screenshot showing list of findings for the selected image." lightbox="media/defender-for-containers-vulnerability-assessment-azure/list-of-findings.png":::
-
-1. To learn more about a finding, select the finding.
-
- The findings details pane opens.
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/finding-details.png" alt-text="Screenshot showing details about a specific finding." lightbox="media/defender-for-containers-vulnerability-assessment-azure/finding-details.png":::
-
- This pane includes a detailed description of the issue and links to external resources to help mitigate the threats.
-
-1. Follow the steps in the remediation section of this pane.
-
-1. When you've taken the steps required to remediate the security issue, replace the image in your registry:
-
- 1. Push the updated image to trigger a scan.
-
- 1. Check the recommendations page for the recommendation [Container registry images should have vulnerability findings resolved-powered by Qualys](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648).
-
- If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
-
- 1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the ΓÇ£oldΓÇ¥ vulnerable image from your registry.
-
-## Disable specific findings
-
-> [!NOTE]
-> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
-
-If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
-
-When a finding matches the criteria you've defined in your disable rules, it doesn't appear in the list of findings. Typical scenarios include:
--- Disable findings with severity below medium-- Disable findings that are nonpatchable-- Disable findings with CVSS score below 6.5-- Disable findings with specific text in the security check or category (for example: "RedHat" or "CentOS Security Update for sudo")-
-> [!IMPORTANT]
-> To create a rule, you need permissions to edit a policy in Azure Policy.
->
-> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
-
-You can use any of the following criteria:
--- Finding ID-- CVE-- Category-- Security check-- CVSS v3 scores-- Severity-- Patchable status-
-To create a rule:
-
-1. From the recommendations detail page for [Azure registry container images should have vulnerabilities resolved (powered by Qualys)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648), select **Disable rule**.
-1. Select the relevant scope.
-
- :::image type="content" source="./media/defender-for-containers-vulnerability-assessment-azure/disable-rule.png" alt-text="Screenshot showing how to create a disable rule for VA findings on registry." lightbox="media/defender-for-containers-vulnerability-assessment-azure/disable-rule.png":::
-1. Define your criteria.
-1. Select **Apply rule**.
-
-1. To view, override, or delete a rule:
- 1. Select **Disable rule**.
- 1. From the scope list, subscriptions with active rules appear as **Rule applied**.
- :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Screenshot showing the scope list.":::
- 1. To view or delete the rule, select the ellipsis menu ("...").
-
-## View vulnerabilities for images running on your AKS clusters
-
-Defender for Cloud gives its customers the ability to prioritize the remediation of vulnerabilities in images that are currently being used within their environment using the [Azure running container images should have vulnerabilities resolved - (powered by Qualys)](https://portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c/showSecurityCenterCommandBar~/false) recommendation.
-
-To provide the findings for the recommendation, Defender for Cloud collects the inventory of your running containers that are collected by the [Defender agent](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure). Defender for Cloud correlates that inventory with the vulnerability assessment scan of images that are stored in ACR. The recommendation shows your running containers with the vulnerabilities associated with the images that are used by each container and provides vulnerability reports and remediation steps.
--
-## Next steps
--- Learn more about the Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads).-- Check out [common questions](faq-defender-for-containers.yml) about Defender for Containers.
defender-for-cloud Defender For Databases Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-introduction.md
Title: Microsoft Defender for open-source relational databases - the benefits and features
+ Title: Microsoft Defender for open-source relational databases
description: Learn about the benefits and features of Microsoft Defender for open-source relational databases such as PostgreSQL, MySQL, and MariaDB Last updated 06/19/2022
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Last updated 01/08/2024
# Enable vulnerability scanning with the integrated Qualys scanner (deprecated) > [!IMPORTANT]
-> Defender for Server's vulnerability assessment solution powered by Qualys, is on a retirement path that set to complete on **May 1st, 2024**. If you are a currently using the built-in vulnerability assessment powered by Qualys, you should plan to [transition to the Microsoft Defender Vulnerability Management vulnerability scanning solution](how-to-transition-to-built-in.md).
+> Defender for Servers' vulnerability assessment solution, powered by Qualys, is on a retirement path that is set to complete on **May 1st, 2024**. If you are currently using the built-in vulnerability assessment powered by Qualys, you should plan to [transition to the Microsoft Defender Vulnerability Management vulnerability scanning solution](how-to-transition-to-built-in.md).
> > For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, see [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112). >
The vulnerability scanner extension works as follows:
Your machines appear in one or more of the following groups: - **Healthy resources** – Defender for Cloud detected a vulnerability assessment solution running on these machines.
- - **Unhealthy resources** ΓÇô A vulnerability scanner extension can be deployed to these machines.
+ - **Unhealthy resources** ΓÇô A vulnerability scanner extension can be deployed to these machines.
- **Not applicable resources** ΓÇô [these machines aren't supported for the vulnerability scanner extension](faq-vulnerability-assessments.yml). 1. From the list of unhealthy machines, select the ones to receive a vulnerability assessment solution and select **Remediate**.
The vulnerability scanner extension works as follows:
>[!IMPORTANT] > If the deployment fails on one or more machines, ensure the target machines can communicate with Qualys' cloud service by adding the following IPs to your allowlists (via port 443 - the default for HTTPS): >
- > - `https://qagpublic.qg3.apps.qualys.com` - Qualys' US data center
+ > - `https://qagpublic.qg3.apps.qualys.com` - Qualys' US data center
>
- > - `https://qagpublic.qg2.apps.qualys.eu` - Qualys' European data center
+ > - `https://qagpublic.qg2.apps.qualys.eu` - Qualys' European data center
> > If your machine is in a region in an Azure European geography (such as Europe, UK, Germany), its artifacts will be processed in Qualys' European data center. Artifacts for virtual machines located elsewhere are sent to the US data center.
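A quick way to check this prerequisite is to probe TCP reachability to the two Qualys hostnames listed in the note. The following is a minimal sketch, not part of the article: the hostnames and port 443 come from the note above, and everything else is illustrative.

```python
# Connectivity probe for the Qualys data-center endpoints referenced above.
# Sketch only: hostnames and port come from the note; the rest is illustrative.
import socket

QUALYS_HOSTS = [
    "qagpublic.qg3.apps.qualys.com",  # Qualys' US data center
    "qagpublic.qg2.apps.qualys.eu",   # Qualys' European data center
]

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in QUALYS_HOSTS:
        state = "reachable" if can_reach(host) else "NOT reachable"
        print(f"{host}:443 -> {state}")
```

Running the probe from the target machine (or its network) shows whether the allowlist change took effect; it doesn't validate the Qualys service itself.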
The following commands trigger an on-demand scan:
Defender for Cloud also offers vulnerability analysis for your: - SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - [Use Defender for Containers to scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)
+- Azure Container Registry images - [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md)
defender-for-cloud Just In Time Access Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-usage.md
description: Learn how just-in-time VM access (JIT) in Microsoft Defender for Cl
Previously updated : 08/27/2023 Last updated : 10/01/2023 # Enable just-in-time access on VMs
In this article, you learn how to include JIT in your security program, includin
| To enable a user to: | Permissions to set| | | |
- |Configure or edit a JIT policy for a VM | *Assign these actions to the role:* <ul><li>On the scope of a subscription or resource group that is associated with the VM:<br/> `Microsoft.Security/locations/jitNetworkAccessPolicies/write` </li><li> On the scope of a subscription or resource group of VM: <br/>`Microsoft.Compute/virtualMachines/write`</li></ul> |
+ |Configure or edit a JIT policy for a VM | *Assign these actions to the role:* <ul><li>On the scope of a subscription (or resource group when using API or PowerShell only) that is associated with the VM:<br/> `Microsoft.Security/locations/jitNetworkAccessPolicies/write` </li><li> On the scope of a subscription (or resource group when using API or PowerShell only) of VM: <br/>`Microsoft.Compute/virtualMachines/write`</li></ul> |
|Request JIT access to a VM | *Assign these actions to the user:* <ul><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action` </li><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/*/read` </li><li> `Microsoft.Compute/virtualMachines/read` </li><li> `Microsoft.Network/networkInterfaces/*/read` </li> <li> `Microsoft.Network/publicIPAddresses/read` </li></ul> | |Read JIT policies| *Assign these actions to the user:* <ul><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/read`</li><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action`</li><li>`Microsoft.Security/policies/read`</li><li>`Microsoft.Security/pricings/read`</li><li>`Microsoft.Compute/virtualMachines/read`</li><li>`Microsoft.Network/*/read`</li>|
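The actions in the table translate directly into a least-privileged custom role. The snippet below is an illustrative sketch rather than the article's procedure: it assembles the "request JIT access" actions from the table into a role-definition document; the role name, description, and subscription ID are placeholders, and the output file is assumed to be suitable for `az role definition create --role-definition @jit-requestor-role.json`.

```python
# Sketch: build a role-definition JSON containing only the JIT "request" actions
# listed in the table above. Name, description, and subscription ID are placeholders.
import json

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

jit_requestor_role = {
    "Name": "JIT VM Access Requestor (example)",  # hypothetical role name
    "Description": "Can request just-in-time network access to VMs and nothing else.",
    "Actions": [
        "Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action",
        "Microsoft.Security/locations/jitNetworkAccessPolicies/*/read",
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Network/networkInterfaces/*/read",
        "Microsoft.Network/publicIPAddresses/read",
    ],
    "AssignableScopes": [f"/subscriptions/{SUBSCRIPTION_ID}"],
}

with open("jit-requestor-role.json", "w") as handle:
    json.dump(jit_requestor_role, handle, indent=2)

print(json.dumps(jit_requestor_role, indent=2))
```

The Set-JitLeastPrivilegedRole script mentioned in the tip below covers the same ground with PowerShell; this sketch only illustrates how the documented actions map to a role definition.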
In this article, you learn how to include JIT in your security program, includin
- To set up JIT on your Amazon Web Service (AWS) VM, you need to [connect your AWS account](quickstart-onboard-aws.md) to Microsoft Defender for Cloud. > [!TIP]
- > To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages.
+ > To create a least-privileged role for users that need to request JIT access to a VM, and perform no other JIT operations, use the [Set-JitLeastPrivilegedRole script](https://github.com/Azure/Azure-Security-Center/tree/main/Powershell%20scripts/JIT%20Scripts/JIT%20Custom%20Role) from the Defender for Cloud GitHub community pages.
-> [!NOTE]
-> In order to successfully create a custom JIT policy, the policy name, together with the targeted VM name, must not exceed a total of 56 characters.
+ > [!NOTE]
+ > In order to successfully create a custom JIT policy, the policy name, together with the targeted VM name, must not exceed a total of 56 characters.
## Work with JIT VM access using Microsoft Defender for Cloud
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
To learn about *planned* changes that are coming soon to Defender for Cloud, see
If you're looking for items older than six months, you can find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## March 2024
+
+|Date | Update |
+|-|-|
+| March 3 | [Defender for Cloud Containers Vulnerability Assessment powered by Qualys retirement](#defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys-retirement) |
+
+### Defender for Cloud Containers Vulnerability Assessment powered by Qualys retirement
+
+March 3, 2024
+
+The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is being retired. The retirement will be completed by March 6, 2024. Until then, partial results might still appear in both the Qualys recommendations and the Qualys results in the security graph. Any customers who were previously using this assessment should upgrade to [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md). For information about transitioning to the container vulnerability assessment offering powered by Microsoft Defender Vulnerability Management, see [Transition from Qualys to Microsoft Defender Vulnerability Management](transition-to-defender-vulnerability-management.md).
+ ## February 2024 |Date | Update |
If you're looking for items older than six months, you can find them in the [Arc
February 28, 2024
-The updated experience for managing security policies, initially released in Preview for Azure, is expanding its support to cross cloud (AWS and GCP) environments. This Preview release includes:
+The updated experience for managing security policies, initially released in Preview for Azure, is expanding its support to cross cloud (AWS and GCP) environments. This Preview release includes:
+ - Managing [regulatory compliance standards](update-regulatory-compliance-packages.md) in Defender for Cloud across Azure, AWS, and GCP environments. - Same cross cloud interface experience for creating and managing [Microsoft Cloud Security Benchmark(MCSB) custom recommendations](manage-mcsb.md).-- The updated experience is applied to AWS and GCP for [creating custom recommendations with a KQL query](create-custom-recommendations.md).
+- The updated experience is applied to AWS and GCP for [creating custom recommendations with a KQL query](create-custom-recommendations.md).
### Cloud support for Defender for Containers
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
This article summarizes support information for Container capabilities in Micros
> - Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. > - Only the versions of AKS, EKS and GKE supported by the cloud vendor are officially supported by Defender for Cloud.
+> [!IMPORTANT]
+> The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is being retired. The retirement will be completed by March 6, 2024. Until then, partial results might still appear in both the Qualys recommendations and the Qualys results in the security graph. Any customers who were previously using this assessment should upgrade to [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md). For information about transitioning to the container vulnerability assessment offering powered by Microsoft Defender Vulnerability Management, see [Transition from Qualys to Microsoft Defender Vulnerability Management](transition-to-defender-vulnerability-management.md).
+ ## Azure Following are the features for each of the domains in Defender for Containers:
Following are the features for each of the domains in Defender for Containers:
|--|--|--|--|--|--|--|--|--| | Agentless registry scan (powered by Microsoft Defender Vulnerability Management) [supported packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-microsoft-defender-vulnerability-management)| Vulnerability assessment for images in ACR | ACR, Private ACR | GA | Preview | Enable **Agentless container vulnerability assessment** toggle | Agentless | Defender for Containers or Defender CSPM | Commercial clouds<br/><br/> National clouds: Azure Government, Azure operated by 21Vianet | | Agentless/agent-based runtime (powered by Microsoft Defender Vulnerability Management) [supported packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-microsoft-defender-vulnerability-management)| Vulnerability assessment for running images in AKS | AKS | GA | Preview | Enable **Agentless container vulnerability assessment** toggle | Agentless (Requires Agentless discovery for Kubernetes) **OR/AND** Defender agent | Defender for Containers or Defender CSPM | Commercial clouds<br/><br/> National clouds: Azure Government, Azure operated by 21Vianet |
-| Deprecated: Agentless/agent-based runtime scan (powered by Qualys) [OS packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated) | Vulnerability assessment for running images in AKS | AKS | GA | Preview | Activated with plan | Defender agent | Defender for Containers | Commercial clouds<br /> |
-| Deprecated: Agentless registry scan (powered by Qualys) <BR>[Supported OS packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated) | Vulnerability assessment for images in ACR | ACR, Private ACR | GA | Preview | Activated with plan | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-| Deprecated: Agentless registry scan (powered by Qualys) <BR>[Supported language packages](#registries-and-images-support-for-azurevulnerability-assessment-powered-by-qualys-deprecated) | Vulnerability assessment for images in ACR | ACR, Private ACR | Preview | - | Activated with plan | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
### Runtime threat protection
Following are the features for each of the domains in Defender for Containers:
| Defender agent auto provisioning | Automatic deployment of Defender agent | AKS | GA | - | Enable **Defender Agent in Azure** toggle | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet | | Azure Policy for Kubernetes auto provisioning | Automatic deployment of Azure policy agent for Kubernetes | AKS | GA | - | Enable **Azure policy for Kubernetes** toggle | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure operated by 21Vianet |
-### Registries and images support for Azure - vulnerability assessment powered by Qualys (Deprecated)
-
-| Aspect | Details |
-|--|--|
-| Registries and images | **Supported**<br> ΓÇó [ACR registries protected with Azure Private Link](../container-registry/container-registry-private-link.md) (Private registries requires access to Trusted Services) <br> ΓÇó Windows images using Windows OS version 1709 and above (Preview). This is free while it's in preview, and will incur charges (based on the Defender for Containers plan) when it becomes generally available.<br><br>**Unsupported**<br> ΓÇó Super-minimalist images such as [Docker scratch](https://hub.docker.com/_/scratch/) images<br> ΓÇó "Distroless" images that only contain an application and its runtime dependencies without a package manager, shell, or OS<br> ΓÇó Images with [Open Container Initiative (OCI) Image Format Specification](https://github.com/opencontainers/image-spec/blob/master/spec.md) <br> ΓÇó Providing image tag information for [multi-architecture images](https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/) is currently unsupported|
-| OS Packages | **Supported** <br> ΓÇó Alpine Linux 3.12-3.16 <br> ΓÇó Red Hat Enterprise Linux 6, 7, 8 <br> ΓÇó CentOS 6, 7 <br> ΓÇó Oracle Linux 6, 7, 8 <br> ΓÇó Amazon Linux 1, 2 <br> ΓÇó openSUSE Leap 42, 15 <br> ΓÇó SUSE Enterprise Linux 11, 12, 15 <br> ΓÇó Debian GNU/Linux wheezy, jessie, stretch, buster, bullseye <br> ΓÇó Ubuntu 10.10-22.04 <br> ΓÇó FreeBSD 11.1-13.1 <br> ΓÇó Fedora 32, 33, 34, 35|
-| Language specific packages (Preview) <br><br> (**Only supported for Linux images**) | **Supported** <br> ΓÇó Python <br> ΓÇó Node.js <br> ΓÇó .NET <br> ΓÇó JAVA <br> ΓÇó Go |
- ### Registries and images support for Azure - Vulnerability assessment powered by Microsoft Defender Vulnerability Management | Aspect | Details |
defender-for-cloud Transition To Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/transition-to-defender-vulnerability-management.md
Title: Transition to Microsoft Defender Vulnerability Management description: Learn how to transition to Microsoft Defender Vulnerability Management in Microsoft Defender for Cloud. Previously updated : 01/08/2024 Last updated : 02/19/2024 # Transition to Microsoft Defender Vulnerability Management Microsoft Defender for Cloud is unifying all vulnerability assessment solutions to utilize the Microsoft Defender Vulnerability Management vulnerability scanner.
-Microsoft Defender Vulnerability Management integrates across many cloud native use cases, such as containers ship and runtime scenarios. As part of this change, we're retiring our built-in vulnerability assessments offering powered by Qualys.
+Microsoft Defender Vulnerability Management integrates across many cloud-native use cases, such as container ship and runtime scenarios.
-> [!IMPORTANT]
-> The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is now on a retirement path completing on **March 1st, 2024**.
->
-> Customers that onboarded at least one subscription to Defender for Containers prior to **November 15th, 2023** can continue to use Container Vulnerability Assessment powered by Qualys until **March 1st, 2024**.
->
-> For more information about the change, see [Defender for Cloud unifies Vulnerability Assessment solution powered by Microsoft Defender Vulnerability Management](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
-
-If you're currently using the built vulnerability assessment solution powered by Qualys, start planning for the upcoming retirement by following the steps on this page.
+The Defender for Cloud Containers Vulnerability Assessment powered by Qualys [is now retired](release-notes.md#defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys-retirement). If you haven't yet transitioned to [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md), follow the steps on this page to make the transition.
## Step 1: Verify that scanning is enabled
Container vulnerability assessment scanning powered by Microsoft Defender Vulner
For more information on enabling Microsoft Defender Vulnerability Management scanning, see [Enable vulnerability assessment powered by Microsoft Defender Vulnerability Management](enable-vulnerability-assessment.md).
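If you prefer to verify the plan state outside the portal, the subscription's `Microsoft.Security/pricings` resource reports the pricing tier. The sketch below is illustrative only and makes a few assumptions (the `api-version`, the use of `DefaultAzureCredential` from the `azure-identity` package, and the placeholder subscription ID); a tier of `Standard` indicates the plan is enabled.

```python
# Sketch: read the Defender for Containers pricing tier over the ARM REST API.
# Assumptions: api-version, DefaultAzureCredential, placeholder subscription ID.
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
API_VERSION = "2023-01-01"  # assumed api-version

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/providers/Microsoft.Security/pricings/Containers"
    f"?api-version={API_VERSION}"
)
response = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=30)
response.raise_for_status()
tier = response.json().get("properties", {}).get("pricingTier")
print(f"Defender for Containers pricing tier: {tier}")  # 'Standard' means the plan is on
```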
-## Step 2: Disable Qualys recommendations
-
-If your organization is ready to transition to container vulnerability assessment scanning powered by Microsoft Defender Vulnerability Management and no longer receive results from the Qualys recommendations, you can go ahead and disable the recommendations reporting on Qualys scanning results. Following are the recommendation names and assessment keys referenced throughout this guide.
-
-### Qualys recommendations and assessment Keys
-
-| Recommendation | Description | Assessment Key
-|--|--|--|
-| [Azure registry container images should have vulnerability findings resolved (powered by Qualys)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainerRegistryRecommendationDetailsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648)| Container image vulnerability assessment scans your registry for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | dbd0cb49-b563-45e7-9724-889e799fa648 |
-| [Azure running container images should have vulnerability findings resolved (powered by Qualys)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/KubernetesRuntimeVisibilityRecommendationDetailsBlade/assessmentKey/41503391-efa5-47ee-9282-4eff6131462c)ΓÇ»| Container image vulnerability assessment scans container images running on your Kubernetes clusters for security vulnerabilities and exposes detailed findings for each image. Resolving the vulnerabilities can greatly improve your containers' security posture and protect them from attacks. | 41503391-efa5-47ee-9282-4eff6131462c |
-
-### Microsoft Defender Vulnerability Management recommendations and assessment keys
-
-| Recommendation | Description | Assessment Key
-|--|--|--|
-| [Azure registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)-Preview](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AzureContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
-| [Azure running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |
-
-### Disable using the Qualys recommendations for Azure commercial clouds
-
-To disable the above Qualys recommendations for Azure commercial clouds using the Defender for Cloud UI:
-
-1. In the Azure portal, navigate to Defender for Cloud and open the **Recommendations** page.
-
- :::image type="content" source="media/transition-to-defender-vulnerability-management/select-recommendations.png" alt-text="Screenshot showing Recommendations selection.":::
-
-1. Search for one of the Qualys recommendations.
-
- :::image type="content" source="media/transition-to-defender-vulnerability-management/powered-by-qualys.png" alt-text="Screenshot showing Search for one of the Qualys recommendations." lightbox="media/transition-to-defender-vulnerability-management/powered-by-qualys.png":::
-
-1. Choose the recommendation and select **"Exempt"**.
-
- :::image type="content" source="media/transition-to-defender-vulnerability-management/powered-by-qualys-select-exempt.png" alt-text="Screenshot showing how to select Exempt.":::
-
-1. Select the management group or subscriptions where you want to exempt the Qualys recommendation.
+## Step 2: (Optional) Update REST API and Azure Resource Graph queries
- :::image type="content" source="media/transition-to-defender-vulnerability-management/powered-by-qualys-exempt.png" alt-text="Screenshot showing the selection of the management group or subscriptions to exempt.":::
-
-1. Fill out the remaining details and select create. Wait up to 30 minutes for the exemptions to take effect.
-
-### Disable using the Qualys recommendations for national clouds
-
-To disable the above Qualys recommendations for national clouds (Azure Government and Azure operated by 21Vianet) using the Defender for Cloud UI:
-
-1. Go to **Environment settings** and select the relevant subscription you want to disable the recommendation on.
-
- :::image type="content" source="media/transition-to-defender-vulnerability-management/environment-settings.png" alt-text="Screenshot showing how to select subscription in environment settings." lightbox="media/transition-to-defender-vulnerability-management/environment-settings.png":::
-
-1. In the **Settings** pane, go to **Security policy**, and select the initiative assignment.
-
- :::image type="content" source="media/transition-to-defender-vulnerability-management/security-policy.png" alt-text="Screenshot of security policy settings." lightbox="media/transition-to-defender-vulnerability-management/security-policy.png":::
-
-1. Search for the Qualys recommendation and select **Manage effect and parameters**.
-
- :::image type="content" source="media/transition-to-defender-vulnerability-management/qualys-recommendation.png" alt-text="Screenshot of Qualys recommendation." lightbox="media/transition-to-defender-vulnerability-management/qualys-recommendation.png":::
-
-1. Change to **Disabled**.
-
- :::image type="content" source="media/transition-to-defender-vulnerability-management/select-disabled.png" alt-text="Screenshot of disable button." lightbox="media/transition-to-defender-vulnerability-management/select-disabled.png":::
-
-## Step 3: (optional) Update REST API and Azure Resource Graph queries
-
-If you're currently accessing container vulnerability assessment results powered by Qualys programmatically, either via the Azure Resource Graph (ARG) Rest API or Subassessment REST API or ARG queries, you need to update your existing queries to match the new schema and/or REST API provided by the new container vulnerability assessment powered by Microsoft Defender Vulnerability Management.
+If you were accessing container vulnerability assessment results powered by Qualys programmatically, either via Azure Resource Graph (ARG) queries, the ARG REST API, or the Subassessment REST API, you need to update your existing queries to match the new schema and/or REST API provided by the new container vulnerability assessment powered by Microsoft Defender Vulnerability Management.
The next section includes a few examples that can help in understanding how existing queries for the Qualys powered offering should be translated to equivalent queries with the Microsoft Defender Vulnerability Management powered offering. ### ARG query examples
-Any Azure Resource Graph queries used for reporting should be updated to reflect the Microsoft Defender Vulnerability Management assessmentKeys listed previously. Following are examples to help you transition to Microsoft Defender Vulnerability Management queries.
+Any Azure Resource Graph queries used for reporting should be updated to reflect the Microsoft Defender Vulnerability Management assessmentKeys listed previously. The following are examples to help you transition to Microsoft Defender Vulnerability Management queries.
#### Show unhealthy container images
securityresources
| summarize count=count() by tostring(severity) ```
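As a minimal sketch of the kind of update involved, the following runs a Microsoft Defender Vulnerability Management version of a reporting query through the Az.ResourceGraph PowerShell module. The assessment key is the registry-images key listed earlier in this article; the field names are indicative only and should be checked against the current subassessment schema.

```powershell
# Sketch only: assumes the Az.ResourceGraph module and a signed-in Az session.
# The assessment key is the MDVM registry-images key listed earlier in this article;
# field names are indicative and may differ from the current subassessment schema.
$query = @"
securityresources
| where type =~ 'microsoft.security/assessments/subassessments'
| extend assessmentKey = extract('(?i)providers/Microsoft.Security/assessments/([^/]*)', 1, id)
| where assessmentKey == 'c0b7cfc6-3172-465a-b378-53c7ff2cc0d5'
| extend severity = tostring(properties.status.severity)
| summarize count = count() by severity
"@
Search-AzGraph -Query $query -First 1000
```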
-#### View pod, container and namespace for a running vulnerable image on the AKS cluster
+#### View pod, container, and namespace for a running vulnerable image on the AKS cluster
##### **Qualys**
securityresources
```
-## Step 4: (optional) Container Security reporting
+## Step 3: (Optional) Container Security reporting
Microsoft Defender for Cloud provides out of the box reporting via Azure Workbooks, including a Container Security workbook.
This workbook includes container vulnerability scanning results from both regist
:::image type="content" source="media/transition-to-defender-vulnerability-management/workbook-vulnerability-assessment-results.png" alt-text="Screenshot of workbook including container vulnerability scanning results." lightbox="media/transition-to-defender-vulnerability-management/workbook-vulnerability-assessment-results.png":::
-The workbook provides results from both Qualys and Microsoft Defender Vulnerability Management scanning, offering a comprehensive overview of vulnerabilities detected within your Azure Registry container images. The Containers Security workbook provides the following benefits for container vulnerability assessment:
--- **Dual Scanner Integration**: Users can easily compare results from both scanners in a single report while the Qualys results are still available. Filters also allow it to focus on results for a specific container registry or cluster.
+The workbook provides results from Microsoft Defender Vulnerability Management scanning, offering a comprehensive overview of vulnerabilities detected within your Azure Registry container images. The Containers Security workbook provides the following benefits for container vulnerability assessment:
- **Overview of all vulnerabilities**: View all vulnerabilities detected across your Azure Container Registries and running on the AKS cluster.
defender-for-cloud Tutorial Enable Databases Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-databases-plan.md
These plans protect all of the supported databases in your subscription.
- [Microsoft Defender for SQL servers on machines](defender-for-sql-usage.md) - [Overview of Microsoft Defender for open-source relational databases](defender-for-databases-introduction.md) - [Overview of Microsoft Defender for Azure Cosmos DB](concept-defender-for-cosmos.md)-
defender-for-cloud Tutorial Enable Resource Manager Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-resource-manager-plan.md
Microsoft Defender for Resource Manager automatically monitors the resource mana
1. Select **Save**.
-## Next steps
+## Next step
[Overview of Microsoft Defender for Resource Manager](defender-for-resource-manager-introduction.md)
defender-for-cloud Tutorial Enable Storage Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-storage-plan.md
Microsoft Defender for Storage is an Azure-native solution offering an advanced
With Microsoft Defender for Storage, organizations can customize their protection and enforce consistent security policies by enabling it on subscriptions and storage accounts with granular control and flexibility.
- > [!TIP]
+ > [!TIP]
> If you're currently using Microsoft Defender for Storage classic, consider [migrating to the new plan](defender-for-storage-classic-migrate.md), which offers several benefits over the classic plan. ## Availability
With Microsoft Defender for Storage, organizations can customize their protectio
*Azure DNS Zone is not supported for malware scanning and sensitive data threat detection. ## Prerequisites for malware scanning+ To enable and configure malware scanning, you must have Owner roles (such as Subscription Owner or Storage Account Owner) or specific roles with the necessary data actions. Learn more about the [required permissions](support-matrix-defender-for-storage.md). ## Set up and configure Microsoft Defender for Storage
Enabling Defender for Storage via a policy is recommended because it facilitates
## Next steps - Learn how to [enable and Configure the Defender for Storage plan at scale with an Azure built-in policy](defender-for-storage-policy-enablement.md).---
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you can find them in the [What's
| [Update to agentless VM scanning built-in Azure role](#update-to-agentless-vm-scanning-built-in-azure-role) |January 14, 2024 | February 2024 | | [Deprecation of two recommendations related to PCI](#deprecation-of-two-recommendations-related-to-pci) |January 14, 2024 | February 2024 | | [Defender for Servers built-in vulnerability assessment (Qualys) retirement path](#defender-for-servers-built-in-vulnerability-assessment-qualys-retirement-path) | January 9, 2024 | May 2024 |
-| [Retirement of the Defender for Cloud Containers Vulnerability Assessment powered by Qualys](#retirement-of-the-defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys) | January 9, 2023 | March 2024 |
+| [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) | January 4, 2024 | February 2024 |
| [Upcoming change for the Defender for Cloud's multicloud network requirements](#upcoming-change-for-the-defender-for-clouds-multicloud-network-requirements) | January 3, 2024 | May 2024 | | [Deprecation of two DevOps security recommendations](#deprecation-of-two-devops-security-recommendations) | November 30, 2023 | January 2024 | | [Consolidation of Defender for Cloud's Service Level 2 names](#consolidation-of-defender-for-clouds-service-level-2-names) | November 1, 2023 | December 2023 |
In February 2021, the deprecation of the MSCA task was communicated to all custo
Customers can get the latest DevOps security tooling from Defender for Cloud through [Microsoft Security DevOps](azure-devops-extension.md) and additional security tooling through [GitHub Advanced Security for Azure DevOps](https://azure.microsoft.com/products/devops/github-advanced-security). - ## Update recommendations to align with Azure AI Services resources **Announcement date: February 20, 2024**
Customers that are still using the API version **2022-09-01-preview** under `Mic
Customers currently using Defender for Cloud DevOps security from Azure portal won't be impacted. - For details on the new API version, see [Microsoft Defender for Cloud REST APIs](/rest/api/defenderforcloud/operation-groups). ## Changes in endpoint protection recommendations
For more information about our decision to unify our vulnerability assessment of
You can also check out the [common questions about the transition to Microsoft Defender Vulnerability Management solution](faq-scanner-detection.yml).
-## Retirement of the Defender for Cloud Containers Vulnerability Assessment powered by Qualys
-
-**Announcement date: January 9, 2023**
+## New version of Defender Agent for Defender for Containers
-**Estimated date for change: March 2024**
+**Announcement date: January 4, 2024**
-The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is now on a retirement path completing on **March 1st, 2024**. If you're currently using container vulnerability assessment powered by Qualys, start planning your transition to [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md).
-
-For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, see [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
-
-For more information about transitioning to our new container vulnerability assessment offering powered by Microsoft Defender Vulnerability Management, see [Transition from Qualys to Microsoft Defender Vulnerability Management](transition-to-defender-vulnerability-management.md).
+**Estimated date for change: February 2024**
-For common questions about the transition to Microsoft Defender Vulnerability Management, see [Common questions about the Microsoft Defender Vulnerability Management solution](common-questions-microsoft-defender-vulnerability-management.md).
+A new version of the [Defender Agent for Defender for Containers](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure) will be released in February 2024. It includes performance and security improvements, support for both AMD64 and ARM64 arch nodes (Linux only), and uses [Inspektor Gadget](https://www.inspektor-gadget.io/) as the process collection agent instead of Sysdig. The new version is only supported on Linux kernel versions 5.4 and higher, so if you have older versions of the Linux kernel, you'll need to upgrade. For more information, see [Supported host operating systems](support-matrix-defender-for-containers.md#supported-host-operating-systems).
## Upcoming change for the Defender for Cloud's multicloud network requirements
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
# Assign security standards - Defender for Cloud's regulatory standards and benchmarks are represented as [security standards](security-policy-concept.md). Each standard is an initiative defined in Azure Policy. In Defender for Cloud, you assign security standards to specific scopes such as Azure subscriptions, AWS accounts, and GCP projects that have Defender for Cloud enabled.
defender-for-cloud Workflow Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/workflow-automation.md
This article describes the workflow automation feature of Microsoft Defender for
- You must also have write permissions for the target resource. - To work with Azure Logic Apps workflows, you must also have the following Logic Apps roles/permissions:
- - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)
- - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for logic app creation and modification.
-
-- If you want to use Logic Apps connectors, you might need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances).
+ - [Logic App Operator](../role-based-access-control/built-in-roles.md#logic-app-operator) permissions are required or Logic App read/trigger access (this role can't create or edit logic apps; only *run* existing ones)
+ - [Logic App Contributor](../role-based-access-control/built-in-roles.md#logic-app-contributor) permissions are required for logic app creation and modification.
+- If you want to use Logic Apps connectors, you might need other credentials to sign in to their respective services (for example, your Outlook/Teams/Slack instances).
## Create a logic app and define when it should automatically run
The logic app designer supports the following Defender for Cloud triggers:
> [!NOTE] > If you are using the legacy trigger "When a response to a Microsoft Defender for Cloud alert is triggered", your logic apps will not be launched by the Workflow Automation feature. Instead, use either of the triggers mentioned above. - 1. After you've defined your logic app, return to the workflow automation definition pane ("Add workflow automation"). 1. Select **Refresh** to ensure your new logic app is available for selection. 1. Select your logic app and save the automation. The logic app dropdown only shows those with supporting Defender for Cloud connectors mentioned above.
To manually run a logic app, open an alert, or a recommendation and select **Tri
[![Manually trigger a logic app.](media/workflow-automation/manually-trigger-logic-app.png)](media/workflow-automation/manually-trigger-logic-app.png#lightbox)
-## Configure workflow automation at scale
+## Configure workflow automation at scale
Automating your organization's monitoring and incident response processes can greatly improve the time it takes to investigate and mitigate security incidents.
To implement these policies:
|Workflow automation for security recommendations |[Deploy Workflow Automation for Microsoft Defender for Cloud recommendations](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F73d6ab6c-2475-4850-afd6-43795f3492ef)|73d6ab6c-2475-4850-afd6-43795f3492ef| |Workflow automation for regulatory compliance changes|[Deploy Workflow Automation for Microsoft Defender for Cloud regulatory compliance](https://portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F509122b9-ddd9-47ba-a5f1-d0dac20be63c)|509122b9-ddd9-47ba-a5f1-d0dac20be63c|
-
You can also find these by searching Azure Policy. In Azure Policy, select **Definitions** and search for them by name.
-
1. From the relevant Azure Policy page, select **Assign**. :::image type="content" source="./media/workflow-automation/export-policy-assign.png" alt-text="Assigning the Azure Policy.":::
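If you prefer scripting over the portal flow above, the following is a hedged sketch of assigning one of these built-in definitions with the Az.Resources module. The assignment name and scope are placeholders, and the definition's required parameters (not listed in this article) must be supplied via `-PolicyParameterObject`.

```powershell
# Sketch only: assignment name and scope are placeholders. Inspect the definition for its
# required parameters (automation name, trigger details, and so on) and pass them in the
# -PolicyParameterObject hashtable.
$definition = Get-AzPolicyDefinition -Name '73d6ab6c-2475-4850-afd6-43795f3492ef'

New-AzPolicyAssignment -Name 'deploy-workflow-automation-recommendations' `
  -Scope '/subscriptions/<subscription-id>' `
  -PolicyDefinition $definition `
  -PolicyParameterObject @{ }   # fill in the parameters the definition expects

# Note: deployIfNotExists assignments also need a managed identity with the right
# permissions; the portal Assign flow described above configures that for you.
```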
defender-for-iot Dell Xe4 Sff https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-xe4-sff.md
+
+ Title: DELL XE4 SFF for OT monitoring in SMB/ L500 deployments - Microsoft Defender for IoT
+description: Learn about the DELL XE4 SFF appliance when used for OT monitoring with Microsoft Defender for IoT in SMB/ L500 deployments.
Last updated : 02/25/2024+++
+# DELL XE4 SFF
+
+This article describes the **DELL XE4 SFF** appliance deployment and installation for OT sensors monitoring production lines.
+
+| Appliance characteristic |Details |
+|||
+|**Hardware profile** | L500 |
+|**Performance** | Max bandwidth: 25 Mbps <br> Up to 8x RJ45 monitoring ports or 6x SFP (OPT) |
+|**Physical specifications** | Mounting: Small Form Factor <br> Ports: 1x1Gbps (builtin) and optional expansion PCIe cards for copper and SFP connectors|
+|**Status** | Supported, available preconfigured |
+
+The following image shows a sample of the DELL XE4 SFF front panel:
++
+The following image shows a sample of the DELL XE4 SFF back panel:
++
+The following image shows the DELL XE4 SFF dust filter installation and maintenance:
++
+## Specifications
+
+|Component|Technical specifications|
+|-|-|
+|Construction |Small Form Factor |
+|Dimensions |1. Width: 6.65 in. (169.00 mm) <br>2. Depth: 11.84 in. (300.80 mm) <br>3. Height: 14.45 in. (367.00 mm) |
+|Weight |Weight (min): 14.13 lb (6.41 kg) <br>Weight (max): 21.03 lb (9.54 kg) |
+|CPU |12th Generation Intel Core i5-12600 (6 Cores/18MB/12T/3.3GHz to 4.8GHz/65W) |
+|Memory |8 GB (1x8GB) DDR4 Non-ECC Memory |
+|Storage |M.2 2280 512-GB PCIe NVMe Class 40 Solid State Drive |
+|Network controller |Built-in 1x1Gbps |
+|Power Adapter |300 W internal power supply unit (PSU), 92% Efficient PSU, 80 Plus Platinum |
+|PWS |300 W internal power supply unit (PSU), 92% Efficient, 80 Plus Platinum, V3, TCO9 |
+|Temperature |5°C to 45°C (41°F to 113°F) |
+|Dust Filter |Dell XE4 optional dust filter fits over the front of the chassis and safeguards internal components in areas such as factory, warehouse, and retail environments without impeding air flow. |
+|Humidity |20% to 80% (non-condensing, Max dew point temperature = 26°C) |
+|Vibration |0.26 GRMS random at 5 Hz to 350 Hz |
+|Shock |Bottom/Right half-sine pulse 40G, 2 ms |
+|EMC |Product Safety, EMC and Environmental Datasheets <br><https://www.dell.com/learn/us/en/uscorp1/product-info-datasheets-safety-emc-environmental> |
+
+## DELL XE4 SFF - Bill of Materials
+
+|Type|Description|PN|Quantity|
+|-||-|-|
+|Processor|12th Generation Intel Core i5-12600 (6 Cores/18MB/12T/3.3GHz to 4.8GHz/65W) |338-CCYL|1|
+|Memory| 8 GB (1x8GB) DDR4 Non-ECC Memory |370-AGFP |1 |
+|Storage |M.2 2280 512 GB PCIe NVMe Class 40 Solid State Drive |400-BMWH |1 |
+|Storage |M.2 22x30 Thermal Pad |412-AAQT |1 |
+|Storage |M2X3.5 Screw for SSD/DDPE |773-BBBC |1 |
+|Speakers |Internal Speaker |520-AARD |1 |
+|Graphics |Intel Integrated Graphics |490-BBFG |1 |
+|Optical Drive |No Optical Drive |429-ABKF |1 |
+|Network Adapters (NIC) |No Additional Network Card Selected (Integrated NIC included) |555-BBJO |1 |
+|Power Cord |System Power Cord (UK/MY/SG/HK/Bangladesh/Brunei/Pakistan/Sri Lanka/Maldives) |450-AANJ |1 |
+|Documentation |End User License Agreement (MUI) OEM |340-ABMB |1 |
+|Keyboard |Dell Multimedia Keyboard-KB216 - UK (QWERTY) - Black |580-ADDF |1 |
+|Mouse |Dell Optical Mouse-MS116 - Black |570-ABJO |1 |
+|System Monitoring Options |System Monitoring not selected in this configuration |817-BBSI |1 |
+|Additional Video Ports |No Additional Video Ports |492-BCKH |1 |
+|Add-in Cards |No Additional Add In Cards |382-BBHX |1 |
+|Additional Storage Devices - Media Reader |No Media Card Reader |385-BBCR |1|
+|Dust Protection |Dust Filter |325-BDSX |1 |
+|Chassis Options |300 W internal power supply unit (PSU), 92% Efficient, 80 Plus Platinum, V3, TCO9 |329-BJVV |1 |
+|Systems Management |In-Band Systems Management |631-ADFK |1|
+|EPEAT 2018 |EPEAT 2018 Registered (Silver) |379-BDTO |1 |
+|ENERGY STAR |ENERGY STAR Qualified |387-BBLW |1 |
+|TPM Security |Trusted Platform Module (Discrete TPM Enabled) |329-BBJL |1 |
+|Consolidation Fees - (EM-EMEA Only) |Consolidation Fee Desktop |546-10007 |1 |
+|Shipping Material |OptiPlex OEM Small Form Factor Packaging and Labels |328-BFCV |1 |
+|Optical Software |CMS Software not included |632-BBBJ |1 |
+|Processor Label |Intel Core i5 Processor Label |340-CUEW |1 |
+|Raid Connectivity |NO RAID |817-BBBN |1 |
+|Transportation from ODM to region |Desktop BTO Standard shipment |800-BBIO |1 |
+|Bezel |OEM, BRAND, XE4SFF, RM7WS2MN-V2, NOM, AVIGILON |325-BFBS |1 |
+|Bezel |OEM Badge Small Form Factor |340-DBRM |1 |
+|Bezel |Client ID Module info mod for no apps on OEM Client platform projects |750-ABLB |1 |
+|OEMR XE4 Small Form Factor |OptiPlex XE4 Small Form Factor OEM-Ready |210-BDLO |1 |
+|Hard Drive Cables and Brackets |No Hard Drive Bracket, Dell OptiPlex |575-BBKX |1 |
+|Order Information |Dell Order |799-AANV |1 |
+|BIOS Configuration - Standard |BIOS: NIC Set To On w/ PXE |696-10356 |1 |
+|BIOS Configuration - Standard |BIOS Wake-On-Lan Set To Enabled-Same As Remote wake up |696-10362 |1 |
+|BIOS Configuration - Standard |BIOS Setting Mandatory Enablement SKU |696-10421 |1 |
+|Label |Regulatory Label XE4 OEM SFF EMEA |389-EEGP |1 |
+|Operating System |Ubuntu Linux 20.04 |605-BBNY |1 |
+|Operating System Recovery Options |No Media |620-AAOH |1 |
+|Dell
+|Dell
+|Dell
+|Dell
+
+## Optional port expansion
+
+Optional modules for port expansion include:
+
+|Description| PN|Quantity|
+|--|--||
+|Dell Intel Ethernet i350 Quad Port 1 GbE Base-T Adapter PCIe Full Height |540-BDLF |1|
+|Intel 1 GB Single Port PCIe Network card (half height) |540-BBMO |1 |
+|Intel X710 Dual Port 10 GbE SFP+ Adapter | 540-BDQZ |1|
+
+## Set up the Dell XE4 BIOS
+
+This procedure describes how to configure the BIOS for an unconfigured sensor appliance.
+If any of the steps are missing in the BIOS, make sure that the hardware matches the specifications above.
+Set up the Dell XE4 BIOS to achieve optimal performance for sensors.
+
+**To configure the Dell XE4 BIOS**:
+
+1. Set up **Boot Mode**
+
+ 1. Select **System setup options** > **Boot Configuration menu** > **Boot Mode**
+
+ 1. Set to **UEFI Only**
+
+1. Set up **Installation boot from USB/DVD (as applicable)**
+
+ 1. Select **System setup options** > **Boot Configuration menu** > **Boot Sequence**
+
+ 1. Select the installation disk drive as the first option
+
+1. Set up **Restart on Power Loss**
+
+ 1. Select **System setup options** > **Power Menu** > **AC Behavior**
+
+ 1. Set to **Restart**
+
+1. Disable Sleep/Hibernation
+
+ 1. Select **System setup options** > **Power Menu** > **Block Sleep**
+
+ 1. Enable **Block Sleep**
+
+## Dell XE4 software setup
+
+This procedure describes how to install Defender for IoT software on the Dell XE4. The installation process takes about 20 minutes. After the installation, the system restarts several times.
+
+To install Defender for IoT software:
+
+1. Connect the screen and keyboard to the appliance, and then connect to the CLI.
+
+1. Connect an external CD or disk-on-key that contains the software you downloaded from the Azure portal.
+
+1. Switch on the appliance.
+
+1. Continue by installing your Defender for IoT software. For more information, see [Defender for IoT software installation](../ot-deploy/install-software-ot-sensor.md#install-defender-or-iot-software-on-ot-sensors).
+
+## Next steps
+
+Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](../ot-appliance-sizing.md)
+
+Then, use any of the following procedures to continue:
+
+- [Download software for an OT sensor](../ot-deploy/install-software-ot-sensor.md#download-software-files-from-the-azure-portal)
+- [Download software files for an on-premises management console](../legacy-central-management/install-software-on-premises-management-console.md#download-software-files-from-the-azure-portal)
defender-for-iot Hpe Proliant Dl20 Plus Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-smb.md
Title: HPE ProLiant DL20 Gen10 Plus (NHP 2LFF) for OT monitoring in SMB deployments - Microsoft Defender for IoT
-description: Learn about the HPE ProLiant DL20 Gen10 Plus appliance when used for in SMB deployments for OT monitoring with Microsoft Defender for IoT.
+ Title: HPE ProLiant DL20 Gen10 Plus (NHP 2LFF) for OT monitoring in SMB/ L500 deployments - Microsoft Defender for IoT
+description: Learn about the HPE ProLiant DL20 Gen10 Plus appliance when used for OT monitoring with Microsoft Defender for IoT in SMB deployments.
Last updated 04/24/2022
defender-for-iot Hpe Proliant Dl20 Smb Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-smb-legacy.md
Title: HPE ProLiant DL20 Gen10 (NHP 2LFF) for OT monitoring in SMB deployments- Microsoft Defender for IoT
-description: Learn about the HPE ProLiant DL20 Gen10 appliance when used for in SMB deployments for OT monitoring with Microsoft Defender for IoT.
+description: Learn about the HPE ProLiant DL20 Gen10 appliance when used for OT monitoring with Microsoft Defender for IoT in SMB deployments.
Last updated 10/30/2022
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates | |||
-| **OT networks** | **Version 24.1.0**:<br>- [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols)<br><br>**Cloud features**<br>- [New license renewal reminder in the Azure portal](#new-license-renewal-reminder-in-the-azure-portal) <br><br>- [New fields for SNMP MIB OIDs](#new-fields-for-snmp-mib-oids)|
+| **OT networks** | **Version 24.1.0**:<br> - [Alert suppression rules from the Azure portal (Public preview)](#alert-suppression-rules-from-the-azure-portal-public-preview)<br>- [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [Newly supported protocols](#newly-supported-protocols)<br><br>**Cloud features**<br>- [New license renewal reminder in the Azure portal](#new-license-renewal-reminder-in-the-azure-portal) <br><br>- [New OT appliance hardware profile](#new-ot-appliance-hardware-profile) <br><br>- [New fields for SNMP MIB OIDs](#new-fields-for-snmp-mib-oids)|
### Alert suppression rules from the Azure portal (Public preview)
For more information, see [Suppress irrelevant alerts](how-to-accelerate-alert-i
### Focused alerts in OT/IT environments
-Organizations where sensors are deployed between OT and IT networks deal with many alerts, related to both OT and IT traffic. The amount of alerts, some of which are irrelevant, can cause alert fatigue and affect overall performance.
+Organizations where sensors are deployed between OT and IT networks deal with many alerts, related to both OT and IT traffic. The number of alerts, some of which are irrelevant, can cause alert fatigue and affect overall performance.
To address these challenges, we've updated Defender for IoT's detection policy to automatically trigger alerts based on business impact and network context, and reduce low-value IT related alerts.
When the license for one or more of your OT sites is about to expire, a note is
:::image type="content" source="media/whats-new/license-renewal-note.png" alt-text="Screenshot of the license renewal reminder note." lightbox="media/whats-new/license-renewal-note.png":::
+### New OT appliance hardware profile
+
+The DELL XE4 SFF appliance is now supported for OT sensors monitoring production lines. This appliance is part of the L500 hardware profile, a *Production line* environment, with six cores, 8-GB RAM, and 512-GB disk storage.
+
+For more information, see [DELL XE4 SFF](appliance-catalog/dell-xe4-sff.md).
+ ### New fields for SNMP MIB OIDs Additional standard, generic fields have been added to the SNMP MIB OIDs. For the full list of fields, see [OT sensor OIDs for manual SNMP configurations](how-to-set-up-snmp-mib-monitoring.md#ot-sensor-oids-for-manual-snmp-configurations).
dev-box How To Configure Network Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-network-connections.md
Microsoft Dev Box requires a configured and working Active Directory join, which
> [!NOTE] > Microsoft Dev Box automatically creates a resource group for each network connection, which holds the network interface cards (NICs) that use the virtual network assigned to the network connection. The resource group has a fixed name based on the name and region of the network connection. You can't change the name of the resource group, or specify an existing resource group. ## Related content
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
You can create and manage multiple dev boxes as a dev box user. Create a dev box
To complete this quickstart, you need: -- Permissions as a [Dev Box User](quickstart-configure-dev-box-service.md#provide-access-to-a-dev-box-project) for a project that has an available dev box pool. If you don't have permissions to a project, contact your administrator.
+- Your organization must have configured Microsoft Dev Box with at least one project and dev box pool before you can create a dev box.
+ - Platform engineers can follow these steps to configure Microsoft Dev Box: [Quickstart: Configure Microsoft Dev Box](quickstart-configure-dev-box-service.md) -
+- You must have permissions as a [Dev Box User](quickstart-configure-dev-box-service.md#provide-access-to-a-dev-box-project) for a project that has an available dev box pool. If you don't have permissions to a project, contact your administrator.
## Create a dev box
Microsoft Dev Box enables you to create cloud-hosted developer workstations in a
Depending on the project configuration and your permissions, you have access to different projects and associated dev box configurations. If you have a choice of projects and dev box pools, select the project and dev box pool that best fits your needs. For example, you might choose a project that has a dev box pool located near to you for least latency.
+> [!IMPORTANT]
+> Your organization must have configured Microsoft Dev Box with at least one project and dev box pool before you can create a dev box. If you don't see any projects or dev box pools, contact your administrator.
+ To create a dev box in the Microsoft Dev Box developer portal: 1. Sign in to the [Microsoft Dev Box developer portal](https://aka.ms/devbox-portal).
energy-data-services Concepts Index And Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-index-and-search.md
For more information, see [Indexer service OSDU&trade; documentation](https://co
`Search service` provides a mechanism for discovering indexed metadata documents. The Search API supports full-text search on string fields, range queries on date, numeric, or string field, etc. along with geo-spatial searches.
+When metadata records are loaded onto the Platform using `Storage service`, you can configure permissions for viewers and owners of the metadata records under the *acl* field. The viewers and owners are assigned via groups as defined in the `Entitlement service`. When a user performs a search, matched metadata records only show up for users who are assigned to those groups.
+ For a detailed tutorial on `Search service`, refer [Search service OSDU&trade; documentation](https://community.opengroup.org/osdu/platform/system/search-service/-/blob/release/0.15/docs/tutorial/SearchService.md)
OSDU&trade; is a trademark of The Open Group.
## Next steps <!-- Add a context sentence for the following links --> > [!div class="nextstepaction"]
-> [Domain data management service concepts](concepts-ddms.md)
+> [Domain data management service concepts](concepts-ddms.md)
firewall Firewall Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-known-issues.md
# Azure Firewall known issues and limitations
-This article list the known issues for [Azure Firewall](overview.md). It is updated as issues are resolved.
+This article lists the known issues for [Azure Firewall](overview.md). It is updated as issues are resolved.
For Azure Firewall limitations, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-firewall-limits).
Azure Firewall Standard has the following known issues:
|Adding a DNAT rule to a secured virtual hub with a security provider isn't supported.|This results in an asynchronous route for the returning DNAT traffic, which goes to the security provider.|Not supported.| | Error encountered when creating more than 2000 rule collections. | The maximal number of NAT/Application or Network rule collections is 2000 (Resource Manager limit). | This is a current limitation. | |XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall. This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.|
-|Can't upgrade to Premium with Availability Zones in the Southeast Asia region|You can't currently upgrade to Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy a new Premium firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
|Can't deploy Firewall with Availability Zones with a newly created Public IP address|When you deploy a Firewall with Availability Zones, you can't use a newly created Public IP address.|First create a new zone redundant Public IP address, then assign this previously created IP address during the Firewall deployment.| |Azure private DNS zone isn't supported with Azure Firewall|Azure private DNS zone doesn't work with Azure Firewall regardless of Azure Firewall DNS settings.|To achieve the desired state of using a private DNS server, use Azure Firewall DNS proxy instead of an Azure private DNS zone.| |Physical zone 2 in Japan East is unavailable for firewall deployments.|You can't deploy a new firewall with physical zone 2. Additionally, if you stop an existing firewall which is deployed in physical zone 2, it cannot be restarted. For more information, see [Physical and logical availability zones](../reliability/availability-zones-overview.md#physical-and-logical-availability-zones).|For new firewalls, deploy with the remaining availability zones or use a different region. To configure an existing firewall, see [How can I configure availability zones after deployment?](firewall-faq.yml#how-can-i-configure-availability-zones-after-deployment).
frontdoor Migrate Tier Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier-powershell.md
Azure Front Door Standard and Premium tier bring the latest cloud delivery netwo
## Prepare for migration
+> [!NOTE]
+> * Managed certificate is currently **not supported** for Azure Front Door Standard or Premium in Azure Government Cloud. You need to use BYOC for Azure Front Door Standard or Premium in Azure Government Cloud or wait until this capability is available.
+ #### [Without WAF and BYOC (Bring your own certificate)](#tab/without-waf-byoc) Run the [Start-AzFrontDoorCdnProfilePrepareMigration](/powershell/module/az.cdn/start-azfrontdoorcdnprofilepreparemigration) command to prepare for migration. Replace the values for the resource group name, resource ID, profile name with your own values. For *SkuName* use either **Standard_AzureFrontDoor** or **Premium_AzureFrontDoor**. The *SkuName* is based on the output from the [Test-AzFrontDoorCdnProfileMigration](/powershell/module/az.cdn/test-azfrontdoorcdnprofilemigration) command.
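A hedged sketch of what that call might look like follows; the resource names and IDs are placeholders, and the parameter names should be verified against the linked cmdlet reference for your installed Az.Cdn version.

```powershell
# Sketch only: placeholder values; confirm parameter names against the
# Start-AzFrontDoorCdnProfilePrepareMigration reference before running.
$classicId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/frontdoors/myClassicFrontDoor"

Start-AzFrontDoorCdnProfilePrepareMigration `
  -ResourceGroupName "myResourceGroup" `
  -ClassicResourceReferenceId $classicId `
  -ProfileName "myMigratedFrontDoor" `
  -SkuName "Standard_AzureFrontDoor"   # or Premium_AzureFrontDoor, per Test-AzFrontDoorCdnProfileMigration
```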
frontdoor Migrate Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/migrate-tier.md
Azure Front Door Standard and Premium tier bring the latest cloud delivery netwo
## Enable managed identities
-> [!NOTE]
-> If you're not using your own certificate, enabling managed identities and granting access to the Key Vault is not required. You can skip to the [**Migrate**](#migrate) phase.
- If you're using your own certificate, you'll need to enable managed identity so Azure Front Door can access the certificate in your Azure Key Vault. Managed identity is a feature of Microsoft Entra ID that allows you to securely connect to other Azure services without having to manage credentials. For more information, see [What are managed identities for Azure resources?](..//active-directory/managed-identities-azure-resources/overview.md)
+> [!NOTE]
+> * If you're not using your own certificate, enabling managed identities and granting access to the Key Vault is not required. You can skip to the [**Migrate**](#migrate) phase.
+> * Managed certificate is currently **not supported** for Azure Front Door Standard or Premium in Azure Government Cloud. You need to use BYOC for Azure Front Door Standard or Premium in Azure Government Cloud or wait until this capability is available.
+ 1. Select **Enable** and then select either **System assigned** or **User assigned** depending on the type of managed identities you want to use. :::image type="content" source="./media/migrate-tier/enable-managed-identity.png" alt-text="Screenshot of the enable manage identity button for Front Door migration.":::
frontdoor Tier Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-migration.md
Previously updated : 05/26/2023 Last updated : 03/04/2024
The migration tool checks to see if your Azure Front Door (classic) profile is c
* If you're using BYOC (Bring Your Own Certificate) for Azure Front Door (classic), you need to [grant Key Vault access](standard-premium/how-to-configure-https-custom-domain.md#register-azure-front-door) to Azure Front Door Standard or Premium. This step is required for Azure Front Door Standard or Premium to access your certificate in Key Vault. If you're using Azure Front Door managed certificate, you don't need to grant Key Vault access.
+ > [!NOTE]
+ > Managed certificate is currently **not supported** for Azure Front Door Standard or Premium in Azure Government Cloud. You need to use BYOC for Azure Front Door Standard or Premium in Azure Government Cloud or wait until this capability is available.
+ #### Prepare for migration Azure Front Door creates a new Standard or Premium profile based on your Front Door (classic) profile's configuration. The new Front Door profile tier depends on the Web Application Firewall (WAF) policy settings you associate with the profile.
governance Assign Policy Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-rest-api.md
The first step in understanding compliance in Azure is to identify the status of
This quickstart steps you through the process of creating a policy assignment to identify virtual machines that aren't using managed disks.
-At the end of this process, you'll successfully identify virtual machines that aren't using managed
+At the end of this process, you identify virtual machines that aren't using managed
disks. They're _non-compliant_ with the policy assignment. REST API is used to create and manage Azure resources. This guide uses REST API to create a policy
assignment and to identify non-compliant resources in your Azure environment.
account before you begin. - If you haven't already, install [ARMClient](https://github.com/projectkudu/ARMClient). It's a tool
- that sends HTTP requests to Azure Resource Manager-based REST APIs. You can also use the "Try It"
- feature in REST documentation or tooling like PowerShell's
- [Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod) or
- [Postman](https://www.postman.com).
-
+ that sends HTTP requests to Azure Resource Manager-based REST APIs. You can also use tooling like PowerShell's
+ [Invoke-RestMethod](/powershell/module/microsoft.powershell.utility/invoke-restmethod).
## Create a policy assignment
Run the following command to create a policy assignment:
The preceding endpoint and request body uses the following information: REST API URI:-- **Scope** - A scope determines what resources or grouping of resources the policy assignment gets
+- **Scope** - A scope determines which resources or group of resources the policy assignment gets
enforced on. It could range from a management group to an individual resource. Be sure to replace `{scope}` with one of the following patterns: - Management group: `/providers/Microsoft.Management/managementGroups/{managementGroup}` - Subscription: `/subscriptions/{subscriptionId}` - Resource group: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}` - Resource: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/[{parentResourcePath}/]{resourceType}/{resourceName}`-- **Name** - The actual name of the assignment. For this example, _audit-vm-manageddisks_ was used.
+- **Name** - The name of the assignment. For this example, _audit-vm-manageddisks_ was used.
Request Body: - **DisplayName** - Display name for the policy assignment. In this case, you're using _Audit VMs without managed disks Assignment_. - **Description** - A deeper explanation of what the policy does or why it's assigned to this scope. - **policyDefinitionId** - The ID of the policy definition that you're using to create the
- assignment. In this case, it's the ID of policy definition _Audit VMs that do not use managed
+ assignment. In this case, it's the ID of policy definition _Audit VMs that don't use managed
disks_. - **nonComplianceMessages** - Set the message seen when a resource is denied due to non-compliance or evaluated to be non-compliant. For more information, see
Request Body:
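As a non-authoritative sketch, the same request can be made with `Invoke-RestMethod` instead of ARMClient. The scope, assignment name, api-version, and definition ID below are placeholders or commonly documented values, so take the exact request from this article's own example.

```powershell
# Sketch only: replace the scope and verify the api-version and policyDefinitionId
# against the article's example before running. Requires Az.Accounts (Connect-AzAccount).
$scope          = "/subscriptions/<subscription-id>"
$assignmentName = "audit-vm-manageddisks"
$token          = (Get-AzAccessToken).Token
$uri  = "https://management.azure.com$scope/providers/Microsoft.Authorization/policyAssignments/$assignmentName" + "?api-version=2022-06-01"
$body = @{
  properties = @{
    displayName        = "Audit VMs without managed disks Assignment"
    description        = "Shows virtual machines that aren't using managed disks"
    policyDefinitionId = "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d"
    nonComplianceMessages = @(@{ message = "Virtual machines should use managed disks" })
  }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Put -Uri $uri -Body $body -ContentType "application/json" -Headers @{ Authorization = "Bearer $token" }
```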
## Identify non-compliant resources
-To view the resources that aren't compliant under this new assignment, run the following command to
+To view the non-compliant resources under this new assignment, run the following command to
get the resource IDs of the non-compliant resources that are output into a JSON file: ```http
Your results resemble the following example:
} ```
-The results are comparable to what you'd typically see listed under **Non-compliant resources** in
-the Azure portal view.
+The results are comparable to what you'd typically see listed under **Non-compliant resources** in the Azure portal view.
## Clean up resources
Replace `{scope}` with the scope you used when you first created the policy assi
## Next steps
-In this quickstart, you assigned a policy definition to identify non-compliant resources in your
-Azure environment.
+In this quickstart, you assigned a policy definition to identify non-compliant resources in your Azure environment.
-To learn more about assigning policies to validate that new resources are compliant, continue to the
-tutorial for:
+To learn more about assigning policies to validate that new resources are compliant, continue to the tutorial for:
> [!div class="nextstepaction"] > [Creating and managing policies](./tutorials/create-and-manage.md)
governance First Query Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-rest-api.md
For more examples of REST API calls for Azure Resource Graph, see the
## Clean up resources
-REST API has no libraries or modules to uninstall. If you installed a tool such as _ARMClient_ or
-_Postman_ to make the calls and no longer need it, you may uninstall the tool now.
+REST API has no libraries or modules to uninstall. If you installed a tool like _ARMClient_ to make the calls and no longer need it, you may uninstall the tool now.
## Next steps
healthcare-apis Fhir Service Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-service-resource-manager-template.md
You can deploy the FHIR service resource by **removing** the workspaces resource
{ "type": "Microsoft.HealthcareApis/workspaces", "name": "[parameters('workspaceName')]",
- "apiVersion": "2020-11-01-preview",
+ "apiVersion": "2023-11-01",
"location": "[parameters('region')]", "properties": {} },
You can deploy the FHIR service resource by **removing** the workspaces resource
"type": "Microsoft.HealthcareApis/workspaces/fhirservices", "kind": "fhir-R4", "name": "[concat(parameters('workspaceName'), '/', parameters('fhirServiceName'))]",
- "apiVersion": "2020-11-01-preview",
+ "apiVersion": "2023-11-01",
"location": "[parameters('region')]", "dependsOn": [ "[resourceId('Microsoft.HealthcareApis/workspaces', parameters('workspaceName'))]"
You can create a new resource group, or use an existing one by skipping the step
$resourcegroupname="your resource group" $location="South Central US" $workspacename="your workspace name"
-$servicename="your fhir service name"
+$fhirservicename="your fhir service name"
$tenantid="xxx" $subscriptionid="xxx" $storageaccountname="storage account name"
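A hedged sketch of how these variables might feed a deployment follows; the template file name is an assumption, and the parameter names simply mirror the `parameters('...')` references in the snippets above.

```powershell
# Sketch only: the template file name is an assumption; parameter names mirror the
# parameters('...') references in the template snippets above.
New-AzResourceGroup -Name $resourcegroupname -Location $location -Force

New-AzResourceGroupDeployment -ResourceGroupName $resourcegroupname `
  -TemplateFile ".\fhir-service.json" `
  -TemplateParameterObject @{
    workspaceName   = $workspacename
    fhirServiceName = $fhirservicename
    region          = $location
  }
```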
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
Title: Architectural concepts in Azure IoT Central
+ Title: Azure IoT Central solution architecture
description: This article introduces key IoT Central architectural concepts such as device management, security, integration, and extensibility. Previously updated : 11/28/2022 Last updated : 03/04/2024
A device can use properties to report its state, such as whether a valve is open
IoT Central can also control devices by calling commands on the device. For example, instructing a device to download and install a firmware update.
-The telemetry, properties, and commands that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Assign a device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template).
+The telemetry, properties, and commands that a device implements are collectively known as the device capabilities. You define these capabilities in a model that the device and the IoT Central application share. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Assign a device to a device template](concepts-device-templates.md#assign-a-device-to-a-device-template).
The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/about-iot-sdks.md).
Devices connect to IoT Central using one of the supported protocols: [MQTT, AMQP, o
Local gateway devices are useful in several scenarios, such as: -- Devices can't connect directly to IoT Central because they can't connect to the internet. For example, you may have a collection of Bluetooth enabled occupancy sensors that need to connect through a gateway device.
+- Devices can't connect directly to IoT Central because they can't connect to the internet. For example, you might have a collection of Bluetooth enabled occupancy sensors that need to connect through a gateway device.
- The quantity of data generated by your devices is high. To reduce costs, combine or aggregate the data in a local gateway before you send it to your IoT Central application. - Your solution requires fast responses to anomalies in the data. You can run rules on a gateway device that identify anomalies and take an action locally without the need to send data to your IoT Central application.
Gateway devices typically require more processing power than a standalone device
Although IoT Central has built-in analytics features, you can export data to other services and applications.
-[Transformations](howto-transform-data-internally.md) in an IoT Central data export definition let you manipulate the format and structure of the device data before it's exported to a destination.
+[Transformations](howto-transform-data-internally.md) in an IoT Central data export definition let you manipulate the format and structure of the device data before exporting it to a destination.
Reasons to export data include: ### Storage and analysis
-For long-term storage and control over archiving and retention policies, you can [continuously export your data](howto-export-to-blob-storage.md).
- to other storage destinations. Use of separate storage also lets you use other analytics tools to derive insights and view the data in your solution.
+For long-term storage and control over archiving and retention policies, you can [continuously export your data](howto-export-to-blob-storage.md) to other storage destinations. The use of a separate storage service outside of IoT Central lets you use other analytics tools to derive insights from the data in your solution.
### Business automation
For long-term storage and control over archiving and retention policies, you can
### Additional computation
-You may need to [transform or do computations](howto-transform-data.md) on your data before it can be used either in IoT Central or another service. For example, you could add local weather information to the location data reported by a delivery truck.
+You might need to [transform or do computations](howto-transform-data.md) on your data before it can be used either in IoT Central or another service. For example, you could add local weather information to the location data reported by a delivery truck.
## Extend with REST API
Build integrations that let other applications and services manage your applicat
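As a hedged illustration of that extensibility only, the following sketch lists devices through the REST API; the application subdomain, API token, and api-version value are placeholders or assumptions to verify against the IoT Central REST reference.

```powershell
# Sketch only: replace the subdomain and token with your own values, and confirm the
# api-version against the IoT Central REST API reference.
$appSubdomain = "my-iot-central-app"
$apiToken     = "<IoT Central API token>"   # created under Permissions > API tokens in the app

$uri = "https://$appSubdomain.azureiotcentral.com/api/devices" + "?api-version=2022-07-31"
$devices = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = $apiToken }
$devices.value | Select-Object id, displayName, provisioned
```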
## Next steps Now that you've learned about the architecture of Azure IoT Central, the suggested next step is to learn about [device connectivity](overview-iot-central-developer.md) in Azure IoT Central.-
iot-central Concepts Device Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-authentication.md
Title: Device authentication in Azure IoT Central
description: This article introduces key IoT Central device authentication concepts such as enrollment groups, shared access signatures, and X.509 certificates. Previously updated : 10/28/2022 Last updated : 03/01/2024
This article describes the following device authentication options:
In a production environment, using X.509 certificates is the recommended device authentication mechanism for IoT Central. To learn more, see [Device Authentication using X.509 CA Certificates](../../iot-hub/iot-hub-x509ca-overview.md).
-An X.509 enrollment group contains a root or intermediate X.509 certificate. Devices can authenticate if they have a valid leaf certificate that's derived from the root or intermediate certificate.
+An X.509 enrollment group contains a root or intermediate X.509 certificate. Devices can authenticate if they have a valid leaf certificate derived from the root or intermediate certificate.
To connect a device with an X.509 certificate to your application:
To connect a device with an X.509 certificate to your application:
1. Add and verify an intermediate or root X.509 certificate in the enrollment group. 1. Generate a leaf certificate from the root or intermediate certificate in the enrollment group. Install the leaf certificate on the device for it to use when it connects to your application.
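For test environments only, a hedged sketch of the leaf-certificate step using the Windows PKI module follows; the subject names are placeholders, and for production you should use your certificate provider's tooling instead.

```powershell
# Test-environment sketch only: issues a leaf certificate signed by a root/intermediate
# certificate that's already in the local certificate store. Subject names are placeholders;
# the leaf certificate's CN is typically the device ID.
$ca = Get-ChildItem Cert:\CurrentUser\My | Where-Object Subject -eq "CN=my-iotc-test-root"

New-SelfSignedCertificate -Type Custom -Subject "CN=my-device-01" -Signer $ca `
  -CertStoreLocation Cert:\CurrentUser\My -NotAfter (Get-Date).AddYears(1)
```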
-Each enrollment group should use a unique X.509 certificate. IoT Central does not support using the same X.509 certificate across multiple enrollment groups.
+Each enrollment group should use a unique X.509 certificate. IoT Central doesn't support using the same X.509 certificate across multiple enrollment groups.
-To learn more, see [How to connect devices with X.509 certificates](how-to-connect-devices-x509.md)
+To learn more, see [How to connect devices with X.509 certificates](how-to-connect-devices-x509.md).
### For testing purposes only
In a production environment, use certificates from your certificate provider. Fo
## SAS enrollment group
-A SAS enrollment group contains group-level SAS keys. Devices can authenticate if they have a valid SAS token that's derived from a group-level SAS key.
+A SAS enrollment group contains group-level SAS keys. Devices can authenticate if they have a valid SAS token derived from a group-level SAS key.
To connect a device with device SAS token to your application:
To connect a device with device SAS token to your application:
> [!NOTE] > To use existing SAS keys in your enrollment groups, disable the **Auto generate keys** toggle and manually enter your SAS keys.
-If you use the default **SAS-IoT-Devices** enrollment group, IoT Central generates the individual device keys for you. To access these keys, select **Connect** on the device details page. This page displays the **ID Scope**, **Device ID**, **Primary key**, and **Secondary key** that you use in your device code. This page also displays a QR code the contains the same data.
+If you use the default **SAS-IoT-Devices** enrollment group, IoT Central generates the individual device keys for you. To access these keys, select **Connect** on the device details page. This page displays the **ID Scope**, **Device ID**, **Primary key**, and **Secondary key** that you use in your device code. This page also displays a QR code that contains the same data.
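For reference, an individual device key is derived by signing the device ID with the group-level SAS key. A minimal Python sketch of that derivation, assuming you already have the primary key from your SAS enrollment group, might look like this:

```python
# Sketch: derive an individual device key from a SAS enrollment group key.
import base64
import hashlib
import hmac

def derive_device_key(group_key_base64: str, device_id: str) -> str:
    """Sign the device ID with the group key (HMAC-SHA256) and Base64 encode it."""
    group_key = base64.b64decode(group_key_base64)
    signature = hmac.new(group_key, device_id.encode("utf-8"), hashlib.sha256)
    return base64.b64encode(signature.digest()).decode("utf-8")

# Placeholder group key for illustration; use the primary key from your
# SAS enrollment group and your own device ID.
example_group_key = base64.b64encode(b"example-group-enrollment-key").decode("utf-8")
print(derive_device_key(example_group_key, "my-device-01"))
```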
## Individual enrollment
-Typically, devices connect by using credentials derived from an enrollment group X.509 certificate or SAS key. However, if your devices each have their own credentials, you can use individual enrollments. An individual enrollment is an entry for a single device that's allowed to connect. Individual enrollments can use either X.509 leaf certificates or SAS tokens (from a physical or virtual trusted platform module) as attestation mechanisms. For more information, see [DPS individual enrollment](../../iot-dps/concepts-service.md#individual-enrollment).
+Typically, devices connect by using credentials derived from an enrollment group X.509 certificate or SAS key. However, if your devices each have their own credentials, you can use individual enrollments. An individual enrollment is an entry for a single device allowing it to connect. Individual enrollments can use either X.509 leaf certificates or SAS tokens (from a physical or virtual trusted platform module) as attestation mechanisms. For more information, see [DPS individual enrollment](../../iot-dps/concepts-service.md#individual-enrollment).
> [!NOTE] > When you create an individual enrollment for a device, it takes precedence over the default enrollment group options in your IoT Central application.
iot-central Concepts Faq Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-apaas-paas.md
- Title: Move from IoT Central to a PaaS solution
-description: This article discusses how to move between application platform as a service (aPaaS) and platform as a service (PaaS) Azure IoT solution approaches.
-- Previously updated : 11/28/2022-----
-# How do I move between aPaaS and PaaS solutions?
-
-IoT Central is the fastest and easiest way to evaluate your IoT scenario. You can use the *IoT Central migrator tool* to migrate devices seamlessly from IoT Central to a platform as a service (PaaS) solution that uses IoT Hub and the Device Provisioning Service (DPS).
-
-## Move devices with the IoT Central migrator tool
-
-Use the migrator tool to move devices with no downtime from IoT Central to your own DPS instance. In a PaaS solution, you link a DPS instance to your IoT hub. The migrator tool disconnects devices from IoT Central and connects them to your PaaS solution. From this point forward, new devices are created in your IoT hub.
-
-Download the [migrator tool from GitHub](https://github.com/Azure/iotc-migrator).
-
-## Minimize disruption
-
-To minimize disruption, you can migrate your devices in phases. The migrator tool uses device groups to move devices from IoT Central to your IoT hub. Divide your device fleet into device groups such as devices in Texas, devices in New York, and devices in the rest of the US. Then migrate each device group independently.
-
-> [!WARNING]
-> You can't add unassigned devices to a device group. Therefore you can't currently use the migrator tool to migrate unassigned devices.
-
-Minimize business impact by following these steps:
--- Create the PaaS solution and run it in parallel with the IoT Central application.--- Set up continuous data export in IoT Central application and appropriate routes to the PaaS solution IoT hub. Transform both data channels and store the data into the same data lake.--- Migrate the devices in phases and verify at each phase. If something doesn't go as planned, fail the devices back to IoT Central.--- When you've migrated all the devices to the PaaS solution and fully exported your data from IoT Central, you can remove the devices from the IoT Central solution.-
-After the migration, devices aren't automatically deleted from the IoT Central application. These devices continue to be billed as IoT Central charges for all provisioned devices in the application. When you remove these devices from the IoT Central application, you're no longer billed for them. Eventually, remove the IoT Central application.
-
-## Firmware best practices
-
-So that you can seamlessly migrate devices from your IoT Central applications to a PaaS solution, follow these guidelines:
--- The device must be an IoT Plug and Play device that uses a [Digital Twins Definition Language (DTDL) V2](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md) model. IoT Central requires all devices to have a DTDL model. These models simplify the interoperability between an IoT PaaS solution and IoT Central.--- The device must follow the [IoT Plug and Play conventions](../../iot/concepts-convention.md).-- IoT Central uses the DPS to provision the devices. The PaaS solution must also use DPS to provision the devices.-- The updatable DPS pattern ensures that the device can move seamlessly between IoT Central applications and the PaaS solution without any downtime.-
-> [!NOTE]
-> IoT Central defines some extensions to the DTDL v2 language. To learn more, see [IoT Central extension](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.iotcentral.v2.md).
-
-## Move existing data out of IoT Central
-
-You can configure IoT Central to continuously export telemetry and property values. Export destinations are data stores such as Azure Data Lake, Event Hubs, and Webhooks. You can export device templates using either the IoT Central UI or the REST API. The REST API lets you export the users in an IoT Central application.
-
-## Next steps
-
-Now that you've learned about moving from aPaaS to PaaS solutions, a suggested next step is to explore the [IoT Central migrator tool](https://github.com/Azure/iotc-migrator).
-
iot-central Concepts Faq Scalability Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-scalability-availability.md
Title: Scalability and high availability
description: This article describes how IoT Central automatically scales to handle more devices, and its high availability and disaster recovery capabilities. Previously updated : 03/21/2023 Last updated : 03/04/2024
iot-central Concepts Iiot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iiot-architecture.md
Title: Industrial IoT patterns with Azure IoT Central
description: This article introduces common Industrial IoT patterns that you can implement using Azure IoT Central Previously updated : 11/28/2022 Last updated : 03/01/2024
IoT Central lets you evaluate your IIoT scenario by using the following built-in
By using the Azure IoT platform, IoT Central lets you evaluate solutions that are scalable and secure.
+To set up a sample to evaluate a solution, see [Ingest Industrial Data with Azure IoT Central and Calculate OEE](https://github.com/Azure-Samples/iotc-solution-builder).
+ ## Connect your industrial assets Operational technology (OT) is the hardware and software that monitors and controls the equipment and infrastructure in industrial facilities. There are four ways to connect your industrial assets to Azure IoT Central:
iot-central Concepts Iot Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-iot-edge.md
Title: Azure IoT Edge and Azure IoT Central
description: Understand how to use Azure IoT Edge with an IoT Central application including the different gateway patterns and IoT Edge management capabilities. Previously updated : 10/11/2022 Last updated : 03/04/2024
IoT Central enables the following capabilities for IoT Edge devices:
An IoT Edge device can be: * A standalone device composed of custom modules.
-* A *gateway device*, with downstream devices connecting to it. A gateway device may include custom modules.
+* A *gateway device*, with downstream devices connecting to it. A gateway device can include custom modules.
## IoT Edge devices and IoT Central
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
Title: Connect devices with X.509 certificates to your application
description: This article describes how devices can use X.509 certificates to authenticate to your application. Previously updated : 12/14/2022 Last updated : 03/01/2024
zone_pivot_groups: programming-languages-set-ten
IoT Central supports both shared access signatures (SAS) and X.509 certificates to secure the communication between a device and your application. The [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial uses SAS. In this article, you learn how to modify the code sample to use X.509 certificates. X.509 certificates are recommended in production environments. For more information, see [Device authentication concepts](concepts-device-authentication.md).
-This guide shows two ways to use X.509 certificates - [group enrollments](how-to-connect-devices-x509.md#use-group-enrollment) typically used in a production environment, and [individual enrollments](how-to-connect-devices-x509.md#use-individual-enrollment) useful for testing. The article also describes how to [roll device certificates](#roll-x509-device-certificates) to maintain connectivity when certificates expire.
+This guide shows two ways to use X.509 certificates - [group enrollments](how-to-connect-devices-x509.md#use-group-enrollment) typically used in a production environment, and [individual enrollments](how-to-connect-devices-x509.md#use-individual-enrollment) useful for testing. The article also describes how to [roll device certificates](#roll-your-x509-device-certificates) to maintain connectivity when certificates expire.
This guide builds on the samples shown in the [Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) tutorial that use C#, Java, JavaScript, and Python. For an example that uses the C programming language, see the [Provision multiple X.509 devices using enrollment groups](../../iot-dps/tutorial-custom-hsm-enrollment-group-x509.md). ## Prerequisites
-To complete the steps in this how-to guide, you should first complete the [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial.
+To complete the steps in this how-to guide, you should first complete the [Create and connect a client application to your Azure IoT Central application](./tutorial-connect-device.md) tutorial. You modify the code you used in the tutorial when you follow the steps in this guide.
In this how-to guide, you generate some test X.509 certificates. To be able to generate these certificates, you need:
In this section, you use an X.509 certificate to connect a device with a certifi
> [!TIP] > A device ID can contain letters, numbers, and the `-` character.
-These commands produce the following root and the device certificate
+These commands produce the following root and the device certificates:
| filename | contents | | -- | -- |
Make a note of the location of these files. You need it later.
1. Upload the root certificate file called _mytestrootcert_cert.pem_ that you generated previously.
-1. If you're using an intermediate or root certificate authority that you trust and know you have full ownership of the certificate, you can self-attest that you've verified the certificate by setting certificate status verified on upload to **On**. Otherwise, set certificate status verified on upload to **Off**.
+1. If you're using an intermediate or root certificate authority that you trust and know you have full ownership of the certificate, you can self-attest that you verified the certificate by setting certificate status verified on upload to **On**. Otherwise, set certificate status verified on upload to **Off**.
1. If you set certificate status verified on upload to **Off**, select **Generate verification code**.
Make a note of the location of these files. You need it later.
You can now connect devices that have an X.509 certificate derived from this primary root certificate.
-After you save the enrollment group, make a note of the ID scope.
+After you save the enrollment group, make a note of the ID scope. You need it later.
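If you want a quick check before running the full sample, the following Python sketch (using the `azure-iot-device` package) provisions a device through DPS with its leaf certificate and then connects to the assigned IoT hub. The ID scope, device ID, and certificate file names are placeholders; substitute the values and files you generated earlier.

```python
# Sketch: provision and connect a device that uses an X.509 leaf certificate.
# Install the SDK first: pip install azure-iot-device
from azure.iot.device import IoTHubDeviceClient, ProvisioningDeviceClient, X509

ID_SCOPE = "0ne00000000"        # Placeholder: your application's ID scope.
DEVICE_ID = "sample-device-01"  # Placeholder: the device ID in the leaf certificate.

x509 = X509(
    cert_file="sample-device-01_cert.pem",  # Placeholder leaf certificate file.
    key_file="sample-device-01_key.pem",    # Placeholder private key file.
)

provisioning_client = ProvisioningDeviceClient.create_from_x509_certificate(
    provisioning_host="global.azure-devices-provisioning.net",
    registration_id=DEVICE_ID,
    id_scope=ID_SCOPE,
    x509=x509,
)
registration_result = provisioning_client.register()

# Connect to the assigned IoT hub with the same certificate.
device_client = IoTHubDeviceClient.create_from_x509_certificate(
    x509=x509,
    hostname=registration_result.registration_state.assigned_hub,
    device_id=registration_result.registration_state.device_id,
)
device_client.connect()
print("Device connected")
device_client.disconnect()
```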
### Run sample device code
To learn more, see [Create and provision IoT Edge devices at scale on Linux usin
IoT Edge uses X.509 certificates to secure the connection between downstream devices and an IoT Edge device acting as a transparent gateway. To learn more about configuring this scenario, see [Connect a downstream device to an Azure IoT Edge gateway](../../iot-edge/how-to-connect-downstream-device.md).
-## Roll X.509 device certificates
+## Roll your X.509 device certificates
-During the lifecycle of your IoT Central application, you'll need to roll your x.509 certificates. For example:
+During the lifecycle of your IoT Central application, you might need to roll your X.509 certificates. For example:
- If you have a security breach, rolling certificates is a security best practice to help secure your system.-- x.509 certificates have expiry dates. The frequency in which you roll your certificates depends on the security needs of your solution. Customers with solutions involving highly sensitive data may roll certificates daily, while others roll their certificates every couple years.
+- X.509 certificates have expiry dates. How frequently you roll your certificates depends on the security needs of your solution. Customers with solutions involving highly sensitive data might roll certificates daily, while others roll their certificates every couple of years.
For uninterrupted connectivity, IoT Central lets you configure primary and secondary X.509 certificates. If the primary and secondary certificates have different expiry dates, you can roll the expired certificate while devices continue to connect with the other certificate.
To handle certificate expirations, use the following approach to update the curr
4. Add and verify root X.509 certificate in the enrollment group.
-5. Later when the secondary certificate has expired, come back and update the secondary certificate.
+5. Later when the secondary certificate expires, come back and update the secondary certificate.
### Individual enrollments and certificate expiration
When the secondary certificate nears expiration, and needs to be rolled, you can
4. For secondary certificate update, select the folder icon to select the new certificate to be uploaded for the enrollment entry. Select **Save**. 5. Later when the primary certificate has expired, come back and update the primary certificate.-
-## Next steps
-
-Now that you've learned how to connect devices using X.509 certificates, the suggested next step is to learn how to [Monitor device connectivity using Azure CLI](howto-monitor-devices-azure-cli.md).
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
Title: Connect an IoT Edge transparent gateway to an application
description: How to connect devices through an IoT Edge transparent gateway to an IoT Central application. The article shows how to use the IoT Edge 1.4 runtime. Previously updated : 01/10/2023 Last updated : 03/04/2024
To generate the demo certificates and install them on your gateway device:
After you run the previous commands, the following files are ready to use in the next steps: - *~/certs/certs/azure-iot-test-only.root.ca.cert.pem* - The root CA certificate used to make all the other demo certificates for testing an IoT Edge scenario.
- - *~/certs/certs/iot-edge-device-mycacert-full-chain.cert.pem* - A device CA certificate that's referenced from the IoT Edge configuration file. In a gateway scenario, this CA certificate is how the IoT Edge device verifies its identity to downstream devices.
+ - *~/certs/certs/iot-edge-device-mycacert-full-chain.cert.pem* - A device CA certificate referenced from the IoT Edge configuration file. In a gateway scenario, this CA certificate is how the IoT Edge device verifies its identity to downstream devices.
- *~/certs/private/iot-edge-device-mycacert.key.pem* - The private key associated with the device CA certificate. To learn more about these demo certificates, see [Create demo certificates to test IoT Edge device features](../../iot-edge/how-to-create-test-certificates.md).
To generate the demo certificates and install them on your gateway device:
pk = "file:///home/AzureUser/certs/private/iot-edge-device-ca-mycacert.key.pem" ```
- The example shown above assumes you're signed in as **AzureUser** and created a device CA certificate called "mycacert".
+ The previous example assumes you're signed in as **AzureUser** and created a device CA certificate called "mycacert".
1. Save the changes and restart the IoT Edge runtime:
Your transparent gateway is now configured and ready to start forwarding telemet
## Provision a downstream device
-IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python installed and internet connectivity. Check the [Azure IoT Python SDK](https://github.com/Azure/azure-iot-sdk-python/blob/main/README.md) for current Python version requirements. The [Azure Cloud Shell](https://shell.azure.com/) has Python pre-installed:
+IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS to provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python installed and internet connectivity. Check the [Azure IoT Python SDK](https://github.com/Azure/azure-iot-sdk-python/blob/main/README.md) for current Python version requirements. The [Azure Cloud Shell](https://shell.azure.com/) has Python preinstalled:
1. Run the following command to install the `azure.iot.device` module:
To run the thermostat simulator on the `leafdevice` virtual machine:
``` > [!TIP]
- > If you see an error when the downstream device tries to connect. Try re-running the device provisioning steps above.
+ > If you see an error when the downstream device tries to connect, try rerunning the device provisioning steps.
1. To see the telemetry in IoT Central, navigate to the **Overview** page for the **thermostat1** device: :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/downstream-device-telemetry.png" alt-text="Screenshot showing telemetry from the downstream device." lightbox="media/how-to-connect-iot-edge-transparent-gateway/downstream-device-telemetry.png"::: On the **About** page you can view property values sent from the downstream device, and on the **Command** page you can call commands on the downstream device.-
-## Next steps
-
-Now that you've learned how to configure a transparent gateway with IoT Central, the suggested next step is to learn more about [IoT Edge](../../iot-edge/about-iot-edge.md).
iot-central Howto Authorize Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-authorize-rest-api.md
Title: Authorize REST API in Azure IoT Central
+ Title: Authenticate REST API calls in Azure IoT Central
description: How to authenticate and authorize IoT Central REST API calls by using bearer tokens or an IoT Central API token. Previously updated : 07/25/2022 Last updated : 03/01/2024
The IoT Central REST API lets you develop client applications that integrate wit
Every IoT Central REST API call requires an authorization header that IoT Central uses to determine the identity of the caller and the permissions that caller is granted within the application.
-This article describes the types of token you can use in the authorization header, and how to get them. Srvice principals are the recommended approach to access management for IoT Central REST APIs.
+This article describes the types of token you can use in the authorization header, and how to get them. Service principals are the recommended approach for IoT Central REST API access management.
## Token types To access an IoT Central application using the REST API, you can use an: -- _Azure Active Directory bearer token_. A bearer token is associated with a Microsoft Entra user account or service principal. The token grants the caller the same permissions the user or service principal has in the IoT Central application.
+- _Microsoft Entra bearer token_. A bearer token is associated with a Microsoft Entra user account or service principal. The token grants the caller the same permissions the user or service principal has in the IoT Central application.
- IoT Central API token. An API token is associated with a role in your IoT Central application.
-Use a bearer token associated with your user account while you're developing and testing automation and scripts that use the REST API. Use a bearer token that's associated with a service principal for production automation and scripts. Use a bearer token in preference to an API token to reduce the risk of leaks and problems when tokens expire.
+Use a bearer token associated with your user account while you're developing and testing automation and scripts that use the REST API. Use a bearer token associated with a service principal for production automation and scripts. Use a bearer token in preference to an API token to reduce the risk of leaks and problems when tokens expire.
To learn more about users and roles in IoT Central, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md).
To get an API token, you can use the IoT Central UI or a REST API call. Administ
In the IoT Central UI: 1. Navigate to **Permissions > API tokens**.
-1. Click **+ New** or **Create an API token**.
+1. Select **+ New** or **Create an API token**.
1. Enter a name for the token and select a role and [organization](howto-create-organizations.md). 1. Select **Generate**. 1. IoT Central displays the token that looks like the following example:
To use a bearer token when you make a REST API call, your authorization header l
To use an API token when you make a REST API call, your authorization header looks like the following example: `Authorization: SharedAccessSignature sr=e8a...&sig=jKY8W...&skn=operator-token&se=1647950487889`-
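As a quick illustration of the API token header in practice, the following Python sketch lists the devices in an application. The application subdomain, token value, and API version are placeholders; check the REST API reference for the current API version.

```python
# Sketch: call the IoT Central REST API with an IoT Central API token.
# pip install requests
import requests

APP_SUBDOMAIN = "my-iot-central-app"  # Placeholder: your application subdomain.
API_TOKEN = "SharedAccessSignature sr=...&sig=...&skn=operator-token&se=..."  # Placeholder token.
API_VERSION = "2022-07-31"  # Assumed API version; confirm in the REST API reference.

response = requests.get(
    f"https://{APP_SUBDOMAIN}.azureiotcentral.com/api/devices",
    params={"api-version": API_VERSION},
    headers={"Authorization": API_TOKEN},
    timeout=30,
)
response.raise_for_status()
for device in response.json().get("value", []):
    print(device["id"], device.get("displayName"))
```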
-## Next steps
-
-Now that you've learned how to authorize REST API calls, a suggested next step is to [How to use the IoT Central REST API to query devices](howto-query-with-rest-api.md).
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
Title: Analyze device data in your Azure IoT Central application
description: Analyze device data in your Azure IoT Central application by using device groups and the built-in data explorer. Previously updated : 11/03/2022 Last updated : 03/04/2024
Choose a **Device group** to get started and then the telemetry you want to anal
## Interact with your data
-After you've queried your data, you can visualize it on the line chart. You can show or hide telemetry, change the time duration, or view the data in a grid.
+After you query your data, you can visualize it on the line chart. You can show or hide telemetry, change the time duration, or view the data in a grid.
Select **Save** to save an analytics query. Later, you can retrieve any queries you saved.
Select **Save** to save an analytics query. Later, you can retrieve any queries
:::image type="content" source="media/howto-create-analytics/time-editor-panel.png" alt-text="Screenshot that shows the time editor panel." lightbox="media/howto-create-analytics/time-editor-panel.png":::
- - **Inner date range slider tool**: Use the two endpoint controls to highlight the time span you want. The inner date range is constrained by the outer date range slider control.
+ - **Inner date range slider tool**: Use the two endpoint controls to highlight the time span you want. The outer date range slider control constrains the inner date range.
- **Outer date range slider control**: Use the endpoint controls to select the outer date range that's available for your inner date range control.
Select the ellipsis for more chart controls:
- **Drop a Marker:** The **Drop Marker** control lets you anchor certain data points on the chart. It's useful when you're trying to compare data for multiple lines across different time periods. :::image type="content" source="media/howto-create-analytics/additional-chart-controls.png" alt-text="A Screenshot that shows how to access the additional chart controls." lightbox="media/howto-create-analytics/additional-chart-controls.png":::-
-## Next steps
-
-Now that you've learned how to visualize your data with the built-in analytics capabilities, a suggested next step is to learn how to [Export IoT data to cloud destinations using Blob Storage](howto-export-to-blob-storage.md).
iot-central Howto Edit Device Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-edit-device-template.md
Title: Edit device templates in your Azure IoT Central application
description: Iteratively update your device templates without impacting your live connected devices by using versioned device templates. Previously updated : 10/31/2022 Last updated : 03/04/2024
-# Edit an existing device template
+# Edit a device template
A device template includes a model that describes how a device interacts with IoT Central. This model defines the capabilities of the device and how IoT Central interacts with them. Devices can send telemetry and property values to IoT Central, and IoT Central can send property updates and commands to a device. IoT Central also uses the model to define interactions with IoT Central features such as jobs, rules, and exports.
-Changes to the model in a device template can affect your entire application, including any connected devices. Changes to a capability that's used by rules, exports, device groups, or jobs may cause them to behave unexpectedly or not work at all. For example, if you remove a telemetry definition from a template:
+Changes to the model in a device template can affect your entire application, including any connected devices. Changes to a capability used by rules, exports, device groups, or jobs might cause them to behave unexpectedly or not work at all. For example, if you remove a telemetry definition from a template:
- IoT Central is no longer able to interpret that value. IoT Central shows device data that it can't interpret as **Unmodeled data** on the device's **Raw data** page. - IoT Central no longer includes the value in any data exports.
To learn how to manage device templates by using the IoT Central REST API, see [
## Modify a device template
-Additive changes, such as adding a capability or interface to a model are non-breaking changes. You can make additive changes to a model at any stage of the development life cycle.
+Additive changes, such as adding a capability or interface to a model, are nonbreaking changes. You can make additive changes to a model at any stage of the development life cycle.
Breaking changes include removing parts of a model, or changing a capability name or schema type. These changes could cause application features such as rules, exports, or dashboards to display error messages and stop working.
After you attach production devices to a device template, evaluate the impact of
### Update an IoT Edge device template
-For an IoT Edge device, the model groups capabilities by modules that correspond to the IoT Edge modules running on the device. The deployment manifest is a separate JSON document that tells an IoT Edge device which modules to install, how to configure them, and what properties the module has. If you've modified a deployment manifest, you can update the device template to include the modules and properties defined in the manifest:
+For an IoT Edge device, the model groups capabilities by modules that correspond to the IoT Edge modules running on the device. The deployment manifest is a separate JSON document that tells an IoT Edge device which modules to install, how to configure them, and what properties the module has. If you modify a deployment manifest, you can update the device template to include the modules and properties defined in the manifest:
1. Navigate to the **Modules** node in the device template. 1. On the **Modules summary** page, select **Import modules from manifest**.
To learn more, see [IoT Edge devices and IoT Central](concepts-iot-edge.md#iot-e
The following actions are useful when you edit a device template: -- _Save_. When you change part of your device template, saving the changes creates a draft that you can return to. These changes don't yet affect connected devices. Any devices created from this template won't have the saved changes until you publish it.
+- _Save_. When you change part of your device template, saving the changes creates a draft that you can return to. These changes don't yet affect connected devices. Any devices created from this template don't have the saved changes until you publish it.
- _Publish_. When you publish the device template, it applies any saved changes to existing device instances. Newly created device instances always use the latest published template.-- _Version a template_. When you version a device template, it creates a new template with all the latest saved changes. Existing device instances aren't impacted by changes made to a new version. To learn more, see [Version a device template](#version-a-device-template).
+- _Version a template_. When you version a device template, it creates a new template with all the latest saved changes. Changes made to a new version don't impact existing device instances. To learn more, see [Version a device template](#version-a-device-template).
- _Version an interface_. When you version an interface, it creates a new interface with all the latest saved capabilities. You can reuse an interface in multiple locations within a template. That's why a change made to one reference to an interface changes all the places in the template that use the interface. When you version an interface, this behavior changes because the new version is now a separate interface. To learn more, see [Version an interface](#version-an-interface). - _Migrate a device_. When you migrate a device, the device instance swaps from one device template to another. Device migration can cause a short disruption while IoT Central processes the changes. To learn more, see [Migrate a device across versions](#migrate-a-device-across-versions).
You can create multiple versions of the device template. Over time, you'll have
> [!TIP] > You can use a job to migrate all the devices in a device group to a new device template at the same time.-
-## Next steps
-
-If you're an operator or solution builder, a suggested next step is to learn [how to manage your devices](./howto-manage-devices-individually.md).
-
-If you're a device developer, a suggested next step is to read about [Azure IoT Edge devices and Azure IoT Central](./concepts-iot-edge.md).
iot-central Howto Manage Dashboards With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-dashboards-with-rest-api.md
- Title: Use the REST API to manage dashboards in Azure IoT Central
-description: How to use the IoT Central REST API to create, update, delete, and manage dashboards in an application
-- Previously updated : 10/06/2022------
-# How to use the IoT Central REST API to manage dashboards
-
-The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to manage dashboards in your IoT Central application.
-
-Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
-
-For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
--
-To learn how to manage dashboards by using the IoT Central UI, see [How to manage dashboards.](../core/howto-manage-dashboards.md)
-
-## Dashboards
-
-You can create dashboards that are associated with a specific organization. An organization dashboard is only visible to users who have access to the organization the dashboard is associated with. Only users in a role that has [organization dashboard permissions](howto-manage-users-roles.md#customizing-the-app) can create, edit, and delete organization dashboards.
-
-All users can create *personal dashboards*, visible only to themselves. Users can switch between organization and personal dashboards.
-
-> [!NOTE]
-> Creating personal dashboards using API is currently not supported.
-
-## Dashboards REST API
-
-The IoT Central REST API lets you:
-
-* Add a dashboard to your application
-* Update a dashboard in your application
-* Get a list of the dashboard in the application
-* Get a dashboard by ID
-* Delete a dashboard in your application
-
-## Add a dashboard
-
-Use the following request to create a dashboard.
-
-```http
-PUT https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
-```
-
-`dashboardId` - A unique [DTMI](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#digital-twin-model-identifier) identifier for the dashboard.
-
-The request body has some required fields:
-
-* `@displayName`: Display name of the dashboard.
-* `@favorite`: Is the dashboard in the favorites list?
-* `group`: Device group ID.
-* `Tile` : Configuration specifying tile object, including the layout, display name, and configuration.
-
-Tile has some required fields:
-
-| Name | Description |
-| - | -- |
-| `displayName` | Display name of the tile |
-| `height` | Height of the tile |
-| `width` | Width of the tile |
-| `x` | Horizontal position of the tile |
-| `y` | Vertical position of the tile |
-
-The dimensions and location of a tile both use integer units. The smallest possible tile has a height and width of one.
-
-You can configure a tile object to display multiple types of data. This article includes examples of tiles that show line charts, markdown, and last known value. To learn more about the different tile types you can add to a dashboard, see [Tile types](howto-manage-dashboards.md#tile-types).
-
-### Line chart tile
-
-Plot one or more aggregate telemetry values for one or more devices over a time period. For example, you can display a line chart to plot the average temperature and pressure of one or more devices during the past hour.
-
-The line chart tile has the following configuration:
-
-| Name | Description |
-|--|--|
-| `capabilities` | Specifies the aggregate value of the telemetry to display. |
-| `devices` | The list of devices to display. |
-| `format` | The format configuration of the chart such as the axes to use. |
-| `group` | The ID of the device group to display. |
-| `queryRange` | The time range and resolution to display.|
-| `type` | `lineChart` |
-
-### Markdown tile
-
-Clickable tiles that display a heading and description text formatted in Markdown. The URL can be a relative link to another page in the application or an absolute link to an external site.
-The markdown tile has the following configuration:
-
-| Name | Description |
-|--|--|
-| `description` | The markdown string to render inside the tile. |
-| `href` | The link to visit when the tile is selected. |
-| `image` | A base64 encoded image to display. |
-| `type` | `markdown` |
-
-### Last known value tile
-
-Display the latest telemetry values for one or more devices. For example, you can use this tile to display the most recent temperature, pressure, and humidity values for one or more devices.
-
-The last known value (LKV) tile has the following configuration:
-
-| Name | Description |
-|--|--|
-| `capabilities` | Specifies the telemetry to display. |
-| `devices` | The list of devices to display. |
-| `format` | The format configuration of the LKV tile such as text size or word wrapping. |
-| `group` | The ID of the device group to display. |
-| `showTrend` | Show the difference between the last known value and the previous value. |
-| `type` | `lkv` |
-
-The following example shows a request body that adds a new dashboard with line chart, markdown, and last known value tiles. The LKV and line chart tiles are `2x2` tiles. The markdown tile is a `1x1` tile. The tiles are arranged on the top row of the dashboard:
-
-```json
-{
- "displayName": "My Dashboard ",
- "tiles": [
- {
- "displayName": "LKV Temperature",
- "configuration": {
- "type": "lkv",
- "capabilities": [
- {
- "capability": "temperature",
- "aggregateFunction": "avg"
- }
- ],
- "group": "0fb6cf08-f03c-4987-93f6-72103e9f6100",
- "devices": [
- "3xksbkqm8r",
- "1ak6jtz2m5q",
- "h4ow04mv3d"
- ],
- "format": {
- "abbreviateValue": false,
- "wordWrap": false,
- "textSize": 14
- }
- },
- "x": 0,
- "y": 0,
- "width": 2,
- "height": 2
- },
- {
- "displayName": "Documentation",
- "configuration": {
- "type": "markdown",
- "description": "Comprehensive help articles and links to more support.",
- "href": "https://aka.ms/iotcentral-pnp-docs",
- "image": "4d6c6373-0220-4191-be2e-d58ca2a289e1"
- },
- "x": 2,
- "y": 0,
- "width": 1,
- "height": 1
- },
- {
- "displayName": "Average temperature",
- "configuration": {
- "type": "lineChart",
- "capabilities": [
- {
- "capability": "temperature",
- "aggregateFunction": "avg"
- }
- ],
- "devices": [
- "3xksbkqm8r",
- "1ak6jtz2m5q",
- "h4ow04mv3d"
- ],
- "group": "0fb6cf08-f03c-4987-93f6-72103e9f6100",
- "format": {
- "xAxisEnabled": true,
- "yAxisEnabled": true,
- "legendEnabled": true
- },
- "queryRange": {
- "type": "time",
- "duration": "PT30M",
- "resolution": "PT1M"
- }
- },
- "x": 3,
- "y": 0,
- "width": 2,
- "height": 2
- }
- ],
- "favorite": false
-}
-```
-<!-- TODO: Fix this - also check the image example above... -->
-The response to this request looks like the following example:
-
-```json
-{
- "id": "dtmi:kkfvwa2xi:p7pyt5x38",
- "displayName": "My Dashboard",
- "personal": false,
- "tiles": [
- {
- "displayName": "lineChart",
- "configuration": {
- "type": "lineChart",
- "capabilities": [
- {
- "capability": "temperature",
- "aggregateFunction": "avg"
- }
- ],
- "devices": [
- "1cfqhp3tue3",
- "mcoi4i2qh3"
- ],
- "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
- "format": {
- "xAxisEnabled": true,
- "yAxisEnabled": true,
- "legendEnabled": true
- },
- "queryRange": {
- "type": "time",
- "duration": "PT30M",
- "resolution": "PT1M"
- }
- },
- "x": 5,
- "y": 0,
- "width": 2,
- "height": 2
- }
- ],
- "favorite": false
-}
-```
-
-## Get a dashboard
-
-Use the following request to retrieve the details of a dashboard by using a dashboard ID.
-
-```http
-GET https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
-```
-
-The response to this request looks like the following example:
-
-```json
-{
- "id": "dtmi:kkfvwa2xi:p7pyt5x38",
- "displayName": "My Dashboard",
- "personal": false,
- "tiles": [
- {
- "displayName": "lineChart",
- "configuration": {
- "type": "lineChart",
- "capabilities": [
- {
- "capability": "AvailableMemory",
- "aggregateFunction": "avg"
- }
- ],
- "devices": [
- "1cfqhp3tue3",
- "mcoi4i2qh3"
- ],
- "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
- "format": {
- "xAxisEnabled": true,
- "yAxisEnabled": true,
- "legendEnabled": true
- },
- "queryRange": {
- "type": "time",
- "duration": "PT30M",
- "resolution": "PT1M"
- }
- },
- "x": 5,
- "y": 0,
- "width": 2,
- "height": 2
- }
- ],
- "favorite": false
-}
-```
-
-## Update a dashboard
-
-```http
-PATCH https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
-```
-
-The following example shows a request body that updates the display name of a dashboard and adds the dashboard to the list of favorites:
-
-```json
-
-{
- "displayName": "New Dashboard Name",
- "favorite": true
-}
-
-```
-
-The response to this request looks like the following example:
-
-```json
-{
- "id": "dtmi:kkfvwa2xi:p7pyt5x38",
- "displayName": "New Dashboard Name",
- "personal": false,
- "tiles": [
- {
- "displayName": "lineChart",
- "configuration": {
- "type": "lineChart",
- "capabilities": [
- {
- "capability": "AvailableMemory",
- "aggregateFunction": "avg"
- }
- ],
- "devices": [
- "1cfqhp3tue3",
- "mcoi4i2qh3"
- ],
- "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
- "format": {
- "xAxisEnabled": true,
- "yAxisEnabled": true,
- "legendEnabled": true
- },
- "queryRange": {
- "type": "time",
- "duration": "PT30M",
- "resolution": "PT1M"
- }
- },
- "x": 5,
- "y": 0,
- "width": 5,
- "height": 5
- }
- ],
- "favorite": true
-}
-```
-
-## Delete a dashboard
-
-Use the following request to delete a dashboard by using the dashboard ID:
-
-```http
-DELETE https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
-```
-
-## List dashboards
-
-Use the following request to retrieve a list of dashboards from your application:
-
-```http
-GET https://{your app subdomain}.azureiotcentral.com/api/dashboards?api-version=2022-10-31-preview
-```
-
-The response to this request looks like the following example:
-
-```json
-{
- "value": [
- {
- "id": "dtmi:kkfvwa2xi:p7pyt5x3o",
- "displayName": "Dashboard",
- "personal": false,
- "tiles": [
- {
- "displayName": "Device templates",
- "configuration": {
- "type": "markdown",
- "description": "Get started by adding your first device.",
- "href": "/device-templates/new/devicetemplates",
- "image": "f5ba1b00-1d24-4781-869b-6f954df48736"
- },
- "x": 1,
- "y": 0,
- "width": 1,
- "height": 1
- },
- {
- "displayName": "Quick start demo",
- "configuration": {
- "type": "markdown",
- "description": "Learn how to use Azure IoT Central in minutes.",
- "href": "https://aka.ms/iotcentral-pnp-video",
- "image": "9eb01d71-491a-44e5-8fac-7af3bc9f9acd"
- },
- "x": 2,
- "y": 0,
- "width": 1,
- "height": 1
- },
- {
- "displayName": "Tutorials",
- "configuration": {
- "type": "markdown",
- "description": "Step-by-step articles teach you how to create apps and devices.",
- "href": "https://aka.ms/iotcentral-pnp-tutorials",
- "image": "7d9fc12c-d46e-41c6-885f-0a67c619366e"
- },
- "x": 3,
- "y": 0,
- "width": 1,
- "height": 1
- },
- {
- "displayName": "Documentation",
- "configuration": {
- "type": "markdown",
- "description": "Comprehensive help articles and links to more support.",
- "href": "https://aka.ms/iotcentral-pnp-docs",
- "image": "4d6c6373-0220-4191-be2e-d58ca2a289e1"
- },
- "x": 4,
- "y": 0,
- "width": 1,
- "height": 1
- },
- {
- "displayName": "IoT Central Image",
- "configuration": {
- "type": "image",
- "format": {
- "backgroundColor": "#FFFFFF",
- "fitImage": true,
- "showTitle": false,
- "textColor": "#FFFFFF",
- "textSize": 0,
- "textSizeUnit": "px"
- },
- "image": ""
- },
- "x": 0,
- "y": 0,
- "width": 1,
- "height": 1
- },
- {
- "displayName": "Contoso Image",
- "configuration": {
- "type": "image",
- "format": {
- "backgroundColor": "#FFFFFF",
- "fitImage": true,
- "showTitle": false,
- "textColor": "#FFFFFF",
- "textSize": 0,
- "textSizeUnit": "px"
- },
- "image": "c9ac5af4-f38e-4cd3-886a-e0cb107f391c"
- },
- "x": 0,
- "y": 1,
- "width": 5,
- "height": 3
- },
- {
- "displayName": "Available Memory",
- "configuration": {
- "type": "lineChart",
- "capabilities": [
- {
- "capability": "AvailableMemory",
- "aggregateFunction": "avg"
- }
- ],
- "devices": [
- "1cfqhp3tue3",
- "mcoi4i2qh3"
- ],
- "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
- "format": {
- "xAxisEnabled": true,
- "yAxisEnabled": true,
- "legendEnabled": true
- },
- "queryRange": {
- "type": "time",
- "duration": "PT30M",
- "resolution": "PT1M"
- }
- },
- "x": 5,
- "y": 0,
- "width": 2,
- "height": 2
- }
- ],
- "favorite": false
- }
- ]
-}
-```
-
-## Next steps
-
-Now that you've learned how to manage dashboards with the REST API, a suggested next step is to [How to manage file upload with rest api.](howto-upload-file-rest-api.md)
iot-central Howto Manage Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-dashboards.md
Title: Create and manage Azure IoT Central dashboards
description: Learn how to create and manage application and personal dashboards in Azure IoT Central. Customize dashboards by using tiles. Previously updated : 11/03/2022 Last updated : 03/04/2024
This table describes the types of tiles you can add to a dashboard:
| State chart | Plot changes for one or more devices over a time period. For example, you can use this tile to display properties like the temperature changes for a device. | | Property | Display the current values for properties and cloud properties for one or more devices. For example, you can use this tile to display device properties like the manufacturer or firmware version. | | Map (property) | Display the location of one or more devices on a map.|
-| Map (telemetry) | Display the location of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display a sampled route of where a device has been in the past week.|
+| Map (telemetry) | Display the location of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display a sampled route of where a device went in the past week.|
| Image (static) | Display a custom image and can be clickable. The URL can be a relative link to another page in the application or an absolute link to an external site.| | Label | Display custom text on a dashboard. You can choose the size of the text. Use a label tile to add relevant information to the dashboard, like descriptions, contact details, or Help.| | Markdown | Clickable tiles that display a heading and description text formatted in Markdown. The URL can be a relative link to another page in the application or an absolute link to an external site.|
iot-central Howto Manage Deployment Manifests With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-deployment-manifests-with-rest-api.md
Title: Azure IoT Central deployment manifests and the REST API
description: How to use the IoT Central REST API to manage IoT Edge deployment manifests in an IoT Central application. Previously updated : 11/22/2022 Last updated : 03/04/2024
The following example shows a request body that adds a deployment manifest that
The request body has some required fields: * `id`: a unique ID for the deployment manifest in the IoT Central application.
-* `displayName`: a name for the deployment manifest that's displayed in the UI.
+* `displayName`: a name for the deployment manifest displayed in the UI.
* `data`: the IoT Edge deployment manifest. The response to this request looks like the following example:
The response to this request looks like the following example:
## Assign a deployment manifest to a device
-To use a deployment manifest that's already stored in your IoT Central application, first use the [Get a deployment manifest](#get-a-deployment-manifest) API to fetch it.
+To use a deployment manifest already stored in your IoT Central application, first use the [Get a deployment manifest](#get-a-deployment-manifest) API to fetch it.
Use the following request to assign a deployment manifest to an IoT Edge device in your IoT Central application: ```http
iot-central Howto Manage Deployment Manifests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-deployment-manifests.md
Previously updated : 11/22/2022 Last updated : 03/04/2024
You can manage the deployment manifest for an existing device:
:::image type="content" source="media/howto-manage-deployment-manifests/manage-manifest.png" alt-text="Screenshot that shows the options to manage a deployment manifest on a device.":::
-Use **Assign edge manifest** to select a previously uploaded deployment manifest from the **Edge manifests** page. You can also use this option to manually notify a device if you've modified the deployment manifest on the **Edge manifests** page.
+Use **Assign edge manifest** to select a previously uploaded deployment manifest from the **Edge manifests** page. You can also use this option to manually notify a device if you modify the deployment manifest on the **Edge manifests** page.
Use **Edit manifest** to modify the deployment manifest for this device. Changes you make here don't affect the deployment manifest on the **Edge manifests** page.
To assign or update the deployment manifest for multiple devices, use a [job](ho
A deployment manifest defines the modules to run on the device and optionally [writable properties](../../iot-edge/module-composition.md?#define-or-update-desired-properties) that you can use to configure modules.
-If you're assigning a device template to an IoT Edge device, you may want to define the modules and writable properties in the device template. To add the modules and property definitions to a device template:
+If you're assigning a device template to an IoT Edge device, you might want to define the modules and writable properties in the device template. To add the modules and property definitions to a device template:
1. Navigate to the **Modules Summary** page of the IoT Edge device template. 1. Select **Import modules from manifest**.
If you're assigning a device template to an IoT Edge device, you may want to def
1. Select **Import**. IoT Central adds the custom modules defined in the deployment manifest to the device template. The names of the modules in the device template match the names of the custom modules in the deployment manifest. The generated interface includes property definitions for the properties defined for the custom module in the deployment manifest: :::image type="content" source="media/howto-manage-deployment-manifests/import-modules.png" alt-text="Screenshot the shows importing module definitions to a device template.":::-
-## Next steps
-
-Now that you've learned how to manage IoT Edge deployment manifests in your Azure IoT Central application, the suggested next step is to learn how to [How to connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md).
iot-central Howto Manage Devices Individually https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-individually.md
Title: Manage devices individually in your application
description: Learn how to manage devices individually in your Azure IoT Central application. Monitor, manage, create, delete, and update devices. Previously updated : 02/13/2023 Last updated : 03/01/2024
Every device has a single status value in the UI. The device status can be one o
- A new real device is added on the **Devices** page. - A set of devices is added using **Import** on the **Devices** page. -- The device status changes to **Provisioned** when a registered device completes the provisioning step by using DPS. To complete the provisioning process, the device needs the *Device ID* that was used to register the device, either a SAS key or X.509 certificate, and the *ID scope*. After provisioning, the device can connect to your IoT Central application and start sending data.
+- The device status changes to **Provisioned** when a registered device completes the provisioning step by using the Device Provisioning Service (DPS). To complete the provisioning process, the device needs the *Device ID* that was used to register the device, either a SAS key or X.509 certificate, and the *ID scope*. After DPS provisions the device, it can connect to your IoT Central application and start sending data.
- Blocked devices have a status of **Blocked**. An operator can block and unblock devices. When a device is blocked, it can't send data to your IoT Central application. An operator must unblock the device before it can resume sending data. When an operator unblocks a device the status returns to its previous value, **Registered** or **Provisioned**.
The following table shows how the status value for a device in the UI maps to th
| UI Device status | Notes | REST API Get | | - | -- | |
-| Waiting for approval | The auto-approve option is disabled in the device connection group and the device was not added through the UI. <br/> A user must manually approve the device through the UI before it can be used. | `Provisioned: false` <br/> `Enabled: false` |
-| Registered | A device has been approved either automatically or manually. | `Provisioned: false` <br/> `Enabled: true` |
-| Provisioned | The device has been provisioned and can connect to your IoT Central application. | `Provisioned: true` <br/> `Enabled: true` |
-| Blocked | The device is not allowed to connect to your IoT Central application. You can block a device that is in any of the other states. | `Provisioned:` depends on `Waiting for approval`/`Registered`/`Provisioned status` <br/> `Enabled: false` |
+| Waiting for approval | The auto approve option is disabled in the device connection group and the device wasn't added through the UI. <br/> A user must manually approve the device through the UI before it can be used. | `Provisioned: false` <br/> `Enabled: false` |
+| Registered | A device was approved either automatically or manually. | `Provisioned: false` <br/> `Enabled: true` |
+| Provisioned | The device was provisioned and can connect to your IoT Central application. | `Provisioned: true` <br/> `Enabled: true` |
+| Blocked | The device isn't allowed to connect to your IoT Central application. You can block a device that is in any of the other states. | `Provisioned:` depends on `Waiting for approval`/`Registered`/`Provisioned status` <br/> `Enabled: false` |
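To relate the table to code, the following Python sketch reads a single device through the REST API and prints the `provisioned` and `enabled` flags that map to the UI status. The subdomain, device ID, token, and API version are placeholders.

```python
# Sketch: check a device's provisioning status through the IoT Central REST API.
# pip install requests
import requests

APP_SUBDOMAIN = "my-iot-central-app"  # Placeholder subdomain.
DEVICE_ID = "my-device-01"            # Placeholder device ID.
API_TOKEN = "SharedAccessSignature sr=...&sig=...&skn=operator-token&se=..."  # Placeholder token.
API_VERSION = "2022-07-31"            # Assumed API version; confirm in the REST API reference.

response = requests.get(
    f"https://{APP_SUBDOMAIN}.azureiotcentral.com/api/devices/{DEVICE_ID}",
    params={"api-version": API_VERSION},
    headers={"Authorization": API_TOKEN},
    timeout=30,
)
response.raise_for_status()
device = response.json()
print(f"provisioned={device.get('provisioned')}, enabled={device.get('enabled')}")
```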
-A device can also have a status of **Unassigned**. This status isn't shown in the **Device status** field in the UI, it is shown in the **Device template** field in the UI. However, you can filter the device list for devices with the **Unassigned** status. If the device status is **Unassigned**, the device connecting to IoT Central isn't assigned to a device template. This situation typically happens in the following scenarios:
+A device can also have a status of **Unassigned**. This status isn't shown in the **Device status** field in the UI; instead, it's shown in the **Device template** field. However, you can filter the device list for devices with the **Unassigned** status. If the device status is **Unassigned**, the device connecting to IoT Central isn't assigned to a device template. This situation typically happens in the following scenarios:
- A set of devices is added using **Import** on the **Devices** page without specifying the device template. - A device was registered manually on the **Devices** page without specifying the device template. The device then connected with valid credentials.
An operator can assign a device to a device template from the **Devices** page b
### Device connection status
-When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events for the device are generated. These events aren't sent by the device, they're generated internally by IoT Central.
+When a device or edge device connects using the MQTT protocol, _connected_ and _disconnected_ events for the device are generated. The device doesn't send these events; IoT Central generates them internally.
The following diagram shows how, when a device connects, the connection is registered at the end of a time window. If multiple connection and disconnection events occur, IoT Central registers the one that's closest to the end of the time window. For example, if a device disconnects and reconnects within the time window, IoT Central registers the connection event. Currently, the time window is approximately one minute.
To find these values:
1. Choose **Devices** on the left pane.
-1. Click on the device in the device list to see the device details.
+1. To see the device details, click on the device in the device list.
1. Select **Connect** to view the connection information. The QR code encodes a JSON document that includes the **ID Scope**, **Device ID**, and **Primary key** derived from the default **SAS-IoT-Devices** device connection group.
To delete either a real or simulated device from your Azure IoT Central applicat
## Change a property
-Cloud properties are the device metadata associated with the device, such as city and serial number. Cloud properties only exist in the IoT Central application and aren't synchronized to your devices. Writable properties control the behavior of a device and let you set the state of a device remotely, for example by setting the target temperature of a thermostat device. Device properties are set by the device and are read-only within IoT Central. You can view and update properties on the **Device Details** views for your device.
+Cloud properties are the device metadata associated with the device, such as city and serial number. Cloud properties only exist in the IoT Central application and aren't synchronized to your devices. Writable properties control the behavior of a device and let you set the state of a device remotely, for example by setting the target temperature of a thermostat device. Device properties are set by the device and are read-only within IoT Central. You can view and update properties on the **Device Details** views for your device.
1. Choose **Devices** on the left pane.
Cloud properties are the device metadata associated with the device, such as cit
1. Modify the properties to the values you need. You can modify multiple properties at a time and update them all at the same time. 1. Choose **Save**. If you saved writable properties, the values are sent to your device. When the device confirms the change for the writable property, the status returns back to **synced**. If you saved a cloud property, the value is updated.-
-## Next steps
-
-Now that you've learned how to manage devices individually, the suggested next step is to learn how to [Manage devices in bulk in your Azure IoT Central application](howto-manage-devices-in-bulk.md)).
iot-central Howto Manage Devices With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-with-rest-api.md
Title: How to use the IoT Central REST API to manage devices
-description: Learn how to use the IoT Central REST API to add, modify, delete, and manage devices in an application
+description: Learn how to use the IoT Central REST API to add, modify, delete, and manage devices in an application.
Previously updated : 03/23/2023 Last updated : 03/01/2024
The following table shows how the status value for a device in the UI maps to th
| UI Device status | Notes | REST API Get | | - | -- | |
-| Waiting for approval | The auto-approve option is disabled in the device connection group and the device was not added through the UI. <br/> A user must manually approve the device through the UI before it can be used. | `Provisioned: false` <br/> `Enabled: false` |
-| Registered | A device has been approved either automatically or manually. | `Provisioned: false` <br/> `Enabled: true` |
-| Provisioned | The device has been provisioned and can connect to your IoT Central application. | `Provisioned: true` <br/> `Enabled: true` |
-| Blocked | The device is not allowed to connect to your IoT Central application. You can block a device that is in any of the other states. | `Provisioned:` depends on `Waiting for approval`/`Registered`/`Provisioned status` <br/> `Enabled: false` |
+| Waiting for approval | The auto approve option is disabled in the device connection group and the device wasn't added through the UI. <br/> A user must manually approve the device through the UI before it can be used. | `Provisioned: false` <br/> `Enabled: false` |
+| Registered | A device was approved either automatically or manually. | `Provisioned: false` <br/> `Enabled: true` |
+| Provisioned | The device was provisioned and can connect to your IoT Central application. | `Provisioned: true` <br/> `Enabled: true` |
+| Blocked | The device isn't allowed to connect to your IoT Central application. You can block a device that is in any of the other states. | `Provisioned`: depends on the previous status (**Waiting for approval**, **Registered**, or **Provisioned**) <br/> `Enabled: false` |
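To check these values for an individual device, use the devices endpoint. The following is a minimal sketch; the device ID and the response fields shown here are illustrative and can vary by API version:

```http
GET https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}?api-version=2022-07-31
```

For a device that's provisioned and enabled, the response includes flags like these:

```json
{
  "id": "thermostat-001",
  "displayName": "Thermostat - 001",
  "template": "dtmi:contoso:thermostat;1",
  "simulated": false,
  "provisioned": true,
  "enabled": true
}
```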
### Get device credentials
In the preview version of the API (`api-version=2022-10-31-preview`), you can us
### maxpagesize
-Use the **maxpagesize** to set the result size, the maximum returned result size is 100, the default size is 25.
+Use the **maxpagesize** parameter to set the result size. The maximum returned result size is 100 and the default size is 25.
Use the following request to retrieve the first 10 devices from your application:
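A minimal sketch of such a request, assuming the standard devices collection endpoint and the GA API version, looks like this:

```http
GET https://{your app subdomain}.azureiotcentral.com/api/devices?maxpagesize=10&api-version=2022-07-31
```

If the application contains more devices than fit in one page, the response typically includes a link you can follow to retrieve the next page of results.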
Use the following request to create a new device group.
PUT https://{your app subdomain}/api/deviceGroups/{deviceGroupId}?api-version=2022-07-31 ```
-When you create a device group, you define a `filter` that selects the devices to add to the group. A `filter` identifies a device template and any properties to match. The following example creates device group that contains all devices associated with the "dtmi:modelDefinition:dtdlv2" template where the `provisioned` property is true
+When you create a device group, you define a `filter` that selects the devices to add to the group. A `filter` identifies a device template and any properties to match. The following example creates a device group that contains all devices associated with the "dtmi:modelDefinition:dtdlv2" template where the `provisioned` property is `true`.
```json {
The request body has some required fields:
* `@etag`: ETag used to prevent conflict in device updates. * `description`: Short summary of device group.
-The organizations field is only used when an application has an organization hierarchy defined. To learn more about organizations, see [Manage IoT Central organizations](howto-edit-device-template.md)
+The `organizations` field is only used when an application has an organization hierarchy defined. To learn more about organizations, see [Manage IoT Central organizations](howto-create-organizations.md).
The response to this request looks like the following example:
In this section, you generate the X.509 certificates you need to connect a devic
> [!TIP] > A device ID can contain letters, numbers, and the `-` character.
-These commands produce the following root and the device certificate
+These commands produce the following root and device certificates:
| filename | contents | | -- | -- |
Use the following request to set the primary X.509 certificate of the myx509eg e
PUT https://{your app subdomain}.azureiotcentral.com/api/enrollmentGroups/myx509eg/certificates/primary?api-version=2022-07-31 ```
-entry - Entry of certificate, either `primary` or `secondary`
- Use this request to add either a primary or secondary X.509 certificate to the enrollment group. The following example shows a request body that adds an X.509 certificate to an enrollment group:
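A rough sketch of such a request body follows. The field names here are assumptions based on typical IoT Central enrollment group certificate payloads rather than values confirmed by this article; replace the placeholder with your base64-encoded certificate:

```json
{
  "verified": false,
  "certificate": "<base64-encoded X.509 certificate>"
}
```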
The response to this request looks like the following example:
] } ```-
-## Next steps
-
-Now that you've learned how to manage devices with the REST API, a suggested next step is to [How to control devices with rest api.](howto-control-devices-with-rest-api.md)
iot-central Howto Manage Iot Central With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-with-rest-api.md
Title: Use the REST API to manage IoT Central applications
-description: This article describes how to create and manage your IoT Central applications with the REST API and add a system assigned managed identity to your application.
+description: This article describes how to create and manage your IoT Central applications with the REST API, add a system assigned managed identity, and manage dashboards.
Previously updated : 06/13/2023 Last updated : 03/04/2024 # Use the REST API to create and manage IoT Central applications
-You can use the [control plane REST API](/rest/api/iotcentral/2021-06-01controlplane/apps) to create and manage IoT Central applications. You can also use the REST API to add a managed identity to your application.
+You can use the [control plane REST API](/rest/api/iotcentral/2021-06-01controlplane/apps) to create and manage IoT Central applications. You can also use the REST API to:
+
+* Add a managed identity to your application.
+* Manage dashboards in your application.
To use this API, you need a bearer token for the `management.azure.com` resource. To get a bearer token, you can use the Azure CLI:
To delete an IoT Central application, use:
DELETE https://management.azure.com/subscriptions/<your subscription id>/resourceGroups/<your resource group name>/providers/Microsoft.IoTCentral/iotApps/<your application name>?api-version=2021-06-01 ```
-## Next steps
+## Dashboards
+
+You can create dashboards that are associated with a specific organization. An organization dashboard is only visible to users who have access to the organization the dashboard is associated with. Only users in a role that has [organization dashboard permissions](howto-manage-users-roles.md#customizing-the-app) can create, edit, and delete organization dashboards.
+
+All users can create *personal dashboards*, visible only to themselves. Users can switch between organization and personal dashboards.
+
+> [!NOTE]
+> Creating personal dashboards by using the API isn't currently supported.
+
+To learn how to manage dashboards by using the IoT Central UI, see [How to manage dashboards](../core/howto-manage-dashboards.md).
+
+### Dashboards REST API
+
+The IoT Central REST API lets you:
+
+* Add a dashboard to your application
+* Update a dashboard in your application
+* Get a list of the dashboards in the application
+* Get a dashboard by ID
+* Delete a dashboard in your application
+
+## Add a dashboard
+
+Use the following request to create a dashboard.
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
+```
+
+`dashboardId` - A unique [DTMI](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/DTDL.v2.md#digital-twin-model-identifier) identifier for the dashboard.
+
+The request body has some required fields:
+
+* `displayName`: Display name of the dashboard.
+* `favorite`: Is the dashboard in the favorites list?
+* `group`: Device group ID.
+* `tiles`: The list of tile objects. Each tile specifies its layout, display name, and configuration.
+
+Tile has some required fields:
+
+| Name | Description |
+| - | -- |
+| `displayName` | Display name of the tile |
+| `height` | Height of the tile |
+| `width` | Width of the tile |
+| `x` | Horizontal position of the tile |
+| `y` | Vertical position of the tile |
+
+The dimensions and location of a tile both use integer units. The smallest possible tile has a height and width of one.
+
+You can configure a tile object to display multiple types of data. This article includes examples of tiles that show line charts, markdown, and last known value. To learn more about the different tile types you can add to a dashboard, see [Tile types](howto-manage-dashboards.md#tile-types).
+
+### Line chart tile
+
+Plot one or more aggregate telemetry values for one or more devices over a time period. For example, you can display a line chart to plot the average temperature and pressure of one or more devices during the past hour.
+
+The line chart tile has the following configuration:
+
+| Name | Description |
+|--|--|
+| `capabilities` | Specifies the aggregate value of the telemetry to display. |
+| `devices` | The list of devices to display. |
+| `format` | The format configuration of the chart such as the axes to use. |
+| `group` | The ID of the device group to display. |
+| `queryRange` | The time range and resolution to display.|
+| `type` | `lineChart` |
+
+### Markdown tile
+
+Clickable tiles that display a heading and description text formatted in Markdown. The `href` URL can be a relative link to another page in the application or an absolute link to an external site.
+
+The markdown tile has the following configuration:
+
+| Name | Description |
+|--|--|
+| `description` | The markdown string to render inside the tile. |
+| `href` | The link to visit when the tile is selected. |
+| `image` | A base64 encoded image to display. |
+| `type` | `markdown` |
+
+### Last known value tile
+
+Display the latest telemetry values for one or more devices. For example, you can use this tile to display the most recent temperature, pressure, and humidity values for one or more devices.
+
+The last known value (LKV) tile has the following configuration:
+
+| Name | Description |
+|--|--|
+| `capabilities` | Specifies the telemetry to display. |
+| `devices` | The list of devices to display. |
+| `format` | The format configuration of the LKV tile, such as text size or word wrapping. |
+| `group` | The ID of the device group to display. |
+| `showTrend` | Show the difference between the last known value and the previous value. |
+| `type` | `lkv` |
+
+The following example shows a request body that adds a new dashboard with line chart, markdown, and last known value tiles. The LKV and line chart tiles are `2x2` tiles. The markdown tile is a `1x1` tile. The tiles are arranged on the top row of the dashboard:
+
+```json
+{
+ "displayName": "My Dashboard ",
+ "tiles": [
+ {
+ "displayName": "LKV Temperature",
+ "configuration": {
+ "type": "lkv",
+ "capabilities": [
+ {
+ "capability": "temperature",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "group": "0fb6cf08-f03c-4987-93f6-72103e9f6100",
+ "devices": [
+ "3xksbkqm8r",
+ "1ak6jtz2m5q",
+ "h4ow04mv3d"
+ ],
+ "format": {
+ "abbreviateValue": false,
+ "wordWrap": false,
+ "textSize": 14
+ }
+ },
+ "x": 0,
+ "y": 0,
+ "width": 2,
+ "height": 2
+ },
+ {
+ "displayName": "Documentation",
+ "configuration": {
+ "type": "markdown",
+ "description": "Comprehensive help articles and links to more support.",
+ "href": "https://aka.ms/iotcentral-pnp-docs",
+ "image": "4d6c6373-0220-4191-be2e-d58ca2a289e1"
+ },
+ "x": 2,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "Average temperature",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "temperature",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "3xksbkqm8r",
+ "1ak6jtz2m5q",
+ "h4ow04mv3d"
+ ],
+ "group": "0fb6cf08-f03c-4987-93f6-72103e9f6100",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 3,
+ "y": 0,
+ "width": 2,
+ "height": 2
+ }
+ ],
+ "favorite": false
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "dtmi:kkfvwa2xi:p7pyt5x38",
+ "displayName": "My Dashboard",
+ "personal": false,
+ "tiles": [
+ {
+ "displayName": "lineChart",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "temperature",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "1cfqhp3tue3",
+ "mcoi4i2qh3"
+ ],
+ "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 5,
+ "y": 0,
+ "width": 2,
+ "height": 2
+ }
+ ],
+ "favorite": false
+}
+```
+
+## Get a dashboard
+
+Use the following request to retrieve the details of a dashboard by using a dashboard ID.
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "dtmi:kkfvwa2xi:p7pyt5x38",
+ "displayName": "My Dashboard",
+ "personal": false,
+ "tiles": [
+ {
+ "displayName": "lineChart",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "AvailableMemory",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "1cfqhp3tue3",
+ "mcoi4i2qh3"
+ ],
+ "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 5,
+ "y": 0,
+ "width": 2,
+ "height": 2
+ }
+ ],
+ "favorite": false
+}
+```
+
+## Update a dashboard
+
+Use the following request to update an existing dashboard:
+
+```http
+PATCH https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
+```
+
+The following example shows a request body that updates the display name of a dashboard and adds the dashboard to the list of favorites:
-Now that you've learned how to create and manage Azure IoT Central applications using the REST API, here's the suggested next step:
+```json
+{
+  "displayName": "New Dashboard Name",
+  "favorite": true
+}
+```
-> [!div class="nextstepaction"]
-> [How to use the IoT Central REST API to manage users and roles](howto-manage-users-roles-with-rest-api.md)
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "dtmi:kkfvwa2xi:p7pyt5x38",
+ "displayName": "New Dashboard Name",
+ "personal": false,
+ "tiles": [
+ {
+ "displayName": "lineChart",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "AvailableMemory",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "1cfqhp3tue3",
+ "mcoi4i2qh3"
+ ],
+ "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 5,
+ "y": 0,
+ "width": 5,
+ "height": 5
+ }
+ ],
+ "favorite": true
+}
+```
+
+## Delete a dashboard
+
+Use the following request to delete a dashboard by using the dashboard ID:
+
+```http
+DELETE https://{your app subdomain}.azureiotcentral.com/api/dashboards/{dashboardId}?api-version=2022-10-31-preview
+```
+
+## List dashboards
+
+Use the following request to retrieve a list of dashboards from your application:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/dashboards?api-version=2022-10-31-preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "value": [
+ {
+ "id": "dtmi:kkfvwa2xi:p7pyt5x3o",
+ "displayName": "Dashboard",
+ "personal": false,
+ "tiles": [
+ {
+ "displayName": "Device templates",
+ "configuration": {
+ "type": "markdown",
+ "description": "Get started by adding your first device.",
+ "href": "/device-templates/new/devicetemplates",
+ "image": "f5ba1b00-1d24-4781-869b-6f954df48736"
+ },
+ "x": 1,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "Quick start demo",
+ "configuration": {
+ "type": "markdown",
+ "description": "Learn how to use Azure IoT Central in minutes.",
+ "href": "https://aka.ms/iotcentral-pnp-video",
+ "image": "9eb01d71-491a-44e5-8fac-7af3bc9f9acd"
+ },
+ "x": 2,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "Tutorials",
+ "configuration": {
+ "type": "markdown",
+ "description": "Step-by-step articles teach you how to create apps and devices.",
+ "href": "https://aka.ms/iotcentral-pnp-tutorials",
+ "image": "7d9fc12c-d46e-41c6-885f-0a67c619366e"
+ },
+ "x": 3,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "Documentation",
+ "configuration": {
+ "type": "markdown",
+ "description": "Comprehensive help articles and links to more support.",
+ "href": "https://aka.ms/iotcentral-pnp-docs",
+ "image": "4d6c6373-0220-4191-be2e-d58ca2a289e1"
+ },
+ "x": 4,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "IoT Central Image",
+ "configuration": {
+ "type": "image",
+ "format": {
+ "backgroundColor": "#FFFFFF",
+ "fitImage": true,
+ "showTitle": false,
+ "textColor": "#FFFFFF",
+ "textSize": 0,
+ "textSizeUnit": "px"
+ },
+ "image": ""
+ },
+ "x": 0,
+ "y": 0,
+ "width": 1,
+ "height": 1
+ },
+ {
+ "displayName": "Contoso Image",
+ "configuration": {
+ "type": "image",
+ "format": {
+ "backgroundColor": "#FFFFFF",
+ "fitImage": true,
+ "showTitle": false,
+ "textColor": "#FFFFFF",
+ "textSize": 0,
+ "textSizeUnit": "px"
+ },
+ "image": "c9ac5af4-f38e-4cd3-886a-e0cb107f391c"
+ },
+ "x": 0,
+ "y": 1,
+ "width": 5,
+ "height": 3
+ },
+ {
+ "displayName": "Available Memory",
+ "configuration": {
+ "type": "lineChart",
+ "capabilities": [
+ {
+ "capability": "AvailableMemory",
+ "aggregateFunction": "avg"
+ }
+ ],
+ "devices": [
+ "1cfqhp3tue3",
+ "mcoi4i2qh3"
+ ],
+ "group": "da48c8fe-bac7-42bc-81c0-d8158551f066",
+ "format": {
+ "xAxisEnabled": true,
+ "yAxisEnabled": true,
+ "legendEnabled": true
+ },
+ "queryRange": {
+ "type": "time",
+ "duration": "PT30M",
+ "resolution": "PT1M"
+ }
+ },
+ "x": 5,
+ "y": 0,
+ "width": 2,
+ "height": 2
+ }
+ ],
+ "favorite": false
+ }
+ ]
+}
+```
iot-central Howto Manage Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles.md
Title: Manage users and roles in Azure IoT Central application
-description: Create, edit, delete, and manage users and roles in your Azure IoT Central application to control access to resources
+description: Create, edit, delete, and manage users and roles in your Azure IoT Central application to control access to resources.
Previously updated : 08/01/2022 Last updated : 03/01/2024
To learn how to manage users and roles by using the IoT Central REST API, see [H
Every user must have a user account before they can sign in and access an application. IoT Central supports Microsoft user accounts, Microsoft Entra accounts, Microsoft Entra groups, and Microsoft Entra service principals. To learn more, see [Microsoft account help](https://support.microsoft.com/products/microsoft-account?category=manage-account) and [Quickstart: Add new users to Microsoft Entra ID](../../active-directory/fundamentals/add-users-azure-active-directory.md).
-1. To add a user to an IoT Central application, go to the **Users** page in the **Permissions** section.
+1. To add a user to an IoT Central application, go to the **Users** page in the **Permissions** section:
:::image type="content" source="media/howto-manage-users-roles/manage-users.png" alt-text="Screenshot that shows the manage users page in IoT Central." lightbox="media/howto-manage-users-roles/manage-users.png":::
-1. To add a user on the **Users** page, choose **+ Assign user**. To add a service principal on the **Users** page, choose **+ Assign service principal**. To add a Microsoft Entra group on the **Users** page, choose **+ Assign group**. Start typing the name of the Active Directory group or service principal to auto-populate the form.
+1. To add a user on the **Users** page, choose **+ Assign user**. To add a service principal on the **Users** page, choose **+ Assign service principal**. To add a Microsoft Entra group on the **Users** page, choose **+ Assign group**. Start typing the name of the Active Directory group or service principal to autopopulate the form.
> [!NOTE] > Service principals and Active Directory groups must belong to the same Microsoft Entra tenant as the Azure subscription associated with the IoT Central application. 1. If your application uses [organizations](howto-create-organizations.md), choose an organization to assign to the user from the **Organization** drop-down menu.
-1. Choose a role for the user from the **Role** drop-down menu. Learn more about roles in the [Manage roles](#manage-roles) section of this article.
+1. Choose a role for the user from the **Role** drop-down menu. Learn more about roles in the [Manage roles](#manage-roles) section of this article:
:::image type="content" source="media/howto-manage-users-roles/add-user.png" alt-text="Screenshot showing how to add a user and select a role." lightbox="media/howto-manage-users-roles/add-user.png":::
Every user must have a user account before they can sign in and access an applic
> [!NOTE] > A user who is in a custom role that grants them the permission to add other users, can only add users to a role with same or fewer permissions than their own role.
- When you invite a new user, you need to share the application URL with them and ask them to sign in. After the user has signed in for the first time, the application appears on the user's [My apps](https://apps.azureiotcentral.com/myapps) page.
+ When you invite a new user, you need to share the application URL with them and ask them to sign in. After the user signs in for the first time, the application appears on the user's [My apps](https://apps.azureiotcentral.com/myapps) page.
> [!NOTE] > If a user is deleted from Microsoft Entra ID and then added back, they won't be able to sign into the IoT Central application. To re-enable access, the application's administrator should delete and re-add the user in the application as well.
The following limitations apply to Microsoft Entra groups and service principals
### Edit the roles and organizations that are assigned to users
-Roles and organizations can't be changed after they're assigned. To change the role or organization that's assigned to a user, delete the user, and then add the user again with a different role or organization.
+Roles and organizations can't be changed after they're assigned. To change the role or organization assigned to a user, delete the user, and then add the user again with a different role or organization.
> [!NOTE] > The roles assigned are specific to the IoT Central application and cannot be managed from the Azure Portal.
If your solution requires finer-grained access controls, you can create roles wi
- Select **+ New**, add a name and description for your role, and select **Application** or **Organization** as the role type. This option lets you create a role definition from scratch. - Navigate to an existing role and select **Copy**. This option lets you start with an existing role definition that you can customize. > [!WARNING] > You can't change the role type after you create a role.
When you invite a user to your application, if you associate the user with:
- The root organization, then only **Application** roles are available. - Any other organization, then only **Organization** roles are available.
-You can add users to your custom role in the same way that you add users to a built-in role
+You can add users to your custom role in the same way that you add users to a built-in role.
### Custom role options
When you define a custom role, you choose the set of permissions that a user is
| Create | View <br/> Other dependencies: View custom roles | | Delete | View <br/> Other dependencies: View custom roles | | Full Control | View, Create, Delete <br/> Other dependencies: View custom roles |-
-## Next steps
-
-Now that you've learned how to manage users and roles in your IoT Central application, the suggested next step is to learn how to [Manage IoT Central organizations](howto-create-organizations.md).
iot-central Howto Migrate To Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-migrate-to-iot-hub.md
Title: Migrate devices from Azure IoT Central to Azure IoT Hub
description: Describes how to use the migration tool to migrate devices that currently connect to an Azure IoT Central application to an Azure IoT hub. Previously updated : 09/12/2022 Last updated : 03/01/2024
The tool requires your connected devices to implement a **DeviceMove** command t
> [!TIP] > You can also use the migrator tool to migrate devices between IoT Cental applications, or from an IoT hub to an IoT Central application.
+### Minimize disruption
+
+To minimize disruption, you can migrate your devices in phases. The migrator tool uses device groups to move devices from IoT Central to your IoT hub. Divide your device fleet into device groups such as devices in Texas, devices in New York, and devices in the rest of the US. Then migrate each device group independently.
+
+> [!WARNING]
+> You can't add unassigned devices to a device group. Therefore you can't currently use the migrator tool to migrate unassigned devices.
+
+Minimize business impact by following these steps:
+
+- Create the PaaS solution and run it in parallel with the IoT Central application.
+
+- Set up continuous data export in the IoT Central application and appropriate routes to the PaaS solution IoT hub. Transform both data channels and store the data in the same data lake.
+
+- Migrate the devices in phases and verify at each phase. If something doesn't go as planned, fail the devices back to IoT Central.
+
+- When you've migrated all the devices to the PaaS solution and fully exported your data from IoT Central, you can remove the devices from the IoT Central solution.
+
+After the migration, devices aren't automatically deleted from the IoT Central application. Because IoT Central charges for all provisioned devices in an application, you continue to be billed for these devices until you remove them from the IoT Central application. Finally, remove the IoT Central application itself.
+
+### Move existing data out of IoT Central
+
+You can configure IoT Central to continuously export telemetry and property values to destinations such as Azure Data Lake Storage, Event Hubs, and webhooks. You can export device templates using either the IoT Central UI or the REST API. The REST API also lets you export the users in an IoT Central application.
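For example, the data plane REST API exposes collection endpoints that you can page through to capture this configuration before you retire the application. A minimal sketch, assuming the GA API version:

```http
GET https://{your app subdomain}.azureiotcentral.com/api/deviceTemplates?api-version=2022-07-31
GET https://{your app subdomain}.azureiotcentral.com/api/users?api-version=2022-07-31
```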
+ ## Prerequisites You need the following prerequisites to complete the device migration steps:
Devices that migrated successfully:
- Are now sending telemetry to your IoT hub :::image type="content" source="media/howto-migrate-to-iot-hub/destination-metrics.png" alt-text="Screenshot of IoT Hub in the Azure portal that shows telemetry metrics for the migrated devices." lightbox="media/howto-migrate-to-iot-hub/destination-metrics.png":::-
-## Next steps
-
-Now that know how to migrate devices from an IoT Central application to an IoT hub, a suggested next step is to learn how to [Monitor Azure IoT Hub](../../iot-hub/monitor-iot-hub.md).
iot-central Howto Set Up Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-set-up-template.md
Title: Define a new IoT device type in Azure IoT Central
-description: How to create an Azure IoT device template in your Azure IoT Central application. You define the telemetry, state, properties and commands for your device type.
+description: How to create an Azure IoT device template in your Azure IoT Central application. You define the telemetry, state, properties, and commands for your device type.
Previously updated : 10/31/2022 Last updated : 03/01/2024
This section shows you how to import a device template from the catalog and how
1. On the **Select type** page, scroll down until you find the **ESP32-Azure IoT Kit** tile in the **Use a pre-configured device template** section. 1. Select the **ESP32-Azure IoT Kit** tile, and then select **Next: Review**. 1. On the **Review** page, select **Create**.
-The name of the template you created is **Sensor Controller**. The model includes components such as **Sensor Controller**, **SensorTemp**, and **Device Information interface**. Components define the capabilities of an ESP32 device. Capabilities include the telemetry, properties and commands.
+The name of the template you created is **Sensor Controller**. The model includes components such as **Sensor Controller**, **SensorTemp**, and **Device Information interface**. Components define the capabilities of an ESP32 device. Capabilities include the telemetry, properties, and commands.
:::image type="content" source="media/howto-set-up-template/device-template.png" alt-text="Screenshot that shows a Sensor controller device template." lightbox="media/howto-set-up-template/device-template.png"::: ## Autogenerate a device template
-You can also automatically create a device template from a connected device that's not yet assigned to a device template. IoT Central uses the telemetry and property values the device sends to infer a device model.
+You can also automatically create a device template from a connected device that's currently unassigned. IoT Central uses the telemetry and property values the device sends to infer a device model.
> [!NOTE] > Currently, this preview feature can't use telemetry and properties from components. It can only generate capabilities from root telemetry and properties.
The following steps show how to use this feature:
You can rename or delete a template from the template's editor page.
-After you've defined the template, you can publish it. Until the template is published, you can't connect a device to it, and it doesn't appear on the **Devices** page.
+After you define the template, you can publish it. Until the template is published, you can't connect a device to it, and it doesn't appear on the **Devices** page.
To learn more about modifying and versioning device templates, see [Edit an existing device template](howto-edit-device-template.md).
The model defines how your device interacts with your IoT Central application. C
To create a device model, you can: - Use IoT Central to create a custom model from scratch.-- Import a DTDL model from a JSON file. A device builder might have used Visual Studio Code to author a device model for your application.-- Select one of the devices from the device catalog. This option imports the device model that the manufacturer has published for this device. A device model imported like this is automatically published.
+- Import a DTDL model from a JSON file. A device builder might use Visual Studio Code to author a device model for your application.
+- Select one of the devices from the device catalog. This option imports the device model that the manufacturer published for this device. A device model imported like this is automatically published.
1. To view the model ID, select the root interface in the model and select **Edit identity**:
The following table shows the configuration settings for a command capability:
| Description | A description of the command capability. | | Request | If enabled, a definition of the request parameter, including: name, display name, schema, unit, and display unit. | | Response | If enabled, a definition of the command response, including: name, display name, schema, unit, and display unit. |
-|Initial value | The default parameter value. This is an IoT Central extension to DTDL. |
+| Initial value | The default parameter value. This setting is an IoT Central extension to DTDL. |
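For reference, a command capability defined in the designer corresponds to a DTDL command definition in the underlying model. The following is a hedged sketch with illustrative names, not a snippet taken from this article:

```json
{
  "@type": "Command",
  "name": "setTargetTemperature",
  "displayName": "Set target temperature",
  "request": {
    "name": "targetTemperature",
    "displayName": "Target temperature",
    "schema": "double"
  },
  "response": {
    "name": "result",
    "schema": "string"
  }
}
```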
To learn more about how devices implement commands, see [Telemetry, property, and command payloads > Commands and long running commands](../../iot/concepts-message-payloads.md#commands).
Cloud-to-device messages:
## Cloud properties
-Use cloud properties to store information about devices in IoT Central. Cloud properties are never sent to a device. For example, you can use cloud properties to store the name of the customer who has installed the device, or the device's last service date.
+Use cloud properties to store information about devices in IoT Central. Cloud properties are never sent to a device. For example, you can use cloud properties to store the name of the customer who installed the device, or the device's last service date.
:::image type="content" source="media/howto-set-up-template/cloud-properties.png" alt-text="Screenshot that shows how to add cloud properties.":::
Generating default views is a quick way to visualize your important device infor
- **Overview**: A view with device telemetry, displaying charts and metrics. - **About**: A view with device information, displaying device properties.
-After you've selected **Generate default views**, they're automatically added under the **Views** section of your device template.
+After you select **Generate default views**, they're automatically added under the **Views** section of your device template.
### Custom views
To add a view to a device template:
:::image type="content" source="media/howto-set-up-template/tile.png" alt-text="Screenshot that shows how to configure a tile." lightbox="media/howto-set-up-template/tile.png" :::
-To test your view, select **Configure preview device**. This feature lets you see the view as an operator sees it after it's published. Use this feature to validate that your views show the correct data. Choose from the following options:
+To test your view, select **Configure preview device**. This feature lets you see the view as an operator sees it after you publish the template. Use this feature to validate that your views show the correct data. Choose from the following options:
- No preview device.-- The real test device you've configured for your device template.
+- The real test device you configured for your device template.
- An existing device in your application, by using the device ID. ### Forms
Before you can connect a device that implements your device model, you must publ
To publish a device template, go to you your device template, and select **Publish**. After you publish a device template, an operator can go to the **Devices** page, and add either real or simulated devices that use your device template. You can continue to modify and save your device template as you're making changes. When you want to push these changes out to the operator to view under the **Devices** page, you must select **Publish** each time.-
-## Next steps
-
-A suggested next step is to read about how to [Make changes to an existing device template](howto-edit-device-template.md).
iot-central Howto Use Location Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-location-data.md
Title: Use location data in an Azure IoT Central solution
description: Learn how to use location data sent from a device connected to your IoT Central application. Plot location data on a map or create geofencing rules. Previously updated : 11/03/2022 Last updated : 03/04/2024
iot-central Overview Iot Central Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-admin.md
Title: Azure IoT Central application administration guide
description: How to administer your IoT Central application. Application administration includes users, organization, security, and automated deployments. Previously updated : 11/28/2022 Last updated : 03/04/2024
An IoT Central application lets you monitor and manage your devices, letting you
IoT Central application administration includes the following tasks: -- Create applications-- Manage security
+- Create applications.
+- Manage security.
- Configure application settings. - Upgrade applications. - Export and share applications.
You use an *application template* to create an application. An application templ
- Sample dashboards - Sample device templates - Simulated devices producing real-time data-- Pre-configured rules and jobs
+- Preconfigured rules and jobs
- Rich documentation including tutorials and how-tos
-You choose the application template when you create your application. You can't change the template an application uses after it's created.
+You choose the application template when you create your application. You can't change the template an application uses after you create it.
### Custom templates
An administrator can configure file uploads of an IoT Central application that l
An administrator can: -- Create a copy of an application if you just need a duplicate copy of your application. For example, you may need a duplicate copy for testing.
+- Create a copy of an application if you just need a duplicate copy of your application. For example, you might need a duplicate copy for testing.
- Create an application template from an existing application if you plan to create multiple copies. To learn more, see [Create and use a custom application template](howto-create-iot-central-application.md#create-and-use-a-custom-application-template).
iot-central Overview Iot Central Api Tour https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-api-tour.md
Version 2022-07-31 of the data plane API lets you manage the following resources
- Scheduled jobs - Users
-The preview devices API also lets you [manage dashboards](howto-manage-dashboards-with-rest-api.md), [manage deployment manifests](howto-manage-deployment-manifests-with-rest-api.md), and [manage data exports](howto-manage-data-export-with-rest-api.md).
+The preview devices API also lets you [manage dashboards](howto-manage-iot-central-with-rest-api.md#dashboards), [manage deployment manifests](howto-manage-deployment-manifests-with-rest-api.md), and [manage data exports](howto-manage-data-export-with-rest-api.md).
To get started with the data plane APIs, see [Tutorial: Use the REST API to manage an Azure IoT Central application](tutorial-use-rest-api.md).
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
Title: Device connectivity guide
description: This guide describes how IoT devices connect to and communicate with your IoT Central application. The article describes telemetry, properties, and commands. Previously updated : 02/13/2023 Last updated : 03/01/2024
An IoT device is a standalone device that connects directly to IoT Central. An I
An IoT Edge device connects directly to IoT Central. An IoT Edge device can send its own telemetry, report its properties, and respond to writable property updates and commands. IoT Edge modules process data locally on the IoT Edge device. An IoT Edge device can also act as an intermediary for other devices known as downstream devices. Scenarios that use IoT Edge devices include: -- Aggregate or filter telemetry before it's sent to IoT Central. This approach can help reduce the costs of sending data to IoT Central.
+- Aggregate or filter telemetry before sending it to IoT Central. This approach can help reduce the costs of sending data to IoT Central.
- Enable devices that can't connect directly to IoT Central to connect through the IoT Edge device. For example, a downstream device might use Bluetooth to connect to the IoT Edge device, which then connects over the internet to IoT Central. - Control downstream devices locally to avoid the latency associated with connecting to IoT Central over the internet.
To learn more, see [Add an Azure IoT Edge device to your Azure IoT Central appli
### Gateways
-A gateway device manages one or more downstream devices that connect to your IoT Central application. A gateway device can process the telemetry from the downstream devices before it's forwarded to your IoT Central application. Both IoT devices and IoT Edge devices can act as gateways. To learn more, see [Define a new IoT gateway device type in your Azure IoT Central application](./tutorial-define-gateway-device-type.md) and [How to connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md).
+A gateway device manages one or more downstream devices that connect to your IoT Central application. A gateway device can process the telemetry from the downstream devices before forwarding it to your IoT Central application. Both IoT devices and IoT Edge devices can act as gateways. To learn more, see [Define a new IoT gateway device type in your Azure IoT Central application](./tutorial-define-gateway-device-type.md) and [How to connect devices through an IoT Edge transparent gateway](how-to-connect-iot-edge-transparent-gateway.md).
## How devices connect
As you connect a device to IoT Central, it goes through the following stages: _r
### Register a device
-When you register a device with IoT Central, you're telling IoT Central the ID of a device that you want to connect to the application. Optionally at this stage, you can assign the device to a [device template](concepts-device-templates.md) that declares the capabilities of the device to your application.
+When you register a device with IoT Central, you tell IoT Central the ID of a device that you want to connect to the application. Optionally at this stage, you can assign the device to a [device template](concepts-device-templates.md) that declares the capabilities of the device to your application.
> [!TIP] > A device ID can contain letters, numbers, and the `-` character.
iot-central Overview Iot Central Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-security.md
Title: Azure IoT Central application security guide
description: This guide describes how to secure your IoT Central application including users, devices, API access, and authentication to other services for data export. Previously updated : 11/28/2022 Last updated : 03/04/2024
To learn more, see:
## Connect to a destination on a secure virtual network
-Data export in IoT Central lets you continuously stream device data to destinations such as Azure Blob Storage, Azure Event Hubs, Azure Service Bus Messaging. You may choose to lock down these destinations by using an Azure Virtual Network (VNet) and private endpoints. To enable IoT Central to connect to a destination on a secure VNet, configure a firewall exception. To learn more, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
+Data export in IoT Central lets you continuously stream device data to destinations such as Azure Blob Storage, Azure Event Hubs, and Azure Service Bus Messaging. You can choose to lock down these destinations by using an Azure Virtual Network and private endpoints. To enable IoT Central to connect to a destination on a secure virtual network, configure a firewall exception. To learn more, see [Export data to a secure destination on an Azure Virtual Network](howto-connect-secure-vnet.md).
## Audit logs
iot-central Quick Configure Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-configure-rules.md
Title: Quickstart - Configure Azure IoT Central rules and actions
description: In this quickstart, you learn how to configure telemetry-based rules and actions in your IoT Central application. Previously updated : 10/28/2022 Last updated : 01/03/2024
# Quickstart: Configure rules and actions for your device in Azure IoT Central
-Get started with IoT Central rules. IoT Central rules let you automate actions that occur in response to specific conditions. The example in this quickstart uses accelerometer telemetry from the phone to trigger a rule when the phone is turned over.
+In this quickstart, you configure an IoT Central rule. IoT Central rules let you automate actions that occur in response to specific conditions. The example in this quickstart uses accelerometer telemetry from the phone to trigger a rule when the phone is turned over.
In this quickstart, you:
In this quickstart, you:
## Create a telemetry-based rule
-The smartphone app sends telemetry that includes values from the accelerometer sensor. The sensor works slightly differently on Android and iOS devices:
+The smartphone app sends telemetry that includes values from the accelerometer sensor. The sensor works differently on Android and iOS devices:
# [Android](#tab/android)
To trigger the rule, make sure the smartphone app is sending data and then place
After your testing is complete, disable the rule to stop receiving the notification emails in your inbox.
-## Next steps
+## Next step
In this quickstart, you learned how to create a telemetry-based rule and add an action to it.
iot-central Quick Deploy Iot Central https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-deploy-iot-central.md
Title: Quickstart - Connect a device to Azure IoT Central
description: In this quickstart, you learn how to connect your first device to a new IoT Central application. This quickstart uses a smartphone app as an IoT device. Previously updated : 10/28/2022 Last updated : 03/01/2024
# Quickstart - Use your smartphone as a device to send telemetry to an IoT Central application
-Get started with an Azure IoT Central application and connect your first device. To get you started quickly, you install an app on your smartphone to act as the device. The app sends telemetry, reports properties, and responds to commands:
+In this quickstart, you create an Azure IoT Central application and connect your first device. To get you started quickly, you install an app on your smartphone to act as the device. The app sends telemetry, reports properties, and responds to commands:
:::image type="content" source="media/quick-deploy-iot-central/overview.png" alt-text="Overview of quickstart scenario connecting a smartphone app to IoT Central." border="false":::
In this quickstart, you:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
- You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md)
+ You should have at least **Contributor** access in your Azure subscription. If you created the subscription yourself, you're automatically an administrator with sufficient access. To learn more, see [What is Azure role-based access control?](../../role-based-access-control/overview.md).
- An Android or iOS smartphone on which you're able to install a free app from one of the official app stores.
IoT Central provides various industry-focused application templates to help you
| Field | Description | | -- | -- | | Subscription | The Azure subscription you want to use. |
- | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. |
+ | Resource group | The resource group you want to use. You can create a new resource group or use an existing one. |
| Resource name | A valid Azure resource name such as *my-contoso-app*. | | Application URL | A URL subdomain for your application such as *my-contoso-app*. The URL for an IoT Central application looks like `https://my-contoso-app.azureiotcentral.com`. | | Template | **Custom application** |
To register your device:
Keep this page open. In the next section, you scan this QR code using the smartphone app to connect it to IoT Central. > [!TIP]
-> The QR code contains the information, such as the registered device ID, your device needs to establish a connection to your IoT Central application. It saves you from the need to enter the connection information manually.
+> The QR code contains the information, such as the registered device ID, that your device needs to establish a connection to your IoT Central application. It saves you from the need to enter the connection information manually.
## Connect your device
To see the acknowledgment from the smartphone app, select **command history**.
[!INCLUDE [iot-central-clean-up-resources](../../../includes/iot-central-clean-up-resources.md)]
-## Next steps
+## Next step
In this quickstart, you created an IoT Central application and connected device that sends telemetry. Then you used a smartphone app as the IoT device that connects to IoT Central. Here's the suggested next step to continue learning about IoT Central:
iot-central Quick Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-export-data.md
Title: Quickstart - Export data from Azure IoT Central
description: In this quickstart, you learn how to use the data export feature in IoT Central to integrate with other cloud services. Previously updated : 10/28/2022 Last updated : 03/01/2024
ms.devlang: azurecli
# Quickstart: Export data from an IoT Central application
-Get started with IoT Central data export to integrate your IoT Central application with another cloud service such as Azure Data Explorer. Azure Data Explorer lets you store, query, and process the telemetry from devices such as the **IoT Plug and Play** smartphone app.
+In this quickstart, you configure your IoT Central application to export data to Azure Data Explorer. Azure Data Explorer lets you store, query, and process the telemetry from devices such as the **IoT Plug and Play** smartphone app.
In this quickstart, you:
To configure the data export:
:::image type="content" source="media/quick-export-data/data-transformation-query.png" alt-text="Screenshot that shows the data transformation query for the export." lightbox="media/quick-export-data/data-transformation-query.png":::
- If you want to see how the transformation works and experiment with the query, paste the following sample telemetry message into **1. Add your input message**:
+ To see how the transformation works and experiment with the query, paste the following sample telemetry message into **1. Add your input message**:
```json {
To query the exported telemetry:
| render timechart ```
-You may need to wait for several minutes to collect enough data. Try holding your phone in different orientations to see the telemetry values change:
+You might need to wait for several minutes to collect enough data. To see the telemetry values change, try holding your phone in different orientations:
:::image type="content" source="media/quick-export-data/acceleration-plot.png" alt-text="Screenshot of the query results for the accelerometer telemetry." lightbox="media/quick-export-data/acceleration-plot.png":::
To remove the Azure Data Explorer instance from your subscription and avoid bein
az group delete --name IoTCentralExportData-rg ```
-## Next steps
+## Next step
In this quickstart, you learned how to continuously export data from IoT Central to another Azure service.
iot-central Tutorial Connect Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-connect-iot-edge-device.md
Title: Tutorial - Connect an IoT Edge device to your application
description: This tutorial shows you how to register, provision, and connect an IoT Edge device to your IoT Central application. Previously updated : 12/14/2022 Last updated : 03/04/2024
Select **Review + create** and then **Create**. Wait for the deployment to finis
To verify the deployment of the IoT Edge device was successful:
-1. In your IoT Central application, navigate to the **Devices** page. Check the status of the **Environmental sensor - 001** device is **Provisioned**. You may need to wait for a few minutes while the device connects.
+1. In your IoT Central application, navigate to the **Devices** page. Check the status of the **Environmental sensor - 001** device is **Provisioned**. You might need to wait for a few minutes while the device connects.
1. Navigate to the **Environmental sensor - 001** device.
At the moment, the IoT Edge device doesn't have a device template assigned, so a
## Add a device template
-A deployment manifest may include definitions of properties exposed by a module. For example, the configuration in the deployment manifest for the **SimulatedTemperatureSensor** module includes the following:
+A deployment manifest can include definitions of properties exposed by a module. For example, the configuration in the deployment manifest for the **SimulatedTemperatureSensor** module includes the following:
```json "SimulatedTemperatureSensor": {
A deployment manifest can only define module properties, not commands or telemet
1. Navigate to the **SimulatedTemperatureSensor** module in the **Environmental sensor** device template.
-1. Select **Add inherited interface** (you may need to select **...** to see this option). Select **Import interface**. Then import the *EnvironmentalSensorTelemetry.json* file you previously downloaded.
+1. Select **Add inherited interface** (you might need to select **...** to see this option). Select **Import interface**. Then import the *EnvironmentalSensorTelemetry.json* file you previously downloaded.
The module now includes a **telemetry** interface that defines **machine**, **ambient**, and **timeCreated** telemetry types:
iot-central Tutorial Create Telemetry Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-create-telemetry-rules.md
Title: Tutorial - Create and manage rules in Azure IoT Central
description: This tutorial shows you how Azure IoT Central rules let you monitor your devices in near real time and automatically invoke actions when a rule triggers. Previously updated : 10/27/2022 Last updated : 03/04/2024
iot-central Tutorial Define Gateway Device Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-define-gateway-device-type.md
Title: Tutorial - Define an Azure IoT Central gateway device type
description: This tutorial shows you, as a builder, how to define a new IoT gateway device type in your Azure IoT Central application. Previously updated : 10/26/2022 Last updated : 03/04/2024 - # Tutorial - Define a new IoT gateway device type in your Azure IoT Central application
In this tutorial, you create a **Smart Building** gateway device template. A **S
:::image type="content" source="media/tutorial-define-gateway-device-type/gatewaypattern.png" alt-text="Diagram that shows the relationship between a gateway device and its downstream devices." border="false":::
-As well as enabling downstream devices to communicate with your IoT Central application, a gateway device can also:
+A gateway device can also:
* Send its own telemetry, such as temperature. * Respond to writable property updates made by an operator. For example, an operator could change the telemetry send interval.
You now have device templates for the two downstream device types:
## Create a gateway device template
-In this tutorial you create a device template for a gateway device from scratch. You use this template later to create a simulated gateway device in your application.
+In this tutorial, you create a device template for a gateway device from scratch. You use this template later to create a simulated gateway device in your application.
To add a new gateway device template to your application:
To publish the gateway device template:
After a device template is published, it's visible on the **Devices** page and to the operator. The operator can use the template to create device instances or establish rules and monitoring. Editing a published template could affect behavior across the application.
-To learn more about modifying a device template after it's published, see [Edit an existing device template](howto-edit-device-template.md).
+To learn more about modifying a device template after you publish it, see [Edit an existing device template](howto-edit-device-template.md).
## Create the simulated devices
iot-central Tutorial Industrial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-industrial-end-to-end.md
- Title: Tutorial - Explore an Azure IoT Central industrial scenario
-description: This tutorial shows you how to deploy an end-to-end industrial IoT solution by using IoT Edge, IoT Central, and Azure Data Explorer.
-- Previously updated : 07/10/2023----
-#Customer intent: As a solution builder, I want to deploy a complete industrial IoT solution that uses IoT Central so that I understand how IoT Central enables industrial IoT scenarios.
--
-# Explore an industrial IoT scenario with IoT Central
-
-The solution shows how to use Azure IoT Central to ingest industrial IoT data from edge resources and then export the data to Azure Data Explorer for further analysis. The sample deploys and configures resources such as:
--- An Azure virtual machine to host the Azure IoT Edge runtime.-- An IoT Central application to ingest OPC-UA data, transform it, and then export it to Azure Data Explorer.-- An Azure Data Explorer environment to store, manipulate, and explore the OPC-UA data.-
-The following diagram shows the data flow in the scenario and highlights the key capabilities of IoT Central relevant to industrial solutions:
--
-The sample uses a custom tool to deploy and configure all of the resources. The tool shows you what resources it deploys and provides links to further information.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Deploy an end-to-end industrial IoT solution
-> * Use the **IoT Central Solution Builder** tool to deploy a solution
-> * Create a customized deployment
-
-## Prerequisites
--- Azure subscription that you access using a [work or school account](https://techcommunity.microsoft.com/t5/itops-talk-blog/what-s-the-difference-between-a-personal-microsoft-account-and-a/ba-p/2241897). Currently, you can't use a Microsoft account to deploy the solution with the **IoT Central Solution Builder** tool.-- Local machine to run the **IoT Central Solution Builder** tool. Prebuilt binaries are available for Windows and macOS.-- If you need to build the **IoT Central Solution Builder** tool instead of using one of the prebuilt binaries, you need a local Git installation.-- Text editor. If you want to edit the configuration file to customize your solution.-
-In this tutorial, you use the Azure CLI to create an app registration in Microsoft Entra ID:
--
-## Setup
-
-Complete the following tasks to prepare the tool to deploy your solution:
--- Create a Microsoft Entra app registration-- Install the **IoT Central Solution Builder** tool-- Configure the **IoT Central Solution Builder** tool-
-To create an Active Directory app registration in your Azure subscription:
--- If you're running the Azure CLI on your local machine, sign in to your Azure tenant:-
- ```azurecli
- az login
- ```
-
- > [!TIP]
- > If you're using the Azure Cloud Shell, you're signed in automatically. If you want to use a different subscription, use the [az account](/cli/azure/account?view=azure-cli-latest#az-account-set&preserve-view=true) command.
--- Make a note of the `id` value from the previous command. This value is your *subscription ID*. You use this value later in the tutorial.--- Make a note of the `tenantId` value from the previous command. This value is your *tenant ID*. You use this value later in the tutorial.--- To create an Active Directory app registration, run the following command:-
- ```azurecli
- az ad app create \
- --display-name "IoT Central Solution Builder" \
- --enable-access-token-issuance false \
- --enable-id-token-issuance false \
- --is-fallback-public-client false \
- --public-client-redirect-uris "msald38cef1a-9200-449d-9ce5-3198067beaa5://auth" \
- --required-resource-accesses "[{\"resourceAccess\":[{\"id\":\"00d678f0-da44-4b12-a6d6-c98bcfd1c5fe\",\"type\":\"Scope\"}],\"resourceAppId\":\"2746ea77-4702-4b45-80ca-3c97e680e8b7\"},{\"resourceAccess\":[{\"id\":\"73792908-5709-46da-9a68-098589599db6\",\"type\":\"Scope\"}],\"resourceAppId\":\"9edfcdd9-0bc5-4bd4-b287-c3afc716aac7\"},{\"resourceAccess\":[{\"id\":\"41094075-9dad-400e-a0bd-54e686782033\",\"type\":\"Scope\"}],\"resourceAppId\":\"797f4846-ba00-4fd7-ba43-dac1f8f63013\"},{\"resourceAccess\":[{\"id\":\"e1fe6dd8-ba31-4d61-89e7-88639da4683d\",\"type\":\"Scope\"}],\"resourceAppId\":\"00000003-0000-0000-c000-000000000000\"}]" \
- --sign-in-audience "AzureADandPersonalMicrosoftAccount"
- ```
-
- > [!NOTE]
- > The display name must be unique in your subscription.
--- Make a note of the `appId` value from the output of the previous command. This value is your *application (client) ID*. You use this value later in the tutorial.-
-To install the **IoT Central Solution Builder** tool:
--- If you're using Windows, download and run the latest setup file from the [releases](https://github.com/Azure-Samples/iotc-solution-builder/releases) page.-- For other platforms, clone the [iotc-solution-builder](https://github.com/Azure-Samples/iotc-solution-builder) GitHub repository and follow the instructions in the readme file to [build the tool](https://github.com/Azure-Samples/iotc-solution-builder#build-the-tool).-
-To configure the **IoT Central Solution Builder** tool:
--- Run the **IoT Central Solution Builder** tool.-- Select **Action > Edit Azure config**:-
- :::image type="content" source="media/tutorial-industrial-end-to-end/iot-central-solution-builder-azure-config.png" alt-text="Screenshot that shows the edit Azure config menu option in the IoT solution builder tool.":::
--- Enter the application ID, subscription ID, and tenant ID that you made a note of previously. Select **OK**.--- Select **Action > Sign in**. Sign in with the same credentials you used to create the Active Directory app registration.-
-The **IoT Central Solution Builder** tool is now ready to use to deploy your industrial IoT solution.
-
-## Deploy the solution
-
-Use the **IoT Central Solution Builder** tool to deploy the Azure resources for the solution. The tool deploys and configures the resources to create a running solution.
-
-Download the [adxconfig-opcpub.json](https://raw.githubusercontent.com/Azure-Samples/iotc-solution-builder/main/iotedgeDeploy/configs/adxconfig-opcpub.json) configuration file. This configuration file deploys the required resources.
-
-To load the configuration file for the solution to deploy:
-- In the tool, select **Open Configuration**.-- Select the `adxconfig-opcpub.json` file you downloaded.-- The tool displays the deployment steps:-
- :::image type="content" source="media/tutorial-industrial-end-to-end/iot-central-solution-builder-steps.png" alt-text="Screenshot that shows the deployment steps defined in the configuration file loaded into the tool.":::
-
- > [!TIP]
- > Select any step to view relevant documentation.
-
-Each step uses either an ARM template or REST API call to deploy or configure resources. Open the `adxconfig-opcpub.json` to see the details of each step.
-
-To deploy the solution:
--- Select **Start Provisioning**.-- Optionally, change the suffix and Azure location to use. The suffix is appended to the name of all the resources the tool creates to help you identify them in the Azure portal.-- Select **Configure**.-- The tool shows its progress as it deploys the solution.-
- > [!TIP]
- > The tool takes about 15 minutes to deploy and configure all the resources.
--- Navigate to the Azure portal and sign in with the same credentials you used to sign in to the tool.-- Find the resource group the tool created. The name of the resource group is **iotc-rg-{suffix from tool}**. In the following screenshot, the suffix used by the tool is **iotcsb29472**:-
- :::image type="content" source="media/tutorial-industrial-end-to-end/azure-portal-resources.png" alt-text="Screenshot that shows the deployed resources in the Azure portal.":::
-
-To customize the deployed solution, you can edit the `adxconfig-opcpub.json` configuration file and then run the tool.
-
-## Walk through the solution
-
-The configuration file run by the tool defines the Azure resources to deploy and any required configuration. The tool runs the steps in the configuration file in sequence. Some steps are dependent on previous steps.
-
-The following sections describe the resources you deployed and what they do. The order here follows the device data as it flows from the IoT Edge device to IoT Central, and then on to Azure Data Explorer:
--
-### IoT Edge
-
-The tool deploys the IoT Edge 1.2 runtime to an Azure virtual machine. The installation script that the tool runs edits the IoT Edge *config.toml* file to add the following values from IoT Central:
--- **Id scope** for the IoT Central app.-- **Device Id** for the gateway device registered in the IoT Central app.-- **Symmetric key** for the gateway device registered in the IoT Central app.-
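A minimal sketch of the DPS provisioning section that such a script writes into *config.toml* (values are placeholders, and the exact keys depend on the IoT Edge version in use):

```toml
# Sketch: provisioning values taken from the gateway device in the IoT Central app.
[provisioning]
source = "dps"
global_endpoint = "https://global.azure-devices-provisioning.net"
id_scope = "0ne00XXXXXX"                      # ID scope of the IoT Central app

[provisioning.attestation]
method = "symmetric_key"
registration_id = "industrial-connect-gw"     # device ID registered in IoT Central
symmetric_key = { value = "<base64 device key>" }
```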
-The IoT Edge deployment manifest defines four custom modules:
--- [azuremetricscollector](../../iot-edge/how-to-collect-and-transport-metrics.md?view=iotedge-2020-11&tabs=iotcentral&preserve-view=true) - sends metrics from the IoT Edge device to the IoT Central application.-- [opcplc](https://github.com/Azure-Samples/iot-edge-opc-plc) - generates simulated OPC-UA data.-- [opcpublisher](https://github.com/Azure/Industrial-IoT/tree/main/docs/opc-publisher) - forwards OPC-UA data from an OPC-UA server to the **miabgateway**.-- [miabgateway](https://github.com/iot-for-all/iotc-miab-gateway) - gateway to send OPC-UA data to your IoT Central app and handle commands sent from your IoT Central app.-
-You can see the deployment manifest in the tool configuration file. The tool assigns the deployment manifest to the IoT Edge device it registers in your IoT Central application.
-
-To learn more about how to use the REST API to deploy and configure the IoT Edge runtime, see [Run Azure IoT Edge on Ubuntu Virtual Machines](../../iot-edge/how-to-install-iot-edge-ubuntuvm.md).
-
-### Simulated OPC-UA telemetry
-
-The [opcplc](https://github.com/Azure-Samples/iot-edge-opc-plc) module on the IoT Edge device generates simulated OPC-UA data for the solution. This module implements an OPC-UA server with multiple nodes that generate random data and anomalies. The module also lets you configure user defined nodes.
-
-The [opcpublisher](https://github.com/Azure/Industrial-IoT/tree/main/docs/opc-publisher) module on the IoT Edge device forwards OPC-UA data from an OPC-UA server to the **miabgateway** module.
-
-### IoT Central application
-
-The IoT Central application in the solution:
--- Provides a cloud-hosted endpoint to receive OPC-UA data from the IoT Edge device.-- Lets you manage and control the connected devices and gateways.-- Transforms the OPC-UA data it receives and exports it to Azure Data Explorer.-
-The configuration file uses a control plane [REST API to create and manage IoT Central applications](howto-manage-iot-central-with-rest-api.md).
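As a rough sketch of that control-plane call pattern (resource names, location, SKU, and API version here are placeholders, not the tool's actual values):

```azurecli
az rest --method put \
  --url "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/iotc-rg-<suffix>/providers/Microsoft.IoTCentral/iotApps/<app-name>?api-version=2021-06-01" \
  --body '{
    "location": "westus",
    "sku": { "name": "ST2" },
    "properties": { "displayName": "Industrial sample", "subdomain": "<app-name>" }
  }'
```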
-
-### Device templates and devices
-
-The solution uses a single device template called **Manufacturing In A Box Gateway** in your IoT Central application. The device template models the IoT Edge gateway and includes the **Manufacturing In A Box Gateway** and **Azure Metrics Collector** modules.
-
-The **Manufacturing In A Box Gateway** module includes the following interfaces:
--- **Manufacturing In A Box Gateway Device Interface**. This interface defines read-only properties and events such as **Processor architecture**, **Operating system**, **Software version**, and **Module Started** that the device reports to IoT Central. The interface also defines a **Restart Gateway Module** command and a writable **Debug Telemetry** property.-- **Manufacturing In A Box Gateway Module Interface**. This interface lets you manage the downstream OPC-UA servers connected to the gateway. The interface includes commands such as the **Provision OPC Device** command that the tool calls during the configuration process.-
-There are two devices registered in your IoT Central application:
--- **opc-anomaly-device**. This device isn't assigned to a device template. The device represents the OPC-UA server implemented in the **opcplc** IoT Edge module. This OPC-UA server generates simulated OPC-UA data. Because the device isn't associated with a device template, IoT Central marks the telemetry as **Unmodeled**.-- **industrial-connect-gw**. This device is assigned to the **Manufacturing In A Box Gateway** device template. Use this device to monitor the health of the gateway and manage the downstream OPC-UA servers. The configuration file run by the tool calls the **Provision OPC Device** command to provision the downstream OPC-UA server.-
-The configuration file uses the following data plane REST APIs to add the device templates and devices to the IoT Central application, register the devices, and retrieve the device provisioning authentication keys:
--- [How to use the IoT Central REST API to manage device templates](howto-manage-device-templates-with-rest-api.md).-- [How to use the IoT Central REST API to control devices](howto-control-devices-with-rest-api.md).-
-You can also use the IoT Central UI or CLI to manage the devices and gateways in your solution. For example, to check the **opc-anomaly-device** is sending data, navigate to the **Raw data** view for the device in the IoT Central application. If the device is sending telemetry, you see telemetry messages in the **Raw data** view. If there are no telemetry messages, restart the Azure virtual machine in the Azure portal.
-
-> [!TIP]
-> You can find the Azure virtual machine with IoT Edge runtime in the resource group created by the configuration tool.
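If you prefer the Azure CLI to the portal for the restart, a sketch of the command (the VM name is whatever the tool generated in your resource group):

```azurecli
az vm restart --resource-group iotc-rg-<suffix> --name <iot-edge-vm-name>
```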
-
-### Data export configuration
-
-The solution uses the IoT Central data export capability to export OPC-UA data. IoT Central data export continuously sends filtered telemetry received from the OPC-UA server to an Azure Data Explorer environment. The filter ensures that only data from the OPC-UA is exported. The data export uses a [transformation](howto-transform-data-internally.md) to map the raw telemetry into a tabular structure suitable for Azure Data Explorer to ingest. The following snippet shows the transformation query:
-
-```jq
-{
- applicationId: .applicationId,
- deviceId: .device.id,
- deviceName: .device.name,
- templateName: .device.templateName,
- enqueuedTime: .enqueuedTime,
- telemetry: .telemetry | map({ key: .name, value: .value }) | from_entries,
- }
-```
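For example (values are illustrative), an incoming `telemetry` array such as `[{"name": "DipData", "value": 12.3}, {"name": "SpikeData", "value": 0.7}]` collapses into a flat object in the exported record:

```json
{
  "telemetry": {
    "DipData": 12.3,
    "SpikeData": 0.7
  }
}
```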
-
-The configuration file uses the data plane REST API to create the data export configuration in IoT Central. To learn more, see [How to use the IoT Central REST API to manage data exports](howto-manage-data-export-with-rest-api.md).
-
-### Azure Data Explorer
-
-The solution uses Azure Data Explorer to store and analyze the OPC-UA telemetry. The solution uses two tables and a function to process the data as it arrives:
--- The **rawOpcData** table receives the data from the IoT Central data export. The solution configures this table for streaming ingestion.-- The **opcDeviceData** table stores the transformed data.-- The **extractOpcTagData** function processes the data as it arrives in the **rawOpcData** table and adds transformed records to the **opcDeviceData** table.-
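A rough sketch of how such a pipeline is typically wired up in Kusto; the function body here is an assumption for illustration, not the solution's actual implementation:

```kusto
// Illustrative only: expand the telemetry property bag into one row per OPC-UA tag.
.create-or-alter function extractOpcTagData() {
    rawOpcData
    | mv-expand telemetry
    | extend tag = tostring(bag_keys(telemetry)[0])
    | project deviceId,
              enqueuedTime,
              sourceTimestamp = enqueuedTime,
              tag,
              value = todouble(telemetry[tag])
}

// Run the function automatically as new rows land in rawOpcData.
.alter table opcDeviceData policy update
@'[{"IsEnabled": true, "Source": "rawOpcData", "Query": "extractOpcTagData()"}]'
```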
-You can query the transformed data in the **opcDeviceData** table. For example:
-
-```kusto
-opcDeviceData
-| where enqueuedTime > ago(1d)
-| where tag=="DipData"
-| summarize avgValue = avg(value) by deviceId, bin(sourceTimestamp, 15m)
-| render timechart
-```
-
-The configuration file uses a control plane REST API to deploy the Azure Data Explorer cluster and data plane REST APIs to create and configure the database.
-
-## Customize the solution
-
-The **IoT Central Solution Builder** tool uses a JSON configuration file to define the sequence of steps to run. To customize the solution, edit the configuration file. You can't modify an existing solution with the tool; you can only deploy a new solution.
-
-The example configuration file adds all the resources to the same resource group in your solution. To remove a deployed solution, delete the resource group.
-
-Each step in the configuration file defines one of the following actions:
--- Use an Azure Resource Manager template to deploy an Azure resource. For example, the sample configuration file uses a Resource Manager template to deploy the Azure virtual machine that hosts the IoT Edge runtime.-- Make a REST API call to deploy or configure a resource. For example, the sample configuration file uses REST APIs to create and configure the IoT Central application.-
-## Tidy up
-
-To avoid unnecessary charges, delete the resource group created by the tool when you've finished exploring the solution.
-
-## Next steps
-
-In this tutorial, you learned how to deploy an end-to-end industrial IoT scenario that uses IoT Central. To learn more about industrial IoT solutions with IoT Central, see:
-
-> [!div class="nextstepaction"]
-> [Industrial IoT patterns with Azure IoT Central](./concepts-iiot-architecture.md)
iot-central Tutorial Use Device Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-device-groups.md
Title: Tutorial - Use Azure IoT Central device groups
description: Tutorial - Learn how to use device groups to analyze telemetry from devices in your Azure IoT Central application. Previously updated : 10/26/2022 Last updated : 03/04/2024
Add two cloud properties to the **Sensor Controller** model in the device templa
1. Select **Save** to save your changes.
-Add a new form to the device template to manage the device:
+To manage the device, add a new form to the device template:
1. Select the **Views** node, and then select the **Editing device and cloud data** tile to add a new view.
To analyze the telemetry for a device group:
1. Select the **Contoso devices** device group you created. Then add both the **Temperature** and **SensorHumid** telemetry types.
- Use the ellipsis icons next to the telemetry types to select an aggregation type. The default is **Average**. Use **Group by** to change how the aggregate data is shown. For example, if you split by device ID you see a plot for each device when you select **Analyze**.
+ To select an aggregation type, use the ellipsis icons next to the telemetry types. The default is **Average**. Use **Group by** to change how the aggregate data is shown. For example, if you split by device ID you see a plot for each device when you select **Analyze**.
1. Select **Analyze** to view the average telemetry values.
iot-central Tutorial Use Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-rest-api.md
Title: Tutorial - Use the REST API to manage an application
description: In this tutorial you use the REST API to create and manage an IoT Central application, add a device, and configure data export. Previously updated : 04/26/2023 Last updated : 03/04/2024
This tutorial shows you how to use the Azure IoT Central REST API to create and
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Authorize the REST API.
-> * Create an IoT Central application.
-> * Add a device to your application.
-> * Query and control the device.
-> * Set up data export.
-> * Delete an application.
+> - Authorize the REST API.
+> - Create an IoT Central application.
+> - Add a device to your application.
+> - Query and control the device.
+> - Set up data export.
+> - Delete an application.
## Prerequisites
To import the collection, open Postman and select **Import**. In the **Import**
Your workspace now contains the **IoT Central REST tutorial** collection. This collection includes all the APIs you use in the tutorial.
-The collection uses variables to parameterize the REST API calls. To see the variables, select the `...` next to **IoT Central REST tutorial** and select **Edit**. Then select **Variables**. Many of the variables are either set automatically as you make the API calls or have predetermined values.
+The collection uses variables to parameterize the REST API calls. To see the variables, select the `...` next to **IoT Central REST tutorial** and select **Edit**. Then select **Variables**. Many of the variables are either set automatically as you make the API calls or have preset values.
## Authorize the REST API
To connect the **IoT Plug and Play** app to your Iot Central application:
To verify the device is now provisioned, you can use the REST API: 1. In Postman, open the **IoT Central REST tutorial** collection, and select the **Get a device** request.
-1. Select **Send**. In the response, notice that the device is now provisioned. IoT Central has also assigned a device template to the device based on the model ID sent by the device.
+1. Select **Send**. In the response, notice that the device is now provisioned. IoT Central also assigned a device template to the device based on the model ID sent by the device.
You can use the REST API to manage device templates in the application. For example, to view the device templates in the application:
You can use the REST API to call device commands. The following request calls a
## Export telemetry
-You can use the RESP API to configure and manage your IoT Central application. The following steps show you how to configure data export to send telemetry values to a webhook. To simplify the setup, this article uses a **RequestBin** webhook as the destination. **RequestBin** is a third-party service.
+You can use the REST API to configure and manage your IoT Central application. The following steps show you how to configure data export to send telemetry values to a webhook. To simplify the setup, this article uses a **RequestBin** webhook as the destination. **RequestBin** is a non-Microsoft service.
To create your test endpoint for the data export destination:
To configure the export definition in your IoT Central application by using the
1. In Postman, open the **IoT Central REST tutorial** collection, and select the **Create a telemetry export definition** request. 1. Select **Send**. Notice that the status is **Not started**.
-It may take a couple of minutes for the export to start. To check the status of the export by using the REST API:
+It might take a couple of minutes for the export to start. To check the status of the export by using the REST API:
1. In Postman, open the **IoT Central REST tutorial** collection, and select the **Get an export by ID** request. 1. Select **Send**. When the status is **healthy**, IoT Central is sending telemetry to your webhook.
iot-hub-device-update Delta Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/delta-updates.md
# How to understand and use delta updates in Device Update for IoT Hub (Preview)
-Delta updates allow you to generate a small update that represents only the changes between two full updates - a source image and a target image. This approach is ideal for reducing the bandwidth used to download an update to a device, particularly if there have been only a few changes between the source and target updates.
+Delta updates allow you to generate a small update that represents only the changes between two full updates - a source image and a target image. This approach is ideal for reducing the bandwidth used to download an update to a device, particularly if there are only a few changes between the source and target updates.
>[!NOTE] >The delta update feature is currently in [public preview](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## Requirements for using delta updates in Device Update for IoT Hub -- The source and target update files must be SWU (SWUpdate) format.
+- The source and target update files must be SWUpdate (SWU) format.
- Within each SWUpdate file, there must be a raw image that uses the Ext2, Ext3, or Ext4 filesystem. That image can be compressed with gzip or zstd.-- The delta generation process recompresses the target SWU update using zstd compression in order to produce an optimal delta. You'll import this recompressed target SWU update to the Device Update service along with the generated delta update file.
+- The delta generation process recompresses the target SWU update using zstd compression in order to produce an optimal delta. Import this recompressed target SWU update to the Device Update service along with the generated delta update file.
- Within SWUpdate on the device, zstd decompression must also be enabled.
- - This requires using [SWUpdate 2019.11](https://github.com/sbabic/swupdate/releases/tag/2019.11) or later.
+ - This process requires using [SWUpdate 2019.11](https://github.com/sbabic/swupdate/releases/tag/2019.11) or later.
## Configure a device with Device Update agent and delta processor component
In order for your device to download and install delta updates from the Device U
### Device Update agent
-The Device Update agent _orchestrates_ the update process on the device, including download, install, and restart actions. Add the Device Update agent to a device and configure it for use. You'll need the 1.0 or later version of the agent. For instructions, see [Device Update agent provisioning](device-update-agent-provisioning.md).
+The Device Update agent _orchestrates_ the update process on the device, including download, install, and restart actions. Add the Device Update agent to a device and configure it for use. Use agent version 1.0 or later. For instructions, see [Device Update agent provisioning](device-update-agent-provisioning.md).
### Update handler
An update handler integrates with the Device Update agent to perform the actual
### Delta processor
-The delta processor re-creates the original SWU image file on your device after the delta file has been downloaded, so your update handler can install the SWU file. You'll find all the delta processor code in the [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo.
+The delta processor re-creates the original SWU image file on your device after the delta file is downloaded, so your update handler can install the SWU file. The delta processor code is available in the [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo.
To add the delta processor component to your device image and configure it for use, follow the README.md instructions to use CMAKE to build the delta processor from source. From there, install the shared object (libadudiffapi.so) directly by copying it to the `/usr/lib` directory:
sudo ldconfig
## Add a source SWU image file to your device
-After a delta update has been downloaded to a device, it must be compared against a valid _source SWU file_ that has been previously cached on the device in order to be re-created into a full image. The simplest way to populate this cached image is to deploy a full image update to the device via the Device Update service (using the existing [import](import-update.md) and [deployment](deploy-update.md) processes). As long as the device has been configured with the Device Update agent (version 1.0 or later) and delta processor, the installed SWU file is cached automatically by the Device Update agent for later delta update use.
+After a delta update is downloaded to a device, it must be compared against a valid _source SWU file_ that was previously cached on the device. This process is needed for the delta update to re-create the full target image. The simplest way to populate this cached image is to deploy a full image update to the device via the Device Update service (using the existing [import](import-update.md) and [deployment](deploy-update.md) processes). As long as the device is configured with the Device Update agent (version 1.0 or later) and delta processor, the Device Update agent caches the installed SWU file automatically for later delta update use.
-If you instead want to directly pre-populate the source image on your device, the path where the image is expected is:
+If you instead want to directly prepopulate the source image on your device, the path where the image is expected is:
`[BASE_SOURCE_DOWNLOAD_CACHE_PATH]/sha256-[ENCODED HASH]`
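The agent manages this cache automatically, but if you're prepopulating it by hand you need the encoded hash of the source file. A sketch, under the assumption that the encoding is a base64 rendering of the binary SHA-256 (verify against the Device Update agent documentation):

```bash
# Compute a base64-encoded SHA-256 of the source SWU image (assumed encoding).
openssl dgst -sha256 -binary my-source-image.swu | openssl base64 -A
```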
By default, `BASE_SOURCE_DOWNLOAD_CACHE_PATH` is the path `/var/lib/adu/sdc/[pro
### Environment prerequisites
-Before creating deltas with DiffGen, several things need to be downloaded and/or installed on the environment machine. We recommend a Linux environment and specifically Ubuntu 20.04 (or WSL if natively on Windows).
+Before creating deltas with DiffGen, several things need to be downloaded and/or installed on the environment machine. We recommend a Linux environment and specifically Ubuntu 20.04 (or Windows Subsystem for Linux if natively on Windows).
The following table provides a list of the content needed, where to retrieve them, and the recommended installation if necessary: | Binary Name | Where to acquire | How to install | |--|--|--| | DiffGen | [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta) GitHub repo | From the root folder, select the _Microsoft.Azure.DeviceUpdate.Diffs.[version].nupkg_ file. [Learn more about NuGet packages](/nuget/).|
-| .NET (Runtime) | Via Terminal / Package Managers | [Instructions for Linux](/dotnet/core/install/linux). Only the Runtime is required. |
+| .NETCore Runtime, version 6.0.0 | Via Terminal / Package Managers | [Instructions for Linux](/dotnet/core/install/linux). Only the Runtime is required. |
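As one way to satisfy the runtime requirement on Ubuntu 20.04, assuming the Microsoft package repository is already registered (the linked instructions are authoritative):

```bash
sudo apt-get update
sudo apt-get install -y dotnet-runtime-6.0
```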
### Dependencies
The DiffGen tool is run with several arguments. All arguments are required, and
- The script recompress_tool.py runs to create the file [recompressed_target_archive], which then is used instead of [target_archive] as the target file for creating the diff. - The image files within [recompressed_target_archive] are compressed with zstd.
-If your SWU files are signed (likely), you'll need another argument as well:
+If your SWU files are signed (likely), you need another argument as well:
`DiffGenTool [source_archive] [target_archive] [output_path] [log_folder] [working_folder] [recompressed_target_archive] "[signing_command]"` -- In addition to using [recompressed_target_archive] as the target file, providing a signing command string parameter runs recompress_and_sign_tool.py to create the file [recompressed_target_archive] and have the sw-description file within the archive signed (meaning a sw-description.sig file is present). You can use the sample `sign_file.sh` script from the [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta/tree/main/src/scripts/signing_samples/openssl_wrapper) GitHub repo. Open the script, edit it to add the path to your private key file, then save it. See the examples section below for sample usage.
+- In addition to using [recompressed_target_archive] as the target file, providing a signing command string parameter runs recompress_and_sign_tool.py to create the file [recompressed_target_archive] and have the sw-description file within the archive signed (meaning a sw-description.sig file is present). You can use the sample `sign_file.sh` script from the [Azure/iot-hub-device-update-delta](https://github.com/Azure/iot-hub-device-update-delta/tree/main/src/scripts/signing_samples/openssl_wrapper) GitHub repo. Open the script, edit it to add the path to your private key file, then save it. See the examples section for sample usage.
The following table describes the arguments in more detail:
The following table describes the arguments in more detail:
|--|--| | [source_archive] | This is the image that the delta is based against when creating the delta. _Important_: this image must be identical to the image that is already present on the device (for example, cached from a previous update). | | [target_archive] | This is the image that the delta updates the device to. |
-| [output_path] | The path (including the desired name of the delta file being generated) on the host machine where the delta file is placed after creation. If the path doesn't exist, the directory is created by the tool. |
-| [log_folder] | The path on the host machine where logs creates. We recommend defining this location as a sub folder of the output path. If the path doesn't exist, it is created by the tool. |
-| [working_folder] | The path on the machine where collateral and other working files are placed during the delta generation. We recommend defining this location as a subfolder of the output path. If the path doesn't exist, it is created by the tool. |
+| [output_path] | The path (including the desired name of the delta file being generated) on the host machine where the delta file is placed after creation. If the path doesn't exist, the tool creates it. |
+| [log_folder] | The path on the host machine where logs are created. We recommend defining this location as a subfolder of the output path. If the path doesn't exist, the tool creates it. |
+| [working_folder] | The path on the machine where collateral and other working files are placed during the delta generation. We recommend defining this location as a subfolder of the output path. If the path doesn't exist, the tool creates it. |
| [recompressed_target_archive] | The path on the host machine where the recompressed target file is created. This file is used instead of <target_archive> as the target file for diff generation. If this path exists before calling DiffGenTool, the path is overwritten. We recommend defining this path as a file in the subfolder of the output path. | | "[signing_command]" _(optional)_ | A customizable command used for signing the sw-description file within the recompressed archive file. The sw-description file in the recompressed archive is used as an input parameter for the signing command; DiffGenTool expects the signing command to create a new signature file, using the name of the input with `.sig` appended. Surrounding the parameter in double quotes is needed so that the whole command is passed in as a single parameter. Also, avoid putting the '~' character in a key path used for signing, and use the full home path instead (for example, use /home/USER/keys/priv.pem instead of ~/keys/priv.pem). | ### DiffGen examples
-In the examples below, we're operating out of the /mnt/o/temp directory (in WSL):
+In these examples, we're operating out of the /mnt/o/temp directory (in WSL):
_Creating diff between input source file and recompressed target file:_
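A sketch of what such an invocation can look like, following the argument order shown earlier; all file and folder names are placeholders:

```bash
sudo ./DiffGenTool \
  /mnt/o/temp/source-1.0.swu \
  /mnt/o/temp/target-2.0.swu \
  /mnt/o/temp/output/target-2.0.swu.diff \
  /mnt/o/temp/output/logs \
  /mnt/o/temp/output/working \
  /mnt/o/temp/output/target-2.0-recompressed.swu
```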
The first step to import an update into the Device Update service is always to c
The delta update feature uses a capability called [related files](related-files.md), which requires an import manifest that is version 5 or later.
-To create an import manifest for your delta update using the related files feature, you'll need to add [relatedFiles](import-schema.md#relatedfiles-object) and [downloadHandler](import-schema.md#downloadhandler-object) objects to your import manifest.
+To create an import manifest for your delta update using the related files feature, you need to add [relatedFiles](import-schema.md#relatedfiles-object) and [downloadHandler](import-schema.md#downloadhandler-object) objects to your import manifest.
Use the `relatedFiles` object to specify information about the delta update file, including the file name, file size and sha256 hash. Importantly, you also need to specify two properties which are unique to the delta update feature:
Use the `relatedFiles` object to specify information about the delta update file
} ```
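As a rough illustration of such an entry (the exact property names and fields shown here are assumptions; confirm them against the linked `relatedFiles` schema reference):

```json
"relatedFiles": [
  {
    "filename": "target-2.0.swu.diff",
    "sizeInBytes": 102910,
    "hashes": { "sha256": "<base64 hash of the delta file>" },
    "properties": {
      "microsoft.sourceFileHashAlgorithm": "sha256",
      "microsoft.sourceFileHash": "<base64 hash of the source SWU image>"
    }
  }
]
```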
-Both of the properties above are specific to your _source SWU image file_ that you used as an input to the DiffGen tool when creating your delta update. The information about the source SWU image is needed in your import manifest even though you don't actually import the source image. The delta components on the device use this metadata about the source image to locate the image on the device once the delta has been downloaded.
+Both of these properties are specific to your _source SWU image file_ that you used as an input to the DiffGen tool when creating your delta update. The information about the source SWU image is needed in your import manifest even though you don't actually import the source image. The delta components on the device use this metadata about the source image to locate the image on the device once the delta is downloaded.
-Use the `downloadHandler` object to specify how the Device Update agent orchestrates the delta update, using the related files feature. Unless you are customizing your own version of the Device Update agent for delta functionality, you should only use this downloadHandler:
+Use the `downloadHandler` object to specify how the Device Update agent orchestrates the delta update, using the related files feature. Unless you're customizing your own version of the Device Update agent for delta functionality, you should only use this downloadHandler:
```json "downloadHandler": {
Save your generated import manifest JSON to a file with the extension `.importma
### Import using the Azure portal
-Once you've created your import manifest, you're ready to import the delta update. To import, follow the instructions in [Add an update to Device Update for IoT Hub](import-update.md#import-an-update). You must include these items when importing:
+Once you create your import manifest, you're ready to import the delta update. To import, follow the instructions in [Add an update to Device Update for IoT Hub](import-update.md#import-an-update). You must include these items when importing:
- The import manifest .json file you created in the previous step. - The _recompressed_ target SWU image created when you ran the DiffGen tool.
Once you've created your import manifest, you're ready to import the delta updat
## Deploy the delta update to your devices
-When you deploy a delta update, the experience in the Azure portal looks identical to deploying a regular image update. For more information on deploying updates, see [Deploy an update by using Device Update for Azure IoT Hub](deploy-update.md)
+When you deploy a delta update, the experience in the Azure portal looks identical to deploying a regular image update. For more information on deploying updates, see [Deploy an update by using Device Update for Azure IoT Hub](deploy-update.md).
-Once you've created the deployment for your delta update, the Device Update service and client automatically identify if there's a valid delta update for each device you're deploying to. If a valid delta is found, the delta update will be downloaded and installed on that device. If there's no valid delta update found, the full image update (the recompressed target SWU image) will be downloaded instead as a fallback. This approach ensures that all devices you're deploying the update to will get to the appropriate version.
+Once you create the deployment for your delta update, the Device Update service and client automatically identify if there's a valid delta update for each device you're deploying to. If a valid delta is found, the delta update is downloaded and installed on that device. If there's no valid delta update found, the full image update (the recompressed target SWU image) is downloaded instead as a fallback. This approach ensures that all devices you're deploying the update to get to the appropriate version.
There are three possible outcomes for a delta update deployment:
If the delta update failed but did a successful fallback to the full image, it s
- resultCode: _[value greater than 0]_ - extendedResultCode: _[non-zero]_
-If the update was unsuccessful, it shows an error status that can be interpreted using the instructions below:
+If the update was unsuccessful, it shows an error status that can be interpreted using these instructions:
- Start with the Device Update Agent errors in [result.h](https://github.com/Azure/iot-hub-device-update/blob/main/src/inc/aduc/result.h).
If the update was unsuccessful, it shows an error status that can be interpreted
| SOURCE_UPDATE_CACHE | 9 | 0x09 | Indicates errors in Delta Download handler extension Source Update Cache. Example: 0x909XXXXX | | DELTA_PROCESSOR | 10 | 0x0A | Error code for errors from delta processor API. Example: 0x90AXXXXX |
- - If the error code isn't present in [result.h](https://github.com/Azure/iot-hub-device-update/blob/main/src/inc/aduc/result.h), it's likely an error in the delta processor component (separate from the Device Update agent). If so, the extendedResultCode will be a negative decimal value of the following hexadecimal format: 0x90AXXXXX
+ - If the error code isn't present in [result.h](https://github.com/Azure/iot-hub-device-update/blob/main/src/inc/aduc/result.h), it's likely an error in the delta processor component (separate from the Device Update agent). If so, the extendedResultCode is a negative decimal value of the following hexadecimal format: 0x90AXXXXX
- 9 is "Delta Facility" - 0A is "Delta Processor Component" (ADUC_COMPONENT_DELTA_DOWNLOAD_HANDLER_DELTA_PROCESSOR)
iot-hub-device-update Device Update Plug And Play https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-plug-and-play.md
Title: Understand how Device Update for IoT Hub uses IoT Plug and Play
description: Device Update for IoT Hub uses to discover and manage devices that are over-the-air update capable. Previously updated : 2/2/2023 Last updated : 3/4/2024
Model ID is how smart devices advertise their capabilities to Azure IoT applicat
Device Update for IoT Hub requires the IoT Plug and Play smart device to announce a model ID as part of the device connection. [Learn how to announce a model ID](../iot/concepts-developer-guide-device.md#model-id-announcement).
-Device Update has 2 PnP models defined that support DU features. The Device Update model, '**dtmi:azure:iot:deviceUpdateContractModel;2**', supports the core functionality and uses the device update core interface to send update actions and metadata to devices and receive update status from devices.
+Device Update has several PnP models defined that support DU features. The Device Update model, '**dtmi:azure:iot:deviceUpdateContractModel;3**', supports the core functionality and uses the device update core interface to send update actions and metadata to devices and receive update status from devices.
-The other supported model is **dtmi:azure:iot:deviceUpdateModel;2** which extends **deviceUpdateContractModel;2** and also uses other PnP interfaces that send device properties and information and enable diagnostic features. Learn more about the [Device Update Models and Interfaces Versions] (https://github.com/Azure/iot-plugandplay-models/tree/main/dtmi/azure/iot).
+The other supported model is **dtmi:azure:iot:deviceUpdateModel;3**, which extends **deviceUpdateContractModel;3** and also uses other PnP interfaces that send device properties and information and enable diagnostic features. Learn more about the [Device Update Models and Interfaces Versions](https://github.com/Azure/iot-plugandplay-models/tree/main/dtmi/azure/iot).
-The Device Update agent uses the **dtmi:azure:iot:deviceUpdateModel;2** which supports all the latest features in the [1.0.0 release](understand-device-update.md#flexible-features-for-updating-devices). This model supports the [V5 manifest version](import-concepts.md).
+The Device Update agent uses the **dtmi:azure:iot:deviceUpdateModel;3** model, which supports all the latest features in the [1.1.0 release](https://github.com/Azure/iot-hub-device-update/releases/). This model supports the [V5 manifest version](import-concepts.md). Older manifests work with the latest agents, but new features require the latest manifest version.
### Agent metadata
The **deviceProperties** field contains the manufacturer and model information f
|-|||--| |manufacturer|string|device to cloud|The device manufacturer of the device, reported through `deviceProperties`. This property is read from one of two places - first, the DeviceUpdateCore interface attempts to read the 'aduc_manufacturer' value from the [Configuration file](device-update-configuration-file.md). If the value isn't populated in the configuration file, it defaults to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MANUFACTURER. This property is reported only at boot time. <br><br> Default value: 'Contoso'.| |model|string|device to cloud|The device model of the device, reported through `deviceProperties`. This property is read from one of two places - first, the DeviceUpdateCore interface attempts to read the 'aduc_model' value from the [Configuration file](device-update-configuration-file.md). If the value isn't populated in the configuration file, it defaults to reporting the compile-time definition for ADUC_DEVICEPROPERTIES_MODEL. This property is reported only at boot time. <br><br> Default value: 'Video'|
-|contractModelId|string|device to cloud|This property is used by the service to identify the base model version being used by the Device Update agent to manage and communicate with the agent.<br>Value: 'dtmi:azure:iot:deviceUpdateContractModel;2' for devices using DU agent version 1.0.0. <br>**Note:** Agents using the 'dtmi:azure:iot:deviceUpdateModel;2' must report the contractModelId as 'dtmi:azure:iot:deviceUpdateContractModel;2' as deviceUpdateModel;2 is extended from deviceUpdateModel;2|
+|contractModelId|string|device to cloud|This property is used by the service to identify the base model version being used by the Device Update agent to manage and communicate with the agent.<br>Value: 'dtmi:azure:iot:deviceUpdateContractModel;3' for devices using DU agent version 1.1.0. <br>**Note:** Agents using the 'dtmi:azure:iot:deviceUpdateModel;3' model must report the contractModelId as 'dtmi:azure:iot:deviceUpdateContractModel;3' because deviceUpdateModel;3 extends deviceUpdateContractModel;3|
|aduVer|string|device to cloud|Version of the Device Update agent running on the device. This value is read from the build only if ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true) during compile time. Customers can choose to opt out of version reporting by setting the value to 0 (false). [How to customize Device Update agent properties](https://github.com/Azure/iot-hub-device-update/blob/main/docs/agent-reference/how-to-build-agent-code.md).| |doVer|string|device to cloud|Version of the Delivery Optimization agent running on the device. The value is read from the build only if ENABLE_ADU_TELEMETRY_REPORTING is set to 1 (true) during compile time. Customers can choose to opt out of the version reporting by setting the value to 0 (false). [How to customize Delivery Optimization agent properties](https://github.com/microsoft/do-client/blob/main/README.md#building-do-client-components).| |Custom compatibility Properties|User Defined|device to cloud|Implementer can define other device properties to be used for the compatibility check while targeting the update deployment.|
IoT Hub device twin example:
"deviceProperties": { "manufacturer": "contoso", "model": "virtual-vacuum-v1",
- "contractModelId": "dtmi:azure:iot:deviceUpdateContractModel;2",
- "aduVer": "DU;agent/0.8.0-rc1-public-preview",
- "doVer": "DU;lib/v0.6.0+20211001.174458.c8c4051,DU;agent/v0.6.0+20211001.174418.c8c4051"
- },
+ "contractModelId": "dtmi:azure:iot:deviceUpdateContractModel;3",
+ "aduVer": "DU;agent/1.1.0",
+ },
"compatPropertyNames": "manufacturer,model", "lastInstallResult": { "resultCode": 700,
iot Iot Overview Analyze Visualize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-analyze-visualize.md
There are many services you can use to analyze and visualize your IoT data. Some
[Azure Data Explorer](/azure/data-explorer/data-explorer-overview/) is a fully managed, high-performance, big-data analytics platform that makes it easy to analyze high volumes of data in near real time. The following articles and tutorials show some examples of how to use Azure Data Explorer to analyze and visualize IoT data: - [IoT Hub data connection (Azure Data Explorer)](/azure/data-explorer/ingest-data-iot-hub-overview)-- [Explore an Azure IoT Central industrial scenario](../iot-central/core/tutorial-industrial-end-to-end.md) - [Export IoT data to Azure Data Explorer (IoT Central)](../iot-central/core/howto-export-to-azure-data-explorer.md) - [Azure Digital Twins query plugin for Azure Data Explorer](../digital-twins/concepts-data-explorer-plugin.md)
iot Iot Overview Device Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-overview-device-management.md
The [Device Update for IoT Hub](../iot-hub-device-update/understand-device-updat
During the lifecycle of your IoT solution, you might need to roll over the keys used to authenticate devices. For example, you might need to roll over your keys if you suspect that a key is compromised or if a certificate expires: - [Roll over the keys used to authenticate devices in IoT Hub and DPS](../iot-dps/how-to-roll-certificates.md)-- [Roll over the keys used to authenticate devices in IoT Central](../iot-central/core/how-to-connect-devices-x509.md#roll-x509-device-certificates)
+- [Roll over the keys used to authenticate devices in IoT Central](../iot-central/core/how-to-connect-devices-x509.md#roll-your-x509-device-certificates)
## Device monitoring
key-vault Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/best-practices.md
Managed HSM is a cloud service that safeguards cryptographic keys. Because these
To control access to your managed HSM: -- Create an [Microsoft Entra security group](../../active-directory/fundamentals/active-directory-manage-groups.md) for the HSM Administrators (instead of assigning the Administrator role to individuals) to prevent "administration lockout" if an individual account is deleted.
+- Create a [Microsoft Entra security group](../../active-directory/fundamentals/active-directory-manage-groups.md) for the HSM Administrators (instead of assigning the Administrator role to individuals) to prevent "administration lockout" if an individual account is deleted.
- Lock down access to your management groups, subscriptions, resource groups, and managed HSMs. Use Azure role-based access control (Azure RBAC) to control access to your management groups, subscriptions, and resource groups. - Create per-key role assignments by using [Managed HSM local RBAC](access-control.md#data-plane-and-managed-hsm-local-rbac). - To maintain separation of duties, avoid assigning multiple roles to the same principals.
load-balancer Load Balancer Test Frontend Reachability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-test-frontend-reachability.md
Based on the current health probe state of your backend instances, you receive d
## Usage considerations - ICMP pings can't be disabled and are allowed by default on Standard Public Load Balancers.-- ICMP pings with packet sizes larger than 64 bytes will be dropped, leading to timeouts.
+- ICMP pings with packet sizes larger than 64 bytes will be dropped, leading to timeouts.
+- Outbound ICMP pings are not supported on a Load Balancer.
> [!NOTE] > ICMP ping requests are not sent to the backend instances; they are handled by the Load Balancer.
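For example, a quick reachability check from a client machine can look like the following; substitute your load balancer's frontend IP address or DNS name:

```bash
# Send ICMP echo requests to the Standard Public Load Balancer frontend.
ping 203.0.113.10
```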
logic-apps Workflow Assistant Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/workflow-assistant-standard.md
The following table includes only some example use cases, so please share your f
- Workflow size
- You might experience different performance levels in the workflow assistant, based on factors such as the number of workflow operations or complexity. The assistant is trained on workflows with different complexity levels but still has limited scope and might not be able to handle very large workflows. These limitations are primarily related to token constraints in the queries sent to Azure Open AI Service. The Azure Logic Apps team is committed to continuous improvement and enhancing these limitations through iterative updates.
+ You might experience different performance levels in the workflow assistant, based on factors such as the number of workflow operations or complexity. The assistant is trained on workflows with different complexity levels but still has limited scope and might not be able to handle very large workflows. These limitations are primarily related to token constraints in the queries sent to Azure OpenAI Service. The Azure Logic Apps team is committed to continuous improvement and enhancing these limitations through iterative updates.
<a name="provide-feedback"></a>
In the chat pane, under the workflow assistant's response, choose an option:
**Q**: How does the workflow assistant use my query to generate responses?
-**A**: The workflow is powered by [Azure Open AI Service](../ai-services/openai/overview.md) and [ChatGPT](https://openai.com/blog/chatgpt), which use Azure Logic Apps documentation from reputable sources along with internet data that's used to train GPT 3.5-Turbo. This content is processed into a vectorized format, which is then accessible through a backend system built on Azure App Service. Queries are triggered based on interactions with the workflow designer.
+**A**: The workflow is powered by [Azure OpenAI Service](../ai-services/openai/overview.md) and [ChatGPT](https://openai.com/blog/chatgpt), which use Azure Logic Apps documentation from reputable sources along with internet data that's used to train GPT 3.5-Turbo. This content is processed into a vectorized format, which is then accessible through a backend system built on Azure App Service. Queries are triggered based on interactions with the workflow designer.
-When you enter your question in the assistant's chat box, the Azure Logic Apps backend performs preprocessing and forwards the results to a large language model in Azure Open AI Service. This model generates responses based on the current context in the form of the workflow definition's JSON code and your prompt.
+When you enter your question in the assistant's chat box, the Azure Logic Apps backend performs preprocessing and forwards the results to a large language model in Azure OpenAI Service. This model generates responses based on the current context in the form of the workflow definition's JSON code and your prompt.
**Q**: What data does the workflow assistant collect?
When you enter your question in the assistant's chat box, the Azure Logic Apps b
**Q**: What's the difference between Azure OpenAI Service and ChatGPT?
-**A**: [Azure Open AI Service](../ai-services/openai/overview.md) is an enterprise-ready AI technology that's powered and optimized for your business processes and your business data to meet security and privacy requirements.
+**A**: [Azure OpenAI Service](../ai-services/openai/overview.md) is an enterprise-ready AI service that's optimized for your business processes and business data, and that meets security and privacy requirements.
[ChatGPT](https://openai.com/blog/chatgpt) is a general-purpose large language model (LLM) built by [OpenAI](https://openai.com) and trained on a massive dataset of text, designed to engage in human-like conversations and answer a wide range of questions on several topics.
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Batch endpoints can be used to perform long batch operations over large amounts
To successfully invoke a batch endpoint and create jobs, ensure you have the following:
-* You have permissions to run a batch endpoint deployment. Read [Authorization on batch endpoints](how-to-authenticate-batch-endpoint.md) to know the specific permissions needed.
+* You have permissions to run a batch endpoint deployment. The **AzureML Data Scientist**, **Contributor**, and **Owner** roles can be used to run a deployment. For custom role definitions, read [Authorization on batch endpoints](how-to-authenticate-batch-endpoint.md) to learn the specific permissions needed.
* You have a valid Microsoft Entra ID token representing a security principal to invoke the endpoint. This principal can be a user principal or a service principal. In any case, once an endpoint is invoked, a batch deployment job is created under the identity associated with the token. For testing purposes, you can use your own credentials for the invocation as mentioned below.
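For example, a minimal invocation with the Azure CLI (v2 `ml` extension) might look like the following sketch; the endpoint, resource group, workspace, and input path are placeholders.

```
# Invoke a batch endpoint with your signed-in credentials; names and the input URI are placeholders.
az ml batch-endpoint invoke \
  --name my-batch-endpoint \
  --resource-group my-resource-group \
  --workspace-name my-workspace \
  --input azureml://datastores/workspaceblobstore/paths/my-input-folder
```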
machine-learning How To Use Retrieval Augmented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-retrieval-augmented-generation.md
In this tutorial, you'll learn how to use RAG by creating a prompt flow. A promp
* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-* Access to Azure Open AI.
+* Access to Azure OpenAI.
* Enable prompt flow in your Azure Machine Learning workspace
notification-hubs Create Notification Hub Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-portal.md
In this quickstart, you create a notification hub in the Azure portal. The first
In this section, you create a namespace and a hub in the namespace. ## Create a notification hub in an existing namespace
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Previously updated : 02/22/2024 Last updated : 03/04/2024 #Customer intent: I need to understand the Azure Red Hat OpenShift support policies for OpenShift 4.0.
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* Don't scale the cluster workers to zero, or attempt a cluster shutdown. Deallocating or powering down any virtual machine in the cluster resource group isn't supported. * If you're making use of infrastructure nodes, don't run any undesignated workloads on them as this can affect the Service Level Agreement and cluster stability. Also, it's recommended to have three infrastructure nodes; one in each availability zone. See [Deploy infrastructure nodes in an Azure Red Hat OpenShift (ARO) cluster](howto-infrastructure-nodes.md) for more information. * Non-RHCOS compute nodes aren't supported. For example, you can't use an RHEL compute node.
-* Don't attempt to remove or replace a master node. These are high risk operations that can cause issues with etcd, permanent network loss, and loss of access and manageability by ARO SRE. If you feel that a master node should be replaced or removed, contact support before making any changes.
+* Don't attempt to remove or replace a master node. That's a high-risk operation that can cause issues with etcd, permanent network loss, and loss of access and manageability by ARO SRE. If you feel that a master node should be replaced or removed, contact support before making any changes.
### Operators
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* Don't add taints that would prevent any default OpenShift components from being scheduled. * To avoid disruption resulting from cluster maintenance, in-cluster workloads should be configured with high availability practices, including but not limited to pod affinity and anti-affinity, pod disruption budgets, and adequate scaling. * Don't run extra workloads on the control plane nodes. While they can be scheduled on the control plane nodes, it causes extra resource usage and stability issues that can affect the entire cluster.
-* Running custom workloads (including operators installed from Operator Hub or additional operators provided by Red Hat) in infrastructure nodes isn't supported.
+* Running custom workloads (including operators installed from Operator Hub or other operators provided by Red Hat) in infrastructure nodes isn't supported.
### Logging and monitoring * Don't remove or modify the default cluster Prometheus service, except to modify scheduling of the default Prometheus instance.
-* Don't remove or modify the default cluster Alertmanager svc, default receiver, or any default alerting rules, except to add additional receivers to notify external systems.
+* Don't remove or modify the default cluster Alertmanager svc, default receiver, or any default alerting rules, except to add other receivers to notify external systems.
* Don't remove or modify Azure Red Hat OpenShift service logging (mdsd pods). ### Network and security
Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl
* Don't override any of the cluster's MachineConfig objects (for example, the kubelet configuration) in any way. * Don't set any unsupportedConfigOverrides options. Setting these options prevents minor version upgrades. * Don't place policies within your subscription or management group that prevent SREs from performing normal maintenance against the Azure Red Hat OpenShift cluster. For example, don't require tags on the Azure Red Hat OpenShift RP-managed cluster resource group.
-* Don't circumvent the deny assignment that is configured as part of the service, or perform administrative tasks that are normally prohibited by the deny assignment.
+* Don't circumvent the deny assignment that is configured as part of the service, or perform administrative tasks normally prohibited by the deny assignment.
* OpenShift relies on the ability to automatically tag Azure resources. If you have configured a tagging policy, don't apply more than 10 user-defined tags to resources in the managed resource group. ## Incident management
-An incident is an event that results in a degradation or outage Azure Red Hat OpenShift services. An incident can be raised by a customer or Customer Experience and Engagement (CEE) member through a [support case](openshift-service-definitions.md#support), directly by the centralized monitoring and alerting system, or directly by a member of the ARO Site Reliability Engineer (SRE) team.
+An incident is an event that results in a degradation or outage of Azure Red Hat OpenShift services. Incidents are raised by a customer or Customer Experience and Engagement (CEE) member through a [support case](openshift-service-definitions.md#support), directly by the centralized monitoring and alerting system, or directly by a member of the ARO Site Reliability Engineer (SRE) team.
Depending on the impact on the service and customer, the incident is categorized in terms of severity.
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|NC24sV3|Standard_NC24s_v3|24|448| |NC24rsV3|Standard_NC24rs_v3|24|448| |NC64asT4v3|Standard_NC64as_T4_v3|64|440|
+|ND96asr_v4*|Standard_ND96asr_v4|96|900|
+|ND96amsr_A100_v4*|Standard_ND96amsr_A100_v4|96|1924|
+|NC24ads_A100_v4*|Standard_NC24ads_A100_v4|24|220|
+|NC48ads_A100_v4*|Standard_NC48ads_A100_v4|48|440|
+|NC96ads_A100_v4*|Standard_NC96ads_A100_v4|96|880|
+
+\*Day-2 only (that is, not supported as an install-time option)
operator-insights How To Install Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-mcc-edr-agent.md
- Title: Create and configure MCC EDR Ingestion Agents
-description: Learn how to create and configure MCC EDR Ingestion Agents for Azure Operator Insights
----- Previously updated : 10/31/2023--
-# Create and configure MCC EDR Ingestion Agents for Azure Operator Insights
-
-The MCC EDR agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. The agent receives EDRs from an Affirmed MCC, and forwards them to Azure Operator Insights Data Products.
-
-## Prerequisites
--- You must have an Affirmed Networks MCC deployment that generates EDRs.-- You must deploy an Azure Operator Insights MCC Data Product.-- You must provide VMs with the following specifications to run the agent:-
-| Resource | Requirements |
-|-||
-| OS | Red Hat Enterprise Linux 8.6 or later, or Oracle Linux 8.8 or later |
-| vCPUs | 4 |
-| Memory | 32 GB |
-| Disk | 64 GB |
-| Network | Connectivity from MCCs and to Azure |
-| Software | systemd, logrotate and zip installed |
-| Other | SSH or alternative access to run shell commands |
-| DNS | (Preferable) Ability to resolve public DNS. If not, you need to perform extra steps to resolve Azure locations. See [VMs without public DNS: Map Azure host names to IP addresses.](#vms-without-public-dns-map-azure-host-names-to-ip-addresses). |
-
-Each agent instance must run on its own VM. The number of VMs needed depends on the scale and redundancy characteristics of your deployment. This recommended specification can achieve 1.5-Gbps throughput on a standard D4s_v3 Azure VM. For any other VM spec, we recommend that you measure throughput at the network design stage.
-
-Latency on the MCC to agent connection can negatively affect throughput. Latency should usually be low if the MCC and agent are colocated or the agent runs in an Azure region close to the MCC.
-
-Talk to the Affirmed Support Team to determine your requirements.
-
-### Deploying multiple VMs for fault tolerance
-
-The MCC EDR agent is designed to be highly reliable and resilient to low levels of network disruption. If an unexpected error occurs, the agent restarts and provides service again as soon as it's running.
-
-The agent doesn't buffer data, so if a persistent error or extended connectivity problems occur, EDRs are dropped.
-
-For extra fault tolerance, you can deploy multiple instances of the MCC EDR agent and configure the MCC to switch to a different instance if the original instance becomes unresponsive, or to share EDR traffic across a pool of agents. For more information, see the [Affirmed Networks Active Intelligent vProbe System Administration Guide](https://manuals.metaswitch.com/vProbe/latest/vProbe_System_Admin/Content/02%20AI-vProbe%20Configuration/Generating_SESSION__BEARER__FLOW__and_HTTP_Transac.htm) (only available to customers with Affirmed support) or speak to the Affirmed Networks Support Team.
-
-### VM security recommendations
-
-The VM used for the MCC EDR agent should be set up following best practice for security. For example:
--- Networking - Only allow network traffic on the ports that are required to run the agent and maintain the VM.-- OS version - Keep the OS version up-to-date to avoid known vulnerabilities.-- Access - Limit access to the VM to a minimal set of users, and set up audit logging for their actions. For the MCC EDR agent, we recommend that the following are restricted:
- - Admin access to the VM (for example, to stop/start/install the MCC EDR software)
- - Access to the directory where the logs are stored *(/var/log/az-mcc-edr-uploader/)*
- - Access to the certificate and private key for the service principal that you create during this procedure
-
-## Download the RPM for the agent
-
-Download the RPM for the MCC EDR agent using the details you received as part of the [Azure Operator Insights onboarding process](overview.md#how-do-i-get-access-to-azure-operator-insights) or from [https://go.microsoft.com/fwlink/?linkid=2254537](https://go.microsoft.com/fwlink/?linkid=2254537).
-
-## Set up authentication
-
-You must have a service principal with a certificate credential that can access the Azure Key Vault created by the Data Product to retrieve storage credentials. Each agent must also have a copy of a valid certificate and private key for the service principal stored on this virtual machine.
-
-### Create a service principal
-
-> [!IMPORTANT]
-> You may need a Microsoft Entra tenant administrator in your organization to perform this setup for you.
-
-1. Create or obtain a Microsoft Entra ID service principal. Follow the instructions detailed in [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). Leave the **Redirect URI** field empty.
-1. Note the Application (client) ID, and your Microsoft Entra Directory (tenant) ID (these IDs are UUIDs of the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, where each character is a hexadecimal digit).
-
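If you prefer the Azure CLI over the portal, a minimal sketch for creating the app registration and service principal follows; the display name is a placeholder.

```
# Create an app registration and its service principal, then note the IDs you need later.
APP_ID=$(az ad app create --display-name "aoi-ingestion-agent" --query appId -o tsv)
az ad sp create --id "$APP_ID"
echo "Application (client) ID: $APP_ID"
az account show --query tenantId -o tsv   # Microsoft Entra Directory (tenant) ID
```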
-### Prepare certificates
-
-The ingestion agent only supports certificate-based authentication for service principals. It's up to you whether you use the same certificate and key for each VM, or use a unique certificate and key for each. Using a certificate per VM provides better security and has a smaller impact if a key is leaked or the certificate expires. However, this method adds maintenance and operational complexity.
-
-1. Obtain a certificate. We strongly recommend using trusted certificate(s) from a certificate authority.
-2. Add the certificate(s) as credential(s) to your service principal, following [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal).
-3. We **strongly recommend** additionally storing the certificates in a secure location such as Azure Key Vault.  Doing so allows you to configure expiry alerting and gives you time to regenerate new certificates and apply them to your ingestion agents before they expire.  Once a certificate expires, the agent is unable to authenticate to Azure and no longer uploads data.  For details of this approach see [Renew your Azure Key Vault certificates](../key-vault/certificates/overview-renew-certificate.md).
- - You need the 'Key Vault Certificates Officer' role on the Azure Key Vault in order to add the certificate to the Key Vault. See [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) for details of how to assign roles in Azure.
-
-4. Ensure the certificate(s) are available in pkcs12 format, with no passphrase protecting them. On Linux, you can convert a certificate and key from PEM format using openssl:
-
- `openssl pkcs12 -nodes -export -in <pem-certificate-filename> -inkey <pem-key-filename> -out <pkcs12-certificate-filename>`
-
-> [!IMPORTANT]
-> The pkcs12 file must not be protected with a passphrase. When OpenSSL prompts you for an export password, press <kbd>Enter</kbd> to supply an empty passphrase.
-
-5. Validate your pkcs12 file. This displays information about the pkcs12 file including the certificate and private key:
-
- `openssl pkcs12 -nodes -in <pkcs12-certificate-filename> -info`
-
-6. Ensure the pkcs12 file is base64 encoded. On Linux, you can base64 encode a pkcs12-formatted certificate by using the command:
-
- `base64 -w 0 <pkcs12-certificate-filename> > <base64-encoded-pkcs12-certificate-filename>`
-
-### Grant permissions for the Data Product Key Vault
-
-1. Find the Azure Key Vault that holds the storage credentials for the input storage account. This Key Vault is in a resource group named *\<data-product-name\>-HostedResources-\<unique-id\>*.
-1. Grant your service principal the 'Key Vault Secrets User' role on this Key Vault.  You need Owner level permissions on your Azure subscription.  See [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) for details of how to assign roles in Azure.
-1. Note the name of the Key Vault.
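For example, the role assignment could be created with the Azure CLI; the object ID and scope below are placeholders.

```
# Grant the service principal the Key Vault Secrets User role on the Data Product's Key Vault.
az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee "00000000-0000-0000-0000-000000000000" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<data-product-name>-HostedResources-<unique-id>/providers/Microsoft.KeyVault/vaults/<key-vault-name>"
```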
-
-## Prepare the VMs
-
-Repeat these steps for each VM onto which you want to install the agent:
-
-1. Ensure you have an SSH session open to the VM, and that you have `sudo` permissions.
-1. Verify that the VM has the following ports open:
- - Port 36001/TCP inbound from the MCCs
- - Port 443/TCP outbound to Azure
-
- These ports must be open both in cloud network security groups and in any firewall running on the VM itself (such as firewalld or iptables); see the firewalld sketch after this list.
-1. Install systemd, logrotate and zip on the VM, if not already present. For example, `sudo dnf install systemd logrotate zip`.
-1. Obtain the ingestion agent RPM and copy it to the VM.
-1. Copy the pkcs12-formatted base64-encoded certificate (created in the [Prepare certificates](#prepare-certificates) step) to the VM, in a location accessible to the ingestion agent.
-
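For example, on a VM that uses firewalld, the port requirements from step 2 might be opened with a sketch like the following (outbound 443/TCP is typically allowed by default).

```
# Allow the MCC EDR listen port inbound, then reload and verify.
sudo firewall-cmd --permanent --add-port=36001/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports
```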
-## VMs without public DNS: Map Azure host names to IP addresses
-
-**If your agent VMs have access to public DNS, then you can skip this step and continue to [Install agent software](#install-agent-software).**
-
-If your agent VMs don't have access to public DNS, then you need to add entries on each agent VM to map the Azure host names to IP addresses.
-
-This process assumes that you're connecting to Azure over ExpressRoute and are using Private Links and/or Service Endpoints. If you're connecting over public IP addressing, you **cannot** use this workaround and must use public DNS.
-
-1. Create the following resources from a virtual network that is peered to your ingestion agents:
- - A Service Endpoint to Azure Storage
- - A Private Link or Service Endpoint to the Key Vault created by your Data Product. The Key Vault is the same one you found in [Grant permissions for the Data Product Key Vault](#grant-permissions-for-the-data-product-key-vault).
-1. Note the IP addresses of these two connections.
-1. Note the ingestion URL for your Data Product.  You can find the ingestion URL on your Data Product overview page in the Azure portal, in the form *\<account name\>.blob.core.windows.net*.
-1. Note the URL of the Data Product Key Vault.  The URL appears as *\<vault name\>.vault.azure.net*
-1. Add a line to */etc/hosts* on the VM linking the two values in this format, for each of the storage and Key Vault:
- ```
- <Storage private IP>   <ingestion URL>
- <Key Vault private IP>  <Key Vault URL>
- ````
-1. Add the public IP address of the URL *login.microsoftonline.com* to */etc/hosts*. You can use any of the public addresses resolved by DNS clients.
-
- ```
- <Public IP>   login.microsoftonline.com
- ````
-
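To find a current public IP address for *login.microsoftonline.com*, you can resolve the name from any machine that does have public DNS access, for example:

```
# dig (from the bind-utils package) or nslookup both work; pick any address returned.
dig +short login.microsoftonline.com
```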
-## Install agent software
-
-Repeat these steps for each VM onto which you want to install the agent:
-
-1. In an SSH session, change to the directory where the RPM was copied.
-1. Install the RPM:  `sudo dnf install ./*.rpm`.  Answer 'y' when prompted.  If there are any missing dependencies, the RPM isn't installed.
-1. Change to the configuration directory: `cd /etc/az-mcc-edr-uploader`
-1. Make a copy of the default configuration file:  `sudo cp example_config.yaml config.yaml`
-1. Edit the *config.yaml* and fill out the fields.  Most of them are set to default values and don't need to be changed.  The full reference for each parameter is described in [MCC EDR Ingestion Agents configuration reference](mcc-edr-agent-configuration.md). The following parameters must be set:
-
- 1. **site\_id** should be changed to a unique identifier for your on-premises site – for example, the name of the city or state for this site.  This name becomes searchable metadata in Operator Insights for all EDRs from this agent. 
- 1. **agent\_id** should be a unique identifier for this agent – for example, the VM hostname.
-
- 1. For the secret provider with name `data_product_keyvault`, set the following fields:
- 1. **provider.vault\_name** must be the name of the Key Vault for your Data Product. You identified this name in [Grant permissions for the Data Product Key Vault](#grant-permissions-for-the-data-product-key-vault).  
- 1. **provider.auth** must be filled out with:
-
- 1. **tenant\_id** as your Microsoft Entra ID tenant.
-
- 2. **identity\_name** as the application ID of the service principal that you created in [Create a service principal](#create-a-service-principal).
-
- 3. **cert\_path** as the file path of the base64-encoded pkcs12 certificate for the service principal to authenticate with.
-
- 1. **sink.container\_name** *must be left as "edr".*
-
-1. Start the agent: `sudo systemctl start az-mcc-edr-uploader`
-
-1. Check that the agent is running: `sudo systemctl status az-mcc-edr-uploader`
-
- 1. If you see any status other than `active (running)`, look at the logs as described in the [Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights](troubleshoot-mcc-edr-agent.md) article to understand the error.  It's likely that some configuration is incorrect.
-
- 2. Once you resolve the issue,  attempt to start the agent again.
-
- 3. If issues persist, raise a support ticket.
-
-1. Once the agent is running, ensure it starts automatically after reboots: `sudo systemctl enable az-mcc-edr-uploader.service`
-
-1. Save a copy of the delivered RPM – you need it to reinstall or to back out any future upgrades.
-
-## Configure Affirmed MCCs
-
-Once the agents are installed and running, configure the MCCs to send EDRs to them.
-
-1. Follow the steps under "Generating SESSION, BEARER, FLOW, and HTTP Transaction EDRs" in the [Affirmed Networks Active Intelligent vProbe System Administration Guide](https://manuals.metaswitch.com/vProbe/latest/vProbe_System_Admin/Content/02%20AI-vProbe%20Configuration/Generating_SESSION__BEARER__FLOW__and_HTTP_Transac.htm) (only available to customers with Affirmed support), making the following changes:
-
- - Replace the IP addresses of the MSFs in MCC configuration with the IP addresses of the VMs running the ingestion agents.
-
- - Confirm that the following EDR server parameters are set:
-
- - port: 36001
- - encoding: protobuf
- - keep-alive: 2 seconds
--
-## Related content
--- [Manage MCC EDR Ingestion Agents for Azure Operator Insights](how-to-manage-mcc-edr-agent.md)-- [Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights](troubleshoot-mcc-edr-agent.md)
operator-insights How To Install Sftp Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-sftp-agent.md
- Title: Create and configure SFTP Ingestion Agents for Azure Operator Insights
-description: Learn how to create and configure SFTP Ingestion Agents for Azure Operator Insights
----- Previously updated : 12/06/2023--
-# Create and configure SFTP Ingestion Agents for Azure Operator Insights
-
-An SFTP Ingestion Agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. The agent pulls files from an SFTP server, and forwards them to Azure Operator Insights Data Products.
-
-For more background, see [SFTP Ingestion Agent overview](sftp-agent-overview.md).
-
-## Prerequisites
--- You must deploy an Azure Operator Insights Data Product.-- You must have an SFTP server containing the files to be uploaded to the Azure Operator Insights Data Product. This SFTP server must be accessible from the VM where you install the agent.-- You must choose the number of agents and VMs on which to install the agents, using the guidance in the following section.-
-### Choosing agents and VMs
-
-An agent collects files from _file sources_ that you configure on it. File sources include the details of the SFTP server, the files to collect from it and how to manage those files.
-
-You must choose how to set up your agents, file sources and VMs using the following rules:
-- File sources must not overlap, meaning that they must not collect the same files from the same servers.-- You must configure each file source on exactly one agent. If you configure a file source on multiple agents, Azure Operator Insights receives duplicate data.-- Each agent can have multiple file sources.-- Each agent must run on a separate VM.-- The number of agents and therefore VMs also depends on:
- - The scale and redundancy characteristics of your deployment.
- - The number and size of the files, and how frequently the files are copied.
-
-As a guide, this table documents the throughput that the recommended specification on a standard D4s_v3 Azure VM can achieve.
-
-| File count | File size (KiB) | Time (seconds) | Throughput (Mbps) |
-||--|-|-|
-| 64 | 16,384 | 6 | 1,350 |
-| 1,024 | 1,024 | 10 | 910 |
-| 16,384 | 64 | 80 | 100 |
-| 65,536 | 16 | 300 | 25 |
-
-For example, if you need to collect from two file sources, you could:
--- Deploy one VM with one agent that collects from both file sources.-- Deploy two VMs, each with one agent. Each agent (and therefore each VM) collects from one file source.-
-Each VM running the agent must meet the following specifications.
-
-| Resource | Requirements |
-|-||
-| OS | Red Hat Enterprise Linux 8.6 or later, or Oracle Linux 8.8 or later |
-| vCPUs | Minimum 4, recommended 8 |
-| Memory | Minimum 32 GB |
-| Disk | 30 GB |
-| Network | Connectivity to the SFTP server and to Azure |
-| Software | systemd, logrotate and zip installed |
-| Other | SSH or alternative access to run shell commands |
-| DNS | (Preferable) Ability to resolve public DNS. If not, you need to perform extra steps to resolve Azure locations. See [VMs without public DNS: Map Azure host names to IP addresses.](#vms-without-public-dns-map-azure-host-names-to-ip-addresses). |
-
-### VM security recommendations
-
-The VM used for the SFTP agent should be set up following best practice for security. For example:
--- Networking - Only allow network traffic on the ports that are required to run the agent and maintain the VM.-- OS version - Keep the OS version up-to-date to avoid known vulnerabilities.-- Access - Limit access to the VM to a minimal set of users, and set up audit logging for their actions. For the SFTP agent, we recommend that the following are restricted:
- - Admin access to the VM (for example, to stop/start/install the SFTP agent software)
- - Access to the directory where the logs are stored *(/var/log/az-sftp-uploader/)*
- - Access to the certificate and private key for the service principal that you create during this procedure
- - Access to the directory for secrets that you create on the VM during this procedure.
-
-## Download the RPM for the agent
-
-Download the RPM for the SFTP agent using the details you received as part of the [Azure Operator Insights onboarding process](overview.md#how-do-i-get-access-to-azure-operator-insights) or from [https://go.microsoft.com/fwlink/?linkid=2254734](https://go.microsoft.com/fwlink/?linkid=2254734).
-
-## Set up authentication to Azure
-
-You must have a service principal with a certificate credential that can access the Azure Key Vault created by the Data Product to retrieve storage credentials. Each agent must also have a copy of a valid certificate and private key for the service principal stored on this virtual machine.
-
-### Create a service principal
-
-> [!IMPORTANT]
-> You might need a Microsoft Entra tenant administrator in your organization to perform this setup for you.
-
-1. Create or obtain a Microsoft Entra ID service principal. Follow the instructions detailed in [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal). Leave the **Redirect URI** field empty.
-1. Note the Application (client) ID, and your Microsoft Entra Directory (tenant) ID (these IDs are UUIDs of the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx, where each character is a hexadecimal digit).
-
-### Prepare certificates
-
-The ingestion agent only supports certificate-based authentication for service principals. It's up to you whether you use the same certificate and key for each VM, or use a unique certificate and key for each. Using a certificate per VM provides better security and has a smaller impact if a key is leaked or the certificate expires. However, this method adds maintenance and operational complexity.
-
-1. Obtain a certificate. We strongly recommend using trusted certificate(s) from a certificate authority.
-2. Add the certificate(s) as credential(s) to your service principal, following [Create a Microsoft Entra app and service principal in the portal](/entra/identity-platform/howto-create-service-principal-portal).
-3. We **strongly recommend** additionally storing the certificates in a secure location such as Azure Key Vault. Doing so allows you to configure expiry alerting and gives you time to regenerate new certificates and apply them to your ingestion agents before they expire. Once a certificate expires, the agent is unable to authenticate to Azure and no longer uploads data. For details of this approach, see [Renew your Azure Key Vault certificates](../key-vault/certificates/overview-renew-certificate.md).
- - You need the 'Key Vault Certificates Officer' role on the Azure Key Vault in order to add the certificate to the Key Vault. See [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) for details of how to assign roles in Azure.
-
-4. Ensure the certificate(s) are available in pkcs12 format, with no passphrase protecting them. On Linux, you can convert a certificate and key from PEM format using openssl:
-
- `openssl pkcs12 -nodes -export -in <pem-certificate-filename> -inkey <pem-key-filename> -out <pkcs12-certificate-filename>`
-
- > [!IMPORTANT]
- > The pkcs12 file must not be protected with a passphrase. When OpenSSL prompts you for an export password, press <kbd>Enter</kbd> to supply an empty passphrase.
-
-5. Validate your pkcs12 file. This displays information about the pkcs12 file including the certificate and private key:
-
- `openssl pkcs12 -nodes -in <pkcs12-certificate-filename> -info`
-
-6. Ensure the pkcs12 file is base64 encoded. On Linux, you can base64 encode a pkcs12-formatted certificate by using the command:
-
- `base64 -w 0 <pkcs12-certificate-filename> > <base64-encoded-pkcs12-certificate-filename>`
-
-### Grant permissions for the Data Product Key Vault
-
-1. Find the Azure Key Vault that holds the storage credentials for the input storage account. This Key Vault is in a resource group named *\<data-product-name\>-HostedResources-\<unique-id\>*.
-1. Grant your service principal the 'Key Vault Secrets User' role on this Key Vault.  You need Owner level permissions on your Azure subscription.  See [Assign Azure roles using the Azure portal](../role-based-access-control/role-assignments-portal.md) for details of how to assign roles in Azure.
-1. Note the name of the Key Vault.
-
-## Prepare the VMs
-
-Repeat these steps for each VM onto which you want to install the agent:
-
-1. Ensure you have an SSH session open to the VM, and that you have `sudo` permissions.
-2. Create a directory to use for storing secrets for the agent. Note the path of this directory. This is the _secrets directory_ and it's the directory where you'll add secrets for connecting to the SFTP server.
-3. Verify that the VM has the following ports open:
- - Port 443/TCP outbound to Azure
- - Port 22/TCP outbound to the SFTP server
-
- These ports must be open both in cloud network security groups and in any firewall running on the VM itself (such as firewalld or iptables).
-4. Install systemd, logrotate and zip on the VM, if not already present. For example, `sudo dnf install systemd logrotate zip`.
-5. Obtain the ingestion agent RPM and copy it to the VM.
-6. Copy the pkcs12-formatted base64-encoded certificate (created in the [Prepare certificates](#prepare-certificates) step) to the VM, in a location accessible to the ingestion agent.
-7. Ensure the SFTP server's public SSH key is listed on the VM's global known_hosts file located at `/etc/ssh/ssh_known_hosts`.
-
-> [!TIP]
-> Use the Linux command `ssh-keyscan` to add a server's SSH public key to a VM's `known_hosts` file manually. For example, `ssh-keyscan -H <server-ip> | sudo tee -a /etc/ssh/ssh_known_hosts`.
-
-## Configure the connection between the SFTP server and VM
-
-Follow these steps on the SFTP server:
-
-1. Ensure port 22/TCP to the VM is open.
-1. Create a new user, or determine an existing user on the SFTP server that the ingestion agent will use to connect to the SFTP server.
-1. Determine the authentication method that the ingestion agent will use to connect to the SFTP server. The agent supports two methods:
- - Password authentication
- - SSH key authentication
-1. Create a file to store the secret value (password or SSH key) in the secrets directory on the agent VM, which you created in the [Prepare the VMs](#prepare-the-vms) step (a shell sketch follows the note at the end of this section).
- - The file must not have a file extension.
- - Choose an appropriate name for the secret file, and note it for later.  This name is referenced in the agent configuration.
- - The secret file must contain only the secret value (password or SSH key), with no extra whitespace.
-1. If you're using an SSH key that has a passphrase to authenticate, use the same method to create a separate secret file that contains the passphrase.
-1. Configure the SFTP server to remove files after a period of time (a _retention period_). Ensure the retention period is long enough that the agent should have processed the files before the SFTP server deletes them. The example configuration file contains configuration for checking for new files every five minutes.
-
-> [!IMPORTANT]
-> Your SFTP server must remove files after a suitable retention period so that it does not run out of disk space. The SFTP ingestion agent does not remove files automatically.
->
-> A shorter retention time reduces disk usage, increases the speed of the agent and reduces the risk of duplicate uploads. However, a shorter retention period increases the risk that data is lost if data cannot be retrieved by the agent or uploaded to Azure Operator Insights.
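For example, a minimal sketch for storing an SFTP password as a secret file follows; the directory path and file name are placeholders, and the `printf` form avoids writing a trailing newline.

```
# Hypothetical secrets directory and secret file name; restrict access to the agent's user.
sudo mkdir -p /etc/az-sftp-uploader/secrets
printf '%s' 'MySftpPassword' | sudo tee /etc/az-sftp-uploader/secrets/sftp-password > /dev/null
sudo chmod 700 /etc/az-sftp-uploader/secrets
sudo chmod 600 /etc/az-sftp-uploader/secrets/sftp-password
```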
-
-## VMs without public DNS: Map Azure host names to IP addresses
-
-**If your agent VMs have access to public DNS, skip this step and continue to [Install and configure agent software](#install-and-configure-agent-software).**
-
-If your agent VMs don't have access to public DNS, then you need to add entries on each agent VM to map the Azure host names to IP addresses.
-
-This process assumes that you're connecting to Azure over ExpressRoute and are using Private Links and/or Service Endpoints. If you're connecting over public IP addressing, you **cannot** use this workaround and must use public DNS.
-
-1. Create the following resources from a virtual network that is peered to your ingestion agents:
- - A Service Endpoint to Azure Storage
- - A Private Link or Service Endpoint to the Key Vault created by your Data Product.  The Key Vault is the same one you found in [Grant permissions for the Data Product Key Vault](#grant-permissions-for-the-data-product-key-vault).
-1. Note the IP addresses of these two connections.
-1. Note the ingestion URL for your Data Product.  You can find the ingestion URL on your Data Product overview page in the Azure portal, in the form *\<account name\>.blob.core.windows.net*.
-1. Note the URL of the Data Product Key Vault.  The URL appears as *\<vault name\>.vault.azure.net*.
-1. Add a line to */etc/hosts* on the VM linking the two values in this format, for each of the storage and Key Vault:
- ```
- <Storage private IP>   <ingestion URL>
- <Key Vault private IP>  <Key Vault URL>
- ````
-1. Add the public IP address of the URL *login.microsoftonline.com* to */etc/hosts*. You can use any of the public addresses resolved by DNS clients.
- ```
- <Public IP>   login.microsoftonline.com
- ````
-
-## Install and configure agent software
-
-Repeat these steps for each VM onto which you want to install the agent:
-
-1. In an SSH session, change to the directory where the RPM was copied.
-1. Install the RPM:  `sudo dnf install ./*.rpm`.  Answer 'y' when prompted.  If there are any missing dependencies, the RPM won't be installed.
-1. Change to the configuration directory: `cd /etc/az-sftp-uploader`
-1. Make a copy of the default configuration file:  `sudo cp example_config.yaml config.yaml`
-1. Edit the *config.yaml* file and fill out the fields. Start by filling out the parameters that don't depend on the type of Data Product.  Many parameters are set to default values and don't need to be changed.  The full reference for each parameter is described in [SFTP Ingestion Agents configuration reference](sftp-agent-configuration.md). The following parameters must be set:
-
- 1. **site\_id** should be changed to a unique identifier for your on-premises site – for example, the name of the city or state for this site.  This name becomes searchable metadata in Operator Insights for all data ingested by this agent. Reserved URL characters must be percent-encoded.
- 1. For the secret provider with name `data_product_keyvault`, set the following fields:
- 1. **provider.vault\_name** must be the name of the Key Vault for your Data Product. You identified this name in [Grant permissions for the Data Product Key Vault](#grant-permissions-for-the-data-product-key-vault).  
- 1. **provider.auth** must be filled out with:
-
- 1. **tenant\_id** as your Microsoft Entra ID tenant.
-
- 2. **identity\_name** as the application ID of the service principal that you created in [Create a service principal](#create-a-service-principal).
-
- 3. **cert\_path** as the file path of the base64-encoded pkcs12 certificate in the secrets directory folder, for the service principal to authenticate with.
- 1. For the secret provider with name `local_file_system`, set the following fields:
-
- 1. **provider.auth.secrets_directory** the absolute path to the secrets directory on the agent VM, which was created in the [Prepare the VMs](#prepare-the-vms) step.
-
-
- 1. **file_sources** a list of file source details, which specifies the configured SFTP server and configures which files should be uploaded, where they should be uploaded, and how often. Multiple file sources can be specified but only a single source is required. Must be filled out with the following values:
-
- 1. **source_id** a unique identifier for the file source. Any URL reserved characters in source_id must be percent-encoded.
-
- 1. **source.sftp** must be filled out with:
-
- 1. **host** the hostname or IP address of the SFTP server.
-
- 1. **base\_path** the path to a folder on the SFTP server from which files are uploaded to Azure Operator Insights.
-
- 1. **known\_hosts\_file** the path on the VM to the global known_hosts file, located at `/etc/ssh/ssh_known_hosts`. This file should contain the public SSH keys of the SFTP host server as outlined in [Prepare the VMs](#prepare-the-vms).
-
- 1. **user** the name of the user on the SFTP server that the agent should use to connect.
-
- 1. **auth** must be filled according to the authentication method that you chose in [Configure the connection between the SFTP server and VM](#configure-the-connection-between-the-sftp-server-and-vm). The required fields depend on which authentication type is specified:
-
- - Password:
-
- 1. **type** set to `password`
-
- 1. **secret\_name** is the name of the file containing the password in the `secrets_directory` folder.
-
- - SSH key:
-
- 1. **type** set to `ssh_key`
-
- 1. **key\_secret** is the name of the file containing the SSH key in the `secrets_directory` folder.
-
- 1. **passphrase\_secret\_name** is the name of the file containing the passphrase for the SSH key in the `secrets_directory` folder. If the SSH key doesn't have a passphrase, don't include this field.
-
-
-2. Continue to edit *config.yaml* to set the parameters that depend on the type of Data Product that you're using.
-For the **Monitoring - Affirmed MCC** Data Product, set the following parameters in each file source block in **file_sources**:
-
- 1. **source.settling_time_secs** set to `60`
- 2. **source.schedule** set to `0 */5 * * * * *` so that the agent checks for new files in the file source every 5 minutes
- 3. **sink.container\_name** set to `pmstats`
-
-> [!TIP]
-> The agent supports additional optional configuration for the following:
-> - Specifying a pattern of files in the `base_path` folder which will be uploaded (by default all files in the folder are uploaded)
-> - Specifying a pattern of files in the `base_path` folder which should not be uploaded
-> - A time and date before which files in the `base_path` folder will not be uploaded
-> - How often the SFTP agent uploads files (the value provided in the example configuration file corresponds to every hour)
-> - A settling time, which is a time period after a file is last modified that the agent will wait before it is uploaded (the value provided in the example configuration file is 5 minutes)
->
-> For more information about these configuration options, see [SFTP Ingestion Agents configuration reference](sftp-agent-configuration.md).
-
-## Start the agent software
-
-1. Start the agent: `sudo systemctl start az-sftp-uploader`
-
-2. Check that the agent is running: `sudo systemctl status az-sftp-uploader`
-
- 1. If you see any status other than `active (running)`, look at the logs as described in [Monitor and troubleshoot SFTP Ingestion Agents for Azure Operator Insights](troubleshoot-sftp-agent.md) to understand the error.  It's likely that some configuration is incorrect.
-
- 2. Once you resolve the issue,  attempt to start the agent again.
-
- 3. If issues persist, raise a support ticket.
-
-3. Once the agent is running, ensure it starts automatically after reboots: `sudo systemctl enable az-sftp-uploader.service`
-
-4. Save a copy of the delivered RPM – you need it to reinstall or to back out any future upgrades.
-
-## Related content
-
-[Manage SFTP Ingestion Agents for Azure Operator Insights](how-to-manage-sftp-agent.md)
-
-[Monitor and troubleshoot SFTP Ingestion Agents for Azure Operator Insights](troubleshoot-sftp-agent.md)
operator-insights How To Manage Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-manage-mcc-edr-agent.md
- Title: Manage MCC EDR Ingestion Agents for Azure Operator Insights
-description: Learn how to upgrade, update, roll back and manage MCC EDR Ingestion agents for AOI
----- Previously updated : 11/02/2023--
-# Manage MCC EDR Ingestion Agents for Azure Operator Insights
-
-The MCC EDR agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. You might need to upgrade the agent, update its configuration, roll back changes or rotate its certificates.
-
-> [!WARNING]
-> When the agent is restarted, a small number of EDRs being handled may be dropped.  It is not possible to gracefully restart without dropping any data.  For safety, update agents one at a time, only updating the next when you are sure the previous was successful.
-
-## Upgrade agent software
-
-To upgrade to a new release of the agent, repeat the following steps on each VM that has the old agent.
-
-1. Copy the RPM to the VM.  In an SSH session, change to the directory where the RPM was copied.
-
-1. Save a copy of the existing */etc/az-mcc-edr-uploader/config.yaml* configuration file.
-
-1. Upgrade the RPM: `sudo dnf install ./*.rpm`.  Answer 'y' when prompted.  
-
-1. Create a new config file based on the new sample, keeping values from the original. Follow specific instructions in the release notes for the upgrade to ensure the new configuration is generated correctly.
-
-1. Restart the agent: `sudo systemctl restart az-mcc-edr-uploader.service`
-
-1. Once the agent is running, make sure it will automatically start on a reboot: `sudo systemctl enable az-mcc-edr-uploader.service`
-1. Verify that the agent is running and that EDRs are being routed to it as described in [Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights](troubleshoot-mcc-edr-agent.md).
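Taken together, an upgrade on one VM might look like the following sketch, assuming the new RPM is in the current directory; the backup path is a placeholder.

```
# Back up the old config, upgrade the RPM, then recreate config.yaml from the new sample before restarting.
sudo cp /etc/az-mcc-edr-uploader/config.yaml ~/config.yaml.backup
sudo dnf install ./*.rpm
# ...recreate /etc/az-mcc-edr-uploader/config.yaml from the new sample, re-applying your values...
sudo systemctl restart az-mcc-edr-uploader.service
sudo systemctl enable az-mcc-edr-uploader.service
```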
-
-## Update agent configuration
-
-> [!WARNING]
-> Changing the configuration requires restarting the agent, whereupon a small number of EDRs being handled may be dropped.  It is not possible to gracefully restart without dropping any data.  For safety, update agents one at a time, only updating the next when you are sure the previous was successful.
-
-If you need to change the agent's configuration, perform the following steps:
-
-1. Save a copy of the original configuration file */etc/az-mcc-edr-uploader/config.yaml*
-
-1. Edit the configuration file to change the config values.  
-
-1. Restart the agent: `sudo systemctl restart az-mcc-edr-uploader.service`
-
-## Roll back upgrades or configuration changes
-
-If an upgrade or configuration change fails:
-
-1. Copy the backed-up configuration file from before the change to the */etc/az-mcc-edr-uploader/config.yaml* file.
-
-1. If a software upgrade failed, downgrade back to the original RPM.
-
-1. Restart the agent: `sudo systemctl restart az-mcc-edr-uploader.service`
-
-1. If this was a software upgrade, make sure it will automatically start on a reboot: `sudo systemctl enable az-mcc-edr-uploader.service`
-
-## Rotate certificates
-
-You must refresh your service principal credentials before they expire.
-
-To do so:
-
-1. Create a new certificate, and add it to the service principal. For instructions, refer to [Upload a trusted certificate issued by a certificate authority](/entra/identity-platform/howto-create-service-principal-portal).
-
-1. Obtain the new certificate and private key in the base64-encoded PKCS12 format, as described in [Create and configure MCC EDR Ingestion Agents for Azure Operator Insights](how-to-install-mcc-edr-agent.md).
-
-1. Copy the certificate to the ingestion agent VM.
-
-1. Save the existing certificate file and replace with the new certificate file.
-
-1. Restart the agent: `sudo systemctl restart az-mcc-edr-uploader.service`
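For example, steps 3 to 5 might look like the following sketch; the certificate path is a placeholder and must match the `cert_path` value in your *config.yaml*.

```
# Keep the old certificate, swap in the new one, then restart the agent.
sudo cp /etc/az-mcc-edr-uploader/certkey.pkcs /etc/az-mcc-edr-uploader/certkey.pkcs.old
sudo cp ~/new-certkey.pkcs /etc/az-mcc-edr-uploader/certkey.pkcs
sudo systemctl restart az-mcc-edr-uploader.service
```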
operator-insights How To Manage Sftp Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-manage-sftp-agent.md
- Title: Manage SFTP Ingestion Agents for Azure Operator Insights
-description: Learn how to upgrade, update, roll back and manage SFTP Ingestion agents for AOI
----- Previously updated : 12/06/2023--
-# Manage SFTP Ingestion Agents for Azure Operator Insights
-
-The SFTP agent is a software package that is installed onto a Linux Virtual Machine (VM) owned and managed by you. You might need to upgrade the agent, update its configuration, roll back changes or rotate its certificates.
-
-> [!TIP]
-> When the agent restarts, each configured file source performs an immediate catch-up upload run. Subsequent upload runs take place according to the configured schedule for each file source.
-
-## Upgrade the agent software
-
-To upgrade to a new release of the agent, repeat the following steps on each VM that has the old agent.
-
-1. Copy the RPM to the VM.  In an SSH session, change to the directory where the RPM was copied.
-
-2. Save a copy of the existing */etc/az-sftp-uploader/config.yaml* configuration file.
-
-3. Upgrade the RPM: `sudo dnf install ./*.rpm`.  Answer 'y' when prompted.
-
-4. Create a new config file based on the new sample, keeping values from the original. Follow specific instructions in the release notes for the upgrade to ensure the new configuration is generated correctly.
-
-5. Restart the agent: `sudo systemctl restart az-sftp-uploader.service`
-
-6. Once the agent is running, configure the az-sftp-uploader service to automatically start on a reboot: `sudo systemctl enable az-sftp-uploader.service`
-7. Verify that the agent is running and that it's copying files as described in [Monitor and troubleshoot SFTP Ingestion Agents for Azure Operator Insights](troubleshoot-sftp-agent.md).
-
-## Update agent configuration
-
-If you need to change the agent's configuration, perform the following steps:
-
-1. Save a copy of the original configuration file */etc/az-sftp-uploader/config.yaml*
-
-2. Edit the configuration file to change the config values.  
-
-> [!WARNING]
-> If you change the `source_id` for a file source, the agent treats it as a new file source and might upload duplicate files with the new `source_id`. To avoid this, add the `exclude_before_time` parameter to the file source configuration. For example, if you configure `exclude_before_time: "2024-01-01T00:00:00-00:00"` then any files last modified before midnight on January 1, 2024 UTC will be ignored by the agent.
-
-3. Restart the agent: `sudo systemctl restart az-sftp-uploader.service`
-
-## Roll back upgrades or configuration changes
-
-If an upgrade or configuration change fails:
-
-1. Copy the backed-up configuration file from before the change to the */etc/az-sftp-uploader/config.yaml* file.
-
-1. If a software upgrade failed, downgrade back to the original RPM.
-
-1. Restart the agent: `sudo systemctl restart az-sftp-uploader.service`
-
-1. If this was a software upgrade, configure the az-sftp-uploader service to automatically start on a reboot: `sudo systemctl enable az-sftp-uploader.service`
-
-## Rotate certificates
-
-You must refresh your service principal credentials before they expire.
-
-To do so:
-
-1. Create a new certificate, and add it to the service principal. For instructions, refer to [Upload a trusted certificate issued by a certificate authority](/entra/identity-platform/howto-create-service-principal-portal).
-
-1. Obtain the new certificate and private key in the base64-encoded PKCS12 format, as described in [Create and configure SFTP Ingestion Agents for Azure Operator Insights](how-to-install-sftp-agent.md).
-
-1. Copy the certificate to the ingestion agent VM.
-
-1. Save the existing certificate file and replace with the new certificate file.
-
-1. Restart the agent: `sudo systemctl restart az-sftp-uploader.service`
operator-insights Ingestion Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/ingestion-agent-release-notes-archive.md
+
+ Title: Archive for What's new with Azure Operator Insights ingestion agent
+description: Release notes for Azure Operator Insights ingestion agent versions older than six months.
+ Last updated : 02/28/2024++
+# Archive for What's new with Azure Operator Insights ingestion agent
+
+The primary [What's new in Azure Operator Insights ingestion agent?](ingestion-agent-release-notes.md) article contains updates for the last six months, while this article contains all the older information.
+
+The Azure Operator Insights ingestion agent receives improvements on an ongoing basis. This article provides you with information about:
+
+- Previous releases
+- Known issues
+- Bug fixes
+
+## Related content
+
+- [Azure Operator Insights ingestion agent overview](ingestion-agent-overview.md)
operator-insights Ingestion Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/ingestion-agent-release-notes.md
+
+ Title: What's new with Azure Operator Insights ingestion agent
+description: This article has release notes for Azure Operator Insights ingestion agent. For many of the summarized issues, there are links to more details.
+ Last updated : 02/28/2024++
+# What's new with Azure Operator Insights ingestion agent
+
+The Azure Operator Insights ingestion agent receives improvements on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
+
+- The latest releases
+- Known issues
+- Bug fixes
+
+This page is updated for each new release of the ingestion agent, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Operator Insights ingestion agent](ingestion-agent-release-notes-archive.md).
+
+## Version 1.0.0 - February 2024
+
+Download for [RHEL8](https://download.microsoft.com/download/c/6/c/c6c49e4b-dbb8-4d00-be7f-f6916183b6ac/az-aoi-ingestion-1.0.0-1.el8.x86_64.rpm).
+
+### Known issues
+
+None
+
+### New features
+
+This is the first release of the Azure Operator Insights ingestion agent. It supports ingestion of Affirmed MCC EDRs and of arbitrary files from an SFTP server.
+
+### Fixed
+
+None
+
+## Related content
+
+- [Azure Operator Insights ingestion agent overview](ingestion-agent-overview.md)
operator-insights Mcc Edr Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/mcc-edr-agent-configuration.md
- Title: MCC EDR Ingestion Agents configuration reference for Azure Operator Insights
-description: This article documents the complete set of configuration for the agent, listing all fields with examples and explanatory comments.
---- Previously updated : 11/02/2023---
-# MCC EDR Ingestion Agents configuration reference
-
-This reference provides the complete set of configuration for the agent, listing all fields with examples and explanatory comments.
-
-```
-# The name of the site this agent lives in
-site_id: london-lab01
-# The identifier for this agent
-agent_id: mcc-edr-agent01
-# Config for secrets providers. We currently support reading secrets from Azure Key Vault and from the local filesystem.
-# Multiple secret providers can be defined and each must be given a unique name.
-# The name can then be referenced for secrets later in the config.
-secret_providers:
- - name: dp_keyvault
- provider:
- type: key_vault
- vault_name: contoso-dp-kv
- auth:
- tenant_id: ad5421f5-99e4-44a9-8a46-cc30f34e8dc7
- identity_name: 98f3263d-218e-4adf-b939-eacce6a590d2
- cert_path: /path/to/local/certkey.pkcs
-# Source configuration. This controls how EDRs are ingested from MCC.
-source:
- # The TCP port to listen on. Must match the port MCC is configured to send to.
- listen_port: 36001
- # The maximum amount of data to buffer in memory before uploading.
- message_queue_capacity_in_bytes: 33554432
- # The maximum size of a single blob (file) to store in the input storage account in Azure.
- maximum_blob_size_in_bytes: 134217728
- # Quick check on the maximum RAM that the agent should use.
- # This is a guide to check the other tuning parameters, rather than a hard limit.
- maximum_overall_capacity_in_bytes: 1275068416
- # The maximum time to wait when no data is received before uploading pending batched data to Azure.
- blob_rollover_period_in_seconds: 300
- # EDRs greater than this size are dropped. Subsequent EDRs continue to be processed.
- # This condition likely indicates MCC sending larger than expected EDRs. MCC is not normally expected
- # to send EDRs larger than the default size. If EDRs are being dropped because of this limit,
- # investigate and confirm that the EDRs are valid, and then increase this value.
- soft_maximum_edr_size_in_bytes: 20480
- # EDRs greater than this size are dropped and the connection from MCC is closed. This condition
- # likely indicates an MCC bug or MCC sending corrupt data. It prevents the agent from uploading
- # corrupt EDRs to Azure. You should not need to change this value.
- hard_maximum_edr_size_in_bytes: 100000
-sink:
- # The container within the ingestion account.
- # This *must* be in the format Azure Operator Insights expects.
- # Do not adjust without consulting your support representative.
- container_name: edr
- # Optional. How often, in hours, the agent should refresh its ADLS token. Defaults to 1.
- adls_token_cache_period_hours: 1
- auth:
- type: sas_token
- # This must reference a secret provider configured above.
- secret_provider: dp_keyvault
- # The name of a secret in the corresponding provider.
- # This will be the name of a secret in the Key Vault.
- # This is created by the Data Product and should not be changed.
- secret_name: input-storage-sas
- # Optional. The maximum size of each block that is uploaded to Azure.
- # Each blob is composed of one or more blocks. Defaults to 32MiB (=33554432 bytes).
- block_size_in_bytes: 33554432
-```
operator-insights Set Up Ingestion Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/set-up-ingestion-agent.md
The VM used for the ingestion agent should be set up following best practice for
Download the RPM for the ingestion agent using the details you received as part of the [Azure Operator Insights onboarding process](overview.md#how-do-i-get-access-to-azure-operator-insights) or from [https://go.microsoft.com/fwlink/?linkid=2260508](https://go.microsoft.com/fwlink/?linkid=2260508).
+Links to the current and previous releases of the agents are available below the heading of each [release note](ingestion-agent-release-notes.md). If you're looking for an agent version that's more than 6 months old, check out the [release notes archive](ingestion-agent-release-notes-archive.md).
+
## Set up authentication to Azure

You must have a service principal with a certificate credential that can access the Azure Key Vault created by the Data Product to retrieve storage credentials. Each agent must also have a copy of a valid certificate and private key for the service principal stored on this virtual machine.
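As an illustration of this prerequisite only (not the full procedure, which is covered in the Data Product documentation), the following Azure CLI sketch creates a service principal with a certificate credential and grants it permission to read secrets from a Key Vault. The names `aoi-ingestion-sp` and `contoso-dp-kv` are placeholders, and the `az keyvault set-policy` step assumes the vault uses access policies rather than Azure RBAC.

```bash
# Create a service principal with a self-signed certificate credential.
# The certificate is written to a local PEM file that you then copy to the agent VM.
az ad sp create-for-rbac --name aoi-ingestion-sp --create-cert --years 1

# Allow the service principal to read secrets (such as the ingestion SAS token)
# from the Key Vault created by the Data Product.
az keyvault set-policy \
  --name contoso-dp-kv \
  --spn <appId-from-previous-output> \
  --secret-permissions get list
```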
operator-insights Sftp Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/sftp-agent-configuration.md
- Title: SFTP Ingestion Agents configuration reference for Azure Operator Insights
-description: This article documents the complete set of configuration for the SFTP ingestion agent, listing all fields with examples and explanatory comments.
----- Previously updated : 12/06/2023-
-# SFTP Ingestion Agents configuration reference
-
-This reference provides the complete set of configuration options for the SFTP ingestion agent, listing all fields with examples and explanatory comments.
-
-```
-# The name of the site this agent lives in. Reserved URL characters must be percent-encoded.
-site_id: london-lab01
-# Config for secrets providers. We support reading secrets from Azure Key Vault and from the VM's local filesystem.
-# Multiple secret providers can be defined and each must be given a unique name, which is referenced later in the config.
-# Two secret providers must be configured for the SFTP agent to run:
-# A secret provider of type `key_vault` which contains details required to connect to the Azure Key Vault and allow connection to the storage account.
-# A secret provider of type `file_system`, which specifies a directory on the VM where secrets for connecting to the SFTP server are stored.
-secret_providers:
- - name: data_product_keyvault
- provider:
- type: key_vault
- vault_name: contoso-dp-kv
- auth:
- tenant_id: ad5421f5-99e4-44a9-8a46-cc30f34e8dc7
- identity_name: 98f3263d-218e-4adf-b939-eacce6a590d2
- cert_path: /path/to/local/certkey.pkcs
- - name: local_file_system
- provider:
- # The file system provider specifies a folder in which secrets are stored.
- # Each secret must be an individual file without a file extension, where the secret name is the file name, and the file contains the secret only.
- type: file_system
- # The absolute path to the secrets directory
- secrets_directory: /path/to/secrets/directory
-file_sources:
- # Source configuration. This specifies which files are ingested from the SFTP server.
- # Multiple sources can be defined here (where they can reference different folders on the same SFTP server). Each source must have a unique identifier where any URL reserved characters in source_id must be percent-encoded.
- # A sink must be configured for each source.
- - source_id: sftp-source01
- source:
- sftp:
- # The IP address or hostname of the SFTP server.
- host: 192.0.2.0
- # Optional. The port to connect to on the SFTP server. Defaults to 22.
- port: 22
- # The path to a folder on the SFTP server that files will be uploaded to Azure Operator Insights from.
- base_path: /path/to/sftp/folder
- # The path on the VM to the 'known_hosts' file for the SFTP server.  This file must be in SSH format and contain details of any public SSH keys used by the SFTP server. This is required by the agent to verify it is connecting to the correct SFTP server.
- known_hosts_file: /path/to/known_hosts
- # The name of the user on the SFTP server which the agent will use to connect.
- user: sftp-user
- auth:
- # The name of the secret provider configured above which contains the secret for the SFTP user.
- secret_provider: local_file_system
- # The form of authentication to the SFTP server. This can take the values 'password' or 'ssh_key'. The appropriate field(s) must be configured below depending on which type is specified.
- type: password
- # Only for use with 'type: password'. The name of the file containing the password in the secrets_directory folder
- secret_name: sftp-user-password
- # Only for use with 'type: ssh_key'. The name of the file containing the SSH key in the secrets_directory folder
- key_secret: sftp-user-ssh-key
- # Optional. Only for use with 'type: ssh_key'. The passphrase for the SSH key. This can be omitted if the key is not protected by a passphrase.
- passphrase_secret_name: sftp-user-ssh-key-passphrase
- # Optional. A regular expression to specify which files in the base_path folder should be ingested. If not specified, the SFTP agent will attempt to ingest all files in the base_path folder (subject to exclude_pattern, settling_time_secs and exclude_before_time).
- include_pattern: '.*\.csv$'
- # Optional. A regular expression to specify any files in the base_path folder which should not be ingested. Takes priority over include_pattern, so files which match both regular expressions will not be ingested.
- exclude_pattern: '\.backup$'
- # A duration in seconds. During an upload run, any files last modified within the settling time are not selected for upload, as they may still be being modified.
- settling_time_secs: 60
- # A datetime that adheres to the RFC 3339 format. Any files last modified before this datetime will be ignored.
- exclude_before_time: "2022-12-31T21:07:14-05:00"
- # An expression in cron format, specifying when upload runs are scheduled for this source. All times refer to UTC. The cron schedule should include fields for: second, minute, hour, day of month, month, day of week, and year. E.g.:
- # `0 */3 * * * * *` for once every 3 minutes
- # `0 30 5 * * * *` for 05:30 every day
- # `0 15 3 * * Fri,Sat *` for 03:15 every Friday and Saturday
- schedule: "*/30 * * * Apr-Jul Fri,Sat,Sun 2025"
- sink:
- auth:
- type: sas_token
- # This must reference a secret provider configured above.
- secret_provider: data_product_keyvault
- # The name of a secret in the corresponding provider.
- # This will be the name of a secret in the Key Vault.
- # This is created by the Data Product and should not be changed.
- secret_name: input-storage-sas
- # The container within the ingestion account. This *must* be exactly the name of the container that Azure Operator Insights expects.
- container_name: example-container
- # Optional. A string giving an optional base path to use in Azure Blob Storage. Reserved URL characters must be percent-encoded. It may be required depending on the Data Product.
- base_path: pmstats
- # Optional. How often, in hours, the sink should refresh its ADLS token. Defaults to 1.
- adls_token_cache_period_hours: 1
- # Optional. The maximum number of blobs that can be uploaded to ADLS in parallel. Further blobs will be queued in memory until an upload completes. Defaults to 10.
- # Note: This value is also the maximum number of concurrent SFTP reads for the associated source. Ensure your SFTP server can handle this many concurrent connections. If you set this to a value greater than 10 and are using an OpenSSH server, you may need to increase `MaxSessions` and/or `MaxStartups` in `sshd_config`.
- maximum_parallel_uploads: 10
- # Optional. The maximum size of each block that is uploaded to Azure.
- # Each blob is composed of one or more blocks. Defaults to 32MiB (=33554432 Bytes).
- block_size_in_bytes: 33554432
- ```
operator-insights Sftp Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/sftp-agent-overview.md
- Title: Overview of SFTP Ingestion Agents for Azure Operator Insights
-description: Understand how SFTP ingestion agents for Azure Operator Insights collect and upload data about your network to Azure
----- Previously updated : 12/8/2023-
-#CustomerIntent: As a someone deploying Azure Operator Insights, I want to understand how SFTP agents work so that I can set one up and configure it for my network.
--
-# SFTP Ingestion Agent overview
-
-An SFTP Ingestion Agent collects files from one or more SFTP servers, and uploads them to Azure Operator Insights.
-
-## File sources
-
-An SFTP ingestion agent collects files from _file sources_ that you configure on it. A file source includes the details of the SFTP server, the files to collect from it and how to manage those files.
-
-For example, a single SFTP server might have logs, CSV files and text files. You could configure each type of file as a separate file source. For each file source, you can specify the directory to collect files from (optionally including or excluding specific files based on file paths), how often to collect files and other options. For full details of the available options, see [SFTP Ingestion Agents configuration reference](sftp-agent-configuration.md).
-
-File sources have the following restrictions:
-- File sources must not overlap, meaning that they must not collect the same files from the same servers.
-- You must configure each file source on exactly one agent. If you configure a file source on multiple agents, Azure Operator Insights receives duplicate data.
-## Processing files
-
-The SFTP agent uploads files to Azure Operator Insights during scheduled _upload runs_. The frequency of these runs is defined in the file source's configuration. Each upload run uploads files according to the file source's configuration:
-- File paths and regular expressions for including and excluding files specify the files to upload.
-- The _settling time_ excludes files last modified within this period from any upload. For example, if the upload run starts at 05:30 and the settling time is 60 seconds (one minute), the upload run only uploads files modified before 05:29.
-- The _exclude before time_ (if set) excludes files last modified before the specified date and time.
-The SFTP agent records when it last completed an upload run for a file source. It uses this record to determine which files to upload during the next upload run, using the following process:
-
-1. The agent checks the last recorded time.
-1. The agent uploads any files modified since that time. It assumes that it processed older files during a previous upload run.
-1. At the end of the upload run:
- - If the agent uploaded all the files or the only errors were nonretryable errors, the agent updates the record. The new time is based on the time the upload run started, minus the settling time.
- - If the upload run had retryable errors (for example, if the connection to Azure was lost), the agent doesn't update the record. Not updating the record allows the agent to retry the upload for any files that didn't upload successfully. Retries don't duplicate any data previously uploaded.
-
-The SFTP agent is designed to be highly reliable and resilient to low levels of network disruption. If an unexpected error occurs, the agent restarts and provides service again as soon as it's running. After a restart, the SFTP agent carries out an immediate catch-up upload run for all configured file sources. It then returns to its configured schedule.
-
-## Authentication
-
-The SFTP agent authenticates to two separate systems, with separate credentials.
-- To authenticate to the ingestion endpoint of an Azure Operator Insights Data Product, the agent obtains a connection string from an Azure Key Vault. The agent authenticates to this Key Vault with a Microsoft Entra ID service principal and certificate that you set up when you create the agent.
-- To authenticate to your SFTP server, the agent can use password authentication or SSH key authentication.
-For configuration instructions, see [Set up authentication to Azure](how-to-install-sftp-agent.md#set-up-authentication-to-azure) and [Configure the connection between the SFTP server and VM](how-to-install-sftp-agent.md#configure-the-connection-between-the-sftp-server-and-vm).
-
-## Next step
-
-> [!div class="nextstepaction"]
-> [Create and configure SFTP Ingestion Agents for Azure Operator Insights](how-to-install-sftp-agent.md)
operator-insights Troubleshoot Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/troubleshoot-mcc-edr-agent.md
- Title: Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights
-description: Learn how to monitor MCC EDR Ingestion Agents and troubleshoot common issues
----- Previously updated : 12/06/2023--
-# Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights
-
-If you notice problems with data collection from your MCC EDR ingestion agents, use the information in this section to fix common problems or create a diagnostics package. You can upload the diagnostics package to support tickets that you create in the Azure portal.
-
-## Agent diagnostics overview
-
-The ingestion bus agents are software packages, so their diagnostics are limited to the functioning of the application. Microsoft doesn't provide OS or resource monitoring. You're encouraged to use standard tooling such as snmpd, Prometheus node exporter, or others to send OS-level data and telemetry to your own monitoring systems. [Monitor virtual machines with Azure Monitor](../azure-monitor/vm/monitor-virtual-machine.md) describes tools you can use if your ingestion agents are running on Azure VMs.
-
-You can also use the diagnostics provided by the MCCs, or by Azure Operator Insights itself in Azure Monitor, to help identify and debug ingestion issues.
-
-The agent writes logs and metrics to files under */var/log/az-mcc-edr-uploader/*. If the agent is failing to start for any reason, such as misconfiguration, the stdout.log file contains human-readable logs explaining the issue.
-
-Metrics are reported in a simple human-friendly form. They're provided primarily for Microsoft Support to have telemetry for debugging unexpected issues.
-
-## Troubleshoot common issues
-
-For most of these troubleshooting techniques, you need an SSH connection to the VM running the agent.
-
-### Agent fails to start
-
-Symptoms: `sudo systemctl status az-mcc-edr-uploader` shows that the service is in failed state.
-
-Steps to fix:
-- Ensure the service is running: `sudo systemctl start az-mcc-edr-uploader`.
-- Look at the */var/log/az-mcc-edr-uploader/stdout.log* file and check for any reported errors. Fix any issues with the configuration file and start the agent again. A combined sketch of these checks follows this list.
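A typical first-pass check from an SSH session on the agent VM might look like this sketch, using standard systemd and shell tooling; adjust the log path if your installation differs.

```bash
# Check whether the service is running and why it last stopped.
sudo systemctl status az-mcc-edr-uploader

# Try to (re)start it.
sudo systemctl start az-mcc-edr-uploader

# Inspect the agent's own log for configuration errors.
sudo tail -n 100 /var/log/az-mcc-edr-uploader/stdout.log
```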
-### MCC cannot connect
-
-Symptoms: MCC reports alarms about MSFs being unavailable.
-
-Steps to fix:
-- Check that the agent is running.
-- Ensure that MCC is configured with the correct IP and port.
-- Check the logs from the agent and see if it's reporting connections. If not, check the network connectivity to the agent VM and verify that the firewalls aren't blocking traffic to port 36001.
-- Collect a packet capture to see where the connection is failing (see the sketch after this list).
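For the connectivity checks, a sketch using standard Linux networking tools (assuming they're installed on the agent VM) could be:

```bash
# Confirm the agent is listening on the configured port (36001 by default).
sudo ss -tlnp | grep 36001

# Capture traffic on the EDR port to see whether packets from MCC reach the VM at all.
# Stop with Ctrl+C and inspect the capture with a tool such as Wireshark.
sudo tcpdump -i any port 36001 -w mcc-edr-capture.pcap
```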
-### No EDRs appearing in AOI
-
-Symptoms: no data appears in Azure Data Explorer.
-
-Steps to fix:
-- Check that the MCC is healthy and ingestion bus agents are running.
-- Check the logs from the ingestion agent for errors uploading to Azure. If the logs point to an invalid connection string, or connectivity issues, fix the configuration/connection string and restart the agent.
-- Check the network connectivity and firewall configuration on the storage account.
-### Data missing or incomplete
-
-Symptoms: Azure Monitor shows a lower incoming EDR rate in ADX than expected.
-
-Steps to fix:
-- Check that the agent is running on all VMs and isn't reporting errors in logs.
-- Verify that the agent VMs aren't being sent more than the rated load.
-- Check agent metrics for dropped bytes/dropped EDRs. If the metrics don't show any dropped data, then MCC isn't sending the data to the agent. Check the "received bytes" metrics to see how much data is being received from MCC.
-- Check that the agent VM isn't overloaded: monitor CPU and memory usage, and ensure no other process is taking resources from the VM (see the sketch after this list).
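Ordinary Linux tools are enough to spot an overloaded VM; for example (a generic sketch, not agent-specific tooling):

```bash
# Snapshot CPU usage, sorted by the busiest processes.
top -b -n 1 | head -n 20

# Check available memory and swap.
free -h
```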
-## Collect diagnostics
-
-Microsoft Support might request diagnostic packages when investigating an issue.
-
-To collect a diagnostics package, SSH to the Virtual Machine and run the command `/usr/bin/microsoft/az-ingestion-gather-diags`. This command generates a date-stamped zip file in the current directory that you can copy from the system.
-
-> [!NOTE]
-> Diagnostics packages don't contain any customer data or the value of the Azure Storage connection string.
operator-insights Troubleshoot Sftp Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/troubleshoot-sftp-agent.md
- Title: Monitor and troubleshoot SFTP Ingestion Agents for Azure Operator Insights
-description: Learn how to monitor SFTP Ingestion Agents and troubleshoot common issues
----- Previously updated : 12/06/2023--
-# Monitor and troubleshoot SFTP Ingestion Agents for Azure Operator Insights
-
-If you notice problems with data collection from your SFTP ingestion agents, use the information in this section to fix common problems or create a diagnostics package. You can upload the diagnostics package to support tickets that you create in the Azure portal.
-
-## Agent diagnostics overview
-
-The ingestion bus agents are software packages, so their diagnostics are limited to the functioning of the application. Microsoft doesn't provide OS or resource monitoring. You're encouraged to use standard tooling such as snmpd, Prometheus node exporter, or others to send OS-level data, logs and metrics to your own monitoring systems. [Monitor virtual machines with Azure Monitor](../azure-monitor/vm/monitor-virtual-machine.md) describes tools you can use if your ingestion agents are running on Azure VMs.
-
-You can also use the diagnostics provided by Azure Operator Insights itself in Azure Monitor to help identify and debug ingestion issues.
-
-The agent writes logs and metrics to files under */var/log/az-sftp-uploader/*. If the agent is failing to start for any reason, such as misconfiguration, the stdout.log file contains human-readable logs explaining the issue.
-
-Metrics are reported in a simple human-friendly form. They're provided primarily to help Microsoft Support debug unexpected issues.
-
-## Troubleshoot common issues
-
-For most of these troubleshooting techniques, you need an SSH connection to the VM running the agent.
-
-### Agent fails to start
-
-Symptoms: `sudo systemctl status az-sftp-uploader` shows that the service is in failed state.
-
-Steps to fix:
-- Ensure the service is running: `sudo systemctl start az-sftp-uploader`.
-- Look at the */var/log/az-sftp-uploader/stdout.log* file and check for any reported errors. Fix any issues with the configuration file and start the agent again.
-### Agent can't connect to SFTP server
-
-Symptoms: No files are uploaded to AOI. The agent log file, */var/log/az-sftp-uploader/stdout.log*, contains errors about connecting the SFTP server.
-
-Steps to fix:
-- Verify the SFTP user and credentials used by the agent are valid for the SFTP server.
-- Check network connectivity and firewall configuration between the agent and the SFTP server. By default, the SFTP server must have port 22 open to accept SFTP connections.
-- Check that the `known_hosts` file on the agent VM contains a valid public SSH key for the SFTP server:
  - On the agent VM, run `ssh-keygen -l -F <sftp-server-IP-or-hostname>`.
  - If there's no output, then `known_hosts` doesn't contain a matching entry. Follow the instructions in [Learn how to create and configure SFTP Ingestion Agents for Azure Operator Insights](how-to-install-sftp-agent.md) to add a `known_hosts` entry for the SFTP server, or see the sketch after this list.
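One way to add the missing entry is sketched below with standard OpenSSH tooling; the hostname is a placeholder, the path should match the `known_hosts_file` value in your agent config, and you should verify the key fingerprint out of band before trusting it.

```bash
# Fetch the SFTP server's public host key(s) and append them to the agent's known_hosts file.
ssh-keyscan <sftp-server-IP-or-hostname> >> /path/to/known_hosts

# Confirm the entry is now present and print its fingerprint for verification.
ssh-keygen -l -F <sftp-server-IP-or-hostname> -f /path/to/known_hosts
```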
--
-### No files are uploaded to Azure Operator Insights
-
-Symptoms:
-- No data appears in Azure Data Explorer.
-- The AOI *Data Ingested* metric for the relevant data type is zero.
-Steps to fix:
-- Check that the agent is running on all VMs and isn't reporting errors in logs.
-- Check that files exist in the correct location on the SFTP server, and that they aren't being excluded due to file source config (see [Files are missing](#files-are-missing)).
-- Check the network connectivity and firewall configuration between the ingestion agent and Azure Operator Insights.
-### Files are missing
-
-Symptoms:
-- Data is missing from Azure Data Explorer.
-- The AOI *Data Ingested* and *Processed File Count* metrics for the relevant data type are lower than expected.
-Steps to fix:
-- Check that the agent is running on all VMs and isn't reporting errors in logs. Search the logs for the name of the missing file to find errors related to that file.
-- Check that the files exist on the SFTP server and that they aren't being excluded due to file source config. Check the file source config and confirm that:
- - The files exist on the SFTP server under the path defined in `base_path`. Ensure that there are no symbolic links in the file paths of the files to upload: the ingestion agent ignores symbolic links.
- - The "last modified" time of the files is at least `settling_time_secs` seconds earlier than the time of the most recent upload run for this file source.
- - The "last modified" time of the files is later than `exclude_before_time` (if specified).
- - The file path relative to `base_path` matches the regular expression given by `include_pattern` (if specified).
- - The file path relative to `base_path` *doesn't* match the regular expression given by `exclude_pattern` (if specified).
-- If recent files are missing, check the agent logs to confirm that the ingestion agent performed an upload run for the file source at the expected time. The `schedule` parameter in the file source config gives the expected schedule.
-- Check that the agent VM isn't overloaded: monitor CPU and memory usage. In particular, ensure no other process is taking resources from the VM.
-### Files are uploaded more than once
-
-Symptoms:
-- Duplicate data appears in Azure Operator Insights-
-Steps to fix:
-- Check whether the ingestion agent encountered a retryable error on a previous upload and then retried that upload more than 24 hours after the last successful upload. In that case, the agent might upload duplicate data during the retry attempt. The duplication of data should affect only the retry attempt.
-- Check that the file sources defined in the config file refer to non-overlapping sets of files. If multiple file sources are configured to pull files from the same location on the SFTP server, use the `include_pattern` and `exclude_pattern` config fields to specify distinct sets of files that each file source should consider.
-- If you're running multiple instances of the SFTP ingestion agent, check that the file sources configured for each agent don't overlap with file sources on any other agent. In particular, look out for file source config that has been accidentally copied from another agent's config.
-- If you recently changed the `source_id` for a configured file source, use the `exclude_before_time` field to avoid files being reuploaded with the new `source_id`. For instructions, see [Manage SFTP Ingestion Agents for Azure Operator Insights: Update agent configuration](how-to-manage-sftp-agent.md#update-agent-configuration).
-## Collect diagnostics
-
-Microsoft Support might request diagnostic packages when investigating an issue.
-
-To collect a diagnostics package, SSH to the Virtual Machine and run the command `/usr/bin/microsoft/az-ingestion-gather-diags`. This command generates a date-stamped zip file in the current directory that you can copy from the system.
-
-> [!NOTE]
-> Diagnostics packages don't contain any customer data or the value of any credentials.
operator-insights Upgrade Ingestion Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/upgrade-ingestion-agent.md
In this article, you'll upgrade your ingestion agent and roll back an upgrade.
Obtain the latest version of the ingestion agent RPM from [https://go.microsoft.com/fwlink/?linkid=2260508](https://go.microsoft.com/fwlink/?linkid=2260508).
+Links to the current and previous releases of the agents are available below the heading of each [release note](ingestion-agent-release-notes.md). If you're looking for an agent version that's more than 6 months old, check out the [release notes archive](ingestion-agent-release-notes-archive.md).
+ ## Upgrade the agent software To upgrade to a new release of the agent, repeat the following steps on each VM that has the old agent.
postgresql Concepts Networking Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private.md
Choose this networking option if you want the following capabilities:
:::image type="content" source="./media/how-to-manage-virtual-network-portal/flexible-pg-vnet-diagram.png" alt-text="Diagram that shows how peering works between virtual networks, one of which includes an Azure Database for PostgreSQL flexible server instance."::: In the preceding diagram:-- Azure Database for PostgreSQL flexible server instances are injected into subnet 10.0.1.0/24 of the VNet-1 virtual network.
+- Azure Databases for PostgreSQL flexible server instances are injected into subnet 10.0.1.0/24 of the VNet-1 virtual network.
- Applications that are deployed on different subnets within the same virtual network can access Azure Database for PostgreSQL flexible server instances directly.
- Applications that are deployed on a different virtual network (VNet-2) don't have direct access to Azure Database for PostgreSQL flexible server instances. You have to perform [virtual network peering for a private DNS zone](#private-dns-zone-and-virtual-network-peering) before they can access the flexible server.
In the preceding diagram:
An Azure virtual network contains a private IP address space that's configured for your use. Your virtual network must be in the same Azure region as your Azure Database for PostgreSQL flexible server instance. To learn more about virtual networks, see the [Azure Virtual Network overview](../../virtual-network/virtual-networks-overview.md).
-Here are some concepts to be familiar with when you're using virtual networks where resources are [integrated into VNET](../../virtual-network/virtual-network-for-azure-services.md) with Azure Database for PostgreSQL flexible server instances:
+Here are some concepts to be familiar with when you're using virtual networks where resources are [integrated into virtual network](../../virtual-network/virtual-network-for-azure-services.md) with Azure Database for PostgreSQL flexible server instances:
-* **Delegated subnet**. A virtual network contains subnets (sub-networks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network.
+* **Delegated subnet**. A virtual network contains subnets (subnetworks). Subnets enable you to segment your virtual network into smaller address spaces. Azure resources are deployed into specific subnets within a virtual network.
Your VNET integrated Azure Database for PostgreSQL flexible server instance must be in a subnet that's *delegated*. That is, only Azure Database for PostgreSQL flexible server instances can use that subnet. No other Azure resource types can be in the delegated subnet. You delegate a subnet by assigning its delegation property as `Microsoft.DBforPostgreSQL/flexibleServers`.
- The smallest CIDR range you can specify for the subnet is /28, which provides sixteen IP addresses, however the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IPs to be utilized internally by Azure networking, which include two IPs that cannot be assigned to host, mentioned above. This leaves you eleven available IP addresses for /28 CIDR range, whereas a single Azure Database for PostgreSQL flexible server instance with High Availability features utilizes 4 addresses.
- For Replication and Microsoft Entra connections please make sure Route Tables do not affect traffic.A common pattern is route all outbound traffic via an Azure Firewall or a custom on-premises network filtering appliance.
+ The smallest CIDR range you can specify for the subnet is /28, which provides 16 IP addresses; however, the first and last address in any network or subnet can't be assigned to any individual host. Azure reserves five IP addresses for internal use by Azure networking, including the two addresses that can't be assigned to a host, mentioned above. This leaves 11 available IP addresses in a /28 CIDR range, whereas a single Azure Database for PostgreSQL flexible server instance with high-availability features uses four addresses.
+ For Replication and Microsoft Entra connections, make sure Route Tables don't affect traffic. A common pattern is to route all outbound traffic via an Azure Firewall or a custom on-premises network filtering appliance.
 If the subnet has a Route Table associated with a rule that routes all traffic to a virtual appliance, add the following rules (a CLI sketch follows this list):
   * Add a rule with Destination Service Tag "AzureActiveDirectory" and next hop "Internet"
   * Add a rule with Destination IP range same as the Azure Database for PostgreSQL flexible server subnet range and next hop "Virtual Network"
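As an illustrative sketch only (the resource group, virtual network, subnet, route table names, and the address range are placeholders, and your routing design may differ), the subnet delegation and the two route-table rules could be created with the Azure CLI like this:

```bash
# Delegate the subnet to Azure Database for PostgreSQL flexible server.
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name VNet-1 \
  --name flexible-server-subnet \
  --delegations Microsoft.DBforPostgreSQL/flexibleServers

# Send Microsoft Entra traffic directly to the internet rather than the appliance.
az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name my-route-table \
  --name allow-entra-id \
  --address-prefix AzureActiveDirectory \
  --next-hop-type Internet

# Keep traffic destined for the flexible server subnet inside the virtual network.
az network route-table route create \
  --resource-group myResourceGroup \
  --route-table-name my-route-table \
  --name flexible-server-subnet-local \
  --address-prefix 10.0.1.0/24 \
  --next-hop-type VnetLocal
```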
Here are some concepts to be familiar with when you're using virtual networks wh
[Azure Private DNS](../../dns/private-dns-overview.md) provides a reliable and secure DNS service for your virtual network. Azure Private DNS manages and resolves domain names in the virtual network without the need to configure a custom DNS solution.
-When using private network access with Azure virtual network, providing the private DNS zone information is **mandatory** in order to be able to do DNS resolution. For new Azure Database for PostgreSQL flexible server instance creation using private network access, private DNS zones will need to be used while configuring Azure Database for PostgreSQL flexible server instances with private access.
+When using private network access with Azure virtual network, providing the private DNS zone information is **mandatory** in order to be able to do DNS resolution. For new Azure Database for PostgreSQL flexible server instance creation using private network access, private DNS zones need to be used while configuring Azure Database for PostgreSQL flexible server instances with private access.
For new Azure Database for PostgreSQL flexible server instance creation using private network access with API, ARM, or Terraform, create private DNS zones and use them while configuring Azure Database for PostgreSQL flexible server instances with private access. See more information on [REST API specifications for Microsoft Azure](https://github.com/Azure/azure-rest-api-specs/blob/master/specification/postgresql/resource-manager/Microsoft.DBforPostgreSQL/stable/2021-06-01/postgresql.json). If you use the [Azure portal](./how-to-manage-virtual-network-portal.md) or [Azure CLI](./how-to-manage-virtual-network-cli.md) for creating Azure Database for PostgreSQL flexible server instances, you can either provide a private DNS zone name that you had previously created in the same or a different subscription or a default private DNS zone is automatically created in your subscription.
-If you use an Azure API, an Azure Resource Manager template (ARM template), or Terraform, **create private DNS zones that end with `.postgres.database.azure.com`**. Use those zones while configuring Azure Database for PostgreSQL flexible server instances with private access. For example, use the form `[name1].[name2].postgres.database.azure.com` or `[name].postgres.database.azure.com`. If you choose to use the form `[name].postgres.database.azure.com`, the name **can't** be the name you use for one of your Azure Database for PostgreSQL flexible server instances or an error message will be shown during provisioning. For more information, see the [private DNS zones overview](../../dns/private-dns-overview.md).
+If you use an Azure API, an Azure Resource Manager template (ARM template), or Terraform, **create private DNS zones that end with `.postgres.database.azure.com`**. Use those zones while configuring Azure Database for PostgreSQL flexible server instances with private access. For example, use the form `[name1].[name2].postgres.database.azure.com` or `[name].postgres.database.azure.com`. If you choose to use the form `[name].postgres.database.azure.com`, the name **can't** be the name you use for one of your Azure Databases for PostgreSQL flexible server instances or an error message will be shown during provisioning. For more information, see the [private DNS zones overview](../../dns/private-dns-overview.md).
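For example, with the Azure CLI (a sketch; the resource group and zone name are placeholders and must follow the naming rule above):

```bash
# Create a private DNS zone that ends with the required suffix.
az network private-dns zone create \
  --resource-group myResourceGroup \
  --name mydbzone.postgres.database.azure.com
```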
Using the Azure portal, API, CLI, or ARM, you can also change the private DNS zone from the one you provided when creating your Azure Database for PostgreSQL flexible server instance to another private DNS zone that exists in the same or a different subscription.
Private DNS zone settings and virtual network peering are independent of each ot
> [!NOTE]
> Only private DNS zone names that end with **'postgres.database.azure.com'** can be linked. Your DNS zone name can't be the same as your Azure Database for PostgreSQL flexible server instance(s); otherwise, name resolution will fail.
-To map a Server name to the DNS record you can run *nslookup* command in [Azure Cloud Shell](../../cloud-shell/overview.md) using Azure PowerShell or Bash, substituting name of your server for <server_name> parameter in example below:
+To map a server name to the DNS record, you can run the *nslookup* command in [Azure Cloud Shell](../../cloud-shell/overview.md) using Azure PowerShell or Bash, substituting the name of your server for the <server_name> parameter in the example below:
```bash nslookup -debug <server_name>.postgres.database.azure.com | grep 'canonical name'
There are three main patterns for connecting spoke virtual networks to each othe
Use [Azure Virtual Network Manager (AVNM)](../../virtual-network-manager/overview.md) to create new (and onboard existing) hub and spoke virtual network topologies for the central management of connectivity and security controls.
+### Communication with privately networked clients in different regions
+
+Customers frequently need to connect clients located in a different Azure region to their database. More specifically, this question typically boils down to how to connect two VNETs (one containing Azure Database for PostgreSQL - Flexible Server, and the other an application client) that are in different regions.
+There are multiple ways to achieve such connectivity, some of which are:
+* **[Global VNET peering](../../virtual-network/virtual-network-peering-overview.md)**. The most common methodology, as it's the easiest way to connect networks in different regions together. Global VNET peering creates a connection over the Azure backbone directly between the two peered VNETs. This provides the best network throughput and lowest latencies for connectivity using this method. When VNETs are peered, Azure also handles the routing automatically for you, and these VNETs can communicate with all resources in the peered VNET.
+* **[VNET-to-VNET connection](../../vpn-gateway/vpn-gateway-howto-vnet-vnet-resource-manager-portal.md)**. A VNET-to-VNET connection is essentially a VPN between the two different Azure locations. The VNET-to-VNET connection is established on a VPN gateway. This means your traffic incurs two additional traffic hops as compared to global VNET peering. There's also additional latency and lower bandwidth as compared to that method.
+* **[Communication via network appliance in Hub and Spoke architecture](#using-hub-and-spoke-private-networking-design)**.
+Instead of connecting spoke virtual networks directly to each other, you can use network appliances to forward traffic between spokes. Network appliances provide more network services like deep packet inspection and traffic segmentation or monitoring, but they can introduce latency and performance bottlenecks if they're not properly sized.
+
### Replication across Azure regions and virtual networks with private networking

Database replication is the process of copying data from a central or primary server to multiple servers known as replicas. The primary server accepts read and write operations whereas the replicas serve read-only transactions. The primary server and replicas collectively form a database cluster. The goal of database replication is to ensure redundancy, consistency, high availability, and accessibility of data, especially in high-traffic, mission-critical applications.
Azure Database for PostgreSQL flexible server offers two methods for replication
Replication across Azure regions, with separate [virtual networks (VNETs)](../../virtual-network/virtual-networks-overview.md) in each region, **requires connectivity across regional virtual network boundaries** that can be provided via **[virtual network peering](../../virtual-network/virtual-network-peering-overview.md)** or in **[Hub and Spoke architectures](#using-hub-and-spoke-private-networking-design) via network appliance**.
-By default **DNS name resolution** is **scoped to a virtual network**. This means that any client in one virtual network (VNET1) is unable to resolve the Azure Database for PostgreSQL flexible server FQDN in another virtual network (VNET2)
+By default **DNS name resolution** is **scoped to a virtual network**. This means that any client in one virtual network (VNET1) is unable to resolve the Azure Database for PostgreSQL flexible server FQDN in another virtual network (VNET2).
In order to resolve this issue, you must make sure clients in VNET1 can access the Azure Database for PostgreSQL flexible server Private DNS Zone. This can be achieved by adding a **[virtual network link](../../dns/private-dns-virtual-network-links.md)** to the Private DNS Zone of your Azure Database for PostgreSQL flexible server instance.
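A sketch of adding that link with the Azure CLI follows; the resource group, zone, link, and virtual network names are placeholders, and registration isn't needed for this scenario, so it's disabled.

```bash
# Link the client's virtual network (VNET1) to the flexible server's private DNS zone
# so clients in VNET1 can resolve the server's FQDN.
az network private-dns link vnet create \
  --resource-group myResourceGroup \
  --zone-name mydbzone.postgres.database.azure.com \
  --name vnet1-dns-link \
  --virtual-network VNET1 \
  --registration-enabled false
```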
Here are some limitations for working with virtual networks created via VNET int
## Host name
-Regardless of the networking option that you choose, we recommend that you always use an **FQDN** as host name when connecting to your Azure Database for PostgreSQL flexible server instance. The server's IP address is not guaranteed to remain static. Using the FQDN will help you avoid making changes to your connection string.
+Regardless of the networking option that you choose, we recommend that you always use an **FQDN** as host name when connecting to your Azure Database for PostgreSQL flexible server instance. The server's IP address isn't guaranteed to remain static. Using the FQDN helps you avoid making changes to your connection string.
An example that uses an FQDN as a host name is `hostname = servername.postgres.database.azure.com`. Where possible, avoid using `hostname = 10.0.0.4` (a private address) or `hostname = 40.2.45.67` (a public address).
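For instance, a connection using `psql` (a sketch with placeholder server, user, and database names) keeps working even if the underlying IP address changes:

```bash
# Connect using the FQDN rather than an IP address.
psql "host=servername.postgres.database.azure.com port=5432 dbname=postgres user=myadmin sslmode=require"
```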
postgresql Generative Ai Azure Cognitive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-cognitive.md
Azure AI extension gives the ability to invoke the [language services](../../ai-
## Prerequisites
+1. [Enable and configure](generative-ai-azure-overview.md#enable-the-azure_ai-extension) the `azure_ai` extension.
1. [Create a Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal to get your key and endpoint.
1. After it deploys, select **Go to resource**.
For more information, see Cognitive Services Compliance and Privacy notes at htt
#### Return type
-`azure_cognitive.sentiment_analysis_result` a result record containing the sentiment predictions of the input text. It contains the sentiment, which can be `positive`, `negative`, `neutral` and `mixed`; and the score for positive, neutral and negative found in the text represented as a real number between 0 and 1. For example in `(neutral,0.26,0.64,0.09)`, the sentiment is `neutral` with `positive` score at `0.26`, neutral at `0.64` and negative at `0.09`.
+`azure_cognitive.sentiment_analysis_result` a result record containing the sentiment predictions of the input text. It contains the sentiment, which can be `positive`, `negative`, `neutral`, and `mixed`; and the score for positive, neutral, and negative found in the text represented as a real number between 0 and 1. For example in `(neutral,0.26,0.64,0.09)`, the sentiment is `neutral` with `positive` score at `0.26`, neutral at `0.64` and negative at `0.09`.
### `azure_cognitive.detect_language`
For more information, see Cognitive Services Compliance and Privacy notes at htt
#### Return type
-`azure_cognitive.language_detection_result`, a result containing the detected language name, its two-letter ISO 639-1 representation and the confidence score for the detection. For example in `(Portuguese,pt,0.97)`, the language is `Portuguese`, and detection confidence is `0.97`.
+`azure_cognitive.language_detection_result`, a result containing the detected language name, its two-letter ISO 639-1 representation, and the confidence score for the detection. For example in `(Portuguese,pt,0.97)`, the language is `Portuguese`, and detection confidence is `0.97`.
### `azure_cognitive.extract_key_phrases`
For more information, see Cognitive Services Compliance and Privacy notes at htt
#### Return type
-`azure_cognitive.pii_entity_recognition_result`, a result containing the redacted text and entities as `azure_cognitive.entity[]`. Each entity contains the nonredacted text, personal data category, subcategory and a score indicating the confidence that the entity correctly matches the identified substring. For example, if invoked with a `text` set to `'My phone number is +1555555555, and the address of my office is 16255 NE 36th Way, Redmond, WA 98052.'`, and `language` set to `'en'`, it could return `("My phone number is ***********, and the address of my office is ************************************.","{""(+1555555555,PhoneNumber,\\""\\"",0.8)"",""(\\""16255 NE 36th Way, Redmond, WA 98052\\"",Address,\\""\\"",1)""}")`.
+`azure_cognitive.pii_entity_recognition_result`, a result containing the redacted text, and entities as `azure_cognitive.entity[]`. Each entity contains the nonredacted text, personal data category, subcategory, and a score indicating the confidence that the entity correctly matches the identified substring. For example, if invoked with a `text` set to `'My phone number is +1555555555, and the address of my office is 16255 NE 36th Way, Redmond, WA 98052.'`, and `language` set to `'en'`, it could return `("My phone number is ***********, and the address of my office is ************************************.","{""(+1555555555,PhoneNumber,\\""\\"",0.8)"",""(\\""16255 NE 36th Way, Redmond, WA 98052\\"",Address,\\""\\"",1)""}")`.
### `azure_cognitive.summarize_abstractive`
postgresql Generative Ai Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-openai.md
Invoke [Azure OpenAI embeddings](../../ai-services/openai/reference.md#embedding
## Prerequisites
+1. [Enable and configure](generative-ai-azure-overview.md#enable-the-azure_ai-extension) the `azure_ai` extension.
1. Create an Open AI account and [request access to Azure OpenAI Service](https://aka.ms/oai/access).
1. Grant access to Azure OpenAI in the desired subscription.
1. Grant permissions to [create Azure OpenAI resources and to deploy models](../../ai-services/openai/how-to/role-based-access-control.md).
1. [Create and deploy an Azure OpenAI service resource and a model](../../ai-services/openai/how-to/create-resource.md), for example deploy the embeddings model [text-embedding-ada-002](../../ai-services/openai/concepts/models.md#embeddings-models). Copy the deployment name, as it's needed to create embeddings.

## Configure OpenAI endpoint and key
-In the Azure OpenAI resource, under **Resource Management** > **Keys and Endpoints** you can find the endpoint and the keys for your Azure OpenAI resource. Use the endpoint and one of the keys to enable `azure_ai` extension to invoke the model deployment.
+In the Azure OpenAI resource, under **Resource Management** > **Keys and Endpoints** you can find the endpoint and the keys for your Azure OpenAI resource. To invoke the model deployment, enable the `azure_ai` extension using the endpoint and one of the keys.
```postgresql select azure_ai.set_setting('azure_openai.endpoint','https://<endpoint>.openai.azure.com');
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
To connect by using a Microsoft Entra token with PgAdmin, follow these steps:
Here are some essential considerations when you're connecting: -- `user@tenant.onmicrosoft.com` is the display name of the Microsoft Entra user.
+- `user@tenant.onmicrosoft.com` is the userPrincipalName of the Microsoft Entra user.
- Be sure to use the exact way the Azure user is spelled. Microsoft Entra user and group names are case-sensitive.
- If the name contains spaces, use a backslash (`\`) before each space to escape it. You can use the Azure CLI to get the signed-in user and set the value for the `PGUSER` environment variable:

  ```bash
- export PGUSER=$(az ad signed-in-user show --query "[displayName]" -o tsv | sed 's/ /\\ /g')
+ export PGUSER=$(az ad signed-in-user show --query "[userPrincipalName]" -o tsv | sed 's/ /\\ /g')
  ```

- The access token's validity is 5 minutes to 60 minutes. You should get the access token before initiating the sign-in to Azure Database for PostgreSQL; one way to do that is sketched below.
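For example, one way to fetch a fresh token with the Azure CLI immediately before connecting is sketched here; the `oss-rdbms` resource type is the token audience used for Azure Database for PostgreSQL, but confirm it for your environment.

```bash
# Request a Microsoft Entra access token for Azure Database for PostgreSQL
# and use it as the password for the connection.
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query accessToken --output tsv)
```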
postgresql Concepts Connectivity Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md
The following table lists the gateway IP address subnets of the Azure Database f
| Canada Central | 13.71.168.32| 13.71.168.32/29, 20.38.144.32/29, 52.246.152.32/29, 20.48.196.32/27| | Canada East |40.69.105.32 | 40.69.105.32/29, 52.139.106.192/27 | | Central US | 52.182.136.37, 52.182.136.38 | 104.208.21.192/29, 13.89.168.192/29, 52.182.136.192/29, 20.40.228.128/27|
-| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/2752.130.112.136/29, 52.130.13.96/27|
+| China East | 52.130.112.139 | 52.130.112.136/29, 52.130.13.96/27|
| China East 2 | 40.73.82.1, 52.130.120.89 | 52.130.120.88/29, 52.130.7.0/27| | China North | 52.130.128.89| 52.130.128.88/29, 40.72.77.128/27 | | China North 2 |40.73.50.0 | 52.130.40.64/29, 52.130.21.160/27|
private-link Create Private Endpoint Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-bicep.md
In this quickstart, you'll use Bicep to create a private endpoint.
You can also create a private endpoint by using the [Azure portal](create-private-endpoint-portal.md), [Azure PowerShell](create-private-endpoint-powershell.md), the [Azure CLI](create-private-endpoint-cli.md), or an [Azure Resource Manager Template](create-private-endpoint-template.md). + ## Prerequisites You need an Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
private-link Create Private Endpoint Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
[![The 'Deploy to Azure' button.](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.sql%2Fprivate-endpoint-sql%2Fazuredeploy.json) + ## Prerequisites You need an Azure account with an active subscription. If you don't already have an Azure account, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
private-link Create Private Endpoint Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-endpoint-terraform.md
The script generates a random password for the SQL server and a random SSH key f
[!INCLUDE [About Terraform](~/azure-dev-docs-pr/articles/terraform/includes/abstract.md)] + ## Prerequisites - You need an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
private-link Tutorial Private Endpoint Sql Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-cli.md
Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to communicate with Private Link resources privately. + In this tutorial, you learn how to: > [!div class="checklist"]
private-link Tutorial Private Endpoint Sql Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-portal.md
-
+ Title: 'Tutorial: Connect to an Azure SQL server using an Azure Private Endpoint - Azure portal' description: Get started with this tutorial to learn how to connect to a storage account privately via Azure Private Endpoint using the Azure portal.
Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to privately and securely communicate with Private Link resources such as Azure SQL server. + In this tutorial, you learn how to: > [!div class="checklist"]
private-link Tutorial Private Endpoint Sql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/tutorial-private-endpoint-sql-powershell.md
Azure Private endpoint is the fundamental building block for Private Link in Azure. It enables Azure resources, like virtual machines (VMs), to communicate with Private Link resources privately. + In this tutorial, you learn how to: > [!div class="checklist"]
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
Are you sure you want to continue connecting (yes/no/[fingerprint])?
To verify, compare the fingerprint from the client output with the one stored in the table below. If they match, type `yes` to continue; the client then automatically stores the new key in `known_hosts` for future connections.

### How long does the rotation take?
-Rotations are gradual and may take multiple days. Either the old or new host key may be presented by the Azure service during this time.
+Rotations are gradual and may take multiple weeks. Either the old or new host key may be presented by the Azure service during this time.
### Why do the host keys expire?

Periodically rotating secrets is a standard security practice and can help reduce attack vectors.
storage Storage Quickstart Blobs Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-java.md
description: In this quickstart, you learn how to use the Azure Blob Storage cli
Previously updated : 10/24/2022 Last updated : 03/04/2024 ms.devlang: java
+zone_pivot_groups: azure-blob-storage-quickstart-options
# Quickstart: Azure Blob Storage client library for Java
-Get started with the Azure Blob Storage client library for Java to manage blobs and containers. Follow these steps to install the package and try out example code for basic tasks.
+
+> [!NOTE]
+> The **Build from scratch** option walks you step by step through the process of creating a new project, installing packages, writing the code, and running a basic console app. This approach is recommended if you want to understand all the details involved in creating an app that connects to Azure Blob Storage. If you prefer to automate deployment tasks and start with a completed project, choose [Start with a template](storage-quickstart-blobs-java.md?pivots=blob-storage-quickstart-template).
+++
+> [!NOTE]
+> The **Start with a template** option uses the Azure Developer CLI to automate deployment tasks and starts you off with a completed project. This approach is recommended if you want to explore the code as quickly as possible without going through the setup tasks. If you prefer step by step instructions to build the app, choose [Build from scratch](storage-quickstart-blobs-java.md?pivots=blob-storage-quickstart-scratch).
++
+Get started with the Azure Blob Storage client library for Java to manage blobs and containers.
++
+In this article, you follow steps to install the package and try out example code for basic tasks.
+++
+In this article, you use the [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) to deploy Azure resources and run a completed console app with just a few commands.
+ > [!TIP] > If you're working with Azure Storage resources in a Spring application, we recommend that you consider [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Blob Storage, see [Upload a file to an Azure Storage Blob](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-azure-storage).
Get started with the Azure Blob Storage client library for Java to manage blobs
## Prerequisites -- Azure account with an active subscription - [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).+
+- Azure account with an active subscription - [create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio)
- Azure Storage account - [create a storage account](../common/storage-account-create.md).-- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above.-- [Apache Maven](https://maven.apache.org/download.cgi).
+- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above
+- [Apache Maven](https://maven.apache.org/download.cgi)
+++
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- [Java Development Kit (JDK)](/java/azure/jdk/) version 8 or above
+- [Apache Maven](https://maven.apache.org/download.cgi)
+- [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd)
## Setting up
+
This section walks you through preparing a project to work with the Azure Blob Storage client library for Java.

### Create the project
public class App
} ``` ++
+With [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed, you can create a storage account and run the sample code with just a few commands. You can run the project in your local development environment, or in a [DevContainer](https://code.visualstudio.com/docs/devcontainers/containers).
+
+### Initialize the Azure Developer CLI template and deploy resources
+
+From an empty directory, follow these steps to initialize the `azd` template, provision Azure resources, and get started with the code:
+
+- Clone the quickstart repository assets from GitHub and initialize the template locally:
+
+ ```console
+ azd init --template blob-storage-quickstart-java
+ ```
+
+ You'll be prompted for the following information:
+
+ - **Environment name**: This value is used as a prefix for all Azure resources created by Azure Developer CLI. The name must be unique across all Azure subscriptions and must be between 3 and 24 characters long. The name can contain numbers and lowercase letters only.
+
+- Log in to Azure:
+
+ ```console
+ azd auth login
+ ```
+- Provision and deploy the resources to Azure:
+
+ ```console
+ azd up
+ ```
+
+ You'll be prompted for the following information:
+
+ - **Subscription**: The Azure subscription that your resources are deployed to.
+ - **Location**: The Azure region where your resources are deployed.
+
+ The deployment might take a few minutes to complete. The output from the `azd up` command includes the name of the newly created storage account, which you'll need later to run the code.
+
+## Run the sample code
+
+At this point, the resources are deployed to Azure and the code is almost ready to run. Follow these steps to update the name of the storage account in the code, and run the sample console app:
+
+- **Update the storage account name**:
+ 1. In the local directory, navigate to the *blob-quickstart/src/main/java/com/blobs/quickstart* directory.
+ 1. Open the file named **App.java** in your editor. Find the `<storage-account-name>` placeholder and replace it with the actual name of the storage account created by the `azd up` command, as shown in the sketch after these steps.
+ 1. Save the changes.
+- **Run the project**:
+ 1. Navigate to the *blob-quickstart* directory containing the `pom.xml` file. Compile the project by using the following `mvn` command:
+ ```console
+ mvn compile
+ ```
+ 1. Package the compiled code in its distributable format:
+ ```console
+ mvn package
+ ```
+ 1. Run the following `mvn` command to execute the app:
+ ```console
+ mvn exec:java
+ ```
+- **Observe the output**: This app creates a test file in your local *data* folder and uploads it to a container in the storage account. The example then lists the blobs in the container and downloads the file with a new name so that you can compare the old and new files.
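The line you update in *App.java* resembles the following minimal sketch, which assumes the passwordless `DefaultAzureCredential` approach that the template uses; the exact code in the template might differ slightly:

```java
// Inside the Main method of App.java: replace <storage-account-name> with the
// storage account name shown in the `azd up` output.
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
        .endpoint("https://<storage-account-name>.blob.core.windows.net/")
        .credential(new DefaultAzureCredentialBuilder().build()) // from com.azure.identity
        .buildClient();
```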
+
+To learn more about how the sample code works, see [Code examples](#code-examples).
+
+When you're finished testing the code, see the [Clean up resources](#clean-up-resources) section to delete the resources created by the `azd up` command.
++ ## Object model Azure Blob Storage is optimized for storing massive amounts of unstructured data. Unstructured data doesn't adhere to a particular data model or definition, such as text or binary data. Blob storage offers three types of resources:
These example code snippets show you how to perform the following actions with t
- [List the blobs in a container](#list-the-blobs-in-a-container) - [Download blobs](#download-blobs) - [Delete a container](#delete-a-container)
-
++ > [!IMPORTANT] > Make sure you have the correct dependencies in pom.xml and the necessary directives for the code samples to work, as described in the [setting up](#setting-up) section. ++
+> [!NOTE]
+> The Azure Developer CLI template includes a file with sample code already in place. The following examples provide detail for each part of the sample code. The template implements the recommended passwordless authentication method, as described in the [Authenticate to Azure](#authenticate-to-azure-and-authorize-access-to-blob-data) section. The connection string method is shown as an alternative, but isn't used in the template and isn't recommended for production code.
++ ### Authenticate to Azure and authorize access to blob data [!INCLUDE [storage-quickstart-passwordless-auth-intro](../../../includes/storage-quickstart-passwordless-auth-intro.md)]
export AZURE_STORAGE_CONNECTION_STRING="<yourconnectionstring>"
The code below retrieves the connection string for the storage account from the environment variable created earlier, and uses the connection string to construct a service client object. + Add this code to the end of the `Main` method: + ```java // Retrieve the connection string for use with the application. String connectStr = System.getenv("AZURE_STORAGE_CONNECTION_STRING");
BlobServiceClient blobServiceClient = new BlobServiceClientBuilder()
### Create a container
-Decide on a name for the new container. The code below appends a UUID value to the container name to ensure that it's unique.
+Create a new container in your storage account by calling the [createBlobContainer](/java/api/com.azure.storage.blob.blobserviceclient#method-details) method on the `blobServiceClient` object. In this example, the code appends a GUID value to the container name to ensure that it's unique.
-> [!IMPORTANT]
-> Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-
-Next, create an instance of the [BlobContainerClient](/java/api/com.azure.storage.blob.blobcontainerclient) class, then call the [create](/java/api/com.azure.storage.blob.blobcontainerclient.create) method to actually create the container in your storage account.
Add this code to the end of the `Main` method: + :::code language="java" source="~/azure-storage-snippets/blobs/quickstarts/Java/blob-quickstart/src/main/java/com/blobs/quickstart/App.java" id="Snippet_CreateContainer"::: To learn more about creating a container, and to explore more code samples, see [Create a blob container with Java](storage-blob-container-create-java.md).
+> [!IMPORTANT]
+> Container names must be lowercase. For more information about naming containers and blobs, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
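As a rough sketch of what the container-creation snippet does (assuming the `blobServiceClient` object created earlier; the snippet in the repository might differ slightly):

```java
// Create a container with a unique name; the UUID suffix keeps the name lowercase and unique.
String containerName = "quickstartblobs" + java.util.UUID.randomUUID();
BlobContainerClient containerClient = blobServiceClient.createBlobContainer(containerName);
```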
+ ### Upload blobs to a container
-Add this code to the end of the `Main` method:
+Upload a blob to a container by calling the [uploadFromFile](/java/api/com.azure.storage.blob.blobclient.uploadfromfile) method. The example code creates a text file in the local *data* directory to upload to the container.
+
+Add this code to the end of the `Main` method:
-The code snippet completes the following steps:
-1. Creates a text file in the local *data* directory.
-1. Gets a reference to a [BlobClient](/java/api/com.azure.storage.blob.blobclient) object by calling the [getBlobClient](/java/api/com.azure.storage.blob.blobcontainerclient.getblobclient) method on the container from the [Create a container](#create-a-container) section.
-1. Uploads the local text file to the blob by calling the [uploadFromFile](/java/api/com.azure.storage.blob.blobclient.uploadfromfile) method. This method creates the blob if it doesn't already exist, but won't overwrite it if it does.
To learn more about uploading blobs, and to explore more code samples, see [Upload a blob with Java](storage-blob-upload-java.md).
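For orientation, a minimal sketch of this step might look like the following, assuming the `containerClient` object from the previous section, a local *data* directory that already exists, and a `Main` method that declares `throws IOException`; the repository snippet might differ:

```java
// Create a small local file in the ./data directory, then upload it as a blob with the same name.
// Assumes imports for java.nio.file.* and java.nio.charset.StandardCharsets.
String fileName = "quickstart" + java.util.UUID.randomUUID() + ".txt";
Path localFilePath = Paths.get("./data/" + fileName);
Files.write(localFilePath, "Hello, World!".getBytes(StandardCharsets.UTF_8));

BlobClient blobClient = containerClient.getBlobClient(fileName);
blobClient.uploadFromFile(localFilePath.toString());
```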
To learn more about uploading blobs, and to explore more code samples, see [Uplo
List the blobs in the container by calling the [listBlobs](/java/api/com.azure.storage.blob.blobcontainerclient.listblobs) method. In this case, only one blob has been added to the container, so the listing operation returns just that one blob. + Add this code to the end of the `Main` method: + :::code language="java" source="~/azure-storage-snippets/blobs/quickstarts/Java/blob-quickstart/src/main/java/com/blobs/quickstart/App.java" id="Snippet_ListBlobs"::: To learn more about listing blobs, and to explore more code samples, see [List blobs with Java](storage-blobs-list-java.md).
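As a sketch, the listing loop could look like this (assuming the `containerClient` object created earlier and an import for `com.azure.storage.blob.models.BlobItem`):

```java
// Enumerate the blobs in the container and print each blob name.
System.out.println("\nListing blobs...");
for (BlobItem blobItem : containerClient.listBlobs()) {
    System.out.println("\t" + blobItem.getName());
}
```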
To learn more about listing blobs, and to explore more code samples, see [List b
Download the previously created blob by calling the [downloadToFile](/java/api/com.azure.storage.blob.specialized.blobclientbase.downloadtofile) method. The example code adds a suffix of "DOWNLOAD" to the file name so that you can see both files in local file system. + Add this code to the end of the `Main` method: + :::code language="java" source="~/azure-storage-snippets/blobs/quickstarts/Java/blob-quickstart/src/main/java/com/blobs/quickstart/App.java" id="Snippet_DownloadBlob"::: To learn more about downloading blobs, and to explore more code samples, see [Download a blob with Java](storage-blob-download-java.md).
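A minimal sketch of the download step, assuming the `blobClient` object and the `fileName` value used during upload; the repository snippet might differ:

```java
// Download the blob to a local file, adding a DOWNLOAD suffix so both files are visible side by side.
String downloadFileName = fileName.replace(".txt", "DOWNLOAD.txt");
blobClient.downloadToFile("./data/" + downloadFileName);
```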
The following code cleans up the resources the app created by removing the entir
The app pauses for user input by calling `System.console().readLine()` before it deletes the blob, container, and local files. This is a good chance to verify that the resources were created correctly, before they're deleted. + Add this code to the end of the `Main` method: + :::code language="java" source="~/azure-storage-snippets/blobs/quickstarts/Java/blob-quickstart/src/main/java/com/blobs/quickstart/App.java" id="Snippet_DeleteContainer"::: To learn more about deleting a container, and to explore more code samples, see [Delete and restore a blob container with Java](storage-blob-container-delete-java.md). + ## Run the code This app creates a test file in your local folder and uploads it to Blob storage. The example then lists the blobs in the container and downloads the file with a new name so that you can compare the old and new files.
Done
Before you begin the cleanup process, check your *data* folder for the two files. You can compare them and observe that they're identical. + ## Clean up resources + After you've verified the files and finished testing, press the **Enter** key to delete the test files along with the container you created in the storage account. You can also use [Azure CLI](storage-quickstart-blobs-cli.md#clean-up-resources) to delete resources. ++
+When you're done with the quickstart, you can clean up the resources you created by running the following command:
+
+```console
+azd down
+```
+
+You'll be prompted to confirm the deletion of the resources. Enter `y` to confirm.
++ ## Next steps In this quickstart, you learned how to upload, download, and list blobs using Java.
storage Storage Files Identity Auth Active Directory Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md
Title: Overview - On-premises AD DS authentication to Azure file shares
-description: Learn about Active Directory Domain Services (AD DS) authentication to Azure file shares. This article goes over supported scenarios, availability, and explains how the permissions work between your AD DS and Microsoft Entra ID.
+description: Learn about Active Directory Domain Services (AD DS) authentication to Azure file shares. This article goes over supported scenarios, availability, and explains how the permissions work between your AD DS and Microsoft Entra ID.
Previously updated : 11/21/2023 Last updated : 03/04/2024 recommendations: false # Overview - on-premises Active Directory Domain Services authentication over SMB for Azure file shares+ [!INCLUDE [storage-files-aad-auth-include](../../../includes/storage-files-aad-auth-include.md)] We strongly recommend that you review the [How it works section](./storage-files-active-directory-overview.md#how-it-works) to select the right AD source for authentication. The setup is different depending on the domain service you choose. This article focuses on enabling and configuring on-premises AD DS for authentication with Azure file shares.
We strongly recommend that you review the [How it works section](./storage-files
If you're new to Azure Files, we recommend reading our [planning guide](storage-files-planning.md). ## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
To help you set up identity-based authentication for some common use cases, we p
|-|-| | [![Screencast of the replacing on-premises file servers video - click to play.](./media/storage-files-identity-auth-active-directory-enable/replace-on-prem-server-thumbnail.png)](https://www.youtube.com/watch?v=jd49W33DxkQ) | [![Screencast of the Using Azure Files as the profile container video - click to play.](./media/storage-files-identity-auth-active-directory-enable/files-ad-ds-fslogix-thumbnail.png)](https://www.youtube.com/watch?v=9S5A1IJqfOQ) | - ## Prerequisites
-Before you enable AD DS authentication for Azure file shares, make sure you've completed the following prerequisites:
+Before you enable AD DS authentication for Azure file shares, make sure you've completed the following prerequisites:
- Select or create your [AD DS environment](/windows-server/identity/ad-ds/get-started/virtual-dc/active-directory-domain-services-overview) and [sync it to Microsoft Entra ID](../../active-directory/hybrid/how-to-connect-install-roadmap.md) using either the on-premises [Microsoft Entra Connect Sync](../../active-directory/hybrid/whatis-azure-ad-connect.md) application or [Microsoft Entra Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md), a lightweight agent that can be installed from the Microsoft Entra Admin Center.
Before you enable AD DS authentication for Azure file shares, make sure you've c
If a machine isn't domain joined, you can still use AD DS for authentication if the machine has unimpeded network connectivity to the on-premises AD domain controller and the user provides explicit credentials. For more information, see [Mount the file share from a non-domain-joined VM or a VM joined to a different AD domain](storage-files-identity-ad-ds-mount-file-share.md#mount-the-file-share-from-a-non-domain-joined-vm-or-a-vm-joined-to-a-different-ad-domain). -- Select or create an Azure storage account. For optimal performance, we recommend that you deploy the storage account in the same region as the client from which you plan to access the share. Then, [mount the Azure file share](storage-how-to-use-files-windows.md) with your storage account key. Mounting with the storage account key verifies connectivity.
+- Select or create an Azure storage account. For optimal performance, we recommend that you deploy the storage account in the same region as the client from which you plan to access the share. Then, [mount the Azure file share](storage-how-to-use-files-windows.md) with your storage account key. Mounting with the storage account key verifies connectivity.
Make sure that the storage account containing your file shares isn't already configured for identity-based authentication. If an AD source is already enabled on the storage account, you must disable it before enabling on-premises AD DS.
If you plan to enable any networking configurations on your file share, we recom
Enabling AD DS authentication for your Azure file shares allows you to authenticate to your Azure file shares with your on-premises AD DS credentials. Further, it allows you to better manage your permissions to allow granular access control. Doing this requires synching identities from on-premises AD DS to Microsoft Entra ID using either the on-premises [Microsoft Entra Connect Sync](../../active-directory/hybrid/whatis-azure-ad-connect.md) application or [Microsoft Entra Connect cloud sync](../../active-directory/cloud-sync/what-is-cloud-sync.md), a lightweight agent that can be installed from the Microsoft Entra Admin Center. You assign share-level permissions to hybrid identities synced to Microsoft Entra ID while managing file/directory-level access using Windows ACLs.
-Follow these steps to set up Azure Files for AD DS authentication:
+Follow these steps to set up Azure Files for AD DS authentication:
1. [Enable AD DS authentication on your storage account](storage-files-identity-ad-ds-enable.md) 1. [Assign share-level permissions to the Microsoft Entra identity (a user, group, or service principal) that is in sync with the target AD identity](storage-files-identity-ad-ds-assign-permissions.md) 1. [Configure Windows ACLs over SMB for directories and files](storage-files-identity-ad-ds-configure-permissions.md)
-
+ 1. [Mount an Azure file share to a VM joined to your AD DS](storage-files-identity-ad-ds-mount-file-share.md) 1. [Update the password of your storage account identity in AD DS](storage-files-identity-ad-ds-update-password.md)
-The following diagram illustrates the end-to-end workflow for enabling AD DS authentication over SMB for Azure file shares.
+The following diagram illustrates the end-to-end workflow for enabling AD DS authentication over SMB for Azure file shares.
-![Files AD workflow diagram](media/storage-files-active-directory-domain-services-enable/diagram-files-ad.png)
Identities used to access Azure file shares must be synced to Microsoft Entra ID to enforce share-level file permissions through the [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) model. Alternatively, you can use a default share-level permission. [Windows-style DACLs](/previous-versions/technet-magazine/cc161041(v=msdn.10)) on files/directories carried over from existing file servers will be preserved and enforced. This offers seamless integration with your enterprise AD DS environment. As you replace on-premises file servers with Azure file shares, existing users can access Azure file shares from their current clients with a single sign-on experience, without any change to the credentials in use.
synapse-analytics Tutorial Horovod Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-horovod-pytorch.md
Within Azure Synapse Analytics, users can quickly get started with Horovod using
> [!WARNING] > - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](../spark/apache-spark-32-runtime.md) runtimes. > - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
-> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
+> - End of life for the Azure Synapse Runtime for Apache Spark 3.2 was announced (EOLA) on July 8, 2023. EOLA runtimes don't receive bug or feature fixes, but security fixes might be backported based on risk assessment. This runtime will be retired and disabled as of July 8, 2024.
## Configure the Apache Spark session
synapse-analytics Tutorial Horovod Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-horovod-tensorflow.md
Within Azure Synapse Analytics, users can quickly get started with Horovod using
- Create a GPU-enabled Apache Spark pool in your Azure Synapse Analytics workspace. For details, see [Create a GPU-enabled Apache Spark pool in Azure Synapse](../spark/apache-spark-gpu-concept.md). For this tutorial, we suggest using the GPU-Large cluster size with 3 nodes. > [!WARNING]
-> - The GPU accelerated preview is only available on the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2](../spark/apache-spark-32-runtime.md) runtimes.
-> - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date, strongly advising users to transition to a higher runtime version for continued functionality and security.
+> - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](../spark/apache-spark-32-runtime.md) runtimes.
+> - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
+> - End of life for the Azure Synapse Runtime for Apache Spark 3.2 was announced (EOLA) on July 8, 2023. EOLA runtimes don't receive bug or feature fixes, but security fixes might be backported based on risk assessment. This runtime will be retired and disabled as of July 8, 2024.
## Configure the Apache Spark session
synapse-analytics Tutorial Load Data Petastorm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/tutorial-load-data-petastorm.md
For more information about Petastorm, you can visit the [Petastorm GitHub page](
> [!WARNING] > - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](../spark/apache-spark-32-runtime.md) runtimes. > - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
-> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
+> - End of life for the Azure Synapse Runtime for Apache Spark 3.2 was announced (EOLA) on July 8, 2023. EOLA runtimes don't receive bug or feature fixes, but security fixes might be backported based on risk assessment. This runtime will be retired and disabled as of July 8, 2024.
## Configure the Apache Spark session
synapse-analytics Quickstart Create Apache Gpu Pool Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-apache-gpu-pool-portal.md
In this quickstart, you learn how to use the Azure portal to create an Apache Sp
> [!WARNING] > - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](./spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](./spark/apache-spark-32-runtime.md) runtimes. > - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
-> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
+> - End of life for the Azure Synapse Runtime for Apache Spark 3.2 was announced (EOLA) on July 8, 2023. EOLA runtimes don't receive bug or feature fixes, but security fixes might be backported based on risk assessment. This runtime will be retired and disabled as of July 8, 2024.
> [!NOTE] > Azure Synapse GPU-enabled pools are currently in Public Preview.
synapse-analytics Apache Spark Gpu Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-gpu-concept.md
By using NVIDIA GPUs, data scientists and engineers can reduce the time necessar
> [!WARNING] > - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](../spark/apache-spark-32-runtime.md) runtimes. > - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
-> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
+> - End of life for the Azure Synapse Runtime for Apache Spark 3.2 was announced (EOLA) on July 8, 2023. EOLA runtimes don't receive bug or feature fixes, but security fixes might be backported based on risk assessment. This runtime will be retired and disabled as of July 8, 2024.
> [!NOTE] > Azure Synapse GPU-enabled pools are currently in Public Preview.
synapse-analytics Apache Spark Rapids Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-rapids-gpu.md
spark.conf.set('spark.rapids.sql.enabled','true/false')
> [!WARNING] > - The GPU accelerated preview is limited to the [Azure Synapse 3.1 (unsupported)](../spark/apache-spark-3-runtime.md) and [Apache Spark 3.2 (EOLA)](../spark/apache-spark-32-runtime.md) runtimes. > - Azure Synapse Runtime for Apache Spark 3.1 has reached its end of life (EOL) as of January 26, 2023, with official support discontinued effective January 26, 2024, and no further addressing of support tickets, bug fixes, or security updates beyond this date.
-> - Azure Synapse Runtime for Apache Spark 3.2 has reached its end of life (EOL) as of July 8, 2023, with no further bug or feature fixes, but security fixes may be backported based on risk assessment, and it will be retired and disabled as of July 8, 2024.
+> - End of life for the Azure Synapse Runtime for Apache Spark 3.2 was announced (EOLA) on July 8, 2023. EOLA runtimes don't receive bug or feature fixes, but security fixes might be backported based on risk assessment. This runtime will be retired and disabled as of July 8, 2024.
## RAPIDS Accelerator for Apache Spark
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
Some general system constraints might affect your workload:
### Can't create a database in serverless SQL pool
-Serverless SQL pools have limitations, and you can't create more than 20 databases per workspace. If you need to separate objects and isolate them, use schemas.
+Serverless SQL pools have limitations, and you can't create more than 100 databases per workspace. If you need to separate objects and isolate them, use schemas.
If you get the error `CREATE DATABASE failed. User database limit has been already reached` you've created the maximum number of databases that are supported in one workspace.
virtual-desktop Administrative Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/administrative-template.md
Last updated 08/25/2023
-# Administrative template for Azure Virtual Desktop
+# Use the administrative template for Azure Virtual Desktop
We've created an administrative template for Azure Virtual Desktop to configure some features of Azure Virtual Desktop. You can use the template with:
virtual-desktop App Attach Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-overview.md
Last updated 12/08/2023
> App attach is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-There are two features in Azure Virtual Desktop that enable you to dynamically attach applications from an application package to a user session in Azure Virtual Desktop - *MSIX app attach* and *app attach (preview)*. *MSIX app attach* is generally available, but *app attach* is now available in preview, which improves the administrative experience and user experience. With both *MSIX app attach* and *app attach*, applications aren't installed locally on session hosts or images, making it easier to create custom images for your session hosts, and reducing operational overhead and costs for your organization. Applications run within containers, which separate user data, the operating system, and other applications, increasing security and making them easier to troubleshoot.
+There are two features in Azure Virtual Desktop that enable you to dynamically attach applications from an application package to a user session in Azure Virtual Desktop - *MSIX app attach* and *app attach (preview)*. *MSIX app attach* is generally available, but *app attach* is available in preview, which improves the administrative and user experiences. With both *MSIX app attach* and *app attach*, applications aren't installed locally on session hosts or images, making it easier to create custom images for your session hosts, and reducing operational overhead and costs for your organization. Applications run within containers, which separate user data, the operating system, and other applications, increasing security and making them easier to troubleshoot.
The following table compares MSIX app attach with app attach:
virtual-desktop App Attach Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-setup.md
Here's how to update an existing package using the [Az.DesktopVirtualization](/p
1. In the same PowerShell session, get the properties of the updated application and store them in a variable by running the following command: ```azurepowershell- # Get the properties of the application $parameters = @{ HostPoolName = '<HostPoolName>'
virtual-desktop Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-stack-hci-overview.md
To run Azure Virtual Desktop with Azure Stack HCI, you need to make sure you're
- **User access rights.** The same licenses that grant access to Azure Virtual Desktop on Azure also apply to Azure Virtual Desktop with Azure Stack HCI. Learn more at [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/). -- **Infrastructure costs.** Learn more at [Azure Stack HCI pricing](https://azure.microsoft.com/pricing/details/azure-stack/hci/).
+- **Azure Stack HCI service fee.** Learn more at [Azure Stack HCI pricing](https://azure.microsoft.com/pricing/details/azure-stack/hci/).
-- **Hybrid service fee.** This fee requires you to pay for each active virtual CPU (vCPU) for your Azure Virtual Desktop session hosts running on Azure Stack HCI. This fee becomes active once the preview period ends.
+- **Azure Virtual Desktop on Azure Stack HCI service fee.** This fee requires you to pay for each active virtual CPU (vCPU) for your Azure Virtual Desktop session hosts running on Azure Stack HCI. Learn more at [Azure Virtual Desktop pricing](https://azure.microsoft.com/pricing/details/virtual-desktop/).
## Data storage
virtual-desktop Clipboard Transfer Direction Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/clipboard-transfer-direction-data-types.md
+
+ Title: Configure the clipboard transfer direction in Azure Virtual Desktop
+description: Learn how to configure the clipboard in Azure Virtual Desktop to function only in a single direction (unidirectional), from session host to client, or client to session host.
+++ Last updated : 02/29/2024++
+# Configure the clipboard transfer direction and types of data that can be copied in Azure Virtual Desktop
+
+> [!IMPORTANT]
+> Configuring the clipboard transfer direction in Azure Virtual Desktop is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+Clipboard redirection in Azure Virtual Desktop allows users to copy and paste content, such as text, images, and files between the user's device and the remote session in either direction. You might want to limit the direction of the clipboard for users, to help prevent data exfiltration or malicious files being copied to a session host. You can configure whether users can use the clipboard from session host to client, or client to session host, and the types of data that can be copied, from the following options:
+
+- Disable clipboard transfers from session host to client, client to session host, or both.
+- Allow plain text only.
+- Allow plain text and images only.
+- Allow plain text, images, and Rich Text Format only.
+- Allow plain text, images, Rich Text Format, and HTML only.
+
+These settings apply to your session hosts and don't depend on a specific Remote Desktop client or its version. This article shows you how to configure the direction of the clipboard and the types of data that can be copied using Microsoft Intune, or by configuring the local Group Policy or registry of session hosts.
+
+## Prerequisites
+
+To configure the clipboard transfer direction, you need:
+
+- Session hosts running Windows 11 Insider Preview Build 25898 or later.
+
+- Depending on the method you use to configure the clipboard transfer direction:
+
+ - For Intune, you need permission to configure and apply settings. For more information, see [Administrative template for Azure Virtual Desktop](administrative-template.md).
+
+ - For configuring the local Group Policy or registry of session hosts, you need an account that is a member of the local Administrators group.
+
+## Configure clipboard transfer direction
+
+Here's how to configure the clipboard transfer direction and the types of data that can be copied. Select the relevant tab for your scenario.
+
+# [Intune](#tab/intune)
+
+To configure the clipboard using Intune, follow these steps. This process [deploys an OMA-URI to target a CSP](/troubleshoot/mem/intune/device-configuration/deploy-oma-uris-to-target-csp-via-intune).
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create a profile with custom settings](/mem/intune/configuration/custom-settings-configure) for Windows 10 and later devices, with the **Templates** profile type and the **Custom** profile template name.
+
+1. For the **Basics** tab, enter a name and optional description for the profile, and then select **Next**.
+
+1. For the **Configuration settings** tab, select **Add** to show the **Add row** pane.
+
+1. In the **Add row** pane, enter one of the following sets of settings, depending on whether you want to configure the clipboard from session host to client, or client to session host.
+
+ - To configure the clipboard from **session host to client**:
+ - **Name**: (*example*) Session host to client
+ - **Description**: *Optional*
+ - **OMA-URI**: `./Vendor/MSFT/Policy/Config/RemoteDesktopServices/LimitServerToClientClipboardRedirection`
+ - **Data type**: `String`
+ - **Value**: Enter a value from the following table:
+
+ | Value | Description |
+ |--|--|
+ | `<![CDATA[<enabled/><data id="TS_SC_CLIPBOARD_RESTRICTION_Text" value="0"/>]]>` | Disable clipboard transfers from session host to client. |
+ | `<![CDATA[<enabled/><data id="TS_SC_CLIPBOARD_RESTRICTION_Text" value="1"/>]]>` | Allow plain text. |
+ | `<![CDATA[<enabled/><data id="TS_SC_CLIPBOARD_RESTRICTION_Text" value="2"/>]]>` | Allow plain text and images. |
+ | `<![CDATA[<enabled/><data id="TS_SC_CLIPBOARD_RESTRICTION_Text" value="3"/>]]>` | Allow plain text, images, and Rich Text Format. |
+ | `<![CDATA[<enabled/><data id="TS_SC_CLIPBOARD_RESTRICTION_Text" value="4"/>]]>` | Allow plain text, images, Rich Text Format, and HTML. |
+
+ - To configure the clipboard from **client to session host**:
+ - **Name**: (*example*) Client to session host
+ - **Description**: *Optional*
+ - **OMA-URI**: `./Vendor/MSFT/Policy/Config/RemoteDesktopServices/LimitClientToServerClipboardRedirection`
+ - **Data type**: `String`
+ - **Value**: Enter a value from the following table:
+
+ | Value | Description |
+ |--|--|
+ | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION" value="0"/>]]>` | Disable clipboard transfers from client to session host. |
+ | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION" value="1"/>]]>` | Allow plain text. |
+ | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION" value="2"/>]]>` | Allow plain text and images. |
+ | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION" value="3"/>]]>` | Allow plain text, images, and Rich Text Format. |
+ | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION" value="4"/>]]>` | Allow plain text, images, Rich Text Format, and HTML. |
+
+1. Select **Save** to add the row. Repeat the previous two steps to configure the clipboard in the other direction, if necessary, then once you configure the settings you want, select **Next**.
+
+1. For the **Assignments** tab, select the users, devices, or groups to receive the profile, then select **Next**. For more information on assigning profiles, see [Assign user and device profiles](/mem/intune/configuration/device-profile-assign).
+
+1. For the **Applicability Rules** tab, select **Next**.
+
+1. On the **Review + create** tab, review the configuration information, then select **Create**.
+
+1. Once the policy configuration is created, resync your session hosts and reboot them for the settings to take effect.
+
+1. Connect to a remote session with a supported client and test that the clipboard settings you configured are working by trying to copy and paste content.
+
+# [Group Policy](#tab/group-policy)
+
+To configure the clipboard using Group Policy, follow these steps.
+
+> [!IMPORTANT]
+> These policy settings appear in both **Computer Configuration** and **User Configuration**. If both policy settings are configured, the stricter restriction is used.
+
+1. Open **Local Group Policy Editor** from the Start menu or by running `gpedit.msc`.
+
+1. Browse to one of the following policy sections. The policy section in **Computer Configuration** applies to the session hosts you target, and the policy section in **User Configuration** applies to the specific users you target.
+
+ - Machine: `Computer Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Device and Resource Redirection`
+ - User: `User Configuration\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Device and Resource Redirection`
+
+1. Open one of the following policy settings, depending on whether you want to configure the clipboard from session host (server) to client, or client to session host:
+
+ - To configure the clipboard from **session host to client**, open the policy setting **Restrict clipboard transfer from server to client**, then select **Enabled**. Choose from the following options:
+ - **Disable clipboard transfers from server to client**.
+ - **Allow plain text.**
+ - **Allow plain text and images.**
+ - **Allow plain text, images, and Rich Text Format.**
+ - **Allow plain text, images, Rich Text Format, and HTML.**
+
+ - To configure the clipboard from **client to session host**, open the policy setting **Restrict clipboard transfer from client to server**, then select **Enabled**. Choose from the following options:
+ - **Disable clipboard transfers from client to server**.
+ - **Allow plain text.**
+ - **Allow plain text and images.**
+ - **Allow plain text, images, and Rich Text Format.**
+ - **Allow plain text, images, Rich Text Format, and HTML.**
+
+1. Select **OK** to save your changes.
+
+1. Once you've configured the settings, restart your session hosts for them to take effect.
+
+1. Connect to a remote session with a supported client and test that the clipboard settings you configured are working by trying to copy and paste content.
+
+> [!TIP]
+> During the preview, you can also configure Group Policy centrally in an Active Directory domain by copying the `terminalserver.admx` and `terminalserver.adml` administrative template files from a session host to the [Group Policy Central Store](/troubleshoot/windows-client/group-policy/create-and-manage-central-store) in a test environment.
+
+# [Registry](#tab/registry)
+
+To configure the clipboard using the registry on a session host, follow these steps.
+
+1. Open **Registry Editor** from the Start menu or by running `regedit.exe`.
+
+1. Set one of the following registry keys and its value, depending on whether you want to configure the clipboard from session host to client, or client to session host. A PowerShell sketch of setting one of these values follows these steps.
+
+ - To configure the clipboard from **session host to client**, set one of the following registry keys and its value. Setting the value under the machine key applies to all users, while setting it under the user key applies to the current user only.
+ - **Key**:
+ - Machine: `HKLM\Software\Policies\Microsoft\Windows NT\Terminal Services`
+ - Users: `HKCU\Software\Policies\Microsoft\Windows NT\Terminal Services`
+ - **Type**: `REG_DWORD`
+ - **Value name**: `SCClipLevel`
+ - **Value data**: Enter a value from the following table:
+
+ | Value Data | Description |
+ |--|--|
+ | `0` | Disable clipboard transfers from session host to client. |
+ | `1` | Allow plain text. |
+ | `2` | Allow plain text and images. |
+ | `3` | Allow plain text, images, and Rich Text Format. |
+ | `4` | Allow plain text, images, Rich Text Format, and HTML. |
+
+ - To configure the clipboard from **client to session host**, set one of the following registry keys and its value. Setting the value under the machine key applies to all users, while setting it under the user key applies to the current user only.
+ - **Key**:
+ - Machine: `HKLM\Software\Policies\Microsoft\Windows NT\Terminal Services`
+ - Users: `HKCU\Software\Policies\Microsoft\Windows NT\Terminal Services`
+ - **Type**: `REG_DWORD`
+ - **Value name**: `CSClipLevel`
+ - **Value data**: Enter a value from the following table:
+
+ | Value Data | Description |
+ |--|--|
+ | `0` | Disable clipboard transfers from client to session host. |
+ | `1` | Allow plain text. |
+ | `2` | Allow plain text and images. |
+ | `3` | Allow plain text, images, and Rich Text Format. |
+ | `4` | Allow plain text, images, Rich Text Format, and HTML. |
+
+1. Restart your session host.
+
+1. Connect to a remote session with a supported client and test that the clipboard settings you configured are working by trying to copy and paste content.
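For example, here's a minimal PowerShell sketch that sets the machine-wide value for the session host to client direction to allow plain text and images only (value `2`). It uses the key path and value name from the table earlier in this section and assumes an elevated prompt:

```powershell
$key = "HKLM:\Software\Policies\Microsoft\Windows NT\Terminal Services"

# Create the policy key if it doesn't exist, then set SCClipLevel to 2 (plain text and images).
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name "SCClipLevel" -PropertyType DWORD -Value 2 -Force | Out-Null
```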
+++
+## Related content
+
+- Configure [Watermarking](watermarking.md).
+- Configure [Screen Capture Protection](screen-capture-protection.md).
+- Learn about how to secure your Azure Virtual Desktop deployment at [Security best practices](security-guide.md).
virtual-desktop Fslogix Profile Container Configure Azure Files Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/fslogix-profile-container-configure-azure-files-active-directory.md
To get the Storage account access key:
1. From the Azure portal, search for and select **storage account** in the search bar.
-1. From the list of storage accounts, select the account that you enabled Microsoft Entra Domain Services and assigned the RBAC role for in the previous sections.
+1. From the list of storage accounts, select the account that you enabled Active Directory Domain Services or Microsoft Entra Domain Services as the identity source and assigned the RBAC role for in the previous sections.
1. Under **Security + networking**, select **Access keys**, then show and copy the key from **key1**.
To set the correct NTFS permissions on the folder:
``` - Replace `<desired-drive-letter>` with a drive letter of your choice (for example, `y:`).
- - Replace all instances of `<storage-account-name>` with the name of the storage account you specified earlier.
+ - Replace both instances of `<storage-account-name>` with the name of the storage account you specified earlier.
- Replace `<share-name>` with the name of the share you created earlier. - Replace `<storage-account-key>` with the storage account key from Azure.
virtual-desktop Msix App Attach Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/msix-app-attach-migration.md
+
+ Title: Migrate MSIX packages from MSIX app attach to app attach - Azure Virtual Desktop
+description: Learn how to migrate MSIX packages from MSIX app attach to app attach in Azure Virtual Desktop using a PowerShell script.
+++ Last updated : 02/28/2024++
+# Migrate MSIX packages from MSIX app attach to app attach
+
+> [!IMPORTANT]
+> App attach is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+App attach (preview) improves the administrative and user experiences over MSIX app attach. If you use MSIX app attach, you can migrate your MSIX packages to app attach using a PowerShell script.
+
+The migration script can perform the following actions:
+
+- Create a new app attach package object and, if necessary, delete the original MSIX package object.
+
+- Copy permissions from application groups associated with the host pool and MSIX package.
+
+- Copy the location and resource group of the host pool and MSIX package.
+
+- Log migration activity.
+
+## Prerequisites
+
+To use the migration script, you need:
+
+- A host pool configured as a validation environment, with at least one MSIX package added with MSIX app attach.
+
+- An Azure account with the [Desktop Virtualization Contributor](rbac.md#desktop-virtualization-contributor) Azure role-based access control (RBAC) role assigned on the host pool.
+
+- A local device with PowerShell. Make sure you have the latest versions of [Az PowerShell](/powershell/azure/install-azps-windows) and [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation) installed. Specifically, the following modules are required:
+
+ - Az.DesktopVirtualization
+ - Az.Accounts
+ - Az.Resources
+ - Microsoft.Graph.Authentication
+
+## Parameters
+
+Here are the parameters you can use with the migration script:
+
+| Parameter | Description |
+|--|--|
+| `MsixPackage` | The MSIX package object to migrate to an app attach object. This value can be passed in via pipeline. |
+| `PermissionSource` | Where to get permissions from for the new app attach object. Defaults to no permissions granted. The options are:<ul><li>`DAG`: the desktop application group associated with the host pool and MSIX package</li><li>`RAG`: one or more RemoteApp application groups associated with the host pool and MSIX package</li></ul>Both options grant permission to all users and groups with any permission that is scoped specifically to the application group. |
+| `HostPoolsForNewPackage` | Resource IDs of host pools to associate new app attach object with. Defaults to no host pools. Host pools must be in the same location as the app attach packages they're associated with. |
+| `TargetResourceGroupName` | Resource group to store the new app attach object. Defaults to resource group of host pool that the MSIX package is associated with. |
+| `Location` | Azure region to create new app attach object in. Defaults to location of host pool that the MSIX package is associated with. App attach packages have to be in the same location as the host pool they're associated with. |
+| `DeleteOrigin` | Delete source MSIX package after migration. |
+| `IsActive` | Enables the new app attach object. |
+| `DeactivateOrigin` | Disables source MSIX package object after migration. |
+| `PassThru` | Passes the new app attach object through. `PassThru` returns the object for the created package. Use this value if you want to inspect it or pass it to another PowerShell command. |
+| `LogInJSON` | Write to the log file in JSON Format. |
+| `LogFilePath` | Path of the log file, defaults to `MsixMigration[Timestamp].log` in a temp folder, such as `C:\Users\%USERNAME%\AppData\Local\Temp\MsixMigration<DATETIME>.log`. The path for logging is written to the console when the script is run. |
+
+## Download and run the migration script
+
+Here's how to migrate MSIX packages from MSIX app attach to app attach.
+
+> [!IMPORTANT]
+> In the following examples, replace the `<placeholder>` values with your own.
+
+1. Open a PowerShell prompt on your local device.
+
+1. Download the PowerShell script `Migrate-MsixPackagesToAppAttach.ps1` and unblock it by running the following commands:
+
+ ```powershell
+ $url = "https://raw.githubusercontent.com/Azure/RDS-Templates/master/msix-app-attach/MigrationScript/Migrate-MsixPackagesToAppAttach.ps1"
+ $filename = $url.Split('/')[-1]
+
+ # Download the script, then unblock the saved file so it can run.
+ Invoke-WebRequest -Uri $url -OutFile $filename
+ Unblock-File -Path $filename
+ ```
+
+1. Import the required modules by running the following commands:
+
+ ```powershell
+ Import-Module Az.DesktopVirtualization
+ Import-Module Az.Accounts
+ Import-Module Az.Resources
+ Import-Module Microsoft.Graph.Authentication
+ ```
+
+1. Connect to Azure by running the following command and following the prompts to sign in to your Azure account:
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+1. Connect to Microsoft Graph by running the following command:
+
+ ```powershell
+ Connect-MgGraph -Scopes "Group.Read.All"
+ ```
+
+The following subsections contain some examples of how to use the migration script. Refer to the [parameters](#parameters) section for all the available parameters and a description of each parameter.
+
+> [!TIP]
+> If you don't pass any parameters to the migration script, it has the following default behavior:
+> - No permissions are granted to the new app attach package.
+> - The new app attach package isn't associated with any host pools and is inactive.
+> - The new app attach package is created in the same resource group and location as the host pool.
+> - The original MSIX package stays active; it isn't disabled or deleted.
+> - Log information is written to the default file path.
+
+### Migrate a specific MSIX package added to a host pool and application group
+
+Here's an example to migrate a specific MSIX package added to a host pool from MSIX app attach to app attach. This example:
+
+ - Migrates the MSIX package to the same resource group and location as the host pool.
+ - Assigns the MSIX package in app attach to the same host pool and the same users as the RemoteApp application group source.
+ - Leaves the existing MSIX package configuration in MSIX app attach **active** on the host pool. If you want to disable the MSIX package immediately, use the `-DeactivateOrigin` parameter.
+ - Sets the new MSIX package configuration in app attach **inactive**. If you want to enable the MSIX package immediately, use the `-IsActive` parameter.
+ - Writes log information to the default file path and format.
+
+1. From the same PowerShell prompt, get a list of MSIX packages added to a host pool by running the following commands:
+
+ ```powershell
+ $parameters = @{
+ HostPoolName = '<HostPoolName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ }
+
+ Get-AzWvdMsixPackage @parameters | Select-Object DisplayName, Name
+ ```
+
+ The output is similar to the following output:
+
+ ```output
+ DisplayName Name
+ ----------- ----
+ MyApp hp01/MyApp_1.0.0.0_neutral__abcdef123ghij
+ ```
+
+1. Find the MSIX package you want to migrate and use the value from the `Name` parameter in the previous output:
+
+ ```powershell
+ $parameters = @{
+ HostPoolName = '<HostPoolName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ }
+
+ $msixPackage = Get-AzWvdMsixPackage @parameters | ? Name -Match '<MSIXPackageName>'
+ $hostPoolId = (Get-AzWvdHostPool @parameters).Id
+ ```
+
+1. Migrate the MSIX package by running the following commands:
+
+ ```powershell
+ $parameters = @{
+ PermissionSource = 'RAG'
+ HostPoolsForNewPackage = $hostPoolId
+ PassThru = $true
+ }
+
+ $msixPackage | .\Migrate-MsixPackagesToAppAttach.ps1 @parameters
+ ```
+
+### Migrate all MSIX packages added to a host pool
+
+Here's an example to migrate all MSIX packages added to a host pool from MSIX app attach to app attach. This example:
+
+ - Migrates MSIX packages to the same resource group and location.
+ - Adds the new app attach packages to the same host pool.
+ - Sets all app attach packages to active.
+ - Sets all MSIX packages to inactive.
+ - Copies permissions from the associated desktop application group.
+ - Writes log information to a custom file path at `C:\MsixToAppAttach.log` in JSON format.
+
+1. From the same PowerShell prompt, get all MSIX packages added to a host pool and store them in a variable by running the following commands:
+
+ ```powershell
+ $parameters = @{
+ HostPoolName = '<HostPoolName>'
+ ResourceGroupName = '<ResourceGroupName>'
+ }
+
+ $msixPackages = Get-AzWvdMsixPackage @parameters
+ $hostPoolId = (Get-AzWvdHostPool @parameters).Id
+ ```
+
+1. Migrate the MSIX package by running the following commands:
+
+ ```powershell
+ $logFilePath = "C:\Temp\MsixToAppAttach.log"
+
+ $parameters = @{
+ IsActive = $true
+ DeactivateOrigin = $true
+ PermissionSource = 'DAG'
+ HostPoolsForNewPackage = $hostPoolId
+ PassThru = $true
+ LogInJSON = $true
+ LogFilePath = $LogFilePath
+ }
+
+ $msixPackages | .\Migrate-MsixPackagesToAppAttach.ps1 @parameters
+ ```
virtual-desktop Msixmgr Tool Syntax Description https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/msixmgr-tool-syntax-description.md
This article contains the command line parameters and syntax you can use with th
## Prerequisites
-Before you can follow the instructions in this article, you need:
+To use the MSIXMGR tool, you need:
- [Download the MSIXMGR tool](https://aka.ms/msixmgr). - Get an MSIX-packaged application (`.msix` file).
virtual-desktop Required Fqdn Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/required-fqdn-endpoint.md
description: A list of FQDNs and endpoints you must allow, ensuring your Azure V
Previously updated : 11/21/2023 Last updated : 03/01/2024 # Required FQDNs and endpoints for Azure Virtual Desktop
The following table lists optional FQDNs and endpoints that your session host vi
| `*.digicert.com` | TCP | 80 | Certificate revocation check | | `*.azure-dns.com` | TCP | 443 | Azure DNS resolution | | `*.azure-dns.net` | TCP | 443 | Azure DNS resolution |
+| `*eh.servicebus.windows.net` | TCP | 443 | Diagnostic settings |
# [Azure for US Government](#tab/azure-for-us-government)
The following table lists optional FQDNs and endpoints that your session host vi
| `*.digicert.com` | TCP | 80 | Certificate revocation check | | `*.azure-dns.com` | TCP | 443 | Azure DNS resolution | | `*.azure-dns.net` | TCP | 443 | Azure DNS resolution |
+| `*eh.servicebus.windows.net` | TCP | 443 | Diagnostic settings |
virtual-desktop Set Up Customize Master Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-customize-master-image.md
If you're installing Microsoft 365 Apps for enterprise and OneDrive on your VM,
If your users need to access certain LOB applications, we recommend you install them after completing this section's instructions.
-### Set up user profile container (FSLogix)
+### Set up FSLogix profile container
To include the FSLogix container as part of the image, follow the instructions in [Create a profile container for a host pool using a file share](create-host-pools-user-profile.md#configure-the-fslogix-profile-container). You can test the functionality of the FSLogix container with [this quickstart](/fslogix/configure-cloud-cache-tutorial/).
-### Configure Windows Defender
+### Configure antivirus exclusions for FSLogix
-If Windows Defender is configured in the VM, make sure it's configured to not scan the entire contents of VHD and VHDX files during attachment.
+If Windows Defender is configured in the VM, make sure it's configured to not scan the entire contents of VHD and VHDX files during attachment. You can find a list of exclusions for FSLogix at [Configure Antivirus file and folder exclusions](/fslogix/overview-prerequisites#configure-antivirus-file-and-folder-exclusions).
This configuration only removes scanning of VHD and VHDX files during attachment, but won't affect real-time scanning.
-For more detailed instructions for how to configure Windows Defender, see [Configure Windows Defender Antivirus exclusions on Windows Server](/windows/security/threat-protection/windows-defender-antivirus/configure-server-exclusions-windows-defender-antivirus/).
-
-To learn more about how to configure Windows Defender to exclude certain files from scanning, see [Configure and validate exclusions based on file extension and folder location](/windows/security/threat-protection/windows-defender-antivirus/configure-extension-file-exclusions-windows-defender-antivirus/).
+If you're using Windows Defender, you can learn more about how to configure Windows Defender to exclude certain files from scanning at [Configure and validate exclusions based on file extension and folder location](/windows/security/threat-protection/windows-defender-antivirus/configure-extension-file-exclusions-windows-defender-antivirus/).
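As a hedged illustration of adding such exclusions with the Microsoft Defender Antivirus PowerShell cmdlets (the paths below are examples only; use the complete, current list from the linked FSLogix documentation):

```powershell
# Example exclusions only; see the FSLogix documentation for the authoritative list.
Add-MpPreference -ExclusionPath 'C:\Program Files\FSLogix\Apps\frxdrv.sys'
Add-MpPreference -ExclusionPath '\\<storage-account>.file.core.windows.net\<share>\*\*.VHDX'
```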
### Disable Automatic Updates
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Se
### Disable Storage Sense
-For Azure Virtual Desktop session hosts that use Windows 10 Enterprise or Windows 10 Enterprise multi-session, we recommend disabling Storage Sense. Disks where the operating system is installed are typically small in size and user data is stored remotely through profile roaming. This scenario results in Storage Sense believing that the disk is critically low on free space. You can disable Storage Sense in the Settings menu under **Storage**, as shown in the following screenshot:
+For Azure Virtual Desktop session hosts that use Windows 10 Enterprise or Windows 10 Enterprise multi-session, we recommend disabling Storage Sense. Disks where the operating system is installed are typically small in size and user data is stored remotely through profile roaming. This scenario results in Storage Sense believing that the disk is critically low on free space. You can disable Storage Sense in the image using the registry, or use Group Policy or Intune to disable Storage Sense after the session hosts are deployed.
-> [!div class="mx-imgBorder"]
-> ![A screenshot of the Storage menu under Settings. The "Storage sense" option is turned off.](media/storagesense.png)
+- For the registry, you can run the following command from an elevated PowerShell prompt to disable Storage Sense:
-You can also run the following command from an elevated PowerShell prompt to disable Storage Sense:
+ ```powershell
+ New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy" -Name 01 -PropertyType DWORD -Value 0 -Force
+ ```
-```powershell
-New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy" -Name 01 -PropertyType DWORD -Value 0 -Force
-```
+- For Group Policy, configure a Group Policy Object with the setting **Computer Configuration** > **Administrative Templates** > **System** > **Storage Sense** > **Allow Storage Sense** set to **Disabled**.
+
+- For Intune, configure a configuration profile using the settings catalog with the setting **Storage** > **Allow Storage Sense Global** set to **Block**.
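If you disable Storage Sense through the registry as shown above, a quick way to confirm the value took effect in the image is:

```powershell
# Check the Storage Sense policy value set earlier; 0 means Storage Sense is disabled.
Get-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\StorageSense\Parameters\StoragePolicy" -Name 01
```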
### Include additional language support
virtual-desktop Whats New Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-documentation.md
Title: What's new in documentation - Azure Virtual Desktop
-description: Learn about new and updated articles to the Azure Virtual Desktop documentation
+description: Learn about new and updated articles to the Azure Virtual Desktop documentation.
Previously updated : 01/31/2024 Last updated : 02/29/2024 # What's new in documentation for Azure Virtual Desktop
-We update documentation for Azure Virtual Desktop regularly. In this article we highlight articles for new features and where there have been important updates to existing articles.
+We update documentation for Azure Virtual Desktop regularly. In this article, we highlight articles for new features and where there are important updates to existing articles.
+
+## February 2024
+
+In February 2024, we published the following changes:
+
+- Added guidance for MSIX and Appx package certificates when using MSIX app attach or app attach. For more information, see [MSIX app attach and app attach in Azure Virtual Desktop](app-attach-overview.md#msix-and-appx-package-certificates).
+- Consolidated articles for the three Remote Desktop clients available for Windows into a single article, [Connect to Azure Virtual Desktop with the Remote Desktop client for Windows](users/connect-windows.md).
+- Added Azure CLI guidance to [Configure personal desktop assignment](configure-host-pool-personal-desktop-assignment-type.md).
+- Updated [Drain session hosts for maintenance in Azure Virtual Desktop](drain-mode.md), including prerequisites and separating the Azure portal and Azure PowerShell steps into tabs.
+- Updated [Customize the feed for Azure Virtual Desktop users](customize-feed-for-virtual-desktop-users.md), including prerequisite, Azure PowerShell steps, and separating the Azure portal and Azure PowerShell steps into tabs.
## January 2024 In January 2024, we published the following changes: - Consolidated articles to [Create and assign an autoscale scaling plan for Azure Virtual Desktop](autoscale-scaling-plan.md) into a single article.- - Added PowerShell commands to [Create and assign an autoscale scaling plan for Azure Virtual Desktop](autoscale-scaling-plan.md).- - Removed the separate documentation section for RemoteApp streaming and combined it with the main Azure Virtual Desktop documentation. Some articles that were previously only in the RemoteApp section are now discoverable in the main Azure Virtual Desktop documentation, such as [Understand and estimate costs for Azure Virtual Desktop](understand-estimate-costs.md) and [Licensing Azure Virtual Desktop](licensing.md). ## December 2023
In January 2024, we published the following changes:
In December 2023, we published the following changes: - Published new content for the preview of *app attach*, which is now available alongside MSIX app attach. App attach brings many benefits over MSIX app attach, including assigning applications per user, using the same application package across multiple host pools, upgrading applications, and being able to run two versions of the same application concurrently on the same session host. For more information, see [MSIX app attach and app attach in Azure Virtual Desktop](app-attach-overview.md?pivots=app-attach).- - Updated the article [Use Microsoft Teams on Azure Virtual Desktop](teams-on-avd.md) to include support for [new Teams desktop client](/microsoftteams/new-teams-desktop-admin) on your session hosts.- - Updated the article [Configure single sign-on for Azure Virtual Desktop using Microsoft Entra ID authentication](configure-single-sign-on.md) to include example PowerShell commands to help configure single sign-on using Microsoft Entra ID authentication. ## November 2023
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Azure Virtual Desktop for Azure Stack HCI extends the capabilities of the Micros
For more information, see [Azure Virtual Desktop for Azure Stack HCI now available!](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/azure-virtual-desktop-for-azure-stack-hci-now-available/ba-p/4038030)
-### Azure Virtual Desktop web client version 2 is now available
+### New Azure Virtual Desktop web client is now available
-The Azure Virtual Desktop web client has now updated to web client version 2. All users automatically migrate to this new version of the web client to access their resources.
+We've updated the Azure Virtual Desktop web client to the new web client. All users automatically migrate to this new version of the web client to access their resources.
-For more information about the new features available in version 2, see [Use features of the Remote Desktop Web client](./users/client-features-web.md).
+For more information about the new features available in the new web client, see [Use features of the Remote Desktop Web client](./users/client-features-web.md).
## January 2024
virtual-machines Dalsv6 Daldsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dalsv6-daldsv6-series.md
+
+ Title: Dalsv6 and Daldsv6-series
+description: Specifications for the Dalsv6 and Daldsv6-series VMs.
+++++ Last updated : 01/29/2024++
+# Dalsv6 and Daldsv6-series (Preview)
+
+**Applies to:** ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets 
+
+> [!IMPORTANT]
+> Azure Virtual Machine Series Dalsv6 and Daldsv6 are currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. 
+
+The Dalsv6-series and Daldsv6-series utilize AMD's 4th Generation EPYC<sup>TM</sup> 9004 processor in a multi-threaded configuration with up to 320 MB L3 cache. The Dalsv6 and Daldsv6 VM series provide 2 GiB of RAM per vCPU and are optimized for workloads that require less RAM per vCPU than standard VM sizes. The Dalsv6-series can reduce costs when running non-memory intensive applications, including web servers, gaming, video encoding, AI/ML, and batch processing.
+
+> [!NOTE]
+> The new Dalsv6 and Daldsv6 VM series will only work on OS images that are tagged with NVMe support. If your current OS image isn't supported for NVMe, you'll see an error message. NVMe support is available in 50+ of the most popular OS images, and we continuously improve the OS image coverage. Refer to our up-to-date [lists](https://learn.microsoft.com/azure/virtual-machines/enable-nvme-interface) for information on which OS images are tagged as NVMe supported. For more information on NVMe enablement, see our [FAQ](https://learn.microsoft.com/azure/virtual-machines/enable-nvme-faqs).
+>
+> The public preview of the new Dalsv6 and Daldsv6 VM series is now available. For more information and to sign up for the preview, visit our [announcement](https://techcommunity.microsoft.com/t5/azure-compute-blog/public-preview-new-amd-based-vms-with-increased-performance/ba-p/3981351) and follow the link to the [sign-up form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9RmLSiOpIpImo4Q01A_jJlUM1ZSRVlYU04wMUJQVjNQRFZHQzdEVFc1VyQlQCN0PWcu). This is an opportunity to experience our latest innovation.
+
+## Dalsv6-series
+
+Dalsv6-series VMs utilize AMD's 4th Generation EPYC<sup>TM</sup> 9004 processors that can achieve a boosted maximum frequency of 3.7 GHz. These virtual machines offer up to 96 vCPUs and 192 GiB of RAM. These VM sizes can reduce cost when running non-memory intensive applications. The new VMs with no local disk provide a better value proposition for workloads that do not require local temporary storage.
+
+> [!NOTE]
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](https://learn.microsoft.com/azure/virtual-machines/azure-vms-no-temp-disk).
++
+Dalsv6-series virtual machines don't have any temporary storage, which lowers the price of entry. You can attach Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported 
+[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported 
+[Live Migration](/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview 
+[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported 
+[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2 
+[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli): Supported 
+[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported 
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported 
++
+| Size | vCPU | Memory: GiB | Local NVMe Temporary storage (SSD) GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps<sup>1</sup> | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|---|---|
+| Standard_D2als_v6 | 2 | 4 | Remote Storage Only | 4 | 4000/90 | 20000/1250 | 4000/90 | 20000/1250 | 2 | 12500 |
+| Standard_D4als_v6 | 4 | 8 | Remote Storage Only | 8 | 7600/180 | 20000/1250 | 7600/180 | 20000/1250 | 2 | 12500 |
+| Standard_D8als_v6 | 8 | 16 | Remote Storage Only | 16 | 15200/360 | 20000/1250 | 15200/360 | 20000/1250 | 4 | 12500 |
+| Standard_D16als_v6 | 16 | 32 | Remote Storage Only | 32 | 30400/720 | 40000/1250 | 30400/720 | 40000/1250 | 8 | 16000 |
+| Standard_D32als_v6 | 32 | 64 | Remote Storage Only | 32 | 57600/1440 | 80000/1700 | 57600/1440 | 80000/1700 | 8 | 20000 |
+| Standard_D48als_v6 | 48 | 96 | Remote Storage Only | 32 | 86400/2160 | 90000/2550 | 86400/2160 | 90000/2550 | 8 | 28000 |
+| Standard_D64als_v6 | 64 | 128 | Remote Storage Only | 32 | 115200/2880 | 120000/3400 | 115200/2880 | 120000/3400 | 8 | 36000 |
+| Standard_D96als_v6 | 96 | 192 | Remote Storage Only | 32 | 175000/4320 | 175000/5090 | 175000/4320 | 175000/5090 | 8 | 40000 |
+
+<sup>1</sup> Dalsv6-series VMs can [burst](/azure/virtual-machines/disk-bursting) their disk performance and get up to their bursting max for up to 30 minutes at a time. 
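As a quick, hedged sketch of deploying one of these sizes during the preview (all resource names and the image URN are placeholders; your subscription must be enrolled in the preview and use an NVMe-tagged OS image):

```powershell
# Placeholder names; requires preview enrollment and an OS image tagged with NVMe support.
New-AzVM `
    -ResourceGroupName 'rg-dalsv6-test' `
    -Name 'vm-d2als-v6' `
    -Location 'eastus' `
    -Image 'Canonical:0001-com-ubuntu-server-jammy:22_04-lts-gen2:latest' `
    -Size 'Standard_D2als_v6' `
    -Credential (Get-Credential)
```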
+
+## Daldsv6-series
+Daldsv6-series VMs utilize AMD's 4th Generation EPYC<sup>TM</sup> 9004 processors that can achieve a boosted maximum frequency of 3.7 GHz. These virtual machines offer up to 96 vCPUs, 192 GiB of RAM, and up to 5,280 GiB of fast local NVMe temporary storage. These VM sizes can reduce cost when running non-memory intensive applications.
+Daldsv6-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). 
+
+[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported 
+[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported 
+[Live Migration](/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview 
+[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported 
+[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2 
+[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli): Supported 
+[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview 
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported 
+
+| Size | vCPU | Memory: GiB | Local NVMe Temporary storage (SSD) | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps<sup>1</sup> | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) | Max network bandwidth (Mbps) | Max temp storage read throughput: IOPS / MBps |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| Standard_D2alds_v6 | 2 | 4 | 1x110 GiB | 4 | 4000/90 | 20000/1250 | 4000/90 | 20000/1250 | 2 | 12500 | 12500 | 37500/180 |
+| Standard_D4alds_v6 | 4 | 8 | 1x220 GiB | 8 | 7600/180 | 20000/1250 | 7600/180 | 20000/1250 | 2 | 12500 | 12500 | 75000/360 |
+| Standard_D8alds_v6 | 8 | 16 | 1x440 GiB | 16 | 15200/360 | 20000/1250 | 15200/360 | 20000/1250 | 4 | 12500 | 12500 | 150000/720 |
+| Standard_D16alds_v6 | 16 | 32 | 2x440 GiB | 32 | 30400/720 | 40000/1250 | 30400/720 | 40000/1250 | 8 | 16000 | 12500 | 300000/1440 |
+| Standard_D32alds_v6 | 32 | 64 | 4x440 GiB | 32 | 57600/1440 | 80000/1700 | 57600/1440 | 80000/1700 | 8 | 20000 | 16000 | 600000/2880 |
+| Standard_D48alds_v6 | 48 | 96 | 6x440 GiB | 32 | 86400/2160 | 90000/2550 | 86400/2160 | 90000/2550 | 8 | 28000 | 24000 | 900000/4320 |
+| Standard_D64alds_v6 | 64 | 128 | 4x880 GiB | 32 | 115200/2880 | 120000/3400 | 115200/2880 | 120000/3400 | 8 | 36000 | 32000 | 1200000/5760 |
+| Standard_D96alds_v6 | 96 | 192 | 6x880 GiB | 32 | 175000/4320 | 175000/5090 | 175000/4320 | 175000/5090 | 8 | 40000 | 40000 | 1800000/8640 |
++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+To estimate costs, use the [Pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+For more information on disk types, see [What disk types are available in Azure?](disks-types.md)
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Dasv6 Dadsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dasv6-dadsv6-series.md
+
+ Title: 'Dasv6 and Dadsv6-series - Azure Virtual Machines'
+description: Specifications for the Dasv6 and Dadsv6-series VMs.
+++++ Last updated : 01/29/2024++
+# Dasv6 and Dadsv6-series (Preview)
+
+**Applies to:** ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets 
+
+> [!Important]
+> Azure Virtual Machine Series Dasv6 and Dadsv6 are currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. 
+
+The Dasv6-series and Dadsv6-series utilize AMD's 4th Generation EPYC<sup>TM</sup> 9004 processor in a multi-threaded configuration with up to 320 MB L3 cache, increasing customer options for running their general-purpose workloads. The Dasv6-series VMs work well for many general computing workloads, such as e-commerce systems, web front ends, desktop virtualization solutions, customer relationship management applications, entry-level and mid-range databases, application servers, and more. 
+
+> [!NOTE]
+> The new Dasv6 and Dadsv6 VM series will only work on OS images that are tagged with NVMe support.  If your current OS image is not supported for NVMe, you’ll see an error message. NVMe support is available in 50+ of the most popular OS images, and we continuously improve the OS image coverage. Please refer to our up-to-date [lists](https://learn.microsoft.com/azure/virtual-machines/enable-nvme-interface) for information on which OS images are tagged as NVMe supported.  For more information on NVMe enablement, see our [FAQ](https://learn.microsoft.com/azure/virtual-machines/enable-nvme-faqs).
+>
+> The public preview of the new Dasv6 and Dadsv6 VM series is now available. For more information and to sign up for the preview, visit our [announcement](https://techcommunity.microsoft.com/t5/azure-compute-blog/public-preview-new-amd-based-vms-with-increased-performance/ba-p/3981351) and follow the link to the [sign-up form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9RmLSiOpIpImo4Q01A_jJlUM1ZSRVlYU04wMUJQVjNQRFZHQzdEVFc1VyQlQCN0PWcu). This is an opportunity to experience our latest innovation.
+
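A simple way to check whether these preview sizes are visible in a given region from Azure PowerShell (the region name is a placeholder):

```powershell
# List any Dasv6/Dadsv6 sizes currently offered in the chosen region (placeholder region).
Get-AzVMSize -Location 'eastus' |
    Where-Object { $_.Name -like 'Standard_D*as_v6' -or $_.Name -like 'Standard_D*ads_v6' }
```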
+## Dasv6-series 
+Dasv6-series VMs utilize AMD's 4th Generation EPYC<sup>TM</sup> 9004 processors that can achieve a boosted maximum frequency of 3.7 GHz. These virtual machines offer up to 96 vCPUs and 384 GiB of RAM. The Dasv6-series sizes offer a combination of vCPU and memory for most production workloads. The new VMs with no local disk provide a better value proposition for workloads that do not require local temporary storage.
+
+> [!NOTE]
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](https://learn.microsoft.com/azure/virtual-machines/azure-vms-no-temp-disk).
+ 
+
+Dasv6-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported 
+[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported 
+[Live Migration](/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview 
+[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported 
+[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2 
+[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli): Supported 
+[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported 
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported 
+
+| Size | vCPU | Memory: GiB | Local NVMe Temporary storage (SSD) GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps<sup>1</sup> | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|---|---|
+| Standard_D2as_v6 | 2 | 8 | Remote Storage Only | 4 | 4000/90 | 20000/1250 | 4000/90 | 20000/1250 | 2 | 12500 |
+| Standard_D4as_v6 | 4 | 16 | Remote Storage Only | 8 | 7600/180 | 20000/1250 | 7600/180 | 20000/1250 | 2 | 12500 |
+| Standard_D8as_v6 | 8 | 32 | Remote Storage Only | 16 | 15200/360 | 20000/1250 | 15200/360 | 20000/1250 | 4 | 12500 |
+| Standard_D16as_v6 | 16 | 64 | Remote Storage Only | 32 | 30400/720 | 40000/1250 | 30400/720 | 40000/1250 | 8 | 16000 |
+| Standard_D32as_v6 | 32 | 128 | Remote Storage Only | 32 | 57600/1440 | 80000/1700 | 57600/1440 | 80000/1700 | 8 | 20000 |
+| Standard_D48as_v6 | 48 | 192 | Remote Storage Only | 32 | 86400/2160 | 90000/2550 | 86400/2160 | 90000/2550 | 8 | 28000 |
+| Standard_D64as_v6 | 64 | 256 | Remote Storage Only | 32 | 115200/2880 | 120000/3400 | 115200/2880 | 120000/3400 | 8 | 36000 |
+| Standard_D96as_v6 | 96 | 384 | Remote Storage Only | 32 | 175000/4320 | 175000/5090 | 175000/4320 | 175000/5090 | 8 | 40000 |
+
+<sup>1</sup> Dasv6-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
+
+## Dadsv6-series
+Dadsv6-series VMs utilize AMD's 4th Generation EPYC<sup>TM</sup> 9004 processors that can achieve a boosted maximum frequency of 3.7 GHz. These virtual machines offer up to 96 vCPUs and 384 GiB of RAM. The Dadsv6-series sizes offer a combination of vCPU, memory, and fast local NVMe temporary storage for most production workloads.
+Dadsv6-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/).
+
+[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported 
+[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported 
+[Live Migration](/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview 
+[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported 
+[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2 
+[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli): Supported 
+[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview 
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported 
+
+| Size | vCPU | Memory: GiB | Local NVMe Temporary storage (SSD) | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps<sup>1</sup> | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) | Max network bandwidth (Mbps) | Max temp storage read throughput: IOPS / MBps |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| Standard_D2ads_v6 | 2 | 8 | 1x110 GiB | 4 | 4000/90 | 20000/1250 | 4000/90 | 20000/1250 | 2 | 12500 | 12500 | 37500/180 |
+| Standard_D4ads_v6 | 4 | 16 | 1x220 GiB | 8 | 7600/180 | 20000/1250 | 7600/180 | 20000/1250 | 2 | 12500 | 12500 | 75000/360 |
+| Standard_D8ads_v6 | 8 | 32 | 1x440 GiB | 16 | 15200/360 | 20000/1250 | 15200/360 | 20000/1250 | 4 | 12500 | 12500 | 150000/720 |
+| Standard_D16ads_v6 | 16 | 64 | 2x440 GiB | 32 | 30400/720 | 40000/1250 | 30400/720 | 40000/1250 | 8 | 16000 | 12500 | 300000/1440 |
+| Standard_D32ads_v6 | 32 | 128 | 4x440 GiB | 32 | 57600/1440 | 80000/1700 | 57600/1440 | 80000/1700 | 8 | 20000 | 16000 | 600000/2880 |
+| Standard_D48ads_v6 | 48 | 192 | 6x440 GiB | 32 | 86400/2160 | 90000/2550 | 86400/2160 | 90000/2550 | 8 | 28000 | 24000 | 900000/4320 |
+| Standard_D64ads_v6 | 64 | 256 | 4x880 GiB | 32 | 115200/2880 | 120000/3400 | 115200/2880 | 120000/3400 | 8 | 36000 | 32000 | 1200000/5760 |
+| Standard_D96ads_v6 | 96 | 384 | 6x880 GiB | 32 | 175000/4320 | 175000/5090 | 175000/4320 | 175000/5090 | 8 | 40000 | 40000 | 1800000/8640 |
+
+<sup>1</sup> Dadsv6-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+To estimate costs, use the [Pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+For more information on disk types, see [What disk types are available in Azure?](disks-types.md)
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Easv6 Eadsv6 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/easv6-eadsv6-series.md
+
+ Title: 'Easv6 and Eadsv6-series - Azure Virtual Machines'
+description: Specifications for the Easv6 and Eadsv6-series VMs.
+++++ Last updated : 01/29/2024++
+# Easv6 and Eadsv6-series
+
+**Applies to:** ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ Uniform scale sets 
+
+> [!IMPORTANT]
+> Azure Virtual Machine Series Easv6 and Eadsv6 are currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. 
+
+The Easv6-series and Eadsv6-series utilize AMD's 4th Generation EPYC<sup>TM</sup> 9004 processor in a multi-threaded configuration with up to 320 MB L3 cache, increasing customer options for running most memory optimized workloads. The Easv6-series VMs are ideal for memory-intensive enterprise applications, data warehousing, business intelligence, in-memory analytics, and financial transactions. 
+
+> [!NOTE]
+> The new Easv6 and Eadsv6 VM series will only work on OS images that are tagged with NVMe support.  If your current OS image is not supported for NVMe, you’ll see an error message. NVMe support is available in 50+ of the most popular OS images, and we continuously improve the OS image coverage. Please refer to our up-to-date [lists](https://learn.microsoft.com/azure/virtual-machines/enable-nvme-interface) for information on which OS images are tagged as NVMe supported.  For more information on NVMe enablement, see our [FAQ](https://learn.microsoft.com/azure/virtual-machines/enable-nvme-faqs).
+>
+> The public preview of the new Easv6 and Eadsv6 VM series is now available. For more information and to sign up for the preview, visit our [announcement](https://techcommunity.microsoft.com/t5/azure-compute-blog/public-preview-new-amd-based-vms-with-increased-performance/ba-p/3981351) and follow the link to the [sign-up form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR9RmLSiOpIpImo4Q01A_jJlUM1ZSRVlYU04wMUJQVjNQRFZHQzdEVFc1VyQlQCN0PWcu). This is an opportunity to experience our latest innovation.
++
+## Easv6-series 
+Easv6-series VMs utilize AMD's 4th Generation EPYC<sup>TM</sup> 9004 processors that can achieve a boosted maximum frequency of 3.7 GHz. These virtual machines offer up to 96 vCPUs and 672 GiB of RAM. The Easv6-series sizes offer a combination of vCPU and memory that is ideal for memory-intensive enterprise applications. The new VMs with no local disk provide a better value proposition for workloads that do not require local temporary storage.
+
+> [!Note]
+> For frequently asked questions, see [Azure VM sizes with no local temp disk](/azure/virtual-machines/azure-vms-no-temp-disk).
+
+Easv6-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). 
+
+[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported 
+[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported 
+[Live Migration](/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview 
+[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported 
+[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2 
+[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli): Supported 
+[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported 
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported
+
+| Size | vCPU | Memory: GiB | Local NVMe Temporary storage (SSD) GiB | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps<sup>1</sup> | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) |
+|---|---|---|---|---|---|---|---|---|---|---|
+| Standard_E2as_v6 | 2 | 16 | Remote Storage Only | 4 | 4000/90 | 20000/1250 | 4000/90 | 20000/1250 | 2 | 12500 |
+| Standard_E4as_v6 | 4 | 32 | Remote Storage Only | 8 | 7600/180 | 20000/1250 | 7600/180 | 20000/1250 | 2 | 12500 |
+| Standard_E8as_v6 | 8 | 64 | Remote Storage Only | 16 | 15200/360 | 20000/1250 | 15200/360 | 20000/1250 | 4 | 12500 |
+| Standard_E16as_v6 | 16 | 128 | Remote Storage Only | 32 | 30400/720 | 40000/1250 | 30400/720 | 40000/1250 | 8 | 16000 |
+| Standard_E20as_v6 | 20 | 160 | Remote Storage Only | 32 | 38000/900 | 64000/1600 | 38000/900 | 64000/1600 | 8 | 16000 |
+| Standard_E32as_v6 | 32 | 256 | Remote Storage Only | 32 | 57600/1440 | 80000/1700 | 57600/1440 | 80000/1700 | 8 | 20000 |
+| Standard_E48as_v6 | 48 | 384 | Remote Storage Only | 32 | 86400/2160 | 90000/2550 | 86400/2160 | 90000/2550 | 8 | 28000 |
+| Standard_E64as_v6 | 64 | 512 | Remote Storage Only | 32 | 115200/2880 | 120000/3400 | 115200/2880 | 120000/3400 | 8 | 36000 |
+| Standard_E96as_v6 | 96 | 672 | Remote Storage Only | 32 | 175000/4320 | 175000/5090 | 175000/4320 | 175000/5090 | 8 | 40000 |
+
+<sup>1</sup> Easv6-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
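Because these sizes rely on remote storage, attaching a data disk is a common first step after deployment. The following hedged sketch (all resource names are placeholders) creates and attaches a Premium SSD data disk to an existing Easv6 VM:

```powershell
# Placeholder names; assumes an existing VM and resource group in the given region.
$rg = 'rg-easv6-test'

# Create an empty 128 GiB Premium SSD managed disk.
$diskConfig = New-AzDiskConfig -Location 'eastus' -SkuName 'Premium_LRS' -DiskSizeGB 128 -CreateOption 'Empty'
$disk = New-AzDisk -ResourceGroupName $rg -DiskName 'data-disk-01' -Disk $diskConfig

# Attach the disk to the VM at LUN 0 and apply the change.
$vm = Get-AzVM -ResourceGroupName $rg -Name 'vm-e2as-v6'
$vm = Add-AzVMDataDisk -VM $vm -Name 'data-disk-01' -ManagedDiskId $disk.Id -Lun 0 -CreateOption 'Attach'
Update-AzVM -ResourceGroupName $rg -VM $vm
```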
+
+## Eadsv6-series 
+Eadsv6-series VMs utilize AMD's 4th Generation EPYC<sup>TM</sup> 9004 processors that can achieve a boosted maximum frequency of 3.7 GHz. These virtual machines offer up to 96 vCPUs and 672 GiB of RAM. The Eadsv6-series sizes offer a combination of vCPU, memory, and fast local NVMe temporary storage that is ideal for memory-intensive enterprise applications.
+Eadsv6-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). 
+
+[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported 
+[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported 
+[Live Migration](/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview 
+[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported 
+[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2 
+[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli): Supported 
+[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview 
+[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported 
+
+| Size | vCPU | Memory: GiB | Local NVMe Temporary storage (SSD) | Max data disks | Max uncached Premium SSD disk throughput: IOPS/MBps | Max burst uncached Premium SSD disk throughput: IOPS/MBps<sup>1</sup> | Max uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps | Max burst uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps<sup>1</sup> | Max NICs | Max network bandwidth (Mbps) | Max temp storage read throughput: IOPS / MBps |
+|---|---|---|---|---|---|---|---|---|---|---|---|
+| Standard_E2ads_v6 | 2 | 16 | 1x110 GiB | 4 | 4000/90 | 20000/1250 | 4000/90 | 20000/1250 | 2 | 12500 | 37500/180 |
+| Standard_E4ads_v6 | 4 | 32 | 1x220 GiB | 8 | 7600/180 | 20000/1250 | 7600/180 | 20000/1250 | 2 | 12500 | 75000/360 |
+| Standard_E8ads_v6 | 8 | 64 | 1x440 GiB | 16 | 15200/360 | 20000/1250 | 15200/360 | 20000/1250 | 4 | 12500 | 150000/720 |
+| Standard_E16ads_v6 | 16 | 128 | 2x440 GiB | 32 | 30400/720 | 40000/1250 | 30400/720 | 40000/1250 | 8 | 16000 | 300000/1440 |
+| Standard_E20ads_v6 | 20 | 160 | 2x550 GiB | 32 | 38000/900 | 64000/1600 | 38000/900 | 64000/1600 | 8 | 16000 | 375000/1800 |
+| Standard_E32ads_v6 | 32 | 256 | 4x440 GiB | 32 | 57600/1440 | 80000/1700 | 57600/1440 | 80000/1700 | 8 | 20000 | 600000/2880 |
+| Standard_E48ads_v6 | 48 | 384 | 6x440 GiB | 32 | 86400/2160 | 90000/2550 | 86400/2160 | 90000/2550 | 8 | 28000 | 900000/4320 |
+| Standard_E64ads_v6 | 64 | 512 | 4x880 GiB | 32 | 115200/2880 | 120000/3400 | 115200/2880 | 120000/3400 | 8 | 36000 | 1200000/5760 |
+| Standard_E96ads_v6 | 96 | 672 | 6x880 GiB | 32 | 175000/4320 | 175000/5090 | 175000/4320 | 175000/5090 | 8 | 40000 | 1800000/8640 |
+
+<sup>1</sup> Eadsv6-series VMs can [burst](disk-bursting.md) their disk performance and get up to their bursting max for up to 30 minutes at a time.
++
+## Other sizes and information
+
+- [General purpose](sizes-general.md)
+- [Memory optimized](sizes-memory.md)
+- [Storage optimized](sizes-storage.md)
+- [GPU optimized](sizes-gpu.md)
+- [High performance compute](sizes-hpc.md)
+- [Previous generations](sizes-previous-gen.md)
+
+To estimate costs, use the [Pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+
+For more information on disk types, see [What disk types are available in Azure?](disks-types.md)
+
+## Next steps
+
+Learn more about how [Azure compute units (ACU)](acu.md) can help you compare compute performance across Azure SKUs.
virtual-machines Flash Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/flash-azure-monitor.md
Select the VM availability metric chart on the overview page, to navigate to [Me
| Data Retention | Data for the VM availability metric is [stored for 93 days](../azure-monitor/essentials/data-platform-metrics.md#retention-of-metrics) to help trend analysis and historical lookback. | | Pricing | Refer to the [Pricing breakdown](https://azure.microsoft.com/pricing/details/monitor/#pricing), specifically in the "Metrics" and "Alert Rules" sections. |
-We plan to include impact details (user vs platform initiated, planned vs unplanned) as dimensions to the metric, so users are well equipped to interpret dips, and set up much more targeted metric alerts. With the emission of dimensions in 202, we also anticipate transitioning the offering to a general availability status.
+We plan to include impact details (user vs platform initiated, planned vs unplanned) as dimensions to the metric, so users are well equipped to interpret dips, and set up much more targeted metric alerts. With the emission of dimensions in 2023, we also anticipate transitioning the offering to a general availability status.
### Useful links
To learn more about the solutions offered, proceed to corresponding solution art
* [Use Azure Resource Graph to monitor Azure Virtual Machine availability](flash-azure-resource-graph.md) * [Use Event Grid system topics to monitor Azure Virtual Machine availability](flash-event-grid-system-topic.md)
-For a general overview of how to monitor Azure Virtual Machines, see [Monitor Azure virtual machines](monitor-vm.md) and the [Monitoring Azure virtual machines reference](monitor-vm-reference.md).
+For a general overview of how to monitor Azure Virtual Machines, see [Monitor Azure virtual machines](monitor-vm.md) and the [Monitoring Azure virtual machines reference](monitor-vm-reference.md).
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generation-2.md
Previously updated : 08/26/2022 Last updated : 03/04/2024
Generation 2 VMs support the following Marketplace images:
* SUSE Linux Enterprise Server 15 SP3, SP2 * SUSE Linux Enterprise Server 12 SP4 * Ubuntu Server 22.04 LTS, 20.04 LTS, 18.04 LTS, 16.04 LTS
-* RHEL 8.5, 8.4, 8.3, 8.2, 8.1, 8.0, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0
+* RHEL 9.3, 9.2, 9.1, 9.0, 8.9, 8.8, 8.7, 8.6, 8.5, 8.4, 8.3, 8.2, 8.1, 8.0, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0
* Cent OS 8.4, 8.3, 8.2, 8.1, 8.0, 7.7, 7.6, 7.5, 7.4
-* Oracle Linux 8.4 LVM, 8.3 LVM, 8.2 LVM, 8.1, 7.9 LVM, 7.9, 7.8, 7.7
+* Oracle Linux 9.3, 9.2, 9.1, 9.0, 8.9, 8.8, 8.7, 8.6, 8.5, 8.4, 8.3, 8.2, 8.1, 7.9, 7.8, 7.7
> [!NOTE] > Specific Virtual machine sizes like Mv2-Series, DC-series, ND A100 v4-series, NDv2-series, Msv2 and Mdsv2-series may only support a subset of these images - please look at the relevant virtual machine size documentation for complete details.
virtual-wan Route Maps About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/route-maps-about.md
description: Learn about Virtual WAN Route-maps.
Previously updated : 05/31/2023 Last updated : 03/04/2024 # About Route-maps for virtual hubs (Preview)
-Route-maps is a powerful feature that gives you the ability to control route advertisements and routing for Virtual WAN virtual hubs. Route-maps lets you have more control of the routing that enters and leaves Azure Virtual WAN site-to-site (S2S) VPN connections, User VPN point-to-site (P2S) connections, ExpressRoute (ER) connections, and virtual network (VNet) connections. Route-maps can be configured using the [Azure portal](route-maps-how-to.md).
+Route-maps is a feature that gives you the ability to control route advertisements and routing for Virtual WAN virtual hubs. Route-maps lets you have more control of the routing that enters and leaves Azure Virtual WAN site-to-site (S2S) VPN connections, User VPN point-to-site (P2S) connections, ExpressRoute (ER) connections, and virtual network (VNet) connections. Route-maps can be configured using the Azure portal. For configuration steps, see [How to configure Route-maps](route-maps-how-to.md).
-In Virtual WAN, the virtual hub router acts as a route manager, providing simplification in routing operations within and across virtual hubs. The virtual hub router simplifies routing management by being the central routing engine that talks to gateways (S2S, ER, and P2S), Azure Firewall, and Network Virtual Appliances (NVAs). While the gateways make their routing decisions, the virtual hub router provides central route management and enables advanced routing scenarios in the virtual hub with features such as custom route tables, route association, and propagation.
+> [!IMPORTANT]
+> [!INCLUDE [Preview text](../../includes/virtual-wan-route-maps-preview.md)]
+>
-Route-maps lets you perform route aggregation, route filtering, and gives you the ability to modify BGP attributes such as AS-PATH and Community to manage routes and routing decisions.
+## Why use Route-maps?
-* **Connection:** A route map can be applied to user, branch, ExpressRoute, and VNet connections.
+Some of the key benefits of using Route-maps are:
+
+* Route-maps can be used to summarize routes when you have on-premises networks connected to Virtual WAN via ExpressRoute or VPN and are limited by the number of routes that can be advertised from/to virtual hub.
+* You can use Route-maps to control routes entering and leaving your Virtual WAN deployment among on-premises and virtual-networks.
+* You can control routing decisions in your Virtual WAN deployment by modifying a BGP attribute such as *AS-PATH* to make a route more, or less preferable. This is helpful when there are destination prefixes reachable via multiple paths and customers want to use AS-PATH to control best path selection.
+* You can easily tag routes using the BGP Community attribute in order to manage routes.
+
+In Virtual WAN, the virtual hub router acts as a route manager, providing simplification in routing operations within and across virtual hubs. The virtual hub router simplifies routing management by being the central routing engine that talks to gateways (S2S, ER, and P2S), Azure Firewall, and Network Virtual Appliances (NVAs).
+
+While the gateways make their routing decisions, the virtual hub router provides central route management and enables advanced routing scenarios in the virtual hub with features such as custom route tables, route association, and propagation.
+
+Route-maps lets you perform route aggregation, route filtering, and gives you the ability to modify BGP attributes such as AS-PATH and Community to manage routes and routing decisions. Route-maps are configurable for the following resources and settings:
+
+* **Connections:** A route-map can be applied to user, branch, ExpressRoute, and VNet connections.
* ExpressRoute connection: The hub's connection to an ER circuit. * Site-to-site VPN connection: The hub's connection to a VPN site. * VNet connection: The hub's connection to a virtual network. * Point-to-site connection: The hubΓÇÖs connection to a P2S user.
- A virtual hub can have a route map applied to any of the connections, as shown in the following diagram:
+ A virtual hub can have a route-map applied to any of the connections, as shown in the following diagram:
:::image type="content" source="./media/route-maps-about/architecture.png" alt-text="Screenshot shows a diagram of the Virtual WAN architecture using Route-map."lightbox="./media/route-maps-about/architecture.png":::
-* **Route aggregation:** Route-maps lets you reduce the number of routes coming in and/or out of a connection by summarizing. (Example: 10.2.1.0.0/24, 10.2.2.0/24 and 10.2.3.0/24 can be summarized to 10.2.0.0/16)
+* **Route aggregation:** Route-maps lets you reduce the number of routes coming in and/or out of a connection by summarizing. (Example: 10.2.1.0/24, 10.2.2.0/24, and 10.2.3.0/24 can be summarized to 10.2.0.0/16).
* **Route Filtering:** Route-maps lets you exclude routes that are advertised or received from ExpressRoute connections, site-to-site VPN connections, VNet connections, and point-to-site connections. * **Modify BGP attributes:** Route-maps lets you modify AS-PATH and BGP Communities. You can now add or set ASNs (Autonomous system numbers).
-## Benefits and considerations
-
+## Considerations and limitations
-### Key benefits
+Before using Route-maps, take into consideration the following limitations:
-* If you have on-premises networks connected to Virtual WAN via ExpressRoute or VPN and are limited by the number of routes that can be advertised from/to virtual hub, you can use route maps to summarize routes.
-* You can use route maps to control routes entering and leaving your Virtual WAN deployment among on-premises and virtual-networks.
-* You can control routing decisions in your Virtual WAN deployment by modifying a BGP attribute such as *AS-PATH* to make a route more, or less preferable. This is helpful when there are destination prefixes reachable via multiple paths and customers want to use AS-PATH to control best path selection.
-* You can easily tag routes using the BGP Community attribute in order to manage routes.
-
-### Key considerations
-
-* During Preview, hubs using Route-maps must be deployed in their own virtual WANs.
-* Route-maps is only available for virtual hubs running on the Virtual Machine Scale Sets infrastructure. For more information, see the [FAQ](virtual-wan-faq.md).
-* When using route maps to summarize a set of routes, the hub router strips the *BGP Community* and *AS-PATH* attributes from those routes. This applies to both inbound and outbound routes.
+* During Preview, hubs that are using Route-maps must be deployed in their own virtual WANs.
+* The Route-maps feature is only available for virtual hubs running on the Virtual Machine Scale Sets infrastructure. For more information, see the [FAQ](virtual-wan-faq.md).
+* When using Route-maps to summarize a set of routes, the hub router strips the *BGP Community* and *AS-PATH* attributes from those routes. This applies to both inbound and outbound routes.
* When adding ASNs to the AS-PATH, don't use ASNs reserved by Azure: * Public ASNs: 8074, 8075, 12076 * Private ASNs: 65515, 65517, 65518, 65519, 65520
-* Route maps can't be applied to connections between on-premises and SD-WAN/Firewall NVAs in the virtual hub. These connections aren't supported during Preview. You can still apply route maps to other supported connections when an NVA in the virtual hub is deployed. This doesn't apply to the Azure Firewall, as the routing for Azure Firewall is provided through Virtual WAN [routing intent features](how-to-routing-policies.md).
+* You can't apply Route-maps to connections between on-premises and SD-WAN/Firewall NVAs in the virtual hub. These connections aren't supported during Preview. You can still apply route-maps to other supported connections when an NVA in the virtual hub is deployed. This doesn't apply to the Azure Firewall, as the routing for Azure Firewall is provided through Virtual WAN [routing intent features](how-to-routing-policies.md).
* Route-maps supports only 2-byte ASN numbers.
-* Recommended best practices:
- * Configure rules to only match the routes intended to avoid unintended traffic flows.
- * The Route-maps feature contains some implicit functions, such as when no match conditions or actions are defined in a rule. Review the rules for each section.
- * A prefix can either be modified by route maps, or can be modified by NAT, but not both.
- * Route maps won't be applied to the [hub address space](virtual-wan-site-to-site-portal.md#hub).
-* The point-to-site Multipool feature isn't currently supported with Route-maps.
+* The point-to-site (P2S) Multipool feature isn't currently supported with Route-maps.
+* Modifying the *Default* route is only supported when the default route is learned from on-premises or an NVA.
+* A prefix can be modified either by Route-maps, or by NAT, but not both.
+* Route-maps won't be applied to the [hub address space](virtual-wan-site-to-site-portal.md#hub).
## Configuration workflow
-This section outlines the basic workflow for Route-maps. You can [configure route maps](route-maps-how-to.md) using the Azure portal.
+You can configure Route-maps using the Azure portal. For configuration workflow and comprehensive steps, see [How to configure Route-maps](route-maps-how-to.md).
-1. Contact preview-route-maps@microsoft.com for access to the preview.
-1. Create a virtual WAN.
-1. Create all Virtual WAN virtual hubs needed for testing.
-1. Deploy any site-to-site VPN, point-to-site VPN, ExpressRoute gateways, and NVAs needed for testing.
-1. Verify that incoming and outgoing routes are working as expected.
-1. [Configure a route map and route map rules](route-maps-how-to.md), then save.
-1. Once a route map is configured, the virtual hub router and gateways begin an upgrade needed to support the Route-maps feature.
+## What are route-map rules?
- * The upgrade process takes around 30 minutes.
- * The upgrade process only happens the first time a route map is created on a hub.
- * If the route map is deleted, the virtual hub router remains on the new version of software.
- * Using Route-maps will incur an additional charge. For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/virtual-wan/) page.
-1. The process is complete when the Provisioning state is 'Succeeded'. Open a support case if the process failed.
-1. The route map can now be applied to connections (ExpressRoute, S2S VPN, P2S VPN, VNet).
-1. Once the route map has been applied in the correct direction, use the [Route-map dashboard](route-maps-dashboard.md) to verify that the route map is working as expected.
+A route-map is an ordered sequence of one or more **route-map rules** that are applied to routes that are received or sent by the virtual hub. Route-map rules consist of [match conditions](#match-conditions), and [actions](#actions).
-## Route map rules
-
-A route map is an ordered sequence of one or more rules that are applied to routes received or sent by the virtual hub. Each route map rule is composed of 3 sections: match conditions, actions to be performed, and applying the route map to connections.
-
-### Match conditions
+When you're configuring a route-map rule, you use the **Next step** setting to specify whether routes that match this rule will continue on to be processed by the subsequent rules in the route-map, or stop (terminate). After route-map rules are configured for the route-map, the route-map can be applied to connections.
-Route-maps allows you to match routes using Route-prefix, BGP community, and AS-Path. These are the set of conditions that a processed route must meet to be considered as a match for the rule.
-
-* A route map rule can have any number of match conditions.
-* If a route map is created without a match condition, all routes from the applied connection will be matched. For example, a site-to-site VPN connection has routes 10.2.1.0/24, 10.2.2.0/24 and 10.2.3.0/24 being advertised from Azure to a branch office. A route map without a match condition will match 10.2.1.0/24, 10.2.2.0/24 and 10.2.3.0/24.
-* If a route map has multiple match conditions, then a route must meet all the match conditions to be considered a match for the rule. The order of the match conditions isn't relevant. For example, a site-to-site VPN connection has routes 10.2.1.0/24 with an AS path of 65535 and a BGP community of 65535:100 being advertised from Azure to a branch office. If a route map is created to on the connection with a rule to match on prefix 10.2.1.0, with another rule to match on 65535. Both conditions need to be met to be considered a match.
-* Multiple rules are supported. If the first rule isn't matched, then the second rule is evaluated. Select "Terminate" in the "Next step" field to the end of the rule. When no rule is matched, the default is to allow, not to deny.
-
-### Actions
-
-The match conditions are used to select a set of routes. Once those routes are selected, they can be dropped or modified.
+Things to consider:
-* **Drop:** All the matched routes are dropped (i.e filtered-out) from the route advertisement. For example, a site-to-site VPN connection has routes 10.2.1.0/24, 10.2.2.0/24 and 10.2.3.0/24 being advertised from Azure to a branch office. A route map can be configured to drop 10.2.1.0/24, 10.2.2.0/24, resulting in only 10.2.3.0/24 being advertised from Azure to a branch office.
+* A route-map rule can have any number of route modifications configured. It's possible to have a route-map without any rules.
+* If a route-map has no actions configured in a rule, the routes are unaltered.
+* If a route-map has multiple modifications configured in a rule, all configured actions are applied on the route. The order of the actions isn't relevant.
+* If a route isn't matched by all the match conditions in a rule, the route isn't considered a match for the rule. The route is passed on to the next rule in the route-map, irrespective of the **Next step** setting.
+* Configure rules to only match the routes intended to avoid unintended traffic flows.
-* **Modify:** The possible route modifications are aggregating route-prefixes or modifying route BGP attributes. For example, a site-to-site VPN connection has routes 10.2.1.0/24 with an AS path of 65535 and a BGP community of 65535:100 being advertised from Azure to a branch office. A route map can be configured to add the AS path of [65535, 65005].
+### Match conditions
-After configuring a rule to drop or modify routes, it must be determined if the route map will continue to the next rule or stop. The "Next step" setting is used to determine if the route map will move to the next rule, or stop.
+Route-maps allows you to match routes using Route-prefix, BGP community, and AS-Path. **Match conditions** are the set of conditions that a processed route must meet to be considered as a match for the rule.
-Things to consider:
+* A route-map rule can have any number of match conditions.
+* If a route-map is created without a match condition, all routes from the applied connection will be matched.
-* A route map rule can have any number of route modifications configured. It's possible to have a route map without any rules.
-* If a route map has no actions configured in a rule, the routes are unaltered.
-* If a route map has multiple modifications configured in a rule, all configured actions are applied on the route. The order of the actions isn't relevant.
-* If a route isn't matched by all the match conditions in a rule, the route isn't considered a match for the rule. The route is passed on to the rule under the route map, irrespective of the **Next step** setting.
+ For example, a site-to-site VPN connection has routes 10.2.1.0/24, 10.2.2.0/24 and 10.2.3.0/24 being advertised from Azure to a branch office. A route-map without a match condition will match 10.2.1.0/24, 10.2.2.0/24 and 10.2.3.0/24.
+* If a route-map has multiple match conditions, then a route must meet all the match conditions to be considered a match for the rule. The order of the match conditions isn't relevant.
-## Applying route maps
+ For example, a site-to-site VPN connection has routes 10.2.1.0/24 with an AS Path of 65535 and a BGP community of 65535:100 being advertised from Azure to a branch office. If a route-map rule is created on the connection with a match condition to match on prefix 10.2.1.0, and another match condition for AS Path 65535, both conditions need to be met in order to be considered a match.
+* Multiple rules are supported. If the first rule isn't matched, then the second rule is evaluated. Select **Terminate** in the **Next step** field to end the list of rules in the route-map. When no rule is matched, the default is to allow, not to deny.
-On each connection, you can apply route maps for the inbound, outbound, or both inbound and outbound directions.
+### Actions
-When a route map is configured on a connection in the inbound direction, all the ingress route advertisements on that connection are processed by the route map before they're entered into the virtual hub routerΓÇÖs routing table, defaultRouteTable. Similarly, when a route map is configured on a connection in the outbound direction, all the egress route advertisements on that connection are processed by the route map before they're advertised by the virtual hub router across the connection.
+Match conditions are used to select a set of routes. Once those routes are selected, they can be dropped or modified. You can configure the following **Actions**:
-You can choose to apply same or different route maps in inbound and outbound directions, but only one route map can be applied in each direction.
+* **Drop:** All the matched routes are dropped (that is, filtered out) from the route advertisement. For example, a site-to-site VPN connection has routes 10.2.1.0/24, 10.2.2.0/24 and 10.2.3.0/24 being advertised from Azure to a branch office. A route-map can be configured to drop 10.2.1.0/24 and 10.2.2.0/24, resulting in only 10.2.3.0/24 being advertised from Azure to the branch office (see the sketch after this list).
-You can view the routes from connections where a route map has been applied by using the **Route-map** dashboard. For ExpressRoute connections, a route map can't be applied on MSEE devices.
+* **Modify:** The possible route modifications are aggregating route-prefixes or modifying route BGP attributes. For example, a site-to-site VPN connection has routes 10.2.1.0/24 with an AS Path of 65535 and a BGP community of 65535:100 being advertised from Azure to a branch office. A route-map can be configured to add the AS Path of [65535, 65005].
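As a rough illustration of the **Drop** example above (again a hypothetical sketch, not service code), filtering the matched prefixes out of an advertisement amounts to:

```python
advertised = ["10.2.1.0/24", "10.2.2.0/24", "10.2.3.0/24"]
dropped = {"10.2.1.0/24", "10.2.2.0/24"}  # routes matched by the rule's conditions

# Drop: matched routes are filtered out of the advertisement entirely.
remaining = [prefix for prefix in advertised if prefix not in dropped]
print(remaining)  # ['10.2.3.0/24']: only this route reaches the branch office
```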
-### Supported configurations for route map rules
+### Supported configurations for route-map rules
-The following section describes all the match conditions and actions supported for Preview.
+This section shows the match conditions and actions supported for the Route-maps Preview.
-**Match conditions**
+#### Match conditions
-|Property| Criterion| Value (example shown below)| Interpretation|
+|Property| Criterion| Value (example)| Interpretation|
|||||
|Route-prefix| equals| 10.1.0.0/16,10.2.0.0/16,10.3.0.0/16,10.4.0.0/16|Matches these 4 routes only. Specific prefixes under these routes won't be matched. |
|Route-prefix |contains| 10.1.0.0/16,10.2.0.0/16, 192.168.16.0/24, 192.168.17.0/24| Matches all the specified routes and all prefixes underneath. (Example 10.2.1.0/24 is underneath 10.2.0.0/16) |
|Community| equals |65001:100,65001:200 |Community property of the route must have both the communities. Order isn't relevant.|
-|Community |contains| 65001:100,65001:200|Community property of the route may have one or more of the specified communities. |
+|Community |contains| 65001:100,65001:200|Community property of the route can have one or more of the specified communities. |
|AS-Path |equals| 65001,65002,65003| AS-PATH of the routes must have ASNs listed in the specified order.|
-|AS-Path |contains| 65001,65002,65003| AS-PATH in the routes may contain one or more of the ASNs listed. Order isn't relevant.|
+|AS-Path |contains| 65001,65002,65003| AS-PATH in the routes can contain one or more of the ASNs listed. Order isn't relevant.|
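If it helps to reason about the *equals* versus *contains* criteria for Route-prefix, the distinction can be checked with Python's standard `ipaddress` module. This is only an illustration of the matching semantics described in the table, not how the virtual hub router implements it:

```python
import ipaddress

rule_prefixes = [ipaddress.ip_network(p) for p in ("10.1.0.0/16", "10.2.0.0/16")]
route = ipaddress.ip_network("10.2.1.0/24")

# "equals": the route must be exactly one of the listed prefixes.
equals_match = route in rule_prefixes
# "contains": the route may also be a more specific prefix underneath a listed one.
contains_match = any(route == p or route.subnet_of(p) for p in rule_prefixes)

print(equals_match, contains_match)  # False True: 10.2.1.0/24 sits underneath 10.2.0.0/16
```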
-**Route modifications**
+#### Route modifications
|Property| Action| Value |Interpretation|
|||||
|Route-prefix| Add |10.3.0.0/8,10.4.0.0/8 |The routes specified in the rules are added. |
|Route-prefix | Replace| 10.0.0.0/8,192.168.0.0/16|Replace all the matched routes with the routes specified in the rule. |
-|As-Path | Add | 64580,64581 |Prepend AS-PATH with the list of ASNs specified in the rule. These ASNs will be applied in the same order for the matched routes. |
+|As-Path | Add | 64580,64581 |Prepend AS-PATH with the list of ASNs specified in the rule. These ASNs are applied in the same order for the matched routes. |
|As-Path | Replace | 65004,65005 |AS-PATH will be set to this list in the same order, for every matched route. See key considerations for reserved AS numbers. |
|As-Path | Replace | No value specified |Remove all ASNs in the AS-PATH in the matched routes. |
|Community | Add |64580:300,64581:300 |Add all the listed communities to all the matched routes' Community attribute. |
The following section describes all the match conditions and actions supported f
|Community | Replace | No value specified |Remove Community attribute from all the matched routes. |
|Community | Remove| 65001:100,65001:200|Remove any of the listed communities that are present in the matched routes' Community attribute. |
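The AS-Path and Community modifications in the table behave like ordinary list edits. The following minimal sketch is illustrative only; the function names and data shapes are hypothetical:

```python
def modify_as_path(as_path, action, values):
    """Add prepends the listed ASNs in order; Replace overwrites the path;
    Replace with no value removes all ASNs."""
    if action == "Add":
        return list(values) + as_path
    if action == "Replace":
        return list(values)          # an empty value list clears the AS path
    return as_path

def modify_communities(communities, action, values):
    if action == "Add":
        return communities + [v for v in values if v not in communities]
    if action == "Replace":
        return list(values)          # an empty value list removes the attribute
    if action == "Remove":
        return [c for c in communities if c not in values]
    return communities

print(modify_as_path([65535], "Add", [64580, 64581]))                            # [64580, 64581, 65535]
print(modify_communities(["65535:100"], "Remove", ["65535:100", "65001:200"]))   # []
```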
-## Troubleshooting
+## Apply route-maps to connections
+
+You can apply route-maps on each connection for the inbound direction, the outbound direction, or both. You can choose to apply the same or different route-maps in the inbound and outbound directions, but only one route-map can be applied in each direction. For ExpressRoute connections, a route-map can't be applied on MSEE devices.
+
+* **Inbound direction:** When a route-map is configured on a connection in the inbound direction, all the ingress route advertisements on that connection are processed by the route-map before they're entered into the virtual hub router's routing table, *defaultRouteTable*.
+
+* **Outbound direction:** When a route-map is configured on a connection in the outbound direction, the route-map processes all the egress route advertisements on that connection before they're advertised by the virtual hub router across the connection.
+
+For steps to apply route-maps to connections, see [How to configure Route-maps](route-maps-how-to.md).
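Conceptually, each connection carries at most one route-map per direction, and the two directions hook into different points of the hub's processing. The following rough Python model is for illustration only; the `Connection` class and helper functions are hypothetical:

```python
class Connection:
    def __init__(self, name, inbound_route_map=None, outbound_route_map=None):
        self.name = name
        self.inbound_route_map = inbound_route_map    # at most one route-map per direction
        self.outbound_route_map = outbound_route_map

def apply_route_map(route_map, routes):
    # Placeholder for rule evaluation (see the earlier sketches).
    return route_map(routes) if route_map else routes

def learn_routes(connection, advertised_routes, default_route_table):
    # Inbound: ingress advertisements are processed before they enter defaultRouteTable.
    default_route_table.extend(apply_route_map(connection.inbound_route_map, advertised_routes))

def advertise_routes(connection, default_route_table):
    # Outbound: egress advertisements are processed before the hub router sends them out.
    return apply_route_map(connection.outbound_route_map, default_route_table)
```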
+
+## Monitor using the Route Map dashboard
+
+When a route-map is applied to a connection, you can use the Route Map dashboard to monitor and view:
-The following section describes common issues encountered when you configure Route-maps on your Virtual WAN hub. Read this section and, if your issue is still unresolved, please open a support case.
+* Routes
+* AS Path
+* BGP communities
+For more information and steps, see [Monitor Route-maps using the Route Map dashboard](route-maps-dashboard.md).
## Next steps
-* [Configure Route-maps](route-maps-how-to.md).
-* Use the [Route-maps dashboard](route-maps-dashboard.md) to monitor routes, AS Path, and BGP communities.
+* To configure Route-maps, see [How to configure Route-maps](route-maps-how-to.md).
+* To monitor routes, AS Path, and BGP communities, see the [Route Map dashboard](route-maps-dashboard.md).
virtual-wan Route Maps Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/route-maps-dashboard.md
description: Learn how to use the Route Map dashboard to monitor routes, AS Path
Previously updated : 05/31/2023 Last updated : 03/04/2024
This article helps you use the Route Map dashboard to monitor Route-maps. Using the Route Map dashboard, you can monitor routes, AS Path, and BGP communities for routes in your Virtual WAN.
+> [!IMPORTANT]
+> Route-maps is currently in Public Preview and is provided without a service-level agreement. At this time, **Route-maps shouldn't be used for production workloads**. Certain features might not be supported, might have constrained capabilities, or might not be available in all Azure locations. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
## Dashboard view

The following steps walk you through how to navigate to the Route Map dashboard.

1. Go to the **Azure portal -> your Virtual WAN**.
After you open the Route Map dashboard, you can view the details of your connect
1. From the drop-down, select the **Connection type** that you want to view. The connection types are VPN (Site-to-Site and Point-to-Site), ExpressRoute, and Virtual Network.
1. From the drop-down, select the **Connection** you want to view.
-1. Select the direction from the two options: **In the inbound direction.** or **In the outbound direction.**You can view inbound from a VNet or inbound to the virtual hub router from ExpressRoute, Branch or User connections. You can also view routes outbound from a VNet or outbound from the virtual hub router to ExpressRoute, Branch or User connections.
+1. Select the direction from the two options: **In the inbound direction.** or **In the outbound direction.** You can view inbound from a VNet or inbound to the virtual hub router from ExpressRoute, Branch or User connections. You can also view routes outbound from a VNet or outbound from the virtual hub router to ExpressRoute, Branch or User connections.
1. On the Route Map dashboard, the following values are available:

   |Value | Description|
In this example, you can use the Route Map Dashboard to view the routes on **Con
1. Go to the **Route Map** Dashboard.
1. From the **Choose connection type** drop-down, select "VPN".
1. From the **Connection** drop-down, select the connection. In this example, the connection is "Connection 1".
-1. For **View routs for Route-maps applied**, select **In the outbound direction**.
+1. For **View routes for Route-maps applied**, select **In the outbound direction**.
## Next steps

* [Configure Route Maps](route-maps-how-to.md)
-* [About Route Maps](route-maps-about.md)
+* [About Route Maps](route-maps-about.md)
virtual-wan Route Maps How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/route-maps-how-to.md
description: Learn how to configure Route-maps for Virtual WAN virtual hubs.
Previously updated : 01/16/2024 Last updated : 03/04/2024

# How to configure Route-maps (Preview)
-This article helps you create or edit a route map in an Azure Virtual WAN hub using the Azure portal. For more information about Virtual WAN Route-maps, see [About Route-maps](route-maps-about.md).
-
+This article helps you create or edit a route-map in an Azure Virtual WAN hub using the Azure portal. For more information about Virtual WAN Route-maps, see [About Route-maps](route-maps-about.md).
## Prerequisites
+> [!Important]
+> [!INCLUDE [Preview text](../../includes/virtual-wan-route-maps-preview.md)]
+ Verify that you've met the following criteria before beginning your configuration:
-* You have virtual WAN with a connection (S2S, P2S, or ExpressRoute) already configured.
- * For steps to create a VWAN with a S2S connection, see [Tutorial - Create a S2S connection with Virtual WAN](virtual-wan-site-to-site-portal.md).
- * For steps to create a virtual WAN with a P2S User VPN connection, see [Tutorial - Create a User VPN P2S connection with Virtual WAN](virtual-wan-point-to-site-portal.md).
+* You have virtual WAN (VWAN) with a connection (S2S, P2S, or ExpressRoute) already configured.
+ * For steps to create a VWAN with a S2S connection, see [Tutorial - Create a S2S connection with Virtual WAN](virtual-wan-site-to-site-portal.md).
+ * For steps to create a virtual WAN with a P2S User VPN connection, see [Tutorial - Create a User VPN P2S connection with Virtual WAN](virtual-wan-point-to-site-portal.md).
+* Be sure to view [About Route-maps](route-maps-about.md#considerations-and-limitations) for considerations and limitations before proceeding with configuration steps.
+
+## Configuration workflow
+
+1. Create a virtual WAN.
+1. Create all Virtual WAN virtual hubs needed for testing.
+1. Deploy any site-to-site VPN, point-to-site VPN, ExpressRoute gateways, and NVAs needed for testing.
+1. Verify that incoming and outgoing routes are working as expected.
+1. Configure a route-map and route-map rules, then save. For more information about route-map rules, see [About Route-maps](route-maps-about.md).
+1. Once a route-map is configured, the virtual hub router and gateways begin an upgrade needed to support the Route-maps feature.
+
+ * The upgrade process takes around 30 minutes.
+ * The upgrade process only happens the first time a route-map is created on a hub.
+ * If the route-map is deleted, the virtual hub router remains on the new version of software.
+ * Using Route-maps incurs an additional charge. For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/virtual-wan/) page.
+1. The process is complete when the **Provisioning state** is **Succeeded**. Open a support case if the process fails.
+1. The route-map can now be applied to connections (ER, S2S VPN, P2S VPN, VNet).
+1. Once the route-map is applied in the correct direction, use the [Route-map dashboard](route-maps-dashboard.md) to verify that the route-map is working as expected.
-## Create a route map
+## Create a route-map
-The following steps walk you through how to configure a route map.
+> [!NOTE]
+> [!INCLUDE [Preview text](../../includes/virtual-wan-route-maps-preview.md)]
+>
+
+The following steps walk you through how to configure a route-map.
1. In the Azure portal, go to your Virtual WAN resource. Select **Hubs** to view the list of hubs.

   :::image type="content" source="./media/route-maps-how-to/hub.png" alt-text="Screenshot shows selecting the hub you want to configure." lightbox="./media/route-maps-how-to/hub.png":::
1. Select the hub that you want to configure to open the **Virtual Hub** page.
-1. On the Virtual Hub page, in the Routing section, select **Route-maps** to open the Route-maps page. On the Route-maps page, select **+ Add Route-map** to create a new route map.
+1. On the Virtual Hub page, in the Routing section, select **Route-maps** to open the Route-maps page. On the Route-maps page, select **+ Add Route-map** to create a new route-map.
:::image type="content" source="./media/route-maps-how-to/route-maps.png" alt-text="Screenshot shows Add Route-map selected." lightbox="./media/route-maps-how-to/route-maps.png":::
-1. On the **Create Route-map** page, provide a Name for the route map.
-1. Then, select **+ Add Route-map** to create rules in the route map.
+1. On the **Create Route-map** page, provide a Name for the route-map.
+1. Then, select **+ Add Route-map** to create rules in the route-map.
   :::image type="content" source="./media/route-maps-how-to/add.png" alt-text="Screenshot shows add route-map." lightbox="./media/route-maps-how-to/add.png":::
1. On the **Create Route-map rule** page, complete the necessary configuration. (A conceptual sketch of a complete rule follows this procedure.)
- * Name ΓÇô Provide a name for the route map rule.
- * Next step ΓÇô Choose "Continue" if routes matching this rule must be processed by subsequent rules in the route map. Else choose "Terminate".
- * Match conditions ΓÇô Each Match Condition requires a Property, Criterion and Value. There can be 0 or more match conditions. To add a new match condition, select the empty row in the table. To delete a row, select delete icon at the end of the row. To add multiple values under Value, use comma (,) as the delimiter. Refer to [About Route-maps](route-maps-about.md) for list of supported match conditions.
- * Actions > Action on match routes ΓÇô Choose "Drop" to deny the matched routes, or "Modify" to permit and modify the matched routes.
- * Actions > Route modifications - Each Action statement requires a Property, Action and Value. There can be 0 or more route modification statements. To add a new statement, select the empty row in the table. To delete a row, select delete icon at the end of the row. To add multiple values under Value, use comma (,) as the delimiter. Refer to [About Route-maps](route-maps-about.md) for list of supported actions.
+    * **Name** – Provide a name for the route-map rule.
+    * **Next step** – From the dropdown, select **Continue** if routes matching this rule must be processed by subsequent rules in the route-map. If not, select **Terminate**.
+    * **Match conditions** – Each **Match Condition** requires a Property, a Criterion, and a Value. There can be 0 or more match conditions.
+      * To add a new match condition, select the empty row in the table.
+      * To delete a row, select the delete icon at the end of the row.
+      * To add multiple values under Value, use a comma (,) as the delimiter. Refer to [About Route-maps](route-maps-about.md) for a list of supported match conditions.
+    * **Actions > Action on match routes** – Select **Drop** to deny the matched routes, or **Modify** to permit and modify the matched routes.
+    * **Actions > Route modifications** – Each **Action** statement requires a Property, an Action, and a Value. There can be 0 or more route modification statements.
+      * To add a new statement, select the empty row in the table.
+      * To delete a row, select the delete icon at the end of the row.
+      * To add multiple values under Value, use a comma (,) as the delimiter. Refer to [About Route-maps](route-maps-about.md) for a list of supported actions.
:::image type="content" source="./media/route-maps-how-to/rule.png" alt-text="Screenshot shows Create Route-map rule page." lightbox="./media/route-maps-how-to/rule.png":::
-1. Select **Add** to complete rule configuration. Clicking "Add" stores the rule temporarily on the Azure portal, but isn't saved to the route map yet. Select "Okay" on the Reminder dialog box to acknowledge that the rule isn't completely saved yet and proceed to the next-step.
+1. Select **Add** to complete the rule configuration. Selecting **Add** stores the rule temporarily in the Azure portal, but doesn't save it to the route-map yet. Select **Okay** on the **Reminder** dialog box to acknowledge that the rule isn't completely saved yet, then proceed to the next step.
+
+1. Repeat steps 6 and 7 to add additional rules, if necessary.
-1. Repeat steps 6 - 8 to add additional rules as required. On the **Create Route-map** page, after all the rules are added, ensure that the order of the rules is as desired. To adjust the order, follow the instructions in the following screenshot. Then, select **Save** to save all the rules to the route map.
+1. On the **Create Route-map** page, after all the rules are added, ensure that the rules are in the desired order. To adjust the order, hover over a row, then click and hold the three dots and drag the row up or down. When you finish adjusting the rule order, select **Save** to save all the rules to the route-map.
:::image type="content" source="./media/route-maps-how-to/adjust-order.png" alt-text="Screenshot shows how to adjust the order of rules." lightbox="./media/route-maps-how-to/adjust-order.png":::
-1. It takes a few minutes to save the route map and the route map rules. Once saved, the **Provisioning state** shows **Succeeded**.
+1. It takes a few minutes to save the route-map and the route-map rules. Once saved, the **Provisioning state** shows **Succeeded**.
:::image type="content" source="./media/route-maps-how-to/provisioning.png" alt-text="Screenshot shows Provisioning state is Succeeded." lightbox="./media/route-maps-how-to/provisioning.png":::
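The rule you assemble in steps 6 and 7 can be pictured as a small data structure that mirrors the portal fields. This representation is purely illustrative; the portal and the REST API use their own schemas and property names:

```python
# Hypothetical shape of one route-map rule, mirroring the portal fields described above.
route_map_rule = {
    "name": "rule1",
    "nextStep": "Terminate",                       # or "Continue"
    "matchConditions": [
        # Each condition: Property, Criterion, Value (comma-delimited values in the portal).
        {"property": "Route-prefix", "criterion": "contains", "value": ["10.2.0.0/16"]},
        {"property": "AS-Path",      "criterion": "contains", "value": ["65535"]},
    ],
    "actions": {
        "actionOnMatch": "Modify",                 # or "Drop"
        "routeModifications": [
            {"property": "Community", "action": "Add", "value": ["64580:300"]},
        ],
    },
}
```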
-## Apply a route map to connections
+## Apply a route-map to connections
-Once the route map is saved, you can apply the route map to the desired connections in the virtual hub.
+Once the route-map is saved, you can apply the route-map to the desired connections in the virtual hub.
1. On the **Route-maps** page, select **Apply Route-maps to connections**.

   :::image type="content" source="./media/route-maps-how-to/apply-to-connections.png" alt-text="Screenshot shows Apply Route-maps to connections." lightbox="./media/route-maps-how-to/apply-to-connections.png":::
-1. On the **Apply Route-maps to connections** page, configure the following settings. When you have finished configuring these settings, select **Save**.
+1. On the **Apply Route-maps to connections** page, configure the following settings.
+
+ * Select the drop-down box under **Inbound Route-map** and select the route-map you want to apply in the ingress direction.
+ * Select the drop-down box under **Outbound Route-map** and select the route-map you want to apply in the egress direction.
+ * The table at the bottom lists all the connections to the virtual hub. Select one or more connections you want to apply the route-maps to.
:::image type="content" source="./media/route-maps-how-to/save.png" alt-text="Screenshot shows configuring and saving settings." lightbox="./media/route-maps-how-to/save.png":::
- * Select the drop-down box under **Inbound Route-map** and select the route map you want to apply in the ingress direction.
- * Select the drop-down box under **Outbound Route-map** and select the route map you want to apply in the egress direction.
- * The table at the bottom lists all the connections to the virtual hub. Select one or more connections you want to apply the route maps to.
+1. When you finish configuring these settings, select **Save**.
-1. Verify the changes by opening **Apply Route-maps to connections** again from the Route-maps page.
+1. Verify the changes by opening **Apply Route-maps to connections** again from the **Route-maps** page.
:::image type="content" source="./media/route-maps-how-to/verify.png" alt-text="Screenshot shows Apply Route-maps to connections page to verify changes." lightbox="./media/route-maps-how-to/verify.png":::
-## Modify or remove existing route map or route map rules
+1. Once the route-map is applied in the correct direction, use the [Route-map dashboard](route-maps-dashboard.md) to verify that the route-map is working as expected.
+
+## Modify or remove a route-map or route-map rules
1. To modify or remove an existing Route-map, go to the **Route-maps** page.
-1. On the line for the route map that you want to work with, select **… > Edit** or **… > Delete**, respectively.
+1. On the line for the route-map that you want to work with, select **… > Edit** or **… > Delete**, respectively.
- :::image type="content" source="./media/route-maps-how-to/edit.png" alt-text="Screenshot shows how to modify or remove a route map or rules." lightbox="./media/route-maps-how-to/edit.png":::
+ :::image type="content" source="./media/route-maps-how-to/edit.png" alt-text="Screenshot shows how to modify or remove a route-map or rules." lightbox="./media/route-maps-how-to/edit.png":::
-## Modify or remove a route map from a connection
+## Modify or remove a route-map from a connection
To modify or remove an existing route-map from a connection, use the following steps.
virtual-wan Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md
description: Learn what's new with Azure Virtual WAN such as the latest release
Previously updated : 05/30/2023 Last updated : 03/04/2024
You can also find the latest Azure Virtual WAN updates and subscribe to the RSS
| Type |Area |Name |Description | Date added | Limitations |
| ||||||
| Feature| Routing | [Routing intent](how-to-routing-policies.md)| Routing intent is the mechanism through which you can configure Virtual WAN to send private or internet traffic via a security solution deployed in the hub.|May 2023|Routing Intent is Generally Available in Azure public cloud. See documentation for [additional limitations](how-to-routing-policies.md#knownlimitations).|
-|Feature| Routing |[Virtual hub routing preference](about-virtual-hub-routing-preference.md)|Hub routing preference gives you more control over your infrastructure by allowing you to select how your traffic is routed when a virtual hub router learns multiple routes across S2S VPN, ER and SD-WAN NVA connections. |October 2022| |
+|Feature| Routing |[Virtual hub routing preference](about-virtual-hub-routing-preference.md)|Hub routing preference gives you more control over your infrastructure by allowing you to select how your traffic is routed when a virtual hub router learns multiple routes across S2S VPN, ER, and SD-WAN NVA connections. |October 2022| |
|Feature| Routing|[Bypass next hop IP for workloads within a spoke VNet connected to the virtual WAN hub generally available](how-to-virtual-hub-routing.md)|Bypassing next hop IP for workloads within a spoke VNet connected to the virtual WAN hub lets you deploy and access other resources in the VNet with your NVA without any additional configuration.|October 2022| |
|SKU/Feature/Validation | Routing | [BGP end point (General availability)](scenario-bgp-peering-hub.md) | The virtual hub router now exposes the ability to peer with it, thereby exchanging routing information directly through Border Gateway Protocol (BGP) routing protocol. | June 2022 | |
|Feature|Routing|[0.0.0.0/0 via NVA in the spoke](scenario-route-through-nvas-custom.md)|Ability to send internet traffic to an NVA in spoke for egress.|March 2021| 0.0.0.0/0 doesn't propagate across hubs.<br><br>Can't specify multiple public prefixes with different next hop IP addresses.|
You can also find the latest Azure Virtual WAN updates and subscribe to the RSS
| Type |Area |Name |Description | Date added | Limitations |
| ||||||
-| Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs| Public Preview of Internet inbound/DNAT for Next-Generation Firewall NVA's| Destination NAT for Network Virtual Appliances in the Virtual WAN hub allows you to publish applications to the users in the internet without directly exposing the application or server's public IP. Consumers access applications through a public IP address assigned to a Firewall Network Virtual Appliance. |February 2024| Supported for Fortinet Next-Generation Firewall, Check Point CloudGuard. See [DNAT documentation](how-to-network-virtual-appliance-inbound.md) for the full list of limitations and considerations.|
+| Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs| Public Preview of Internet inbound/DNAT for Next-Generation Firewall NVAs| Destination NAT for Network Virtual Appliances in the Virtual WAN hub allows you to publish applications to the users in the internet without directly exposing the application or server's public IP. Consumers access applications through a public IP address assigned to a Firewall Network Virtual Appliance. |February 2024| Supported for Fortinet Next-Generation Firewall, Check Point CloudGuard. See [DNAT documentation](how-to-network-virtual-appliance-inbound.md) for the full list of limitations and considerations.|
|Feature|Software-as-a-service|Palo Alto Networks Cloud NGFW|General Availability of [Palo Alto Networks Cloud NGFW](https://aka.ms/pancloudngfwdocs), the first software-as-a-service security offering deployable within the Virtual WAN hub.|July 2023|Palo Alto Networks Cloud NGFW is now deployable in all Virtual WAN hubs (new and old). See [Limitations of Palo Alto Networks Cloud NGFW](how-to-palo-alto-cloud-ngfw.md) for a full list of limitations and regional availability. Same limitations as routing intent.|
|Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Fortinet NGFW](https://www.fortinet.com/products/next-generation-firewall)|General Availability of [Fortinet NGFW](https://aka.ms/fortinetngfwdocumentation) and [Fortinet SD-WAN/NGFW dual-role](https://aka.ms/fortinetdualroledocumentation) NVAs.|May 2023| Same limitations as routing intent. Doesn't support internet inbound scenario.|
|Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Check Point CloudGuard Network Security for Azure Virtual WAN](https://www.checkpoint.com/cloudguard/microsoft-azure-security/wan/) |General Availability of Check Point CloudGuard Network Security NVA deployable from Azure Marketplace within the Virtual WAN hub in all Azure regions.|May 2023|Same limitations as routing intent. Doesn't support internet inbound scenario.|
The following features are currently in gated public preview. After working with
|Type of preview|Feature |Description|Contact alias|Limitations|
||||||
-| Managed preview | Route-maps | This feature allows you to perform route aggregation, route filtering, and modify BGP attributes for your routes in Virtual WAN. | preview-route-maps@microsoft.com | Known limitations are displayed here: [About Route-maps (preview)](route-maps-about.md#key-considerations).
+| Managed preview | Route-maps | This feature allows you to perform route aggregation, route filtering, and modify BGP attributes for your routes in Virtual WAN. | preview-route-maps@microsoft.com | Known limitations are displayed here: [About Route-maps](route-maps-about.md).
|Managed preview|Aruba EdgeConnect SD-WAN| Deployment of Aruba EdgeConnect SD-WAN NVA into the Virtual WAN hub| preview-vwan-aruba@microsoft.com| |

## Known issues
The following features are currently in gated public preview. After working with
||||||
|1|ExpressRoute connectivity with Azure Storage and the 0.0.0.0/0 route|If you have configured a 0.0.0.0/0 route statically in a virtual hub route table or dynamically via a network virtual appliance for traffic inspection, that traffic will bypass inspection when destined for Azure Storage and is in the same region as the ExpressRoute gateway in the virtual hub. | | As a workaround, you can either use [Private Link](../private-link/private-link-overview.md) to access Azure Storage or put the Azure Storage service in a different region than the virtual hub.|
|2| Default routes (0/0) won't propagate inter-hub |0/0 routes won't propagate between two virtual WAN hubs. | June 2020 | None. Note: While the Virtual WAN team has fixed the issue, wherein static routes defined in the static route section of the VNet peering page propagate to route tables listed in "propagate to route tables" or the labels listed in "propagate to route tables" on the VNet connection page, default routes (0/0) won't propagate inter-hub. |
-|3| Two ExpressRoute circuits in the same peering location connected to multiple hubs |If you have two ExpressRoute circuits in the same peering location, and both of these circuits are connected to multiple virtual hubs in the same Virtual WAN, then connectivity to your Azure resources may be impacted. | July 2023 | Make sure each virtual hub has at least 1 virtual network connected to it. This will ensure connectivity to your Azure resources. The Virtual WAN team is also working on a fix for this issue. |
+|3| Two ExpressRoute circuits in the same peering location connected to multiple hubs |If you have two ExpressRoute circuits in the same peering location, and both of these circuits are connected to multiple virtual hubs in the same Virtual WAN, then connectivity to your Azure resources might be impacted. | July 2023 | Make sure each virtual hub has at least 1 virtual network connected to it. This ensures connectivity to your Azure resources. The Virtual WAN team is also working on a fix for this issue. |
## Next steps