Updates from: 01/10/2024 02:09:30
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/authentication.md
Now that you have a custom subdomain associated with your resource, you're going
New-AzADServicePrincipal -ApplicationId <APPLICATION_ID> ```
- >[!NOTE]
+ > [!NOTE]
> If you register an application in the Azure portal, this step is completed for you. 3. The last step is to [assign the "Cognitive Services User" role](/powershell/module/az.Resources/New-azRoleAssignment) to the service principal (scoped to the resource). By assigning a role, you're granting the service principal access to this resource. You can grant the same service principal access to multiple resources in your subscription.
- >[!NOTE]
+
+ > [!NOTE]
> The ObjectId of the service principal is used, not the ObjectId for the application. > The ACCOUNT_ID is the Azure resource ID of the Azure AI services account you created. You can find the Azure resource ID under the resource's **Properties** in the Azure portal.
In this sample, a password is used to authenticate the service principal. The to
``` 2. Get a token:
- > [!NOTE]
- > If you're using Azure Cloud Shell, the `SecureClientSecret` class isn't available.
-
- #### [PowerShell](#tab/powershell)
```powershell-interactive
- $authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList "https://login.windows.net/<TENANT_ID>"
- $secureSecretObject = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.SecureClientSecret" -ArgumentList $SecureStringPassword
- $clientCredential = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential" -ArgumentList $app.ApplicationId, $secureSecretObject
- $token=$authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", $clientCredential).Result
- $token
- ```
+ $tenantId = $context.Tenant.Id
+ $clientId = $app.ApplicationId
+ $clientSecret = "<YOUR_PASSWORD>"
+ $resourceUrl = "https://cognitiveservices.azure.com/"
- #### [Azure Cloud Shell](#tab/azure-cloud-shell)
- ```Azure Cloud Shell
- $authContext = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext" -ArgumentList "https://login.windows.net/<TENANT_ID>"
- $clientCredential = New-Object "Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential" -ArgumentList $app.ApplicationId, <YOUR_PASSWORD>
- $token=$authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", $clientCredential).Result
- $token
- ```
-
-
+ $tokenEndpoint = "https://login.microsoftonline.com/$tenantId/oauth2/token"
+ $body = @{
+ grant_type = "client_credentials"
+ client_id = $clientId
+ client_secret = $clientSecret
+ resource = $resourceUrl
+ }
+
+ $responseToken = Invoke-RestMethod -Uri $tokenEndpoint -Method Post -Body $body
+ $accessToken = $responseToken.access_token
+ ```
+ > [!NOTE]
+ > Anytime you use passwords in a script, the most secure option is to use the PowerShell SecretManagement module and integrate with a solution such as Azure Key Vault.
+
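For reference, the same client-credentials token request can be made from Python with the `azure-identity` package. The following is a minimal sketch, not part of the original walkthrough; the tenant, application, and secret values are placeholders that correspond to the values created in the steps above.

```python
# pip install azure-identity
from azure.identity import ClientSecretCredential

# Placeholder values: reuse the tenant ID, application (client) ID, and password from the steps above.
credential = ClientSecretCredential(
    tenant_id="<TENANT_ID>",
    client_id="<APPLICATION_ID>",
    client_secret="<YOUR_PASSWORD>",
)

# Request a token scoped to Azure AI services; pass token.token as "Authorization: Bearer <token>".
token = credential.get_token("https://cognitiveservices.azure.com/.default")
```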
3. Call the Computer Vision API: ```powershell-interactive $url = $account.Endpoint+"vision/v1.0/models"
- $result = Invoke-RestMethod -Uri $url -Method Get -Headers @{"Authorization"=$token.CreateAuthorizationHeader()} -Verbose
+ $result = Invoke-RestMethod -Uri $url -Method Get -Headers @{"Authorization"="Bearer $accessToken"} -Verbose
$result | ConvertTo-Json ```
ai-services Identity Api Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/identity-api-reference.md
Azure AI Face is a cloud-based service that provides algorithms for face detecti
- [LargeFaceList APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/5a157b68d2de3616c086f2cc): Used to manage a LargeFaceList for [Find Similar](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395237). - [PersonGroup Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f3039523c): Used to manage PersonGroup Person Faces for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239). - [PersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395244): Used to manage a PersonGroup dataset for [Identification](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395239).
+- [PersonDirectory Person APIs](https://westus.dev.cognitive.microsoft.com/docs/services/face-v1-0-preview/operations/5f06637aad1c4fba7238de25)
+- [PersonDirectory DynamicPersonGroup APIs](https://westus.dev.cognitive.microsoft.com/docs/services/face-v1-0-preview/operations/5f066b475d2e298611e11115)
- [Snapshot APIs](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/snapshot-take): Used to manage a Snapshot for data migration across subscriptions.
ai-services Concept Retrieval Augumented Generation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-retrieval-augumented-generation.md
If you're looking for a specific section in a document, you can use semantic chu
```python
-# Using SDK targeting 2023-10-31-preview
+# Using SDK targeting 2023-10-31-preview, make sure your resource is in one of these regions: East US, West US2, West Europe
# pip install azure-ai-documentintelligence==1.0.0b1 # pip install langchain langchain-community azure-ai-documentintelligence
splits
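The snippet above is truncated in this digest. A fuller sketch of how `splits` might be produced with the Layout model and markdown header splitting follows; the endpoint, key, and file path are placeholders, and exact loader arguments can vary by package version.

```python
# pip install langchain langchain-community azure-ai-documentintelligence
from langchain_community.document_loaders import AzureAIDocumentIntelligenceLoader
from langchain.text_splitter import MarkdownHeaderTextSplitter

# Load the document as markdown using the Layout model (placeholder endpoint/key/path).
loader = AzureAIDocumentIntelligenceLoader(
    api_endpoint="<DOCUMENT_INTELLIGENCE_ENDPOINT>",
    api_key="<DOCUMENT_INTELLIGENCE_KEY>",
    file_path="<PATH_TO_DOCUMENT>",
    api_model="prebuilt-layout",
)
docs = loader.load()

# Split on markdown headings so each chunk maps to a section of the document.
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2"), ("###", "Header 3")]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
splits = splitter.split_text(docs[0].page_content)
```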
* [Learn how to process your own forms and documents](quickstarts/try-document-intelligence-studio.md) with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio).
-* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
+* Complete a [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true) and get started creating a document processing app in the development language of your choice.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
- ignite-2023 Previously updated : 11/22/2023 Last updated : 01/09/2024 monikerRange: '<=doc-intel-4.0.0'
monikerRange: '<=doc-intel-4.0.0'
> * As of July 2023, Azure AI services encompass all of what were previously known as Cognitive Services and Azure Applied AI Services. > * There are no changes to pricing. > * The names *Cognitive Services* and *Azure Applied AI* continue to be used in Azure billing, cost analysis, price list, and price APIs.
-> * There are no breaking changes to application programming interfaces (APIs) or SDKs.
+> * There are no breaking changes to application programming interfaces (APIs) or SDKs prior to and including v3.1. Starting from v4.0, APIs and SDKs are updated to Document Intelligence.
> * Some platforms are still awaiting the renaming update. All mention of Form Recognizer or Document Intelligence in our documentation refers to the same Azure service. Azure AI Document Intelligence is a cloud-based [Azure AI service](../../ai-services/index.yml) that enables you to build intelligent document processing solutions. Massive amounts of data, spanning a wide variety of data types, are stored in forms and documents. Document Intelligence enables you to effectively manage the velocity at which data is collected and processed and is key to improved operations, informed data-driven decisions, and enlightened innovation. </br></br>
Document analysis models enable text extraction from forms and documents and ret
:::column-end::: :::column span=""::: :::image type="icon" source="media/overview/icon-layout.png" link="#layout":::</br>
- [**Layout**](#layout) | Extract text </br>and document structure.
+ [**Layout**](#layout) | Extract text, tables, </br>and document structure.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-general-document.png" link="#general-document":::</br>
- [**General document**](#general-document) | Extract text, </br>structure, and key-value pairs.
+ :::image type="icon" source="media/overview/icon-general-document.png" link="#general-document-deprecated-in-2023-10-31-preview":::</br>
+ [**General document**](#general-document-deprecated-in-2023-10-31-preview) | Extract text, </br>structure, and key-value pairs.
:::column-end::: :::row-end::: :::moniker-end
Prebuilt models enable you to add intelligent document processing to your apps a
## Add-on capabilities
-Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for`2023-07-31 (GA)` and later releases:
+Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for `2023-07-31 (GA)` and later releases:
* [`ocr.highResolution`](concept-add-on-capabilities.md#high-resolution-extraction)
Document Intelligence supports optional features that can be enabled and disable
* [`ocr.barcode`](concept-add-on-capabilities.md#barcode-property-extraction)
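As a sketch of how these optional features are opted into per request, assuming the `azure-ai-formrecognizer` 3.3.x Python SDK (which targets the `2023-07-31 (GA)` API); the endpoint, key, and file path are placeholders:

```python
# pip install "azure-ai-formrecognizer>=3.3.0"
from azure.ai.formrecognizer import AnalysisFeature, DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

client = DocumentAnalysisClient("<ENDPOINT>", AzureKeyCredential("<KEY>"))

# Add-on capabilities are off by default; list only the ones the scenario needs.
with open("<PATH_TO_DOCUMENT>", "rb") as f:
    poller = client.begin_analyze_document(
        "prebuilt-layout",
        document=f,
        features=[AnalysisFeature.BARCODES, AnalysisFeature.OCR_HIGH_RESOLUTION],
    )
result = poller.result()
```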
-Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for`2023-10-31-preview` and later releases:
+Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for `2023-10-31-preview` and later releases:
* [`queryFields`](concept-add-on-capabilities.md#query-fields)
Document Intelligence supports optional features that can be enabled and disable
|Model ID|Content Extraction|Paragraphs|Paragraph Roles|Selection Marks|Tables|Key-Value Pairs|Languages|Barcodes|Document Analysis|Formulas*|Style Font*|High Resolution*|query fields| |:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-|:-| |prebuilt-read|✓|✓| | | | |O|O| |O|O|O| |
-|prebuilt-layout|✓|✓|✓|✓|✓| |O|O| |O|O|O|✓|
+|prebuilt-layout|✓|✓|✓|✓|✓|O|O|O| |O|O|O|✓|
|prebuilt-idDocument|✓| | | | | |O|O|✓|O|O|O|✓| |prebuilt-invoice|✓| | |✓|✓|O|O|O|✓|O|O|O|✓| |prebuilt-receipt|✓| | | | | |O|O|✓|O|O|O|✓|
You can use Document Intelligence to automate document processing in application
::: moniker range="doc-intel-3.1.0 || doc-intel-3.0.0"
-### General document
+### General document (deprecated in 2023-10-31-preview)
:::image type="content" source="media/overview/analyze-general-document.png" alt-text="Screenshot of General Document model analysis using Document Intelligence Studio."::: | Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-document**](concept-general-document.md)|&#9679; Extract **text,layout, and key-value pairs** from documents.</br>&#9679; [Data and field extraction](concept-general-document.md#data-extraction)|&#9679; Key-value pair extraction.</br>&#9679; Form processing.</br>&#9679; Survey data collection and analysis.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+|[**prebuilt-document**](concept-general-document.md)|&#9679; Extract **text, layout, and key-value pairs** from documents.</br>&#9679; [Data and field extraction](concept-general-document.md#data-extraction)|&#9679; Key-value pair extraction.</br>&#9679; Form processing.</br>&#9679; Survey data collection and analysis.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|
> [!div class="nextstepaction"] > [Return to model types](#document-analysis-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-invoice**](concept-invoice.md) |&#9679; Extract key information from invoices.</br>&#9679; [Data and field extraction](concept-invoice.md#field-extraction) |&#9679; Accounts payable processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)|
+|[**prebuilt-invoice**](concept-invoice.md) |&#9679; Extract key information from invoices.</br>&#9679; [Data and field extraction](concept-invoice.md#field-extraction) |&#9679; Accounts payable processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-receipt**](concept-receipt.md) |&#9679; Extract key information from receipts.</br>&#9679; [Data and field extraction](concept-receipt.md#field-extraction)</br>&#9679; Receipt model v3.0 supports processing of **single-page hotel receipts**.|&#9679; Expense management.</br>&#9679; Consumer behavior data analysis.</br>&#9679; Customer loyalty program.</br>&#9679; Merchandise return processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)|
+|[**prebuilt-receipt**](concept-receipt.md) |&#9679; Extract key information from receipts.</br>&#9679; [Data and field extraction](concept-receipt.md#field-extraction)</br>&#9679; Receipt model v3.0 supports processing of **single-page hotel receipts**.|&#9679; Expense management.</br>&#9679; Consumer behavior data analysis.</br>&#9679; Customer loyalty program.</br>&#9679; Merchandise return processing.</br>&#9679; Automated tax recording and reporting. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-idDocument**](concept-id-document.md) |&#9679; Extract key information from passports and ID cards.</br>&#9679; [Document types](concept-id-document.md#supported-document-types)</br>&#9679; Extract endorsements, restrictions, and vehicle classifications from US driver's licenses. |&#9679; Know your customer (KYC) financial services guidelines compliance.</br>&#9679; Medical account management.</br>&#9679; Identity checkpoints and gateways.</br>&#9679; Hotel registration. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)|
+|[**prebuilt-idDocument**](concept-id-document.md) |&#9679; Extract key information from passports and ID cards.</br>&#9679; [Document types](concept-id-document.md#supported-document-types)</br>&#9679; Extract endorsements, restrictions, and vehicle classifications from US driver's licenses. |&#9679; Know your customer (KYC) financial services guidelines compliance.</br>&#9679; Medical account management.</br>&#9679; Identity checkpoints and gateways.</br>&#9679; Hotel registration. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-| [**prebuilt-healthInsuranceCard.us**](concept-health-insurance-card.md)|&#9679; Extract key information from US health insurance cards.</br>&#9679; [Data and field extraction](concept-health-insurance-card.md#field-extraction)|&#9679; Coverage and eligibility verification. </br>&#9679; Predictive modeling.</br>&#9679; Value-based analytics.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)
+| [**prebuilt-healthInsuranceCard.us**](concept-health-insurance-card.md)|&#9679; Extract key information from US health insurance cards.</br>&#9679; [Data and field extraction](concept-health-insurance-card.md#field-extraction)|&#9679; Coverage and eligibility verification. </br>&#9679; Predictive modeling.</br>&#9679; Value-based analytics.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=healthInsuranceCard.us)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description| Development options | |-|--|-|
-|**prebuilt-contract**|Extract contract agreement and party details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=contract)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)
+|**prebuilt-contract**|Extract contract agreement and party details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=contract)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID| Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-tax.us.W-2**](concept-w2.md) |&#9679; Extract key information from IRS US W2 tax forms (year 2018-2021).</br>&#9679; [Data and field extraction](concept-w2.md#field-extraction)|&#9679; Automated tax document management.</br>&#9679; Mortgage loan application processing. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model) |
+|[**prebuilt-tax.us.W-2**](concept-w2.md) |&#9679; Extract key information from IRS US W2 tax forms (year 2018-2021).</br>&#9679; [Data and field extraction](concept-w2.md#field-extraction)|&#9679; Automated tax document management.</br>&#9679; Mortgage loan application processing. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model) |
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description| Development options | |-|--|-|
-|**prebuilt-tax.us.1098**|Extract mortgage interest information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)
+|**prebuilt-tax.us.1098**|Extract mortgage interest information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Development options | |-|--|-|
-|**prebuilt-tax.us.1098E**|Extract student loan information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098E)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)
+|**prebuilt-tax.us.1098E**|Extract student loan information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098E)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID |Description|Development options | |-|--|--|
-|**prebuilt-tax.us.1098T**|Extract tuition information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098T)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)
+|**prebuilt-tax.us.1098T**|Extract tuition information and details.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098T)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID |Description|Development options | |-|--|--|
-|**prebuilt-tax.us.1099(Variations)**|Extract information from 1099 form variations.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=intelligence)
+|**prebuilt-tax.us.1099(Variations)**|Extract information from 1099 form variations.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio)</br>&#9679; [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services?pattern=intelligence)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
-|[**prebuilt-businessCard**](concept-business-card.md) |&#9679; Extract key information from business cards.</br>&#9679; [Data and field extraction](concept-business-card.md#field-extractions) |&#9679; Sales lead and marketing management. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true#prebuilt-model)|
+|[**prebuilt-businessCard**](concept-business-card.md) |&#9679; Extract key information from business cards.</br>&#9679; [Data and field extraction](concept-business-card.md#field-extractions) |&#9679; Sales lead and marketing management. |&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
You can use Document Intelligence to automate document processing in application
| About | Description |Automation use cases |Development options | |-|--|--|--|
-|[**Custom model**](concept-custom.md) | Extracts information from forms and documents into structured data based on a model created from a set of representative training document sets.|Extract distinct data from forms and documents specific to your business and use cases.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|
+|[**Custom model**](concept-custom.md) | Extracts information from forms and documents into structured data based on a model created from a set of representative training document sets.|Extract distinct data from forms and documents specific to your business and use cases.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|
> [!div class="nextstepaction"] > [Return to custom model types](#custom-models)
You can use Document Intelligence to automate document processing in application
| About | Description |Automation use cases | Development options | |-|--|-|--|
-|[**Custom Template model**](concept-custom-template.md) | The custom template model extracts labeled values and fields from structured and semi-structured documents.</br> | Extract key data from highly structured documents with defined visual templates or common visual layouts, forms.| &#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+|[**Custom Template model**](concept-custom-template.md) | The custom template model extracts labeled values and fields from structured and semi-structured documents.</br> | Extract key data from highly structured documents with defined visual templates or common visual layouts, forms.| &#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|
> [!div class="nextstepaction"] > [Return to custom model types](#custom-models)
You can use Document Intelligence to automate document processing in application
| About | Description |Automation use cases | Development options | |-|--|-|--|
- |[**Custom Neural model**](concept-custom-neural.md)| The custom neural model is used to extract labeled data from structured (surveys, questionnaires), semi-structured (invoices, purchase orders), and unstructured documents (contracts, letters).|Extract text data, checkboxes, and tabular fields from structured and unstructured documents.|[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)
+|[**Custom Neural model**](concept-custom-neural.md)| The custom neural model is used to extract labeled data from structured (surveys, questionnaires), semi-structured (invoices, purchase orders), and unstructured documents (contracts, letters).|Extract text data, checkboxes, and tabular fields from structured and unstructured documents.|[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|
> [!div class="nextstepaction"] > [Return to custom model types](#custom-models)
You can use Document Intelligence to automate document processing in application
| About | Description |Automation use cases | Development options | |-|--|-|--|
-|[**Composed custom models**](concept-composed-models.md)| A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types.| Useful when you train several models and want to group them to analyze similar form types like purchase orders.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**C# SDK**](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&#9679; [**Java SDK**](/jav?view=doc-intel-3.0.0&preserve-view=true)
+|[**Composed custom models**](concept-composed-models.md)| A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types.| Useful when you train several models and want to group them to analyze similar form types like purchase orders.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&#9679; [**C# SDK**](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&#9679; [**Java SDK**](/jav?view=doc-intel-3.0.0&preserve-view=true)|
> [!div class="nextstepaction"] > [Return to custom model types](#custom-models) + #### Custom classification model :::image type="content" source="media/overview/custom-classifier-labeling.png" alt-text="{alt-text}"::: | About | Description |Automation use cases | Development options | |-|--|-|--|
-|[**Composed classification model**](concept-custom-classifier.md)| Custom classification models combine layout and language features to detect, identify, and classify documents within an input file.|&#9679; A loan application packaged containing application form, payslip, and, bank statement.</br>&#9679; A collection of scanned invoices. |&#9679; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [REST API](/rest/api/aiservices/document-classifiers/build-classifier?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>
+|[**Composed classification model**](concept-custom-classifier.md)| Custom classification models combine layout and language features to detect, identify, and classify documents within an input file.|&#9679; A loan application package containing an application form, payslip, and bank statement.</br>&#9679; A collection of scanned invoices. |&#9679; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&#9679; [REST API](/rest/api/aiservices/document-classifiers/build-classifier?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>|
> [!div class="nextstepaction"] > [Return to custom model types](#custom-models)
Azure AI Document Intelligence is a cloud-based [Azure AI service](../../ai-serv
| Model type | Model name | ||--| |**Document analysis model**| &#9679; [**Layout analysis model**](concept-layout.md?view=doc-intel-2.1.0&preserve-view=true) </br> |
-| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md?view=doc-intel-2.1.0&preserve-view=true)</br>&#9679; [**Receipt model**](concept-receipt.md?view=doc-intel-2.1.0&preserve-view=true) </br>&#9679; [**Identity document (ID) model**](concept-id-document.md?view=doc-intel-2.1.0&preserve-view=true) </br>&#9679; [**Business card model**](concept-business-card.md?view=doc-intel-2.1.0&preserve-view=true) </br>
+| **Prebuilt models** | &#9679; [**Invoice model**](concept-invoice.md?view=doc-intel-2.1.0&preserve-view=true)</br>&#9679; [**Receipt model**](concept-receipt.md?view=doc-intel-2.1.0&preserve-view=true) </br>&#9679; [**Identity document (ID) model**](concept-id-document.md?view=doc-intel-2.1.0&preserve-view=true) </br>&#9679; [**Business card model**](concept-business-card.md?view=doc-intel-2.1.0&preserve-view=true) </br>|
| **Custom models** | &#9679; [**Custom model**](concept-custom.md) </br>&#9679; [**Composed model**](concept-model-overview.md?view=doc-intel-2.1.0&preserve-view=true)| ::: moniker-end
ai-services Api Version Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md
+
+ Title: Azure OpenAI Service API version retirement
+description: Learn more about API version retirement in Azure OpenAI Services
++++ Last updated : 01/08/2024++
+recommendations: false
+++
+# Azure OpenAI API preview lifecycle
+
+This article helps you understand the support lifecycle for Azure OpenAI API preview releases.
+
+## Latest preview API release
+
+Azure OpenAI API version 2023-12-01-preview is currently the latest preview release.
+
+This version contains support for all the latest Azure OpenAI features, including:
+
+- [Fine-tuning](./how-to/fine-tuning.md) `gpt-35-turbo`, `babbage-002`, and `davinci-002` models. [**Added in 2023-10-01-preview**]
+- [Whisper](./whisper-quickstart.md). [**Added in 2023-09-01-preview**]
+- [Function calling](./how-to/function-calling.md) [**Added in 2023-07-01-preview**]
+- [DALL-E](./dall-e-quickstart.md) [**Added in 2023-06-01-preview**]
+- [Retrieval augmented generation with the on your data feature](./use-your-data-quickstart.md). [**Added in 2023-06-01-preview**]
+
+## Retiring soon
+
+On April 2, 2024, the following API preview releases will be retired and will stop accepting API requests:
+
+- 2023-03-15-preview
+- 2023-06-01-preview
+- 2023-07-01-preview
+- 2023-08-01-preview
+
+To avoid service disruptions, you must update to use the latest preview version prior to the retirement date.
+
+## Updating API versions
+
+We recommend that you first test the upgrade to a new API version to confirm there's no impact to your application before making the change globally across your environment.
+
+If you are using the OpenAI Python client library or the REST API, you will need to update your code directly to the latest preview API version.
+
+If you are using one of the Azure OpenAI SDKs for C#, Go, Java, or JavaScript, you will instead need to update to the latest version of the SDK. Each SDK release is hardcoded to work with specific versions of the Azure OpenAI API.
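For example, with the OpenAI Python client library (v1.x), the API version is pinned when you construct the client. The endpoint, key, and deployment name below are placeholders:

```python
# pip install --upgrade openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<RESOURCE_NAME>.openai.azure.com/",
    api_key="<API_KEY>",
    api_version="2023-12-01-preview",  # update this value when you move to a newer preview
)

response = client.chat.completions.create(
    model="<DEPLOYMENT_NAME>",  # your deployment name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```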
+
+## Next steps
+
+- [Learn more about Azure OpenAI](overview.md)
+- [Learn about working with Azure OpenAI models](./how-to/working-with-models.md)
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
For details on the inference REST API endpoints for Azure OpenAI and how to crea
} ```
+## Streaming
+
+Azure OpenAI Service includes a content filtering system that works alongside core models. The following sections describe the Azure OpenAI streaming experience and options in the context of content filters.
+
+### Default
+
+The content filtering system is integrated and enabled by default for all customers. In the default streaming scenario, completion content is buffered and the content filtering system runs on the buffered content. Depending on the content filtering configuration, content is either returned to the user if it doesn't violate the content filtering policy (Microsoft default or custom user configuration), or it's immediately blocked and a content filtering error is returned, without returning the harmful completion content. This process repeats until the end of the stream. Content is fully vetted according to the content filtering policy before it's returned to the user, so content isn't returned token by token but in "content chunks" of the respective buffer size.
+
+### Asynchronous modified filter
+
+Customers who have been approved for modified content filters can choose the Asynchronous Modified Filter as an additional option, providing a new streaming experience. In this case, content filters run asynchronously and completion content is returned immediately, giving a smooth token-by-token streaming experience. Because no content is buffered, streaming adds zero latency in this context.
+
+> [!NOTE]
+> Customers must be aware that while the feature improves latency, it can bring a trade-off in terms of the safety and real-time vetting of smaller sections of model output. Because content filters are run asynchronously, content moderation messages and the content filtering signal in case of a policy violation are delayed, which means some sections of harmful content that would otherwise have been filtered immediately could be displayed to the user.
+
+**Annotations**: Annotations and content moderation messages are continuously returned during the stream. We strongly recommend that you consume annotations and implement additional AI content safety mechanisms, such as redacting content or returning additional safety information to the user.
+
+**Content filtering signal**: The content filtering error signal is delayed; in case of a policy violation, it's returned as soon as it's available, and the stream is stopped. The content filtering signal is guaranteed within ~1,000-character windows in case of a policy violation.
+
+Approval for Modified Content Filtering is required for access to Streaming - Asynchronous Modified Filter. You can apply [here](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). To enable it via Azure OpenAI Studio, follow the instructions [here](/azure/ai-services/openai/how-to/content-filters) to create a new content filtering configuration and select "Asynchronous Modified Filter" in the Streaming section, as shown in the screenshot below.
+
+### Overview
+
+| Category | Streaming - Default | Streaming - Asynchronous Modified Filter |
+||||
+|Status |GA |Public Preview |
+| Access | Enabled by default, no action needed |Customers approved for Modified Content Filtering can configure directly via Azure OpenAI Studio (as part of a content filtering configuration; applied on deployment-level) |
+| Eligibility |All customers |Customers approved for Modified Content Filtering |
+|Modality and Availability |Text; all GPT-models |Text; all GPT-models except gpt-4-vision |
+|Streaming experience |Content is buffered and returned in chunks |Zero latency (no buffering, filters run asynchronously) |
+|Content filtering signal |Immediate filtering signal |Delayed filtering signal (in up to ~1,000 char increments) |
+|Content filtering configurations |Supports default and any customer-defined filter setting (including optional models) |Supports default and any customer-defined filter setting (including optional models) |
+
+### Annotations and sample response stream
+
+#### Prompt annotation message
+
+The prompt annotation message is the same as with default annotations.
+
+```json
+data: {
+ "id": "",
+ "object": "",
+ "created": 0,
+ "model": "",
+ "prompt_filter_results": [
+ {
+ "prompt_index": 0,
+ "content_filter_results": { ... }
+ }
+ ],
+ "choices": [],
+ "usage": null
+}
+```
+
+#### Completion token message
+
+Completion messages are forwarded immediately. No moderation is performed first, and no annotations are provided initially.
+
+```json
+data: {
+ "id": "chatcmpl-7rAJvsS1QQCDuZYDDdQuMJVMV3x3N",
+ "object": "chat.completion.chunk",
+ "created": 1692905411,
+ "model": "gpt-35-turbo",
+ "choices": [
+ {
+ "index": 0,
+ "finish_reason": null,
+ "delta": {
+ "content": "Color"
+ }
+ }
+ ],
+ "usage": null
+}
+```
+
+#### Annotation message
+
+The text field is always an empty string, indicating no new tokens. Annotations are relevant only to tokens that have already been sent. Multiple annotation messages can refer to the same tokens.
+
+`start_offset` and `end_offset` are low-granularity offsets into the text (with 0 at the beginning of the prompt) that the annotation applies to.
+
+`check_offset` represents how much text has been fully moderated. It is an exclusive lower bound on the `end_offset` values of future annotations, and it never decreases.
+
+```json
+data: {
+ "id": "",
+ "object": "",
+ "created": 0,
+ "model": "",
+ "choices": [
+ {
+ "index": 0,
+ "finish_reason": null,
+ "content_filter_results": { ... },
+ "content_filter_raw": [ ... ],
+ "content_filter_offsets": {
+ "check_offset": 44,
+ "start_offset": 44,
+ "end_offset": 198
+ }
+ }
+ ],
+ "usage": null
+}
+```
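The sketch below shows one way to consume such a stream and separate token messages from annotation-only messages. It assumes the OpenAI Python client library (v1.x), uses placeholder endpoint, key, and deployment values, and reads the field names shown in the sample messages above from the raw chunk payload; it isn't an official client pattern.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<RESOURCE_NAME>.openai.azure.com/",
    api_key="<API_KEY>",
    api_version="2023-12-01-preview",
)

stream = client.chat.completions.create(
    model="<DEPLOYMENT_NAME>",
    messages=[{"role": "user", "content": "What is color?"}],
    stream=True,
)

for chunk in stream:
    data = chunk.model_dump()  # keeps Azure-specific fields such as content_filter_offsets
    for choice in data.get("choices", []):
        delta = choice.get("delta") or {}
        if delta.get("content"):
            # Token message: new completion text, forwarded immediately.
            print(delta["content"], end="")
        offsets = choice.get("content_filter_offsets")
        if offsets:
            # Annotation message: applies to text already sent, up to offsets["end_offset"].
            results = choice.get("content_filter_results", {})
            print(f"\n[annotation up to offset {offsets['end_offset']}]: {results}")
        if choice.get("finish_reason") == "content_filter":
            # Delayed filtering signal: the stream stops after a policy violation.
            print("\n[stream stopped by content filter]")
```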
++
+### Sample response stream
+
+The following is a real chat completion response that uses the Asynchronous Modified Filter. Note how the prompt annotations aren't changed, completion tokens are sent without annotations, and new annotation messages are sent without tokens and are instead associated with certain content filter offsets.
+
+`{"temperature": 0, "frequency_penalty": 0, "presence_penalty": 1.0, "top_p": 1.0, "max_tokens": 800, "messages": [{"role": "user", "content": "What is color?"}], "stream": true}`
+
+```
+data: {"id":"","object":"","created":0,"model":"","prompt_annotations":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"choices":[],"usage":null}
+
+data: {"id":"chatcmpl-7rCNsVeZy0PGnX3H6jK8STps5nZUY","object":"chat.completion.chunk","created":1692913344,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"role":"assistant"}}],"usage":null}
+
+data: {"id":"chatcmpl-7rCNsVeZy0PGnX3H6jK8STps5nZUY","object":"chat.completion.chunk","created":1692913344,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":"Color"}}],"usage":null}
+
+data: {"id":"chatcmpl-7rCNsVeZy0PGnX3H6jK8STps5nZUY","object":"chat.completion.chunk","created":1692913344,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":" is"}}],"usage":null}
+
+data: {"id":"chatcmpl-7rCNsVeZy0PGnX3H6jK8STps5nZUY","object":"chat.completion.chunk","created":1692913344,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":" a"}}],"usage":null}
+
+...
+
+data: {"id":"","object":"","created":0,"model":"","choices":[{"index":0,"finish_reason":null,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"content_filter_offsets":{"check_offset":44,"start_offset":44,"end_offset":198}}],"usage":null}
+
+...
+
+data: {"id":"chatcmpl-7rCNsVeZy0PGnX3H6jK8STps5nZUY","object":"chat.completion.chunk","created":1692913344,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":"stop","delta":{}}],"usage":null}
+
+data: {"id":"","object":"","created":0,"model":"","choices":[{"index":0,"finish_reason":null,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"content_filter_offsets":{"check_offset":506,"start_offset":44,"end_offset":571}}],"usage":null}
+
+data: [DONE]
+```
+
+### Sample response stream (blocking)
+
+`{"temperature": 0, "frequency_penalty": 0, "presence_penalty": 1.0, "top_p": 1.0, "max_tokens": 800, "messages": [{"role": "user", "content": "Tell me the lyrics to \"Hey Jude\"."}], "stream": true}`
+
+```
+data: {"id":"","object":"","created":0,"model":"","prompt_filter_results":[{"prompt_index":0,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}}}],"choices":[],"usage":null}
+
+data: {"id":"chatcmpl-8JCbt5d4luUIhYCI7YH4dQK7hnHx2","object":"chat.completion.chunk","created":1699587397,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"role":"assistant"}}],"usage":null}
+
+data: {"id":"chatcmpl-8JCbt5d4luUIhYCI7YH4dQK7hnHx2","object":"chat.completion.chunk","created":1699587397,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":"Hey"}}],"usage":null}
+
+data: {"id":"chatcmpl-8JCbt5d4luUIhYCI7YH4dQK7hnHx2","object":"chat.completion.chunk","created":1699587397,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":" Jude"}}],"usage":null}
+
+data: {"id":"chatcmpl-8JCbt5d4luUIhYCI7YH4dQK7hnHx2","object":"chat.completion.chunk","created":1699587397,"model":"gpt-35-turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":","}}],"usage":null}
+
+...
+
+data: {"id":"chatcmpl-8JCbt5d4luUIhYCI7YH4dQK7hnHx2","object":"chat.completion.chunk","created":1699587397,"model":"gpt-35-
+
+turbo","choices":[{"index":0,"finish_reason":null,"delta":{"content":" better"}}],"usage":null}
+
+data: {"id":"","object":"","created":0,"model":"","choices":[{"index":0,"finish_reason":null,"content_filter_results":{"hate":{"filtered":false,"severity":"safe"},"self_harm":{"filtered":false,"severity":"safe"},"sexual":{"filtered":false,"severity":"safe"},"violence":{"filtered":false,"severity":"safe"}},"content_filter_offsets":{"check_offset":65,"start_offset":65,"end_offset":1056}}],"usage":null}
+
+data: {"id":"","object":"","created":0,"model":"","choices":[{"index":0,"finish_reason":"content_filter","content_filter_results":{"protected_material_text":{"detected":true,"filtered":true}},"content_filter_offsets":{"check_offset":65,"start_offset":65,"end_offset":1056}}],"usage":null}
+
+data: [DONE]
+```
## Best practices As part of your application design, consider the following best practices to deliver a positive experience with your application while minimizing potential harms:
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Previously updated : 11/14/2023 Last updated : 01/09/2024 recommendations: false
Use the following sections to help you configure Azure OpenAI on your data for o
### System message
-Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. There's no token limit for the system message, but will be included with every API call and counted against the overall token limit. The system message will be truncated if it's greater than 400 tokens.
+Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. The system message is subject to token limits; it's included with every API call and counted against the overall token limit. The system message is truncated if it exceeds the token limits listed in the [token estimation](#token-usage-estimation-for-azure-openai-on-your-data) section.
For example, if you're creating a chatbot where the data consists of transcriptions of quarterly financial earnings calls, you might use the following system message:
Set a limit on the number of tokens per model response. The upper limit for Azur
This option encourages the model to respond using your data only, and is selected by default. If you unselect this option, the model might more readily apply its internal knowledge to respond. Determine the correct selection based on your use case and scenario. + ### Interacting with the model Use the following practices for best results when chatting with the model.
Avoid asking long questions and break them down into multiple questions if possi
* *"**You are an AI assistant designed to help users extract information from retrieved Japanese documents. Please scrutinize the Japanese documents carefully before formulating a response. The user's query will be in Japanese, and you must also respond in Japanese."* - * If you have documents in multiple languages, we recommend building a new index for each language and connecting them separately to Azure OpenAI. ### Deploying the model
After you upload your data through Azure OpenAI studio, you can make a call agai
++ |Parameter |Recommendation | ||| |`fieldsMapping` | Explicitly set the title and content fields of your index. This impacts the search retrieval quality of Azure AI Search, which impacts the overall response and citation quality. |
When you chat with a model, providing a history of the chat will help the model
## Token usage estimation for Azure OpenAI on your data + | Model | Total tokens available | Max tokens for system message | Max tokens for model response | |-|||| | ChatGPT Turbo (0301) 8k | 8000 | 400 | 1500 |
When you chat with a model, providing a history of the chat will help the model
The table above shows the total number of tokens available for each model type. It also determines the maximum number of tokens that can be used for the [system message](#system-message) and the model response. Additionally, the following also consume tokens: + * The meta prompt (MP): if you limit responses from the model to the grounding data content (`inScope=True` in the API), the maximum number of tokens is 4036 tokens. Otherwise (for example, if `inScope=False`), the maximum is 3444 tokens. This number is variable depending on the token length of the user question and conversation history. This estimate includes the base prompt as well as the query rewriting prompts for retrieval. * User question and history: Variable but capped at 2000 tokens. * Retrieved documents (chunks): The number of tokens used by the retrieved document chunks depends on multiple factors. The upper bound for this is the number of retrieved document chunks multiplied by the chunk size. It will, however, be truncated based on the available tokens for the specific model being used after counting the rest of the fields.
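
For a rough sense of how these budgets add up before retrieval results are included, here's a hedged Python sketch that uses the `tiktoken` package with the `cl100k_base` encoding. The numbers come from the table above (ChatGPT Turbo (0301) 8k row); the messages are illustrative, and the service's exact accounting may differ.

```python
import tiktoken

# cl100k_base is the encoding used by the gpt-35-turbo and gpt-4 model families.
encoding = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    return len(encoding.encode(text))

# Budget figures from the table above (ChatGPT Turbo (0301) 8k).
TOTAL_TOKENS = 8000
MAX_RESPONSE_TOKENS = 1500

# Illustrative inputs.
system_message = "You are an assistant that answers questions using only the retrieved earnings call transcripts."
user_question = "What were the key drivers of revenue growth last quarter?"

used = count_tokens(system_message) + count_tokens(user_question)
remaining = TOTAL_TOKENS - MAX_RESPONSE_TOKENS - used
print(f"System message + question: {used} tokens")
print(f"Rough budget left for the meta prompt, history, and retrieved chunks: {remaining} tokens")
```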
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
To force the model to call a specific function set the `tool_choice` parameter w
> The default behavior (`tool_choice: "auto"`) is for the model to decide on its own whether to call a function and if so which function to call. ```python
+import os
from openai import AzureOpenAI import json
client = AzureOpenAI(
api_version="2023-12-01-preview" )
-from openai import OpenAI
-import json
# Example function hard coded to return the same weather # In production, this could be your backend API or an external API
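# --- Hedged continuation sketch (assumptions, not the article's verbatim sample) ---
# The stub below and the request that follows illustrate forcing the model to call
# a specific function by passing a named function in tool_choice instead of "auto".
def get_current_weather(location, unit="fahrenheit"):
    # Hard-coded result; a real implementation would call a weather API.
    return json.dumps({"location": location, "temperature": "72", "unit": unit})

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string", "description": "City name, for example San Francisco"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

# `client` is the AzureOpenAI client created earlier in this sample.
response = client.chat.completions.create(
    model="gpt-35-turbo",  # your deployment name
    messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
    tools=tools,
    # Force the named function rather than letting the model decide (tool_choice="auto").
    tool_choice={"type": "function", "function": {"name": "get_current_weather"}},
)
print(response.choices[0].message.tool_calls)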
ai-services Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration.md
client = AzureOpenAI(
response = client.embeddings.create( input = "Your text string goes here",
- model= "text-embedding-ada-002"
+ model= "text-embedding-ada-002" # model = "deployment_name".
) print(response.model_dump_json(indent=2))
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
client = AzureOpenAI(
<a name='azure-active-directory-authentication'></a>
-### Microsoft Entra authentication
+### Microsoft Entra ID authentication
<table> <tr>
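
As a hedged sketch of the Microsoft Entra ID path with the Python client, the snippet below uses `azure-identity` (a version recent enough to provide `get_bearer_token_provider`) to supply tokens instead of an API key; the endpoint and API version shown are placeholders.

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Acquire tokens for the Cognitive Services scope through Microsoft Entra ID.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource-name>.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2023-12-01-preview",  # placeholder API version
)
```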
ai-services Personal Voice How To Use https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/personal-voice-how-to-use.md
Here's example SSML in a request for text to speech with the voice name and the
```xml <speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts' xml:lang='en-US'>
- <voice xml:lang='en-US' xml:gender='Male' name='PhoenixV2Neural'>
+ <voice name='PhoenixV2Neural'>
<mstts:ttsembedding speakerProfileId='your speaker profile ID here'> I'm happy to hear that you find me amazing and that I have made your trip planning easier and more fun. 我很高兴听到你觉得我很了不起,我让你的旅行计划更轻松、更有趣。Je suis heureux d'apprendre que vous me trouvez incroyable et que j'ai rendu la planification de votre voyage plus facile et plus amusante. </mstts:ttsembedding>
ai-services Create Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/custom-translator/how-to/create-manage-workspace.md
description: How to create and manage workspaces
Previously updated : 07/18/2023 Last updated : 01/08/2024
Workspaces are places to manage your documents, projects, and models. When you create a workspace, you can choose to use the workspace independently, or share it with teammates to divide up the work.
+ > [!NOTE]
+ >
+ > * [Custom Translator Portal](https://portal.customtranslator.azure.ai/) access can only be enabled through a public network.
+ > * For information on how to use selected networks and private endpoints, see [Enable Custom Translator through Azure Virtual Network](enable-vnet-service-endpoint.md).
+ ## Create workspace 1. After you sign in to Custom Translator, you'll be asked for permission to read your profile from the Microsoft identity platform to request your user access token and refresh token. Both tokens are needed for authentication and to ensure that you aren't signed out during your live session or while training your models. </br>Select **Yes**.
ai-studio Ai Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/ai-resources.md
Title: Azure AI resource concepts description: This article introduces concepts about Azure AI resources.-+ - ignite-2023 Last updated 12/14/2023-+
ai-studio Deployments Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/deployments-overview.md
Title: Deploy models, flows, and web apps with Azure AI Studio description: Learn about deploying models, flows, and web apps with Azure AI Studio.-+ - ignite-2023 Last updated 12/7/2023---+++ # Overview: Deploy models, flows, and web apps with Azure AI Studio
The model or flow that you deploy can be used in a web application hosted in Azu
## Planning AI safety for a deployed model
-For Azure OpenAI models such as GPT-4, Azure AI Studio provides AI safety filter during the deployment to ensure responsible use of AI. AI content safety filter allows moderation of harmful and sensitive contents to promote the safety of AI-enhanced applications. In addition to AI safety filter, Azure AI Studio offers model monitoring for deployed models. Model monitoring for LLMs uses the latest GPT language models to monitor and alert when the outputs of the model perform poorly against the set thresholds of generation safety and quality. For example, you can configure a monitor to evaluate how well the modelΓÇÖs generated answers align with information from the input source ("groundedness") and closely match to a ground truth sentence or document ("similarity").
+For Azure OpenAI models such as GPT-4, Azure AI Studio provides AI safety filter during the deployment to ensure responsible use of AI. AI content safety filter allows moderation of harmful and sensitive contents to promote the safety of AI-enhanced applications. In addition to AI safety filter, Azure AI Studio offers model monitoring for deployed models. Model monitoring for LLMs uses the latest GPT language models to monitor and alert when the outputs of the model perform poorly against the set thresholds of generation safety and quality. For example, you can configure a monitor to evaluate how well the model's generated answers align with information from the input source ("groundedness") and closely match to a ground truth sentence or document ("similarity").
## Optimizing the performance of a deployed model
ai-studio Connections Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/connections-add.md
Title: How to add a new connection in Azure AI Studio description: Learn how to add a new connection in Azure AI Studio-+ - ignite-2023 Last updated 11/15/2023-+
ai-studio Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md
Title: How to deploy Llama 2 family of large language models with Azure AI Studio description: Learn how to deploy Llama 2 family of large language models with Azure AI Studio.-+ Last updated 12/11/2023---+++
Each time a project subscribes to a given offer from the Azure Marketplace, a ne
:::image type="content" source="../media/cost-management/marketplace/costs-model-as-service-cost-details.png" alt-text="A screenshot showing different resources corresponding to different model offers and their associated meters." lightbox="../media/cost-management/marketplace/costs-model-as-service-cost-details.png":::
-Quota is managed per deployment. Each deployment has a rate limit of 20,000 tokens per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits donΓÇÖt suffice your scenarios.
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per project. Contact Microsoft Azure Support if the current rate limits don't suffice your scenarios.
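
Because each deployment is rate limited, client code should be prepared for HTTP 429 responses when requests burst past the limit. The following Python sketch shows one simple retry-with-backoff approach; the endpoint URL, auth header, and payload shape are placeholders rather than the exact contract of the pay-as-you-go Llama 2 API.

```python
import time
import requests

ENDPOINT_URL = "https://<your-endpoint>.inference.ai.azure.com/v1/chat/completions"  # placeholder
API_KEY = "<your-endpoint-key>"  # placeholder

payload = {"messages": [{"role": "user", "content": "Hello"}], "max_tokens": 128}

def call_with_retry(max_attempts: int = 5) -> dict:
    for attempt in range(max_attempts):
        response = requests.post(
            ENDPOINT_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json=payload,
            timeout=60,
        )
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Honor Retry-After when the service sends it; otherwise back off exponentially.
        delay = int(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError("Rate limited on every attempt; reduce request volume or contact support for higher limits.")

print(call_with_retry())
```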
### Considerations for Llama 2 models deployed as real-time endpoints
ai-studio Deploy Models Open https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-open.md
Title: How to deploy open models with Azure AI Studio description: Learn how to deploy open models with Azure AI Studio.-+ Last updated 12/11/2023---+++ # How to deploy large language models with Azure AI Studio
ai-studio Deploy Models Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-openai.md
Title: How to deploy Azure OpenAI models with Azure AI Studio description: Learn how to deploy Azure OpenAI models with Azure AI Studio.-+ - ignite-2023 Last updated 12/11/2023---+++ # How to deploy Azure OpenAI models with Azure AI Studio
ai-studio Index Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md
Title: How to create vector indexes description: Learn how to create and use a vector index for performing Retrieval Augmented Generation (RAG).-+ - ignite-2023 Last updated 11/15/2023-+
ai-studio Monitor Quality Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/monitor-quality-safety.md
Title: Monitor quality and safety of deployed applications description: Learn how to monitor quality and safety of deployed applications with Azure AI Studio.-+ - ignite-2023 Last updated 11/15/2023---+++ # Monitor quality and safety of deployed applications
Test your deployment in the deployment **Test** tab.
> [!NOTE]
-> Monitoring requires the endpoint to be used at least 10 times to collect enough data to provide insights. If youΓÇÖd like to test sooner, manually send about 50 rows in the ΓÇÿtestΓÇÖ tab before running the monitor.
+> Monitoring requires the endpoint to be used at least 10 times to collect enough data to provide insights. If you'd like to test sooner, manually send about 50 rows in the 'test' tab before running the monitor.
Create your monitor by either enabling from the deployment details page, or the **Monitoring** tab.
ai-studio Troubleshoot Deploy And Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/troubleshoot-deploy-and-monitor.md
Title: How to troubleshoot your deployments and monitors in Azure AI Studio description: This article provides instructions on how to troubleshoot your deployments and monitors in Azure AI Studio.-+ - ignite-2023 Last updated 11/15/2023---+++ # How to troubleshoot your deployments and monitors in Azure AI Studio
For the general deployment error code reference, you can go to the [Azure Machin
**Question:** I got the following error message about the deployment failure. What should I do to troubleshoot? ```
-ResourceNotFound: Deployment failed due to timeout while waiting for Environment Image to become available. Check Environment Build Log in ML Studio Workspace or Workspace storage for potential failures. Image build summary: [N/A]. Environment info: Name: CliV2AnonymousEnvironment, Version: ΓÇÿVerΓÇÖ, you might be able to find the build log under the storage account 'NAME' in the container 'CONTAINER_NAME' at the Path 'PATH/PATH/image_build_aggregate_log.txt'.
+ResourceNotFound: Deployment failed due to timeout while waiting for Environment Image to become available. Check Environment Build Log in ML Studio Workspace or Workspace storage for potential failures. Image build summary: [N/A]. Environment info: Name: CliV2AnonymousEnvironment, Version: 'Ver', you might be able to find the build log under the storage account 'NAME' in the container 'CONTAINER_NAME' at the Path 'PATH/PATH/image_build_aggregate_log.txt'.
``` You might have come across an ImageBuildFailure error: This happens when the environment (docker image) is being built. For more information about the error, you can check the build log for your `<CONTAINER NAME>` environment.
aks Csi Secrets Store Identity Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-identity-access.md
In this security model, you can grant access to your cluster's resources to team
1. Access your key vault using the [`az aks show`][az-aks-show] command and the user-assigned managed identity created by the add-on. ```azurecli-interactive
- az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv
+ az aks show -g <resource-group> -n <cluster-name> --query addonProfiles.azureKeyvaultSecretsProvider.identity.objectId -o tsv
``` Alternatively, you can create a new managed identity and assign it to your virtual machine (VM) scale set or to each VM instance in your availability set using the following commands.
In this security model, you can grant access to your cluster's resources to team
2. Create a role assignment that grants the identity permission access to the key vault secrets, access keys, and certificates using the [`az role assignment create`][az-role-assignment-create] command. ```azurecli-interactive
- export IDENTITY_CLIENT_ID="$(az identity show -g <resource-group> --name <identity-name> --query 'clientId' -o tsv)"
+ export IDENTITY_OBJECT_ID="$(az identity show -g <resource-group> --name <identity-name> --query 'principalId' -o tsv)"
export KEYVAULT_SCOPE=$(az keyvault show --name <key-vault-name> --query id -o tsv)
- az role assignment create --role Key Vault Administrator --assignee <identity-client-id> --scope $KEYVAULT_SCOPE
+ az role assignment create --role "Key Vault Administrator" --assignee $IDENTITY_OBJECT_ID --scope $KEYVAULT_SCOPE
``` 3. Create a `SecretProviderClass` using the following YAML. Make sure to use your own values for `userAssignedIdentityID`, `keyvaultName`, `tenantId`, and the objects to retrieve from your key vault.
aks Csi Secrets Store Nginx Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-nginx-tls.md
Again, the instructions change slightly depending on your scenario. Follow the i
spec: type: ClusterIP ports:
- - port: 80
+ - port: 80
selector: app: aks-helloworld-one ```
Again, the instructions change slightly depending on your scenario. Follow the i
spec: type: ClusterIP ports:
- - port: 80
+ - port: 80
selector: app: aks-helloworld-two ```
Again, the instructions change slightly depending on your scenario. Follow the i
spec: type: ClusterIP ports:
- - port: 80
+ - port: 80
selector: app: aks-helloworld-one ```
Again, the instructions change slightly depending on your scenario. Follow the i
spec: type: ClusterIP ports:
- - port: 80
+ - port: 80
selector: app: aks-helloworld-two ```
We can now deploy a Kubernetes ingress resource referencing the secret.
spec: ingressClassName: nginx tls:
- - hosts:
+ - hosts:
- demo.azure.com secretName: ingress-tls-csi rules:
- - host: demo.azure.com
+ - host: demo.azure.com
http: paths: - path: /hello-world-one(/|$)(.*)
aks Free Standard Pricing Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/free-standard-pricing-tiers.md
Title: Azure Kubernetes Service (AKS) Free Standard and Premium pricing tiers for cluster management
+ Title: Azure Kubernetes Service (AKS) Free, Standard and Premium pricing tiers for cluster management
description: Learn about the Azure Kubernetes Service (AKS) Free, Standard, and Premium pricing plans and what features, deployment patterns, and recommendations to consider between each plan. Last updated 04/07/2023
-# Free Standard and Premium pricing tiers for Azure Kubernetes Service (AKS) cluster management
+# Free, Standard and Premium pricing tiers for Azure Kubernetes Service (AKS) cluster management
Azure Kubernetes Service (AKS) is now offering three pricing tiers for cluster management: the **Free tier**, the **Standard tier** and the **Premium tier**. All tiers are in the **Base** sku.
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
Title: Create a Windows Server container on an Azure Kubernetes Service (AKS) cl
description: Learn how to quickly create a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using Azure CLI. Previously updated : 12/27/2023 Last updated : 01/09/2024 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui
This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). - [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)]-- [!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]++ - This article requires version 2.0.64 or later of the Azure CLI. If you are using Azure Cloud Shell, then the latest version is already installed. - Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). - If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account](/cli/azure/account) command.
This article assumes a basic understanding of Kubernetes concepts. For more info
An [Azure resource group](../../azure-resource-manager/management/overview.md) is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're asked to specify a location. This location is where resource group metadata is stored and where your resources run in Azure if you don't specify another region during resource creation. -- Create a resource group using the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *eastus* location.
+- Create a resource group using the [az group create][az-group-create] command. The following example creates a resource group named *myResourceGroup* in the *eastus* location. Enter this command and other commands in this article into a BASH shell:
```azurecli az group create --name myResourceGroup --location eastus
An [Azure resource group](../../azure-resource-manager/management/overview.md) i
In this section, we create an AKS cluster with the following configuration: -- The cluster is configured with two nodes to ensure it operates reliably.
+- The cluster is configured with two nodes to ensure it operates reliably. A [node](../concepts-clusters-workloads.md#nodes-and-node-pools) is an Azure virtual machine (VM) that runs the Kubernetes node components and container runtime.
- The `--windows-admin-password` and `--windows-admin-username` parameters set the administrator credentials for any Windows Server nodes on the cluster and must meet [Windows Server password requirements][windows-server-password]. - The node pool uses `VirtualMachineScaleSets`.
-> [!NOTE]
-> To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use a network policy that uses [Azure CNI (advanced)][azure-cni] network plugin.
+To create the AKS cluster with Azure CLI, follow these steps:
-1. Create a username to use as administrator credentials for the Windows Server nodes on your cluster. The following commands prompt you for a username and set it to *WINDOWS_USERNAME* for use in a later command (remember the commands in this article are entered into a BASH shell).
+1. Create a username to use as administrator credentials for the Windows Server nodes on your cluster. The following commands prompt you for a username and set it to *WINDOWS_USERNAME* for use in a later command.
```azurecli echo "Please enter the username to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_USERNAME ```
-1. Create a password for the administrator username you created in the previous step.
+1. Create a password for the administrator username you created in the previous step. The password must be a minimum of 14 characters and meet the [Windows Server password complexity requirements][windows-server-password].
```azurecli echo "Please enter the password to use as administrator credentials for Windows Server nodes on your cluster: " && read WINDOWS_PASSWORD
In this section, we create an AKS cluster with the following configuration:
--network-plugin azure ```
- If you get a password validation error, verify the password you set meets the [Windows Server password requirements][windows-server-password]. Also see [What are the password requirements when creating a VM?](/azure/virtual-machines/windows/faq#what-are-the-password-requirements-when-creating-a-vm-). If your password meets the requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
+ After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally, the cluster can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning.
+
+ If you get a password validation error, and the password that you set meets the length and complexity requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
If you don't specify an administrator username and password when creating the node pool, the username is set to *azureuser* and the password is set to a random value. For more information, see [How do I change the administrator password for Windows Server nodes on my cluster?](../windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster).
- The administrator username can't be changed, but you can change the administrator password your AKS cluster uses for Windows Server nodes using `az aks update`. For more information, see [Windows Server node pools FAQ][win-faq-change-admin-creds].
+ The administrator username can't be changed, but you can change the administrator password that your AKS cluster uses for Windows Server nodes using `az aks update`. For more information, see [Windows Server node pools FAQ][win-faq-change-admin-creds].
- After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally, the cluster can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning.
+ To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use a network policy that uses [Azure CNI (advanced)][azure-cni] network plugin. The `--network-plugin azure` parameter specifies Azure CNI.
## Add a node pool
By default, an AKS cluster is created with a node pool that can run Linux contai
Windows Server 2022 is the default operating system for Kubernetes versions 1.25.0 and higher. Windows Server 2019 is the default OS for earlier versions. If you don't specify a particular OS SKU, Azure creates the new node pool with the default SKU for the version of Kubernetes used by the cluster.
-### [Add a Windows node pool (default SKU)](#tab/add-windows-node-pool)
+### [Windows node pool (default SKU)](#tab/add-windows-node-pool)
To use the default OS SKU, create the node pool without specifying an OS SKU. The node pool is configured for the default operating system based on the Kubernetes version of the cluster.
az aks nodepool add \
--node-count 1 ```
-### [Add a Windows Server 2019 node pool](#tab/add-windows-server-2019-node-pool)
+### [Windows Server 2022 node pool](#tab/add-windows-server-2022-node-pool)
-To use Windows Server 2019, specify the following parameters:
+To use Windows Server 2022, specify the following parameters:
- `os-type` set to `Windows`-- `os-sku` set to `Windows2019`
+- `os-sku` set to `Windows2022`
> [!NOTE]
-> Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].
+> Windows Server 2022 requires Kubernetes version 1.23.0 or higher.
-Add a Windows Server 2019 node pool using the `az aks nodepool add` command:
+Add a Windows Server 2022 node pool using the `az aks nodepool add` command:
```azurecli az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \ --os-type Windows \
- --os-sku Windows2019 \
+ --os-sku Windows2022 \
--name npwin \ --node-count 1 ```
-### [Add a Windows Server 2022 node pool](#tab/add-windows-server-2022-node-pool)
+### [Windows Server 2019 node pool](#tab/add-windows-server-2019-node-pool)
-To use Windows Server 2022, specify the following parameters:
+To use Windows Server 2019, specify the following parameters:
- `os-type` set to `Windows`-- `os-sku` set to `Windows2022`
+- `os-sku` set to `Windows2019`
> [!NOTE]
-> Windows Server 2022 requires Kubernetes version 1.23.0 or higher.
+> Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].
-Add a Windows Server 2022 node pool using the `az aks nodepool add` command:
+Add a Windows Server 2019 node pool using the `az aks nodepool add` command:
```azurecli az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \ --os-type Windows \
- --os-sku Windows2022 \
+ --os-sku Windows2019 \
--name npwin \ --node-count 1 ```
az aks nodepool add \
## Connect to the cluster
-You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. To you want to install `kubectl` locally, you can use the [az aks install-cli][az-aks-install-cli] command.
+You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. If you want to install `kubectl` locally, you can use the [az aks install-cli][az-aks-install-cli] command.
1. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
You use [kubectl][kubectl], the Kubernetes command-line client, to manage your K
```output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
- aks-nodepool1-90538373-vmss000000 Ready agent 54m v1.25.6 10.224.0.33 <none> Ubuntu 22.04.2 LTS 5.15.0-1035-azure containerd://1.6.18+azure-1
- aks-nodepool1-90538373-vmss000001 Ready agent 55m v1.25.6 10.224.0.4 <none> Ubuntu 22.04.2 LTS 5.15.0-1035-azure containerd://1.6.18+azure-1
- aksnpwin000000 Ready agent 40m v1.25.6 10.224.0.62 <none> Windows Server 2022 Datacenter 10.0.20348.1668 containerd://1.6.14+azure
- ```
+ aks-nodepool1-20786768-vmss000000 Ready agent 22h v1.27.7 10.224.0.4 <none> Ubuntu 22.04.3 LTS 5.15.0-1052-azure containerd://1.7.5-1
+ aks-nodepool1-20786768-vmss000001 Ready agent 22h v1.27.7 10.224.0.33 <none> Ubuntu 22.04.3 LTS 5.15.0-1052-azure containerd://1.7.5-1
+ aksnpwin000000 Ready agent 20h v1.27.7 10.224.0.62 <none> Windows Server 2022 Datacenter 10.0.20348.2159 containerd://1.6.21+azure
+ ```
> [!NOTE]
- > The container runtime for each node pool is shown under *CONTAINER-RUNTIME*. Notice *aksnpwin987654* begins with `docker://`, which means it uses Docker for the container runtime. Notice *aksnpwcd123456* begins with `containerd://`, which means it uses `containerd` for the container runtime.
+ > The container runtime for each node pool is shown under *CONTAINER-RUNTIME*. The container runtime values begin with `containerd://`, which means that they each use `containerd` for the container runtime.
## Deploy the application
The ASP.NET sample application is provided as part of the [.NET Framework Sample
For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
+ If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system.
+ 1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest. ```console
aks Quick Windows Container Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-portal.md
You use [kubectl][kubectl], the Kubernetes command-line client, to manage your K
```output NAME STATUS ROLES AGE VERSION
- aks-nodepool1-12345678-vmssfedcba Ready agent 13m v1.16.7
- aksnpwin987654 Ready agent 108s v1.16.7
+ aks-agentpool-41946322-vmss000001 Ready agent 7m51s v1.27.7
+ aks-agentpool-41946322-vmss000002 Ready agent 7m5s v1.27.7
+ aks-npwin-41946322-vmss000000 Ready agent 7m43s v1.27.7
+ aks-userpool-41946322-vmss000001 Ready agent 7m47s v1.27.7
+ aks-userpool-41946322-vmss000002 Ready agent 6m57s v1.27.7
```
The ASP.NET sample application is provided as part of the [.NET Framework Sample
For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
-1. If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system.
+ If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system.
+ 1. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest. ```console
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
Title: Create a Windows Server container on an Azure Kubernetes Service (AKS) cluster using PowerShell description: Learn how to quickly create a Kubernetes cluster and deploy an application in a Windows Server container in Azure Kubernetes Service (AKS) using PowerShell. Previously updated : 12/27/2023 Last updated : 01/09/2024 #Customer intent: As a developer or cluster operator, I want to quickly create an AKS cluster and deploy a Windows Server container so that I can see how to run applications running on a Windows Server container using the managed Kubernetes service in Azure.
In this section, we create an AKS cluster with the following configuration:
- The `-WindowsProfileAdminUserName` and `-WindowsProfileAdminUserPassword` parameters set the administrator credentials for any Windows Server nodes on the cluster and must meet the [Windows Server password complexity requirements][windows-server-password]. - The node pool uses `VirtualMachineScaleSets`.
-> [!NOTE]
-> To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use a network policy that uses [Azure CNI (advanced)][azure-cni-about] network plugin.
+To create the AKS cluster with Azure PowerShell, follow these steps:
-1. Create the administrator credentials for your Windows Server containers using the following command. This command prompts you to enter a `WindowsProfileAdminUserName` and `WindowsProfileAdminUserPassword`.
+1. Create the administrator credentials for your Windows Server containers using the following command. This command prompts you to enter a `WindowsProfileAdminUserName` and `WindowsProfileAdminUserPassword`. The password must be a minimum of 14 characters and meet the [Windows Server password complexity requirements][windows-server-password].
```azurepowershell $AdminCreds = Get-Credential `
In this section, we create an AKS cluster with the following configuration:
-GenerateSshKey ```
- If you get a password validation error, verify the password you set meets the [Windows Server password requirements][windows-server-password]. Also see [What are the password requirements when creating a VM?](/azure/virtual-machines/windows/faq#what-are-the-password-requirements-when-creating-a-vm-). If your password meets the requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
+ After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally, the cluster can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning.
+
+ If you get a password validation error, and the password that you set meets the length and complexity requirements, try creating your resource group in another region. Then try creating the cluster with the new resource group.
If you don't specify an administrator username and password when creating the node pool, the username is set to *azureuser* and the password is set to a random value. For more information, see [How do I change the administrator password for Windows Server nodes on my cluster?](../windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster).
- If you're unable to create the AKS cluster because the version isn't supported in the region you selected, use the `Get-AzAksVersion -Location <location>` command to find the supported version list for the region.
+ The administrator username can't be changed, but you can change the administrator password that your AKS cluster uses for Windows Server nodes using `az aks update`. For more information, see [Windows Server node pools FAQ][win-faq-change-admin-creds].
- After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally, the cluster can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning.
+ To run an AKS cluster that supports node pools for Windows Server containers, your cluster needs to use a network policy that uses [Azure CNI (advanced)][azure-cni-about] network plugin. The `-NetworkPlugin azure` parameter specifies Azure CNI.
## Add a node pool
By default, an AKS cluster is created with a node pool that can run Linux contai
Windows Server 2022 is the default operating system for Kubernetes versions 1.25.0 and higher. Windows Server 2019 is the default OS for earlier versions. If you don't specify a particular OS SKU, Azure creates the new node pool with the default SKU for the version of Kubernetes used by the cluster.
-### [Add a Windows node pool (default SKU)](#tab/add-windows-node-pool)
+### [Windows node pool (default SKU)](#tab/add-windows-node-pool)
To use the default OS SKU, create the node pool without specifying an OS SKU. The node pool is configured for the default operating system based on the Kubernetes version of the cluster.
New-AzAksNodePool -ResourceGroupName myResourceGroup `
-Name npwin ```
-### [Add a Windows Server 2019 node pool](#tab/add-windows-server-2019-node-pool)
+### [Windows Server 2022 node pool](#tab/add-windows-server-2022-node-pool)
-To use Windows Server 2019, specify the following parameters:
+To use Windows Server 2022, specify the following parameters:
- `OsType` set to `Windows`-- `OsSKU` set to `Windows2019`
+- `OsSKU` set to `Windows2022`
> [!NOTE] >
-> - `OsSKU` requires PowerShell Az module version 9.2.0 or higher.
-> - Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].
+> - Specifying the `OsSKU` parameter requires PowerShell Az module version 9.2.0 or higher.
+> - Windows Server 2022 requires Kubernetes version 1.23.0 or higher.
-To add a Windows Server 2019 node pool, call the [New-AzAksNodePool][new-azaksnodepool] cmdlet:
+To add a Windows Server 2022 node pool, call the [New-AzAksNodePool][new-azaksnodepool] cmdlet:
```azurepowershell New-AzAksNodePool -ResourceGroupName myResourceGroup ` -ClusterName myAKSCluster ` -VmSetType VirtualMachineScaleSets ` -OsType Windows `
- -OsSKU Windows2019 `
+ -OsSKU Windows2022 `
-Name npwin ```
-### [Add a Windows Server 2022 node pool](#tab/add-windows-server-2022-node-pool)
+### [Windows Server 2019 node pool](#tab/add-windows-server-2019-node-pool)
-To use Windows Server 2022, specify the following parameters:
+To use Windows Server 2019, specify the following parameters:
- `OsType` set to `Windows`-- `OsSKU` set to `Windows2022`
+- `OsSKU` set to `Windows2019`
> [!NOTE] >
-> - Specifying the `OsSKU` parameter requires PowerShell Az module version 9.2.0 or higher.
-> - Windows Server 2022 requires Kubernetes version 1.23.0 or higher.
+> - `OsSKU` requires PowerShell Az module version 9.2.0 or higher.
+> - Windows Server 2019 is being retired after Kubernetes version 1.32 reaches end of life (EOL) and won't be supported in future releases. For more information about this retirement, see the [AKS release notes][aks-release-notes].
-To add a Windows Server 2022 node pool, call the [New-AzAksNodePool][new-azaksnodepool] cmdlet:
+To add a Windows Server 2019 node pool, call the [New-AzAksNodePool][new-azaksnodepool] cmdlet:
```azurepowershell New-AzAksNodePool -ResourceGroupName myResourceGroup ` -ClusterName myAKSCluster ` -VmSetType VirtualMachineScaleSets ` -OsType Windows `
- -OsSKU Windows2022 `
+ -OsSKU Windows2019 `
-Name npwin ```
You use [kubectl][kubectl], the Kubernetes command-line client, to manage your K
The following sample output shows all the nodes in the cluster. Make sure the status of all nodes is **Ready**: ```output
- NAME STATUS ROLES AGE VERSION
- aks-nodepool1-12345678-vmssfedcba Ready agent 13m v1.16.7
- aksnpwin987654 Ready agent 108s v1.16.7
+ NAME STATUS ROLES AGE VERSION
+ aks-nodepool1-20786768-vmss000000 Ready agent 22h v1.27.7
+ aks-nodepool1-20786768-vmss000001 Ready agent 22h v1.27.7
+ aksnpwin000000 Ready agent 21h v1.27.7
``` ## Deploy the application
The ASP.NET sample application is provided as part of the [.NET Framework Sample
For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).
+ If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system.
+ 2. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest. ```azurepowershell
To learn more about AKS, and to walk through a complete code-to-deployment examp
[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE [windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference [new-azaksnodepool]: /powershell/module/az.aks/new-azaksnodepool
+[win-faq-change-admin-creds]: ../windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster
aks Quickstart Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/quickstart-dapr.md
Once your store is created, you need to add the keys to the `redis.yaml` file in
{ "orderId": "42" } ```
- > [!TIP]
- > This is a good time to get familiar with the Dapr dashboard, a convenient interface to check status, information, and logs of applications running on Dapr. To access the dashboard at `http://localhost:8080/`, run the following command:
- >
- > ```bash
- > kubectl port-forward svc/dapr-dashboard -n dapr-system 8080:8080
- > ```
- ## Deploy the Python app with the Dapr sidecar 1. Navigate to the Python app directory in the `hello-kubernetes` quickstart and open `app.py`.
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
This guidance helps you provide the required information to define how to authen
| neighborhood.heartbeat.port | UDP port used for instances of a self-hosted gateway deployment to send heartbeats to other instances. | No | 4291 | v2.0+ | | policy.rate-limit.sync.port | UDP port used for self-hosted gateway instances to synchronize rate limiting across multiple instances. | No | 4290 | v2.0+ |
+## HTTP
+
+| Name | Description | Required | Default | Availability |
+|------|-------------|----------|---------|--------------|
+| net.server.http.forwarded.proto.enabled | Capability to honor the `X-Forwarded-Proto` header to identify the scheme used to resolve the called API route (http/https only). | No | false | v2.5+ |
+ ## Kubernetes Integration ### Kubernetes Ingress
api-management Validate Jwt Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validate-jwt-policy.md
Previously updated : 10/19/2023 Last updated : 01/08/2024
The `validate-jwt` policy enforces existence and validity of a supported JSON we
| Element | Description | Required | | - | -- | -- |
-| openid-config |Add one or more of these elements to specify a compliant OpenID configuration endpoint URL from which signing keys and issuer can be obtained.<br/><br/>Configuration including the JSON Web Key Set (JWKS) is pulled from the endpoint every 1 hour and cached. If the token being validated references a validation key (using `kid` claim) that is missing in cached configuration, or if retrieval fails, API Management pulls from the endpoint at most once per 5 min. These intervals are subject to change without notice. <br/><br/>The response should be according to specs as defined at URL: `https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. <br/><br/>For Microsoft Entra ID use the OpenID Connect [metadata endpoint](../active-directory/develop/v2-protocols-oidc.md#find-your-apps-openid-configuration-document-uri) configured in your app registration such as:<br/>- (v2) `https://login.microsoftonline.com/{tenant-name}/v2.0/.well-known/openid-configuration`<br/> - (v2 multitenant) ` https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration`<br/>- (v1) `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration` <br/><br/> substituting your directory tenant name or ID, for example `contoso.onmicrosoft.com`, for `{tenant-name}`. | No |
+| openid-config |Add one or more of these elements to specify a compliant OpenID configuration endpoint URL from which signing keys and issuer can be obtained.<br/><br/>Configuration including the JSON Web Key Set (JWKS) is pulled from the endpoint every 1 hour and cached. If the token being validated references a validation key (using `kid` claim) that is missing in cached configuration, or if retrieval fails, API Management pulls from the endpoint at most once per 5 min. These intervals are subject to change without notice. <br/><br/>The response should be according to specs as defined at URL: `https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata`. <br/><br/>For Microsoft Entra ID use the OpenID Connect [metadata endpoint](../active-directory/develop/v2-protocols-oidc.md#find-your-apps-openid-configuration-document-uri) configured in your app registration such as:<br/>- v2 `https://login.microsoftonline.com/{tenant-name}/v2.0/.well-known/openid-configuration`<br/>- v2 Multi-Tenant ` https://login.microsoftonline.com/organizations/v2.0/.well-known/openid-configuration`<br/>- v1 `https://login.microsoftonline.com/{tenant-name}/.well-known/openid-configuration` <br/>- Customer tenant (preview) `https://{tenant-name}.ciamlogin.com/{tenant-id}/v2.0/.well-known/openid-configuration` <br/><br/> Substituting your directory tenant name or ID, for example `contoso.onmicrosoft.com`, for `{tenant-name}`. | No |
| issuer-signing-keys | A list of Base64-encoded security keys, in [`key`](#key-attributes) subelements, used to validate signed tokens. If multiple security keys are present, then each key is tried until either all are exhausted (in which case validation fails) or one succeeds (useful for token rollover). <br/><br/>Optionally specify a key by using the `id` attribute to match a `kid` claim. To validate an RS256 signed token, optionally specify the public key using a `certificate-id` attribute with value the identifier of a certificate uploaded to API Management, or the RSA modulus `n` and exponent `e` pair of the RS256 signing key-in Base64url-encoded format. | No | | decryption-keys | A list of Base64-encoded keys, in [`key`](#key-attributes) subelements, used to decrypt the tokens. If multiple security keys are present, then each key is tried until either all keys are exhausted (in which case validation fails) or a key succeeds.<br/><br/>Optionally specify a key by using the `id` attribute to match a `kid` claim. To decrypt an RS256 signed token, optionally specify the public key using a `certificate-id` attribute with value the identifier of a certificate uploaded to API Management. | No | | audiences | A list of acceptable audience claims, in `audience` subelements, that can be present on the token. If multiple audience values are present, then each value is tried until either all are exhausted (in which case validation fails) or until one succeeds. At least one audience must be specified. | No |
The `validate-jwt` policy enforces existence and validity of a supported JSON we
<a name='azure-active-directory-token-validation'></a>
-### Microsoft Entra token validation
+### Microsoft Entra ID single tenant token validation
```xml <validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
The `validate-jwt` policy enforces existence and validity of a supported JSON we
</validate-jwt> ```
+### Microsoft Entra ID customer tenant token validation
+
+```xml
+<validate-jwt header-name="Authorization" failed-validation-httpcode="401" failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
+ <openid-config url="https://<tenant-name>.ciamlogin.com/<tenant-id>/v2.0/.well-known/openid-configuration" />
+ <required-claims>
+ <claim name="azp" match="all">
+ <value>insert claim here</value>
+ </claim>
+ </required-claims>
+</validate-jwt>
+```
+ ### Azure Active Directory B2C token validation ```xml
The `validate-jwt` policy enforces existence and validity of a supported JSON we
</validate-jwt> ``` + ### Authorize access to operations based on token claims This example shows how to use the `validate-jwt` policy to authorize access to operations based on token claims value.
app-service Deploy Staging Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-staging-slots.md
Title: Set up staging environments description: Learn how to deploy apps to a nonproduction slot and autoswap into production. Increase the reliability and eliminate app downtime from deployments.- ms.assetid: e224fc4f-800d-469a-8d6a-72bcde612450 Last updated 07/30/2023- -+
+ai-usage: ai-assisted
+ # Set up staging environments in Azure App Service <a name="Overview"></a>
Each App Service plan tier supports a different number of deployment slots. Ther
To scale your app to a different tier, make sure that the target tier supports the number of slots your app already uses. For example, if your app has more than five slots, you can't scale it down to the **Standard** tier, because the **Standard** tier supports only five deployment slots.
+This video shows you how to set up staging environments in Azure App Service.
+> [!VIDEO 99aaff5e-fd3a-4568-b03a-a65745807d0f]
+
+The steps in the video are also described in the following sections.
+ ## Prerequisites For information on the permissions you need to perform the slot operation you want, see [Resource provider operations](../role-based-access-control/resource-provider-operations.md#microsoftweb) (search for *slot*, for example).
The slot's URL has the format `http://sitename-slotname.azurewebsites.net`. To k
### Swap operation steps
-When you swap two slots (usually from a staging slot into the production slot), App Service does the following to ensure that the target slot doesn't experience downtime:
+When you swap two slots (usually from a staging slot *as the source* into the production slot *as the target*), App Service does the following to ensure that the target slot doesn't experience downtime:
-1. Apply the following settings from the source slot (for example, the production slot) to all instances of the target slot:
+1. Apply the following settings from the target slot (for example, the production slot) to all instances of the source slot:
- [Slot-specific](#which-settings-are-swapped) app settings and connection strings, if applicable. - [Continuous deployment](deploy-continuous-deployment.md) settings, if enabled. - [App Service authentication](overview-authentication-authorization.md) settings, if enabled.
- Any of these cases trigger all instances in the target slot to restart. During [swap with preview](#Multi-Phase), this marks the end of the first phase. The swap operation is paused, and you can validate that the source slot works correctly with the target slot's settings.
+ Any of these cases trigger all instances in the source slot to restart. During [swap with preview](#Multi-Phase), this marks the end of the first phase. The swap operation is paused, and you can validate that the source slot works correctly with the target slot's settings.
1. Wait for every instance in the source slot to complete its restart. If any instance fails to restart, the swap operation reverts all changes to the source slot and stops the operation.
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md
Title: Managed identities description: Learn how managed identities work in Azure App Service and Azure Functions, how to configure a managed identity and generate a token for a back-end resource.- Last updated 06/27/2023 - +
+ai-usage: ai-assisted
# How to use managed identities for App Service and Azure Functions
This article shows you how to create a managed identity for App Service and Azur
The managed identity configuration is specific to the slot. To configure a managed identity for a deployment slot in the portal, navigate to the slot first. To find the managed identity for your web app or deployment slot in your Microsoft Entra tenant from the Azure portal, search for it directly from the **Overview** page of your tenant. Usually, the slot name is similar to `<app-name>/slots/<slot-name>`.
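
Once an identity is added (as described in the following sections), code running in the app can request tokens for back-end resources without storing any credentials. Here's a minimal Python sketch using the `azure-identity` package; the Key Vault scope is only an example target resource.

```python
from azure.identity import ManagedIdentityCredential

# Uses the app's system-assigned identity; pass client_id="..." for a user-assigned identity.
credential = ManagedIdentityCredential()

# Example scope: Azure Key Vault. Substitute the scope of your own back-end resource.
token = credential.get_token("https://vault.azure.net/.default")
print(f"Token acquired; expires at {token.expires_on}")
```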
+This video shows you how to use managed identities for App Service.
+> [!VIDEO 4fdf7a78-b3ce-48df-b3ce-cd7796d0ad5a]
+
+The steps in the video are also described in the following sections.
+ ## Add a system-assigned identity # [Azure portal](#tab/portal)
app-service Overview Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-name-resolution.md
If you require fine-grained control over name resolution, App Service allows you
>[!NOTE] > * Changing name resolution behavior is not supported on Windows Container apps.
-> * To enable DNS caching on Web App for Containers and Linux-based apps, you must add the app setting `WEBSITE_ENABLE_DNS_CACHE`. This setting defaults to 30 seconds.
+> * To configure `dnsMaxCacheTimeout`, you need to ensure that caching is enabled by adding the app setting `WEBSITE_ENABLE_DNS_CACHE`="true". If you enable caching but don't configure `dnsMaxCacheTimeout`, the timeout defaults to 30 seconds.
Configure the name resolution behavior by using these CLI commands:
app-service Quickstart Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md
description: Learn how to run web apps in Azure App Service by deploying your fi
ms.assetid: b1e6bd58-48d1-4007-9d6c-53fd6db061e3 Last updated 05/03/2023- zone_pivot_groups: app-service-ide adobe-target: true adobe-target-activity: DocsExp–386541–A/B–Enhanced-Readability-Quickstarts–2.19.2021
adobe-target-experience: Experience B
adobe-target-content: ./quickstart-dotnetcore-uiex +
+ai-usage: ai-assisted
<!-- NOTES:
In this quickstart, you learn how to create and deploy your first ASP.NET web ap
Alternatively, you can deploy an ASP.NET web app as part of a [Windows or Linux container in App Service](quickstart-custom-container.md).
+This video shows you how to deploy an ASP.NET web app.
+> [!VIDEO 31309745-82c2-4208-aed5-7ace0b7f7f4d]
+
+The steps in the video are also described in the following sections.
+ ## Prerequisites :::zone target="docs" pivot="development-environment-vs"
azure-app-configuration Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/cli-samples.md
Previously updated : 08/09/2022 Last updated : 1/9/2024 # Azure CLI samples
-The following table includes links to bash scripts for Azure App Configuration by using the [az appconfig](/cli/azure/appconfig) commands in the Azure CLI:
+The following table includes links to Azure CLI scripts for Azure App Configuration that use the [az appconfig](/cli/azure/appconfig) commands:
| Script | Description | |-|-|
azure-app-configuration Howto Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-best-practices.md
editor: ''
ms.assetid: Previously updated : 09/08/2023 Last updated : 12/21/2023
To access an App Configuration store, you can use its connection string, which i
A better option is to use the managed identities feature in Microsoft Entra ID. With managed identities, you need only the App Configuration endpoint URL to bootstrap access to your App Configuration store. You can embed the URL in your application code (for example, in the *appsettings.json* file). See [Use managed identities to access App Configuration](howto-integrate-azure-managed-service-identity.md) for details.
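As a minimal sketch of granting that access, the app's managed identity needs a data-plane role on the store; the principal ID and store resource ID below are placeholders, and the built-in **App Configuration Data Reader** role is assumed to be sufficient for read-only access.

```azurecli
# Grant the app's managed identity read access to the App Configuration data plane.
az role assignment create \
    --assignee <managed-identity-principal-id> \
    --role "App Configuration Data Reader" \
    --scope <app-configuration-store-resource-id>
```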
+## Azure Kubernetes Service access to App Configuration
+
+The following options are available for workloads hosted in Azure Kubernetes Service (AKS) to access Azure App Configuration. These options also apply to Kubernetes in general.
+
+* **Add [Azure App Configuration Kubernetes Provider](./quickstart-azure-kubernetes-service.md) to your AKS cluster.** The Kubernetes provider runs as a pod in the cluster. It can construct ConfigMaps and Secrets from key-values and Key Vault references in your App Configuration store. The ConfigMap and Secret are consumable as environment variables or mounted files without requiring any modifications to your application code. If you have multiple applications running in the same AKS cluster, they can all access the generated ConfigMaps and Secrets, eliminating the need for individual requests to App Configuration. The Kubernetes provider also supports dynamic configuration updates. This is the recommended option if feasible for you.
+
+* **Update your application to use Azure App Configuration provider libraries.** The provider libraries are available in many frameworks and languages, such as [ASP.NET](./quickstart-aspnet-core-app.md), [.NET](./quickstart-dotnet-core-app.md), [Java Spring](./quickstart-java-spring-app.md), [JavaScript/Node.js](./quickstart-javascript-provider.md), and [Python](./quickstart-python-provider.md). This approach gives you full access to App Configuration's functionalities, including dynamic configuration and feature management. You have granular control of what data to load and from which App Configuration store for each application.
+
+* **[Integrate with Kubernetes deployment using Helm](./integrate-kubernetes-deployment-helm.md).** If you do not wish to update your application or add a new pod to your AKS cluster, you have the option of bringing data from App Configuration to your Kubernetes cluster by using Helm via deployment. This approach enables your application to continue accessing configuration from Kubernetes variables and Secrets. You can run Helm upgrade whenever you want your application to incorporate new configuration changes.
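+
+Whichever of these options you choose, the key-values themselves live in an App Configuration store. As a minimal sketch, seeding a store with the Azure CLI might look like the following; the store name, resource group, region, and key are placeholders.
+
+```azurecli
+# Create a store and add a key-value for the provider, library, or Helm-based approach to read.
+az appconfig create --name <store-name> --resource-group <resource-group> --location <region> --sku Standard
+az appconfig kv set --name <store-name> --key "Settings:Message" --value "Hello from App Configuration" --yes
+```
+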
+ ## App Service or Azure Functions access to App Configuration Use the App Configuration provider or SDK libraries to access App Configuration directly in your application. This approach gives you full access to App Configuration's functionalities, including dynamic configuration and feature management. Your application running on App Service or Azure Functions can obtain access to your App Configuration store via any of the following methods:
azure-app-configuration Pull Key Value Devops Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/pull-key-value-devops-pipeline.md
The following parameters are used by the Azure App Configuration task:
- **Selection Mode**: Specifies how the key-values read from a configuration store are selected. The 'Default' selection mode allows the use of key and label filters. The 'Snapshot' selection mode allows key-values to be selected from a snapshot. Default value is **Default**. - **Key Filter**: The filter can be used to select what key-values are requested from Azure App Configuration. A value of * will select all key-values. For more information on, see [Query key values](concept-key-value.md#query-key-values). - **Label**: Specifies which label should be used when selecting key-values from the App Configuration store. If no label is provided, then key-values with the no label will be retrieved. The following characters are not allowed: , *.--**Snapshot Name**: Specifies snapshot from which key-values should be retrieved in Azure App Configuration.
+- **Snapshot Name**: Specifies the snapshot from which key-values should be retrieved in Azure App Configuration.
- **Trim Key Prefix**: Specifies one or more prefixes that should be trimmed from App Configuration keys before setting them as variables. Multiple prefixes can be separated by a new-line character. - **Suppress Warning For Overridden Keys**: Default value is unchecked. Specifies whether to show warnings when existing keys are overridden. Enable this option when it is expected that the key-values downloaded from App Configuration have overlapping keys with what exists in pipeline variables.
azure-arc Conceptual Custom Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/platform/conceptual-custom-locations.md
Title: "Overview of custom locations with Azure Arc" Previously updated : 02/24/2022 Last updated : 01/08/2024 description: "This article provides a conceptual overview of the custom locations capability of Azure Arc." # Custom locations
-As an extension of the Azure location construct, a *custom location* provides a reference as deployment target which administrators can set up, and users can point to, when creating an Azure resource. It abstracts the backend infrastructure details from application developers, database admin users, or other users in the organization.
+As an extension of the Azure location construct, a *custom location* provides a reference as a deployment target that administrators can set up when creating an Azure resource. The custom location feature abstracts the backend infrastructure details from application developers, database admin users, or other users in the organization. These users can then reference the custom location without having to be aware of these details.
-Since the custom location is an Azure Resource Manager resource that supports [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md), an administrator or operator can determine which users have access to create resource instances on:
+Custom locations can be used to enable [Azure Arc-enabled Kubernetes clusters](../kubernetes/overview.md) as target locations for deploying Azure services instances. Azure offerings that can be deployed on top of custom locations include databases, such as [SQL Managed Instance enabled by Azure Arc](/azure/azure-arc/data/managed-instance-overview) and [Azure Arc-enabled PostgreSQL server](/azure/azure-arc/data/what-is-azure-arc-enabled-postgresql).
-* A namespace within a Kubernetes cluster to target deployment of SQL Managed Instance enabled by Azure Arc and Azure Arc-enabled PostgreSQL servers.
-* The compute, storage, networking, and other vCenter or Azure Stack HCI resources to deploy and manage VMs.
+On Arc-enabled Kubernetes clusters, a custom location represents an abstraction of a namespace within the Azure Arc-enabled Kubernetes cluster. Custom locations create the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the cluster.
-For example, a cluster operator could create a custom location **Contoso-Michigan-Healthcare-App** representing a namespace on a Kubernetes cluster in your organization's Michigan Data Center. The operator can then assign Azure RBAC permissions to application developers on this custom location so that they can deploy healthcare-related web applications. The developers can then deploy these applications without having to know details of the namespace and Kubernetes cluster.
+## Custom location permissions
-On Arc-enabled Kubernetes clusters, a custom location represents an abstraction of a namespace within the Azure Arc-enabled Kubernetes cluster. Custom locations create the granular [RoleBindings and ClusterRoleBindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding) necessary for other Azure services to access the cluster.
+Since the custom location is an Azure Resource Manager resource that supports [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md), an administrator or operator can determine which users have access to create resource instances on:
+
+* A namespace within a Kubernetes cluster to target deployment of SQL Managed Instance enabled by Azure Arc or Azure Arc-enabled PostgreSQL server.
+* The compute, storage, networking, and other vCenter or Azure Stack HCI resources to deploy and manage VMs.
-> [!IMPORTANT]
-> In the interest of ensuring new features are documented no later than their release, this page may include documentation for features that may not yet be publicly available.
+For example, a cluster operator could create a custom location **Contoso-Michigan-Healthcare-App** representing a namespace on a Kubernetes cluster in your organization's Michigan Data Center. The operator can assign Azure RBAC permissions to application developers on this custom location so that they can deploy healthcare-related web applications. The developers can then deploy these applications to **Contoso-Michigan-Healthcare-App** without having to know details of the namespace and Kubernetes cluster.
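+
+As a hedged sketch of this example, granting the developers access could look like the following role assignment; the group object ID, subscription, and resource group are placeholders, and the role you assign depends on what the developers need to do.
+
+```azurecli
+# Let the application developers deploy only to the custom location.
+az role assignment create \
+    --assignee <developer-group-object-id> \
+    --role "Contributor" \
+    --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.ExtendedLocation/customLocations/Contoso-Michigan-Healthcare-App
+```
+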
## Architecture for Arc-enabled Kubernetes
-When an administrator enables the custom locations feature on a cluster, a ClusterRoleBinding is created, authorizing the Microsoft Entra application used by the Custom Locations Resource Provider (RP). Once authorized, Custom Locations RP can create ClusterRoleBindings or RoleBindings needed by other Azure RPs to create custom resources on this cluster. The cluster extensions installed on the cluster determines the list of RPs to authorize.
-
-[ ![Use custom locations](../kubernetes/media/conceptual-custom-locations-usage.png) ](../kubernetes/media/conceptual-custom-locations-usage.png#lightbox)
+When an administrator enables the custom locations feature on a cluster, a ClusterRoleBinding is created, authorizing the Microsoft Entra application used by the Custom Locations Resource Provider (RP). Once authorized, the Custom Locations RP can create ClusterRoleBindings or RoleBindings needed by other Azure RPs to create custom resources on this cluster. The cluster extensions installed on the cluster determine the list of RPs to authorize.
+[ ![Diagram showing custom locations architecture on Arc-enabled Kubernetes.](../kubernetes/media/conceptual-custom-locations-usage.png) ](../kubernetes/media/conceptual-custom-locations-usage.png#lightbox)
When the user creates a data service instance on the cluster:
When the user creates a data service instance on the cluster:
* The Azure Arc-enabled Data Services operator was deployed via cluster extension creation before the custom location existed. 1. The Azure Arc-enabled Data Services operator reads the new custom resource created on the cluster and creates the data controller, translating into realization of the desired state on the cluster.
-The sequence of steps to create the SQL managed instance and PostgreSQL instance are identical to the sequence of steps described above.
+The sequence of steps to create the SQL managed instance or PostgreSQL instance is identical to the sequence of steps described above.
## Next steps
-* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](../kubernetes/quickstart-connect-cluster.md). Then [create a custom location](../kubernetes/custom-locations.md) on your Azure Arc-enabled Kubernetes cluster.
+* Use our quickstart to [connect a Kubernetes cluster to Azure Arc](../kubernetes/quickstart-connect-cluster.md).
+* Learn how to [create a custom location](../kubernetes/custom-locations.md) on your Azure Arc-enabled Kubernetes cluster.
azure-arc Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Arc description: Sample Azure Resource Graph queries for Azure Arc showing use of resource types and tables to access Azure Arc related resources and properties. Previously updated : 07/07/2022 Last updated : 01/08/2024
This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md) sample queries for Azure Arc. For a complete list of Azure Resource Graph samples, see
-[Resource Graph samples by Category](../governance/resource-graph/samples/samples-by-category.md)
-and [Resource Graph samples by Table](../governance/resource-graph/samples/samples-by-table.md).
+[Resource Graph sample queries by category](../governance/resource-graph/samples/samples-by-category.md)
+and [Resource Graph sample queries by table](../governance/resource-graph/samples/samples-by-table.md).
## Sample queries
and [Resource Graph samples by Table](../governance/resource-graph/samples/sampl
- Learn more about the [query language](../governance/resource-graph/concepts/query-language.md). - Learn more about how to [explore resources](../governance/resource-graph/concepts/explore-resources.md).-- See samples of [Starter language queries](../governance/resource-graph/samples/starter.md).-- See samples of [Advanced language queries](../governance/resource-graph/samples/advanced.md).
+- See samples of [starter Resource Graph queries](../governance/resource-graph/samples/starter.md).
+- See samples of [advanced Resource Graph queries](../governance/resource-graph/samples/advanced.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/validation-program/overview.md
Title: Azure Arc-enabled services validation overview description: Explains the Azure Arc validation process to conform to the Azure Arc-enabled Kubernetes, Data Services, and cluster extensions. Previously updated : 07/30/2021 Last updated : 01/08/2024 # Overview of Azure Arc-enabled service validation
-Microsoft recommends running Azure Arc-enabled services on validated platforms. This article points you to content to explain how various Azure Arc-enabled components are validated.
+Microsoft recommends running Azure Arc-enabled services on validated platforms whenever possible. This article explains how various Azure Arc-enabled components are validated.
-Currently, validated solutions are available from partners for Kubernetes and data services.
+Currently, validated solutions are available from partners for [Azure Arc-enabled Kubernetes](../kubernetes/overview.md) and [Azure Arc-enabled data services](../dat).
-## Kubernetes
+## Validated Azure Arc-enabled Kubernetes distributions
-Azure Arc-enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. The Azure Arc team has worked with key industry Kubernetes offering providers to validate Azure Arc-enabled Kubernetes with their [Kubernetes distributions](../kubernetes/validation-program.md). Future major and minor versions of Kubernetes distributions released by these providers will be validated for compatibility with Azure Arc-enabled Kubernetes.
+Azure Arc-enabled Kubernetes works with any Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. The Azure Arc team worked with key industry Kubernetes offering providers to [validate Azure Arc-enabled Kubernetes with their Kubernetes distributions](../kubernetes/validation-program.md?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json). Future major and minor versions of Kubernetes distributions released by these providers will be validated for compatibility with Azure Arc-enabled Kubernetes.
-## Data services
+## Validated data services solutions
-We have also partnered with original equipment manufacturer (OEM) partners and storage providers to validate [Azure Arc-enabled data services](../dat) solutions.
+The Azure Arc team worked with original equipment manufacturer (OEM) partners and storage providers to [validate Azure Arc-enabled data services solutions](../dat?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json). This includes partner solutions, versions, Kubernetes versions, SQL engine versions, and PostgreSQL server versions that have been verified to support the data services.
## Validation process
-The Azure Arc validation process is available in GitHub. To find out more details on how to validate your offering with Azure Arc, the test harness and strategy, please refer to the [Azure Arc validation process](https://github.com/Azure/azure-arc-validation/) in GitHub.
+For more details about the validation process, see the [Azure Arc validation process](https://github.com/Azure/azure-arc-validation/) in GitHub. There you can find information about how offerings are validated with Azure Arc, along with the test harness, strategy, and more.
## Next steps
-* [Validated Kubernetes distributions](../kubernetes/validation-program.md?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json)
-
-* [Validated Kubernetes distributions for data services](../dat?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json)
+* Learn about [validated Kubernetes distributions](../kubernetes/validation-program.md?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json)
+* Learn about [validated solutions for data services](../dat?toc=/azure/azure-arc/toc.json&bc=/azure/azure-arc/breadcrumb/toc.json)
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
In contrast, for clustered caches, we recommend using the metrics with the suffi
- 99th Percentile Latency (preview) - Depicts the worst-case (99th percentile) latency of server-side commands. Measured by issuing `PING` commands from the load balancer to the Redis server and tracking the time to respond. - Useful for tracking the health of your Redis instance. Latency increases if the cache is under heavy load or if there are long running commands that delay the execution of the `PING` command.
- - This metric is only available in Standard and Premium tier caches
+ - This metric is only available in Standard and Premium tier caches.
+ - This metric isn't available for caches that are affected by the Cloud Services (classic) retirement. For more information, see [Caches with a dependency on Cloud Services (classic)](cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic). (See the CLI sketch after this list.)
- Cache Latency (preview) - The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval. - Cache Misses
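As a minimal sketch of retrieving these metrics outside the portal, you can query them with the Azure CLI; the cache resource ID is a placeholder, and the exact metric names should be taken from the `list-definitions` output rather than guessed.

```azurecli
# Discover the metric names exposed by the cache, then query one of them.
az monitor metrics list-definitions --resource <redis-cache-resource-id> --output table

az monitor metrics list --resource <redis-cache-resource-id> \
    --metric "<metric-name-from-definitions>" --interval PT1M --output table
```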
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-csharp.md
adobe-target-content: ./create-first-function-cli-csharp-ieux
In this article, you use command-line tools to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
-This article creates an HTTP triggered function that runs on .NET 6 in an isolated worker process. For information about .NET versions supported for C# functions, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). There's also a [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
+This article creates an HTTP triggered function that runs on .NET 8 in an isolated worker process. For information about .NET versions supported for C# functions, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions). There's also a [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
Completing this quickstart incurs a small cost of a few USD cents or less in you
Before you begin, you must have the following:
-+ [.NET 6.0 SDK](https://dotnet.microsoft.com/download).
++ [.NET 8.0 SDK](https://dotnet.microsoft.com/download). + One of the following tools for creating Azure resources:
In Azure Functions, a function project is a container for one or more individual
1. Run the `func init` command, as follows, to create a functions project in a folder named *LocalFunctionProj* with the specified runtime: ```console
- func init LocalFunctionProj --worker-runtime dotnet-isolated --target-framework net6.0
+ func init LocalFunctionProj --worker-runtime dotnet-isolated --target-framework net8.0
```
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md
# Quickstart: Create a C# function in Azure using Visual Studio Code
-This article creates an HTTP triggered function that runs on .NET 6 in an isolated worker process. For information about .NET versions supported for C# functions, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions).
+This article creates an HTTP triggered function that runs on .NET 8 in an isolated worker process. For information about .NET versions supported for C# functions, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions).
There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
In this section, you use Visual Studio Code to create a local Azure Functions pr
|Prompt|Selection| |--|--| |**Select a language for your function project**|Choose `C#`.|
- | **Select a .NET runtime** | Choose `.NET 6.0 Isolated (LTS)`.|
+ | **Select a .NET runtime** | Choose `.NET 8.0 Isolated (LTS)`.|
|**Select a template for your project's first function**|Choose `HTTP trigger`.| |**Provide a function name**|Type `HttpExample`.| |**Provide a namespace** | Type `My.Functions`. |
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
Azure Functions lets you use Visual Studio to create local C# function projects and then easily publish this project to run in a scalable serverless environment in Azure. If you prefer to develop your C# apps locally using Visual Studio Code, you should instead consider the [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
-By default, this article shows you how to create C# functions that run on .NET 6 in an [isolated worker process](dotnet-isolated-process-guide.md). Function apps that run in an isolated worker process are supported on all versions of .NET that are supported by Functions. For more information, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions).
+By default, this article shows you how to create C# functions that run on .NET 8 in an [isolated worker process](dotnet-isolated-process-guide.md). Function apps that run in an isolated worker process are supported on all versions of .NET that are supported by Functions. For more information, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions).
In this article, you learn how to:
The Azure Functions project template in Visual Studio creates a C# class library
| Setting | Value | Description | | | - |-- |
- | **Functions worker** | **.NET 6.0 Isolated (Long Term Support)** | Your functions run on .NET 6 in an isolated worker process. |
+ | **Functions worker** | **.NET 8.0 Isolated (Long Term Support)** | Your functions run on .NET 8 in an isolated worker process. |
| **Function** | **HTTP trigger** | This value creates a function triggered by an HTTP request. | | **Use Azurite for runtime storage account (AzureWebJobsStorage)** | Enable | Because a function app in Azure requires a storage account, one is assigned or created when you publish your project to Azure. An HTTP trigger doesn't use an Azure Storage account connection string; all other trigger types require a valid Azure Storage account connection string. When you select this option, the [Azurite emulator](../storage/common/storage-use-azurite.md?tabs=visual-studio) is used. | | **Authorization level** | **Anonymous** | The created function can be triggered by any client without providing a key. This authorization setting makes it easy to test your new function. For more information about keys and authorization, see [Authorization keys](./functions-bindings-http-webhook-trigger.md#authorization-keys) and [HTTP and webhook bindings](./functions-bindings-http-webhook.md). |
azure-monitor Alerts Log Alert Query Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-log-alert-query-samples.md
# Sample log alert queries that include ADX and ARG
-A log alert rule monitors a resource by using a Log Analytics query to evaluate resource logs at a set frequency. You can include data from Azure Data Explorer and Azure Resource Graph in your log alert rule queries.
+A log alert rule monitors a resource by using a Log Analytics query to evaluate logs at a set frequency. You can include data from Azure Data Explorer and Azure Resource Graph in your log alert rule queries.
This article provides examples of log alert rule queries that use Azure Data Explorer and Azure Resource Graph. For more information about creating a log alert rule, see [Create a log alert rule](./alerts-create-log-alert-rule.md).
-## Query that checks virtual machine health
+## Queries that check virtual machine health
-This query finds virtual machines that are marked as critical and that had a heartbeat more than 24 hours ago, but that haven't had a heartbeat in the last 2 minutes.
+This query finds virtual machines marked as critical that haven't had a heartbeat in the last 2 minutes.
+
+```kusto
+ arg("").Resources
+ | where type == "microsoft.compute/virtualmachines"
+ | summarize LastCall = max(case(isnull(TimeGenerated), make_datetime(1970, 1, 1), TimeGenerated)) by name, id
+ | extend SystemDown = case(LastCall < ago(2m), 1, 0)
+ | where SystemDown == 1
+```
++
+This query finds virtual machines marked as critical that had a heartbeat more than 24 hours ago, but that haven't had a heartbeat in the last 2 minutes.
```kusto {
This query finds virtual machines that are marked as critical and that had a hea
## Query that filters virtual machines that need to be monitored ```kusto
-{
+ {
let RuleGroupTags = dynamic(['Linux']);
- Perf | where ObjectName == 'Processor' and CounterName == '% Idle Time' and (InstanceName == '_Total' or InstanceName == 'total')
+    Perf | where ObjectName == 'Processor' and CounterName == '% Idle Time' and (InstanceName in ('_Total', 'total'))
 | extend CpuUtilisation = (100 - CounterValue) | join kind=inner hint.remote=left (arg("").Resources
- | where type =~ 'Microsoft.Compute/virtualMachines'
+ | where type =~ 'Microsoft.Compute/virtualMachines'
| project _ResourceId=tolower(id), tags) on _ResourceId | project-away _ResourceId1
- | where (isnull(tags.monitored) or tolower(tostring(tags.monitored)) != 'false') and (tostring(tags.monitorRuleGroup) in (RuleGroupTags) or isnull(tags.monitorRuleGroup) or tostring(tags.monitorRuleGroup) == '')
+ | where (tostring(tags.monitorRuleGroup) in (RuleGroupTags))
} ```
This query finds virtual machines that are marked as critical and that had a hea
```kusto { arg("").resourcechanges
- | extend changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId),
+ | extend changeTime = todatetime(properties.changeAttributes.timestamp),
changeType = tostring(properties.changeType),targetResourceType = tostring(properties.targetResourceType), changedBy = tostring(properties.changeAttributes.changedBy)
- | where changeType == "Create"
+ | where changeType == "Create" and changeTime <ago(1h)
| project changeTime,targetResourceId,changedBy } ```
azure-monitor Java Standalone Telemetry Processors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-telemetry-processors.md
To configure this option, under `exclude`, specify the `matchType` one or more `
### Sample usage
+The following sample shows how to exclude metrics with names "metricA" and "metricB".
+ ```json "processors": [ {
To configure this option, under `exclude`, specify the `matchType` one or more `
} ] ```+
+The following sample shows how to turn off all metrics, including the default auto-collected performance metrics like CPU and memory.
+
+```json
+"processors": [
+ {
+ "type": "metric-filter",
+ "exclude": {
+ "matchType": "regexp",
+ "metricNames": [
+ ".*"
+ ]
+ }
+ }
+]
+```
+ ### Default metrics captured by Java agent | Metric name | Metric type | Description | Filterable |
azure-monitor Azure Monitor Workspace Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-manage.md
Create a link between the Azure Monitor workspace and the Grafana workspace by u
If your cluster is already configured to send data to an Azure Monitor managed service for Prometheus, you must disable it first using the following command: ```azurecli
-az aks update --disable-azuremonitormetrics -g <cluster-resource-group> -n <cluster-name>
+az aks update --disable-azure-monitor-metrics -g <cluster-resource-group> -n <cluster-name>
``` Then, either enable or re-enable using the following command: ```azurecli
-az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id
+az aks update --enable-azure-monitor-metrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id
<azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id> ```
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/move-support-resources.md
Before starting your move operation, review the [checklist](./move-resource-grou
> [!div class="mx-tableFixed"] > | Resource type | Resource group | Subscription | Region move | > | - | -- | - | -- |
-> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).<br/><br/> If the service is provisioned with geo-redundant backup storage, you can use geo-restore to restore in other regions. [Learn more](../../mariadb/concepts-business-continuity.md#recover-from-an-azure-regional-data-center-outage).
+> | servers | **Yes** | **Yes** | You can use a cross-region read replica to move an existing server. [Learn more](../../postgresql/howto-move-regions-portal.md).<br/><br/> If the service is provisioned with geo-redundant backup storage, you can use geo-restore to restore in other regions. [Learn more](../../mariadb/concepts-business-continuity.md#recovery-from-an-azure-regional-datacenter-outage).
## Microsoft.DBforMySQL
batch Budget https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/budget.md
- Title: Get cost analysis and set budgets for Azure Batch
-description: Learn how to get a cost analysis, set a budget, and reduce costs for the underlying compute resources and software licenses used to run your Batch workloads.
- Previously updated : 12/13/2021--
-# Get cost analysis and set budgets for Azure Batch
-
-This topic will help you understand costs that may be associated with Azure Batch, how to set a budget for a Batch pool or account, and ways to reduce the costs for Batch workloads.
-
-## Understand costs associated with Batch resources
-
-There are no costs for using Azure Batch itself, although there can be charges for the underlying compute resources and software licenses used to run Batch workloads. Costs may be incurred from virtual machines (VMs) in a pool, data transfer from the VM, or any input or output data stored in the cloud.
-
-### Virtual machines
-
-Virtual machines are the most significant resource used for Batch processing. The cost of using VMs for Batch is calculated based on the type, quantity, and the duration of use. VM billing options include [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0003p/) or [reservation](../cost-management-billing/reservations/save-compute-costs-reservations.md) (pay in advance). Both payment options have different benefits depending on your compute workload and will affect your bill differently.
-
-Each VM in a pool created with [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration) has an associated OS disk that uses Azure-managed disks. Azure-managed disks have an additional cost, and other disk performance tiers have different costs as well.
-
-### Storage
-
-When applications are deployed to Batch nodes (VMs) using [application packages](batch-application-packages.md), you are billed for the Azure Storage resources that your application packages consume. You're also billed for the storage of any input or output files, such as resource files and other log data.
-
-In general, the cost of storage data associated with Batch is much lower than the cost of compute resources.
-
-### Networking resources
-
-Batch pools use networking resources, some of which have associated costs. In particular, for Virtual Machine Configuration pools, standard load balancers are used, which require static IP addresses. The load balancers used by Batch are visible for [accounts](accounts.md#batch-accounts) configured in user subscription mode, but not those in Batch service mode.
-
-Standard load balancers incur charges for all data passed to and from Batch pool VMs. Select Batch APIs that retrieve data from pool nodes (such as Get Task/Node File), task application packages, resource/output files, and container images will also incur charges.
-
-### Additional services
-
-Depending on which services you use with your Batch solution, you may incur additional fees. Refer to the [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/) to determine the cost of each additional service. Services commonly used with Batch that may have associated costs include:
--- Application Insights-- Data Factory-- Azure Monitor-- Virtual Network-- VMs with graphics applications-
-## View cost analysis and create budgets
-
-[Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) lets you plan, analyze and reduce your spending to maximize your cloud investment. The usage costs for all Azure services are available, including Azure Batch. You can view and filter Batch costs to be viewed and filtered, forecast future costs, and set spending limits with alerts when those limits are reached.
-
-In the Azure portal, you can create budgets and spending alerts for your Batch pools or Batch accounts. Budgets and alerts are useful for notifying stakeholders of any risks of overspending, although it's possible for there to be a delay in spending alerts and to slightly exceed a budget.
-
-The following screenshot shows an example of the **Cost analysis** view for a subscription, filtered to only display the accumulated costs associated with all Batch accounts. The lower charts show how the total cost for the period selected can be categorized by consumed service, location, and meter. While this is an example and is not meant to be reflective of costs you may see for your subscriptions, it is typical in that the largest cost is for the virtual machines that are allocated for Batch pool nodes.
--
-A further level of cost analysis detail can be obtained by specifying a **Resource** filter. For Batch accounts, these values are the Batch account name plus pool name. This allows you to view costs for a specific pool, multiple pools, or one or more accounts.
-
-### View cost analysis for a Batch pool
-
-#### Batch service pool allocation mode
-
-For Batch accounts created with the Batch service pool allocation mode:
-
-1. In the Azure portal, type in or select **Cost Management + Billing** .
-1. Select your subscription in the **Billing scopes** section.
-1. Under **Cost Management**, select **Cost analysis**.
-1. Select **Add Filter**. In the first drop-down, select **Resource**.
-1. In the second drop-down, select the Batch pool. When the pool is selected, you will see the cost analysis for your pool. The screenshot below shows example data.
- :::image type="content" source="media/batch-budget/pool-cost-analysis.png" alt-text="Screenshot showing cost analysis of a Batch pool in the Azure portal.":::
-
-The resulting cost analysis shows the cost of the pool as well as the resources that contribute to this cost. In this example, the VMs used in the pool are the most costly resource.
-
-> [!NOTE]
-> The pool in this example uses **Virtual Machine Configuration**, which is [recommended for most pools](batch-pool-cloud-service-to-virtual-machine-configuration.md) and are charged based on the Virtual Machines pricing structure. Pools that use **Cloud Services Configuration** are charged based on the Cloud Services pricing structure.
-
-[Tags](../azure-resource-manager/management/tag-resources.md) can be associated with Batch accounts, allowing tags to be used for further cost filtering. For example, tags can be used to associate project, user, or group information with a Batch account. Tags cannot currently be associated with Batch pools.
-
-#### User subscription pool allocation mode
-
-For Batch accounts created with the user subscription pool allocation mode:
-
-1. In the Azure portal, type in or select **Cost Management + Billing** .
-1. Select your subscription in the **Billing scopes** section.
-1. Under **Cost Management**, select **Cost analysis**.
-1. Select **Add Filter**. In the first drop-down, select **Tag**.
-1. In the second drop-down, select **poolname**.
-1. In the third drop-down, select the Batch pool. When the pool is selected, you will see the cost analysis for your pool. The screenshot below shows example data.
- :::image type="content" source="media/batch-budget/user-subscription-pool.png" alt-text="Screenshot showing cost analysis of a user subscription Batch pool in the Azure portal.":::
-
-Note that if you're interested in viewing cost data for all pools in a user subscription Batch account, you can select **batchaccountname** in the second drop-down and the name of your Batch account in the third drop-down.
-
-> [!NOTE]
-> Pools created by user subscription Batch accounts will not show up under the **Resource** filter, though their usage will still show up when filtering for "virtual machines" under service name.
-
-### Create a budget for a Batch pool
-
-Budgets can be created and cost alerts issued when various percentages of a budget are reached, such as 60%, 80%, and 100%. The budgets can specify one or more filters, so you can monitor and alert on Batch account costs at various granularities.
-
-1. From the **Cost analysis** page, select **Budget: none**.
-1. Select **Create new budget >**.
-1. Use the resulting window to configure a budget specifically for your pool. For more information, see [Tutorial: Create and manage Azure budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md).
-
-## Minimize costs associated with Azure Batch
-
-Depending on your scenario, you may want to reduce costs as much as possible. Consider using one or more of these strategies to maximize the efficiency of your workloads and reduce potential costs.
-
-### Reduce pool node use
-
-The largest costs associated with using Batch are typically from the virtual machines allocated for pool nodes. For Virtual Machine configuration pools, the associated managed disks used for the VM OS disks can also contribute significantly to costs.
-
-Evaluate your Batch application to determine if pool nodes are being well utilized by job tasks, or if pool nodes are idle for more than the expected time. It may be possible to reduce the number of pool nodes that are allocated, reduce the rate of pool node scale-up, or increase the rate of scale-down to increase utilization.
-
-In addition to custom monitoring, [Batch metrics](batch-diagnostics.md#view-batch-metrics) can help to identify nodes that are allocated but in an idle state. You can select a metric for most pool node states to view by using Batch monitoring metrics in the Azure portal. The 'Idle Node Count' and 'Running Node Count' could be viewed to give an indication of how well the pool nodes are utilized, for example.
-
-### Ensure pool nodes are able to run tasks
-
-Allocated nodes that are listed for a pool normally incur costs, but it is possible for pool nodes to be in a state where can't run tasks, such as 'unusable' or 'starttaskfailed'. Batch APIs or metrics can be used to monitor for and detect this category of VM. The reason for these states can then be determined and corrective action taken to reduce or eliminate these unhealthy nodes.
-
-### Use the right pool node VM size
-
-Ensure the appropriate VM size is being used, so that VMs are utilized well when running tasks while providing the performance necessary to complete your job tasks in the required time. Pool node VMs can be underutilized in some situations, such as low CPU usage. Costs can be saved by choosing a VM size with a lower price.
-
-To determine VM utilization, you can log in to a node when running tasks to view performance data or use [monitoring capabilities](monitoring-overview.md), such as Application Insights, to obtain performance data from pool nodes.
-
-### Use pool slots to reduce node requirements
-
-Multiple task slots can be specified for a pool, so that the corresponding number of tasks can be run in parallel on each node. Pool task slots can be used to reduce the number of nodes used in a pool by choosing larger VM sizes and running multiple tasks in parallel on the node to ensure the node is well utilized. If nodes are underutilized, slots can be used to increase utilization. For example, for a single-threaded task application, one slot per core could be configured. It is also possible to have more slots than cores. This would be applicable if the application blocks significantly waiting for calls to external services to be returned, for one example.
-
-Setting [`taskSchedulingPolicy`](/rest/api/batchservice/pool/add#taskschedulingpolicy) to `pack` will help ensure VMs are utilized as much as possible, with scaling more easily able to remove nodes not running any tasks.
-
-### Use Azure Spot virtual machines
-
-[Azure Spot VMs](batch-spot-vms.md) reduce the cost of Batch workloads by taking advantage of surplus computing capacity in Azure. When you specify Spot VMs in your pools, Batch uses this surplus to run your workload. There can be substantial cost savings when you use Spot VMs instead of dedicated VMs. Keep in mind that Spot VMs are not suitable for all workloads, since there may not be available capacity to allocate, or they may get preempted.
-
-### Use ephemeral OS disks
-
-By default, pool nodes use managed disks, which incur costs. Virtual Machine Configuration pools in some VM sizes can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary SSD, to avoid extra costs associated with managed disks.
-
-### Purchase reservations for virtual machine instances
-
-If you intend to use Batch for a long period of time, you can reduce the cost of VMs by using [Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) for your workloads. A reservation rate is considerably lower than a pay-as-you-go rate. Virtual machine instances used without a reservation are charged at the pay-as-you-go rate. When you purchase a reservation, the reservation discount is applied. By committing to one-year or three-year plans for [VM instances](../virtual-machines/prepay-reserved-vm-instances.md), significant discounts are applied to VM usage, including [VMs consumed via Batch pools](../virtual-machines/prepay-reserved-vm-instances.md#determine-the-right-vm-size-before-you-buy).
-
-It is important to note that reservation discount is "use-it-or-lose-it." If there no matching resources are used for an hour, you'll lose the reservation quantity for that hour. Unused reserved hours can't be carried forward, and are therefore lost if not used. Batch workloads often scale the number of allocated VMs according to load and have varying load, including periods where there is no load. Care therefore needs to be taken determining the reservation amount, given that reserved hours will be lost if Batch VMs are scaled down below the reservation quantity.
-
-### Use automatic scaling
-
-[Automatic scaling](batch-automatic-scaling.md) dynamically scales the number of VMs in your Batch pool based on demands of the current job. By scaling the pool based on the lifetime of a job, automatic scaling ensures that VMs are scaled up and used only when there is a job to perform. When the job is complete, or when there are no jobs, the VMs are automatically scaled down to save compute resources. Scaling allows you to lower the overall cost of your Batch solution by using only the resources you need.
-
-## Next steps
--- Learn more about [Azure Cost Management + Billing](../cost-management-billing/cost-management-billing-overview.md).-- Learn about using [Azure Spot VMs with Batch](batch-spot-vms.md).
batch Plan To Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/plan-to-manage-costs.md
+
+ Title: Plan to manage costs for Azure Batch
+description: Learn how to plan for and manage costs for Azure Batch workloads by using cost analysis in the Azure portal.
++++ Last updated : 01/09/2024++
+# Plan to manage costs for Azure Batch
+
+This article describes how you plan for and manage costs for Azure Batch. Before you deploy the service, you can use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs for Azure Batch. Later, as you deploy Azure resources, review the estimated costs.
+
+After you start running Batch workloads, use Cost Management features to set budgets and monitor costs. You can also review forecasted costs and identify spending trends to find areas where you might want to act. Costs for Azure Batch are only a portion of the monthly costs in your Azure bill. Although this article explains how to plan for and manage costs for Azure Batch, you're billed for all Azure services and resources used in your Azure subscription, including third-party services.
+
+## Prerequisites
+
+Cost analysis in Cost Management supports most Azure account types, but not all of them. To view the full list of supported account types, see [Understand Cost Management data](../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for an Azure account. For information about assigning access to Microsoft Cost Management data, see [Assign access to data](../cost-management/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+
+## Estimate costs before using Azure Batch
+
+Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate costs before you add virtual machines.
+
+1. On the **Products** tab, go to the **Compute** section or search for *Batch* in the search bar. On the **Batch** tile, select **Add to estimate**, and then scroll down to the **Your Estimate** section.
+
+1. Notice that Azure Batch is a free service and that the costs associated with Azure Batch are for the underlying resources that run your workloads. When you add Azure Batch to your estimate, the pricing calculator automatically creates a selection for **Cloud Services** and **Virtual machines**. You can read more about [Azure Cloud Services](../cloud-services/cloud-services-choose-me.md) and [Azure Virtual Machines (VMs)](../virtual-machines/overview.md) in each product's documentation. The key point for estimating the cost of Azure Batch is that virtual machines are the most significant resource.
+
+   Select options from the drop-downs. The options that have the largest impact on your estimate total are your virtual machine's operating system, the operating system license if applicable, the [VM size](../virtual-machines/sizes.md) you select under **INSTANCE**, the number of instances you choose, and the amount of time each month that your instances run. (A CLI sketch for listing available VM sizes follows these steps.)
+
+ Notice that the total estimate changes as you select different options. The estimate appears in the upper corner and the bottom of the **Your Estimate** section.
+
+ ![Screenshot showing the your estimate section and main options available for Azure Batch.](media/plan-to-manage-costs/batch-pricing-calculator-overview.png)
+
+ You can learn more about the cost of running virtual machines from the [Plan to manage costs for virtual machines documentation](../virtual-machines/plan-to-manage-costs.md).
+
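+As a small complement to the calculator, you can list the VM sizes available in a region with the Azure CLI and compare them against the **INSTANCE** options; the region is a placeholder.
+
+```azurecli
+# List the VM sizes available in a region.
+az vm list-sizes --location <region> --output table
+```
+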
+## Understand the full billing model for Azure Batch
+
+Azure Batch runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that additional infrastructure costs might accrue beyond Batch itself.
+
+### How you're charged for Azure Batch
+
+Azure Batch is a free service. There are no costs for Batch itself. However, there can be charges for the underlying compute resources and software licenses used to run Batch workloads. Costs may be incurred from virtual machines in a pool, data transfer from the VM, or any input or output data stored in the cloud.
+
+### Costs that might accrue with Azure Batch
+
+Although Batch itself is a free service, many of the underlying resources that run your workloads aren't. These include:
+
+- [Virtual Machines](https://azure.microsoft.com/pricing/details/virtual-machines/windows/)
+ - To learn more about the costs associated with virtual machines, see the [How you're charged for virtual machines section of Plan to manage costs for virtual machines](../virtual-machines/plan-to-manage-costs.md#how-youre-charged-for-virtual-machines).
+ - Each VM in a pool created with [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration) has an associated OS disk that uses Azure-managed disks. Azure-managed disks have an additional cost, and other disk performance tiers have different costs as well.
+- Storage
+ - When applications are deployed to Batch node virtual machines using [application packages](batch-application-packages.md), you're billed for the Azure Storage resources that your application packages consume. You're also billed for the storage of any input or output files, such as resource files and other log data.
+ - In general, the cost of storage data associated with Batch is much lower than the cost of compute resources.
+- In some cases, a [load balancer](https://azure.microsoft.com/pricing/details/load-balancer/)
+- Networking resources
+ - For Virtual Machine Configuration pools, standard load balancers are used, which require static IP addresses. The load balancers used by Batch are visible for [accounts](accounts.md#batch-accounts) configured in user subscription mode, but not those in Batch service mode.
+ - Standard load balancers incur charges for all data passed to and from Batch pool VMs. Select Batch APIs that retrieve data from pool nodes (such as Get Task/Node File), task application packages, resource/output files, and container images also incur charges.
+ - [Virtual Network](https://azure.microsoft.com/pricing/details/virtual-network/)
+- Depending on what services you use, your Batch solution may incur additional fees. Services commonly used with Batch that may have associated costs include:
+ - Application Insights
+ - Data Factory
+ - Azure Monitor
+
+### Costs might accrue after resource deletion
+
+After you delete Azure Batch resources, the following resources might continue to exist. They continue to accrue costs until you delete them.
+
+- Virtual machine
+- Any disks deployed other than the OS and local disks
+ - By default, the OS disk is deleted with the VM, but it can be [set not to during the VM's creation](../virtual-machines/delete.md)
+- Virtual network
+ - Your virtual NIC and public IP, if applicable, can be set to delete along with your virtual machine
+- Bandwidth
+- Load balancer
+
+For virtual networks, one virtual network is billed per subscription and per region. Virtual networks cannot span regions or subscriptions. Setting up private endpoints in vNet setups may also incur charges.
+
+Bandwidth is charged by usage; the more data transferred, the more you're charged.
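+
+A quick way to spot leftover billable resources after cleanup is to list what remains in the resource group you used. This is a minimal sketch; the resource group name is a placeholder.
+
+```azurecli
+# List everything still deployed in the resource group used for the Batch workload.
+az resource list --resource-group <resource-group> --output table
+```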
+
+### Using Azure Prepayment with Azure Batch
+
+While Azure Batch is a free service, you can pay for underlying resource charges with your Azure Prepayment credit. However, you can't use Azure Prepayment credit to pay for charges for third party products and services including those from the Azure Marketplace.
+
+## View cost analysis and create budgets
+
+As you use Azure resources with Azure Batch, you incur costs. Azure resource usage unit costs vary by time intervals (seconds, minutes, hours, and days) or by unit usage (bytes, megabytes, and so on). As soon as Azure resource use starts, costs are incurred, and you can see the costs in [cost analysis](../cost-management/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). [Microsoft Cost Management](../cost-management-billing/cost-management-billing-overview.md) lets you plan, analyze, and reduce your spending to maximize your cloud investment. You can view and filter Batch costs, forecast future costs, and set spending limits with alerts when those limits are reached.
+
+In the Azure portal, you can create budgets and spending alerts for your Batch pools or Batch accounts. Budgets and alerts are useful for notifying stakeholders of any risks of overspending, although it's possible for there to be a delay in spending alerts and to slightly exceed a budget.
+
+The following screenshot shows an example of the **Cost analysis** view for a subscription, filtered to display only the accumulated costs associated with all Batch accounts. The lower charts show how the total cost for the selected period can be categorized by consumed service, location, and meter. This example isn't meant to reflect the costs you'll see for your own subscriptions, but it's typical in that the largest cost comes from the virtual machines allocated for Batch pool nodes.
++
+A further level of cost analysis detail can be obtained by specifying a **Resource** filter. For Batch accounts, these values are the Batch account name plus pool name. This allows you to view costs for a specific pool, multiple pools, or one or more accounts.
+
+### View cost analysis for a Batch pool
+
+#### Batch service pool allocation mode
+
+For Batch accounts created with the Batch service pool allocation mode:
+
+1. In the Azure portal, search for or select **Cost Management + Billing**.
+1. Select your subscription in the **Billing scopes** section.
+1. Under **Cost Management**, select **Cost analysis**.
+1. Select **Add Filter**. In the first drop-down, select **Resource**.
+1. In the second drop-down, select the Batch pool. When the pool is selected, you see the cost analysis for your pool. The screenshot below shows example data.
+ :::image type="content" source="media/plan-to-manage-costs/pool-cost-analysis.png" alt-text="Screenshot showing cost analysis of a Batch pool in the Azure portal.":::
+
+The resulting cost analysis shows the cost of the pool as well as the resources that contribute to this cost. In this example, the VMs used in the pool are the most costly resource.
+
+> [!NOTE]
+> The pool in this example uses **Virtual Machine Configuration**, which is [recommended for most pools](batch-pool-cloud-service-to-virtual-machine-configuration.md) and is charged based on the Virtual Machines pricing structure. Pools that use **Cloud Services Configuration** are charged based on the Cloud Services pricing structure.
+
+[Tags](../azure-resource-manager/management/tag-resources.md) can be associated with Batch accounts, allowing tags to be used for further cost filtering. For example, tags can be used to associate project, user, or group information with a Batch account. Tags cannot currently be associated with Batch pools.
+
+#### User subscription pool allocation mode
+
+For Batch accounts created with the user subscription pool allocation mode:
+
+1. In the Azure portal, search for and select **Cost Management + Billing**.
+1. Select your subscription in the **Billing scopes** section.
+1. Under **Cost Management**, select **Cost analysis**.
+1. Select **Add Filter**. In the first drop-down, select **Tag**.
+1. In the second drop-down, select **poolname**.
+1. In the third drop-down, select the Batch pool. When the pool is selected, you see the cost analysis for your pool. The screenshot below shows example data.
+ :::image type="content" source="media/plan-to-manage-costs/user-subscription-pool.png" alt-text="Screenshot showing cost analysis of a user subscription Batch pool in the Azure portal.":::
+
+If you want to view cost data for all pools in a user subscription Batch account, you can select **batchaccountname** in the second drop-down and the name of your Batch account in the third drop-down.
+
+> [!NOTE]
+> Pools created by user subscription Batch accounts don't show up under the **Resource** filter, though their usage still shows up when filtering for "virtual machines" under service name.
+
+### Create a budget for a Batch pool
+
+Budgets can be created and cost alerts issued when various percentages of a budget are reached, such as 60%, 80%, and 100%. The budgets can specify one or more filters, so you can monitor and alert on Batch account costs at various granularities.
+
+1. From the **Cost analysis** page, select **Budget: none**.
+1. Select **Create new budget >**.
+1. Use the resulting window to configure a budget specifically for your pool. For more information, see [Tutorial: Create and manage Azure budgets](../cost-management-billing/costs/tutorial-acm-create-budgets.md).
+
+## Minimize costs associated with Azure Batch
+
+Depending on your scenario, you may want to reduce costs as much as possible. Consider using one or more of these strategies to maximize the efficiency of your workloads and reduce potential costs.
+
+### Reduce pool node use
+
+The largest costs associated with using Batch are typically from the virtual machines allocated for pool nodes. For Virtual Machine configuration pools, the associated managed disks used for the VM OS disks can also contribute significantly to costs.
+
+Evaluate your Batch application to determine if pool nodes are being well utilized by job tasks, or if pool nodes are idle for more than the expected time. It may be possible to reduce the number of pool nodes that are allocated, reduce the rate of pool node scale-up, or increase the rate of scale-down to increase utilization.
+
+In addition to custom monitoring, [Batch metrics](batch-diagnostics.md#view-batch-metrics) can help to identify nodes that are allocated but in an idle state. You can view a metric for most pool node states by using Batch monitoring metrics in the Azure portal. For example, the 'Idle Node Count' and 'Running Node Count' metrics give an indication of how well the pool nodes are utilized.
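+
+If you want to pull these metrics programmatically rather than in the portal, the following Python sketch is one possible approach that uses the `azure-monitor-query` library; the metric names (`IdleNodeCount`, `RunningNodeCount`) are assumptions based on the portal display names, and all bracketed values are placeholders.
+
+```python
+# Illustrative sketch only: query idle and running node counts for a Batch account.
+# The metric names below are assumptions that mirror the portal's "Idle Node Count"
+# and "Running Node Count" metrics; bracketed values are placeholders.
+from datetime import timedelta
+from azure.identity import DefaultAzureCredential
+from azure.monitor.query import MetricsQueryClient, MetricAggregationType
+
+client = MetricsQueryClient(DefaultAzureCredential())
+batch_account_id = (
+    "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>"
+    "/providers/Microsoft.Batch/batchAccounts/<BATCH_ACCOUNT>"
+)
+
+response = client.query_resource(
+    batch_account_id,
+    metric_names=["IdleNodeCount", "RunningNodeCount"],
+    timespan=timedelta(days=1),
+    granularity=timedelta(hours=1),
+    aggregations=[MetricAggregationType.AVERAGE],
+)
+
+for metric in response.metrics:
+    for point in metric.timeseries[0].data:
+        print(metric.name, point.timestamp, point.average)
+```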
+
+### Ensure pool nodes are able to run tasks
+
+Allocated nodes that are listed for a pool normally incur costs, but it's possible for pool nodes to be in a state where they can't run tasks, such as 'unusable' or 'starttaskfailed'. Batch APIs or metrics can be used to monitor for and detect this category of VM. The reason for these states can then be determined and corrective action taken to reduce or eliminate these unhealthy nodes.
+
+### Use the right pool node VM size
+
+Ensure that you use the appropriate VM size so that VMs are well utilized when running tasks, while still providing the performance necessary to complete your job tasks in the required time. Pool node VMs can be underutilized in some situations, such as low CPU usage. Costs can be saved by choosing a VM size with a lower price.
+
+To determine VM utilization, you can log in to a node when running tasks to view performance data or use [monitoring capabilities](monitoring-overview.md), such as Application Insights, to obtain performance data from pool nodes.
+
+### Use pool slots to reduce node requirements
+
+Multiple task slots can be specified for a pool, so that the corresponding number of tasks can run in parallel on each node. Pool task slots can be used to reduce the number of nodes used in a pool by choosing larger VM sizes and running multiple tasks in parallel on each node to ensure the node is well utilized. If nodes are underutilized, slots can be used to increase utilization. For example, for a single-threaded task application, one slot per core could be configured. It's also possible to have more slots than cores, which is applicable if, for example, the application spends significant time blocked while waiting for calls to external services to return.
+
+Setting [`taskSchedulingPolicy`](/rest/api/batchservice/pool/add#taskschedulingpolicy) to `pack` helps ensure VMs are utilized as much as possible, with scaling more easily able to remove nodes not running any tasks.
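+
+The following Python sketch is a minimal illustration of these two settings, assuming the `azure-batch` SDK; the account URL, credentials, image reference, VM size, node counts, and slot count are placeholders rather than recommendations.
+
+```python
+# Illustrative sketch only: create a pool that runs several tasks per node and packs tasks
+# onto nodes so that underused nodes can be scaled down sooner. Bracketed values are placeholders.
+from azure.batch import BatchServiceClient
+from azure.batch import models as batchmodels
+from azure.batch.batch_auth import SharedKeyCredentials
+
+credentials = SharedKeyCredentials("<BATCH_ACCOUNT_NAME>", "<BATCH_ACCOUNT_KEY>")
+batch_client = BatchServiceClient(
+    credentials, batch_url="https://<BATCH_ACCOUNT_NAME>.<REGION>.batch.azure.com"
+)
+
+pool = batchmodels.PoolAddParameter(
+    id="mypool",
+    vm_size="Standard_D8s_v3",  # a larger size, so several tasks can share one node
+    virtual_machine_configuration=batchmodels.VirtualMachineConfiguration(
+        image_reference=batchmodels.ImageReference(
+            publisher="canonical",
+            offer="0001-com-ubuntu-server-jammy",
+            sku="22_04-lts",
+            version="latest",
+        ),
+        node_agent_sku_id="batch.node.ubuntu 22.04",
+    ),
+    target_dedicated_nodes=2,
+    task_slots_per_node=8,  # run up to eight tasks in parallel on each node
+    task_scheduling_policy=batchmodels.TaskSchedulingPolicy(node_fill_type="pack"),
+)
+batch_client.pool.add(pool)
+```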
+
+### Use Azure Spot virtual machines
+
+[Azure Spot VMs](batch-spot-vms.md) reduce the cost of Batch workloads by taking advantage of surplus computing capacity in Azure. When you specify Spot VMs in your pools, Batch uses this surplus to run your workload. There can be substantial cost savings when you use Spot VMs instead of dedicated VMs. Keep in mind that Spot VMs are not suitable for all workloads, since there may not be available capacity to allocate, or they may get preempted.
+
+### Use ephemeral OS disks
+
+By default, pool nodes use managed disks, which incur costs. Virtual Machine Configuration pools in some VM sizes can use [ephemeral OS disks](create-pool-ephemeral-os-disk.md), which create the OS disk on the VM cache or temporary SSD, to avoid extra costs associated with managed disks.
+
+### Purchase reservations for virtual machine instances
+
+If you intend to use Batch for a long period of time, you can reduce the cost of VMs by using [Azure Reservations](../cost-management-billing/reservations/save-compute-costs-reservations.md) for your workloads. A reservation rate is considerably lower than a pay-as-you-go rate. Virtual machine instances used without a reservation are charged at the pay-as-you-go rate. When you purchase a reservation, the reservation discount is applied. When you commit to one-year or three-year plans for [VM instances](../virtual-machines/prepay-reserved-vm-instances.md), significant discounts are applied to VM usage, including [VMs consumed via Batch pools](../virtual-machines/prepay-reserved-vm-instances.md#determine-the-right-vm-size-before-you-buy).
+
+It's important to note that the reservation discount is "use-it-or-lose-it." If no matching resources are used for an hour, you lose the reservation quantity for that hour. Unused reserved hours can't be carried forward and are therefore lost if not used. Batch workloads often scale the number of allocated VMs according to load, including periods where there's no load. Take care when determining the reservation amount, because reserved hours are lost if Batch VMs are scaled down below the reservation quantity.
+
+### Use automatic scaling
+
+[Automatic scaling](batch-automatic-scaling.md) dynamically scales the number of VMs in your Batch pool based on demands of the current job. When you scale the pool based on the lifetime of a job, automatic scaling ensures that VMs are scaled up and used only when there is a job to perform. When the job is complete, or when there are no jobs, the VMs are automatically scaled down to save compute resources. Scaling allows you to lower the overall cost of your Batch solution by using only the resources you need.
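+
+The following Python sketch is a minimal illustration of enabling autoscale on an existing pool with the `azure-batch` SDK; the formula, pool name, node cap, and evaluation interval are placeholders that you would tune for your own workload.
+
+```python
+# Illustrative sketch only: enable an autoscale formula that follows pending tasks and
+# removes nodes only after their running tasks finish. Bracketed values are placeholders.
+from datetime import timedelta
+from azure.batch import BatchServiceClient
+from azure.batch.batch_auth import SharedKeyCredentials
+
+credentials = SharedKeyCredentials("<BATCH_ACCOUNT_NAME>", "<BATCH_ACCOUNT_KEY>")
+batch_client = BatchServiceClient(
+    credentials, batch_url="https://<BATCH_ACCOUNT_NAME>.<REGION>.batch.azure.com"
+)
+
+formula = """
+pendingTaskSamplePercent = $PendingTasks.GetSamplePercent(180 * TimeInterval_Second);
+pendingTaskSamples = pendingTaskSamplePercent < 70 ? 1 : avg($PendingTasks.GetSample(180 * TimeInterval_Second));
+$TargetDedicatedNodes = min(25, pendingTaskSamples);
+$NodeDeallocationOption = taskcompletion;
+"""
+
+batch_client.pool.enable_auto_scale(
+    pool_id="mypool",
+    auto_scale_formula=formula,
+    auto_scale_evaluation_interval=timedelta(minutes=15),
+)
+```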
+
+## Next steps
+
+- Learn more about [Microsoft Cost Management + Billing](../cost-management-billing/cost-management-billing-overview.md).
+- Learn about using [Azure Spot VMs with Batch](batch-spot-vms.md).
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
Confidential VMs support the following OS options:
| 22.04 <span class="pill purple">LTS</span> | 22H2 Pro <span class="pill red">ZH-CN</span> | 2019 Server Core | | | 22H2 Pro N | | | **RHEL** | 22H2 Enterprise | 2022 |
-| 9.2 <span class="pill purple">TECH PREVIEW</span> | 22H2 Enterprise N | 2022 Server Core |
-| | 22H2 Enterprise Multi-session | 2022 Azure Edition |
+| 9.2 <span class="pill purple">Tech Preview (SEV-SNP Only)</span> | 22H2 Enterprise N | 2022 Server Core |
+| 9.3 (SEV-SNP Only) | 22H2 Enterprise Multi-session | 2022 Azure Edition |
| | | 2022 Azure Edition Core | ### Regions
container-apps Managed Identity Image Pull https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/managed-identity-image-pull.md
The following steps describe the process to configure your container app to use
- An Azure account with an active subscription. - If you don't have one, you [can create one for free](https://azure.microsoft.com/free/). - A private Azure Container Registry containing an image you want to pull.
+- Your Azure Container Registry must allow ARM audience tokens for authentication in order to use managed identity to pull images.
+ Use the following command to check if ARM tokens are allowed to access your ACR:
+
+ ```azurecli
+ az acr config authentication-as-arm show -r <REGISTRY>
+ ```
+
+ If ARM tokens are disallowed, you can allow them with the following command:
+
+ ```azurecli
+ az acr config authentication-as-arm update -r <REGISTRY> --status enabled
+ ```
- Create a user-assigned managed identity. For more information, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity). ### Create a container app
cost-management-billing Link Partner Id Power Apps Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id-power-apps-accounts.md
Title: Link a partner ID to your Power Platform and Dynamics Customer Insights accounts with your Azure credentials description: This article helps Microsoft partners use their Azure credentials to provide customers with services for Microsoft Power Apps, Power Automate, Power BI and Dynamics Customer Insights. -+ Previously updated : 08/10/2023 Last updated : 01/09/2024 ms.devlang: azurecli
cost-management-billing Savings Plan Compute Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/savings-plan/savings-plan-compute-overview.md
Savings plan purchases can't be canceled or refunded.
- Azure Container Apps - Azure Premium Functions - Azure App Services - The Azure savings plan for compute can only be applied to the App Service upgraded Premium v3 plan and the upgraded Isolated v2 plan.
+- Azure Spring Apps - The Azure savings plan for compute can only be applied to the Azure Spring Apps Enterprise plan.
- On-demand Capacity Reservation - Azure Spring Apps Enterprise
data-factory Airflow Create Private Requirement Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-create-private-requirement-package.md
Last updated 09/23/2023
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-A python package is a way to organize related Python modules into a single directory hierarchy. A package is typically represented as a directory that contains a special file called `__init__.py`. Inside a package directory, you can have multiple Python module files (.py files) that define functions, classes, and variables.
-In the context of Managed Airflow, you can use Python packages to organize and distribute your custom Airflow Plugins and Provider packages.
+A Python package is a way to organize related Python modules into a single directory hierarchy. A package is typically represented as a directory that contains a special file called `__init__.py`. Inside a package directory, you can have multiple Python module files (.py files) that define functions, classes, and variables.
-This guide provides step-by-step instructions on installing `.whl` (Wheel) file, which serve as a binary distribution format for Python package, as a requirement in your Managed Airflow runtime.
+In the context of Azure Data Factory Managed Airflow, you can use Python packages to organize and distribute your custom Airflow Plugins and Provider packages.
-For illustration purpose, I create custom operator as python package that can be imported as a module inside dags file.
+This article provides step-by-step instructions on how to install a .whl (Wheel) file, which serves as a binary distribution format for a Python package, as a requirement in your Managed Airflow runtime.
-### Step 1: Develop a custom operator.
-- Create a file `sample_operator.py`
-```python
-from airflow.models.baseoperator import BaseOperator
+For illustration purposes, you create a custom operator as a Python package that you can import as a module inside a directed acyclic graph (DAG) file.
+## Prerequisites
-class SampleOperator(BaseOperator):
- def __init__(self, name: str, **kwargs) -> None:
- super().__init__(**kwargs)
- self.name = name
+- **Azure subscription**: If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+- **Azure Data Factory**: Create or select an existing [Data Factory](https://azure.microsoft.com/products/data-factory#get-started) instance in a [region where the Managed Airflow preview is supported](concept-managed-airflow.md#region-availability-public-preview).
+- **Azure Storage account**: If you don't have a storage account, see [Create an Azure Storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) for steps to create one. Ensure the storage account allows access only from selected networks.
- def execute(self, context):
- message = f"Hello {self.name}"
- return message
-```
+## Develop a custom operator
-- To create Python package for this file, Refer to the guide: [Creating a package in python](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/modules_management.html#creating-a-package-in-python)
+1. Create the file `sample_operator.py`.
-- Create a dag file, `sample_dag.py` to test your operator.
-```python
-from airflow_operator.hello_operator import SampleCustomOperator
-from airflow import DAG
+ ```python
+ from airflow.models.baseoperator import BaseOperator
+
+
+ class SampleOperator(BaseOperator):
+ def __init__(self, name: str, **kwargs) -> None:
+ super().__init__(**kwargs)
+ self.name = name
+
+ def execute(self, context):
+ message = f"Hello {self.name}"
+ return message
+ ```
+1. To create a Python package for this file, see [Create a package in Python](https://airflow.apache.org/docs/apache-airflow/stable/administration-and-deployment/modules_management.html#creating-a-package-in-python). A minimal packaging sketch also follows these steps.
-with DAG(
- "tutorial",
- tags=["example"],
-) as dag:
- sample_task = SampleCustomOperator(task_id="sample-task", name="foo_bar")
-```
+1. Create the DAG file `sample_dag.py` to test your operator.
+
+    ```python
+    from airflow_operator.sample_operator import SampleOperator
+    from airflow import DAG
+
+
+    with DAG(
+        "tutorial",
+        tags=["example"],
+    ) as dag:
+        sample_task = SampleOperator(task_id="sample-task", name="foo_bar")
+    ```
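+
+The packaging guide linked in step 2 covers the details. As a minimal sketch, assuming the operator module lives in a folder named `airflow_operator` that contains `__init__.py` and `sample_operator.py`, a `setup.py` like the following can build the wheel; the package name and version are placeholders.
+
+```python
+# Minimal packaging sketch (illustrative only). Assumed layout:
+#   airflow_operator/__init__.py
+#   airflow_operator/sample_operator.py
+#   setup.py
+from setuptools import setup, find_packages
+
+setup(
+    name="airflow_operator",
+    version="0.0.1",
+    packages=find_packages(),
+)
+```
+
+Running `pip wheel --no-deps -w dist .` (or `python -m build`) in that folder produces the `.whl` file that you upload in the following sections.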
-### Step 2: Create a storage container.
+## Create a storage container
-Use the steps described in [Manage blob containers using the Azure portal](/azure/storage/blobs/blob-containers-portal) to create a storage account to upload dag and your package file.
+Use the steps described in [Manage blob containers using the Azure portal](/azure/storage/blobs/blob-containers-portal) to create a storage container for uploading DAGs and your package file.
-### Step 3: Upload the private package into your storage account.
+## Upload the private package into your storage account
-1. Navigate to the designated container where you intend to store your Airflow DAGs and Plugins files.
-1. Upload your private package file to the container. Common file formats include `zip`, `.whl`, or `tar.gz`. Place the file within either the 'Dags' or 'Plugins' folder, as appropriate.
+1. Go to the designated container where you intend to store your Airflow DAG and Plugin files.
+1. Upload your private package file to the container. Common file formats include `.zip`, `.whl`, and `.tar.gz`. Place the file within either the `Dags` or `Plugins` folder, as appropriate. (A minimal upload sketch follows these steps.)
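+
+As a minimal sketch of this upload, assuming the `azure-storage-blob` package and connection-string authentication, the following Python snippet copies a wheel into the container's `dags` folder; the container name, file names, and connection string are placeholders.
+
+```python
+# Illustrative sketch only: upload a private wheel into the container's dags folder.
+# All bracketed values and file names are placeholders.
+from azure.storage.blob import BlobServiceClient
+
+service = BlobServiceClient.from_connection_string("<STORAGE_CONNECTION_STRING>")
+container = service.get_container_client("<CONTAINER_NAME>")
+
+with open("dist/private.whl", "rb") as data:
+    container.upload_blob(name="dags/test/private.whl", data=data, overwrite=True)
+```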
-### Step 4: Add your private package as a requirement.
+## Add your private package as a requirement
-1. Add your private package as a requirement in the requirements.txt file. Add this file if it doesn't already exist.
-1. Be sure to prepend the prefix "**/opt/airflow**" to the package path. For instance, if your private package resides at _/dats/test/private.wht_, your requirements.txt file should feature the requirement _/opt/airflow/dags/test/private.wht_.
+1. Add your private package as a requirement in the `requirements.txt` file. Add this file if it doesn't already exist.
+1. Be sure to prepend the prefix `/opt/airflow` to the package path. For instance, if your private package resides at `/dags/test/private.whl`, your `requirements.txt` file should feature the requirement `/opt/airflow/dags/test/private.whl`.
-### Step 5: Import your folder to an Airflow integrated runtime (IR) environment.
+## Import your folder to an Airflow integration runtime environment
-When performing the import of your folder into an Airflow IR environment, ensure that you check the import requirements checkbox to load your requirements inside your airflow env.
+When you import your folder into an Airflow integration runtime environment, select the **Import requirements** checkbox to load your requirements inside your Airflow environment.
-### Step 6: Inside Airflow UI, you can run your dag file created at step 1, to check if import is successful.
+### Check the import
+Inside the Airflow UI, you can run the DAG file you created in step 1 to check if the import was successful.
## Related content
data-factory Airflow Get Ip Airflow Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-get-ip-airflow-cluster.md
Last updated 09/19/2023
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-This document explains how to enhance security of your data stores and resources by restricting access solely to your Managed Airflow cluster. To achieve this, you'll walk through the process of retrieving and adding the unique IP address associated with your Managed Airflow cluster to your storage firewall's allowlist. This enables you to access data stores or resources through the list of permitted IP addresses on firewall's allowlist, thus preventing access from all other IP addresses via the public endpoint.
+This article explains how to enhance security of your data stores and resources by restricting access solely to your Azure Data Factory Managed Airflow cluster. In this article, you walk through the process of retrieving and adding the unique IP address associated with your Managed Airflow cluster to your storage firewall's allowlist. This process enables you to access data stores or resources through the list of permitted IP addresses on the firewall's allowlist. Access from all other IP addresses via the public endpoint is prevented.
> [!NOTE]
-> Importing DAGs is currently not supported using blob storage with IP allow listing or using private endpoints. We suggest using Git-sync instead.
+> Importing DAGs is currently not supported by using blob storage with IP allow listing or by using private endpoints. We suggest using Git sync instead.
-### Step 1: Retrieve the bearer token for the Airflow API.
-- Similar to the authentication process used in the standard Azure REST API, acquiring an access token from Azure AD is required before making a call to the Airflow REST API. A guide on how to obtain the token from Azure AD can be found at [https://learn.microsoft.com/rest/api/azure](/rest/api/azure).-- Additionally, the service principal used to obtain the access token needs to have atleast a **contributor role** on the Data Factory where the Airflow Integration Runtime is located.
-
-For more information, see the below screenshots.
+## Prerequisites
-1. Use Azure AD API call to get access token.
+- **Azure subscription**: If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
- :::image type="content" source="media/airflow-get-ip-airflow-cluster/get-access-token.png" alt-text="Screenshot showing the API used to retrieve the access token to invoke airflow apis." lightbox="media/airflow-get-ip-airflow-cluster/get-access-token.png":::
+### Retrieve the bearer token for the Airflow API
-2. Use the access token acquired as a bearer token from step 1 to invoke the Airflow API.
-
- :::image type="content" source="media/airflow-get-ip-airflow-cluster/get-dags.png" alt-text="Screenshot showing sample airflow API request using bearer token fetched in initial step." lightbox="media/airflow-get-ip-airflow-cluster/get-dags.png":::
+- Similar to the authentication process used in the standard Azure REST API, acquiring an access token from Microsoft Entra ID is required before you make a call to the Airflow REST API. For more information on how to obtain the token from Microsoft Entra ID, see [Azure REST API reference](/rest/api/azure).
+- Also, the service principal used to obtain the access token needs to have at least a Contributor role on the Azure Data Factory instance where the Airflow integration runtime is located.
-### Step 2: Retrieve the Managed Airflow cluster's IP address.
+For more information, see the following screenshots.
-1. Using Managed Airflow's UI:
+1. Use the Microsoft Entra ID API call to get an access token.
- :::image type="content" source="media/airflow-get-ip-airflow-cluster/get-cluster-ip-from-ui.png" alt-text="Screenshot showing how to retrieve cluster's IP using UI." lightbox="media/airflow-get-ip-airflow-cluster/get-cluster-ip-from-ui.png":::
+ :::image type="content" source="media/airflow-get-ip-airflow-cluster/get-access-token.png" alt-text="Screenshot that shows the API used to retrieve the access token to invoke Airflow APIs." lightbox="media/airflow-get-ip-airflow-cluster/get-access-token.png":::
-2. Using Rest API:
- Refer to the documentation [Managed Airflow IP address - Get](/rest/api/datafactory/integration-runtimes/get?tabs=HTTP#code-try-0).
+1. Use the access token acquired as a bearer token from step 1 to invoke the Airflow API.
- You should retrieve the Airflow cluster's IP address from the response as shown in the screenshot:
-
- #### Sample Response:
+ :::image type="content" source="media/airflow-get-ip-airflow-cluster/get-dags.png" alt-text="Screenshot that shows a sample Airflow API request using a bearer token fetched in the initial step." lightbox="media/airflow-get-ip-airflow-cluster/get-dags.png":::
- :::image type="content" source="media/airflow-get-ip-airflow-cluster/get-cluster-ip-from-api.png" alt-text="Screenshot showing how to retrieve cluster's IP using API." lightbox="media/airflow-get-ip-airflow-cluster/get-cluster-ip-from-api.png":::
+### Retrieve the Managed Airflow cluster's IP address
-### Step 3: Add the Managed Airflow cluster IP address to the storage account you want to secure
+1. Use the Managed Airflow UI.
+
+ :::image type="content" source="media/airflow-get-ip-airflow-cluster/get-cluster-ip-from-ui.png" alt-text="Screenshot that shows how to retrieve a cluster's IP by using the UI." lightbox="media/airflow-get-ip-airflow-cluster/get-cluster-ip-from-ui.png":::
+
+1. Use the Rest API.
+ For more information, see [Managed Airflow IP address - Get](/rest/api/datafactory/integration-runtimes/get?tabs=HTTP#code-try-0).
+
+   You should retrieve the Airflow cluster's IP address from the response, as shown in the screenshot. A Python sketch of this call follows these steps.
+
+ #### Sample response
+
+ :::image type="content" source="media/airflow-get-ip-airflow-cluster/get-cluster-ip-from-api.png" alt-text="Screenshot that shows how to retrieve a cluster's IP by using an API." lightbox="media/airflow-get-ip-airflow-cluster/get-cluster-ip-from-api.png":::
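+
+The following Python sketch illustrates the REST API call from the previous step, assuming the `azure-identity` and `requests` packages and a service principal with at least the Contributor role on the data factory; all bracketed values are placeholders.
+
+```python
+# Illustrative sketch only: get the Airflow integration runtime details via the ARM REST API.
+import requests
+from azure.identity import ClientSecretCredential
+
+credential = ClientSecretCredential(
+    tenant_id="<TENANT_ID>",
+    client_id="<CLIENT_ID>",
+    client_secret="<CLIENT_SECRET>",
+)
+token = credential.get_token("https://management.azure.com/.default").token
+
+url = (
+    "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>"
+    "/resourcegroups/<RESOURCE_GROUP>/providers/Microsoft.DataFactory"
+    "/factories/<DATA_FACTORY_NAME>/integrationruntimes/<AIRFLOW_IR_NAME>"
+    "?api-version=2018-06-01"
+)
+
+response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
+response.raise_for_status()
+print(response.json())  # The cluster's IP address appears in the response, as in the screenshot.
+```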
+
+### Add the Managed Airflow cluster IP address to the storage account you want to secure
> [!NOTE]
-> You can add the Managed Airflow IP address to other storage services as well like Azure SQL DB, Azure Key Vault, etc.
+> You can add the Managed Airflow IP address to other storage services too, like Azure SQL Database and Azure Key Vault.
-- To add managed Airflow Cluster IP address into Azure Key Vault, refer to [Azure SQL Database and Azure Synapse IP firewall rules](/azure/key-vault/general/network-security) -- To add managed Airflow Cluster IP address into Azure Blob Storage, refer to [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security?tabs=azure-portal#grant-access-from-an-internet-ip-range)-- To add managed Airflow Cluster IP address into Azure SQL Database, refer to [Configure Azure Key Vault firewalls and virtual networks](/azure/azure-sql/database/firewall-configure)-- To add managed Airflow Cluster IP address into Azure PostgreSQL Database, refer to [Create and manage firewall rules for Azure Database for PostgreSQL - Single Server using the Azure portal](/azure/postgresql/single-server/how-to-manage-firewall-using-portal)
+- To add a Managed Airflow cluster IP address in Azure Key Vault, see [Configure Azure Key Vault firewalls and virtual networks](/azure/key-vault/general/network-security).
+- To add a Managed Airflow cluster IP address in Azure Blob Storage, see [Configure Azure Storage firewalls and virtual networks](/azure/storage/common/storage-network-security?tabs=azure-portal#grant-access-from-an-internet-ip-range).
+- To add a Managed Airflow cluster IP address in Azure SQL Database, see [Azure SQL Database and Azure Synapse IP firewall rules](/azure/azure-sql/database/firewall-configure).
+- To add a Managed Airflow cluster IP address in Azure PostgreSQL Database, see [Create and manage firewall rules for Azure Database for PostgreSQL - Single server using the Azure portal](/azure/postgresql/single-server/how-to-manage-firewall-using-portal).
## Related content - [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)-- [How to change the password for Managed Airflow environments](password-change-airflow.md)
+- [Change the password for Managed Airflow environments](password-change-airflow.md)
data-factory Airflow Import Dags Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-import-dags-blob-storage.md
Title: Import Airflow DAGs using Azure Blob Storage
+ Title: Import Airflow DAGs by using Azure Blob Storage
-description: This document shows the steps required to import Airflow DAGs using Azure Blob Storage
+description: This document shows the steps required to import Airflow DAGs by using Azure Blob Storage.
Last updated 10/20/2023
-# Import DAGs using Azure Blob Storage
+# Import DAGs by using Azure Blob Storage
-This guide will give you step by step instructions on how to import DAGs into Managed Airflow using Azure Blob storage.
+This article shows you step-by-step instructions on how to import directed acyclic graphs (DAGs) into Azure Data Factory Managed Airflow by using Azure Blob Storage.
## Prerequisites -- **Azure subscription** - If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. Create or select an existing [Data Factory](https://azure.microsoft.com/products/data-factory#get-started) in a [region where the Managed Airflow preview is supported](concept-managed-airflow.md#region-availability-public-preview).-- **Azure storage account** - If you don't have a storage account, see [Create an Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) for steps to create one. Ensure the storage account allows access only from selected networks.
+- **Azure subscription**: If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+- **Azure Data Factory**: Create or select an existing [Data Factory](https://azure.microsoft.com/products/data-factory#get-started) instance in a [region where the Managed Airflow preview is supported](concept-managed-airflow.md#region-availability-public-preview).
+- **Azure Storage account**: If you don't have a storage account, see [Create an Azure Storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) for steps to create one. Ensure the storage account allows access only from selected networks.
-> [!NOTE]
-> Blob Storage behind VNet are not supported during the preview.
-> KeyVault configuration in storageLinkedServices not supported to import dags.
+Blob Storage behind virtual networks isn't supported during the preview. Azure Key Vault configuration in `storageLinkedServices` isn't supported to import DAGs.
+## Import DAGs
-## Steps to import DAGs
-1. Copy-paste the content (either [Sample Apache Airflow v2.x DAG](https://airflow.apache.org/docs/apache-airflow/stable/tutorial/fundamentals.html) or [Sample Apache Airflow v1.10 DAG](https://airflow.apache.org/docs/apache-airflow/1.10.11/_modules/airflow/example_dags/tutorial.html) based on the Airflow environment that you have setup) into a new file called as **tutorial.py**.
+1. Copy either [Sample Apache Airflow v2.x DAG](https://airflow.apache.org/docs/apache-airflow/stable/tutorial/fundamentals.html) or [Sample Apache Airflow v1.10 DAG](https://airflow.apache.org/docs/apache-airflow/1.10.11/_modules/airflow/example_dags/tutorial.html) based on the Airflow environment that you set up. Paste the content into a new file called *tutorial.py*.
- Upload the **tutorial.py** to a blob storage. ([How to upload a file into blob](../storage/blobs/storage-quickstart-blobs-portal.md))
+ Upload the *tutorial.py* file to Blob Storage. For more information, see [Upload a file into a blob](../storage/blobs/storage-quickstart-blobs-portal.md).
> [!NOTE]
- > You will need to select a directory path from a blob storage account that contains folders named **dags** and **plugins** to import those into the Airflow environment. **Plugins** are not mandatory. You can also have a container named **dags** and upload all Airflow files within it.
+ > You need to select a directory path from a Blob Storage account that contains folders named *dags* and *plugins* to import them into the Airflow environment. Plugins aren't mandatory. You can also have a container named **dags** and upload all Airflow files within it.
-1. Select on **Apache Airflow** under **Manage** hub. Then hover over the earlier created **Airflow** environment and select on **Import files** to Import all DAGs and dependencies into the Airflow Environment.
+1. Under the **Manage** hub, select **Apache Airflow**. Then hover over the previously created **Airflow** environment and select **Import files** to import all DAGs and dependencies into the Airflow environment.
- :::image type="content" source="media/how-does-managed-airflow-work/import-files.png" alt-text="Screenshot shows import files in manage hub." lightbox="media/how-does-managed-airflow-work/import-files.png":::
+ :::image type="content" source="media/how-does-managed-airflow-work/import-files.png" alt-text="Screenshot that shows importing files in the Manage hub." lightbox="media/how-does-managed-airflow-work/import-files.png":::
-1. Create a new Linked Service to the accessible storage account mentioned in the prerequisite (or use an existing one if you already have your own DAGs).
+1. Create a new linked service to the accessible storage account mentioned in the "Prerequisites" section. You can also use an existing one if you already have your own DAGs.
:::image type="content" source="media/how-does-managed-airflow-work/create-new-linked-service.png" alt-text="Screenshot that shows how to create a new linked service." lightbox="media/how-does-managed-airflow-work/create-new-linked-service.png":::
-1. Use the storage account where you uploaded the DAG (check prerequisite). Test connection, then select **Create**.
+1. Use the storage account where you uploaded the DAG. (Check the "Prerequisites" section.) Test the connection and then select **Create**.
- :::image type="content" source="media/how-does-managed-airflow-work/linked-service-details.png" alt-text="Screenshot shows some linked service details." lightbox="media/how-does-managed-airflow-work/linked-service-details.png":::
+ :::image type="content" source="media/how-does-managed-airflow-work/linked-service-details.png" alt-text="Screenshot that shows some linked service details." lightbox="media/how-does-managed-airflow-work/linked-service-details.png":::
-1. Browse and select **airflow** if using the sample SAS URL or select the folder that contains **dags** folder with DAG files.
+1. Browse and select **airflow** if you're using the sample SAS URL. You can also select the folder that contains the *dags* folder with DAG files.
> [!NOTE]
- > You can import DAGs and their dependencies through this interface. You will need to select a directory path from a blob storage account that contains folders named **dags** and **plugins** to import those into the Airflow environment. **Plugins** are not mandatory.
+ > You can import DAGs and their dependencies through this interface. You need to select a directory path from a Blob Storage account that contains folders named *dags* and *plugins* to import those into the Airflow environment. Plugins aren't mandatory.
- :::image type="content" source="media/how-does-managed-airflow-work/browse-storage.png" alt-text="Screenshot shows browse storage in import files." lightbox="media/how-does-managed-airflow-work/browse-storage.png" :::
+ :::image type="content" source="media/how-does-managed-airflow-work/browse-storage.png" alt-text="Screenshot that shows the Browse storage button on the Import Files screen." lightbox="media/how-does-managed-airflow-work/browse-storage.png" :::
- :::image type="content" source="media/how-does-managed-airflow-work/browse.png" alt-text="Screenshot that shows browse in airflow." lightbox="media/how-does-managed-airflow-work/browse.png" :::
+ :::image type="content" source="media/how-does-managed-airflow-work/browse.png" alt-text="Screenshot that shows the airflow root folder on Browse." lightbox="media/how-does-managed-airflow-work/browse.png" :::
- :::image type="content" source="media/how-does-managed-airflow-work/import-in-import-files.png" alt-text="Screenshot shows import in import files." lightbox="media/how-does-managed-airflow-work/import-in-import-files.png" :::
+1. Select **Import** to import files.
- :::image type="content" source="media/how-does-managed-airflow-work/import-dags.png" alt-text="Screenshot shows import dags." lightbox="media/how-does-managed-airflow-work/import-dags.png" :::
+ :::image type="content" source="media/how-does-managed-airflow-work/import-in-import-files.png" alt-text="Screenshot that shows the Import button on the Import Files screen." lightbox="media/how-does-managed-airflow-work/import-in-import-files.png" :::
-> [!NOTE]
-> Importing DAGs could take a couple of minutes during **Preview**. The notification center (bell icon in ADF UI) can be used to track the import status updates.
+ :::image type="content" source="media/how-does-managed-airflow-work/import-dags.png" alt-text="Screenshot that shows importing DAGs." lightbox="media/how-does-managed-airflow-work/import-dags.png" :::
+
+Importing DAGs could take a couple of minutes during the preview. You can use the notification center (bell icon in the Data Factory UI) to track import status updates.
data-factory Airflow Sync Github Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-sync-github-repository.md
Title: Sync a GitHub repository in Managed Airflow
-description: This article provides step-by-step instructions for how to sync a GitHub repository using Managed Airflow in Azure Data Factory.
+description: This article provides step-by-step instructions for how to sync a GitHub repository by using Managed Airflow in Azure Data Factory.
Last updated 09/19/2023
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-In this guide, you will learn how to synchronize your GitHub repository in Managed Airflow in two different ways.
+In this article, you learn how to synchronize your GitHub repository in Azure Data Factory Managed Airflow in two different ways:
-- Using the ``Enable Git Sync`` in the Managed Airflow UI-- Using the Rest API
+- By using **Enable git sync** in the Managed Airflow UI.
+- By using the Rest API.
## Prerequisites -- **Azure subscription** - If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. Create or select an existing [Data Factory](https://azure.microsoft.com/products/data-factory#get-started) in a [region where the Managed Airflow preview is supported](concept-managed-airflow.md#region-availability-public-preview).-- **Access to a GitHub repository**
+- **Azure subscription**: If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. Create or select an existing [Data Factory](https://azure.microsoft.com/products/data-factory#get-started) instance in a [region where the Managed Airflow preview is supported](concept-managed-airflow.md#region-availability-public-preview).
+- **GitHub repository**: You need access to a GitHub repository.
-## Using the Managed Airflow UI
+## Use the Managed Airflow UI
-The following steps describe how to sync your GitHub repository using Managed Airflow UI:
+To sync your GitHub repository by using the Managed Airflow UI:
-1. Ensure your repository contains the necessary folders and files.
- - **Dags/** - for Apache Airflow Dags (required)
- - **Plugins/** - for integrating external features to Airflow.
- :::image type="content" source="media/airflow-git-sync-repository/airflow-folders.png" alt-text="Screenshot showing the Airflow folders structure in GitHub.":::
+1. Ensure that your repository contains the necessary folders and files:
+ - **Dags/**: For Apache Airflow directed acyclic graphs (DAGs) (required).
+ - **Plugins/**: For integrating external features to Airflow.
-1. While creating an Airflow integrated runtime (IR), select **Enable git sync** on the Airflow environment setup dialog.
+ :::image type="content" source="media/airflow-git-sync-repository/airflow-folders.png" alt-text="Screenshot that shows the Airflow folders structure in GitHub.":::
- :::image type="content" source="media/airflow-git-sync-repository/enable-git-sync.png" alt-text="Screenshot showing the Enable git sync checkbox on the Airflow environment setup dialog that appears during creation of an Airflow IR.":::
+1. When you create an Airflow integration runtime, select **Enable git sync** in the **Airflow environment setup** dialog.
-1. Select one of the following supported git service types:
- - GitHub
- - ADO
- - GitLab
- - Bitbucket
+ :::image type="content" source="media/airflow-git-sync-repository/enable-git-sync.png" alt-text="Screenshot that shows the Enable git sync checkbox in the Airflow environment setup dialog that appears during creation of an Airflow integration runtime.":::
+
+1. Select one of the following supported Git service types:
+ - **GitHub**
+ - **ADO**
+ - **GitLab**
+ - **BitBucket**
+
+ :::image type="content" source="media/airflow-git-sync-repository/git-service-type.png" alt-text="Screenshot that shows the Git service type selection dropdown in the Airflow environment setup dialog that appears during creation of an Airflow integration runtime.":::
+
+1. Select a credential type:
+
+ - **None** (for a public repo): When you select this option, make sure that your repository's visibility is public. Then fill out the details:
+ - **Git repo url** (required): The clone URL for the GitHub repository you want.
+ - **Git branch** (required): The current branch, where the Git repository you want is located.
+ - **Git personal access token**:
+ After you select this option for a personal access token (PAT), fill out the remaining fields based on the selected **Git service type**:
+ - GitHub personal access token
+ - ADO personal access token
+ - GitLab personal access token
+ - BitBucket personal access token
- :::image type="content" source="media/airflow-git-sync-repository/git-service-type.png" alt-text="Screenshot showing the Git service type selection dropdown on the Airflow environment setup dialog that appears during creation of an Airflow IR.":::
-
-1. Select credential type:
-
- - **None** (for a public repo)
- When you select this option, make sure to make your repositoryΓÇÖs visibility is public. Once you select this option, fill out the details:
- - **Git Repo URL** (required): The clone URL for your desired GitHub repository
- - **Git branch** (required): The current branch, where your desired git repository is located
- - **PAT** (Personal Access Token)
- Once you select this option, fill out the remaining fields based upon on the selected Git Service type:
- - GitHub Personal Access Token
- - ADO Personal Access Token
- - GitLab Personal Access Token
- - Bitbucket Personal Access Token
- :::image type="content" source="media/airflow-git-sync-repository/git-pat-credentials.png" alt-text="Screenshot showing the Git PAT credential options on the Airflow environment setup dialog that appears during creation of an Airflow IR.":::
- - **SPN** ([Service Principal Name](https://devblogs.microsoft.com/devops/introducing-service-principal-and-managed-identity-support-on-azure-devops/) - Only ADO supports this credential type.)
- Once you select this option, fill out the remaining fields based upon on the selected **Git service type**:
- - **Git repo URL** (Required): The clone URL to the git repository to sync
- - **Git branch** (Required): The branch in the repository to sync
- - **Service principal app id** (Required): The service principal app id with access to the ADO repo to sync
- - **Service principal secret** (Required): A manually generated secret in service principal whose value is to be used to authenticate and access the ADO repo
- - **Service principal tenant id** (Required): The service principal tenant id
- :::image type="content" source="media/airflow-git-sync-repository/git-spn-credentials.png" alt-text="Screenshot showing the Git SPN credential options on the Airflow environment setup dialog that appears during creation of an Airflow IR.":::
+ :::image type="content" source="media/airflow-git-sync-repository/git-pat-credentials.png" alt-text="Screenshot that shows the Git PAT credential options in the Airflow environment setup dialog that appears during creation of an Airflow integration runtime.":::
+ - **SPN** ([service principal name](https://devblogs.microsoft.com/devops/introducing-service-principal-and-managed-identity-support-on-azure-devops/)): Only ADO supports this credential type.
+ After you select this option, fill out the remaining fields based on the selected **Git service type**:
+ - **Git repo url** (required): The clone URL to the Git repository to sync.
+ - **Git branch** (required): The branch in the repository to sync.
+ - **Service principal app id** (required): The service principal app ID with access to the ADO repo to sync.
+ - **Service principal secret** (required): A manually generated secret in the service principal whose value is used to authenticate and access the ADO repo.
+ - **Service principal tenant id** (required): The service principal tenant ID.
+
+ :::image type="content" source="media/airflow-git-sync-repository/git-spn-credentials.png" alt-text="Screenshot that shows the Git SPN credential options in the Airflow environment setup dialog that appears during creation of an Airflow integration runtime.":::
1. Fill in the rest of the fields with the required information.
-1. Select Create.
+1. Select **Create**.
-## Using the REST API
+## Use the REST API
-The following steps describe how to sync your GitHub repository using the Rest APIs:
+To sync your GitHub repository by using the Rest API:
- **Method**: PUT - **URL**: ```https://management.azure.com/subscriptions/<subscriptionid>/resourcegroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<datafactoryName>/integrationruntimes/<airflowEnvName>?api-version=2018-06-01```
The following steps describe how to sync your GitHub repository using the Rest A
|Type |string |The resource type (**Airflow** in this scenario) | |typeProperties |typeProperty |Airflow | -- **Type property**
+- **Type property**:
|Name |Type |Description | ||||
- |computeProperties |computeProperty |Configuration of the compute type used for the environment. |
- |airflowProperties |airflowProperty |Configuration of the Airflow properties for the environment. |
+ |computeProperties |computeProperty |Configuration of the compute type used for the environment |
+ |airflowProperties |airflowProperty |Configuration of the Airflow properties for the environment |
-- **Compute property**
+- **Compute property**:
|Name |Type |Description | ||||
- |location |string |The Airflow integrated runtime location defaults to the data factory region. To create an integrated runtime in a different region, create a new data factory in the required region. |
- | computeSize | string |The size of the compute node you want your Airflow environment to run on. Example: ΓÇ£LargeΓÇ¥, ΓÇ£SmallΓÇ¥. 3 nodes are allocated initially. |
- | extraNodes | integer |Each extra node adds 3 more workers. |
+ |location |string |The Airflow integration runtime location defaults to the data factory region. To create an integration runtime in a different region, create a new data factory in the required region. |
+ | computeSize | string |The size of the compute node you want your Airflow environment to run on. Examples are Large or Small. Three nodes are allocated initially. |
+ | extraNodes | integer |Each extra node adds three more workers. |
-- **Airflow property**
+- **Airflow property**:
|Name |Type |Description | ||||
- |airflowVersion | string | Current version of Airflow (Example: 2.4.3) |
- |airflowRequirements | Array\<string\> | Python libraries you wish to use. Example: ["flask-bcrypy=0.7.1"]. Can be a comma delimited list. |
- |airflowEnvironmentVariables | Object (Key/Value pair) | Environment variables you wish to use. Example: { ΓÇ£SAMPLE_ENV_NAMEΓÇ¥: ΓÇ£testΓÇ¥ } |
- |gitSyncProperties | gitSyncProperty | Git configuration properties |
- |enableAADIntegration | boolean | Allows Microsoft Entra ID to login to Airflow |
- |userName | string or null | Username for Basic Authentication |
- |password | string or null | Password for Basic Authentication |
+ |airflowVersion | string | Current version of Airflow. For example, 2.4.3. |
+ |airflowRequirements | Array\<string\> | Python libraries you want to use. For example, ["flask-bcrypy=0.7.1"]. Can be a comma-delimited list. |
+ |airflowEnvironmentVariables | Object (Key/Value pair) | Environment variables you want to use. For example, { "SAMPLE_ENV_NAME": "test" }. |
+ |gitSyncProperties | gitSyncProperty | Git configuration properties. |
+ |enableAADIntegration | boolean | Allows Microsoft Entra ID to log in to Airflow. |
+ |userName | string or null | Username for Basic Authentication. |
+ |password | string or null | Password for Basic Authentication. |
-- **Git sync property**
+- **Git sync property**:
|Name |Type |Description | ||||
- |gitServiceType | string | The Git service your desired repo is located in. Values: GitHub, AOD, GitLab, or BitBucket |
- |gitCredentialType | string | Type of Git credential. Values: PAT (for Personal Access Token), SPN (supported only by ADO), None |
- |repo | string | Repository link |
- |branch | string | Branch to use in the repository |
- |username | string | GitHub username |
- |Credential | string | Value of the Personal Access Token |
- |tenantId | string | The service principal tenant id (supported only by ADO) |
+ |gitServiceType | string | The Git service where your desired repository is located. Values are GitHub, ADO, GitLab, or BitBucket. |
+ |gitCredentialType | string | Type of Git credential. Values are PAT (for personal access token), SPN (supported only by ADO), and None. |
+ |repo | string | Repository link. |
+ |branch | string | Branch to use in the repository. |
+ |username | string | GitHub username. |
+ |Credential | string | Value of the PAT. |
+ |tenantId | string | The service principal tenant ID (supported only by ADO). |
-- **Responses**
+- **Responses**:
|Name |Status code |Type |Description | ||||-| |Accepted | 200 | [Factory](/rest/api/datafactory/factories/get?tabs=HTTP#factory) | OK |
- |Unauthorized | 401 | [Cloud Error](/rest/api/datafactory/factories/get?tabs=HTTP#clouderror) | Array with additional error details |
+ |Unauthorized | 401 | [Cloud Error](/rest/api/datafactory/factories/get?tabs=HTTP#clouderror) | Array with more error details |
### Examples
+Review the following examples.
+ Sample request: ```rest
HTTP
PUT https://management.azure.com/subscriptions/222f1459-6ebd-4896-82ab-652d5f6883cf/resourcegroups/abnarain-rg/providers/Microsoft.DataFactory/factories/ambika-df/integrationruntimes/sample-2?api-version=2018-06-01 ```
-Sample Body:
+Sample body:
```rest {
Sample Body:
} ```
-Sample Response:
+Sample response:
```rest Status code: 200 OK ```
-Response Body:
+Response body:
```rest {
Response Body:
Here are some API payload examples: -- Git sync properties for GitHub with PAT:
+- Git sync properties for GitHub with PAT:
+ ```rest "gitSyncProperties": { "gitServiceType": "Github",
Here are some API payload examples:
"credential": <personal access token> } ```
-
-- Git sync properties for ADO with PAT: +
+- Git sync properties for ADO with PAT:
+ ```rest "gitSyncProperties": { "gitServiceType": "ADO",
Here are some API payload examples:
"username": <username>, "credential": <personal access token> }```
-
-- Git sync properties for ADO with Service Principal: +
+- Git sync properties for ADO with service principal:
+ ```rest "gitSyncProperties": { "gitServiceType": "ADO",
Here are some API payload examples:
"credential": <service principal secret value> "tenantId": <service principal tenant id> }```
-
-- Git sync properties for GitHub public repo: +
+- Git sync properties for a GitHub public repo:
+ ```rest "gitSyncProperties": { "gitServiceType": "Github",
Here are some API payload examples:
"branch": <repo branch to sync> }```
-## Importing a private package with git-sync (Optional - only applies when using private packages)
+## Import a private package with Git sync
+
+This optional process only applies when you use private packages.
+
+This process assumes that your private package was autosynced via Git sync. You add the package as a requirement in the Data Factory Airflow UI along with the path prefix `/opt/airflow/git/<repoName>/` if you're connecting to an ADO repo. Use `/opt/airflow/git/<repoName>.git/` for all other Git services.
-Assuming your private package has already been auto synced via git-sync, all you need to do is add the package as a requirement in the data factory Airflow UI along with the path prefix _/opt/airflow/git/\<repoName\>/__ if you are connecting to an ADO repo or _/opt/airflow/git/\<repoName\>.git/_ for all other git services. For example, if your private package is in _/dags/test/private.whl_ in a GitHub repo, then you should add the requirement _/opt/airflow/git/\<repoName\>.git/dags/test/private.whl_ to the Airflow environment.
+For example, if your private package is in `/dags/test/private.whl` in a GitHub repo, you should add the requirement `/opt/airflow/git/<repoName>.git/dags/test/private.whl` to the Airflow environment.
## Related content - [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)-- [How to change the password for Managed Airflow environments](password-change-airflow.md)
+- [Change the password for Managed Airflow environments](password-change-airflow.md)
data-factory Create Managed Airflow Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-managed-airflow-environment.md
Title: Create a Managed Airflow environment
-description: Learn how to create a Managed Airflow environment
+description: Learn how to create a Managed Airflow environment.
Last updated 10/20/2023
# Create a Managed Airflow environment
-The following steps set up and configure your Managed Airflow environment.
+
+This article describes how to set up and configure your Azure Data Factory Managed Airflow environment.
## Prerequisites
-**Azure subscription**: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
- Create or select an existing Data Factory in the region where the managed airflow preview is supported.
-## Steps to create the environment
-1. Create new Managed Airflow environment.
- Go to **Manage** hub -> **Airflow (Preview)** -> **+New** to create a new Airflow environment
+- **Azure subscription**: If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+- **Azure Data Factory**: Create or select an existing Data Factory instance in the region where the Managed Airflow preview is supported.
+
+## Create the environment
+
+To create a new Managed Airflow environment:
+
+1. Go to the **Manage** hub and select **Airflow (Preview)** > **+ New** to open the **Airflow environment setup** page.
- :::image type="content" source="media/how-does-managed-airflow-work/create-new-airflow.png" alt-text="Screenshot that shows how to create a new Managed Apache Airflow environment.":::
+ :::image type="content" source="media/how-does-managed-airflow-work/create-new-airflow.png" alt-text="Screenshot that shows how to create a new Managed Airflow environment.":::
-1. Provide the details (Airflow config)
+1. Enter information and select options for your Airflow configuration.
- :::image type="content" source="media/how-does-managed-airflow-work/airflow-environment-details.png" alt-text="Screenshot that shows Managed Airflow environment details." lightbox="media/how-does-managed-airflow-work/airflow-environment-details.png":::
+ :::image type="content" source="media/how-does-managed-airflow-work/airflow-environment-details.png" alt-text="Screenshot that shows Airflow environment details." lightbox="media/how-does-managed-airflow-work/airflow-environment-details.png":::
> [!IMPORTANT]
- > When using **Basic** authentication, remember the username and password specified in this screen. It will be needed to login later in the Managed Airflow UI. The default option is **Azure AD** and it does not require creating username/ password for your Airflow environment, but instead uses the logged in user's credential to Azure Data Factory to login/ monitor DAGs.
-1. **Enable git sync"** Allow your Airflow environment to automatically sync with a git repository instead of manually importing DAGs. Refer to [Sync a GitHub repository in Managed Airflow](airflow-sync-github-repository.md)
-1. **Airflow configuration overrides** You can override any Airflow configurations that you set in `airflow.cfg`. For example, ``name: AIRFLOW__VAR__FOO``, ``value: BAR``. For more information, see [Airflow Configurations](airflow-configurations.md)
-1. **Environment variables** a simple key value store within Airflow to store and retrieve arbitrary content or settings.
-1. **Requirements** can be used to preinstall python libraries. You can update these requirements later as well.
-1. **Kubernetes secrets** Custom Kubernetes secret you wish to add in your Airflow environment. For Example: [Private registry credentials to pull images for KubernetesPodOperator](kubernetes-secret-pull-image-from-private-container-registry.md)
-1. After filling out all the details according to the requirements. Click on ``Create`` Button.
+ > When you use **Basic** authentication, remember the username and password specified on this page. You need them to sign in later in the Airflow UI. The default option is **Azure AD**. It doesn't require creating a username and password for your Airflow environment. Instead, it uses the signed-in user's credential for Azure Data Factory to sign in and monitor directed acyclic graphs (DAGs).
+
+ More options on the **Airflow environment setup** page:
+
+ - **Enable git sync**: You can allow your Airflow environment to automatically sync with a Git repository instead of manually importing DAGs. For more information, see [Sync a GitHub repository in Managed Airflow](airflow-sync-github-repository.md).
+ - **Airflow configuration overrides**: You can override any Airflow configurations that you set in `airflow.cfg`. Examples are ``name: AIRFLOW__VAR__FOO`` and ``value: BAR``. For more information, see [Airflow configurations](airflow-configurations.md).
+ - **Environment variables**: You can use this key-value store within Airflow to store and retrieve arbitrary content or settings. (A DAG sketch that reads one of these variables follows this procedure.)
+ - **Requirements**: You can use this option to preinstall Python libraries. You can update these requirements later.
+ - **Kubernetes secrets**: You can create a custom Kubernetes secret for your Airflow environment. An example is [Private registry credentials to pull images for KubernetesPodOperator](kubernetes-secret-pull-image-from-private-container-registry.md).
+
+1. After you fill out all the information according to the requirements, select **Create**.
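To see how one of these settings surfaces at run time, the following is a minimal DAG sketch, assuming you defined an environment variable named `SAMPLE_ENV_NAME` on the setup page; the DAG ID and variable name are placeholders, not values from this article.

```python
import os
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def print_sample_setting():
    # Environment variables defined on the Airflow environment setup page
    # are read in a task as ordinary OS environment variables.
    print("SAMPLE_ENV_NAME =", os.environ.get("SAMPLE_ENV_NAME", "<not set>"))


with DAG(
    dag_id="read_environment_variable",
    start_date=datetime(2024, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    PythonOperator(task_id="print_sample_setting", python_callable=print_sample_setting)
```

Import this file as a DAG (manually or through Git sync) and trigger it to confirm that the value is available to tasks.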
data-factory Delete Dags In Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/delete-dags-in-managed-airflow.md
Title: Delete files in Managed Airflow
-description: This document explains how to delete files in Managed Airflow.
+description: This article explains how to delete files in Managed Airflow.
Last updated 10/01/2023
# Delete files in Managed Airflow
-This guide walks you through the steps to delete DAG files in Managed Airflow environment.  
+This article walks you through the steps to delete directed acyclic graph (DAG) files in an Azure Data Factory Managed Airflow environment.
## Prerequisites
-**Azure subscription**: If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. Create or select an existing [Data Factory](https://azure.microsoft.com/products/data-factory#get-started) in a [region where the Managed Airflow preview is supported](concept-managed-airflow.md#region-availability-public-preview).
+- **Azure subscription**: If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+- **Azure Data Factory**: Create or select an existing [Data Factory](https://azure.microsoft.com/products/data-factory#get-started) instance in a [region where the Managed Airflow preview is supported](concept-managed-airflow.md#region-availability-public-preview).
+## Delete DAGs by using Git sync
-## Delete DAGs using Git-Sync Feature.  
+When you use the Git sync feature, it isn't possible to delete DAGs in Managed Airflow because all your Git source files are synchronized with Managed Airflow. We recommend removing the file from your source code repository so that your commit syncs with Managed Airflow.
-While using Git-sync feature, deleting DAGs in Managed Airflow isn't possible because all your Git source files are synchronized with Managed Airflow. The recommended approach is to remove the file from your source code repository and your commit gets synchronized with Managed Airflow. 
+## Delete DAGs by using Azure Blob Storage
-## Delete DAGs using Blob Storage.
+1. In this example, you want to delete the DAG named `adf.py`.
-### Delete DAGs
+ :::image type="content" source="media/airflow-import-delete-dags/sample-dag-to-be-deleted.png" alt-text="Screenshot that shows the DAG to delete.":::
-1. Let’s say you want to delete the DAG named ``Tutorial.py`` as shown in the image. 
-
- :::image type="content" source="media/airflow-import-delete-dags/sample-dag-to-be-deleted.png" alt-text="Screenshot shows the DAG to be deleted.":::
+1. Select the ellipsis icon and select **Delete DAG**.
-1. Click on ellipsis icon -> Click on Delete DAG Button.
-
- :::image type="content" source="media/airflow-import-delete-dags/delete-dag-button.png" alt-text="Screenshot shows the delete button.":::
+ :::image type="content" source="media/airflow-import-delete-dags/delete-dag-button.png" alt-text="Screenshot that shows the Delete DAG button.":::
-1. Fill out the name of your Dag file. 
-
- :::image type="content" source="media/airflow-import-delete-dags/dag-filename-input.png" alt-text="Screenshot shows the DAG filename.":::
+1. Enter the name of your DAG file.
-1. Click Delete Button.
-
-1. Successfully deleted file. 
-
- :::image type="content" source="media/airflow-import-delete-dags/dag-delete-success.png" alt-text="Screenshot shows successful DAG deletion.":::
+ :::image type="content" source="media/airflow-import-delete-dags/dag-filename-input.png" alt-text="Screenshot that shows the DAG filename.":::
+
+1. Select **Delete**.
+
+1. You see a message that tells you the file was successfully deleted.
+
+ :::image type="content" source="media/airflow-import-delete-dags/dag-delete-success.png" alt-text="Screenshot that shows successful DAG deletion.":::
data-factory Enable Azure Key Vault For Managed Airflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-azure-key-vault-for-managed-airflow.md
Title: Enable Azure Key Vault for airflow
-description: This article explains how to enable Azure Key Vault as the secret backend for a Managed Airflow instance.
+description: This article explains how to enable Azure Key Vault as the secret back end for a Managed Airflow instance.
Last updated 08/29/2023
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-> [!NOTE]
-> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages.
+Apache Airflow offers various back ends for securely storing sensitive information such as variables and connections. One of these options is Azure Key Vault. This article walks you through the process of configuring Key Vault as the secret back end for Apache Airflow within a Managed Airflow environment.
-Apache Airflow offers various backends for securely storing sensitive information such as variables and connections. One of these options is Azure Key Vault. This guide is designed to walk you through the process of configuring Azure Key Vault as the secret backend for Apache Airflow within Managed Airflow Environment.
+> [!NOTE]
+> Managed Airflow for Azure Data Factory relies on the open-source Apache Airflow application. For documentation and more tutorials for Airflow, see the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) webpages.
-## Prerequisites
+## Prerequisites
-- **Azure subscription** - If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
-- **Azure storage account** - If you don't have a storage account, see [Create an Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) for steps to create one. Ensure the storage account allows access only from selected networks.
-- **Azure Key Vault** - You can follow [this tutorial to create a new Azure Key Vault](/azure/key-vault/general/quick-create-portal) if you don't have one.
-- **Service Principal** - You can [create a new service principal](/azure/active-directory/develop/howto-create-service-principal-portal) or use an existing one and grant it permission to access Azure Key Vault (example - grant the **key-vault-contributor role** to the SPN for the key vault, so the SPN can manage it). Additionally, you'll need to get the service principal **Client ID** and **Client Secret** (API Key) to add them as environment variables, as described later in this article.
+- **Azure subscription**: If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+- **Azure Storage account**: If you don't have a storage account, see [Create an Azure Storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) for steps to create one. Ensure the storage account allows access only from selected networks.
+- **Azure Key Vault**: You can follow [this tutorial to create a new Key Vault instance](/azure/key-vault/general/quick-create-portal) if you don't have one.
+- **Service principal**: You can [create a new service principal](/azure/active-directory/develop/howto-create-service-principal-portal) or use an existing one and grant it permission to access your Key Vault instance. For example, you can grant the **key-vault-contributor role** to the service principal name (SPN) for your Key Vault instance so that the SPN can manage it. You also need to get the service principal's **Client ID** and **Client Secret** (API Key) to add them as environment variables, as described later in this article.
## Permissions
-Assign your SPN the following roles in your key vault from the [Built-in roles](/azure/role-based-access-control/built-in-roles).
+Assign your SPN the following roles in your Key Vault instance from the [built-in roles](/azure/role-based-access-control/built-in-roles):
- Key Vault Contributor
- Key Vault Secrets User
-## Enable the Azure Key Vault backend for a Managed Airflow instance
+## Enable the Key Vault back end for a Managed Airflow instance
-Follow these steps to enable the Azure Key Vault as the secret backend for your Managed Airflow instance.
+To enable Key Vault as the secret back end for your Managed Airflow instance:
-1. Navigate to the [Managed Airflow instance's integrated runtime (IR) environment](how-does-managed-airflow-work.md).
-1. Install the [**apache-airflow-providers-microsoft-azure**](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/https://docsupdatetracker.net/index.html) for the **Airflow requirements** during your initial Airflow environment setup.
+1. Go to the [Managed Airflow instance's integration runtime environment](how-does-managed-airflow-work.md).
+1. Install [apache-airflow-providers-microsoft-azure](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/https://docsupdatetracker.net/index.html) for the **Airflow requirements** during your initial Airflow environment setup.
- :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/airflow-environment-setup.png" alt-text="Screenshot showing the Airflow Environment Setup window highlighting the Airflow requirements." lightbox="media/enable-azure-key-vault-for-managed-airflow/airflow-environment-setup.png":::
+ :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/airflow-environment-setup.png" alt-text="Screenshot that shows the Airflow Environment Setup window highlighting the Airflow requirements." lightbox="media/enable-azure-key-vault-for-managed-airflow/airflow-environment-setup.png":::
-1. Add the following settings for the **Airflow configuration overrides** in integrated runtime properties:
+1. Add the following settings for the **Airflow configuration overrides** in integration runtime properties:
- - **AIRFLOW__SECRETS__BACKEND**: "airflow.providers.microsoft.azure.secrets.key_vault.AzureKeyVaultBackend"
- - **AIRFLOW__SECRETS__BACKEND_KWARGS**: "{"connections_prefix": "airflow-connections", "variables_prefix": "airflow-variables", "vault_url": **\<your keyvault uri\>**}"
+ - **AIRFLOW__SECRETS__BACKEND**: `airflow.providers.microsoft.azure.secrets.key_vault.AzureKeyVaultBackend`
+ - **AIRFLOW__SECRETS__BACKEND_KWARGS**: `{"connections_prefix": "airflow-connections", "variables_prefix": "airflow-variables", "vault_url": "<your keyvault uri>"}`
- :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/airflow-configuration-overrides.png" alt-text="Screenshot showing the configuration of the Airflow configuration overrides setting in the Airflow environment setup." lightbox="media/enable-azure-key-vault-for-managed-airflow/airflow-configuration-overrides.png":::
+ :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/airflow-configuration-overrides.png" alt-text="Screenshot that shows the configuration of the Airflow configuration overrides setting in the Airflow environment setup." lightbox="media/enable-azure-key-vault-for-managed-airflow/airflow-configuration-overrides.png":::
-1. Add the following for the **Environment variables** configuration in the Airflow integrated runtime properties:
+1. Add the following variables for the **Environment variables** configuration in the Airflow integration runtime properties:
- **AZURE_CLIENT_ID** = \<Client ID of SPN\>
- **AZURE_TENANT_ID** = \<Tenant Id\>
- **AZURE_CLIENT_SECRET** = \<Client Secret of SPN\>
- :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/environment-variables.png" alt-text="Screenshot showing the Environment variables section of the Airflow integrated runtime properties." lightbox="media/enable-azure-key-vault-for-managed-airflow/environment-variables.png":::
+ :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/environment-variables.png" alt-text="Screenshot that shows the Environment variables section of the Airflow integration runtime properties." lightbox="media/enable-azure-key-vault-for-managed-airflow/environment-variables.png":::
-1. Then you can use variables and connections and they will automatically be stored in Azure Key Vault. The name of connections and variables need to follow AIRFLOW__SECRETS__BACKEND_KWARGS as defined previously. For more information, refer to [Azure-key-vault as secret backend](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/secrets-backends/azure-key-vault.html).
+1. Then you can use variables and connections, and they're stored automatically in Key Vault. The names of the connections and variables need to follow the prefixes defined in `AIRFLOW__SECRETS__BACKEND_KWARGS`, as shown in the sketch that follows. For more information, see [Azure Key Vault as secret back end](https://airflow.apache.org/docs/apache-airflow-providers-microsoft-azure/stable/secrets-backends/azure-key-vault.html).
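For example, with the `variables_prefix` value shown earlier, a variable lookup in a DAG resolves to a Key Vault secret that carries that prefix. This is only a sketch; the variable name `my-setting` is a placeholder:

```python
from airflow.models import Variable

# With AzureKeyVaultBackend and variables_prefix = "airflow-variables",
# Variable.get("my-setting") is resolved from the Key Vault secret
# named "airflow-variables-my-setting".
value = Variable.get("my-setting")
print(value)
```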
-## Sample DAG using Azure Key Vault as the backend
+## Sample DAG using Key Vault as the back end
-1. Create a new Python file **adf.py** with the following contents:
+1. Create the new Python file `adf.py` with the following contents:
```python from datetime import datetime, timedelta
Follow these steps to enable the Azure Key Vault as the secret backend for your
get_variable_task ```
-1. Store variables for connections in Azure Key Vault. Refer to [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md)
+1. Store variables for connections in Key Vault. For more information, see [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md).
- :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/secrets-configuration.png" alt-text="Screenshot showing the configuration of secrets in Azure Key Vault." lightbox="media/enable-azure-key-vault-for-managed-airflow/secrets-configuration.png":::
+ :::image type="content" source="media/enable-azure-key-vault-for-managed-airflow/secrets-configuration.png" alt-text="Screenshot that shows the configuration of secrets in Azure Key Vault." lightbox="media/enable-azure-key-vault-for-managed-airflow/secrets-configuration.png":::
## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)
- [Managed Airflow pricing](airflow-pricing.md)
-- [How to change the password for Managed Airflow environments](password-change-airflow.md)
+- [Change the password for Managed Airflow environments](password-change-airflow.md)
data-factory Kubernetes Secret Pull Image From Private Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/kubernetes-secret-pull-image-from-private-container-registry.md
Last updated 08/30/2023
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-> [!NOTE]
-> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages.
+This article explains how to add a Kubernetes secret to pull a custom image from a private Azure Container Registry within the Azure Data Factory Managed Airflow environment.
-This article explains how to add a Kubernetes secret to pull a custom image from a private Azure Container Registry within Azure Data Factory's Managed Airflow environment.
+> [!NOTE]
+> Managed Airflow for Azure Data Factory relies on the open-source Apache Airflow application. You can find documentation and more tutorials for Airflow on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) webpages.
## Prerequisites
-- **Azure subscription** - If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
-- **Azure storage account** - If you don't have a storage account, see [Create an Azure storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) for steps to create one. Ensure the storage account allows access only from selected networks.
-- **Azure Container Registry** - Configure an [Azure Container Registry](/azure/container-registry/container-registry-get-started-portal?tabs=azure-cli) with the custom Docker image you want to use in the DAG. For more information on push and pull container images, see [Push & pull container image - Azure Container Registry](/azure/container-registry/container-registry-get-started-docker-cli?tabs=azure-cli).
+- **Azure subscription**: If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin.
+- **Azure Storage account**: If you don't have a storage account, see [Create an Azure Storage account](/azure/storage/common/storage-account-create?tabs=azure-portal) for steps to create one. Ensure the storage account allows access only from selected networks.
+- **Azure Container Registry**: Configure an [Azure Container Registry](/azure/container-registry/container-registry-get-started-portal?tabs=azure-cli) with the custom Docker image you want to use in the directed acyclic graph (DAG). For more information on push and pull container images, see [Push and pull container image - Azure Container Registry](/azure/container-registry/container-registry-get-started-docker-cli?tabs=azure-cli).
-### Step 1: Create a new Managed Airflow environment
+### Create a new Managed Airflow environment
-Open the Azure Data Factory Studio and select the **Manage** tab from the left toolbar, then select **Apache Airflow** under **Workflow Orchestration Manager**. Finally, select **+ New** to create a new Managed Airflow environment.
+Open Azure Data Factory Studio and on the toolbar on the left, select the **Manage** tab. Then under **Workflow orchestration manager**, select **Apache Airflow**. Finally, select **+ New** to create a new Managed Airflow environment.
-### Step 2: Add a Kubernetes secret
+### Add a Kubernetes secret
-In the Airflow environment setup window, scroll to the bottom and expand the **Advanced** section, then select **+ New** under **Kubernetes secrets**.
+On the **Airflow environment setup** window, scroll to the bottom and expand the **Advanced** section. Then under **Kubernetes secrets**, select **+ New**.
-### Step 3: Configure authentication
+### Configure authentication
-Provide the required field **Secret name**, select **Private registry auth** for the **Secret type**, and enter the other required fields. The **Registry server URL** should be the URL of your private container registry, for example, ```\registry_name\>.azurecr.io```.
+Provide the required field **Secret name**. For **Secret type**, select **Private registry auth**. Then enter information in the other required fields. The **Registry server URL** should be the URL of your private container registry, for example, ```\<registry_name\>.azurecr.io```.
-Once you provide the required fields, select **Apply** to add the secret.
+After you enter information in the required fields, select **Apply** to add the secret.
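After the secret is applied, a DAG can point a `KubernetesPodOperator` task at the private image and reference the secret by name. The following is a rough sketch only, assuming the `apache-airflow-providers-cncf-kubernetes` package is installed; the registry, image, and secret names are placeholders, and the exact import path can vary by provider version:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from kubernetes.client import models as k8s

with DAG(
    dag_id="private_registry_pod_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    run_private_image = KubernetesPodOperator(
        task_id="run_private_image",
        name="run-private-image",
        # Placeholder image hosted in the private Azure Container Registry.
        image="<registry_name>.azurecr.io/<repository>:<tag>",
        # Name of the Kubernetes secret added during the Airflow environment setup.
        image_pull_secrets=[k8s.V1LocalObjectReference("<secret_name>")],
        get_logs=True,
    )
```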
## Related content
- [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md)
- [Managed Airflow pricing](airflow-pricing.md)
-- [How to change the password for Managed Airflow environments](password-change-airflow.md)
+- [Change the password for Managed Airflow environments](password-change-airflow.md)
data-factory Rest Apis For Airflow Integrated Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/rest-apis-for-airflow-integrated-runtime.md
Title: REST APIs for the Managed Airflow integrated runtime
-description: This article documents the REST APIs for the Managed Airflow integrated runtime.
+ Title: REST APIs for the Managed Airflow integration runtime
+description: This article documents the REST APIs for the Managed Airflow integration runtime.
Last updated 08/09/2023
-# REST APIs for the Managed Airflow integrated runtime
+# REST APIs for the Managed Airflow integration runtime
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-> [!NOTE]
-> Managed Airflow for Azure Data Factory relies on the open source Apache Airflow application. Documentation and more tutorials for Airflow can be found on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) pages.
+This article documents the REST APIs for the Azure Data Factory Managed Airflow integration runtime.
-This article documents the REST APIs for the Managed Airflow integrated runtime.
+> [!NOTE]
+> Managed Airflow for Azure Data Factory relies on the open-source Apache Airflow application. You can find documentation and more tutorials for Airflow on the Apache Airflow [Documentation](https://airflow.apache.org/docs/) or [Community](https://airflow.apache.org/community/) webpages.
## Create a new environment
This article documents the REST APIs for the Managed Airflow integrated runtime.
||||||
|Subscription Id | path | True | string | Subscription identifier |
|ResourceGroup Name | path | True | string | Resource group name (Regex pattern: ```^[-\w\._\(\)]+$```) |
- |dataFactoryName | path | True | string | Name of the Azure Data Factory (Regex pattern: ```^[A-Za-z0-9]+(?:-[A-Za-z0-9]+)*$``` |
+ |dataFactoryName | path | True | string | Name of the Azure Data Factory instance (Regex pattern: ```^[A-Za-z0-9]+(?:-[A-Za-z0-9]+)*$```) |
|airflowEnvName | path | True | string | Name of the Managed Airflow environment |
|Api-version | query | True | string | The API version |
This article documents the REST APIs for the Managed Airflow integrated runtime.
|Type |string |The resource type (**Airflow** in this scenario) |
|typeProperties |typeProperty |Airflow |
-- **Type property**
+- **Type property**:
|Name |Type |Description |
||||
- |computeProperties |computeProperty |Configuration of the compute type used for the environment. |
- |airflowProperties |airflowProperty |Configuration of the Airflow properties for the environment. |
+ |computeProperties |computeProperty |Configuration of the compute type used for the environment |
+ |airflowProperties |airflowProperty |Configuration of the Airflow properties for the environment |
-- **Compute property**
+- **Compute property**:
|Name |Type |Description |
||||
- |location |string |The Airflow integrated runtime location defaults to the data factory region. To create an integrated runtime in a different region, create a new data factory in the required region. |
- | computeSize | string |The size of the compute node you want your Airflow environment to run on. Example: "Large", "Small". 3 nodes are allocated initially. |
- | extraNodes | integer |Each extra node adds 3 more workers. |
+ |location |string |The Airflow integration runtime location defaults to the data factory region. To create an integration runtime in a different region, create a new data factory in the required region. |
+ | computeSize | string |The size of the compute node you want your Airflow environment to run on. Examples are Large or Small. Three nodes are allocated initially. |
+ | extraNodes | integer |Each extra node adds three more workers. |
-- **Airflow property**
+- **Airflow property**:
|Name |Type |Description |
||||
- |airflowVersion | string | Current version of Airflow (Example: 2.4.3) |
- |airflowRequirements | Array\<string\> | Python libraries you wish to use. Example: ["flask-bcrypy=0.7.1"]. Can be a comma delimited list. |
- |airflowEnvironmentVariables | Object (Key/Value pair) | Environment variables you wish to use. Example: { "SAMPLE_ENV_NAME": "test" } |
- |gitSyncProperties | gitSyncProperty | Git configuration properties |
- |enableAADIntegration | boolean | Allows Microsoft Entra ID to login to Airflow |
- |userName | string or null | Username for Basic Authentication |
- |password | string or null | Password for Basic Authentication |
+ |airflowVersion | string | Current version of Airflow. For example, 2.4.3. |
+ |airflowRequirements | Array\<string\> | Python libraries you want to use. For example, ["flask-bcrypy=0.7.1"]. Can be a comma-delimited list. |
+ |airflowEnvironmentVariables | Object (Key/Value pair) | Environment variables you want to use. For example, { "SAMPLE_ENV_NAME": "test" }. |
+ |gitSyncProperties | gitSyncProperty | Git configuration properties. |
+ |enableAADIntegration | boolean | Allows Microsoft Entra ID to log in to Airflow. |
+ |userName | string or null | Username for Basic Authentication. |
+ |password | string or null | Password for Basic Authentication. |
-- **Git sync property**
+- **Git sync property**:
|Name |Type |Description |
||||
- |gitServiceType | string | The Git service your desired repo is located in. Values: GitHub, AOD, GitLab, or BitBucket |
- |gitCredentialType | string | Type of Git credential. Values: PAT (for Personal Access Token), None |
- |repo | string | Repository link |
- |branch | string | Branch to use in the repository |
- |username | string | GitHub username |
- |Credential | string | Value of the Personal Access Token |
+ |gitServiceType | string | The Git service where your desired repository is located. Values are GitHub, ADO, GitLab, or BitBucket. |
+ |gitCredentialType | string | Type of Git credential. Values are PAT (for personal access token) and None. |
+ |repo | string | Repository link. |
+ |branch | string | Branch to use in the repository. |
+ |username | string | GitHub username. |
+ |Credential | string | Value of the PAT. |
-- **Responses**
+- **Responses**:
|Name |Status code |Type |Description |
||||-|
|Accepted | 200 | [Factory](/rest/api/datafactory/factories/get?tabs=HTTP#factory) | OK |
- |Unauthorized | 401 | [Cloud Error](/rest/api/datafactory/factories/get?tabs=HTTP#clouderror) | Array with additional error details |
+ |Unauthorized | 401 | [Cloud Error](/rest/api/datafactory/factories/get?tabs=HTTP#clouderror) | Array with more error details |
## Import DAGs
This article documents the REST APIs for the Managed Airflow integrated runtime.
|Name |Type |Description |
||||
- |IntegrationRuntimeName | string | Airflow environment name |
- |LinkedServiceName | string | Azure Blob Storage account name where DAGs to be imported are located |
- |StorageFolderPath | string | Path to the folder in blob storage with the DAGs |
- |Overwrite | boolean | Overwrite the existing DAGs (Default=True) |
- |CopyFolderStructure | boolean | Controls whether the folder structure will be copied or not |
+ |IntegrationRuntimeName | string | Airflow environment name. |
+ |LinkedServiceName | string | Azure Blob Storage account name where DAGs to be imported are located. |
+ |StorageFolderPath | string | Path to the folder in Azure Blob Storage with the DAGs. |
+ |Overwrite | boolean | Overwrite the existing DAGs (Default=True). |
+ |CopyFolderStructure | boolean | Controls whether the folder structure is copied or not. |
|AddRequirementsFromFile | boolean | Add requirements from the DAG files |
-- **Responses**
+- **Responses**:
|Name |Status code |Type |Description | ||||-| |Accepted | 200 | [Factory](/rest/api/datafactory/factories/get?tabs=HTTP#factory) | OK |
- |Unauthorized | 401 | [Cloud Error](/rest/api/datafactory/factories/get?tabs=HTTP#clouderror) | Array with additional error details |
+ |Unauthorized | 401 | [Cloud Error](/rest/api/datafactory/factories/get?tabs=HTTP#clouderror) | Array with more error details |
## Examples
-### Create a new environment using REST APIs
+Review the following examples.
+
+### Create a new environment by using REST APIs
Sample request:
HTTP
PUT https://management.azure.com/subscriptions/222f1459-6ebd-4896-82ab-652d5f6883cf/resourcegroups/abnarain-rg/providers/Microsoft.DataFactory/factories/ambika-df/integrationruntimes/sample-2?api-version=2018-06-01 ```
-Sample Body:
+Sample body:
```rest {
Sample Body:
} ```
-Sample Response:
+Sample response:
```rest
Status code: 200 OK
```
-Response Body:
+Response body:
```rest {
Response Body:
"etag": "3402279e-0000-0100-0000-64ecb1cb0000" } ```+ ### Import DAGs
-Sample Request:
+Sample request:
```rest HTTP
HTTP
POST https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/your-rg/providers/Microsoft.DataFactory/factories/your-df/airflow/sync?api-version=2018-06-01 ```
-Sample Body:
+Sample body:
```rest {
Sample Body:
} ```
-Sample Response:
+Sample response:
```rest
Status Code: 202
```
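As a rough sketch of calling the Import DAGs endpoint from Python (assuming the `azure-identity` and `requests` packages; all identifiers and paths below are placeholders):

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder identifiers; replace with your own values.
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
factory_name = "<data-factory-name>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourcegroups/{resource_group}/providers/Microsoft.DataFactory"
    f"/factories/{factory_name}/airflow/sync?api-version=2018-06-01"
)

# Request body fields follow the Import DAGs parameter table above.
body = {
    "IntegrationRuntimeName": "<airflow-environment-name>",
    "LinkedServiceName": "<blob-storage-linked-service>",
    "StorageFolderPath": "<container>/dags",
    "Overwrite": True,
    "CopyFolderStructure": True,
    "AddRequirementsFromFile": True,
}

# Acquire an Azure Resource Manager token and send the request.
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
response = requests.post(
    url,
    headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
    json=body,
)
print(response.status_code)  # 202 indicates the sync request was accepted.
```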
defender-for-cloud Agentless Vulnerability Assessment Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-vulnerability-assessment-aws.md
The triggers for an image scan are:
- Each image pushed to a container registry is triggered to be scanned. In most cases, the scan is completed within a few hours, but in rare cases it might take up to 24 hours.
- Each image pulled from a registry is triggered to be scanned within 24 hours.
- **Continuous rescan triggering** - continuous rescan is required to ensure images that have been previously scanned for vulnerabilities are rescanned to update their vulnerability reports in case a new vulnerability is published.
  - **Re-scan** is performed once a day for:
    - Images pushed in the last 90 days.
A detailed description of the scan process is described as follows:
- All newly discovered images are pulled, and an inventory is created for each image. Image inventory is kept to avoid further image pulls, unless required by new scanner capabilities.
- Using the inventory, vulnerability reports are generated for new images, and updated for images previously scanned which were either pushed in the last 90 days to a registry, or are currently running. To determine if an image is currently running, Defender for Cloud uses both [Agentless discovery for Kubernetes](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability) and [inventory collected via the Defender agent running on EKS nodes](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability)
- - Vulnerability reports for registry container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/PhoenixContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
+ - Vulnerability reports for registry container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AwsContainerRegistryRecommendationDetailsBlade/assessmentKey/c27441ae-775c-45be-8ffa-655de37362ce).
- For customers using either [Agentless discovery for Kubernetes](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability) or [inventory collected via the Defender agent running on EKS nodes](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) for remediating vulnerabilities for vulnerable images running on an EKS cluster. For customers using only [Agentless discovery for Kubernetes](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability), the refresh time for inventory in this recommendation is once every seven hours. Clusters that are also running the [Defender agent](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability) benefit from a two hour inventory refresh rate. Image scan results are updated based on registry scan in both cases, and are therefore only refreshed every 24 hours.

> [!NOTE]
defender-for-cloud Agentless Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/agentless-vulnerability-assessment-azure.md
A detailed description of the scan process is described as follows:
- All newly discovered images are pulled, and an inventory is created for each image. Image inventory is kept to avoid further image pulls, unless required by new scanner capabilities.
- Using the inventory, vulnerability reports are generated for new images, and updated for images previously scanned which were either pushed in the last 90 days to a registry, or are currently running. To determine if an image is currently running, Defender for Cloud uses both [Agentless discovery for Kubernetes](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability) and [inventory collected via the Defender agent running on AKS nodes](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability)
- - Vulnerability reports for registry container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/PhoenixContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
+ - Vulnerability reports for registry container images are provided as a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AzureContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5).
- For customers using either [Agentless discovery for Kubernetes](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability) or [inventory collected via the Defender agent running on AKS nodes](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability), Defender for Cloud also creates a [recommendation](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5) for remediating vulnerabilities for vulnerable images running on an AKS cluster. For customers using only [Agentless discovery for Kubernetes](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability), the refresh time for inventory in this recommendation is once every seven hours. Clusters that are also running the [Defender agent](/azure/defender-for-cloud/defender-for-containers-enable#enablement-method-per-capability) benefit from a two hour inventory refresh rate. Image scan results are updated based on registry scan in both cases, and are therefore only refreshed every 24 hours.

> [!NOTE]
defender-for-cloud Defender For Containers Vulnerability Assessment Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure.md
Title: Vulnerability assessment for Azure powered by Qualys
+ Title: Vulnerability assessment for Azure powered by Qualys (Deprecated)
description: Learn how to use Defender for Containers to scan images in your Azure Container Registry to find vulnerabilities.
Previously updated : 12/19/2023
Last updated : 12/25/2023
-# Vulnerability assessment for Azure powered by Qualys
+# Vulnerability assessment for Azure powered by Qualys (Deprecated)
+
+> [!IMPORTANT]
+>
+> The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is now on a retirement path completing on **March 1st, 2024**. If you are currently using container vulnerability assessment powered by Qualys, start planning your transition to [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md).
+>
+> - For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, see [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
+>
+> - For more information about migrating to our new container vulnerability assessment offering powered by Microsoft Defender Vulnerability Management, see [Transition from Qualys to Microsoft Defender Vulnerability Management](transition-to-defender-vulnerability-management.md).
+>
+> - For common questions about the transition to Microsoft Defender Vulnerability Management, see [Common questions about the Microsoft Defender Vulnerability Management solution](common-questions-microsoft-defender-vulnerability-management.md).
Vulnerability assessment for Azure, powered by Qualys, is an out-of-box solution that empowers security teams to easily discover and remediate vulnerabilities in Linux container images, with zero configuration for onboarding, and without deployment of any agents.
defender-for-cloud Deploy Vulnerability Assessment Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-defender-vulnerability-management.md
Title: Enable vulnerability scanning with Microsoft Defender Vulnerability Manag
description: Enable, deploy, and use Microsoft Defender Vulnerability Management with Microsoft Defender for Cloud to discover weaknesses in your Azure and hybrid machines
Previously updated : 06/29/2023
Last updated : 01/08/2024
# Enable vulnerability scanning with Microsoft Defender Vulnerability Management
+> [!IMPORTANT]
+> The Defender for Servers vulnerability assessment solution powered by Qualys is on a retirement path that is set to complete on **May 1st, 2024**. If you are currently using the built-in vulnerability assessment powered by Qualys, you should plan to [transition to the Microsoft Defender Vulnerability Management vulnerability scanning solution](how-to-transition-to-built-in.md).
+>
+> For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, see [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
+>
+> Check out the [common questions](faq-scanner-detection.yml) regarding the transition to Microsoft Defender Vulnerability Management.
+>
+> Customers who want to continue using Qualys can do so with the [Bring Your Own License (BYOL) method](deploy-vulnerability-assessment-byol-vm.md).
+
[Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management) is included with Microsoft Defender for Servers and uses built-in and agentless scanners to:
- Discover vulnerabilities and misconfigurations in near real time
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Title: Enable vulnerability scanning with the integrated Qualys scanner
+ Title: Enable vulnerability scanning with the integrated Qualys scanner (deprecated)
description: Install a vulnerability assessment solution on your Azure machines to get recommendations in Microsoft Defender for Cloud that can help you protect your Azure and hybrid machines
Previously updated : 12/18/2023
Last updated : 01/08/2024
-# Enable vulnerability scanning with the integrated Qualys scanner
+# Enable vulnerability scanning with the integrated Qualys scanner (deprecated)
+
+> [!IMPORTANT]
+> The Defender for Servers vulnerability assessment solution powered by Qualys is on a retirement path that is set to complete on **May 1st, 2024**. If you are currently using the built-in vulnerability assessment powered by Qualys, you should plan to [transition to the Microsoft Defender Vulnerability Management vulnerability scanning solution](how-to-transition-to-built-in.md).
+>
+> For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, see [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
+>
+> Check out the [common questions](faq-scanner-detection.yml) regarding the transition to Microsoft Defender Vulnerability Management.
+>
+> Customers who want to continue using Qualys can do so with the [Bring Your Own License (BYOL) method](deploy-vulnerability-assessment-byol-vm.md).
A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Defender for Cloud regularly checks your connected machines to ensure they're running vulnerability assessment tools.
defender-for-cloud Episode Forty One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-forty-one.md
Last updated 01/03/2024
## Next steps
> [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [Agentless secrets scanning for virtual machines](episode-forty-two.md)
defender-for-cloud Episode Forty Two https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-forty-two.md
+
+ Title: Agentless secrets scanning for virtual machines | Defender for Cloud in the field
+description: Learn about agentless secrets scanning for virtual machines
+ Last updated : 01/08/2024
+
+
+# Agentless secrets scanning for virtual machines
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Ortal Parpara joins Yuri Diogenes to talk about agentless secrets scanning for virtual machines in Microsoft Defender for Cloud. Ortal explains the use case scenario for this feature, how the feature works, and the prerequisites for this feature to work. The demonstration shows how the attack path uses this feature to provide more insights about secrets that are located in virtual machines, and how it can be used to detect potential cross-cloud attacks.
+
+> [!VIDEO https://aka.ms/docs/player?id=3eb19963-8988-44eb-8052-e2255616a95e]
+
+- [01:18](/shows/mdc-in-the-field/agentless-secret-scanning-for-virtual-machines#time=01m18s) - Understanding secrets scanning capability for VMs in Defender for Cloud
+- [02:40](/shows/mdc-in-the-field/agentless-secret-scanning-for-virtual-machines#time=02m40s) - How agentless scanning for VMs works
+- [04:30](/shows/mdc-in-the-field/agentless-secret-scanning-for-virtual-machines#time=04m30s) - Secrets detection
+- [06:50](/shows/mdc-in-the-field/agentless-secret-scanning-for-virtual-machines#time=06m50s) - Performance considerations
+- [08:32](/shows/mdc-in-the-field/agentless-secret-scanning-for-virtual-machines#time=08m32s) - Demonstration
+
+## Recommended resources
+
+- Learn more about [Microsoft Security](https://msft.it/6002T9HQY).
+- Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS).
+++
+- Follow us on social media:
+
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+- Join our [Tech Community](https://aka.ms/SecurityTechCommunity).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud How To Transition To Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-transition-to-built-in.md
Title: Transition to the integrated Microsoft Defender Vulnerability Management vulnerability assessment solution
+ Title: Transition to Microsoft Defender Vulnerability Management for servers
description: Learn how to transition to the Microsoft Defender Vulnerability Management solution in Microsoft Defender for Cloud
Previously updated : 12/18/2023
Last updated : 01/09/2024
-# Transition to the integrated Microsoft Defender Vulnerability Management vulnerability assessment solution
+# Transition to Microsoft Defender Vulnerability Management for servers
+
+> [!IMPORTANT]
+> The Defender for Servers vulnerability assessment solution powered by Qualys is on a retirement path that is set to complete on **May 1st, 2024**. If you are currently using the built-in vulnerability assessment powered by Qualys, you should plan to transition to Microsoft Defender Vulnerability Management vulnerability scanning by using the steps on this page.
+>
+> For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, see [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
+>
+> Check out the [common questions](faq-scanner-detection.yml) regarding the transition to Microsoft Defender Vulnerability Management.
+>
+> Customers who want to continue using Qualys can do so with the [Bring Your Own License (BYOL) method](deploy-vulnerability-assessment-byol-vm.md).
With the Defender for Servers plan in Microsoft Defender for Cloud, you can scan compute assets for vulnerabilities. If you're currently using a vulnerability assessment solution other than the Microsoft Defender Vulnerability Management vulnerability assessment solution, this article provides instructions on transitioning to the integrated Defender Vulnerability Management solution.
defender-for-cloud Transition To Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/transition-to-defender-vulnerability-management.md
Last updated 01/08/2024
Microsoft Defender for Cloud is unifying all vulnerability assessment solutions to utilize the Microsoft Defender Vulnerability Management vulnerability scanner.
-Microsoft Defender Vulnerability Management integrates across many cloud native use cases, such as containers ship and runtime scenarios.
+Microsoft Defender Vulnerability Management integrates across many cloud-native use cases, such as container ship and runtime scenarios. As part of this change, we're retiring our built-in vulnerability assessment offering powered by Qualys.
+
+> [!IMPORTANT]
+> The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is now on a retirement path completing on **March 1st, 2024**.
+>
+> Customers that onboarded at least one subscription to Defender for Containers prior to **November 15th, 2023** can continue to use Container Vulnerability Assessment powered by Qualys until **March 1st, 2024**.
+>
+> For more information about the change, see [Defender for Cloud unifies Vulnerability Assessment solution powered by Microsoft Defender Vulnerability Management](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
+
+If you're currently using the built-in vulnerability assessment solution powered by Qualys, start planning for the upcoming retirement by following the steps on this page.
## Step 1: Verify that scanning is enabled
If your organization is ready to transition to container vulnerability assessmen
| Recommendation | Description | Assessment Key |
|--|--|--|
-| [Azure registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)-Preview](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/PhoenixContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
+| [Azure registry container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)-Preview](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/AzureContainerRegistryRecommendationDetailsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. Resolving vulnerabilities can greatly improve your security posture, ensuring images are safe to use prior to deployment. | c0b7cfc6-3172-465a-b378-53c7ff2cc0d5 |
| [Azure running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security_CloudNativeCompute/ContainersRuntimeRecommendationDetailsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5)  | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |

### Disable using the Qualys recommendations for Azure commercial clouds
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes
description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan
Previously updated : 01/03/2024
Last updated : 01/09/2024
# Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change |
|--|--|--|
+| [Defender for Servers built-in vulnerability assessment (Qualys) retirement path](#defender-for-servers-built-in-vulnerability-assessment-qualys-retirement-path) | January 9, 2024 | May 2024 |
+| [Retirement of the Defender for Cloud Containers Vulnerability Assessment powered by Qualys](#retirement-of-the-defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys) | January 9, 2024 | March 2024 |
| [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) | January 4, 2024 | February 2024 |
| [Upcoming change for the Defender for Cloud's multicloud network requirements](#upcoming-change-for-the-defender-for-clouds-multicloud-network-requirements) | January 3, 2024 | May 2024 |
| [Deprecation and severity changes to security alerts](#deprecation-and-severity-changes-to-security-alerts) | December 27, 2023 | January 2024 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 |
| [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## Defender for Servers built-in vulnerability assessment (Qualys) retirement path
+
+**Announcement date: January 9, 2024**
+
+**Estimated date for change: May 2024**
+
+The Defender for Servers built-in vulnerability assessment solution powered by Qualys is on a retirement path that is estimated to complete on **May 1st, 2024**. If you are currently using the vulnerability assessment solution powered by Qualys, you should plan your [transition to the integrated Microsoft Defender Vulnerability Management solution](how-to-transition-to-built-in.md).
+
+For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, you can read [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
+
+You can also check out the [common questions about the transition to Microsoft Defender Vulnerability Management solution](faq-scanner-detection.yml).
+
+## Retirement of the Defender for Cloud Containers Vulnerability Assessment powered by Qualys
+
+**Announcement date: January 9, 2024**
+
+**Estimated date for change: March 2024**
+
+The Defender for Cloud Containers Vulnerability Assessment powered by Qualys is now on a retirement path completing on **March 1st, 2024**. If you are currently using container vulnerability assessment powered by Qualys, start planning your transition to [Vulnerability assessments for Azure with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-azure.md).
+
+For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, see [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112).
+
+For more information about transitioning to our new container vulnerability assessment offering powered by Microsoft Defender Vulnerability Management, see [Transition from Qualys to Microsoft Defender Vulnerability Management](transition-to-defender-vulnerability-management.md).
+
+For common questions about the transition to Microsoft Defender Vulnerability Management, see [Common questions about the Microsoft Defender Vulnerability Management solution](common-questions-microsoft-defender-vulnerability-management.md).
+ ## New version of Defender Agent for Defender for Containers **Announcement date: January 4, 2024**
dns Dns Private Resolver Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-private-resolver-get-started-portal.md
Next, add a virtual network to the resource group that you created, and configur
5. Select the **default** subnet.
6. Enter the following values on the **Edit subnet** page:
   - Name: snet-inbound
- - IPv4 address range: 10.0.0.0.16
+ - IPv4 address range: 10.0.0.0/16
   - Starting address: 10.0.0.0
   - Size: /28 (16 IP addresses)
   - Select **Save**
energy-data-services Concepts Entitlements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md
Access management is a critical function for any service or resource. The entitlement service lets you control who can use your Azure Data Manager for Energy, what they can see or change, and which services or data they can use.
-## OSDU Groups Structure
+## OSDU groups structure and naming
-The entitlements service of Azure Data Manager for Energy allows you to create groups and manage memberships of the groups. An entitlement group defines permissions on services/data sources for a given data partition in your Azure Data Manager for Energy instance. Users added to a given group obtain the associated permissions.
+The entitlements service of Azure Data Manager for Energy allows you to create groups and manage memberships of the groups. An entitlement group defines permissions on services/data sources for a given data partition in your Azure Data Manager for Energy instance. Users added to a given group obtain the associated permissions. All group identifiers (emails) are of the form `{groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}`.
Please note that different groups and associated user entitlements need to be set for every **new data partition** even in the same Azure Data Manager for Energy instance. The entitlements service enables three use cases for authorization: 1. **Data groups** are used to enable authorization for data.
- 1. Some examples are data.welldb.viewers and data.welldb.owners.
+ 1. The data groups start with the word "data.", such as data.welldb.viewers and data.welldb.owners.
2. Individual users are added to the data groups which are added in the ACL of individual data records to enable `viewer` and `owner` access of the data once the data has been loaded in the system. 3. To `upload` the data, you need to have entitlements of various OSDU services which are used during ingestion process. The combination of OSDU services depends on the method of ingestion. E.g., for manifest ingestion, refer [this](concepts-manifest-ingestion.md) to understand the OSDU services APIs used. The user **need not be part of the ACL** to upload the data. 2. **Service groups** are used to enable authorization for services.
- 1. Some examples are service.storage.user and service.storage.admin.
+ 1. The service groups start with the word "service." such as service.storage.user and service.storage.admin.
2. The service groups are **predefined** when OSDU services are provisioned in each data partition of Azure Data Manager for Energy instance. 3. These groups enable `viewer`, `editor`, and `admin` access to call the OSDU APIs corresponding to the OSDU services. 3. **User groups** are used for hierarchical grouping of user and service groups.
- 1. Some examples are users.datalake.viewers and users.datalake.editors.
+ 1. The service groups start with the word "users." such as users.datalake.viewers and users.datalake.editors.
2. Some user groups are created by default when a data partition is provisioned. Details of these groups and their hierarchy scope are in [Bootstrapped OSDU Entitlements Groups](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md).
- 3. The `users@{partition}.{domain}` has the list of all the users with any type of access in a given data partition. Before adding a new user to any entitlement groups, you need to add the new user to the `users@{partition}.{domain}` group as well.
+ 3. There's one exception to this naming rule: the "users" group. It's created when a new data partition is provisioned, and its name follows the pattern `users@{partition}.{domain}`. It contains the list of all the users with any type of access in a given data partition. Before adding a new user to any entitlement group, you need to add the new user to the `users@{partition}.{domain}` group as well.
Individual users can be added to a `user group`. The `user group` is then added to a `data group`. The data group is added to the ACL of the data record. It enables abstraction for the data groups since individual users need not be added one by one to the data group and instead can be added to the `user group`. This `user group` can then be used repeatedly for multiple `data groups`. The nested structure thus helps provide scalability to manage memberships in OSDU.
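To illustrate this nesting with the entitlements API, the following sketch adds an individual user to a user group and then adds that user group as a member of a data group. The instance URI, access token, partition, domain, and user email are placeholders, and the exact request shape should be verified against the entitlements API reference for your instance.

```bash
# Add an individual user to a user group (all values are placeholders)
curl --request POST "https://<URI>/api/entitlements/v2/groups/users.datalake.viewers@<partition>.<domain>/members" \
  --header "Authorization: Bearer <access_token>" \
  --header "data-partition-id: <partition>" \
  --header "Content-Type: application/json" \
  --data '{ "email": "user@contoso.com", "role": "MEMBER" }'

# Add the user group as a member of a data group
curl --request POST "https://<URI>/api/entitlements/v2/groups/data.welldb.viewers@<partition>.<domain>/members" \
  --header "Authorization: Bearer <access_token>" \
  --header "data-partition-id: <partition>" \
  --header "Content-Type: application/json" \
  --data '{ "email": "users.datalake.viewers@<partition>.<domain>", "role": "MEMBER" }'
```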
-## Group naming
-
-All group identifiers (emails) are of form `{groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}`. A group naming convention is adopted by OSDU such that the group's name starts with
-1. the word "data." for data groups;
-2. the word "service." for service groups;
-3. the word "users." for user groups. There's one exception of this group naming rule for "users" group. It gets created when a new data partition is provisioned and its name follows the pattern of `users@{partition}.{domain}`.
- ## Users For each OSDU group, you can either add a user as an OWNER or a MEMBER.
energy-data-services How To Generate Auth Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-auth-token.md
In this article, you learn how to generate the service principal auth token, user's auth token and user's refresh token. ## Register your app with Microsoft Entra ID
-To use the Azure Data Manager for Energy platform endpoint, you must register your app in the [Azure portal app registration page](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. For steps on how to configure, see [Register your app documentation](../active-directory/develop/quickstart-register-app.md#register-an-application).
-
-To use the OAuth 2.0 authorization code grant flow, save the following values when registering the app:
--- The `Directory (tenant) ID` is used as `{tenant-id}`-- The `application (client) ID` assigned by the app registration portal is used as `client-id`.-- A `client (application) secret`, either a password or a public/private key pair (certificate). The client secret isn't required for native apps. This secret is used as `{client-secret}`.-- A `redirect URI (or reply URL)` for your app to receive responses from Microsoft Entra ID. If there's no redirect URIs specified, you can add a platform, select "Web", add `http://localhost:8080`, and select save.
+1. To provision the Azure Data Manager for Energy platform, you must register your app in the [Azure portal app registration page](https://go.microsoft.com/fwlink/?linkid=2083908). You can use either a Microsoft account or a work or school account to register an app. For steps on how to configure, see [Register your app documentation](../active-directory/develop/quickstart-register-app.md#register-an-application).
+2. In the app overview section, if no redirect URIs are specified, you can add a platform, select "Web", add `http://localhost:8080`, and select Save.
:::image type="content" source="media/how-to-generate-auth-token/app-registration-uri.png" alt-text="Screenshot of adding URI to the app.":::
+3. Fetch the `redirect-uri` (or reply URL) for your app to receive responses from Microsoft Entra ID.
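If you prefer the Azure CLI over the portal for the registration and redirect URI steps, a minimal sketch looks like the following; the display name is a placeholder and the command assumes you're already signed in with `az login`.

```azurecli
# Register an app and set a web redirect URI (display name is a placeholder)
az ad app create \
    --display-name "adme-client-app" \
    --web-redirect-uris "http://localhost:8080"
```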
+ ## Fetch parameters You can also find the parameters once the app is registered on the Azure portal.
A `client-secret` is a string value your app can use in place of a certificate t
:::image type="content" source="media/how-to-generate-auth-token/client-secret.png" alt-text="Screenshot of finding the client secret."::: #### Find the `URL` for your Azure Data Manager for Energy instance
-1. Navigate to your Azure Data Manager for Energy *Overview* page on the Azure portal.
-2. Copy the URI from the essentials pane.
+1. Create [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md).
+2. Navigate to your Azure Data Manager for Energy *Overview* page on the Azure portal.
+3. Copy the URI from the essentials pane.
:::image type="content" source="media/how-to-generate-auth-token/endpoint-url.png" alt-text="Screenshot of finding the URL from Azure Data Manager for Energy instance.":::
Generating a user's auth token is a two step process.
### Get authorization code The first step to getting an access token for many OpenID Connect (OIDC) and OAuth 2.0 flows is to redirect the user to the Microsoft identity platform `/authorize` endpoint. Microsoft Entra ID signs the user in and requests their consent for the permissions your app requests. In the authorization code grant flow, after consent is obtained, Microsoft Entra ID returns an `authorization_code` to your app that it can redeem at the Microsoft identity platform `/token` endpoint for an access token.
-#### Request format
-1. After replacing the parameters, you can paste the below in the URL of any browser and hit enter.
+1. After replacing the parameters, you can paste the request in the address bar of any browser and press Enter.
2. It asks you to log in to your Azure portal if not logged in already.
-3. You get the response in the URL.
-
-```bash
+3. You might see a 'can't reach this page' error in the browser. You can ignore it.
+
+
+4. The browser redirects to `http://localhost:8080/?code={authorization code}&state=...` upon successful authentication.
+5. Copy the response from the URL bar of the browser and fetch the text between `code=` and `&state`.
+6. This is the `authorization_code`. Keep it handy for future use.
+
+#### Request format
+ ```bash
https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/authorize?client_id={client-id} &response_type=code
- &redirect_uri=http%3a%2f%2flocalhost%3a8080
+ &redirect_uri={redirect-uri}
&response_mode=query &scope={client-id}%2f.default&state=12345&sso_reload=true ```
The first step to getting an access token for many OpenID Connect (OIDC) and OAu
| state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used, to prevent cross-site request forgery attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. | #### Sample response
-1. The browser redirects to `http://localhost:8080/?code={authorization code}&state=...` upon successful authentication.
-2. In the URL bar, you see the response of the below format.
- ```bash http://localhost:8080/?code=0.BRoAv4j5cvGGr0...au78f&state=12345&session.... ```
-3. Copy the response and fetch the text between `code=` and `&state`
-4. This is the `authorization_code` to keep handy for future use.
- > [!NOTE] > The browser may say that the site can't be reached, but it should still have the authorization code in the URL bar.
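After you have the `authorization_code`, you redeem it at the Microsoft identity platform `/token` endpoint for an access token (and, if `offline_access` is requested, a refresh token). The following curl sketch shows the general shape of that request; every value is a placeholder collected in the earlier steps.

```bash
# Redeem the authorization code for tokens (all values are placeholders)
curl --request POST "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data-urlencode "grant_type=authorization_code" \
  --data-urlencode "client_id=<client-id>" \
  --data-urlencode "client_secret=<client-secret>" \
  --data-urlencode "code=<authorization-code>" \
  --data-urlencode "redirect_uri=<redirect-uri>" \
  --data-urlencode "scope=<client-id>/.default openid profile offline_access"
```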
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
In this article, you learn how to manage users and their memberships in OSDU gro
## Get the list of all available groups in a data partition
-Run the below curl command in Azure Cloud Bash to get all the groups that are available for your Azure Data Manager for the Energy instance and its data partitions.
+Run the following curl command in Azure Cloud Shell (Bash) to get all the groups that you have access to in the given data partition of the Azure Data Manager for Energy instance.
```bash curl --location --request GET "https://<URI>/api/entitlements/v2/groups/" \
governance Assign Policy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-bicep.md
Title: "Quickstart: New policy assignment with Bicep file"
-description: In this quickstart, you use a Bicep file to create a policy assignment to identify non-compliant resources.
Previously updated : 03/24/2022
+ Title: Create a policy assignment with Bicep file
+description: In this quickstart, you use a Bicep file to create an Azure policy assignment that identifies non-compliant resources.
Last updated : 01/08/2024
-# Quickstart: Create a policy assignment to identify non-compliant resources by using a Bicep file
-The first step in understanding compliance in Azure is to identify the status of your resources.
-This quickstart steps you through the process of using a
-[Bicep](https://github.com/Azure/bicep) file compiled to an Azure Resource
-Manager (ARM) deployment template to create a policy assignment to identify virtual machines that
-aren't using managed disks. At the end of this process, you'll successfully identify virtual
-machines that aren't using managed disks. They're _non-compliant_ with the policy assignment.
+# Quickstart: Create a policy assignment to identify non-compliant resources by using a Bicep file
+In this quickstart, you use a Bicep file to create a policy assignment that validates resource compliance with an Azure policy. The policy is assigned to a resource group scope and audits whether virtual machines use managed disks. Virtual machines deployed in the resource group that don't use managed disks are _non-compliant_ with the policy assignment.
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the
-**Deploy to Azure** button. The template opens in the Azure portal.
+> [!NOTE]
+> Azure Policy is a free service. For more information, go to [Overview of Azure Policy](./overview.md).
## Prerequisites -- If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/)
- account before you begin.
-- Bicep version `0.3` or higher installed. If you don't yet have Bicep CLI or need to update, see
- [Install Bicep](../../azure-resource-manager/bicep/install.md).
+- If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- [Bicep](../../azure-resource-manager/bicep/install.md).
+- [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
+- [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep).
+- `Microsoft.PolicyInsights` must be [registered](../../azure-resource-manager/management/resource-providers-and-types.md) in your Azure subscription.
## Review the Bicep file
-In this quickstart, you create a policy assignment and assign a built-in policy definition called [_Audit VMs that do not use managed disks_](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json). For a partial
-list of available built-in policies, see [Azure Policy samples](./samples/index.md).
+The Bicep file creates a policy assignment for a resource group scope and assigns the built-in policy definition [Audit VMs that do not use managed disks](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json). For a list of available built-in policies, see [Azure Policy samples](./samples/index.md).
-Create the following Bicep file as `assignment.bicep`:
+Create the following Bicep file as _policy-assignment.bicep_.
+
+1. Open Visual Studio Code and select **File** > **New Text File**.
+1. Copy and paste the Bicep file into Visual Studio Code.
+1. Select **File** > **Save** and use the filename _policy-assignment.bicep_.
```bicep
-param policyAssignmentName string = 'audit-vm-manageddisks'
+param policyAssignmentName string = 'audit-vm-managed-disks'
param policyDefinitionID string = '/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d'
-resource assignment 'Microsoft.Authorization/policyAssignments@2021-09-01' = {
- name: policyAssignmentName
- scope: subscriptionResourceId('Microsoft.Resources/resourceGroups', resourceGroup().name)
- properties: {
- policyDefinitionId: policyDefinitionID
- }
+resource assignment 'Microsoft.Authorization/policyAssignments@2023-04-01' = {
+ name: policyAssignmentName
+ scope: resourceGroup()
+ properties: {
+ policyDefinitionId: policyDefinitionID
+ description: 'Policy assignment to resource group scope created with Bicep file'
+ displayName: 'audit-vm-managed-disks'
+ nonComplianceMessages: [
+ {
+ message: 'Virtual machines should use managed disks'
+ }
+ ]
+ }
} output assignmentId string = assignment.id ```
-The resource defined in the file is:
+The resource type defined in the Bicep file is [Microsoft.Authorization/policyAssignments](/azure/templates/microsoft.authorization/policyassignments).
-- [Microsoft.Authorization/policyAssignments](/azure/templates/microsoft.authorization/policyassignments)
+For more information about Bicep files:
-## Deploy the template
+- To find more Bicep samples, go to [Browse code samples](/samples/browse/?expanded=azure&languages=bicep).
+- To learn more about template references for deployments, go to [Azure template reference](/azure/templates/microsoft.authorization/allversions).
+- To learn how to develop Bicep files, go to [Bicep documentation](../../azure-resource-manager/bicep/overview.md).
+- To learn about subscription-level deployments, go to [Subscription deployments with Bicep files](../../azure-resource-manager/bicep/deploy-to-subscription.md).
-> [!NOTE]
-> Azure Policy service is free. For more information, see
-> [Overview of Azure Policy](./overview.md).
+## Deploy the Bicep file
+
+You can deploy the Bicep file with Azure PowerShell or Azure CLI.
+
+From a Visual Studio Code terminal session, connect to Azure. If you have more than one subscription, run the commands to set context to your subscription. Replace `<subscriptionID>` with your Azure subscription ID.
+
+# [PowerShell](#tab/azure-powershell)
+```azurepowershell
+Connect-AzAccount
+
+# Run these commands if you have multiple subscriptions
+Get-AzSubscription
+Set-AzContext -Subscription <subscriptionID>
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az login
+
+# Run these commands if you have multiple subscriptions
+az account list --output table
+az account set --subscription <subscriptionID>
+```
++
-After the Bicep CLI is installed and file created, you can deploy the Bicep file with:
+The following commands create a resource group and deploy the policy assignment.
# [PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
+```azurepowershell
+New-AzResourceGroup -Name "PolicyGroup" -Location "westus"
+ New-AzResourceGroupDeployment ` -Name PolicyDeployment ` -ResourceGroupName PolicyGroup `
- -TemplateFile assignment.bicep
+ -TemplateFile policy-assignment.bicep
``` # [Azure CLI](#tab/azure-cli)
-```azurecli-interactive
+```azurecli
+az group create --name "PolicyGroup" --location "westus"
+ az deployment group create \ --name PolicyDeployment \ --resource-group PolicyGroup \
- --template-file assignment.bicep
+ --template-file policy-assignment.bicep
```
-Some other resources:
--- To find more samples templates, see
- [Azure Quickstart Template](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Authorization&pageNumber=1&sort=Popular).
-- To see the template reference, go to
- [Azure template reference](/azure/templates/microsoft.authorization/allversions).
-- To learn how to develop ARM templates, see
- [Azure Resource Manager documentation](../../azure-resource-manager/management/overview.md).
-- To learn subscription-level deployment, see
- [Create resource groups and resources at the subscription level](../../azure-resource-manager/templates/deploy-to-subscription.md).
+The Bicep file outputs the policy `assignmentId`. You create a variable for the policy assignment ID in the commands that validate the deployment.
## Validate the deployment
-Select **Compliance** in the left side of the page. Then locate the _Audit VMs that do not use
-managed disks_ policy assignment you created.
+After the policy assignment is deployed, virtual machines that are deployed to the _PolicyGroup_ resource group are audited for compliance with the managed disk policy.
+1. Sign in to [Azure portal](https://portal.azure.com)
+1. Go to **Policy** and select **Compliance** on the left side of the page.
+1. Search for the _audit-vm-managed-disks_ policy assignment.
-If there are any existing resources that aren't compliant with this new assignment, they appear
-under **Non-compliant resources**.
+The **Compliance state** for a new policy assignment is shown as **Not started** because it takes a few minutes to become active.
-For more information, see
-[How compliance works](./concepts/compliance-states.md).
-## Clean up resources
+For more information, go to [How compliance works](./concepts/compliance-states.md).
+
+You can also get the compliance state with Azure PowerShell or Azure CLI.
+
+# [PowerShell](#tab/azure-powershell)
+```azurepowershell
+# Verifies policy assignment was deployed
+$rg = Get-AzResourceGroup -Name "PolicyGroup"
+Get-AzPolicyAssignment -Name "audit-vm-managed-disks" -Scope $rg.ResourceId
+
+# Shows the number of non-compliant resources and policies
+$policyid = (Get-AzPolicyAssignment -Name "audit-vm-managed-disks" -Scope $rg.ResourceId)
+Get-AzPolicyStateSummary -ResourceId $policyid.ResourceId
+```
+
+The `$rg` variable stores the resource group's properties and `Get-AzPolicyAssignment` shows your policy assignment. The `$policyid` variable stores the policy assignment object, and its `ResourceId` property is passed to `Get-AzPolicyStateSummary` to show the number of non-compliant resources and policies.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+# Verifies policy assignment was deployed
+rg=$(az group show --resource-group PolicyGroup --query id --output tsv)
+az policy assignment show --name "audit-vm-managed-disks" --scope $rg
-To remove the assignment created, follow these steps:
+# Shows the number of non-compliant resources and policies
+policyid=$(az policy assignment show --name "audit-vm-managed-disks" --scope $rg --query id --output tsv)
+az policy state summarize --resource $policyid
+```
+
+The `$rg` variable stores the resource group's resource ID and `az policy assignment show` displays your policy assignment. The `$policyid` variable stores the policy assignment's resource ID, and `az policy state summarize` shows the number of non-compliant resources and policies.
+++
+## Clean up resources
-1. Select **Compliance** (or **Assignments**) in the left side of the Azure Policy page and locate
- the _Audit VMs that do not use managed disks_ policy assignment you created.
+To remove the assignment from Azure, follow these steps:
-1. Right-click the _Audit VMs that do not use managed disks_ policy assignment and select **Delete
+1. Select **Compliance** in the left side of the Azure Policy page.
+1. Locate the _audit-vm-managed-disks_ policy assignment.
+1. Right-click the _audit-vm-managed-disks_ policy assignment and select **Delete
assignment**.
- :::image type="content" source="./media/assign-policy-template/delete-assignment.png" alt-text="Screenshot of using the context menu to delete an assignment from the Compliance page." border="false":::
+ :::image type="content" source="./media/assign-policy-bicep/delete-assignment.png" alt-text="Screenshot of the context menu to delete an assignment from the Policy Compliance page.":::
-1. Delete the `assignment.bicep` file.
+1. Delete the resource group _PolicyGroup_. Go to the Azure resource group and select **Delete resource group**.
+1. Delete the _policy-assignment.bicep_ file.
+
+You can also delete the policy assignment and resource group with Azure PowerShell or Azure CLI.
+
+# [PowerShell](#tab/azure-powershell)
+```azurepowershell
+Remove-AzPolicyAssignment -Id $policyid.ResourceId
+Remove-AzResourceGroup -Name "PolicyGroup"
+
+# Sign out of Azure
+Disconnect-AzAccount
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az policy assignment delete --name "audit-vm-managed-disks" --scope $rg
+az group delete --name PolicyGroup
+
+# Sign out of Azure
+az logout
+```
++ ## Next steps
-In this quickstart, you assigned a built-in policy definition to a scope and evaluated its
-compliance report. The policy definition validates that all the resources in the scope are compliant
-and identifies which ones aren't.
+In this quickstart, you assigned a built-in policy definition to a resource group scope and reviewed its compliance report. The policy definition audits if the virtual machine resources in the resource group are compliant and identifies resources that aren't compliant.
To learn more about assigning policies to validate that new resources are compliant, continue to the
-tutorial for:
+tutorial.
> [!div class="nextstepaction"] > [Creating and managing policies](./tutorials/create-and-manage.md)
hdinsight Hdinsight Use Mapreduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/hdinsight-use-mapreduce.md
description: Learn how to run Apache MapReduce jobs on Apache Hadoop in HDInsigh
Previously updated : 12/21/2022 Last updated : 01/04/2024 # Use MapReduce in Apache Hadoop on HDInsight
Learn how to run MapReduce jobs on HDInsight clusters.
## Example data
-HDInsight provides various example data sets, which are stored in the `/example/data` and `/HdiSamples` directory. These directories are in the default storage for your cluster. In this document, we use the `/example/data/gutenberg/davinci.txt` file. This file contains the notebooks of Leonardo da Vinci.
+HDInsight provides various example data sets, which are stored in the `/example/data` and `/HdiSamples` directory. These directories are in the default storage for your cluster. In this document, we use the `/example/data/gutenberg/davinci.txt` file. This file contains the notebooks of `Leonardo da Vinci`.
## Example MapReduce
HDInsight can run HiveQL jobs by using various methods. Use the following table
| **Use this**... | **...to do this** | ...from this **client operating system** | |: |: |: |: |
-| [SSH](apache-hadoop-use-mapreduce-ssh.md) |Use the Hadoop command through **SSH** |Linux, Unix, Mac OS X, or Windows |
-| [Curl](apache-hadoop-use-mapreduce-curl.md) |Submit the job remotely by using **REST** |Linux, Unix, Mac OS X, or Windows |
+| [SSH](apache-hadoop-use-mapreduce-ssh.md) |Use the Hadoop command through **SSH** |Linux, Unix, `MacOS X`, or Windows |
+| [Curl](apache-hadoop-use-mapreduce-curl.md) |Submit the job remotely by using **REST** |Linux, Unix, `MacOS X`, or Windows |
| [Windows PowerShell](apache-hadoop-use-mapreduce-powershell.md) |Submit the job remotely by using **Windows PowerShell** |Windows | ## Next steps
hdinsight Apache Hbase Backup Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-backup-replication.md
description: Set up Backup and replication for Apache HBase and Apache Phoenix i
Previously updated : 12/27/2022 Last updated : 01/04/2024 # Set up backup and replication for Apache HBase and Apache Phoenix on HDInsight
To acquire the quorum host names, run the following curl command:
curl -u admin:<password> -X GET -H "X-Requested-By: ambari" "https://<clusterName>.azurehdinsight.net/api/v1/clusters/<clusterName>/configurations?type=hbase-site&tag=TOPOLOGY_RESOLVED" | grep "hbase.zookeeper.quorum" ```
-The curl command retrieves a JSON document with HBase configuration information, and the grep command returns only the "hbase.zookeeper.quorum" entry, for example:
+The curl command retrieves a JSON document with HBase configuration information, and the `grep` command returns only the "hbase.zookeeper.quorum" entry, for example:
```output "hbase.zookeeper.quorum" : "<zookeepername1>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,<zookeepername2>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net,<zookeepername3>.54o2oqawzlwevlfxgay2500xtg.dx.internal.cloudapp.net"
hdinsight Hdinsight Apps Install Custom Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-apps-install-custom-applications.md
description: Learn how to install HDInsight applications for Apache Hadoop clust
Previously updated : 12/21/2022 Last updated : 01/04/2024 # Install custom Apache Hadoop applications on Azure HDInsight
For **Hue**, you can use the following steps:
### Azure CLI
-Replace `CLUSTERNAME`, and `RESOURCEGROUP` with the relevant values and then enter the commands below:
+Replace `CLUSTERNAME`, and `RESOURCEGROUP` with the relevant values and then enter the following commands:
-* To lists all of the applications for the HDInsight cluster.
+* To list all of the applications for the HDInsight cluster.
```azurecli az hdinsight application list --cluster-name CLUSTERNAME --resource-group RESOURCEGROUP
If an application installation failed, you can see the error messages and debug
### Azure CLI
-Replace `NAME`, `CLUSTERNAME`, and `RESOURCEGROUP` with the relevant values and then enter the command below:
+Replace `NAME`, `CLUSTERNAME`, and `RESOURCEGROUP` with the relevant values and then enter the following command:
```azurecli az hdinsight application delete --name NAME --cluster-name CLUSTERNAME --resource-group RESOURCEGROUP
hdinsight Hdinsight Log Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-log-management.md
description: Determine the types, sizes, and retention policies for HDInsight ac
Previously updated : 12/07/2022 Last updated : 01/04/2024 # Manage logs for an HDInsight cluster
It's important to understand the workload types running on your HDInsight cluste
* Consider how you can collect logs from the cluster, or from more than one cluster, and collate them for purposes such as auditing, monitoring, planning, and alerting. You might use a custom solution to access and download the log files regularly, and combine and analyze them to provide a dashboard display. You can also add other capabilities for alerting for security or failure detection. You can build these utilities using PowerShell, the HDInsight SDKs, or code that accesses the Azure classic deployment model.
-* Consider whether a monitoring solution or service would be a useful benefit. The Microsoft System Center provides an [HDInsight management pack](https://systemcenter.wiki/?Get_ManagementPackBundle=Microsoft.HDInsight.mpb&FileMD5=10C7D975C6096FFAA22C84626D211259). You can also use third-party tools such as Apache Chukwa and Ganglia to collect and centralize logs. Many companies offer services to monitor Hadoop-based big data solutions, for example: Centerity, Compuware APM, Sematext SPM, and Zettaset Orchestrator.
+* Consider whether a monitoring solution or service would be a useful benefit. The Microsoft System Center provides an [HDInsight management pack](https://systemcenter.wiki/?Get_ManagementPackBundle=Microsoft.HDInsight.mpb&FileMD5=10C7D975C6096FFAA22C84626D211259). You can also use third-party tools such as Apache Chukwa and Ganglia to collect and centralize logs. Many companies offer services to monitor Hadoop-based big data solutions, for example: `Centerity`, Compuware APM, Sematext SPM, and Zettaset Orchestrator.
## Step 2: Manage cluster service versions and view logs
YARN aggregates logs across all containers on a worker node and stores those log
/app-logs/<user>/logs/<applicationId> ```
-The aggregated logs aren't directly readable, as they're written in a TFile binary format indexed by container. Use the YARN ResourceManager logs or CLI tools to view these logs as plain text for applications or containers of interest.
+The aggregated logs aren't directly readable, as they're written in a `TFile` binary format indexed by container. Use the YARN `ResourceManager` logs or CLI tools to view these logs as plain text for applications or containers of interest.
#### YARN CLI tools
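For example, a minimal YARN CLI invocation to print one application's aggregated logs as plain text might look like the following; the application ID is a placeholder you'd take from the ResourceManager UI or from `yarn application -list`.

```bash
# Print the aggregated logs of one application as plain text
# (replace the application ID with your own)
yarn logs -applicationId application_1234567890123_0001
```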
healthcare-apis Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/known-issues.md
Refer to the table below to find details about resolution dates or possible work
|Issue | Date discovered | Workaround | Date resolved | | :- | : | :- | :- |
-|API queries to FHIR service returned Internal Server error in UK south region |August 10th 2023 9:53 am PST|--|August 10th 2023 10:43 am PST|
+|FHIR Applications were down in EUS2 region|January 8, 2024 2 pm PST|--|January 8, 2024 4:15 pm PST|
+|API queries to FHIR service returned Internal Server error in UK south region |August 10, 2023 9:53 am PST|--|August 10, 2023 10:43 am PST|
|FHIR resources are not queryable by custom search parameters even after reindex is successful.| July 2023| Suggested workaround is to create support ticket to update the status of custom search parameters after reindex is successful.|--|
-|Using [token type](https://www.hl7.org/fhir/search.html#token) fields of length more than 128 characters can result in undesired behavior on `create`, `search`, `update`, and `delete` operations. | August 2022 |- | Resolved, customers impacted with 128 characters issue are notified on resolution. |
-|The SQL provider causes the `RawResource` column in the database to save incorrectly. This occurs in a few cases when a transient exception occurs that causes the provider to use its retry logic.ΓÇ»|April 2022 |-|May 2022 Resolved [#2571](https://github.com/microsoft/fhir-server/pull/2571) |
-| Queries not providing consistent result counts after appended with `_sort` operator. For more information, see [#2680](https://github.com/microsoft/fhir-server/pull/2680). | July 2022 | -|August 2022 Resolved [#2680](https://github.com/microsoft/fhir-server/pull/2680) |
## Next steps
iot-edge How To Continuous Integration Continuous Deployment Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-continuous-integration-continuous-deployment-classic.md
- Title: Continuous integration and continuous deployment to Azure IoT Edge devices (classic editor) - Azure IoT Edge
-description: Set up continuous integration and continuous deployment using the classic editor - Azure IoT Edge with Azure DevOps, Azure Pipelines
--- Previously updated : 08/26/2021-----
-# Continuous integration and continuous deployment to Azure IoT Edge devices (classic editor)
--
-Azure Pipelines includes a built-in Azure IoT Edge task that helps you adopt DevOps with your Azure IoT Edge applications. This article demonstrates how to use the continuous integration and continuous deployment features of Azure Pipelines to build, test, and deploy applications quickly and efficiently to your Azure IoT Edge using the classic editor. Alternatively, you can [use YAML](how-to-continuous-integration-continuous-deployment.md).
--
-In this article, you learn how to use the built-in [Azure IoT Edge tasks](/azure/devops/pipelines/tasks/build/azure-iot-edge) for Azure Pipelines to create build and release pipelines for your IoT Edge solution. Each Azure IoT Edge task added to your pipeline implements one of the following four actions:
-
- | Action | Description |
- | | |
- | Build module images | Takes your IoT Edge solution code and builds the container images.|
- | Push module images | Pushes module images to the container registry you specified. |
- | Generate deployment manifest | Takes a deployment.template.json file and the variables, then generates the final IoT Edge deployment manifest file. |
- | Deploy to IoT Edge devices | Creates IoT Edge deployments to one or more IoT Edge devices. |
-
-Unless otherwise specified, the procedures in this article do not explore all the functionality available through task parameters. For more information, see the following resources:
-
-* [Task version](/azure/devops/pipelines/process/tasks?tabs=classic#task-versions)
-* **Advanced** - If applicable, specify modules that you do not want built.
-* [Control Options](/azure/devops/pipelines/process/tasks?tabs=classic#task-control-options)
-* [Environment Variables](/azure/devops/pipelines/process/variables?tabs=classic#environment-variables)
-* [Output variables](/azure/devops/pipelines/process/variables?tabs=classic#use-output-variables-from-tasks)
-
-## Prerequisites
-
-* An Azure Repos repository. If you don't have one, you can [Create a new Git repo in your project](/azure/devops/repos/git/create-new-repo). For this article, we created a repository called **IoTEdgeRepo**.
-* An IoT Edge solution committed and pushed to your repository. If you want to create a new sample solution for testing this article, follow the steps in [Develop Azure IoT Edge modules using Visual Studio Code](tutorial-develop-for-linux.md). For this article, we created a solution in our repository called **IoTEdgeSolution**, which has the code for a module named **filtermodule**.
-
- For this article, all you need is the solution folder created by the IoT Edge templates in either Visual Studio Code or Visual Studio. You don't need to build, push, deploy, or debug this code before proceeding. You'll set up those processes in Azure Pipelines.
-
- Know the path to the **deployment.template.json** file in your solution, which is used in several steps. If you're unfamiliar with the role of the deployment template, see [Learn how to deploy modules and establish routes](module-composition.md).
-
- >[!TIP]
- >If you're creating a new solution, clone your repository locally first. Then, when you create the solution you can choose to create it directly in the repository folder. You can easily commit and push the new files from there.
-
-* A container registry where you can push module images. You can use [Azure Container Registry](../container-registry/index.yml) or a third-party registry.
-* An active Azure [IoT hub](../iot-hub/iot-hub-create-through-portal.md) with at least two IoT Edge devices for testing the separate test and production deployment stages. You can follow the quickstart articles to create an IoT Edge device on [Linux](quickstart-linux.md) or [Windows](quickstart.md)
-
-## Create a build pipeline for continuous integration
-
-In this section, you create a new build pipeline. You configure the pipeline to run automatically and publish build logs whenever you check in changes to the IoT Edge solution.
-
-1. Sign in to your Azure DevOps organization (`https://dev.azure.com/{your organization}`) and open the project that contains your IoT Edge solution repository.
-
- :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/initial-project.png" alt-text="Screenshot that shows how to open your DevOps project.":::
-
-2. From the left pane menu in your project, select **Pipelines**. Select **Create Pipeline** at the center of the page. Or, if you already have build pipelines, select the **New pipeline** button in the top right.
-
- :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/add-new-pipeline.png" alt-text="Screenshot that shows how to create a new build pipeline.":::
-
-3. At the bottom of the **Where is your code?** page, select **Use the classic editor**. If you wish to use YAML to create your project's build pipelines, see the [YAML guide](how-to-continuous-integration-continuous-deployment.md).
-
- :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/create-without-yaml.png" alt-text="Screenshot that shows how to use the classic editor.":::
-
-4. Follow the prompts to create your pipeline.
-
- 1. Provide the source information for your new build pipeline. Select **Azure Repos Git** as the source, then select the project, repository, and branch where your IoT Edge solution code is located. Then, select **Continue**.
-
- :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/pipeline-source.png" alt-text="Screenshot showing how to select your pipeline source.":::
-
- 2. Select **Empty job** instead of a template.
-
- :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/start-with-empty-build-job.png" alt-text="Screenshot showing how to start with an empty job for your build pipeline.":::
-
-5. Once your pipeline is created, you are taken to the pipeline editor. Here, you can change the pipeline's name, agent pool, and agent specification.
-
- You can select a Microsoft-hosted pool, or a self-hosted pool that you manage.
-
- In your pipeline description, choose the correct agent specification based on your target platform:
-
- * If you would like to build your modules in platform amd64 for Linux containers, choose **ubuntu-18.04**
-
- * If you would like to build your modules in platform amd64 for Windows 1809 containers, you need to [set up self-hosted agent on Windows](/azure/devops/pipelines/agents/v2-windows).
-
- * If you would like to build your modules in platform arm32v7 or arm64 for Linux containers, you need to [set up self-hosted agent on Linux](https://devblogs.microsoft.com/iotdev/setup-azure-iot-edge-ci-cd-pipeline-with-arm-agent).
-
- :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/configure-env.png" alt-text="Configure build agent specification.":::
-
-6. Your pipeline comes preconfigured with a job called **Agent job 1**. Select the plus sign (**+**) to add four tasks to the job: **Azure IoT Edge** twice, **Copy Files** once, and **Publish Build Artifacts** once. Search for each task and hover over the task's name to see the **Add** button.
-
- :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/add-iot-edge-task.png" alt-text="Add Azure IoT Edge task.":::
-
- When all four tasks are added, your Agent job looks like the following example:
-
- :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/add-tasks.png" alt-text="Four tasks in the build pipeline.":::
-
-7. Select the first **Azure IoT Edge** task to edit it. This task builds all modules in the solution with the target platform that you specify. Edit the task with the following values:
-
- | Parameter | Description |
- | | |
- | Display name | The display name is automatically updated when the Action field changes. |
- | Action | Select **Build module images**. |
- | .template.json file | Select the ellipsis (**...**) and navigate to the **deployment.template.json** file in the repository that contains your IoT Edge solution. |
- | Default platform | Select the appropriate operating system for your modules based on your targeted IoT Edge device. |
- | Output variables | Provide a reference name to associate with the file path where your deployment.json file generates, such as **edge**. |
-
- For more information about this task and its parameters, see [Azure IoT Edge task](/azure/devops/pipelines/tasks/build/azure-iot-edge).
-
- These configurations use the image repository and tag that are defined in the `module.json` file to name and tag the module image. **Build module images** also helps replace the variables with the exact value you define in the `module.json` file. In Visual Studio or Visual Studio Code, you specify the actual value in a `.env` file. In Azure Pipelines, you set the value on the **Pipeline Variables** tab. Select the **Variables** tab on the pipeline editor menu and configure the name and value as following:
-
- * **ACR_ADDRESS**: Your Azure Container Registry **Login server** value. You can find the login server value on the container registry's overview page in the Azure portal.
-
- If you have other variables in your project, you can specify the name and value on this tab. **Build module images** recognizes only variables that are in `${VARIABLE}` format. Make sure you use this format in your `**/module.json` files.
-
-8. Select the second **Azure IoT Edge** task to edit it. This task pushes all module images to the container registry that you select.
-
- | Parameter | Description |
- | | |
- | Display name | The display name is automatically updated when the Action field changes. |
- | Action | Select **Push module images**. |
- | Container registry type | Use the default type: `Azure Container Registry`. |
- | Azure subscription | Choose your subscription. |
- | Azure Container Registry | Select the type of container registry that you use to store your module images. Depending on which registry type you choose, the form changes. If you choose **Azure Container Registry**, use the dropdown lists to select the Azure subscription and the name of your container registry. If you choose **Generic Container Registry**, select **New** to create a registry service connection. |
- | .template.json file | Select the ellipsis (**...**) and navigate to the **deployment.template.json** file in the repository that contains your IoT Edge solution. |
- | Default platform | Select the appropriate operating system for your modules based on your targeted IoT Edge device. |
- | Add registry credential to deployment manifest | Specify true to add the registry credential for pushing docker images to deployment manifest. |
-
- For more information about this task and its parameters, see [Azure IoT Edge task](/azure/devops/pipelines/tasks/build/azure-iot-edge).
-
- If you have multiple container registries to host your module images, you need to duplicate this task, select different container registry, and use **Bypass module(s)** in the **Advanced** settings to bypass the images that are not for this specific registry.
-
-9. Select the **Copy Files** task to edit it. Use this task to copy files to the artifact staging directory.
-
- | Parameter | Description |
- | | |
- | Display name | Use the default name or customize |
- | Source folder | The folder with the files to be copied. |
- | Contents | Add two lines: `deployment.template.json` and `**/module.json`. These two files serve as inputs to generate the IoT Edge deployment manifest. |
- | Target Folder | Specify the variable `$(Build.ArtifactStagingDirectory)`. See [Build variables](/azure/devops/pipelines/build/variables#build-variables) to learn about the description. |
-
- For more information about this task and its parameters, see [Copy files task](/azure/devops/pipelines/tasks/utility/copy-files?tabs=classic).
-
-10. Select the **Publish Build Artifacts** task to edit it. Provide artifact staging directory path to the task so that the path can be published to release pipeline.
-
- | Parameter | Description |
- | | |
- | Display name | Use the default name or customize. |
- | Path to publish | Specify the variable `$(Build.ArtifactStagingDirectory)`. See [Build variables](/azure/devops/pipelines/build/variables#build-variables) to learn more. |
- | Artifact name | Use the default name: **drop** |
- | Artifact publish location | Use the default location: **Azure Pipelines** |
-
- For more information about this task and its parameters, see [Publish build artifacts task](/azure/devops/pipelines/tasks/utility/publish-build-artifacts).
-
-11. Open the **Triggers** tab and check the box to **Enable continuous integration**. Make sure the branch containing your code is included.
-
- :::image type="content" source="./media/how-to-continuous-integration-continuous-deployment-classic/configure-trigger.png" alt-text="Screenshot showing how to turn on continuous integration trigger.":::
-
-12. Select **Save** from the **Save & queue** dropdown.
-
-This pipeline is now configured to run automatically when you push new code to your repo. The last task, publishing the pipeline artifacts, triggers a release pipeline. Continue to the next section to build the release pipeline.
--
->[!NOTE]
->Layered deployments are not yet supported in Azure IoT Edge tasks in Azure DevOps.
->
->However, you can use an [Azure CLI task in Azure DevOps](/azure/devops/pipelines/tasks/deploy/azure-cli) to create your deployment as a layered deployment. For the **Inline Script** value, you can use the [az iot edge deployment create command](/cli/azure/iot/edge/deployment):
->
->```azurecli-interactive
->az iot edge deployment create -d {deployment_name} -n {hub_name} --content modules_content.json --layered true
->```
--
-## Next steps
-
-* Understand the IoT Edge deployment in [Understand IoT Edge deployments for single devices or at scale](module-deployment-monitoring.md)
-* Walk through the steps to create, update, or delete a deployment in [Deploy and monitor IoT Edge modules at scale](how-to-deploy-at-scale.md).
iot-operations Howto Configure Data Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-data-lake.md
The specification field of a DataLakeConnectorTopicMap resource contains the fol
- `mqttSourceTopic`: The name of the MQTT topic(s) to subscribe to. Supports [MQTT topic wildcard notation](https://chat.openai.com/share/c6f86407-af73-4c18-88e5-f6053b03bc02). - `qos`: The quality of service level for subscribing to the MQTT topic. It can be one of 0 or 1. - `table`: The table field specifies the configuration and properties of the Delta table in the Data Lake Storage account. It has the following subfields:
- - `tableName`: The name of the Delta table to create or append to in the Data Lake Storage account. This field is also known as the container name when used with Azure Data Lake Storage Gen2. It can contain any English letter, upper or lower case, and underbar `_`, with length up to 256 characters. No dashes `-` or space characters are allowed.
+ - `tableName`: The name of the Delta table to create or append to in the Data Lake Storage account. This field is also known as the container name when used with Azure Data Lake Storage Gen2. It can contain **lowercase** English letters and underscores `_`, with a length of up to 256 characters. No dashes `-` or space characters are allowed.
- `schema`: The schema of the Delta table, which should match the format and fields of the message payload. It's an array of objects, each with the following subfields: - `name`: The name of the column in the Delta table. - `format`: The data type of the column in the Delta table. It can be one of `boolean`, `int8`, `int16`, `int32`, `int64`, `uInt8`, `uInt16`, `uInt32`, `uInt64`, `float16`, `float32`, `float64`, `date32`, `timestamp`, `binary`, or `utf8`. Unsigned types, like `uInt8`, aren't fully supported, and are treated as signed types if specified here.
spec:
mqttSourceTopic: "orders" qos: 1 table:
- tableName: "ordersTable"
+ tableName: "orders"
schema: - name: "orderId" format: int32
iot-operations Quickstart Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-deploy.md
The services deployed in this quickstart include:
Review the prerequisites based on the environment you use to host the Kubernetes cluster.
-For this quickstart, we recommend GitHub Codespaces as a quick way to get started in a virtual environment without installing new tools. Or, use AKS Edge Essentials to create a cluster on Windows devices or K3s on Ubuntu Linux devices.
+For this quickstart, we recommend GitHub Codespaces as a quick way to get started in a virtual environment without installing new tools. Or, use Azure Kubernetes Service (AKS) Edge Essentials to create a cluster on Windows devices or K3s on Ubuntu Linux devices.
# [Virtual](#tab/codespaces)
For this quickstart, we recommend GitHub Codespaces as a quick way to get starte
# [Windows](#tab/windows)
-* An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* In this quickstart, you use the `AksEdgeQuickStartForAio.ps1` script to set up an AKS Edge Essentials single-machine K3S Linux-only cluster. To learn more, see the [AKS Edge Essentials system requirements](/azure/aks/hybrid/aks-edge-system-requirements). For this quickstart, ensure that your machine has a minimum of 10 GB RAM, 4 vCPUs, and 40 GB free disk space.
-<!-- * Review the [AKS Edge Essentials requirements and support matrix](/azure/aks/hybrid/aks-edge-system-requirements) for other prerequisites, specifically the system and OS requirements. -->
+* An Azure subscription. If you don't have an Azure subscription, [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
* Azure CLI installed on your development machine. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli).
key-vault Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/backup-restore.md
Previously updated : 12/11/2023 Last updated : 01/09/2024 # Customer intent: As a developer using Key Vault I want to know the best practices so I can implement them.
There are 2 ways to execute a full backup. You must provide the following inform
[!INCLUDE [cloud-shell-try-it.md](../../../includes/cloud-shell-try-it.md)]
-#### Prerequisites if backing up and restoring using user assigned managed identity (preview):
+#### Prerequisites if backing up and restoring using user assigned managed identity:
-1. Ensure you have the Azure CLI version 2.54.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
+1. Ensure you have the Azure CLI version 2.56.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
2. Create a user assigned managed identity. 3. Create a storage account (or use an existing storage account). 4. If public network access is disabled on your storage account, enable trusted service bypass on the storage account in the "Networking" tab, under "Exceptions."
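As a rough Azure CLI sketch of the first two prerequisites (the resource group, names, and region below are placeholder values):

```azurecli
# Create a user-assigned managed identity, a storage account, and a blob container
# (resource group, names, and region are placeholder values)
az identity create --resource-group mhsm-rg --name mhsm-backup-identity --location eastus
az storage account create --resource-group mhsm-rg --name mhsmdemobackup --location eastus
az storage container create --account-name mhsmdemobackup --name mhsmdemobackupcontainer --auth-mode login
```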
Backup is a long running operation but will immediately return a Job ID. You can
While the backup is in progress, the HSM might not operate at full throughput as some HSM partitions will be busy performing the backup operation.
-### Backup HSM using user assigned managed identity (preview)
+### Backup HSM using user assigned managed identity
```azurecli-interactive az keyvault backup start --use-managed-identity true --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer ```
There are 2 ways to execute a full restore. You must provide the following infor
Restore is a long running operation but will immediately return a Job ID. You can check the status of the restore process using this Job ID. When the restore process is in progress, the HSM enters restore mode and all data plane commands (except check restore status) are disabled.
-### Restore HSM using user assigned managed identity (preview)
+### Restore HSM using user assigned managed identity
```azurecli-interactive az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer --backup-folder mhsm-backup-foldername --use-managed-identity true ```
az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemoba
Selective key restore allows you to restore one individual key with all its key versions from a previous backup to an HSM.
-### Selective key restore using user assigned managed identity (preview)
+### Selective key restore using user assigned managed identity
``` az keyvault restore start --hsm-name mhsmdemo2 --storage-account-name mhsmdemobackup --blob-container-name mhsmdemobackupcontainer --backup-folder mhsm-backup-foldername --use-managed-identity true --key-name rsa-key2 ```
key-vault Mhsm Control Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/mhsm-control-data.md
These administrative security controls are in place in Azure Key Vault Managed H
- **Data defense**. You have MicrosoftΓÇÖs strong commitment to challenge government requests and to [defend your data](https://blogs.microsoft.com/on-the-issues/2020/11/19/defending-your-data-edpb-gdpr/). - **Contractual obligations**. It offers control obligations for security and customer data protection as discussed in [Microsoft Trust Center](https://www.microsoft.com/trust-center?rtc=1).-- **[Cross-region replication](../../availability-zones/cross-region-replication-azure.md)**. Soon, you can use geo replication in Managed HSM to deploy HSMs in a secondary region.
+- **[Cross-region replication](../../availability-zones/cross-region-replication-azure.md)**. You can use multi region replication in Managed HSM to deploy HSMs in a secondary region.
- **Disaster recovery**. Azure offers an end-to-end backup and disaster recovery solution that is simple, secure, scalable, and cost-effective: - [Business continuity management program](../../availability-zones/business-continuity-management-program.md) - [Azure Site Recovery](../../site-recovery/index.yml)
key-vault Security Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/security-domain.md
Previously updated : 03/28/2022 Last updated : 12/18/2023 # Security domain in Managed HSM overview
A managed HSM is a single-tenant, [Federal Information Processing Standards (FIP
To operate, a managed HSM must have a security domain. The security domain is an encrypted blob file that contains artifacts like the HSM backup, user credentials, the signing key, and the data encryption key that's unique to the managed HSM.
-A managed HSM serves the following purposes:
+A managed HSM security domain serves the following purposes:
- Establishes "ownership" by cryptographically tying each managed HSM to a root of trust keys under your sole control. This ensures that Microsoft doesn't have access to your cryptographic key material on the managed HSM. - Sets the cryptographic boundary for key material in a managed HSM instance.
load-balancer Load Balancer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot.md
Previously updated : 12/05/2022 Last updated : 01/09/2024
When the Load Balancer connectivity is unavailable, the most common symptoms are
- VMs behind the Load Balancer aren't responding to health probes - VMs behind the Load Balancer aren't responding to the traffic on the configured port
-When the external clients to the backend VMs go through the load balancer, the IP address of the clients will be used for the communication. Make sure the IP address of the clients are added into the NSG allowlist.
+When external clients connect to the backend VMs through the load balancer, the client IP addresses are used for the communication. Make sure the client IP addresses are added to the NSG allowlist.
-## No outbound connectivity from Standard internal Load Balancers (ILB)
+## Problem: No outbound connectivity from Standard internal Load Balancers (ILB)
-**Validation and resolution**
+### Validation and resolution
Standard ILBs are **secure by default**. Basic ILBs allowed connecting to the internet via a *hidden* Public IP address called the default outbound access IP. This isn't recommended for production workloads as the IP address isn't static or locked down via network security groups that you own. If you recently moved from a Basic ILB to a Standard ILB, you should create a Public IP explicitly via [Outbound only](egress-only.md) configuration, which locks down the IP via network security groups. You can also use a [NAT Gateway](../virtual-network/nat-gateway/nat-overview.md) on your subnet. NAT Gateway is the recommended solution for outbound.
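As a rough sketch of the NAT Gateway approach (all resource names, the idle timeout, and the subnet are illustrative placeholders, not values from this article), the following Azure CLI commands create a Standard public IP, create a NAT gateway that uses it, and attach the gateway to the backend subnet:

```azurecli-interactive
# Create a Standard static public IP for outbound traffic (names are placeholders).
az network public-ip create --resource-group MyResourceGroup --name MyNatPublicIP --sku Standard --allocation-method Static

# Create the NAT gateway and associate the public IP with it.
az network nat gateway create --resource-group MyResourceGroup --name MyNatGateway --public-ip-addresses MyNatPublicIP --idle-timeout 10

# Attach the NAT gateway to the backend subnet so its VMs use it for outbound connectivity.
az network vnet subnet update --resource-group MyResourceGroup --vnet-name MyVnet --name MyBackendSubnet --nat-gateway MyNatGateway
```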
-## No inbound connectivity to Standard external Load Balancers (ELB)
+## Problem: No inbound connectivity to Standard external Load Balancers (ELB)
-### Cause: Standard load balancers and standard public IP addresses are closed to inbound connections unless opened by Network Security Groups. NSGs are used to explicitly permit allowed traffic. If you don't have an NSG on a subnet or NIC of your virtual machine resource, traffic isn't allowed to reach this resource.
+### Cause
+Standard load balancers and standard public IP addresses are closed to inbound connections unless opened by Network Security Groups. NSGs are used to explicitly permit allowed traffic. If you don't have an NSG on a subnet or NIC of your virtual machine resource, traffic isn't allowed to reach this resource.
-**Resolution**
+### Resolution
To allow the ingress traffic, add a network security group to the subnet or the network interface for your virtual machine resource.
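For example, the following Azure CLI sketch creates an NSG, adds an inbound allow rule, and associates the NSG with the backend subnet. The resource names and port 80 are assumptions for illustration; substitute the values for your own deployment.

```azurecli-interactive
# Create a network security group (names are placeholders).
az network nsg create --resource-group MyResourceGroup --name MyNsg

# Allow inbound traffic on the load-balanced port (port 80 is only an example).
az network nsg rule create --resource-group MyResourceGroup --nsg-name MyNsg --name AllowLoadBalancedPort --priority 100 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 80

# Associate the NSG with the backend subnet (you can instead associate it with a NIC).
az network vnet subnet update --resource-group MyResourceGroup --vnet-name MyVnet --name MyBackendSubnet --network-security-group MyNsg
```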
-## Can't change backend port for existing LB rule of a load balancer that has Virtual Machine Scale Set deployed in the backend pool.
+## Problem: Can't change the backend port for an existing LB rule of a load balancer that has a Virtual Machine Scale Set deployed in the backend pool
-### Cause: The backend port can't be modified for a load balancing rule that's used by a health probe for load balancer referenced by Virtual Machine Scale Set
+### Cause
+The backend port can't be modified for a load balancing rule that's used by a health probe for a load balancer referenced by a Virtual Machine Scale Set.
-**Resolution**
+### Resolution
To change the port, remove the health probe by updating the Virtual Machine Scale Set, update the port, and then configure the health probe again.
-## Small traffic is still going through load balancer after removing VMs from backend pool of the load balancer.
+## Problem: A small amount of traffic still goes through the load balancer after removing VMs from the backend pool
-### Cause: VMs removed from backend pool should no longer receive traffic. The small amount of network traffic could be related to storage, DNS, and other functions within Azure.
+### Cause
+VMs removed from the backend pool should no longer receive traffic. The small amount of network traffic could be related to storage, DNS, and other functions within Azure.
+### Resolution
To verify, you can conduct a network trace. The Fully Qualified Domain Name (FQDN) used for your blob storage account is listed within the properties of each storage account. From a virtual machine within your Azure subscription, you can perform `nslookup` to determine the Azure IP assigned to that storage account.
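For example, assuming a hypothetical storage account named `mystorageaccount`, the lookup would resemble the following; the address returned is the Azure IP currently serving that endpoint:

```
nslookup mystorageaccount.blob.core.windows.net
```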
-## Additional network captures
-
-If you decide to open a support case, collect the following information for a quicker resolution. Choose a single backend VM to perform the following tests:
--- Use `ps ping` from one of the backend VMs within the VNet to test the probe port response (example: ps ping 10.0.0.4:3389) and record results. -- If no response is received in these ping tests, run a simultaneous Netsh trace on the backend VM and the VNet test VM while you run PsPing then stop the Netsh trace.-
-## Load Balancer in failed state
-
-**Resolution**
+## Problem: Load Balancer in failed state
+### Resolution
- Once you identify the resource that is in a failed state, go to [Azure Resource Explorer](https://resources.azure.com/) and identify the resource in this state. - Update the toggle on the right-hand top corner to **Read/Write**. - Select **Edit** for the resource in failed state. - Select **PUT** followed by **GET** to ensure the provisioning state was updated to Succeeded. - You can then proceed with other actions as the resource is out of failed state.
+## Network captures needed for troubleshooting and support cases
+
+If you decide to open a support case, collect the following information for a quicker resolution. Choose a single backend VM to perform the following tests:
+
+- Use `psping` from one of the backend VMs within the virtual network to test the probe port response (for example: `psping 10.0.0.4:3389`) and record the results.
+- If no response is received in these ping tests, run a simultaneous Netsh trace on the backend VM and the virtual network test VM while you run PsPing, and then stop the Netsh trace. A minimal command sketch follows this list.
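The following sketch assumes PsPing from the Sysinternals suite is available on the VM and that the probe port is 3389; the IP address, port, and trace file path are illustrative only:

```
:: Test the probe port response from a backend VM (IP and port are examples).
psping 10.0.0.4:3389

:: If there's no response, capture a trace while repeating the PsPing test, then stop it.
netsh trace start capture=yes tracefile=C:\temp\lb-probe-trace.etl
netsh trace stop
```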
## Next steps If the preceding steps don't resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 12/11/2023 Last updated : 01/09/2024
Your logic app also has *host settings*, which specify the runtime configuration
## App settings, parameters, and deployment
-In *multi-tenant* Azure Logic Apps, deployment depends on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both logic apps and infrastructure. This design poses a challenge when you have to maintain environment variables for logic apps across various dev, test, and production environments. Everything in an ARM template is defined at deployment. If you need to change just a single variable, you have to redeploy everything.
+In *multitenant* Azure Logic Apps, deployment depends on Azure Resource Manager templates (ARM templates), which combine and handle resource provisioning for both logic apps and infrastructure. This design poses a challenge when you have to maintain environment variables for logic apps across various dev, test, and production environments. Everything in an ARM template is defined at deployment. If you need to change just a single variable, you have to redeploy everything.
In *single-tenant* Azure Logic Apps, deployment becomes easier because you can separate resource provisioning between apps and infrastructure. You can use *parameters* to abstract values that might change between environments. By defining parameters to use in your workflows, you can first focus on designing your workflows, and then insert your environment-specific variables later. You can call and reference your environment variables at runtime by using app settings and parameters. That way, you don't have to redeploy as often.
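For illustration only, a Standard logic app **parameters.json** file might define environment-specific values like the following sketch. The parameter names are hypothetical; the `@appsetting()` reference shows how a parameter can pull its value from an app setting at runtime, so only the app settings change between environments:

```json
{
  "ServiceBusQueueName": {
    "type": "String",
    "value": "@appsetting('ServiceBusQueueName')"
  },
  "RetryLimit": {
    "type": "Int",
    "value": 3
  }
}
```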
These settings affect the throughput and capacity for single-tenant Azure Logic
| `Jobs.BackgroundJobs.NumWorkersPerProcessorCount` | `192` dispatcher worker instances | Sets the number of *dispatcher worker instances* or *job dispatchers* to have per processor core. This value affects the number of workflow runs per core. | | `Jobs.BackgroundJobs.StatelessNumWorkersPerProcessorCount` | `192` dispatcher worker instances | Sets the number of *dispatcher worker instances* or *job dispatchers* to have per processor core, per stateless run. This value affects the number of concurrent workflow actions that are processed per run. |
+The following settings are used to manually stop or immediately delete the run jobs for the specified workflows in a Standard logic app.
+
+> [!NOTE]
+>
+> Use these settings with caution and only in non-production environments, such as load
+> or performance test environments, as you can't undo or recover from these operations.
+
+| Setting | Default value | Description |
+|||-|
+| `Jobs.CleanupJobPartitionPrefixes` | None | Immediately deletes all the run jobs for the specified workflows. |
+| `Jobs.SuspendedJobPartitionPrefixes` | None | Stops the run jobs for the specified workflows. |
+
+The following example shows the syntax for these settings where each workflow ID is followed by a colon (**:**) and separated by a semicolon (**;**):
+
+```json
+"Jobs.CleanupJobPartitionPrefixes": "<workflow-ID-1>:; <workflow-ID-2:",
+"Jobs.SuspendedJobPartitionPrefixes": "<workflow-ID-1>:; <workflow-ID-2>:"
+```
+ <a name="recurrence-triggers"></a> ### Recurrence-based triggers
The following settings work only for workflows that start with a recurrence-base
| `Runtime.Backend.HttpOperation.DefaultRetryInterval` | `00:00:07` <br>(7 sec) | Sets the default retry interval for HTTP triggers and actions. | | `Runtime.Backend.HttpOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 hour) | Sets the maximum retry interval for HTTP triggers and actions. | | `Runtime.Backend.HttpOperation.DefaultRetryMinimumInterval` | `00:00:05` <br>(5 sec) | Sets the minimum retry interval for HTTP triggers and actions. |
-| `Runtime.Backend.HttpOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for HTTP triggers and actions. |
+| `Runtime.Backend.HttpOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for HTTP actions only, not triggers. For more information, see [Limitations](#limitations). |
| `Runtime.Backend.HttpOperation.RequestTimeout` | `00:03:45` <br>(3 min and 45 sec) | Sets the request timeout value for HTTP triggers and actions. | <a name="http-webhook"></a>
The following settings work only for workflows that start with a recurrence-base
| `Runtime.Backend.HttpWebhookOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 hour) | Sets the maximum retry interval for HTTP webhook triggers and actions. | | `Runtime.Backend.HttpWebhookOperation.DefaultRetryMinimumInterval` | `00:00:05` <br>(5 sec) | Sets the minimum retry interval for HTTP webhook triggers and actions. | | `Runtime.Backend.HttpWebhookOperation.DefaultWakeUpInterval` | `01:00:00` <br>(1 hour) | Sets the default wake-up interval for HTTP webhook trigger and action jobs. |
-| `Runtime.Backend.HttpWebhookOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for HTTP webhook triggers and actions. |
+| `Runtime.Backend.HttpWebhookOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for HTTP webhook actions only, not triggers. For more information, see [Limitations](#limitations). |
| `Runtime.Backend.HttpWebhookOperation.RequestTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for HTTP webhook triggers and actions. | <a name="built-in-storage"></a>
The following settings work only for workflows that start with a recurrence-base
| Setting | Default value | Description | |||-| | `Runtime.Backend.FunctionOperation.RequestTimeout` | `00:03:45` <br>(3 min and 45 sec) | Sets the request timeout value for Azure Functions actions. |
-| `Runtime.Backend.FunctionOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for Azure Functions actions. |
+| `Runtime.Backend.FunctionOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for Azure Functions actions. For more information, see [Limitations](#limitations). |
| `Runtime.Backend.FunctionOperation.DefaultRetryCount` | `4` retries | Sets the default retry count for Azure Functions actions. | | `Runtime.Backend.FunctionOperation.DefaultRetryInterval` | `00:00:07` <br>(7 sec) | Sets the default retry interval for Azure Functions actions. | | `Runtime.Backend.FunctionOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 hour) | Sets the maximum retry interval for Azure Functions actions. |
The following settings work only for workflows that start with a recurrence-base
| Setting | Default value | Description | |||-| | `Runtime.Backend.ApiConnectionOperation.RequestTimeout` | `00:02:00` <br>(2 min) | Sets the request timeout value for managed API connector triggers and actions. |
-| `Runtime.Backend.ApiConnectionOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for managed API connector triggers and actions. |
+| `Runtime.Backend.ApiConnectionOperation.MaxContentSize` | `104857600` bytes | Sets the maximum request size in bytes for managed API connector triggers and actions. For more information, see [Limitations](#limitations). |
| `Runtime.Backend.ApiConnectionOperation.DefaultRetryCount` | `4` retries | Sets the default retry count for managed API connector triggers and actions. | | `Runtime.Backend.ApiConnectionOperation.DefaultRetryInterval` | `00:00:07` <br>(7 sec) | Sets the default retry interval for managed API connector triggers and actions. | | `Runtime.Backend.ApiWebhookOperation.DefaultRetryMaximumInterval` | `01:00:00` <br>(1 day) | Sets the maximum retry interval for managed API connector webhook triggers and actions. |
The following settings work only for workflows that start with a recurrence-base
| `Runtime.Backend.Operation.MaximumRetryInterval` | `01:00:00:01` <br>(1 day and 1 sec) | Sets the maximum interval in the retry policy definition for a workflow operation. | | `Runtime.Backend.Operation.MinimumRetryInterval` | `00:00:05` <br>(5 sec) | Sets the minimum interval in the retry policy definition for a workflow operation. |
+### Limitations
+
+- Maximum content size
+
+ By default, built-in triggers, such as HTTP or Request, are limited to the message size described in [Limits and configuration reference - Messages](logic-apps-limits-and-config.md#messages). To handle files larger than the limit, try uploading your content as a blob to [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), and then get your content using the [Azure Blob connector](../connectors/connectors-create-api-azureblobstorage.md).
+ <a name="manage-host-settings"></a> ## Manage host settings - host.json
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 10/30/2023 Last updated : 01/09/2024 # Limits and configuration reference for Azure Logic Apps > For Power Automate, review [Limits and configuration in Power Automate](/power-automate/limits-and-config).
-This reference guide describes the limits and configuration information for Azure Logic Apps and related resources. Based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows, you choose whether to create a Consumption logic app workflow that runs in *multi-tenant* Azure Logic Apps or an integration service environment (ISE). Or, create a Standard logic app workflow that runs in *single-tenant* Azure Logic Apps or an App Service Environment (v3 - Windows plans only).
+This reference guide describes the limits and configuration information for Azure Logic Apps and related resources. Based on your scenario, solution requirements, the capabilities that you want, and the environment where you want to run your workflows, you choose whether to create a Consumption logic app workflow that runs in *multitenant* Azure Logic Apps or an integration service environment (ISE). Or, create a Standard logic app workflow that runs in *single-tenant* Azure Logic Apps or an App Service Environment (v3 - Windows plans only).
> [!NOTE] > Many limits are the same across the available environments where Azure Logic Apps runs, but differences are noted where they exist.
-The following table briefly summarizes differences between a Consumption logic app and a Standard logic app. You'll also learn how single-tenant Azure Logic Apps compares to multi-tenant Azure Logic Apps and an ISE for deploying, hosting, and running your logic app workflows.
+The following table briefly summarizes differences between a Consumption logic app and a Standard logic app. You'll also learn how single-tenant Azure Logic Apps compares to multitenant Azure Logic Apps and an ISE for deploying, hosting, and running your logic app workflows.
[!INCLUDE [Logic app resource type and environment differences](../../includes/logic-apps-resource-environment-differences-table.md)]
The following tables list the values for a single workflow definition:
The following table lists the values for a single workflow run:
-| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+| Name | Multitenant | Single-tenant | Integration service environment | Notes |
||--|||-| | Run history retention in storage | 90 days | 90 days <br>(Default) | 366 days | The amount of time to keep a workflow's run history in storage after a run starts. <br><br>**Note**: If the workflow's run duration exceeds the retention limit, this run is removed from the run history in storage. If a run isn't immediately removed after reaching the retention limit, the run is removed within 7 days. <br><br>Whether a run completes or times out, run history retention is always calculated by using the run's start time and the current limit specified in the workflow setting, [**Run history retention in days**](#change-retention). No matter the previous limit, the current limit is always used for calculating retention. <br><br>For more information, review [Change duration and run history retention in storage](#change-retention). | | Run duration | 90 days | - Stateful workflow: 90 days <br>(Default) <br><br>- Stateless workflow: 5 min <br>(Default) | 366 days | The amount of time that a workflow can continue running before forcing a timeout. The run duration is calculated by using a run's start time and the limit that's specified in the workflow setting, [**Run history retention in days**](#change-duration) at that start time. <br><br>**Important**: Make sure the run duration value is always less than or equal to the run history retention in storage value. Otherwise, run histories might be deleted before the associated jobs are complete. <br><br>For more information, review [Change run duration and history retention in storage](#change-duration). |
If a run's duration exceeds the current run history retention limit, the run is
For Consumption logic app workflows, the same setting controls the maximum number of days that a workflow can run and for keeping run history in storage.
-* In multi-tenant Azure Logic Apps, the 90-day default limit is the same as the maximum limit. You can only decrease this value.
+* In multitenant Azure Logic Apps, the 90-day default limit is the same as the maximum limit. You can only decrease this value.
* In an ISE, you can decrease or increase the 90-day default limit.
The following table lists the values for a single workflow run:
The following table lists the values for a **For each** loop:
-| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+| Name | Multitenant | Single-tenant | Integration service environment | Notes |
||--|||-| | Array items | 100,000 items | - Stateful workflow: 100,000 items <br>(Default) <br><br>- Stateless workflow: 100 items <br>(Default) | 100,000 items | The number of array items that a **For each** loop can process. <br><br>To filter larger arrays, you can use the [query action](logic-apps-perform-data-operations.md#filter-array-action). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Concurrent iterations | Concurrency off: 20 <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <br>(Default) <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | The number of **For each** loop iterations that can run at the same time, or in parallel. <br><br>To change this value in the multi-tenant service, see [Change **For each** concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-for-each-concurrency) or [Run **For each** loops sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-for-each). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Concurrent iterations | Concurrency off: 20 <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <br>(Default) <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | Concurrency off: 20 <br><br>Concurrency on: <br><br>- Default: 20 <br>- Min: 1 <br>- Max: 50 | The number of **For each** loop iterations that can run at the same time, or in parallel. <br><br>To change this value in multitenant Azure Logic Apps, see [Change **For each** concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-for-each-concurrency) or [Run **For each** loops sequentially](logic-apps-workflow-actions-triggers.md#sequential-for-each). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
<a name="until-loop"></a>
The following table lists the values for a **For each** loop:
The following table lists the values for an **Until** loop:
-| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+| Name | Multitenant | Single-tenant | Integration service environment | Notes |
||--|||-|
-| Iterations | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | Stateful workflow: <br><br>- Default: 60 <br>- Min: 1 <br>- Max: 5,000 <br><br>Stateless workflow: <br><br>- Default: 60 <br>- Min: 1 <br>- Max: 100 | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | The number of cycles that an **Until** loop can have during a workflow run. <br><br>To change this value in the multi-tenant service, in the **Until** loop shape, select **Change limits**, and specify the value for the **Count** property. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Timeout | Default: PT1H (1 hour) | Stateful workflow: PT1H (1 hour) <br><br>Stateless workflow: PT5M (5 min) | Default: PT1H (1 hour) | The amount of time that the **Until** loop can run before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met. <br><br>To change this value in the multi-tenant service, in the **Until** loop shape, select **Change limits**, and specify the value for the **Timeout** property. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Iterations | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | Stateful workflow: <br><br>- Default: 60 <br>- Min: 1 <br>- Max: 5,000 <br><br>Stateless workflow: <br><br>- Default: 60 <br>- Min: 1 <br>- Max: 100 | - Default: 60 <br>- Min: 1 <br>- Max: 5,000 | The number of cycles that an **Until** loop can have during a workflow run. <br><br>To change this value in multitenant Azure Logic Apps, in the **Until** loop shape, select **Change limits**, and specify the value for the **Count** property. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Timeout | Default: PT1H (1 hour) | Stateful workflow: PT1H (1 hour) <br><br>Stateless workflow: PT5M (5 min) | Default: PT1H (1 hour) | The amount of time that the **Until** loop can run before exiting and is specified in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601). The timeout value is evaluated for each loop cycle. If any action in the loop takes longer than the timeout limit, the current cycle doesn't stop. However, the next cycle doesn't start because the limit condition isn't met. <br><br>To change this value in multitenant Azure Logic Apps, in the **Until** loop shape, select **Change limits**, and specify the value for the **Timeout** property. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
<a name="concurrency-debatching"></a> ### Concurrency and debatching
-| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
+| Name | Multitenant | Single-tenant | Integration service environment | Notes |
||--|||-|
-| Trigger - concurrent runs | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 100 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | The number of concurrent runs that a trigger can start at the same time, or in parallel. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <br><br>To change this value in the multi-tenant service, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
-| Maximum waiting runs | Concurrency off: <br><br>- Min: 1 run <br><br>- Max: 50 runs <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 100 runs | Concurrency off: <br><br>- Min: 1 run <br>(Default) <br><br>- Max: 50 runs <br>(Default) <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 200 runs <br>(Default) | Concurrency off: <br><br>- Min: 1 run <br><br>- Max: 50 runs <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 100 runs | The number of workflow instances that can wait to run when your current workflow instance is already running the maximum concurrent instances. <br><br>To change this value in the multi-tenant service, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Trigger - concurrent runs | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 100 <br>- Min: 1 <br>- Max: 100 | Concurrency off: Unlimited <br><br>Concurrency on (irreversible): <br><br>- Default: 25 <br>- Min: 1 <br>- Max: 100 | The number of concurrent runs that a trigger can start at the same time, or in parallel. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items for [debatching arrays](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch). <br><br>To change this value in multitenant Azure Logic Apps, see [Change trigger concurrency limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-trigger-concurrency) or [Trigger instances sequentially](../logic-apps/logic-apps-workflow-actions-triggers.md#sequential-trigger). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
+| Maximum waiting runs | Concurrency off: <br><br>- Min: 1 run <br><br>- Max: 50 runs <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 100 runs | Concurrency off: <br><br>- Min: 1 run <br>(Default) <br><br>- Max: 50 runs <br>(Default) <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 200 runs <br>(Default) | Concurrency off: <br><br>- Min: 1 run <br><br>- Max: 50 runs <br><br>Concurrency on: <br><br>- Min: 10 runs plus the number of concurrent runs <br><br>- Max: 100 runs | The number of workflow instances that can wait to run when your current workflow instance is already running the maximum concurrent instances. <br><br>To change this value in multitenant Azure Logic Apps, see [Change waiting runs limit](../logic-apps/logic-apps-workflow-actions-triggers.md#change-waiting-runs). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
| **SplitOn** items | Concurrency off: 100,000 items <br><br>Concurrency on: 100 items | Concurrency off: 100,000 items <br><br>Concurrency on: 100 items | Concurrency off: 100,000 items <br>(Default) <br><br>Concurrency on: 100 items <br>(Default) | For triggers that return an array, you can specify an expression that uses a **SplitOn** property that [splits or debatches array items into multiple workflow instances](../logic-apps/logic-apps-workflow-actions-triggers.md#split-on-debatch) for processing, rather than use a **For each** loop. This expression references the array to use for creating and running a workflow instance for each array item. <br><br>**Note**: When concurrency is turned on, the **SplitOn** limit is reduced to 100 items. | <a name="throughput-limits"></a>
The following table lists the values for an **Until** loop:
The following table lists the values for a single workflow definition:
-| Name | Multi-tenant | Single-tenant | Notes |
+| Name | Multitenant | Single-tenant | Notes |
||--||-|
-| Action - Executions per 5-minute rolling interval | Default: 100,000 executions <br>- High throughput mode: 300,000 executions | None | In the multi-tenant service, you can raise the default value to the maximum value for your workflow. For more information, see [Run in high throughput mode](#run-high-throughput-mode), which is in preview. Or, you can [distribute the workload across more than one workflow](handle-throttling-problems-429-errors.md#logic-app-throttling) as necessary. |
+| Action - Executions per 5-minute rolling interval | Default: 100,000 executions <br>- High throughput mode: 300,000 executions | None | In multitenant Azure Logic Apps, you can raise the default value to the maximum value for your workflow. For more information, see [Run in high throughput mode](#run-high-throughput-mode), which is in preview. Or, you can [distribute the workload across more than one workflow](handle-throttling-problems-429-errors.md#logic-app-throttling) as necessary. |
| Action - Concurrent outbound calls | ~2,500 calls | None | You can reduce the number of concurrent requests or reduce the duration as necessary. |
-| Managed connector throttling | Throttling limit varies based on connector | Throttling limit varies based on connector | For multi-tenant, review [each managed connector's technical reference page](/connectors/connector-reference/connector-reference-logicapps-connectors). <br><br>For more information about handling connector throttling, review [Handle throttling problems ("429 - Too many requests" errors)](handle-throttling-problems-429-errors.md#connector-throttling). |
+| Managed connector throttling | Throttling limit varies based on connector | Throttling limit varies based on connector | For multitenant, review [each managed connector's technical reference page](/connectors/connector-reference/connector-reference-logicapps-connectors). <br><br>For more information about handling connector throttling, review [Handle throttling problems ("429 - Too many requests" errors)](handle-throttling-problems-429-errors.md#connector-throttling). |
| Runtime endpoint - Concurrent inbound calls | ~1,000 calls | None | You can reduce the number of concurrent requests or reduce the duration as necessary. | | Runtime endpoint - Read calls per 5 min | 60,000 read calls | None | This limit applies to calls that get the raw inputs and outputs from a workflow's run history. You can distribute the workload across more than one workflow as necessary. | | Runtime endpoint - Invoke calls per 5 min | 45,000 invoke calls | None | You can distribute workload across more than one workflow as necessary. |
The following table lists the values for a single workflow definition:
<a name="run-high-throughput-mode"></a>
-### Run in high throughput mode
+## Scale for high throughput
-For a single workflow definition, the number of actions that run every 5 minutes has a [default limit](../logic-apps/logic-apps-limits-and-config.md#throughput-limits). To raise the default value to the [maximum value](../logic-apps/logic-apps-limits-and-config.md#throughput-limits) for your workflow, which is three times the default value, you can enable high throughput mode, which is in preview. Or, you can [distribute the workload across more than one workflow](../logic-apps/handle-throttling-problems-429-errors.md#logic-app-throttling) as necessary.
+### [Standard](#tab/standard)
-#### [Portal (multi-tenant service)](#tab/azure-portal)
+Single-tenant Azure Logic Apps uses storage and compute as the primary resources to run your Standard logic app workflows.
+
+#### Storage
+
+Stateful workflows use Azure Table storage and Azure Blob storage for persisting data during runtime and for maintaining run histories. These workflows also use Azure Queues for scheduling. A single storage account supports a substantial number of requests, with rates of up to 2,000 requests per second per partition and 20,000 requests per second at the account level. Beyond these thresholds, request rates are subject to throttling. For storage scalability limits, see [Targets for data operations](../storage/tables/storage-performance-checklist.md#targets-for-data-operations).
+
+Although a single storage account can handle reasonably high throughput, as the workflow execution rate increases, you might encounter partition level throttling or account level throttling. To ensure smooth operations, make sure that you understand the possible limitations and ways that you can address them.
+
+##### Share workload across multiple workflows
+
+Single-tenant Azure Logic Apps minimizes partition level throttling by distributing storage transactions across multiple partitions. However, to improve distribution and mitigate partition level throttling, [distribute the workload across multiple workflows](handle-throttling-problems-429-errors.md#logic-app-throttling), rather than a single workflow.
+
+##### Share workload across multiple storage accounts
+
+ If your logic app's workflows require high throughput, use multiple storage accounts, rather than a single account. You can significantly increase throughput by distributing your logic app's workload across multiple storage accounts, up to a limit of 32 accounts. To determine the number of storage accounts that you need, use the general guideline of ~100,000 action executions per minute, per storage account. While this estimate works well for most scenarios, the number of actions might be lower if your workflow actions are compute-heavy, for example, a query action that processes large data arrays. Make sure that you perform load testing and tune your solution before you use it in production.
+
+ To enable using multiple storage accounts, follow these steps before you create your Standard logic app. Otherwise, if you change the settings after creation, you might experience data loss or not achieve the necessary scalability.
+
+ 1. [Create the storage accounts](../storage/common/storage-account-create.md?tabs=azure-portal) that you want to use. Save the connection string for each storage account.
+
+ 1. Find and open your Standard logic app, and then [edit your logic app's host settings (**host.json** file)](edit-app-settings-host-settings.md?tabs=azure-portal#manage-host-settingshostjson) to include the following **`extensions`** object, which contains the **`workflow`** and **`settings`** objects with the **Runtime.ScaleUnitsCount** setting:
+
+ ```json
+ "extensions": {
+ "workflow": {
+ "settings": {
+ "Runtime.ScaleUnitsCount": "<storage-accounts-number>"
+ }
+ }
+ }
+ ```
+
+ The following example specifies **3** as the number of storage accounts:
+
+ ```json
+ {
+ "version": "2.0",
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle.Workflows",
+ "version": "[1.*, 2.0.0)"
+ },
+ "extensions": {
+ "workflow": {
+ "settings": {
+ "Runtime.ScaleUnitsCount": "3"
+ }
+ }
+ }
+ }
+ ```
+
+ 1. [Edit your logic app's application configuration settings (**local.settings.json**)](edit-app-settings-host-settings.md?tabs=azure-portal#manage-app-settingslocalsettingsjson) to add one app setting per storage account, named **CloudStorageAccount.Workflows.ScaleUnitsDataStorage.CU\<*storage-account-number*>.ConnectionString**, with the corresponding storage account connection string as the value. The storage account number is a two-digit value that starts at **`00`** and goes up to the number of storage accounts minus 1, for example (a combined **local.settings.json** sketch follows these steps):
+
+ | App setting name | Value |
+ ||-|
+ | **CloudStorageAccount.Workflows.ScaleUnitsDataStorage.CU00.ConnectionString** | `<connection-string-1>` |
+ | **CloudStorageAccount.Workflows.ScaleUnitsDataStorage.CU01.ConnectionString** | `<connection-string-2>` |
+ | **CloudStorageAccount.Workflows.ScaleUnitsDataStorage.CU02.ConnectionString** | `<connection-string-3>` |
+
+ 1. In your logic app's application configuration settings, update the **AzureWebJobsStorage** setting value with the same connection string that's in the **CloudStorageAccount.Workflows.ScaleUnitsDataStorage.CU00.ConnectionString** setting.
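Combining the previous two steps, a **local.settings.json** file for three storage accounts might look like the following sketch. The connection string placeholders are the same ones shown in the table, and any other settings already in your file stay as they are:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<connection-string-1>",
    "CloudStorageAccount.Workflows.ScaleUnitsDataStorage.CU00.ConnectionString": "<connection-string-1>",
    "CloudStorageAccount.Workflows.ScaleUnitsDataStorage.CU01.ConnectionString": "<connection-string-2>",
    "CloudStorageAccount.Workflows.ScaleUnitsDataStorage.CU02.ConnectionString": "<connection-string-3>"
  }
}
```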
+
+#### Compute
+
+A Standard logic app runs by using one of the [available compute plans](logic-apps-pricing.md#standard-pricing-tiers), which provide different levels of virtual CPU and memory, or by using an App Service Environment v3, which provides more compute options.
+
+Single-tenant Azure Logic Apps dynamically scales to effectively handle increasing loads. Your logic app uses the following primary factors to determine whether to scale.
+
+> [!NOTE]
+>
+> For a Standard logic app in an App Service Environment v3, dynamic scaling isn't available.
+> You must set scaling rules on the associated App Service Plan. As a commonly used scaling rule,
+> you can use the CPU metric, and scale your App Service Plan to keep the virtual CPU between 50-70%.
+> For more information, see [Get started with autoscale in Azure](../azure-monitor/autoscale/autoscale-get-started.md).
+
+- Trigger
+
+ To determine scaling requirements, the scaler analyzes the trigger that starts each workflow in your logic app. For example, for a workflow with a Service Bus trigger, if the queue length continuously grows, the scaler takes action to add worker instances, which enable processing more messages. Likewise, for a workflow with a Request trigger, if the request latency experiences an upward trend, the scaler increases the number of worker instances to distribute the request load more efficiently. For more information about worker instances, see [Azure Logic Apps (Standard) - Runtime Deep Dive](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564).
+
+- Workflow job execution delay
+
+ At runtime, workflow actions are divided into individual jobs that are queued for execution. Job dispatchers regularly poll the job queue to retrieve and execute these jobs. However, if compute capacity is insufficient to pick up these jobs, they stay in the queue longer, resulting in increased execution delays. The scaler monitors this situation and makes scaling decisions to keep the execution delays under control. For more information about how the runtime schedules and runs jobs, see [Azure Logic Apps (Standard) - Runtime Deep Dive](https://techcommunity.microsoft.com/t5/azure-integration-services-blog/azure-logic-apps-running-anywhere-runtime-deep-dive/ba-p/1835564).
+
+ The scaler also considers the minimum and maximum worker instance count configuration to determine whether to make scaling decisions, such as adding, removing, or maintaining the current number of worker instances. Typically, the scaler makes these decisions at intervals of approximately 15-30 seconds. So, consider this ramp-up time and its impact on your logic app's scaling speed to effectively handle peak loads. For example, if your workload requires scaling your logic app from just 1 worker instance to 100 worker instances, the ramp-up alone might take 25-50 minutes. Single-tenant Azure Logic Apps scaling shares the same [Azure Functions scaling infrastructure](../azure-functions/event-driven-scaling.md).
+
+##### Configure your logic app compute for faster scaling
+
+- Share workload across multiple logic apps.
+
+ Each logic app can scale independently, so distributing your workload across more than one logic app can significantly accelerate the scaling speed. For example, two logic apps can scale to twice the number of worker instances in the same timeframe as a single logic app. By splitting your workload across multiple apps, you can effectively multiply the scalability and achieve faster scaling results.
+
+- Use prewarmed instances.
+
+ If your scenario requires quicker ramp-up time, consider using prewarmed instances. If your peak load times are deterministic, you can use an automation task to adjust these prewarmed instances on a schedule. For more information, see [Manage Azure resources and monitor costs by creating automation tasks (preview)](create-automation-tasks-azure-resources.md).
+
+### [Consumption](#tab/consumption)
+
+Multitenant Azure Logic Apps has a [default limit](#throughput-limits) on the number of actions that run every 5 minutes. To raise the default value to the [maximum value](#throughput-limits), you can enable high throughput mode, which is in preview. Or, [distribute the workload across multiple logic apps and workflows](handle-throttling-problems-429-errors.md#logic-app-throttling), rather than rely on a single logic app and workflow.
+
+#### Enable high throughput in the portal
1. In the Azure portal, on your logic app's menu, under **Settings**, select **Workflow settings**.
For a single workflow definition, the number of actions that run every 5 minutes
![Screenshot that shows logic app menu in Azure portal with "Workflow settings" and "High throughput" set to "On".](./media/logic-apps-limits-and-config/run-high-throughput-mode.png)
-#### [Resource Manager Template](#tab/azure-resource-manager)
+#### Enable high throughput in a Resource Manager template
To enable this setting in an ARM template for deploying your logic app, in the `properties` object for your logic app's resource definition, add the `runtimeConfiguration` object with the `operationOptions` property set to `OptimizedForHighThroughput`:
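As a hedged sketch only (the parameter names, API version, and surrounding template are assumptions rather than the article's exact snippet, and the workflow definition is omitted), the resource definition could look like this:

```json
{
  "type": "Microsoft.Logic/workflows",
  "apiVersion": "2019-05-01",
  "name": "[parameters('logicAppName')]",
  "location": "[parameters('location')]",
  "properties": {
    "state": "Enabled",
    "runtimeConfiguration": {
      "operationOptions": "OptimizedForHighThroughput"
    },
    "definition": {}
  }
}
```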
The following table lists the retry policy limits for a trigger or action, based
The following table lists the values for a single workflow definition:
-| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
-||--|||-|
+| Name | Multitenant | Single-tenant | Integration service environment | Notes |
+||-|||-|
| Variables per workflow | 250 variables | 250 variables <br>(Default) | 250 variables || | Variable - Maximum content size | 104,857,600 characters | Stateful workflow: 104,857,600 characters <br>(Default) <br><br>Stateless workflow: 1,024 characters <br>(Default) | 104,857,600 characters | To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | | Variable (Array type) - Maximum number of array items | 100,000 items | 100,000 items <br>(Default) | Premium SKU: 100,000 items <br><br>Developer SKU: 5,000 items | To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
By default, the HTTP action and APIConnection actions follow the [standard async
> [!NOTE] > For the **Logic App (Standard)** resource type in the single-tenant service, stateless workflows can only run *synchronously*.
-| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
-||--|||-|
+| Name | Multitenant | Single-tenant | Integration service environment | Notes |
+||-|||-|
| Outbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | 240 sec <br>(4 min) | Examples of outbound requests include calls made by the HTTP trigger or action. <br><br>**Tip**: For longer running operations, use an [asynchronous polling pattern](../logic-apps/logic-apps-create-api-app.md#async-pattern) or an ["Until" loop](../logic-apps/logic-apps-workflow-actions-triggers.md#until-action). To work around timeout limits when you call another workflow that has a [callable endpoint](logic-apps-http-endpoint.md), you can use the built-in Azure Logic Apps action instead, which you can find in the designer's operation picker under **Built-in**. <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | | Inbound request | 120 sec <br>(2 min) | 235 sec <br>(3.9 min) <br>(Default) | 240 sec <br>(4 min) | Examples of inbound requests include calls received by the Request trigger, HTTP Webhook trigger, and HTTP Webhook action. <br><br>**Note**: For the original caller to get the response, all steps in the response must finish within the limit unless you call another nested workflow. For more information, see [Call, trigger, or nest logic apps](../logic-apps/logic-apps-http-endpoint.md). <br><br>To change the default limit in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
By default, the HTTP action and APIConnection actions follow the [standard async
### Request trigger and webhook trigger size limits
-| Name | Multi-tenant | Single-tenant | Notes |
-||--||-|
-| Request trigger (inbound) and webhook-based triggers - Content size limit per 5-minute rolling interval per workflow | 3,145,728 KB | None | This limit applies only to the content size for inbound requests received by the Request trigger or any webhook trigger. <br><br>For example, suppose the backend has 100 workers. Each worker has a limit of 31,457,280 bytes, which is the result from dividing 3,145,728,000 bytes by 100 workers. To avoid experiencing premature throttling for the Request trigger, use a new HTTP client for each request, which helps evenly distribute the calls across all nodes. For a webhook trigger, you might have to use multiple workflows, which splits the load and avoids throttling. |
+| Name | Multitenant | Single-tenant | Notes |
+||-||-|
+| Request trigger (inbound) and webhook-based triggers - Content size limit per 5-minute rolling interval per workflow | 3,145,728 KB | None | This limit applies only to the content size for inbound requests received by the Request trigger or any webhook trigger. <br><br>For example, suppose the backend has 100 workers. Each worker has a limit of 31,457,280 bytes, which is the result from dividing 3,145,728,000 bytes by 100 workers. To avoid experiencing premature throttling for the Request trigger, use a new HTTP client for each request, which helps evenly distribute the calls across all nodes. For a webhook trigger, you might have to use multiple workflows, which split the load and avoid throttling. |
<a name="message-size-limits"></a> ### Messages
-| Name | Chunking enabled | Multi-tenant | Single-tenant | Integration service environment | Notes |
-|||--|-||-|
+| Name | Chunking enabled | Multitenant | Single-tenant | Integration service environment | Notes |
+|||-|-||-|
| Content download - Maximum number of requests | Yes | 1,000 requests | 1,000 requests <br>(Default) | 1,000 requests || | Message size | No | 100 MB | 100 MB | 200 MB | To work around this limit, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). However, some connectors and APIs don't support chunking or even the default limit. <br><br>- Connectors such as AS2, X12, and EDIFACT have their own [B2B message limits](#b2b-protocol-limits). <br><br>- ISE connectors use the ISE limit, not the non-ISE connector limits. <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). | | Message size per action | Yes | 1 GB | 1,073,741,824 bytes <br>(1 GB) <br>(Default) | 5 GB | This limit applies to actions that either natively support chunking or let you enable chunking in their runtime configuration. <br><br>If you're using an ISE, the Azure Logic Apps engine supports this limit, but connectors have their own chunking limits up to the engine limit, for example, see the [Azure Blob Storage connector's API reference](/connectors/azureblob/). For more information about chunking, see [Handle large messages with chunking](../logic-apps/logic-apps-handle-large-messages.md). <br><br>To change the default value in the single-tenant service, review [Edit host and app settings for logic apps in single-tenant Azure Logic Apps](edit-app-settings-host-settings.md). |
The following table lists the values for a single workflow definition:
The following table lists the values for a single workflow definition:
-| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
-||--|||-|
+| Name | Multitenant | Single-tenant | Integration service environment | Notes |
+||-|||-|
| Maximum number of code characters | 1,024 characters | 100,000 characters | 1,024 characters | To use the higher limit, create a **Logic App (Standard)** resource, which runs in single-tenant Azure Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Standard)** extension](create-single-tenant-workflows-visual-studio-code.md). | | Maximum duration for running code | 5 sec | 15 sec | 1,024 characters | To use the higher limit, create a **Logic App (Standard)** resource, which runs in single-tenant Azure Logic Apps, either [by using the Azure portal](create-single-tenant-workflows-azure-portal.md) or [by using Visual Studio Code and the **Azure Logic Apps (Standard)** extension](create-single-tenant-workflows-visual-studio-code.md). |
The following table lists the values for a single workflow definition:
## Custom connector limits
-In multi-tenant Azure Logic Apps and the integration service environment only, you can create and use [custom managed connectors](/connectors/custom-connectors), which are wrappers around an existing REST API or SOAP API. In single-tenant Azure Logic Apps, you can create and use only [custom built-in connectors](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
+In multitenant Azure Logic Apps and the integration service environment only, you can create and use [custom managed connectors](/connectors/custom-connectors), which are wrappers around an existing REST API or SOAP API. In single-tenant Azure Logic Apps, you can create and use only [custom built-in connectors](https://techcommunity.microsoft.com/t5/integrations-on-azure/azure-logic-apps-running-anywhere-built-in-connector/ba-p/1921272).
The following table lists the values for custom connectors:
-| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
-||--|||-|
+| Name | Multitenant | Single-tenant | Integration service environment | Notes |
+||-|||-|
| Custom connectors | 1,000 per Azure subscription | Unlimited | 1,000 per Azure subscription || | APIs per service | SOAP-based: 50 | Not applicable | SOAP-based: 50 || | Parameters per API | SOAP-based: 50 | Not applicable | SOAP-based: 50 ||
The following tables list the values for the number of artifacts limited to each
The following table lists the message size limits that apply to B2B protocols:
-| Name | Multi-tenant | Single-tenant | Integration service environment | Notes |
-||--|||-|
+| Name | Multitenant | Single-tenant | Integration service environment | Notes |
+||-|||-|
| AS2 | v2 - 100 MB<br>v1 - 25 MB | Unavailable | v2 - 200 MB <br>v1 - 25 MB | Applies to decode and encode | | X12 | 50 MB | Unavailable | 50 MB | Applies to decode and encode | | EDIFACT | 50 MB | Unavailable | 50 MB | Applies to decode and encode |
For Azure Logic Apps to receive incoming communication through your firewall, yo
> - **Office 365**: The return caller is actually the Office 365 connector. You can specify the managed connector outbound > IP address prefixes for each region, or optionally, you can use the **AzureConnectors** service tag for these managed connectors. >
-> - **SAP**: The return caller depends on whether the deployment environment is either multi-tenant Azure or ISE.
-> In the multi-tenant environment, the on-premises data gateway makes the call back to the Azure Logic Apps service.
+> - **SAP**: The return caller depends on whether the deployment environment is either multitenant Azure or ISE.
+> In the multitenant environment, the on-premises data gateway makes the call back to the Azure Logic Apps service.
> In an ISE, the SAP connector makes the call back to Azure Logic Apps.
-<a name="multi-tenant-inbound"></a>
+<a name="multitenant-inbound"></a>
-#### Multi-tenant - Inbound IP addresses
+#### Multitenant - Inbound IP addresses
| Region | Azure Logic Apps IP | |--||
If your workflow also uses any [managed connectors](../connectors/managed.md), s
* [Adjust communication settings for the on-premises data gateway](/data-integration/gateway/service-gateway-communication) * [Configure proxy settings for the on-premises data gateway](/data-integration/gateway/service-gateway-proxy)
-<a name="multi-tenant-outbound"></a>
+<a name="multitenant-outbound"></a>
-#### Multi-tenant - Outbound IP addresses
+#### Multitenant - Outbound IP addresses
This section lists the outbound IP addresses that Azure Logic Apps requires in your logic app's Azure region to communicate through your firewall. Also, if your workflow uses any managed connectors or custom connectors, your firewall has to allow traffic in your logic app's Azure region for [*all the managed connectors' outbound IP addresses*](/connectors/common/outbound-ip-addresses/#azure-logic-apps). If you have custom connectors that access on-premises resources through the on-premises data gateway resource in Azure, set up your *gateway installation* to allow access for the corresponding managed connector outbound IP addresses.
This section lists the outbound IP addresses that Azure Logic Apps requires in y
## Next steps
-* [Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
+* [Create an example Consumption logic app workflow in multitenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md)
* [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md)
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
Last updated 11/04/2022
-# Submit a training job in Studio (preview)
+# Submit a training job in Studio
There are many ways to create a training job with Azure Machine Learning. You can use the CLI (see [Train models (create jobs)](how-to-train-model.md)), the REST API (see [Train models with REST (preview)](how-to-train-with-rest.md)), or you can use the UI to directly create a training job. In this article, you'll learn how to use your own data and code to train a machine learning model with a guided experience for submitting training jobs in Azure Machine Learning studio.
There are many ways to create a training job with Azure Machine Learning. You ca
1. Select your subscription and workspace.
-* Navigate to the Azure Machine Learning Studio and enable the feature by clicking open the preview panel.
-[![Azure Machine Learning studio preview panel allowing users to enable preview features.](media/how-to-train-with-ui/preview-panel.png)](media/how-to-train-with-ui/preview-panel.png)
-
-* You may enter the job creation UI from the homepage. Click **Create new** and select **Job**.
-[![Azure Machine Learning studio homepage](media/how-to-train-with-ui/home-entry.png)](media/how-to-train-with-ui/home-entry.png)
+* You may enter the job creation UI from the homepage. Click **Create new** and select **Job**.
+[![Azure Machine Learning studio homepage](media/how-to-train-with-ui/unified-job-submission-home.png)](media/how-to-train-with-ui/unified-job-submission-home.png)
In this wizard, you can select your method of training, complete the rest of the submission wizard based on your selection, and submit the training job. Below we will walk through the wizard for running a custom script (command job).
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you see a list of your recent automated ML experiments, including th
Additional configurations|Description |
Primary metric| Main metric used for scoring your model. [Learn more about model metrics](how-to-configure-auto-train.md#primary-metric).
- Debug model via the Responsible AI dashboard | Generate a Responsible AI dashboard to do a holistic assessment and debugging of the recommended best model. This includes insights such as model explanations, fairness and performance explorer, data explorer, model error analysis. [Learn more about how you can generate a Responsible AI dashboard.](./how-to-responsible-ai-insights-ui.md). RAI Dashboard can only be run if 'Serverless' compute (preview) is specified in the experiment set-up step.
+ Enable ensemble stacking | Ensemble learning improves machine learning results and predictive performance by combining multiple models as opposed to using single models. [Learn more about ensemble models](concept-automated-ml.md#ensemble).
Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).
- Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you don't spend more time on the training job than necessary.
- Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job won't run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](how-to-configure-auto-train.md#multiple-child-runs-on-clusters).
+ Explain best model| Automatically shows explainability on the best model created by Automated ML.
+
1. (Optional) View featurization settings: if you choose to enable **Automatic featurization** in the **Additional configuration settings** form, default featurization techniques are applied. In the **View featurization settings**, you can change these defaults and customize accordingly. Learn how to [customize featurizations](#customize-featurization). ![Screenshot shows the Select task type dialog box with View featurization settings called out.](media/how-to-use-automated-ml-for-ml-models/view-featurization-settings.png)
+1. The **[Optional] Limits** form allows you to do the following.
+
+ | Option | Description |
+ ||--|
+ |**Max trials**| Maximum number of trials, each with a different combination of algorithm and hyperparameters, to try during the AutoML job. Must be an integer between 1 and 1000.
+ |**Max concurrent trials**| Maximum number of trial jobs that can be executed in parallel. Must be an integer between 1 and 1000.
+ |**Max nodes**| Maximum number of nodes this job can use from selected compute target.
+ |**Metric score threshold**| When this threshold value is reached for an iteration metric, the training job terminates. Keep in mind that meaningful models have a correlation greater than 0; otherwise, they're as good as guessing the average. The metric threshold should be between the bounds [0, 10].
+ |**Experiment timeout (minutes)**| Maximum time in minutes that the entire experiment is allowed to run. After this limit is reached, the system cancels the AutoML job, including all of its trials (child jobs).
+ |**Iteration timeout (minutes)**| Maximum time in minutes that each trial job is allowed to run. After this limit is reached, the system cancels the trial.
+ |**Enable early termination**| Select to end the job if the score is not improving in the short term.
1. The **[Optional] Validate and test** form allows you to do the following.
mariadb Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-business-continuity.md
Title: Business continuity - Azure Database for MariaDB
-description: Learn about business continuity (point-in-time restore, data center outage, geo-restore) when using Azure Database for MariaDB service.
+description: Learn about business continuity (point-in-time restore, datacenter outage, geo-restore) when you're using the Azure Database for MariaDB service.
Last updated 06/24/2022
[!INCLUDE [azure-database-for-mariadb-deprecation](includes/azure-database-for-mariadb-deprecation.md)]
-This article describes the capabilities that Azure Database for MariaDB provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user or application error affects data integrity, an Azure region has an outage, or your application requires maintenance.
+This article describes the capabilities that Azure Database for MariaDB provides for business continuity and disaster recovery. Learn about options for recovering from disruptive events that could cause data loss or cause your database and application to become unavailable. Learn what to do when a user error or application error affects data integrity, an Azure region has an outage, or your application needs maintenance.
-## Features that you can use to provide business continuity
+## Features for business continuity
-As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after the disruptive event - this is your Recovery Time Objective (RTO). You also need to understand the maximum amount of recent data updates (time interval) the application can tolerate losing when recovering after the disruptive event - this is your Recovery Point Objective (RPO).
+As you develop your business continuity plan, you need to understand your:
-Azure Database for MariaDB provides business continuity and disaster recovery features that include geo-redundant backups with the ability to initiate geo-restore, and deploying read replicas in a different region. Each has different characteristics for the recovery time and the potential data loss. With [Geo-restore](concepts-backup.md) feature, a new server is created using the backup data that is replicated from another region. The overall time it takes to restore and recover depends on the size of the database and the amount of logs to recover. The overall time to establish the server varies from few minutes to few hours. With [read replicas](concepts-read-replicas.md), transaction logs from the primary are asynchronously streamed to the replica. In the event of a primary database outage due to a zone-level or a region-level fault, failing over to the replica provides a shorter RTO and reduced data loss.
+- **Recovery time objective (RTO)**: The maximum acceptable time before the application fully recovers after a disruptive event.
+- **Recovery point objective (RPO)**: The maximum amount of recent data updates (time interval) that the application can tolerate losing when it's recovering after a disruptive event.
+
+Azure Database for MariaDB provides business continuity and disaster recovery features that include geo-redundant backups with the ability to initiate geo-restore, and deploying read replicas in another region. Each has different characteristics for the recovery time and the potential data loss.
+
+With [geo-restore](concepts-backup.md), Azure Database for MariaDB creates a new server by using the backup data that's replicated from another region. The overall time to restore and recover depends on the size of the database and the amount of log data to recover. The overall time to establish the server varies from a few minutes to a few hours.
+
+With [read replicas](concepts-read-replicas.md), transaction logs from the primary database are asynchronously streamed to a replica. If there's a primary database outage due to a zone-level or a region-level fault, failing over to the replica provides a shorter RTO and reduced data loss.
> [!NOTE]
-> The lag between the primary and the replica depends on the latency between the sites, the amount of data to be transmitted and most importantly on the write workload of the primary server. Heavy write workloads can generate significant lag.
+> The lag between the primary database and the replica depends on the latency between the sites, the amount of data to be transmitted, and (most important) the write workload of the primary server. Heavy write workloads can generate a significant lag.
>
-> Because of asynchronous nature of replication used for read-replicas, they **should not** be considered as a High Availability (HA) solution since the higher lags can mean higher RTO and RPO. Only for workloads where the lag remains smaller through the peak and non-peak times of the workload, read replicas can act as a HA alternative. Otherwise read replicas are intended for true read-scale for ready heavy workloads and for (Disaster Recovery) DR scenarios.
+> Because of the asynchronous nature of the replication that's used for read replicas, don't consider read replicas to be a high-availability solution. The higher lags can mean higher RTO and RPO. Read replicas can act as a high-availability alternative only for workloads where the lag remains smaller through the peak and off-peak times. Otherwise, read replicas are intended for true read scale for read-heavy workloads and for disaster recovery scenarios.
-The following table compares RTO and RPO in a **typical workload** scenario:
+The following table compares RTO and RPO in a *typical workload* scenario:
-| **Capability** | **Basic** | **General Purpose** | **Memory optimized** |
+| Capability | Basic | General purpose | Memory optimized |
| :: | :-: | :--: | :: |
-| Point in Time Restore from backup | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min| Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min | Any restore point within the retention period <br/> RTO - Varies <br/>RPO < 15 min |
-| Geo-restore from geo-replicated backups | Not supported | RTO - Varies <br/>RPO > 24 h | RTO - Varies <br/>RPO > 24 h |
-| Read replicas | RTO - Minutes* <br/>RPO < 5 min* | RTO - Minutes* <br/>RPO < 5 min*| RTO - Minutes* <br/>RPO < 5 min*|
+| Point-in-time restore from backup | Any restore point within the retention period <br/> RTO varies <br/>RPO is less than 15 minutes| Any restore point within the retention period <br/> RTO varies <br/>RPO is less than 15 minutes | Any restore point within the retention period <br/> RTO varies <br/>RPO is less than 15 minutes |
+| Geo-restore from geo-replicated backups | Not supported | RTO varies <br/>RPO is greater than 24 hours | RTO varies <br/>RPO is greater than 24 hours |
+| Read replicas | RTO is minutes <br/>RPO is less than 5 minutes | RTO is minutes <br/>RPO is less than 5 minutes| RTO is minutes <br/>RPO is less than 5 minutes|
-\* RTO and RPO **can be much higher** in some cases depending on various factors including latency between sites, the amount of data to be transmitted, and importantly primary database write workload.
+RTO and RPO *can be much higher* in some cases, depending on factors like latency between sites, the amount of data to be transmitted, and the primary database's write workload.
-## Recover a server after a user or application error
+## Recovery of a server after a user or application error
-You can use the service's backups to recover a server from various disruptive events. A user may accidentally delete some data, inadvertently drop an important table, or even drop an entire database. An application might accidentally overwrite good data with bad data due to an application defect, and so on.
+You can use the service's backups to recover a server from various disruptive events. For example, a user might accidentally delete some data, inadvertently drop an important table, or even drop an entire database. An application might accidentally overwrite good data with bad data because of an application defect.
-You can perform a point-in-time-restore to create a copy of your server to a known good point in time. This point in time must be within the backup retention period you have configured for your server. After the data is restored to the new server, you can either replace the original server with the newly restored server or copy the needed data from the restored server into the original server.
+You can perform a point-in-time-restore to create a copy of your server to a known good point in time. This point in time must be within the backup retention period that you configured for your server. After the data is restored to the new server, you can either replace the original server with the newly restored server or copy the needed data from the restored server to the original server.
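For example, a point-in-time restore to a new server can be scripted with the Azure CLI. The following is a minimal sketch; the server names, resource group, and timestamp are placeholders that you'd replace with your own values.

```azurecli
# Restore mydemoserver to a new server, by using its state as of a point in time
# that falls within the configured backup retention period.
az mariadb server restore \
  --resource-group myresourcegroup \
  --name mydemoserver-restored \
  --source-server mydemoserver \
  --restore-point-in-time "2024-01-05T13:10:00Z"
```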
> [!IMPORTANT]
-> Deleted servers can be restored only within **five days** of deletion after which the backups are deleted. The database backup can be accessed and restored only from the Azure subscription hosting the server. To restore a dropped server, refer [documented steps](howto-restore-dropped-server.md). To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage [management locks](../azure-resource-manager/management/lock-resources.md).
+> You can restore deleted servers only within *five days* of deletion. After five days, the backups are deleted. You can access and restore the database backup only from the Azure subscription that hosts the server. To restore a dropped server, refer to the [documented steps](howto-restore-dropped-server.md). To help protect server resources from accidental deletion or unexpected changes after deployment, administrators can use [management locks](../azure-resource-manager/management/lock-resources.md).
-## Recover from an Azure regional data center outage
+## Recovery from an Azure regional datacenter outage
-Although rare, an Azure data center can have an outage. When an outage occurs, it causes a business disruption that might only last a few minutes, but could last for hours.
+Although it's rare, an Azure datacenter can have an outage. When an outage occurs, it causes a business disruption that might last only a few minutes but could last for hours.
-One option is to wait for your server to come back online when the data center outage is over. This works for applications that can afford to have the server offline for some period of time, for example a development environment. When data center has an outage, you do not know how long the outage might last, so this option only works if you don't need your server for a while.
+One option is to wait for your server to come back online when the datacenter outage is over. When a datacenter has an outage, you don't know how long the outage might last. So this option works only for applications that can afford to have the server offline for some time (for example, a development environment).
## Geo-restore
-The geo-restore feature restores the server using geo-redundant backups. The backups are hosted in your server's [paired region](../availability-zones/cross-region-replication-azure.md). These backups are accessible even when the region your server is hosted in is offline. You can restore from these backups to any other region and bring your server back online. Learn more about geo-restore from the [backup and restore concepts article](concepts-backup.md).
+The geo-restore feature restores the server by using geo-redundant backups. The backups are hosted in your server's [paired region](../availability-zones/cross-region-replication-azure.md). These backups are accessible even when the region where your server is hosted is offline. You can restore from these backups to any other region and then bring your server back online. Learn more about geo-restore in the [article about backup and restore concepts](concepts-backup.md).
> [!IMPORTANT]
-> Geo-restore is only possible if you provisioned the server with geo-redundant backup storage. If you wish to switch from locally redundant to geo-redundant backups for an existing server, you must take a dump using mysqldump of your existing server and restore it to a newly created server configured with geo-redundant backups.
+> Geo-restore is possible only if you provisioned the server with geo-redundant backup storage. If you want to switch from locally redundant to geo-redundant backups for an existing server, you must generate a backup of your existing server by using [mysqldump](howto-migrate-dump-restore.md). Then, restore to a newly created server that's configured with geo-redundant backups.
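As an illustration, you can initiate a geo-restore with the Azure CLI. The following is a minimal sketch; the names, target region, and SKU are placeholders, and the command succeeds only if the source server was provisioned with geo-redundant backup storage.

```azurecli
# Geo-restore mydemoserver into a new server in another region.
az mariadb server georestore \
  --resource-group myresourcegroup \
  --name mydemoserver-georestored \
  --source-server mydemoserver \
  --location westus \
  --sku-name GP_Gen5_8
```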
## Cross-region read replicas
-You can use cross region read replicas to enhance your business continuity and disaster recovery planning. Read replicas are updated asynchronously using MySQL's binary log replication technology. Learn more about read replicas, available regions, and how to fail over from the [read replicas concepts article](concepts-read-replicas.md).
+You can use cross-region read replicas to enhance your planning for business continuity and disaster recovery. Read replicas are updated asynchronously through MySQL's replication technology for binary logs. Learn more about read replicas, available regions, and how to fail over in the [article about read replica concepts](concepts-read-replicas.md).
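For example, you can create a cross-region read replica with the Azure CLI. The following is a minimal sketch with placeholder names; the target region must support read replicas for your source server.

```azurecli
# Create a read replica of mydemoserver in another region for read scale-out and disaster recovery.
az mariadb server replica create \
  --resource-group myresourcegroup \
  --name mydemoserver-replica \
  --source-server mydemoserver \
  --location eastus
```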
## FAQ

### Where does Azure Database for MariaDB store customer data?
-By default, Azure Database for MariaDB doesn't move or store customer data out of the region it is deployed in. However, customers can optionally chose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create [cross-region read replica](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
+By default, Azure Database for MariaDB doesn't move or store customer data out of the region where it's deployed. However, you can optionally choose to enable [geo-redundant backups](concepts-backup.md#backup-redundancy-options) or create [cross-region read replicas](concepts-read-replicas.md#cross-region-replication) for storing data in another region.
## Next steps
- Learn more about the [automated backups in Azure Database for MariaDB](concepts-backup.md).
-- Learn how to restore using [the Azure portal](howto-restore-server-portal.md) or [the Azure CLI](howto-restore-server-cli.md).
+- Learn how to restore by using [the Azure portal](howto-restore-server-portal.md) or [the Azure CLI](howto-restore-server-cli.md).
- Learn about [read replicas in Azure Database for MariaDB](concepts-read-replicas.md).
mariadb Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-high-availability.md
Title: High availability - Azure Database for MariaDB
-description: This article provides information on high availability in Azure Database for MariaDB
+description: This article provides information on high availability in Azure Database for MariaDB.
Last updated 06/24/2022
[!INCLUDE [azure-database-for-mariadb-deprecation](includes/azure-database-for-mariadb-deprecation.md)]
-The Azure Database for MariaDB service provides a guaranteed high level of availability with the financially backed service level agreement (SLA) of [99.99%](https://azure.microsoft.com/support/legal/sla/MariaDB) uptime. Azure Database for MariaDB provides high availability during planned events such as user-initated scale compute operation, and also when unplanned events such as underlying hardware, software, or network failures occur. Azure Database for MariaDB can quickly recover from most critical circumstances, ensuring virtually no application down time when using this service.
+The Azure Database for MariaDB service is suitable for running mission-critical databases that require high uptime. It provides high availability during:
-Azure Database for MariaDB is suitable for running mission critical databases that require high uptime. Built on Azure architecture, the service has inherent high availability, redundancy, and resiliency capabilities to mitigate database downtime from planned and unplanned outages, without requiring you to configure any additional components.
+- Planned events, such as user-initiated scale compute operations.
+- Unplanned events, such as underlying hardware, software, or network failures.
+
+Azure Database for MariaDB provides a [financially backed service-level agreement](https://azure.microsoft.com/support/legal/sla/MariaDB) for uptime. Because the service is built on Azure architecture, you can take advantage of its capabilities for high availability, redundancy, and resiliency without configuring any additional components.
## Components in Azure Database for MariaDB
-| **Component** | **Description**|
+| Component | Description|
| | -- |
-| <b>MariaDB Database Server | Azure Database for MariaDB provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities facilitate operations such as scaling and database server recovery operation after an outage to happen in seconds. <br/> Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write ahead logs (ib_log) on Azure Storage ΓÇô which is attached to the database server. During the database [checkpoint](https://mariadb.com/kb/innodb-redo-log/#checkpoints) process, data pages from the database server memory are also flushed to the storage. |
-| <b>Remote Storage | All MariaDB physical data files and log files are stored on Azure Storage, which is architected to store three copies of data within a region to ensure data redundancy, availability, and reliability. The storage layer is also independent of the database server. It can be detached from a failed database server and reattached to a new database server within few seconds. Also, Azure Storage continuously monitors for any storage faults. If a block corruption is detected, it is automatically fixed by instantiating a new storage copy. |
-| <b>Gateway | The Gateway acts as a database proxy, routes all client connections to the database server. |
+| MariaDB database server | Azure Database for MariaDB provides security, isolation, resource safeguards, and fast restart capability for database servers. These capabilities facilitate operations such as scaling and database server recovery (in seconds) after an outage. <br/>Data modifications in the database server typically occur in the context of a database transaction. All database changes are recorded synchronously in the form of write-ahead logs (*ib_log* files) on Azure Storage, which is attached to the database server. During the database [checkpoint](https://mariadb.com/kb/innodb-redo-log/#checkpoints) process, data pages from the database server memory are also flushed to the storage. |
+| Remote storage | All MariaDB physical data files and log files are stored on Azure Storage, which stores three copies of data within a region to provide data redundancy, availability, and reliability. The storage layer is independent of the database server. It can be detached from a failed database server and reattached to a new database server in a few seconds. <br/>Azure Storage continuously monitors for any storage faults. If it detects a block corruption, it automatically fixes the problem by instantiating a new storage copy. |
+| Gateway | The gateway acts as a database proxy by routing all client connections to the database server. |
-## Planned downtime mitigation
+## Mitigation of planned downtime
-Azure Database for MariaDB is architected to provide high availability during planned downtime operations.
+The architecture of Azure Database for MariaDB provides high availability during planned downtime operations.
-![view of Elastic Scaling in Azure MariaDB](./media/concepts-high-availability/elastic-scaling-mariadb-server.png)
+![Diagram of elastic scaling in Azure Database for MariaDB.](./media/concepts-high-availability/elastic-scaling-mariadb-server.png)
-Here are some planned maintenance scenarios:
+Here are some scenarios for planned maintenance:
-| **Scenario** | **Description**|
+| Scenario | Description|
| | -- |
-| <b>Compute scale up/down | When the user performs compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then detached from the old database server and attached to the new database server. When the client application retries the connection, or tries to make a new connection, the Gateway directs the connection request to the new database server.|
-| <b>Scaling Up Storage | Scaling up the storage is an online operation and does not interrupt the database server.|
-| <b>New Software Deployment (Azure) | New features rollout or bug fixes automatically happen as part of serviceΓÇÖs planned maintenance. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
-| <b>Minor version upgrades | Azure Database for MariaDB automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. This would incur a short downtime in terms of seconds, and the database server is automatically restarted with the new minor version. For more information, refer to the [documentation](concepts-monitoring.md#planned-maintenance-notification), and also check your [portal](https://aka.ms/servicehealthpm).|
+| Compute scale-up or scale-down | When you perform a compute scale-up or scale-down operation, Azure Database for MariaDB provisions a new database server by using the scaled compute configuration. On the old database server, the service allows active checkpoints to finish, drains client connections, and cancels any uncommitted transactions. The service then shuts down the old database server. It detaches the storage from the old database server and attaches the storage to the new database server. When the client application retries the connection or tries to make a new connection, the gateway directs the connection request to the new database server.|
+| Scaling up storage | Scaling up the storage is an online operation and doesn't interrupt the database server.|
+| New software deployment (Azure) | Rollouts of new features or bug fixes automatically happen as part of the service's planned maintenance. For more information, see the [documentation](concepts-monitoring.md#planned-maintenance-notification) and check your [portal](https://aka.ms/servicehealthpm).|
+| Minor version upgrades | Azure Database for MariaDB automatically patches database servers to the minor version that Azure determines. Automatic patching happens as part of the service's planned maintenance. It incurs a short downtime in terms of seconds, and the database server is automatically restarted with the new minor version. For more information, see the [documentation](concepts-monitoring.md#planned-maintenance-notification) and check your [portal](https://aka.ms/servicehealthpm).|
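As a concrete illustration of the compute scale-up or scale-down scenario in the preceding table, you can trigger the operation with the Azure CLI. This is a sketch with placeholder names; the new SKU must be valid for your pricing tier.

```azurecli
# Scale the server to 8 vCores in the General Purpose tier.
# Active connections are drained from the old database server, and the gateway
# redirects new connections to the scaled database server.
az mariadb server update \
  --resource-group myresourcegroup \
  --name mydemoserver \
  --sku-name GP_Gen5_8
```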
-## Unplanned downtime mitigation
+## Mitigation of unplanned downtime
-Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. MariaDB engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime cannot be avoided, Azure Database for MariaDB mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
+Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware faults, network problems, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server.
-![view of High Availability in Azure MariaDB](./media/concepts-high-availability/availability-mariadb-server.png)
+The MariaDB engine performs the recovery operation by using write-ahead log and database files, and it opens the database server to allow clients to connect. Uncommitted transactions are lost, and the application must retry them.
-### Unplanned downtime: failure scenarios and service recovery
+Although you can't avoid unplanned downtime, Azure Database for MariaDB mitigates it by automatically performing recovery operations at both the database server and storage layers without requiring human intervention.
-Here are some failure scenarios and how Azure Database for MariaDB automatically recovers:
+![Diagram of high availability in Azure Database for MariaDB.](./media/concepts-high-availability/availability-mariadb-server.png)
-| **Scenario** | **Automatic recovery** |
-| - | - |
-| <B>Database server failure | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server through the Gateway. <br /> <br /> Applications using the MariaDB databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries, the Gateway transparently redirects the connection to the newly created database server. |
-| <B>Storage failure | Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in 3 copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. |
+### Unplanned downtime: Failure scenarios and service recovery
-Here are some failure scenarios that require user action to recover:
+Here are two failure scenarios and how Azure Database for MariaDB automatically recovers:
-| **Scenario** | **Recovery plan** |
+| Scenario | Automatic recovery |
| - | - |
-| <b> Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery (DR). (See [this article](howto-read-replicas-portal.md) about creating and managing read replicas for details). In the event of a region-level failure, you can manually promote the read replica configured on the other region to be your production database server. |
-| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [mysqldump](howto-migrate-dump-restore.md), and then use [restore](howto-migrate-dump-restore.md#restore-your-mariadb-database) to restore those tables into your database. |
+| Database server failure | If the database server is down because of an underlying hardware fault, Azure Database for MariaDB drops active connections and cancels any inflight transactions. The service automatically deploys a new database server and attaches the remote data storage to the new database server. After the database recovery is complete, clients can connect to the new database server through the gateway. <br />Applications that use the MariaDB databases need to be built in a way that they detect and retry dropped connections and failed transactions. When the application retries a connection, the gateway transparently redirects the connection to the newly created database server. |
+| Storage failure | Storage-related problems, such as a disk failure or a physical block corruption, don't affect applications. Because the data is stored in three copies, the surviving storage serves the copy of the data. Azure Database for MariaDB automatically corrects block corruptions. If a copy of data is lost, the service automatically creates a new copy of the data. |
+Here are failure scenarios that require user action to recover:
+
+| Scenario | Recovery plan |
+| - | - |
+| Region failure | Failure of a region is a rare event. However, if you need protection from a region failure, you can configure one or more read replicas in other regions for disaster recovery. For details, see [this article](howto-read-replicas-portal.md) about creating and managing read replicas. If a region-level failure happens, you can manually promote a read replica configured in another region to be your production database server. |
+| Logical/user error | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](concepts-backup.md). This action restores and recovers the data until the time just before the error occurred.<br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the tables via [mysqldump](howto-migrate-dump-restore.md), and then [restore](howto-migrate-dump-restore.md#restore-your-mariadb-database) those tables in your database. |
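To illustrate the logical/user error scenario in the preceding table, the following is a hedged sketch of exporting specific tables from a point-in-time-restored server and loading them back into the original server. The server names, admin user, database, and table names are placeholders, and your server's SSL enforcement settings might require additional connection options.

```bash
# Dump only the affected tables from the restored server...
mysqldump -h restoredserver.mariadb.database.azure.com -u myadmin@restoredserver -p \
  mydatabase orders order_items > recovered-tables.sql

# ...and then load them back into the original server.
mysql -h originalserver.mariadb.database.azure.com -u myadmin@originalserver -p \
  mydatabase < recovered-tables.sql
```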
## Summary
-Azure Database for MariaDB provides fast restart capability of database servers, redundant storage, and efficient routing from the Gateway. For additional data protection, you can configure backups to be geo-replicated, and also deploy one or more read replicas in other regions. With inherent high availability capabilities, Azure Database for MariaDB protects your databases from most common outages, and offers an industry leading, finance-backed [99.99% of uptime SLA](https://azure.microsoft.com/support/legal/sla/MariaDB). All these availability and reliability capabilities enable Azure to be the ideal platform to run your mission-critical applications.
+Azure Database for MariaDB has inherent high-availability capabilities to help protect your databases from common outages. It provides fast restart capability of database servers, redundant storage, and efficient routing from the gateway. For additional data protection, you can configure backups to be geo-replicated and deploy read replicas in other regions.
## Next steps
-- Learn about [Azure regions](../availability-zones/az-overview.md)
-- Learn about [handling transient connectivity errors](concepts-connectivity.md)
-- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md)
+- Learn about [Azure regions](../availability-zones/az-overview.md).
+- Learn about [handling transient connectivity errors](concepts-connectivity.md).
+- Learn how to [replicate your data with read replicas](howto-read-replicas-portal.md).
nat-gateway Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/troubleshoot-nat.md
Refer to the table below for which tools to use to validate NAT gateway connecti
### How to analyze outbound connectivity
-To analyze outbound traffic from NAT gateway, use NSG flow logs. NSG flow logs provide connection information for your virtual machines. The connection information contains the source IP and port and the destination IP and port and the state of the connection. The traffic flow direction and the size of the traffic in number of packets and bytes sent is also logged. The source IP and port specified in the NSG flow log will be that of the virtual machine and not of the NAT gateway.
+To analyze outbound traffic from NAT gateway, use NSG flow logs. NSG flow logs provide connection information for your virtual machines. The connection information includes the source IP and port, the destination IP and port, and the state of the connection. The traffic flow direction and the size of the traffic (in number of packets and bytes sent) are also logged. The source IP and port specified in the NSG flow log are for the virtual machine, not the NAT gateway.
* To learn more about NSG flow logs, see [NSG flow log overview](../network-watcher/network-watcher-nsg-flow-logging-overview.md).
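If flow logs aren't enabled for your NSG yet, a minimal Azure CLI sketch for turning them on might look like the following. The names are placeholders, and the storage account must exist in the same region as the NSG.

```azurecli
# Enable NSG flow logs so that outbound connections from the subnet's virtual machines are recorded.
az network watcher flow-log create \
  --resource-group myresourcegroup \
  --name myNsgFlowLog \
  --location eastus2 \
  --nsg myNSG \
  --storage-account mystorageaccount \
  --enabled true
```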
NAT gateway can't be deployed in a gateway subnet. A gateway subnet is used by a
### Can't attach NAT gateway to a subnet that contains a virtual machine NIC in a failed state
-When associating a NAT gateway to a subnet that contains a virtual machine network interface (NIC) in a failed state, you'll receive an error message indicating that this action can't be performed. You must first resolve the VM NIC failed state before you can attach a NAT gateway to the subnet.
+When associating a NAT gateway to a subnet that contains a virtual machine network interface (NIC) in a failed state, you receive an error message indicating that this action can't be performed. You must first resolve the VM NIC failed state before you can attach a NAT gateway to the subnet.
To get your virtual machine NIC out of a failed state, you can use one of the two following methods.
NAT gateway can't be associated with more than 16 public IP addresses. You can u
The following IP prefix sizes can be used with NAT gateway:
-* /28 (sixteen addresses)
+* /28 (16 addresses)
-* /29 (eight addresses)
+* /29 (8 addresses)
-* /30 (four addresses)
+* /30 (4 addresses)
-* /31 (two addresses)
+* /31 (2 addresses)
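For example, a /28 prefix (16 addresses) can be created and attached to a NAT gateway with the Azure CLI. This is a minimal sketch with placeholder names; both resources must be in the same region.

```azurecli
# Create a /28 public IP prefix (16 addresses).
az network public-ip prefix create \
  --resource-group myresourcegroup \
  --name myPublicIPPrefix \
  --length 28 \
  --location eastus2

# Create a NAT gateway that uses the prefix for outbound connectivity.
az network nat gateway create \
  --resource-group myresourcegroup \
  --name myNATgateway \
  --public-ip-prefixes myPublicIPPrefix \
  --location eastus2
```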
### IPv6 coexistence
-[NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway can't be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet, but will still only use IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources. See [Configure dual stack outbound connectivity with NAT gateway and public Load balancer](/azure/virtual-network/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer?tabs=dual-stack-outbound-portal) to learn how to provide IPv4 and IPv6 outbound connectivity from your dual stack subnet.
+[NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway can't be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet, but only uses IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources. See [Configure dual stack outbound connectivity with NAT gateway and public Load balancer](/azure/virtual-network/nat-gateway/tutorial-dual-stack-outbound-nat-load-balancer?tabs=dual-stack-outbound-portal) to learn how to provide IPv4 and IPv6 outbound connectivity from your dual stack subnet.
### Can't use basic SKU public IPs with NAT gateway
NAT gateway is a standard SKU resource and can't be used with basic SKU resource
NAT gateway is a [zonal resource](./nat-availability-zones.md) and can either be designated to a specific zone or to 'no zone'. When NAT gateway is placed in 'no zone', Azure places the NAT gateway into a zone for you, but you don't have visibility into which zone the NAT gateway is located in.
-NAT gateway can be used with public IP addresses designated to a specific zone, no zone, all zones (zone-redundant) depending on its own availability zone configuration. Follow guidance below:
+NAT gateway can be used with public IP addresses designated to a specific zone, no zone, all zones (zone-redundant) depending on its own availability zone configuration.
| NAT gateway availability zone designation | Public IP address / prefix designation that can be used |
|---|---|
NAT gateway can be used with public IP addresses designated to a specific zone,
>[!NOTE] >If you need to know the zone that your NAT gateway resides in, make sure to designate it to a specific availability zone.
+## More troubleshooting guidance
+
+If the issue you're experiencing isn't covered by this article, refer to the other NAT gateway troubleshooting articles:
+* [Troubleshoot outbound connectivity with NAT Gateway](/azure/nat-gateway/troubleshoot-nat-connectivity)
+* [Troubleshoot outbound connectivity with NAT Gateway and other Azure services](/azure/nat-gateway/troubleshoot-nat-and-azure-services)
+ ## Next steps We're always looking to improve the experience of our customers. If you're experiencing issues with NAT gateway that aren't listed or resolved by this article, submit feedback through GitHub via the bottom of this page. We'll address your feedback as soon as possible.
openshift Howto Deploy Java Jboss Enterprise Application Platform App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-jboss-enterprise-application-platform-app.md
# Quickstart: Deploy JBoss EAP on Azure Red Hat OpenShift using the Azure portal
-This article shows you how to quickly stand up JBoss EAP on Azure Red Hat OpenShift using the Azure portal. If you prefer manual step-by-step guidance for running JBoss EAP on Azure Red Hat OpenShift that doesn't utilize the automation enabled by the Azure portal, see [Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster](/azure/developer/java/ee/jboss-eap-on-aro).
+This article shows you how to quickly stand up JBoss EAP on Azure Red Hat OpenShift (ARO) using the Azure portal.
+
+This article uses the Azure Marketplace offer for JBoss EAP to accelerate your journey to ARO. The offer automatically provisions a number of resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the JBoss EAP Operator, and optionally a container image including JBoss EAP and your application using Source-to-Image (S2I). To see the offer, visit the [Azure portal](https://aka.ms/eap-aro-portal). If you prefer manual step-by-step guidance for running JBoss EAP on ARO that doesn't utilize the automation enabled by the offer, see [Deploy a Java application with Red Hat JBoss Enterprise Application Platform (JBoss EAP) on an Azure Red Hat OpenShift 4 cluster](/azure/developer/java/ee/jboss-eap-on-aro).
## Prerequisites
openshift Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-deploy-java-liberty-app.md
# Deploy IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift
-This article shows you how to quickly stand up IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift using the Azure portal.
+This article shows you how to quickly stand up IBM WebSphere Liberty and Open Liberty on Azure Red Hat OpenShift (ARO) using the Azure portal.
-For step-by-step guidance in setting up Liberty and Open Liberty on Azure Red Hat OpenShift, see [Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift cluster](/azure/developer/java/ee/liberty-on-aro).
+This article uses the Azure Marketplace offer for Open/WebSphere Liberty to accelerate your journey to ARO. The offer automatically provisions a number of resources including an ARO cluster with a built-in OpenShift Container Registry (OCR), the Liberty Operator, and optionally a container image including Liberty and your application. To see the offer, visit the [Azure portal](https://aka.ms/liberty-aro). If you prefer manual step-by-step guidance for running Liberty on ARO that doesn't utilize the automation enabled by the offer, see [Deploy a Java application with Open Liberty/WebSphere Liberty on an Azure Red Hat OpenShift cluster](/azure/developer/java/ee/liberty-on-aro).
This article is intended to help you quickly get to deployment. Before going to production, you should explore [Tuning Liberty](https://www.ibm.com/docs/was-liberty/base?topic=tuning-liberty).
operator-nexus Concepts Network Fabric Controller https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-network-fabric-controller.md
Similar to the creation process, deleting an NFC usually takes between 45 and 60
**What steps should be taken if the NFC fails to initialize on the first attempt?** If the NFC does not provision successfully on the first try, the recommended course of action is to clean up and recreate the NFC. This is due to the lack of support for updating the NFC during intermediate failures.+
+## Next steps
+
+- [Network Fabric Services](concepts-network-fabric-services.md)
operator-nexus Concepts Network Fabric Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-network-fabric-services.md
+
+ Title: Azure Operator Nexus Network Fabric Services
+description: Overview of Network Fabric Services for Azure Operator Nexus.
+++ Last updated : 12/21/2023++++
+# Network Fabric Services overview
+The Network Fabric Controller (NFC) serves as the host for Nexus Network Fabric (NNF) services, illustrated in the diagram below. These services enable secure internet access for on-premises applications and services. Communication between on-premises applications and NNF services is facilitated through a specialized Express Route service (VPN). This setup allows on-premises services to connect to the NNF services via Express Route at one end, and access internet-based services at the other end.
+++
+## Enhanced Security with Nexus Network Fabric Proxy Management
+The Nexus Network Fabric employs a robust, cloud-native proxy designed to protect the Nexus infrastructure and its associated workloads. This proxy is primarily focused on preventing data exfiltration attacks and maintaining a controlled allowlist of URLs for NNF instance connections. In combination with the under-cloud proxy, the NNF proxy delivers comprehensive security for workload networks. There are two distinct aspects of this system: the Infrastructure Management Proxy, which handles all infrastructure traffic, and the Workload Management Proxy, dedicated to facilitating communication between workloads and public or Azure endpoints.
+
+## Optimized Time Synchronization with Managed Network Time Protocol (NTP)
+The Network Time Protocol (NTP) is an essential network protocol that aligns the time settings of computer systems over packet-switched networks. In the Azure Operator Nexus instance, NTP is instrumental in ensuring the consistent time settings across all compute nodes and network devices. This level of synchronization is critical for the Network Functions (NFs) operating within the infrastructure. It significantly contributes to the effectiveness of telemetry and security measures, maintaining the integrity and coordination of the system.
+
+## Nexus Network Fabric Resources
+The following are key resources for Nexus Network Fabric.
+
+### InternetGateways
+*InternetGateways* is a critical resource in network architecture, acting as the connecting bridge between a virtual network and the Internet. It enables virtual machines and other entities within a virtual network to communicate seamlessly with external services. These services range from websites and APIs to various cloud services, making InternetGateways a versatile and essential component.
+
+#### Properties
+
+| Property | Description |
+|||
+| Name | Serves as the unique identifier for the Internet Gateway. |
+| Location | Specifies the Azure region where the Internet Gateway is deployed, ensuring regional compliance and optimization. |
+| Subnets | Defines the subnets linked with the Internet Gateway, determining the network segments it services. |
+| Public IP Address| Assigns a public IP address to the gateway, enabling external network interactions. |
+| Routes | Outlines the routing rules and configurations for managing traffic through the gateway. |
++
+#### Use cases
+
+* **Internet Access:** Facilitates Internet connectivity for virtual network resources, crucial for updates, downloads, and accessing external services.
+* **Hybrid Connectivity:** Ideal for hybrid scenarios, allowing secure connections between on-premises networks and Azure resources.
+* **Load Balancing:** Enhances network performance and availability by evenly distributing traffic across multiple gateways.
+* **Security Enforcement:** Enables the implementation of robust security policies, such as outbound traffic restrictions and encryption mandates.
+
+### InternetGatewayRules
+*InternetGatewayRules* represents a set of rules associated with an Internet Gateway in the Managed Network Fabric. These rules establish guidelines for either permitting or restricting traffic as it moves through the Internet Gateway, providing a framework for network traffic management.
+
+#### Properties
+
+| Property | Description |
+||--|
+| Name | Acts as the unique identifier for each rule. |
+| Priority | Sets the evaluation order of the rules, with higher priority rules taking precedence.|
+| Action | Determines the action (e.g., allow, deny) for traffic that matches the rule criteria.|
+| Source IP Address Range | Identifies the originating IP address range applicable to the rule. |
+| Destination IP Address Range | Defines the targeted IP address range for the rule. |
+| Protocol | Specifies the network protocol (e.g., TCP, UDP) relevant to the rule. |
+| Port Range | Details the port range for the rule, if applicable. |
++
+#### Use cases
+
+* **Traffic Filtering:** InternetGatewayRules enable organizations to control both incoming and outgoing network traffic based on specific criteria. For example, they can block certain IP ranges or allow only particular protocols.
+
+* **Enforcing Security Policies:** These rules are instrumental in implementing security measures, such as restricting traffic to enhance network security. An organization might block known malicious IP ranges or limit traffic to specific ports for certain services.
+
+* **Compliance Assurance:** The rules can also be utilized to comply with regulatory standards by limiting types of traffic, thereby aiding in data privacy and access control.
+
+* **Traffic Load Balancing:** InternetGatewayRules can distribute network traffic across multiple gateways to optimize resource utilization. This includes prioritizing or throttling traffic based on business needs.
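To make the property list above concrete, here's an illustrative sketch of how an InternetGatewayRules definition might be expressed. The field names and values are assumptions based on the properties described in this article rather than the exact Azure Resource Manager schema; consult the Managed Network Fabric reference for the authoritative format.

```json
{
  "name": "allow-https-from-workload",
  "properties": {
    "priority": 100,
    "action": "Allow",
    "sourceIpAddressRange": "10.10.0.0/24",
    "destinationIpAddressRange": "0.0.0.0/0",
    "protocol": "TCP",
    "portRange": "443"
  }
}
```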
+
+## FAQs
+
+**Is Support Available for HTTP Endpoints?**
+
+Azure's default configuration supports only HTTPS endpoints to ensure secure communication. HTTP endpoints are not supported as part of this security measure. By prioritizing HTTPS, Azure maintains high standards of data integrity and privacy.
+
+**How Can I Safeguard Against Data Exfiltration?**
+
+To strengthen security against data exfiltration, Azure supports the allowance of specific Fully Qualified Domain Names (FQDNs) on the proxy. This additional security measure ensures that your network can only be accessed by approved traffic, greatly minimizing the potential for unauthorized data movement.
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
Along with data migration, the tool automatically provides the following built-i
- The source and target server must be in the same Azure region. Cross region migrations are enabled only for servers in India, China and UAE as Flexible server may not be available in all regions within these geographies.
- The tool takes care of the migration of data and schema. It doesn't migrate managed service features such as server parameters, connection security details and firewall rules.
- The migration tool shows the number of tables copied from source to target server. You need to manually validate the data in target server post migration.
-- The tool only migrates user databases and not system databases like template_0, template_1, azure_sys and azure_maintenance.
+- The tool migrates only user databases. System databases like azure_sys and azure_maintenance, and template databases such as template0 and template1, aren't migrated.
> [!NOTE] > The following limitations are applicable only for flexible servers on which the migration of users/roles functionality is enabled.
For the changes to take effect, server restart would be required.
Use the **Save and Restart** option and wait for the flexible server to restart. > [!NOTE]
-> If TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER or PG_PARTMAN extensions are used in your single server, please raise a support request since the migration tool does not handle these extensions.
+> If the TIMESCALEDB, POSTGIS_TOPOLOGY, POSTGIS_TIGER_GEOCODER, POSTGRES_FDW or PG_PARTMAN extensions are used in your single server, please raise a support request, because the migration tool doesn't handle these extensions.
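If you prefer scripting over the portal for the remaining extensions, the following is a hedged sketch of allowlisting extensions and restarting the flexible server with Azure PowerShell. It assumes the Az.PostgreSql module, that your target flexible server exposes the `azure.extensions` server parameter, and that the resource group, server, and extension names shown are placeholders.

```powershell
# Sketch: allowlist extensions on the Flexible Server target, then restart it.
# Assumes the Az.PostgreSql module and an authenticated session (Connect-AzAccount).
$resourceGroup = "myResourceGroup"     # placeholder
$serverName    = "myflexibleserver"    # placeholder

# Set the comma-separated list of allowed extensions on the flexible server.
Update-AzPostgreSqlFlexibleServerConfiguration -ResourceGroupName $resourceGroup `
    -ServerName $serverName -Name "azure.extensions" -Value "PG_TRGM,UUID-OSSP"

# Restart so that any parameter changes requiring a restart take effect.
Restart-AzPostgreSqlFlexibleServer -ResourceGroupName $resourceGroup -Name $serverName
```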
##### Create Azure Active Directory users on target server > [!NOTE]
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
The `create` parameters that go into the json file format are as shown below:
| `adminCredentials` | Required | This parameter lists passwords for admin users for both the Single Server source and the Flexible Server target. These passwords help to authenticate against the source and target servers. | `sourceServerUserName` | Required | The default value is the admin user created during the creation of single server and the password provided is used for authentication against this user. In case you aren't using the default user, this parameter is the user or role on the source server used for performing the migration. This user should have necessary privileges and ownership on the database objects involved in the migration and should be a member of **azure_pg_admin** role. | | `targetServerUserName` | Required | The default value is the admin user created during the creation of flexible server and the password provided is used for authentication against this user. In case you aren't using the default user, this parameter is the user or role on the target server used for performing the migration. This user should be a member of **azure_pg_admin**, **pg_read_all_settings**, **pg_read_all_stats**,**pg_stat_scan_tables** roles and should have the **Create role, Create DB** attributes. |
-| `dbsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. |
+| `dbsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. Note that only user databases are migrated. System databases and template databases, such as template0 and template1, aren't migrated. |
| `overwriteDbsInTarget` | Required | When set to true, if the target server happens to have an existing database with the same name as the one you're trying to migrate, migration tool automatically overwrites the database. | | `SetupLogicalReplicationOnSourceDBIfNeeded` | Optional | You can enable logical replication on the source server automatically by setting this property to `true`. This change in the server settings requires a server restart with a downtime of two to three minutes. | | `SourceDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution for a virtual network. Provide the FQDN of the Single Server source according to the custom DNS server for this property. |
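As a rough illustration of how these parameters come together, the following sketch assembles a properties file using only the keys documented above. All values are placeholders, and the exact nesting and casing expected by the migration `create` command may differ, so treat it as a template rather than the authoritative schema.

```powershell
# Sketch only: build a migration properties file from the parameters described above.
# Replace every placeholder value, and verify the exact JSON shape against the CLI reference.
$migrationProperties = @{
    adminCredentials = @{
        sourceServerPassword = "<source-admin-password>"   # placeholder
        targetServerPassword = "<target-admin-password>"   # placeholder
    }
    sourceServerUserName = "<source-admin-user>"           # placeholder
    targetServerUserName = "<target-admin-user>"           # placeholder
    dbsToMigrate         = @("database1", "database2")     # user databases only
    overwriteDbsInTarget = "true"
}

# Write the JSON file that the migration create command consumes.
$migrationProperties | ConvertTo-Json -Depth 5 | Set-Content -Path ".\migration-properties.json"
```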
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
Under this tab, there's a list of user databases inside the Single Server. You c
:::image type="content" source="./media/concepts-single-to-flexible/flexible-migration-database.png" alt-text="Screenshot of Databases to migrate." lightbox="./media/concepts-single-to-flexible/flexible-migration-database.png":::
-### Review
- >[!NOTE]
-> Gentle reminder to allowlist necessary [extensions](./concepts-single-to-flexible.md#allowlist-required-extensions) before you select **Create** in case it's not yet complete.
+> The tool migrates only user databases. System databases and template databases, such as template0 and template1, aren't migrated.
+
+### Review
The **Review** tab summarizes all the details for creating the validation or migration. Review the details and click on the start button.
sap Dbms Guide Maxdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-maxdb.md
When deploying SAP MaxDB into Azure, you must review your backup methodology. Ev
Backing up and restoring a database in Azure works the same way as it does for on-premises systems, so you can use standard SAP MaxDB backup/restore tools, which are described in one of the SAP MaxDB documentation documents listed in SAP Note [767598].
+#### <a name="01885ad6-88cf-4d5a-bdb5-6d43a6eed53e"></a>Backup and Restore with Azure Backup
+You can also integrate MaxDB backup with **Azure Backup** by using the third-party backup tool **MaxBack** (https://maxback.io). MaxBack allows you to back up and restore MaxDB on Windows with VSS integration, which is also used by Azure Backup. The advantage of using Azure Backup is that backup and restore are done at the storage level. MaxBack ensures that the database is in the right state for backup and restore, and automatically handles log volume backups.
+ #### <a name="77cd2fbb-307e-4cbf-a65f-745553f72d2c"></a>Performance Considerations for Backup and Restore As in bare-metal deployments, backup and restore performance are dependent on how many volumes can be read in parallel and the throughput of those volumes. Therefore, one can assume:
search Search Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-reliability.md
Availability zones are used when you add two or more replicas to your search ser
### Prerequisites + Service tier must be Standard or higher.
-+ Service region must be in a region that has available zones (listed in the following table).
++ Service region must be in a region that has available zones (listed in the following section). + Configuration must include multiple replicas: two for read-only query workloads, three for read-write workloads that include indexing.
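+If you manage the service with Azure PowerShell, a minimal sketch of meeting the replica requirement might look like the following; it assumes the Az.Search module and uses placeholder resource names.
+
+```powershell
+# Sketch: set three replicas to support a read-write (query plus indexing) workload.
+# Assumes the Az.Search module is installed and you're signed in; names are placeholders.
+Set-AzSearchService -ResourceGroupName "my-resource-group" `
+    -Name "my-search-service" `
+    -ReplicaCount 3
+```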
-Availability zones for Azure AI Search are supported in the following regions:
+### Supported regions
+
+Support for availability zones depends on infrastructure and storage. Currently, two regions that were announced in October 2023 have insufficient storage and don't provide availability zones for Azure AI Search:
++ Israel Central
++ Italy North
+
+Otherwise, availability zones for Azure AI Search are supported in the following regions:
| Region | Roll out | |--|--|
Availability zones for Azure AI Search are supported in the following regions:
| East US 2 | January 30, 2021 or later | | France Central| October 23, 2020 or later | | Germany West Central | May 3, 2021, or later |
-| Italy North | October 4, 2023 or later |
| Japan East | January 30, 2021 or later | | Korea Central | January 20, 2022 or later | | North Europe | January 28, 2021 or later |
spring-apps Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/cost-management.md
The first 50 vCPU hours and 100-GB hours of memory are free each month. For more
If you have Azure Spring Apps instances that don't need to run continuously, you can save costs by reducing the number of running instances. For more information, see [Start or stop your Azure Spring Apps service instance](how-to-start-stop-service.md).
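As a hedged example, stopping and later restarting an instance with Azure PowerShell might look like the following sketch, assuming the Az.SpringCloud module is available in your environment and using placeholder names.

```powershell
# Sketch: stop an Azure Spring Apps service instance during idle hours, then start it again later.
# Assumes the Az.SpringCloud module is installed and you're signed in; names are placeholders.
Stop-AzSpringCloud -ResourceGroupName "my-resource-group" -Name "my-spring-apps-instance"

# When the instance is needed again:
Start-AzSpringCloud -ResourceGroupName "my-resource-group" -Name "my-spring-apps-instance"
```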
-## Standard consumption and dedicated plan
-
-Unlike other pricing plans, the Standard consumption and dedicated plan offers a pure consumption-based pricing model. You can dynamically add and remove resources based on the resource utilization, number of incoming HTTP requests, or by events. When running apps in a consumption workload profile, you're charged for active and idle usage of resources, and the number of requests. For more information, see the [Standard consumption and dedicated plan](overview.md#standard-consumption-and-dedicated-plan) section of [What is Azure Spring Apps?](overview.md)
- ## Scale and autoscale You can manually scale computing capacities to accommodate a changing environment. For more information, see [Scale an application in Azure Spring Apps](how-to-scale-manual.md).
spring-apps Plan Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/plan-comparison.md
+
+ Title: Compare available plans in Azure Spring Apps
+description: Understand and compare all plans in Azure Spring Apps.
++++ Last updated : 01/05/2024+++
+# Compare available plans in Azure Spring Apps
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+This article provides a comparison of plans available in Azure Spring Apps. Each plan is designed to cater to different customer scenarios and purposes, as described in the following list:
+
+- Enterprise plan: This plan is designed to expedite the development and deployment of mission-critical and large-scale enterprise applications with a higher SLA and large application cluster support. This plan also addresses enterprise requirements around configuration management, service discovery, API gateway, API portal, ease of integration, portability, and flexibility with on-demand managed Tanzu commercial components and Tanzu Spring Runtime support, built on top of a strong partnership between VMware and Microsoft.
+- Basic plan: An entry-level plan for individual development and testing.
+- Standard plan: A Spring-centric and opinionated application hosting platform with built-in and pre-configured settings for build, service registry, storage, and more.
+- Standard consumption and dedicated plan: This plan is hosted in an Azure Container Apps environment and is designed to seamlessly interact with other apps running in the same environment with simplified networking and unified observability.
+
+The following table shows the differences between each plan:
+
+| Feature | Description | Enterprise | Basic | Standard | Standard consumption and dedicated |
+|--|--||-|-|--|
+| **Application management** | Application management with hassle-free infrastructure operations. | | | | |
+| App lifecycle management | Create, deploy, stop, and restart apps easily without knowledge of the underlying infrastructure. | ✔️ | ✔️ | ✔️ | ✔️ |
+| SLA | The ensured SLA for both apps and managed components. | **99.95%** | n/a | 99.90% | Not available during preview. |
+| Max App instance size | The maximum application instance size. | **8 vCPU, 32 GB** | 1 vCPU, 2 GB | 4 vCPU, 8 GB | 4 vCPU, 8 GB in consumption, up to 16 vCPU, 128 GB in dedicated |
+| Max App instances | The maximum number of application instances. | **1000** | 25 | 500 | 400 in consumption, 1000 in dedicated. |
+| Auto and manual scaling | Automatic and manual app scaling in/out and up/down. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Deploy from source code, artifact and custom image | Deploy from source code, artifact, and custom image for apps. | ✔️ | ✔️ | ✔️ | Artifact and custom image. |
+| Java app support | Build and deploy Java apps, mainly Spring Apps. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Java native image support | Build and deploy Java native image apps. | ✔️ | ❌ | ❌ | ❌ |
+| .NET Core app support | Build and deploy .NET Core apps. | ✔️ | ❌ | ❌ | ❌ |
+| Node.js app support | Build and deploy Node.js apps. | ✔️ | ❌ | ❌ | ❌ |
+| Go app support | Build and deploy Go apps. | ✔️ | ❌ | ❌ | ❌ |
+| Python app support | Build and deploy Python apps. | ✔️ | ❌ | ❌ | ❌ |
+| PHP app support | Build and deploy PHP apps. | ✔️ | ❌ | ❌ | ❌ |
+| Static web app support | Build and deploy static web apps with static web content, like CSS, JS, and HTML files. | ✔️ | ❌ | ❌ | ❌ |
+| Zero downtime deployment | Rolling update and blue/green deployment strategies with assured zero downtime and no impact on apps. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Custom domain | Support multiple custom domains on apps. | ✔️ | ❌ | ✔️ | ✔️ |
+| Bring your own storage | Support to mount Azure storage for apps to use. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Custom health probes | Support to customize apps on health probes, like liveness, readiness, and startup probes, and graceful termination periods. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Auto patching | Automatic patching of the base OS, language runtime (such as the JDK), and APM agents in maintaining images for apps. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Spring Runtime Support | Built-in Tanzu Spring Runtime support with extended support period on Spring projects and 24/7 VMware support. | ✔️ | ❌ | ❌ | ❌ |
+| **Troubleshooting and monitoring** | Troubleshooting and monitoring. | | | | |
+| Remote debugging | Remote debugging. | ✔️ | ✔️ | ✔️ | n/a |
+| Thread/heap/JFR dump | Thread/heap/JFR dump. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Web shell support | Use a web shell to connect to any running app instance to directly run JDK commands. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Out-of-box APM integration | Out-of-box APM integration (Azure Application Insights and popular third-party APMs like Dynatrace, AppDynamics, New Relic, and Elastic APM). | ✔️ | ✔️ | ✔️ | ✔️ |
+| **Security** | Secure networking and identity management. | | | | |
+| Secure communication along whole traffic path | Secure communication along the whole traffic path, including ingress controller to apps, app to app, and apps to backing services such as databases. | ✔️ | ✔️ | ✔️ | ✔️ |
+| VNET injection | Virtual network (VNET) injection. | ✔️ | ❌ | ✔️ | ✔️ |
+| Private endpoint | Support to connect with backing services like Azure databases, Key Vault, and so on using a private endpoint. | ✔️ | ❌ | ✔️ | ✔️ |
+| Managed identity | Support both Azure system and user-assigned managed identity. | ✔️ | ✔️ | ✔️ | ✔️ |
+| **Integration** | Integration capability with backing services, CICD, and IDEs. | | | | |
+| Easy integration with any Azure services | Integration with any Azure services on top of Azure SDK and Spring Cloud Azure. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Out-of-box CICD integration | Out-of-box CICD integration with Azure DevOps, Jenkins, and GitHub Actions, and so forth. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Out-of-box integration with popular IDEs | Out-of-box integration with popular IDEs like VS Code and IntelliJ, to allow in-place interaction with Azure Spring Apps. | ✔️ | ✔️ | ✔️ | ✔️ |
+| **Managed components** | Fully managed components with ensured SLA, timely maintenance, and well-tuned configuration to support app development and operation. | | | | |
+| SLA | The ensured SLA for both apps and managed components. | **99.95%** | n/a | 99.90% | Not available during preview. |
+| Build and maintain images from source code | A build service to centrally manage building and maintaining Open Container Initiative (OCI) images from source code. | ✔️ **(configurable build service<sup>1</sup>)** | ✔️ (default build service) | ✔️ (default build service) | ❌ |
+| An API gateway to route requests to backend apps | Spring Cloud Gateway to route requests with cross-cutting concerns addressed centrally (throttling, request/response filters, authentication and authorization, and so forth). | ✔️ | ❌ | ❌ | ❌ |
+| An API portal to browse and try out APIs | An API portal to view detailed API documentation, and to try out APIs. | ✔️ | ❌ | ❌ | ❌ |
+| App configuration management | A configuration service to distribute app configurations from Git host repositories to apps. | ✔️ **(supports polyglot apps)** | ✔️ (supports Spring apps only) | ✔️ (supports Spring apps only) | ✔️ (supports Spring apps only) |
+| Service registry and discovery | A service registry to provide service registration and discovery capabilities for microservices-based Spring applications. | ✔️ | ✔️ | ✔️ | ✔️ |
+| Real-time monitoring and troubleshooting apps | A lightweight insights and troubleshooting tool that helps app developers and app operators to look inside running Spring applications. | ✔️ | ❌ | ❌ | ❌ |
+| Expedite development with distributable project templates | A project bootstrapping tool to build and distribute templates/accelerators that codify enterprise-conformant code and configurations in a discoverable and repeatable way. | ✔️ | ❌ | ❌ | ❌ |
+
+<sup>1</sup> The configurable build service enables the following features:
+
+- Bring your own container registry: configure your own Azure Container Registry (ACR) to store built images instead of using the Azure Spring Apps managed ACR, so you can deploy verified images to other Azure Spring Apps Enterprise-plan environments.
+- Configure resources for the whole build pool, up to 64 vCPU and 128 GB.
+- Configure which OS stack to use as the base image for your apps.
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
Several filters are available for customizing a blob inventory report:
++ View the JSON for inventory rules by selecting the **Code view** tab in the **Blob inventory** section of the Azure portal. Filters are specified within a rule definition. ```json
View the JSON for inventory rules by selecting the **Code view** tab in the **Bl
| RemainingRetentionDays (Will appear only if include deleted containers is selected) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | + ## Inventory run If you configure a rule to run daily, then it will be scheduled to run every day. If you configure a rule to run weekly, then it will be scheduled to run each week on Sunday UTC time.
An inventory job can take a longer amount of time in these cases:
An object replication policy can prevent an inventory job from writing inventory reports to the destination container. Some other scenarios can archive the reports or make the reports immutable when they're partially completed which can cause inventory jobs to fail.
+### Inventory and Immutable Storage
+
+When immutable storage is enabled, be aware of a limitation that affects inventory reports. Because of the write-once, read-many (WORM) nature of immutable storage, inventory results can't be written to the destination container while immutable storage is active. This is a known limitation, so plan your reporting activities accordingly.
+ ## Next steps - [Enable Azure Storage blob inventory reports](blob-inventory-how-to.md)
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
description: Determine the level of support for each storage account feature giv
Previously updated : 12/11/2023 Last updated : 01/09/2024
The following table describes whether a feature is supported in a standard gener
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; | | [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
-| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage Analytics metrics (classic)](../common/storage-metrics-migration.md?toc=/azure/storage/blobs/toc.json)<sup>3</sup> | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
<sup>1</sup> Requests that clients make by using NFS 3.0 or SFTP can't be authorized by using Microsoft Entra security. <sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
+<sup>3</sup> Storage Analytics metrics is retired. See [Transition to metrics in Azure Monitor](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json).
+ ## Premium block blob accounts The following table describes whether a feature is supported in a premium block blob account when you enable a hierarchical namespace (HNS), NFS 3.0 protocol, or SFTP.
The following table describes whether a feature is supported in a premium block
| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; | | [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24;| &#x2705; |
-| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Storage Analytics metrics (classic)](../common/storage-metrics-migration.md?toc=/azure/storage/blobs/toc.json)<sup>3</sup> | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
<sup>1</sup> Requests that clients make by using NFS 3.0 or SFTP can't be authorized by using Microsoft Entra security. <sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
+<sup>3</sup> Storage Analytics metrics is retired. See [Transition to metrics in Azure Monitor](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json).
+ ## See also - [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md)
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
Previously updated : 08/17/2023 Last updated : 01/09/2024
To estimate the cost of storing and accessing blob data in a general-purpose v2
To decide on the best access tier for your needs, it can be helpful to determine your blob data capacity, and how that data is being used. This can be best done by looking at the monitoring metrics for your account.
-### Monitoring existing storage accounts
-
-To monitor your existing storage accounts and gather this data, you can make use of Azure Storage Analytics, which performs logging and provides metrics data for a storage account. Storage Analytics can store metrics that include aggregated transaction statistics and capacity data about requests to the storage service for GPv1, GPv2, and Blob storage account types. This data is stored in well-known tables in the same storage account.
-
-For more information, see [About Storage Analytics Metrics](../blobs/monitor-blob-storage.md) and [Storage Analytics Metrics Table Schema](/rest/api/storageservices/Storage-Analytics-Metrics-Table-Schema)
-
-> [!NOTE]
-> Blob storage accounts expose the Table service endpoint only for storing and accessing the metrics data for that account.
-
-To monitor the storage consumption for Blob storage, you need to enable the capacity metrics.
-With this enabled, capacity data is recorded daily for a storage account's Blob service and recorded as a table entry that is written to the *$MetricsCapacityBlob* table within the same storage account.
-
-To monitor data access patterns for Blob storage, you need to enable the hourly transaction metrics from the API. With hourly transaction metrics enabled, per API transactions are aggregated every hour, and recorded as a table entry that is written to the *$MetricsHourPrimaryTransactionsBlob* table within the same storage account. The *$MetricsHourSecondaryTransactionsBlob* table records the transactions to the secondary endpoint when using RA-GRS storage accounts.
-
-> [!NOTE]
-> If you have a general-purpose storage account in which you have stored page blobs and virtual machine disks, or queues, files, or tables, alongside block and append blob data, this estimation process isn't applicable. The capacity data doesn't differentiate block blobs from other types, and doesn't give capacity data for other data types. If you use these types, an alternative methodology is to look at the quantities on your most recent bill.
-
-To get a good approximation of your data consumption and access pattern, we recommend you choose a retention period for the metrics that is representative of your regular usage and extrapolate. One option is to retain the metrics data for seven days and collect the data every week, for analysis at the end of the month. Another option is to retain the metrics data for the last 30 days and collect and analyze the data at the end of the 30-day period.
-
-For details on enabling, collecting, and viewing metrics data, see [Storage analytics metrics](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json).
-
-> [!NOTE]
-> Storing, accessing, and downloading analytics data is also charged just like regular user data.
-
-### Utilizing usage metrics to estimate costs
-
-#### Capacity costs
-
-The latest entry in the capacity metrics table *$MetricsCapacityBlob* with the row key *'data'* shows the storage capacity consumed by user data. The latest entry in the capacity metrics table *$MetricsCapacityBlob* with the row key *'analytics'* shows the storage capacity consumed by the analytics logs.
-
-This total capacity consumed by both user data and analytics logs (if enabled) can then be used to estimate the cost of storing data in the storage account. The same method can also be used for estimating storage costs in GPv1 storage accounts.
-
-#### Transaction costs
+To estimate the cost of storing and accessing blob data in a general-purpose v2 storage account in a particular tier, evaluate your existing usage pattern or approximate your expected usage pattern. In general, you want to know:
-The sum of *'TotalBillableRequests'*, across all entries for an API in the transaction metrics table indicates the total number of transactions for that particular API. *For example*, the total number of *'GetBlob'* transactions in a given period can be calculated by the sum of total billable requests for all entries with the row key *'user;GetBlob'*.
+- Your Blob storage consumption, in gigabytes, including:
+ - How much data is being stored in the storage account?
+ - How does the data volume change on a monthly basis; does new data constantly replace old data?
-In order to estimate transaction costs for Blob storage accounts, you need to break down the transactions into three groups since they're priced differently.
+- The primary access pattern for your Blob storage data, including:
+ - How much data is being read from and written to the storage account?
+ - How many read operations versus write operations occur on the data in the storage account?
-- Write transactions such as *'PutBlob'*, *'PutBlock'*, *'PutBlockList'*, *'AppendBlock'*, *'ListBlobs'*, *'ListContainers'*, *'CreateContainer'*, *'SnapshotBlob'*, and *'CopyBlob'*.-- Delete transactions such as *'DeleteBlob'* and *'DeleteContainer'*.-- All other transactions.
+To decide on the best access tier for your needs, it can be helpful to determine your blob data capacity, and how that data is being used. This can be best done by looking at the monitoring metrics for your account.
-In order to estimate transaction costs for GPv1 storage accounts, you need to aggregate all transactions irrespective of the operation/API.
+### Monitoring existing storage accounts
-#### Data access and geo-replication data transfer costs
+To monitor your existing storage accounts and gather this data, you can make use of storage metrics in Azure Monitor. Azure Monitor stores metrics that include aggregated transaction statistics and capacity data about requests to the storage service. Azure Storage sends metric data to the Azure Monitor back end. Azure Monitor provides a unified monitoring experience that includes data from the Azure portal as well as data that is ingested. For more information, see any of these articles:
-While storage analytics doesn't provide the amount of data read from and written to a storage account, it can be roughly estimated by looking at the transaction metrics table. The sum of *'TotalIngress'* across all entries for an API in the transaction metrics table indicates the total amount of ingress data in bytes for that particular API. Similarly the sum of *'TotalEgress'* indicates the total amount of egress data, in bytes.
+- [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
+- [Monitoring Azure Files](../files/storage-files-monitoring.md)
+- [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
+- [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
In order to estimate the data access costs for Blob storage accounts, you need to break down the transactions into two groups. -- The amount of data retrieved from the storage account can be estimated by looking at the sum of *'TotalEgress'* for primarily the *'GetBlob'* and *'CopyBlob'* operations.
+- The amount of data retrieved from the storage account can be estimated by looking at the sum of the *'Egress'* metric for primarily the *'GetBlob'* and *'CopyBlob'* operations.
+
+- The amount of data written to the storage account can be estimated by looking at the sum of the *'Ingress'* metric for primarily the *'PutBlob'*, *'PutBlock'*, *'CopyBlob'* and *'AppendBlock'* operations.
-- The amount of data written to the storage account can be estimated by looking at the sum of *'TotalIngress'* for primarily the *'PutBlob'*, *'PutBlock'*, *'CopyBlob'* and *'AppendBlock'* operations.
+To determine the price of each operation against the blob storage service, see [Map each REST operation to a price](../blobs/map-rest-apis-transaction-categories.md).
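+As a hedged sketch of pulling these numbers, the following uses `Get-AzMetric` from the Az.Monitor module to total the *Ingress*, *Egress*, and *Transactions* metrics for a storage account over the last 30 days. The resource ID is a placeholder, and splitting totals by API name (for example, GetBlob or PutBlob) requires an additional dimension filter that's omitted here for brevity.
+
+```powershell
+# Sketch: total Ingress, Egress, and Transactions for a storage account over the last 30 days.
+# Assumes the Az.Monitor module is installed; replace the resource ID with your own.
+$resourceId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<account-name>"
+
+$metrics = Get-AzMetric -ResourceId $resourceId `
+    -MetricName "Ingress", "Egress", "Transactions" `
+    -StartTime (Get-Date).AddDays(-30) -EndTime (Get-Date) `
+    -TimeGrain "1.00:00:00" `
+    -AggregationType Total
+
+# Sum the daily totals for each metric across the 30-day window.
+foreach ($metric in $metrics) {
+    $total = ($metric.Data | Measure-Object -Property Total -Sum).Sum
+    "{0}: {1:N0}" -f $metric.Name.LocalizedValue, $total
+}
+```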
The cost of geo-replication data transfer for Blob storage accounts can also be calculated by using the estimate for the amount of data written when using a GRS or RA-GRS storage account.
storage Storage Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-analytics.md
Title: Use Azure Storage analytics to collect logs and metrics data
+ Title: Use Azure Storage analytics to collect log data
description: Storage Analytics enables you to track metrics data for all storage services, and to collect logs for Blob, Queue, and Table storage. Previously updated : 03/03/2017 Last updated : 01/09/2024
# Storage Analytics
-Azure Storage Analytics performs logging and provides metrics data for a storage account. You can use this data to trace requests, analyze usage trends, and diagnose issues with your storage account.
+Azure Storage Analytics performs logging for a storage account. You can use this data to trace requests, analyze usage trends, and diagnose issues with your storage account.
+
+> [!NOTE]
+> Storage Analytics supports only logs. Storage Analytics metrics are retired. See [Transition to metrics in Azure Monitor](../common/storage-analytics-metrics.md?toc=/azure/storage/blobs/toc.json). While Storage Analytics logs are still supported, we recommend that you use Azure Storage logs in Azure Monitor instead of Storage Analytics logs. To learn more, see any of the following articles:
+>
+> - [Monitoring Azure Blob Storage](../blobs/monitor-blob-storage.md)
+> - [Monitoring Azure Files](../files/storage-files-monitoring.md)
+> - [Monitoring Azure Queue Storage](../queues/monitor-queue-storage.md)
+> - [Monitoring Azure Table storage](../tables/monitor-table-storage.md)
To use Storage Analytics, you must enable it individually for each service you want to monitor. You can enable it from the [Azure portal](https://portal.azure.com). For details, see [Monitor a storage account in the Azure portal](./manage-storage-analytics-logs.md). You can also enable Storage Analytics programmatically via the REST API or the client library. Use the [Set Blob Service Properties](/rest/api/storageservices/set-blob-service-properties), [Set Queue Service Properties](/rest/api/storageservices/set-queue-service-properties), [Set Table Service Properties](/rest/api/storageservices/set-table-service-properties), and [Set File Service Properties](/rest/api/storageservices/Get-File-Service-Properties) operations to enable Storage Analytics for each service.
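For example, a minimal PowerShell sketch of enabling Blob service logging with the Az.Storage module; the resource group and account names are placeholders.

```powershell
# Sketch: enable Storage Analytics logging for the Blob service with a 10-day retention policy.
# Assumes the Az.Storage module is installed; replace the resource group and account names.
$ctx = (Get-AzStorageAccount -ResourceGroupName "my-resource-group" -Name "mystorageaccount").Context

Set-AzStorageServiceLoggingProperty -ServiceType Blob `
    -LoggingOperations Read, Write, Delete `
    -RetentionDays 10 `
    -Context $ctx

# Confirm the current logging settings.
Get-AzStorageServiceLoggingProperty -ServiceType Blob -Context $ctx
```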
-The aggregated data is stored in a well-known blob (for logging) and in well-known tables (for metrics), which may be accessed using the Blob service and Table service APIs.
+The aggregated log data is stored in a well-known blob, which may be accessed using the Blob service APIs.
Storage Analytics has a 20 TB limit on the amount of stored data that is independent of the total limit for your storage account. For more information about storage account limits, see [Scalability and performance targets for standard storage accounts](scalability-targets-standard-account.md).
For an in-depth guide on using Storage Analytics and other tools to identify, di
## Billing for Storage Analytics
-All metrics data is written by the services of a storage account. As a result, each write operation performed by Storage Analytics is billable. Additionally, the amount of storage used by metrics data is also billable.
-
-The following actions performed by Storage Analytics are billable:
--- Requests to create blobs for logging.-- Requests to create table entities for metrics.
+The amount of storage used by log data is billable. You're also billed for requests to create blobs for logging.
-If you have configured a data retention policy, you can reduce the spending by deleting old logging and metrics data. For more information about retention policies, see [Setting a Storage Analytics Data Retention Policy](/rest/api/storageservices/Setting-a-Storage-Analytics-Data-Retention-Policy).
+If you have configured a data retention policy, you can reduce the spending by deleting old log data. For more information about retention policies, see [Setting a Storage Analytics Data Retention Policy](/rest/api/storageservices/Setting-a-Storage-Analytics-Data-Retention-Policy).
### Understanding billable requests
-Every request made to an account's storage service is either billable or non-billable. Storage Analytics logs each individual request made to a service, including a status message that indicates how the request was handled. Similarly, Storage Analytics stores metrics for both a service and the API operations of that service, including the percentages and count of certain status messages. Together, these features can help you analyze your billable requests, make improvements on your application, and diagnose issues with requests to your services. For more information about billing, see [Understanding Azure Storage Billing - Bandwidth, Transactions, and Capacity](/archive/blogs/windowsazurestorage/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity).
+Every request made to an account's storage service is either billable or non-billable. Storage Analytics logs each individual request made to a service, including a status message that indicates how the request was handled. See [Understanding Azure Storage Billing - Bandwidth, Transactions, and Capacity](/archive/blogs/windowsazurestorage/understanding-windows-azure-storage-billing-bandwidth-transactions-and-capacity).
-When looking at Storage Analytics data, you can use the tables in the [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) topic to determine what requests are billable. Then you can compare your logs and metrics data to the status messages to see if you were charged for a particular request. You can also use the tables in the previous topic to investigate availability for a storage service or individual API operation.
+When looking at Storage Analytics data, you can use the tables in the [Storage Analytics Logged Operations and Status Messages](/rest/api/storageservices/storage-analytics-logged-operations-and-status-messages) topic to determine what requests are billable. Then you can compare your log data to the status messages to see if you were charged for a particular request. You can also use the tables in the previous topic to investigate availability for a storage service or individual API operation.
## Next steps - [Monitor a storage account in the Azure portal](./manage-storage-analytics-logs.md)-- [Storage Analytics Metrics](storage-analytics-metrics.md) - [Storage Analytics Logging](storage-analytics-logging.md)
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
Azure Storage services offer the following benefits for application developers a
- **Secure.** All data written to an Azure storage account is encrypted by the service. Azure Storage provides you with fine-grained control over who has access to your data. - **Scalable.** Azure Storage is designed to be massively scalable to meet the data storage and performance needs of today's applications. - **Managed.** Azure handles hardware maintenance, updates, and critical issues for you.-- **Accessible.** Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft provides client libraries for Azure Storage in a variety of languages, including .NET, Java, Node.js, Python, PHP, Ruby, Go, and others, as well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual solutions for working with your data.
+- **Accessible.** Data in Azure Storage is accessible from anywhere in the world over HTTP or HTTPS. Microsoft provides client libraries for Azure Storage in a variety of languages, including .NET, Java, Node.js, Python, Go, and others, as well as a mature REST API. Azure Storage supports scripting in Azure PowerShell or Azure CLI. And the Azure portal and Azure Storage Explorer offer easy visual solutions for working with your data.
## Azure Storage data services
Blob Storage is ideal for:
- Storing data for backup and restore, disaster recovery, and archiving. - Storing data for analysis by an on-premises or Azure-hosted service.
-Objects in Blob Storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client applications can access blobs via URLs, the [Azure Storage REST API](/rest/api/storageservices/blob-service-rest-api), [Azure PowerShell](/powershell/module/azure.storage), [Azure CLI](/cli/azure/storage), or an Azure Storage client library. The storage client libraries are available for multiple languages, including [.NET](/dotnet/api/overview/azure/storage), [Java](/java/api/overview/azure/storage), [Node.js](https://azure.github.io/azure-storage-node), [Python](/python/api/overview/azure/storage), [PHP](https://azure.github.io/azure-storage-php/), and [Ruby](https://azure.github.io/azure-storage-ruby).
+Objects in Blob Storage can be accessed from anywhere in the world via HTTP or HTTPS. Users or client applications can access blobs via URLs, the [Azure Storage REST API](/rest/api/storageservices/blob-service-rest-api), [Azure PowerShell](/powershell/module/azure.storage), [Azure CLI](/cli/azure/storage), or an Azure Storage client library. The storage client libraries are available for multiple languages, including [.NET](/dotnet/api/overview/azure/storage), [Java](/java/api/overview/azure/storage), [Node.js](https://azure.github.io/azure-storage-node), and [Python](/python/api/overview/azure/storage).
Clients can also securely connect to Blob Storage by using SSH File Transfer Protocol (SFTP) and mount Blob Storage containers by using the Network File System (NFS) 3.0 protocol.
You can access resources in a storage account by any language that can make HTTP
- [Azure Storage client library for Java/Android](/java/api/overview/azure/storage) - [Azure Storage client library for Node.js](../blobs/reference.md#javascript-client-libraries) - [Azure Storage client library for Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/storage/azure-storage-blob)-- [Azure Storage client library for PHP](https://github.com/Azure/azure-storage-php)-- [Azure Storage client library for Ruby](https://github.com/Azure/azure-storage-ruby) - [Azure Storage client library for C++](https://github.com/Azure/azure-storage-cpp) ### Azure Storage management API and library references
storage Storage Metrics Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-metrics-migration.md
description: Learn how to transition from Storage Analytics metrics (classic met
Previously updated : 01/03/2024 Last updated : 01/09/2024
# Transition to metrics in Azure Monitor
-On **January 9, 2024** Storage Analytics metrics, also referred to as *classic metrics* will be retired. If you use classic metrics, make sure to transition to metrics in Azure Monitor prior to that date. This article helps you make the transition.
+On **January 9, 2024**, Storage Analytics metrics, also referred to as *classic metrics*, were retired. If you used classic metrics, this article helps you transition to metrics in Azure Monitor.
## Steps to complete the transition
storage Elastic San Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-scale-targets.md
The appliance scale targets vary depending on region and redundancy of the SAN i
|Resource |France Central |Southeast Asia |Australia East |North Europe | West Europe | UK South | East US | East US 2 | South Central US| West US 2 | West US 3 | Sweden Central | ||||| |Maximum number of Elastic SAN that can be deployed per subscription per region |5 |5 |5 |5 |5 |5 |5 |5 |5 | 5 | 5|5|
-|Maximum total capacity (TiB) |100 |100 |600 |600|600|600| |600 |600 |600 | 100 | 100 |
-|Maximum base capacity (TiB) |100 |100 |400 |400 | 400|400 |400 |400 |400 |400 | 100 |100 |
-|Minimum total capacity (TiB) |1 |1 |1 |1 |1 |1 |1 |1 | 1 | 1 | 1 |1|
+|Maximum total capacity units (TiB) |100 |100 |600 |600|600|600| |600 |600 |600 | 100 | 100 |
+|Maximum base capacity units (TiB) |100 |100 |400 |400 | 400|400 |400 |400 |400 |400 | 100 |100 |
+|Minimum total SAN capacity (TiB) |1 |1 |1 |1 |1 |1 |1 |1 | 1 | 1 | 1 |1|
|Maximum total IOPS |500,000 |500,000 |2,000,000 |2,000,000|2,000,000 |2,000,000 |2,000,000 |2,000,000 |2,000,000 |2,000,000 | 500,000 |500,000 | |Maximum total throughput (MB/s) |8,000 |8,000 |32,000 |32,000 |32,000|32,000 |32,000 |32,000 |32,000 |32,000 | 8,000|8,000|
ZRS is only available in France Central, North Europe, West Europe and West US 2
|Resource |France Central |North Europe | West Europe |West US 2 | ||||| |Maximum number of Elastic SAN that can be deployed per subscription per region |5 |5 |5 |5 |
-|Maximum total capacity (TiB) |200 |200 |200 |200 |
-|Maximum base capacity (TiB) |100 |100 |100 |100 |
-|Minimum total capacity (TiB) |1 |1 |1 |1 |
+|Maximum total capacity units (TiB) |200 |200 |200 |200 |
+|Maximum base capacity units (TiB) |100 |100 |100 |100 |
+|Minimum total SAN capacity (TiB) |1 |1 |1 |1 |
|Maximum total IOPS |500,000 |500,000 |500,000 |500,000 | |Maximum total throughput (MB/s) |8,000 |8,000 |8,000 |8,000 |
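To relate these limits to a deployment, the following is a hedged sketch of creating an Elastic SAN whose base and extended capacity stay within the per-SAN maximums listed above. It assumes the Az.ElasticSan module; the names, location, SKU, and capacity values are placeholders you should check against your region's limits.

```powershell
# Sketch: create an Elastic SAN with 100 TiB base capacity plus 50 TiB extended capacity
# (150 TiB total), which stays within the per-SAN limits in the tables above.
# Assumes the Az.ElasticSan module is installed; names and values are placeholders.
New-AzElasticSan -ResourceGroupName "my-resource-group" `
    -Name "my-elastic-san" `
    -Location "eastus" `
    -BaseSizeTiB 100 `
    -ExtendedCapacitySizeTiB 50 `
    -SkuName "Premium_LRS"
```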
synapse-analytics Design Guidance For Replicated Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/design-guidance-for-replicated-tables.md
Title: Design guidance for replicated tables
description: Recommendations for designing replicated tables in Synapse SQL pool Previously updated : 09/27/2022 Last updated : 01/09/2024
For example, this load pattern loads data from four sources, but only invokes on
To ensure consistent query execution times, consider forcing the build of the replicated tables after a batch load. Otherwise, the first query will still use data movement to complete the query.
+The 'Build Replicated Table Cache' operation can execute up to two builds simultaneously. For example, if you attempt to rebuild the cache for five tables, the system uses the staticrc20 resource class (which can't be modified) to build two tables at a time. Therefore, avoid replicated tables larger than 2 GB, because they can slow down the cache rebuild across the nodes and increase the overall time.
+ This query uses the [sys.pdw_replicated_table_cache_state](/sql/relational-databases/system-catalog-views/sys-pdw-replicated-table-cache-state-transact-sql?toc=/azure/synapse-analytics/sql-data-warehouse/toc.json&bc=/azure/synapse-analytics/sql-data-warehouse/breadcrumb/toc.json&view=azure-sqldw-latest&preserve-view=true) DMV to list the replicated tables that have been modified, but not rebuilt. ```sql
To trigger a rebuild, run the following statement on each table in the preceding
SELECT TOP 1 * FROM [ReplicatedTable] ```
+To monitor the rebuild process, you can use [sys.dm_pdw_exec_requests](/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-requests-transact-sql?view=azure-sqldw-latest&preserve-view=true), where the `command` will start with 'BuildReplicatedTableCache'. For example:
+
+```sql
+-- Monitor Build Replicated Cache
+SELECT *
+FROM sys.dm_pdw_exec_requests
+WHERE command like 'BuildReplicatedTableCache%'
+```
+
+> [!TIP]
+> [Table size queries](/azure/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-overview#table-size-queries) can be used to verify which table(s) have a replicated distribution policy and which are larger than 2 GB.
+ ## Next steps To create a replicated table, use one of these statements:
synapse-analytics Disable Geo Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/disable-geo-backup.md
Title: Disable geo-backups
+ Title: Disable geo-backups
description: How-to guide for disabling geo-backups for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics ---- Previously updated : 01/06/2021 - Last updated : 01/09/2024+++
-# Disable geo-backups for a [dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-overview-what-is.md) in Azure Synapse Analytics
+# Disable geo-backups for a dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics
-In this article, you learn to disable geo-backups for your dedicated SQL pool (formerly SQL DW) Azure portal.
+In this article, you learn to disable geo-backups for your [dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-overview-what-is.md) in the Azure portal.
## Disable geo-backups through Azure portal
Follow these steps to disable geo-backups for your dedicated SQL pool (formerly
> [!NOTE] > If you disable geo-backups, you will no longer be able to recover your dedicated SQL pool (formerly SQL DW) to another Azure region.
->
1. Sign in to your [Azure portal](https://portal.azure.com/) account.
-1. Select the dedicated SQL pool (formerly SQL DW) resource that you would like to disable geo-backups for.
+1. Select the dedicated SQL pool (formerly SQL DW) resource where you would like to disable geo-backups.
1. Under **Settings** in the left-hand navigation panel, select **Geo-backup policy**.
- ![Disable geo-backup](./media/sql-data-warehouse-restore-from-geo-backup/disable-geo-backup-1.png)
+ :::image type="content" source="media/sql-data-warehouse-restore-from-geo-backup/disable-geo-backup-menu.png" alt-text="A screenshot from the Azure portal, of the navigation menu, showing where to find the geo-backup policy page.":::
1. To disable geo-backups, select **Disabled**.
- ![Disabled geo-backup](./media/sql-data-warehouse-restore-from-geo-backup/disable-geo-backup-2.png)
+ :::image type="content" source="media/sql-data-warehouse-restore-from-geo-backup/disable-geo-backup-option.png" alt-text="A screenshot from the Azure portal, of the disable geo-backup option.":::
-1. Select *Save* to ensure that your settings are saved.
+1. Select **Save** to ensure that your settings are saved.
- ![Save geo-backup settings](./media/sql-data-warehouse-restore-from-geo-backup/disable-geo-backup-3.png)
+ :::image type="content" source="media/sql-data-warehouse-restore-from-geo-backup/disable-geo-backup-save.png" alt-text="A screenshot from the Azure portal, showing the Save geo-backup settings button.":::
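+If you'd rather script this than use the portal, a hedged sketch with the Az.Sql module might look like the following; it assumes the `Set-AzSqlDatabaseGeoBackupPolicy` cmdlet applies to your dedicated SQL pool (formerly SQL DW) and uses placeholder names.
+
+```powershell
+# Sketch: disable the geo-backup policy for a dedicated SQL pool (formerly SQL DW).
+# Assumes the Az.Sql module is installed; replace the placeholder names with your own.
+Set-AzSqlDatabaseGeoBackupPolicy -ResourceGroupName "my-resource-group" `
+    -ServerName "my-sqldw-server" `
+    -DatabaseName "my-dedicated-sql-pool" `
+    -State Disabled
+
+# Verify the current policy state.
+Get-AzSqlDatabaseGeoBackupPolicy -ResourceGroupName "my-resource-group" `
+    -ServerName "my-sqldw-server" `
+    -DatabaseName "my-dedicated-sql-pool"
+```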
-## Next steps
+## Related content
- [Restore an existing dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-restore-active-paused-dw.md)-- [Restore a deleted dedicated SQL pool (formerly SQL DW)](sql-data-warehouse-restore-deleted-dw.md)
+- [Restore a deleted dedicated SQL pool (formerly SQL DW) in Azure Synapse Analytics](sql-data-warehouse-restore-deleted-dw.md)
virtual-desktop Publish Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/publish-applications.md
Title: Publish applications with RemoteApp in Azure Virtual Desktop portal - Azure
+ Title: Publish applications with RemoteApp in Azure Virtual Desktop - Azure
description: How to publish applications with RemoteApp in Azure Virtual Desktop using the Azure portal and Azure PowerShell.
virtual-desktop Set Up Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-mfa.md
Last updated 10/27/2023
+ # Enforce Microsoft Entra multifactor authentication for Azure Virtual Desktop using Conditional Access > [!IMPORTANT]
Here's how to create a Conditional Access policy that requires multifactor authe
> [!TIP] > If you're using Azure Virtual Desktop (classic) and if the Conditional Access policy blocks all access excluding Azure Virtual Desktop app IDs, you can fix this by also adding the **Azure Virtual Desktop** (app ID 9cdead84-a844-4324-93f2-b2e6bb768d07) to the policy. Not adding this app ID will block feed discovery of Azure Virtual Desktop (classic) resources.
+ > [!TIP]
+ > When searching for an application in the Azure portal, enter the beginning of the application name in order, rather than keywords that appear elsewhere in the name. For example, to find Azure Virtual Desktop, enter `Azure Virtual` in that order. If you enter `virtual` by itself, the search won't return the desired application.
+ > [!IMPORTANT] > Don't select the app called Azure Virtual Desktop Azure Resource Manager Provider (app ID 50e95039-b200-4007-bc97-8d5790743a63). This app is only used for retrieving the user feed and shouldn't have multifactor authentication.
virtual-machines Automatic Extension Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-extension-upgrade.md
Automatic Extension Upgrade supports the following extensions (and more are adde
- [Azure Monitor Agent](../azure-monitor/agents/azure-monitor-agent-overview.md) - [Log Analytics Agent for Linux](../azure-monitor/agents/log-analytics-agent.md) - [Azure Diagnostics extension for Linux](../azure-monitor/agents/diagnostics-extension-overview.md)-
+- Service Fabric – [Linux](../service-fabric/service-fabric-tutorial-create-vnet-and-linux-cluster.md#service-fabric-extension)
## Enabling Automatic Extension Upgrade
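One way to do this from Azure PowerShell is sketched below; the resource names, publisher, and extension type are placeholders, so substitute the extension you actually use.

```powershell
# Sketch: deploy an extension with automatic upgrade enabled.
# The VM, publisher, and extension names are placeholders for illustration.
Set-AzVMExtension -ResourceGroupName "my-resource-group" `
    -VMName "my-vm" `
    -Name "AzureMonitorLinuxAgent" `
    -Publisher "Microsoft.Azure.Monitor" `
    -ExtensionType "AzureMonitorLinuxAgent" `
    -TypeHandlerVersion "1.0" `
    -Location "eastus" `
    -EnableAutomaticUpgrade $true
```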
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Ensure that your VM has access to IP address 168.63.129.16. For more information
## Installation
-The preferred method of installing and upgrading the Azure Linux VM Agent uses an RPM or a DEB package from your distribution's package repository. All the [endorsed distribution providers](../linux/endorsed-distros.md) integrate the Azure Linux VM Agent package into their images and repositories.
+The supported method of installing and upgrading the Azure Linux VM Agent uses an RPM or a DEB package from your distribution's package repository. All the [endorsed distribution providers](../linux/endorsed-distros.md) integrate the Azure Linux VM Agent package into their images and repositories.
+Some Linux distributions might disable the Azure Linux VM Agent **Auto Update** feature, and some repositories might contain older versions that can have issues with modern extensions, so we recommend installing the latest stable version.
+To make sure the Azure Linux VM Agent updates properly, keep `AutoUpdate.Enabled=Y` in the `/etc/waagent.conf` file, or comment out that option to fall back to the same default behavior. Setting `AutoUpdate.Enabled=N` prevents the Azure Linux VM Agent from updating properly.
-For advanced installation options, such as installing from a source or to custom locations or prefixes, see [Microsoft Azure Linux VM Agent](https://github.com/Azure/WALinuxAgent).
+For advanced installation options, such as installing from source or to custom locations or prefixes, see [Microsoft Azure Linux VM Agent](https://github.com/Azure/WALinuxAgent). Outside of these scenarios, we don't support or recommend upgrading or reinstalling the Azure Linux VM Agent from source.
## Command-line options
virtual-machines Sizes Gpu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes-gpu.md
GPU optimized VM sizes are specialized virtual machines available with single, m
- The [NCv3-series](ncv3-series.md) and [NC T4_v3-series](nct4-v3-series.md) sizes are optimized for compute-intensive GPU-accelerated applications. Some examples are CUDA and OpenCL-based applications and simulations, AI, and Deep Learning. The NC T4 v3-series is focused on inference workloads featuring NVIDIA's Tesla T4 GPU and AMD EPYC2 Rome processor. The NCv3-series is focused on high-performance computing and AI workloads featuring NVIDIA's Tesla V100 GPU. -- The [ND A100 v4-series](nda100-v4-series.md) size is focused on scale-up and scale-out deep learning training and accelerated HPC applications. The ND A100 v4-series uses 8 NVIDIA A100 TensorCore GPUs, each available with a 200 Gigabit Mellanox InfiniBand HDR connection and 40 GB of GPU memory.
+- The [NC A100 v4-series](nc-a100-v4-series.md) sizes are focused on midrange AI training and batch inference workloads. The NC A100 v4-series offers flexibility to select one, two, or four NVIDIA A100 80GB PCIe Tensor Core GPUs per VM to use the right-size GPU acceleration for your workload.
-- [NGads V620-series)](ngads-v-620-series.md) VM sizes are optimized for high performance, interactive gaming experiences hosted in Azure. They're powered by AMD Radeon PRO V620 GPUs and AMD EPYC 7763 (Milan) CPUs.
+- The [ND A100 v4-series](nda100-v4-series.md) sizes are focused on scale-up and scale-out deep learning training and accelerated HPC applications. The ND A100 v4-series uses 8 NVIDIA A100 TensorCore GPUs, each available with a 200 Gigabit Mellanox InfiniBand HDR connection and 40 GB of GPU memory.
+
+- [NGads V620-series](ngads-v-620-series.md) VM sizes are optimized for high performance, interactive gaming experiences hosted in Azure. They're powered by AMD Radeon PRO V620 GPUs and AMD EPYC 7763 (Milan) CPUs.
- [NV-series](nv-series.md) and [NVv3-series](nvv3-series.md) sizes are optimized and designed for remote visualization, streaming, gaming, encoding, and VDI scenarios using frameworks such as OpenGL and DirectX. These VMs are backed by the NVIDIA Tesla M60 GPU.
virtual-network-manager Common Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/common-issues.md
Previously updated : 3/22/2023 Last updated : 03/22/2023
virtual-network-manager Concept Connectivity Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-connectivity-configuration.md
Previously updated : 3/22/2023 Last updated : 03/22/2023
virtual-network-manager Concept Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-cross-tenant.md
Previously updated : 3/22/2023 Last updated : 03/22/2023 # Cross-tenant support in Azure Virtual Network Manager
-In this article, youΓÇÖll learn about cross-tenant support in Azure Virtual Network Manager. Cross-tenant supports allows organizations to use a central Network Manager instance for managing virtual networks across different tenants and subscriptions.
+In this article, you learn about cross-tenant support in Azure Virtual Network Manager. Cross-tenant support allows organizations to use a central Network Manager instance for managing virtual networks across different tenants and subscriptions.
[!INCLUDE [virtual-network-manager-preview](../../includes/virtual-network-manager-preview.md)] ## Overview of Cross-tenant
-Cross-tenant support in Azure Virtual Network Manager allows you to add subscriptions or management groups from other tenants to your network manager. This is done by establishing a two-way connection between the network manager and target tenants. Once connected, the central manager can deploy connectivity and/or security admin rules to virtual networks across those connected subscriptions or management groups. This support will assist organizations that fit the following scenarios:
+Cross-tenant support in Azure Virtual Network Manager allows you to add subscriptions or management groups from other tenants to your network manager. This is done by establishing a two-way connection between the network manager and target tenants. Once connected, the central manager can deploy connectivity and/or security admin rules to virtual networks across those connected subscriptions or management groups. This support assists organizations that fit the following scenarios:
- Acquisitions – In instances where organizations merge through acquisition and have multiple tenants, cross-tenant support allows a central network manager to manage virtual networks across the tenants.
-- Managed service provider – In managed service provider scenarios, an organization may manage the resources of other organizations. Cross-tenant support will allow central management of virtual networks by a central service provider for multiple clients.
+- Managed service provider – In managed service provider scenarios, an organization can manage the resources of other organizations. Cross-tenant support allows central management of virtual networks by a central service provider for multiple clients.
## Cross-tenant connection
Establishing cross-tenant support begins with creating a cross-tenant connection
- Network manager connection - You create a cross-tenant connection from your network manager. The connection includes the exact scope of the tenant's subscriptions or management groups to manage in your network manager.
- Virtual network manager hub connection - The tenant creates a cross-tenant connection from their virtual network manager hub. This connection includes the scope of subscriptions or management groups to be managed by the central network manager.
-Once both cross-tenant connections exist and the scopes are exactly the same, a true connection is established. Administrators can use their network manager to add cross-tenant resources to their [network groups](concept-network-groups.md) and to manage virtual networks included in the connection scope. Existing connectivity and/or security admin rules will be applied to the resources based on existing configurations.
+Once both cross-tenant connections exist and the scopes are exactly the same, a true connection is established. Administrators can use their network manager to add cross-tenant resources to their [network groups](concept-network-groups.md) and to manage virtual networks included in the connection scope. Existing connectivity and/or security admin rules are applied to the resources based on existing configurations.
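As a hedged Azure PowerShell sketch of what this pairing of connection objects might look like, the block below creates one connection from each side. The cmdlet names, parameters, resource names, and IDs shown here are assumptions; verify them against your installed Az.Network version before using them.

```powershell-interactive
# Network manager side: create a scope connection that targets the other tenant's subscription.
# Cmdlet and parameter names are assumptions; confirm them in your Az.Network module version.
New-AzNetworkManagerScopeConnection `
    -ResourceGroupName "rg-central" `
    -NetworkManagerName "nm-central" `
    -Name "conn-to-contoso" `
    -TenantId "<TARGET_TENANT_ID>" `
    -ResourceId "/subscriptions/<TARGET_SUBSCRIPTION_ID>"

# Tenant side: create the matching connection that points back at the central network manager.
# Cmdlet name is an assumption; the scope must match the scope connection above exactly.
New-AzNetworkManagerSubscriptionConnection `
    -Name "conn-to-central-nm" `
    -NetworkManagerId "/subscriptions/<CENTRAL_SUBSCRIPTION_ID>/resourceGroups/rg-central/providers/Microsoft.Network/networkManagers/nm-central"
```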
-A cross-tenant connection can only be established and maintained when both objects from each party exist. When one of the connections is removed, the cross-tenant connection is broken. If you need to delete a cross-tenant connection, you'll perform the following:
+A cross-tenant connection can only be established and maintained when both objects from each party exist. When one of the connections is removed, the cross-tenant connection is broken. If you need to delete a cross-tenant connection, you perform the following:
-- Remove cross-tenant connection from the network manager side via Cross-tenant connections blade.
-- Remove cross-tenant connection from the tenant side via Virtual network manager hub's Cross-tenant connections blade.
+- Remove cross-tenant connection from the network manager side via Cross-tenant connections settings in the Azure portal.
+- Remove cross-tenant connection from the tenant side via Virtual network manager hub's Cross-tenant connections settings in the Azure portal.
> [!NOTE] > Once a connection is removed from either side, the network manager will no longer be able to view or manage the tenant's resources under that former connection's scope.
A cross-tenant connection can only be established and maintained when both objec
The resources required to create the cross-tenant connection contain a state, which represents whether the associated scope has been added to the Network Manager scope. Possible state values include: * Connected: Both the Scope Connection and Network Manager Connection resources exist. The scope has been added to the Network Manager's scope.
-* Pending: One of the two approval resources has not been created. The scope has not yet been added to the Network Manager's scope.
-* Conflict: There is already a network manager with this subscription or management group defined within its scope. Two network managers with the same scope access cannot directly manage the same scope, therefore this subscription/management group cannot be added to the Network Manager scope. To resolve the conflict, remove the scope from the conflicting network manager's scope and recreate the connection resource.
+* Pending: One of the two approval resources hasn't been created. The scope hasn't yet been added to the Network Manager's scope.
+* Conflict: There's already a network manager with this subscription or management group defined within its scope. Two network managers with the same scope access can't directly manage the same scope, therefore this subscription/management group can't be added to the Network Manager scope. To resolve the conflict, remove the scope from the conflicting network manager's scope and recreate the connection resource.
* Revoked: The scope was at one time added to the Network Manager scope, but the removal of an approval resource has caused it to be revoked. The only state that indicates the scope has been added to the Network Manager scope is 'Connected'.
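A short hedged sketch of checking that state from the network manager side follows; the cmdlet name, parameter names, and the `ConnectionState` property are assumptions to verify against your installed Az.Network version, and the resource names are placeholders.

```powershell-interactive
# Read back an existing scope connection and inspect its state.
# Cmdlet and property names are assumptions; verify them in your Az.Network module version.
$connection = Get-AzNetworkManagerScopeConnection `
    -ResourceGroupName "rg-central" `
    -NetworkManagerName "nm-central" `
    -Name "conn-to-contoso"

# Expect 'Connected' once both sides exist with exactly matching scopes.
$connection.ConnectionState
```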
virtual-network-manager Concept Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-deployments.md
Previously updated : 3/22/2023 Last updated : 03/22/2023
virtual-network-manager Concept Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-enforcement.md
Previously updated : 3/22/2023 Last updated : 03/22/2023 # Virtual network enforcement with security admin rules in Azure Virtual Network Manager
virtual-network-manager Concept Network Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-network-groups.md
Previously updated : 3/22/2023 Last updated : 03/23/2023
virtual-network-manager Concept Network Manager Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-network-manager-scope.md
Previously updated : 3/22/2023 Last updated : 03/22/2023
virtual-network-manager Create Virtual Network Manager Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-terraform.md
Previously updated : 6/7/2023 Last updated : 06/07/2023 content_well_notification: - AI-contribution zone_pivot_groups: azure-virtual-network-manager-quickstart-options
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **PowerPlatformInfra** | This tag represents the IP addresses used by the infrastructure to host Power Platform services. | Both | Yes | Yes | | **PowerPlatformPlex** | This tag represents the IP addresses used by the infrastructure to host Power Platform extension execution on behalf of the customer. | Inbound | Yes | Yes | | **PowerQueryOnline** | Power Query Online. | Both | No | Yes |
+| **Scuba** | Data connectors for Microsoft security products (Sentinel, Defender, and so on). | Inbound | No | No |
+| **SerialConsole** | Limit access to boot diagnostics storage accounts to only the Serial Console service tag. | Inbound | No | Yes |
| **ServiceBus** | Azure Service Bus traffic that uses the Premium service tier. | Outbound | Yes | Yes | | **ServiceFabric** | Azure Service Fabric.<br/><br/>**Note**: This tag represents the Service Fabric service endpoint for control plane per region. This enables customers to perform management operations for their Service Fabric clusters from their VNET endpoint. (For example, https:// westus.servicefabric.azure.com). | Both | No | Yes | | **Sql** | Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Database for MariaDB, and Azure Synapse Analytics.<br/><br/>**Note**: This tag represents the service, but not specific instances of the service. For example, the tag represents the Azure SQL Database service, but not a specific SQL database or server. This tag doesn't apply to SQL managed instance. | Outbound | Yes | Yes |
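Service tags like these are most commonly consumed as the source or destination prefix of a network security group rule. The following Azure PowerShell sketch uses the `Sql` tag as an outbound destination; the resource names, location, and port are placeholders rather than values from the original article.

```powershell-interactive
# Allow outbound SQL traffic by using the 'Sql' service tag as the destination prefix.
$rule = New-AzNetworkSecurityRuleConfig `
    -Name "Allow-Sql-Outbound" `
    -Access Allow `
    -Protocol Tcp `
    -Direction Outbound `
    -Priority 100 `
    -SourceAddressPrefix "VirtualNetwork" `
    -SourcePortRange "*" `
    -DestinationAddressPrefix "Sql" `
    -DestinationPortRange "1433"

# Create a network security group that carries the rule (names and location are placeholders).
New-AzNetworkSecurityGroup -ResourceGroupName "rg-network" -Location "eastus" `
    -Name "nsg-app" -SecurityRules $rule
```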
virtual-wan About Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing.md
Previously updated : 06/30/2023 Last updated : 01/09/2024
The following sections describe the key concepts in virtual hub routing.
### <a name="hub-route"></a>Hub route table
-A virtual hub route table can contain one or more routes. A route includes its name, a label, a destination type, a list of destination prefixes, and next hop information for a packet to be routed. A **Connection** typically will have a routing configuration that associates or propagates to a route table.
+A virtual hub route table can contain one or more routes. A route includes its name, a label, a destination type, a list of destination prefixes, and next hop information for a packet to be routed. A **Connection** typically has a routing configuration that associates or propagates to a route table.
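As a hedged Azure PowerShell sketch of defining a static route and a custom hub route table, consider the block below. The cmdlet and parameter names should be checked against your installed Az.Network version, and the resource IDs and names are placeholders.

```powershell-interactive
# Define a static route that sends 10.2.0.0/16 to a next-hop resource, such as an NVA connection.
# Cmdlet and parameter names are assumptions; verify them in your Az.Network module version.
$route = New-AzVHubRoute `
    -Name "route-to-nva" `
    -Destination @("10.2.0.0/16") `
    -DestinationType "CIDR" `
    -NextHop "<NEXT_HOP_RESOURCE_ID>" `
    -NextHopType "ResourceId"

# Create a custom route table on the hub that carries the route and a label.
New-AzVHubRouteTable `
    -ResourceGroupName "rg-vwan" `
    -VirtualHubName "hub-eastus" `
    -Name "rt-custom" `
    -Route @($route) `
    -Label @("custom")
```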
### <a name="hub-route"></a> Hub routing intent and policies
-Routing Intent and Routing policies allow you to configure your Virtual WAN hub to send Internet-bound and Private (Point-to-site, Site-to-site, ExpressRoute, Network Virtual Appliances inside the Virtual WAN Hub and Virtual Network) Traffic via an Azure Firewall, Next-Generation Firewall NVA or software-as-a-service solution deployed in the Virtual WAN hub. There are two types of Routing Policies: Internet Traffic and Private Traffic Routing Policies. Each Virtual WAN Hub may have at most one Internet Traffic Routing Policy and one Private Traffic Routing Policy, each with a Next Hop resource.
+Routing Intent and Routing policies allow you to configure your Virtual WAN hub to send Internet-bound and Private (Point-to-site, Site-to-site, ExpressRoute, Network Virtual Appliances inside the Virtual WAN Hub and Virtual Network) Traffic via an Azure Firewall, Next-Generation Firewall NVA or software-as-a-service solution deployed in the Virtual WAN hub. There are two types of Routing Policies: Internet Traffic and Private Traffic Routing Policies. Each Virtual WAN Hub can have, at most, one Internet Traffic Routing Policy and one Private Traffic Routing Policy, each with a Next Hop resource.
While Private Traffic includes both branch and Virtual Network address prefixes, Routing Policies consider them as one entity within the Routing Intent concepts.
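A hedged sketch of configuring both policies against an Azure Firewall in the hub follows. The cmdlet names (`New-AzRoutingPolicy`, `New-AzRoutingIntent`), their parameters, and the `Internet`/`PrivateTraffic` destination values are assumptions to verify against your installed Az.Network version; the resource IDs and names are placeholders.

```powershell-interactive
# Send Internet-bound and private traffic to the hub's Azure Firewall via routing intent.
# Cmdlet names, parameters, and destination values are assumptions; verify against your Az.Network version.
$firewallId = "<AZURE_FIREWALL_RESOURCE_ID>"

$internetPolicy = New-AzRoutingPolicy -Name "InternetTraffic" -Destination @("Internet") -NextHop $firewallId
$privatePolicy  = New-AzRoutingPolicy -Name "PrivateTraffic" -Destination @("PrivateTraffic") -NextHop $firewallId

New-AzRoutingIntent `
    -ResourceGroupName "rg-vwan" `
    -VirtualHubName "hub-eastus" `
    -Name "hub-routing-intent" `
    -RoutingPolicy @($internetPolicy, $privatePolicy)
```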
You can set up the routing configuration for a virtual network connection during
### <a name="association"></a>Association
-Each connection is associated to one route table. Associating a connection to a route table allows the traffic (from that connection) to be sent to the destination indicated as routes in the route table. The routing configuration of the connection will show the associated route table. Multiple connections can be associated to the same route table. All VPN, ExpressRoute, and User VPN connections are associated to the same (default) route table.
+Each connection is associated to one route table. Associating a connection to a route table allows the traffic (from that connection) to be sent to the destination indicated as routes in the route table. The routing configuration of the connection shows the associated route table. Multiple connections can be associated to the same route table. All VPN, ExpressRoute, and User VPN connections are associated to the same (default) route table.
By default, all connections are associated to a **Default route table** in a virtual hub. Each virtual hub has its own Default route table, which can be edited to add one or more static routes. Routes added statically take precedence over dynamically learned routes for the same prefixes.
Consider the following when configuring Virtual WAN routing:
* All branch connections (Point-to-site, Site-to-site, and ExpressRoute) need to be associated to the Default route table. That way, all branches will learn the same prefixes.
* All branch connections need to propagate their routes to the same set of route tables. For example, if you decide that branches should propagate to the Default route table, this configuration should be consistent across all branches. As a result, all connections associated to the Default route table will be able to reach all of the branches.
* When you use Azure Firewall in multiple regions, all spoke virtual networks must be associated to the same route table. For example, having a subset of the VNets going through the Azure Firewall while other VNets bypass the Azure Firewall in the same virtual hub isn't possible.
-* You may specify multiple next hop IP addresses on a single Virtual Network connection. However, Virtual Network Connection doesn't support 'multiple/unique' next hop IP to the 'same' network virtual appliance in a SPOKE Virtual Network 'if' one of the routes with next hop IP is indicated to be public IP address or 0.0.0.0/0 (internet)
+* You can specify multiple next hop IP addresses on a single Virtual Network connection. However, a Virtual Network connection doesn't support multiple unique next hop IPs to the same network virtual appliance in a spoke Virtual Network if one of the routes with a next hop IP specifies a public IP address or 0.0.0.0/0 (internet).
* All information pertaining to the 0.0.0.0/0 route is confined to a local hub's route table. This route doesn't propagate across hubs.
* You can only use Virtual WAN to program routes in a spoke if the prefix is shorter (less specific) than the virtual network prefix. For example, in the diagram above the spoke VNET1 has the prefix 10.1.0.0/16: in this case, Virtual WAN wouldn't be able to inject a route that matches the virtual network prefix (10.1.0.0/16) or any of the subnets (10.1.0.0/24, 10.1.1.0/24). In other words, Virtual WAN can't attract traffic between two subnets that are in the same virtual network.
-* While true that 2 hubs on the same virtual WAN will announce routes to each other (as long as the propagation is enabled to the same labels) this only applies to dynamic routing. Once you define a static route, this is not the case.
+* While it's true that 2 hubs on the same virtual WAN will announce routes to each other (as long as the propagation is enabled to the same labels), this only applies to dynamic routing. Once you define a static route, this isn't the case.
## Next steps
virtual-wan Route Maps Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/route-maps-dashboard.md
The following steps walk you through how to navigate to the Route Map dashboard.
1. Go to the **Azure portal -> your Virtual WAN**.
1. On your Virtual WAN, in the left pane, under Connectivity, select **Hubs**.
1. On the hubs page, you can see the hubs that are connected to your Virtual WAN. Select the hub that you want to view.
-1. In the left pane, under Routing, select **Route-Maps** to open the **Route Map Dashboard**.
+1. In the left pane, under Routing, select **Route-Maps**.
+1. Select **Route Map Dashboard** from the Settings section to open the **Route Map Dashboard**.
:::image type="content" source="./media/route-maps-dashboard/dashboard-view.png" alt-text="Screenshot shows the Route Map dashboard page." lightbox="./media/route-maps-dashboard/dashboard-view.png":::
In this example, you can use the Route Map Dashboard to view the routes on **Con
## Next steps * [Configure Route Maps](route-maps-how-to.md)
-* [About Route Maps](route-maps-about.md)
+* [About Route Maps](route-maps-about.md)
virtual-wan Virtual Wan About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-about.md
Previously updated : 06/30/2023 Last updated : 01/09/2024 # Customer intent: As someone with a networking background, I want to understand what Virtual WAN is and if it is the right choice for my Azure network.
Azure Virtual WAN is a networking service that brings many networking, security,
You don't have to have all of these use cases to start using Virtual WAN. You can get started with just one use case, and then adjust your network as it evolves.
-The Virtual WAN architecture is a hub and spoke architecture with scale and performance built in for branches (VPN/SD-WAN devices), users (Azure VPN/OpenVPN/IKEv2 clients), ExpressRoute circuits, and virtual networks. It enables a [global transit network architecture](virtual-wan-global-transit-network-architecture.md), where the cloud hosted network 'hub' enables transitive connectivity between endpoints that may be distributed across different types of 'spokes'.
+The Virtual WAN architecture is a hub and spoke architecture with scale and performance built in for branches (VPN/SD-WAN devices), users (Azure VPN/OpenVPN/IKEv2 clients), ExpressRoute circuits, and virtual networks. It enables a [global transit network architecture](virtual-wan-global-transit-network-architecture.md), where the cloud hosted network 'hub' enables transitive connectivity between endpoints that might be distributed across different types of 'spokes'.
Azure regions serve as hubs that you can choose to connect to. All hubs are connected in full mesh in a Standard Virtual WAN, making it easy for the user to use the Microsoft backbone for any-to-any (any spoke) connectivity.
virtual-wan Virtual Wan Global Transit Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-global-transit-network-architecture.md
Azure Virtual WAN supports the following global transit connectivity paths. The
* Branch-to-VNet (a) * Branch-to-branch (b)
- * ExpressRoute Global Reach and Virtual WAN
+* ExpressRoute Global Reach and Virtual WAN
* Remote User-to-VNet (c)
* Remote User-to-branch (d)
* VNet-to-VNet (e)
For more information on deploying and orchestrating Next-Generation Firewall Net
Virtual WAN supports the following global secured transit connectivity paths. While the diagram and traffic patterns in this section describe Azure Firewall use cases, the same traffic patterns are supported with Network Virtual Appliances and SaaS security solutions deployed in the hub. The letters in parentheses map to Figure 5.
+* Branch-to-VNet secure transit (c)
+* Branch-to-VNet secure transit across Virtual hubs (g), supported with [Routing Intent](../virtual-wan/how-to-routing-policies.md)
* VNet-to-VNet secure transit (e)
+* VNet-to-VNet secure transit across Virtual Hubs (h), supported with [Routing Intent](../virtual-wan/how-to-routing-policies.md)
+* Branch-to-Branch secure transit (b), supported with [Routing Intent](../virtual-wan/how-to-routing-policies.md)
+* Branch-to-Branch secure transit across Virtual Hubs (f), supported with [Routing Intent](../virtual-wan/how-to-routing-policies.md)
* VNet-to-Internet or third-party Security Service (i)
* Branch-to-Internet or third-party Security Service (j)
-### VNet-to-VNet secured transit (e)
+### VNet-to-VNet secured transit (e), VNet-to-VNet secure transit cross-region (h)
-The VNet-to-VNet secured transit enables VNets to connect to each other via the Azure Firewall in the Virtual WAN hub.
+The VNet-to-VNet secured transit enables VNets to connect to each other via security appliances (Azure Firewall, select NVA and SaaS) deployed in the Virtual WAN hub.
### VNet-to-Internet or third-party Security Service (i)
-The VNet-to-Internet enables VNets to connect to the internet via the Azure Firewall in the virtual WAN hub. Traffic to internet via supported third-party security services doesn't flow through the Azure Firewall. You can configure Vnet-to-Internet path via supported third-party security service using Azure Firewall Manager.
+The VNet-to-Internet path enables VNets to connect to the internet via security appliances (Azure Firewall, select NVA and SaaS) in the virtual WAN hub. Traffic to the internet via supported third-party security services doesn't flow through a security appliance and is routed straight to the third-party security service. You can configure the VNet-to-Internet path via a supported third-party security service by using Azure Firewall Manager.
### Branch-to-Internet or third-party Security Service (j)
-The Branch-to-Internet enables branches to connect to the internet via the Azure Firewall in the virtual WAN hub. Traffic to internet via supported third-party security services doesn't flow through the Azure Firewall. You can configure Branch-to-Internet path via supported third-party security service using Azure Firewall Manager.
+The Branch-to-Internet path enables branches to connect to the internet via the Azure Firewall in the virtual WAN hub. Traffic to the internet via supported third-party security services doesn't flow through a security appliance and is routed straight to the third-party security service. You can configure the Branch-to-Internet path via a supported third-party security service by using Azure Firewall Manager.
-### Branch-to-branch secured transit cross-region (f)
+### Branch-to-branch secured transit, Branch-to-branch secured transit cross-region (b), (f)
-Branches can be connected to a secured virtual hub with Azure Firewall using ExpressRoute circuits and/or site-to-site VPN connections. You can connect the branches to the virtual WAN hub that is in the region closest to the branch.
+Branches can be connected to a secured virtual hub with Azure Firewall using ExpressRoute circuits and/or site-to-site VPN connections. You can connect the branches to the virtual WAN hub that is in the region closest to the branch. Configuring [Routing Intent](../virtual-wan/how-to-routing-policies.md) on Virtual WAN hubs allows for branch-to-branch same hub or branch-to-branch inter-hub/inter-region inspection by security appliances (Azure Firewall, select NVA and SaaS) deployed in the Virtual WAN Hub.
This option lets enterprises leverage the Azure backbone to connect branches. However, even though this capability is available, you should weigh the benefits of connecting branches over Azure Virtual WAN vs. using a private WAN.
-### Branch-to-VNet secured transit (g)
+### Branch-to-VNet secured transit (c), Branch-to-VNet secured transit cross-region (g)
-The Branch-to-VNet secured transit enables branches to communicate with virtual networks in the same region as the virtual WAN hub as well as another virtual network connected to another virtual WAN hub in another region.
+The Branch-to-VNet secured transit enables branches to communicate with virtual networks in the same region as the virtual WAN hub as well as another virtual network connected to another virtual WAN hub in another region (inter-hub traffic inspection supported only with [Routing Intent](../virtual-wan/how-to-routing-policies.md)).
### How do I enable default route (0.0.0.0/0) in a Secured Virtual Hub