Updates from: 03/30/2024 02:07:41
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/access-tokens.md
grant_type=authorization_code
&client_secret=2hMG2-_:y12n10vwH... ```
-If you want to test this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview) or [Postman](https://www.postman.com/).
+If you want to test this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview).
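For a quick test outside PowerShell, a minimal Python sketch using the `requests` library is shown below; every endpoint and credential value is a placeholder, not a value from the article:

```python
import requests

# Placeholders only: substitute your own tenant, user flow (policy), app ID, and secret.
token_url = (
    "https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/"
    "<policy-name>/oauth2/v2.0/token"
)

payload = {
    "grant_type": "authorization_code",
    "client_id": "<application-id>",
    "scope": "<api-scope> offline_access",
    "code": "<authorization-code-from-redirect>",
    "redirect_uri": "https://jwt.ms",
    "client_secret": "<client-secret>",
}

# The token endpoint expects form-encoded data, which `data=` sends by default.
response = requests.post(token_url, data=payload, timeout=30)
print(response.status_code)
print(response.json())
```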
A successful token response looks like this:
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/authorization-code-flow.md
error=access_denied
| state |See the full description in the preceding table. If a `state` parameter is included in the request, the same value should appear in the response. The app should verify that the `state` values in the request and response are identical. |
## 2. Get an access token
-Now that you've acquired an authorization code, you can redeem the `code` for a token to the intended resource by sending a POST request to the `/token` endpoint. In Azure AD B2C, you can [request access tokens for other API's](access-tokens.md#request-a-token) as usual by specifying their scope(s) in the request.
+Now that you've acquired an authorization code, you can redeem the `code` for a token to the intended resource by sending a POST request to the `/token` endpoint. In Azure AD B2C, you can [request access tokens for other APIs](access-tokens.md#request-a-token) as usual by specifying their scope(s) in the request.
You can also request an access token for your app's own back-end web API by convention, using the app's client ID as the requested scope (which results in an access token with that client ID as the "audience"):
grant_type=authorization_code
| redirect_uri |Required |The redirect URI of the application where you received the authorization code. | | code_verifier | recommended | The same `code_verifier` used to obtain the authorization code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
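As an aside (not part of the article), one way to generate a PKCE `code_verifier` and its matching S256 `code_challenge` in Python:

```python
import base64
import hashlib
import secrets

# Roughly 86 URL-safe characters, within the 43-128 range RFC 7636 allows.
code_verifier = secrets.token_urlsafe(64)

# S256 challenge: base64url(SHA-256(verifier)) with the trailing '=' padding removed.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Send code_challenge (with code_challenge_method=S256) in the /authorize request,
# then send code_verifier in the /token request described here.
print(code_verifier)
print(code_challenge)
```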
-If you're testing this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview) or [Postman](https://www.postman.com/).
+If you're testing this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview).
A successful token response looks like this:
active-directory-b2c Custom Policies Series Call Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-call-rest-api.md
You need to deploy an app, which serves as your external app. Your custom policy
1. To test that the app works as expected, use the following steps:
   1. In your terminal, run the `node index.js` command to start your app server.
- 1. To make a POST request similar to the one shown in this example, you can use an HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview) or [Postman](https://www.postman.com/):
+ 1. To make a POST request similar to the one shown in this example, you can use an HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview).
```http POST http://localhost/validate-accesscode HTTP/1.1
At this point, you're ready to deploy your Node.js app.
### Step 1.2 - Deploy the Node.js app in Azure App Service
-For your custom policy to reach your Node.js app, it needs to be reachable, so, you need deploy it. In this article, you deploy the app by using [Azure App Service](../app-service/overview-vnet-integration.md), but you use an alternative hosting approach.
+For your custom policy to reach your Node.js app, it needs to be reachable, so you need to deploy it. In this article, you deploy the app by using [Azure App Service](../app-service/overview-vnet-integration.md), but you can use an alternative hosting approach.
Follow the steps in [Deploy your app to Azure](../app-service/quickstart-nodejs.md#deploy-to-azure) to deploy your Node.js app to Azure. For the **Name** of the app, use a descriptive name such as `custompolicyapi`. Hence:
Follow the steps in [Deploy your app to Azure](../app-service/quickstart-nodejs.
- Service endpoint looks similar to `https://custompolicyapi.azurewebsites.net/validate-accesscode`.
-You can test the app you've deployed by using an HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview) or [Postman](https://www.postman.com/). This time, use `https://custompolicyapi.azurewebsites.net/validate-accesscode` URL as the endpoint.
+You can test the app you've deployed by using an HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview). This time, use the `https://custompolicyapi.azurewebsites.net/validate-accesscode` URL as the endpoint.
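For example, a minimal Python sketch of such a test request; the JSON body shape is an assumption based on the endpoint name, so match whatever your `index.js` actually expects:

```python
import requests

# Swap in http://localhost/validate-accesscode to test the app locally instead.
endpoint = "https://custompolicyapi.azurewebsites.net/validate-accesscode"

# Hypothetical body: a single accessCode field to validate.
response = requests.post(endpoint, json={"accessCode": "88888"}, timeout=30)
print(response.status_code, response.text)
```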
## Step 2 - Call the REST API
active-directory-b2c Secure Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-api-management.md
Before you begin, make sure that you have the following resources in place:
* An [application that's registered in your tenant](tutorial-register-applications.md) * [User flows that are created in your tenant](tutorial-create-user-flows.md) * A [published API](../api-management/import-and-publish.md) in Azure API Management
-* (Optional) A [Postman platform](https://www.postman.com/) to test secured access
## Get Azure AD B2C application ID
You're now ready to add the inbound policy in Azure API Management that validate
<on-error> <base /> </on-error> </policies> ```-
-## Validate secure API access
-
-To ensure that only authenticated callers can access your API, you can validate your Azure API Management configuration by calling the API with [Postman](https://www.postman.com/).
-
-To call the API, you need both an access token that's issued by Azure AD B2C and an Azure API Management subscription key.
-
-### Get an access token
-
-You first need a token that's issued by Azure AD B2C to use in the `Authorization` header in Postman. You can get one by using the *Run now* feature of the sign-up/sign-in user flow that you created as one of the prerequisites.
-
-1. In the [Azure portal](https://portal.azure.com), go to your Azure AD B2C tenant.
-1. Under **Policies**, select **User flows**.
-1. Select an existing sign-up/sign-in user flow (for example, *B2C_1_signupsignin1*).
-1. For **Application**, select *webapp1*.
-1. For **Reply URL**, select `https://jwt.ms`.
-1. Select **Run user flow**.
-
- ![Screenshot of the "Run user flow" pane for the sign-up/sign-in user flow in the Azure portal.](media/secure-apim-with-b2c-token/portal-03-user-flow.png)
-
-1. Complete the sign-in process. You should be redirected to `https://jwt.ms`.
-1. Record the encoded token value that's displayed in your browser. You use this token value for the Authorization header in Postman.
-
- ![Screenshot of the encoded token value displayed on jwt.ms.](media/secure-apim-with-b2c-token/jwt-ms-01-token.png)
-
-### Get an API subscription key
-
-A client application (in this case, Postman) that calls a published API must include a valid API Management subscription key in its HTTP requests to the API. To get a subscription key to include in your Postman HTTP request:
-
-1. In the [Azure portal](https://portal.azure.com), go to your Azure API Management service instance.
-1. Select **Subscriptions**.
-1. Select the ellipsis (**...**) next to **Product: Unlimited**, and then select **Show/hide keys**.
-1. Record the **Primary Key** for the product. You use this key for the `Ocp-Apim-Subscription-Key` header in your HTTP request in Postman.
-
-![Screenshot of the "Subscription key" page in the Azure portal, with "Show/hide keys" selected.](media/secure-apim-with-b2c-token/portal-04-api-subscription-key.png)
-
-### Test a secure API call
-
-With the access token and Azure API Management subscription key recorded, you're now ready to test whether you've correctly configured secure access to the API.
-
-1. Create a new `GET` request in [Postman](https://www.postman.com/). For the request URL, specify the speakers list endpoint of the API you published as one of the prerequisites. For example:
-
- `https://contosoapim.azure-api.net/conference/speakers`
-
-1. Next, add the following headers:
-
- | Key | Value |
- | | -- |
- | `Authorization` | The encoded token value you recorded earlier, prefixed with `Bearer ` (include the space after "Bearer") |
- | `Ocp-Apim-Subscription-Key` | The Azure API Management subscription key you recorded earlier. |
- | | |
-
- Your **GET** request URL and **Headers** should appear similar to those shown in the following image:
-
- ![Screenshot of the Postman UI showing the GET request URL and headers.](media/secure-apim-with-b2c-token/postman-01-headers.png)
-
-1. In Postman, select the **Send** button to execute the request. If you've configured everything correctly, you should be given a JSON response with a collection of conference speakers (shown here, truncated):
-
- ```json
- {
- "collection": {
- "version": "1.0",
- "href": "https://conferenceapi.azurewebsites.net:443/speakers",
- "links": [],
- "items": [
- {
- "href": "https://conferenceapi.azurewebsites.net/speaker/1",
- "data": [
- {
- "name": "Name",
- "value": "Scott Guthrie"
- }
- ],
- "links": [
- {
- "rel": "http://tavis.net/rels/sessions",
- "href": "https://conferenceapi.azurewebsites.net/speaker/1/sessions"
- }
- ]
- },
- [...]
- ```
-
-### Test an insecure API call
-
-Now that you've made a successful request, test the failure case to ensure that calls to your API with an *invalid* token are rejected as expected. One way to perform the test is to add or change a few characters in the token value, and then run the same `GET` request as before.
-
-1. Add several characters to the token value to simulate an invalid token. For example, you could add "INVALID" to the token value, as shown here:
-
- ![Screenshot of the Headers section of Postman UI showing the string INVALID added to token.](media/secure-apim-with-b2c-token/postman-02-invalid-token.png)
-
-1. Select the **Send** button to execute the request. With an invalid token, the expected result is a `401` unauthorized status code:
-
- ```json
- {
- "statusCode": 401,
- "message": "Unauthorized. Access token is missing or invalid."
- }
- ```
-
-If you see a `401` status code, you've verified that only callers with a valid access token issued by Azure AD B2C can make successful requests to your Azure API Management API.
- ## Support multiple applications and issuers Several applications typically interact with a single REST API. To enable your API to accept tokens intended for multiple applications, add their application IDs to the `<audiences>` element in the Azure API Management inbound policy.
advisor Advisor Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-release-notes.md
description: A description of what's new and changed in Azure Advisor
Last updated 11/02/2023 + # What's new in Azure Advisor? Learn what's new in the service. These items might be release notes, videos, blog posts, and other types of information. Bookmark this page to stay up to date with the service.
+## March 2024
+
+### Well-Architected Framework (WAF) assessments & recommendations
+
+The Well-Architected Framework (WAF) assessment provides a curated view of a workload's architecture. Now you can take the WAF assessment and manage recommendations on Azure Advisor to improve resiliency, security, cost, operational excellence, and performance efficiency. As a part of this release, we're announcing two key WAF assessments - [Mission Critical | Well-Architected Review](/assessments/23513bdb-e8a2-4f0b-8b6b-191ee1f52d34/) and [Azure Well-Architected Review](/assessments/azure-architecture-review/).
+
+To get started, visit [Use Azure WAF assessments](/azure/advisor/advisor-assessments).
+ ## November 2023 ### ZRS recommendations for Azure Disks
ai-services Choose Model Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/choose-model-feature.md
::: moniker-end ::: moniker range="doc-intel-3.0.0"
-**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1 (preview)**](?view=doc-intel-3.1.0&preserve-view=true)
+**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1**](?view=doc-intel-3.1.0&preserve-view=true)
::: moniker-end Azure AI Document Intelligence supports a wide variety of models that enable you to add intelligent document processing to your applications and optimize your workflows. Selecting the right model is essential to ensure the success of your enterprise. In this article, we explore the available Document Intelligence models and provide guidance for how to choose the best solution for your projects.
ai-services Concept Custom Label Tips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-label-tips.md
monikerRange: '>=doc-intel-3.0.0'
::: moniker-end ::: moniker range="doc-intel-3.0.0"
-**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1 (preview)**](?view=doc-intel-3.1.0&preserve-view=true)
+**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1**](?view=doc-intel-3.1.0&preserve-view=true)
::: moniker-end This article highlights the best methods for labeling custom model datasets in the Document Intelligence Studio. Labeling documents can be time consuming when you have a large number of labels, long documents, or documents with varying structure. These tips should help you label documents more efficiently.
ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-neural.md
monikerRange: '>=doc-intel-3.0.0'
::: moniker-end ::: moniker range="doc-intel-3.0.0"
-**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1 (preview)**](?view=doc-intel-3.1.0&preserve-view=true)
+**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1**](?view=doc-intel-3.1.0&preserve-view=true)
::: moniker-end Custom neural document models or neural models are a deep learned model type that combines layout and language features to accurately extract labeled fields from documents. The base custom neural model is trained on various document types that makes it suitable to be trained for extracting fields from structured, semi-structured, and unstructured documents. Custom neural models are available in the [v3.0 and later models](v3-1-migration-guide.md) The table below lists common document types for each category:
ai-services Concept Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-document-intelligence-studio.md
monikerRange: '>=doc-intel-3.0.0'
::: moniker-end ::: moniker range="doc-intel-3.0.0"
-**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1 (preview)**](?view=doc-intel-3.1.0&preserve-view=true)
+**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1**](?view=doc-intel-3.1.0&preserve-view=true)
::: moniker-end [Document Intelligence Studio](https://documentintelligence.ai.azure.com/) is an online tool for visually exploring, understanding, and integrating features from the Document Intelligence service into your applications. Use the Document Intelligence Studio to:
ai-services Concept General Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-general-document.md
::: moniker-end ::: moniker range="doc-intel-3.0.0"
-**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1 (preview)**](?view=doc-intel-3.1.0&preserve-view=true)
+**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1**](?view=doc-intel-3.1.0&preserve-view=true)
::: moniker-end The General document model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to extract key-value pairs, tables, and selection marks from documents. General document is available with the v3.1 and v3.0 APIs. For more information, _see_ our [migration guide](v3-1-migration-guide.md).
ai-services Concept Health Insurance Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-health-insurance-card.md
monikerRange: 'doc-intel-4.0.0 || >=doc-intel-3.0.0'
::: moniker-end ::: moniker range="doc-intel-3.0.0"
-**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1 (preview)**](?view=doc-intel-3.1.0&preserve-view=true)
+**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1**](?view=doc-intel-3.1.0&preserve-view=true)
::: moniker-end The Document Intelligence health insurance card model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract key information from US health insurance cards. A health insurance card is a key document for care processing and can be digitally analyzed for patient onboarding, financial coverage information, cashless payments, and insurance claim processing. The health insurance card model analyzes health card images; extracts key information such as insurer, member, prescription, and group number; and returns a structured JSON representation. Health insurance cards can be presented in various formats and quality including phone-captured images, scanned documents, and digital PDFs.
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
::: moniker-end ::: moniker range="doc-intel-3.0.0"
-**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1 (preview)**](?view=doc-intel-3.1.0&preserve-view=true)
+**This content applies to:** ![checkmark](media/yes-icon.png) **v3.0 (GA)** | **Latest versions:** ![purple-checkmark](media/purple-yes-icon.png) [**v4.0 (preview)**](?view=doc-intel-4.0.0&preserve-view=true) ![purple-checkmark](media/purple-yes-icon.png) [**v3.1**](?view=doc-intel-3.1.0&preserve-view=true)
::: moniker-end > [!NOTE]
ai-services Use Sdk Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md
Title: "Use Document Intelligence client library SDKs or REST API "
+ Title: "Use Document Intelligence client library or REST API "
-description: Learn how to use Document Intelligence SDKs or REST API and create apps to extract key data from documents.
+description: Learn how to use Document Intelligence client libraries or REST API and create apps to extract key data from documents.
Previously updated : 08/21/2023 Last updated : 03/28/2024 zone_pivot_groups: programming-languages-set-formre
zone_pivot_groups: programming-languages-set-formre
# Use Document Intelligence models ::: moniker range="doc-intel-4.0.0" ::: moniker-end ::: moniker range="doc-intel-3.1.0"
zone_pivot_groups: programming-languages-set-formre
::: moniker-end ::: moniker range=">=doc-intel-3.0.0"
-In this guide, you learn how to add Document Intelligence models to your applications and workflows. Use a programming language SDK of your choice or the REST API.
+
+In this guide, learn how to add Document Intelligence models to your applications and workflows. Use a programming language SDK of your choice or the REST API.
Azure AI Document Intelligence is a cloud-based Azure AI service that uses machine learning to extract key text and structure elements from documents. We recommend that you use the free service while you learn the technology. Remember that the number of free pages is limited to 500 per month.
-Choose from the following Document Intelligence models to analyze and extract data and values from forms and documents:
+Choose from the following Document Intelligence models and analyze and extract data and values from forms and documents:
> [!div class="checklist"] >
-> - The [prebuilt-read](../concept-read.md) model is at the core of all Document Intelligence models and can detect lines, words, locations, and languages. Layout, general document, prebuilt, and custom models all use the read model as a foundation for extracting texts from documents.
+> - The [prebuilt-read](../concept-read.md) model is at the core of all Document Intelligence models and can detect lines, words, locations, and languages. Layout, general document, prebuilt, and custom models all use the `read` model as a foundation for extracting texts from documents.
> > - The [prebuilt-layout](../concept-layout.md) model extracts text and text locations, tables, selection marks, and structure information from documents and images. You can extract key/value pairs using the layout model with the optional query string parameter **`features=keyValuePairs`** enabled. >
Choose from the following Document Intelligence models to analyze and extract da
> > - The [prebuilt-healthInsuranceCard.us](../concept-health-insurance-card.md) model extracts key information from US health insurance cards. >
-> - The [prebuilt-tax.us.w2](../concept-tax-document.md) model extracts information reported on US Internal Revenue Service (IRS) tax forms.
->
-> - The [prebuilt-tax.us.1098](../concept-tax-document.md) model extracts information reported on US 1098 tax forms.
->
-> - The [prebuilt-tax.us.1098E](../concept-tax-document.md) model extracts information reported on US 1098-E tax forms.
->
-> - The [prebuilt-tax.us.1098T](../concept-tax-document.md) model extracts information reported on US 1098-T tax forms.
->
-> - The [prebuilt-tax.us.1099(variations)](../concept-tax-document.md) model extracts information reported on US 1099 tax forms.
+> - The [prebuilt tax document models](../concept-tax-document.md) extract information reported on US tax forms.
> > - The [prebuilt-invoice](../concept-invoice.md) model extracts key fields and line items from sales invoices in various formats and quality. Fields include phone-captured images, scanned documents, and digital PDFs. > > - The [prebuilt-receipt](../concept-receipt.md) model extracts key information from printed and handwritten sales receipts. >
-> - The [prebuilt-idDocument](../concept-id-document.md) model extracts key information from US drivers licenses, international passport biographical pages, US state IDs, social security cards, and permanent resident cards or *green cards*.
+> - The [prebuilt-idDocument](../concept-id-document.md) model extracts key information from US drivers licenses, international passport biographical pages, US state IDs, social security cards, and permanent resident cards.
+> [!div class="checklist"]
+>
+> - The [prebuilt-businessCard](../concept-business-card.md) model extracts key information and contact details from business card images.
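For illustration, a minimal sketch that calls one of these models through the `azure-ai-formrecognizer` Python package (v3 API); the endpoint, key, and document URL are placeholders:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-resource>.cognitiveservices.azure.com/"  # placeholder
key = "<your-key>"  # placeholder

client = DocumentAnalysisClient(endpoint, AzureKeyCredential(key))

# Analyze a publicly reachable document with prebuilt-read; swap in
# "prebuilt-layout", "prebuilt-invoice", and so on for the other models above.
poller = client.begin_analyze_document_from_url(
    "prebuilt-read",
    "https://example.com/sample-document.pdf",  # hypothetical URL
)
result = poller.result()

for page in result.pages:
    for line in page.lines:
        print(line.content)
```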
::: moniker-end ::: zone pivot="programming-language-csharp" + [!INCLUDE [C# SDK quickstart](includes/v3-0/csharp-sdk.md)] ::: moniker-end
Choose from the following Document Intelligence models to analyze and extract da
::: zone pivot="programming-language-java" + [!INCLUDE [Java SDK quickstart](includes/v3-0/java-sdk.md)] ::: moniker-end
Choose from the following Document Intelligence models to analyze and extract da
::: zone pivot="programming-language-javascript" + [!INCLUDE [NodeJS SDK quickstart](includes/v3-0/javascript-sdk.md)] ::: moniker-end
Choose from the following Document Intelligence models to analyze and extract da
::: zone pivot="programming-language-python" + [!INCLUDE [Python SDK quickstart](includes/v3-0/python-sdk.md)] ::: moniker-end
Choose from the following Document Intelligence models to analyze and extract da
::: zone pivot="programming-language-rest-api" + [!INCLUDE [REST API quickstart](includes/v3-0/rest-api.md)] ::: moniker-end
Choose from the following Document Intelligence models to analyze and extract da
## Next steps
-Congratulations! You've learned to use Document Intelligence models to analyze various documents in different ways. Next, explore the Document Intelligence Studio and reference documentation.
+Congratulations! You learned to use Document Intelligence models to analyze various documents in different ways. Next, explore the Document Intelligence Studio and reference documentation.
>[!div class="nextstepaction"]
-> [Try the Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
+> [Try the Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) [Explore the Document Intelligence REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)
-> [!div class="nextstepaction"]
-> [Explore the Document Intelligence REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)
::: moniker-end ::: moniker range="doc-intel-2.1.0"
ai-services Get Started Sdks Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api.md
In this quickstart, you used a document Intelligence model to analyze various fo
## Next steps
->[!div class="nextstepaction"]
-> [**For an enhanced experience and advanced model quality, try Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio) [**For v3.0 to v4.0 migration, see the Changelog Migration guides**](../changelog-release-history.md#march-2024-preview-release).
+* For an enhanced experience and advanced model quality, try [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio).
+
+* For v3.1 to v4.0 migration, see [**Changelog Migration guides**](../changelog-release-history.md#march-2024-preview-release).
::: moniker-end
ai-services Try Sample Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-sample-label-tool.md
Title: "Quickstart: Label forms, train a model, and analyze forms using the Sample Labeling tool - Document Intelligence (formerly Form Recognizer)"
-description: In this quickstart, you'll learn to use the Document Intelligence Sample Labeling tool to manually label documents. Then you'll train a custom document processing model with the labeled documents and use the model to extract key/value pairs.
+description: In this quickstart, learn to use the Document Intelligence Sample Labeling tool to manually label documents. Then train a custom document processing model with the labeled documents and use the model to extract key/value pairs.
- ignite-2023 Previously updated : 07/18/2023 Last updated : 03/28/2024 monikerRange: 'doc-intel-2.1.0'
The Azure AI Document Intelligence Sample Labeling tool is an open source tool t
## Prerequisites
-You'll need the following to get started:
+You need the following to get started:
* An Azure subscription: you can [create one for free](https://azure.microsoft.com/free/cognitive-services/)
You'll need the following to get started:
## Analyze using a Prebuilt model
-Document Intelligence offers several prebuilt models to choose from. Each model has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. Here are the prebuilt models currently supported by the Document Intelligence service:
+Document Intelligence offers several prebuilt models to choose from. Each model has its own set of supported fields. The model to use for the `Analyze` operation depends on the type of document to be analyzed. Here are the prebuilt models currently supported by the Document Intelligence service:
* [**Invoice**](../concept-invoice.md): extracts text, selection marks, tables, key-value pairs, and key information from invoices. * [**Receipt**](../concept-receipt.md): extracts text and key information from receipts.
Document Intelligence offers several prebuilt models to choose from. Each model
:::image type="content" source="../media/fott-select-form-type.png" alt-text="Screenshot of the 'select-form-type' dropdown menu.":::
-1. Select **Run analysis**. The Document Intelligence Sample Labeling tool will call the Analyze Prebuilt API and analyze the document.
+1. Select **Run analysis**. The Document Intelligence Sample Labeling tool calls the Analyze Prebuilt API and analyzes the document.
-1. View the results - see the key-value pairs extracted, line items, highlighted text extracted and tables detected.
+1. View the results - see the key-value pairs extracted, line items, highlighted text extracted, and tables detected.
:::image type="content" source="../media/label-tool/prebuilt-2.jpg" alt-text="Analyze Results of Document Intelligence invoice model"::: 1. Download the JSON output file to view the detailed results. * The "readResults" node contains every line of text with its respective bounding box placement on the page.
- * The "selectionMarks" node shows every selection mark (checkbox, radio mark) and whether its status is "selected" or "unselected".
+ * The "selectionMarks" node shows every selection mark (checkbox, radio mark) and whether its status is `selected` or `unselected`.
* The "pageResults" section includes the tables extracted. For each table, the text, row, and column index, row and column spanning, bounding box, and more are extracted. * The "documentResults" field contains key/value pairs information and line items information for the most relevant parts of the document.
Azure the Document Intelligence Layout API extracts text, tables, selection mark
1. In the **Source** field, select **URL** from the dropdown menu, paste the following URL `https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/layout-page-001.jpg`, and select the **Fetch** button.
-1. Select **Run Layout**. The Document Intelligence Sample Labeling tool will call the Analyze Layout API and analyze the document.
+1. Select **Run Layout**. The Document Intelligence Sample Labeling tool calls the `Analyze Layout API` and analyzes the document.
- :::image type="content" source="../media/fott-layout.png" alt-text="Screenshot of Layout dropdown menu.":::
+ :::image type="content" source="../media/fott-layout.png" alt-text="Screenshot of layout dropdown menu.":::
-1. View the results - see the highlighted text extracted, selection marks detected and tables detected.
+1. View the results - see the highlighted text extracted, selection marks detected, and tables detected.
:::image type="content" source="../media/label-tool/layout-3.jpg" alt-text="Connection settings for Document Intelligence tool.":::
Azure the Document Intelligence Layout API extracts text, tables, selection mark
## Train a custom form model
-Train a custom model to analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You'll need at least five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
+Train a custom model to analyze and extract data from forms and documents specific to your business. The API is a machine-learning program trained to recognize form fields within your distinct content and extract key-value pairs and table data. You need at least five examples of the same form type to get started and your custom model can be trained with or without labeled datasets.
### Prerequisites for training a custom form model
Train a custom model to analyze and extract data from forms and documents specif
* Configure CORS
- [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Document Intelligence Studio. To configure CORS in the Azure portal, you'll need access to the CORS tab of your storage account.
+ [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Document Intelligence Studio. To configure CORS in the Azure portal, you need access to the CORS tab of your storage account.
1. Select the CORS tab for the storage account.
Train a custom model to analyze and extract data from forms and documents specif
1. Set the **Max Age** to 120 seconds or any acceptable value.
- 1. Select the save button at the top of the page to save the changes.
+ 1. Select the save button at the top of the page and save the changes.
### Use the Sample Labeling tool
Configure the **Project Settings** fields with the following values:
1. **Display Name**. Name your project.
-1. **Security Token**. Each project will auto-generate a security token that can be used to encrypt/decrypt sensitive project settings. You can find security tokens in the Application Settings by selecting the gear icon at the bottom of the left navigation bar.
+1. **Security Token**. Each project autogenerates a security token that can be used to encrypt/decrypt sensitive project settings. You can find security tokens in the Application Settings by selecting the gear icon at the bottom of the left navigation bar.
1. **Source connection**. The Sample Labeling tool connects to a source (your original uploaded forms) and a target (created labels and output data). Connections can be set up and shared across projects. They use an extensible provider model, so you can easily add new source/target providers.
Configure the **Project Settings** fields with the following values:
:::image type="content" source="../media/quickstarts/get-sas-url.png" alt-text="SAS location.":::
-1. **Folder Path** (optional). If your source forms are located within a folder in the blob container, specify the folder name.
+1. **Folder Path** (optional). If your source forms are located within a folder in the blob container, specify the folder name.
1. **Document Intelligence Service Uri** - Your Document Intelligence endpoint URL.
When you create or open a project, the main tag editor window opens. The tag edi
##### Identify text and tables
-Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool will draw bounding boxes around each text element.
+Select **Run Layout on unvisited documents** on the left pane to get the text and table layout information for each document. The labeling tool draws bounding boxes around each text element.
-The labeling tool will also show which tables have been automatically extracted. Select the table/grid icon on the left hand of the document to see the extracted table. Because the table content is automatically extracted, we won't label the table content, but rather rely on the automated extraction.
+The labeling tool also shows which tables were automatically extracted. Select the table/grid icon on the left hand of the document and see the extracted table. Because the table content is automatically extracted, we don't label the table content, but rather rely on the automated extraction.
:::image type="content" source="../media/label-tool/table-extraction.png" alt-text="Table visualization in Sample Labeling tool."::: ##### Apply labels to text
-Next, you'll create tags (labels) and apply them to the text elements that you want the model to analyze. Note the Sample Label data set includes already labeled fields; we'll add another field.
+Next, you create tags (labels) and apply them to the text elements that you want the model to analyze. Note the Sample Label data set includes already labeled fields; we add another field.
Use the tags editor pane to create a new tag you'd like to identify:
Use the tags editor pane to create a new tag you'd like to identify:
#### Train a custom model
-Choose the Train icon on the left pane to open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you'll see the following information:
+Choose the Train icon on the left pane and open the Training page. Then select the **Train** button to begin training the model. Once the training process completes, you see the following information:
-* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you'll need it if you want to do prediction calls through the [REST API](./get-started-sdks-rest-api.md?pivots=programming-language-rest-api) or [client library](./get-started-sdks-rest-api.md).
+* **Model ID** - The ID of the model that was created and trained. Each training call creates a new model with its own ID. Copy this string to a secure location; you need it if you want to do prediction calls through the [REST API](./get-started-sdks-rest-api.md?pivots=programming-language-rest-api) or [client library](./get-started-sdks-rest-api.md).
* **Average Accuracy** - The model's average accuracy. You can improve model accuracy by labeling more forms and retraining to create a new model. We recommend starting by labeling five forms, analyzing and testing the results, and then adding more forms as needed. * The list of tags, and the estimated accuracy per tag. For more information, _see_ [Interpret and improve accuracy and confidence](../concept-accuracy-confidence.md).
Choose the Train icon on the left pane to open the Training page. Then select th
#### Analyze a custom form
-1. Select the **Analyze** icon from the navigation bar to test your model.
+1. Select the **`Analyze`** icon from the navigation bar and test your model.
1. Select source **Local file** and browse for a file to select from the sample dataset that you unzipped in the test folder.
-1. Choose the **Run analysis** button to get key/value pairs, text and tables predictions for the form. The tool will apply tags in bounding boxes and will report the confidence of each tag.
+1. Choose the **Run analysis** button to get key/value pairs, text, and tables predictions for the form. The tool applies tags in bounding boxes and reports the confidence of each tag.
:::image type="content" source="../media/analyze.png" alt-text="Training view.":::
-That's it! You've learned how to use the Document Intelligence sample tool for Document Intelligence prebuilt, layout and custom models. You've also learned to analyze a custom form with manually labeled data.
+That's it! You learned how to use the Document Intelligence sample tool for Document Intelligence prebuilt, layout, and custom models. You also learned to analyze a custom form with manually labeled data.
## Next steps
ai-services Api Version Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md
# Azure OpenAI API preview lifecycle
-This article is to help you understand the support lifecycle for the Azure OpenAI API previews. New preview APIs target a monthly release cadence. Post April 2, 2024, the latest three preview APIs will remain supported while older APIs will no longer be supported.
+This article is to help you understand the support lifecycle for the Azure OpenAI API previews. New preview APIs target a monthly release cadence. After July 1, 2024, the latest three preview APIs will remain supported while older APIs will no longer be supported unless support is explicitly indicated.
> [!NOTE] > The `2023-06-01-preview` API will remain supported at this time, as `DALL-E 2` is only available in this API version. `DALL-E 3` is supported in the latest API releases. The `2023-10-01-preview` API will also remain supported at this time.
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
main().catch((err) => {
$openai = @{ api_key = $Env:AZURE_OPENAI_API_KEY api_base = $Env:AZURE_OPENAI_ENDPOINT # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
- api_version = '2023-10-01-preview' # this may change in the future
+ api_version = '2024-03-01-preview' # this may change in the future
name = 'YOUR-DEPLOYMENT-NAME-HERE' #This will correspond to the custom name you chose for your deployment when you deployed a model. }
ai-services Prompt Engineering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/prompt-engineering.md
description: Learn how to use prompt engineering to optimize your work with Azure OpenAI Service. Previously updated : 03/21/2023 Last updated : 03/29/2024
Prompt construction can be difficult. In practice, the prompt acts to configure
This section covers the basic concepts and elements of GPT prompts.
-Text prompts are how users interact with GPT models. As with all generative language models, GPT models attempt to produce the next series of words that are most likely to follow from the previous text. It's as if we're saying *What is the first thing that comes to your mind when I say `<prompt>`?ΓÇ¥*. The examples below demonstrate this behavior. Given the first words of famous content, the model is able to accurately continue the text.
+Text prompts are how users interact with GPT models. As with all generative language models, GPT models attempt to produce the next series of words that are most likely to follow from the previous text. It's as if we're saying *What is the first thing that comes to your mind when I say `<prompt>`?*. The examples below demonstrate this behavior. Given the first words of famous content, the model is able to accurately continue the text.
| Prompt |Completion | ||--|
Simple instructions | Write an introduction for a weekly newsletter. | Welcome t
### Primary Content
-Primary content refers to some sort of text that is being processed or transformed by the model. Primary content is typically used in conjunction with instructions. A simple example would be language translation. In the example below, the English text would be considered the primary content while ΓÇ£Translate to French:ΓÇ¥ is an instruction.
+Primary content refers to some sort of text that is being processed or transformed by the model. Primary content is typically used in conjunction with instructions. A simple example would be language translation. In the example below, the English text would be considered the primary content while "Translate to French:" is an instruction.
| Prompt | Completion| |--|--|
Successful prompts often rely on the practice of "one-shot" or "few-shot
| Zero-shot | Headline: Coach confident injury won't derail Warriors<br>Topic:| The coach is confident that the injury won't derail the Warriors' season. The team is still focused on their goals and that they will continue to work hard to achieve them.| | Few-shot | Headline: Twins' Correa to use opt-out, test free agency<br>Topic: Baseball<br>Headline: Qatar World Cup to have zones for sobering up<br>Topic: Soccer<br>Headline: Yates: Fantasy football intel for Week 6<br>Topic: Football<br>Headline: Coach confident injury won't derail Warriors<br>Topic: | Basketball |
-The example above illustrates the utility of few-shot learning. Without the examples, the model seems to be guessing at the desired behavior, while the examples cleanly show the model how to operate. This also demonstrates the power of the model, it can infer the category of label that is wanted, even without a ΓÇÿbasketballΓÇÖ label in the examples.
+The example above illustrates the utility of few-shot learning. Without the examples, the model seems to be guessing at the desired behavior, while the examples cleanly show the model how to operate. This also demonstrates the power of the model: it can infer the category of label that is wanted, even without a "basketball" label in the examples.
### Cue
-Cues act as the ΓÇ£jumpstartΓÇ¥ for the output of the model, helping to direct the model to the desired output. It is often a prefix that the model can build onto. In the example below, we show the cue directing the story to include key elements. Cues are often used along with instructions, but not always. In the examples below, the model is being used to summarize a long announcement email. A cue is used in one case to help focus the output of the model and to suggest a specific output format (bullet points) in the other.
+Cues act as the "jumpstart" for the output of the model, helping to direct the model to the desired output. It is often a prefix that the model can build onto. In the example below, we show the cue directing the story to include key elements. Cues are often used along with instructions, but not always. In the examples below, the model is being used to summarize a long announcement email. A cue is used in one case to help focus the output of the model and to suggest a specific output format (bullet points) in the other.
In the context of the Chat Completion API instructions take the form of the system message, and few-shot examples are indicated by providing a series of example user/assistant interactions to help prime the model. | Number of Cues|Prompt | Completion| |-|-|--| | 0 | Performance reviews are a core component of our Performance & Development approach at Contoso. […] <br> Summarize the above email message: | The purpose of this email is to inform employees of recent changes to the performance review experience, which is part of Contoso's Performance & Development approach. The changes include an improved structure, upgraded in-tool user experience, and focused prompts. Additionally, there is now an optional Conversation Starters section for employees to flag topics they'd like to discuss with their managers. |
-| 1 | Performance reviews are a core component of our Performance & Development approach at Contoso. […] <br> Summarize the above email message:<br> The key takeaway of the message is | that Contoso is improving the performance review experience by making it more focused on core priorities, expectations and impact. Additionally, they are upgrading the in-tool user experience with tips, resources and guidance. Finally, they are adding guidance on how to activate company-wide core priorities. |
+| 1 | Performance reviews are a core component of our Performance & Development approach at Contoso. […] <br> Summarize the above email message:<br> The key takeaway of the message is | that Contoso is improving the performance review experience by making it more focused on core priorities, expectations, and impact. Additionally, they are upgrading the in-tool user experience with tips, resources, and guidance. Finally, they are adding guidance on how to activate company-wide core priorities. |
|2 | Performance reviews are a core component of our Performance & Development approach at Contoso. […] <br> Summarize the above email message: <br>Key Points:<br>• | Contoso is introducing changes to the performance review experience, based on feedback from employees.<br>• The changes include a new focus on core priorities, expectations, and impact; an upgraded user experience; and new guidance on company-wide core priorities.<br>• Performance reviews are a time to reflect on past performance and learning as well as to plan for future priorities and growth opportunities.<br>• Employees and managers should write comments in advance of the performance review meeting.<br>• There should be at least two performance reviews per year.<br>• The time investment for each review should remain about the same. | ### Supporting content
Supporting content is information that the model can utilize to influence the ou
- **Be Specific**. Leave as little to interpretation as possible. Restrict the operational space. - **Be Descriptive**. Use analogies.-- **Double Down**. Sometimes you may need to repeat yourself to the model. Give instructions before and after your primary content, use an instruction and a cue, etc. -- **Order Matters**. The order in which you present information to the model may impact the output. Whether you put instructions before your content (“summarize the following…”) or after (“summarize the above…”) can make a difference in output. Even the order of few-shot examples can matter. This is referred to as recency bias.-- **Give the model an “out”**. It can sometimes be helpful to give the model an alternative path if it is unable to complete the assigned task. For example, when asking a question over a piece of text you might include something like "respond with ‘not found’ if the answer is not present." This can help the model avoid generating false responses.
+- **Double Down**. Sometimes you might need to repeat yourself to the model. Give instructions before and after your primary content, use an instruction and a cue, etc.
+- **Order Matters**. The order in which you present information to the model might impact the output. Whether you put instructions before your content (“summarize the following…”) or after (“summarize the above…”) can make a difference in output. Even the order of few-shot examples can matter. This is referred to as recency bias.
+- **Give the model an "out"**. It can sometimes be helpful to give the model an alternative path if it is unable to complete the assigned task. For example, when asking a question over a piece of text you might include something like "respond with 'not found' if the answer is not present." This can help the model avoid generating false responses.
## Space efficiency
-While the input size increases with each new generation of GPT models, there will continue to be scenarios that provide more data than the model can handle. GPT models break words into ΓÇ£tokensΓÇ¥. While common multi-syllable words are often a single token, less common words are broken in syllables. Tokens can sometimes be counter-intuitive, as shown by the example below which demonstrates token boundaries for different date formats. In this case, spelling out the entire month is more space efficient than a fully numeric date. The current range of token support goes from 2000 tokens with earlier GPT-3 models to up to 32,768 tokens with the 32k version of the latest GPT-4 model.
+While the input size increases with each new generation of GPT models, there will continue to be scenarios that provide more data than the model can handle. GPT models break words into "tokens." While common multi-syllable words are often a single token, less common words are broken in syllables. Tokens can sometimes be counter-intuitive, as shown by the example below which demonstrates token boundaries for different date formats. In this case, spelling out the entire month is more space efficient than a fully numeric date. The current range of token support goes from 2,000 tokens with earlier GPT-3 models to up to 32,768 tokens with the 32k version of the latest GPT-4 model.
:::image type="content" source="../media/prompt-engineering/space-efficiency.png" alt-text="Screenshot of a string of text with highlighted colors delineating token boundaries." lightbox="../media/prompt-engineering/space-efficiency.png":::
Given this limited space, it is important to use it as efficiently as possible.
## Next steps
-[Learn more about Azure OpenAI](../overview.md)
+[Learn more about Azure OpenAI](../overview.md).
ai-services Chatgpt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/chatgpt.md
Previously updated : 05/15/2023 Last updated : 03/29/2024 keywords: ChatGPT zone_pivot_groups: openai-chat
ai-services Content Filters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/content-filters.md
description: Learn how to use content filters (preview) with Azure OpenAI Servic
Previously updated : 6/5/2023 Last updated : 03/29/2024 recommendations: false
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md
To obtain an embedding vector for a piece of text, we make a request to the embe
# [console](#tab/console) ```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15\
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2024-02-01\
-H 'Content-Type: application/json' \ -H 'api-key: YOUR_API_KEY' \ -d '{"input": "Sample Document goes here"}'
from openai import AzureOpenAI
client = AzureOpenAI( api_key = os.getenv("AZURE_OPENAI_API_KEY"),
- api_version = "2023-05-15",
+ api_version = "2024-02-01",
azure_endpoint =os.getenv("AZURE_OPENAI_ENDPOINT") )
import openai
openai.api_type = "azure" openai.api_key = YOUR_API_KEY openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
-openai.api_version = "2023-05-15"
+openai.api_version = "2024-02-01"
response = openai.Embedding.create( input="Your text string goes here",
foreach (float item in returnValue.Value.Data[0].Embedding.ToArray())
$openai = @{ api_key = $Env:AZURE_OPENAI_API_KEY api_base = $Env:AZURE_OPENAI_ENDPOINT # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
- api_version = '2023-05-15' # this may change in the future
+ api_version = '2024-02-01' # this may change in the future
name = 'YOUR-DEPLOYMENT-NAME-HERE' #This will correspond to the custom name you chose for your deployment when you deployed a model. }
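Putting the updated version together, a minimal Python sketch of an embeddings call against the `2024-02-01` API version; the deployment name is a placeholder:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

response = client.embeddings.create(
    input="Sample Document goes here",
    model="<your-embeddings-deployment-name>",  # placeholder deployment name
)

# Length of the returned vector (depends on the embeddings model deployed).
print(len(response.data[0].embedding))
```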
ai-services Function Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/function-calling.md
Parallel function calls are supported with:
### Supported models * `gpt-35-turbo` (1106)
-* `gpt-4` (1106-preview)
+* `gpt-35-turbo` (0125)
+* `gpt-4` (1106-Preview)
+* `gpt-4` (0125-Preview)
-### Supported API versions
+### API support
-* [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+Support for parallel function calling was first added in API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json).
Parallel function calls allow you to perform multiple function calls together, allowing for parallel execution and retrieval of results. This reduces the number of calls to the API that need to be made and can improve overall performance.
import json
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"), api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2023-12-01-preview"
+ api_version="2024-03-01-preview"
)
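As a rough illustration of the pattern (not the article's full sample), a parallel-capable model can return several entries in `tool_calls` from a single response, which you then execute and feed back to the model. The weather function and deployment name below are hypothetical.

```python
# Hedged sketch: inspecting parallel tool calls (openai 1.x). The weather function is hypothetical.
import json
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-03-01-preview",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME-HERE",  # placeholder: a deployment of a supported model version
    messages=[{"role": "user", "content": "What's the weather in Seattle and in Paris?"}],
    tools=tools,
    tool_choice="auto",
)

# With a supported model, one response can contain several tool calls that you can run in parallel.
for tool_call in response.choices[0].message.tool_calls or []:
    print(tool_call.function.name, json.loads(tool_call.function.arguments))
```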
ai-services Json Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/json-mode.md
JSON mode is only currently supported with the following models:
### Supported models -- `gpt-4-1106-preview` ([region availability](../concepts/models.md#gpt-4-and-gpt-4-turbo-preview-model-availability))-- `gpt-35-turbo-1106` ([region availability)](../concepts/models.md#gpt-35-turbo-model-availability))
+* `gpt-35-turbo` (1106)
+* `gpt-35-turbo` (0125)
+* `gpt-4` (1106-Preview)
+* `gpt-4` (0125-Preview)
-### API version
+### API support
-- `2023-12-01-preview`
+Support for JSON mode was first added in API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
## Example
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"), api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2023-12-01-preview"
+ api_version="2024-03-01-preview"
) response = client.chat.completions.create(
- model="gpt-4-1106-preview", # Model = should match the deployment name you chose for your 1106-preview model deployment
+ model="gpt-4-0125-Preview", # Model = should match the deployment name you chose for your 0125-Preview model deployment
response_format={ "type": "json_object" }, messages=[ {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
because they plan to use the output for further scripting.
$openai = @{ api_key = $Env:AZURE_OPENAI_API_KEY api_base = $Env:AZURE_OPENAI_ENDPOINT # like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
- api_version = '2023-12-01-preview' # may change in the future
+ api_version = '2024-03-01-preview' # may change in the future
name = 'YOUR-DEPLOYMENT-NAME-HERE' # name you chose for your deployment }
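Put together, a runnable version of the Python example might look like the following sketch; the deployment name is a placeholder, and the system message must instruct the model to produce JSON.

```python
# Hedged sketch of JSON mode (openai 1.x). The deployment name is a placeholder.
import json
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-03-01-preview",
)

response = client.chat.completions.create(
    model="gpt-4-0125-Preview",  # should match the deployment name you chose
    response_format={"type": "json_object"},
    messages=[
        {"role": "system", "content": "You are a helpful assistant designed to output JSON."},
        {"role": "user", "content": "List three Azure regions as a JSON array under the key 'regions'."},
    ],
)

# Because JSON mode guarantees syntactically valid JSON, the content can be parsed directly.
data = json.loads(response.choices[0].message.content)
print(data)
```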
ai-services Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration.md
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"), api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2023-05-15"
+ api_version="2024-02-01"
) response = client.chat.completions.create(
import openai
openai.api_type = "azure" openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
-openai.api_version = "2023-05-15"
+openai.api_version = "2024-02-01"
response = openai.ChatCompletion.create( engine="gpt-35-turbo", # engine = "deployment_name".
from openai import AzureOpenAI
client = AzureOpenAI( api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2023-12-01-preview",
+ api_version="2024-02-01",
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") )
import openai
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY") openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/ openai.api_type = 'azure'
-openai.api_version = '2023-05-15' # this might change in the future
+openai.api_version = '2024-02-01' # this might change in the future
deployment_name='REPLACE_WITH_YOUR_DEPLOYMENT_NAME' #This will correspond to the custom name you chose for your deployment when you deployed a model.
from openai import AzureOpenAI
client = AzureOpenAI( api_key = os.getenv("AZURE_OPENAI_API_KEY"),
- api_version = "2023-05-15",
+ api_version = "2024-02-01",
azure_endpoint =os.getenv("AZURE_OPENAI_ENDPOINT") )
import openai
openai.api_type = "azure" openai.api_key = YOUR_API_KEY openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
-openai.api_version = "2023-05-15"
+openai.api_version = "2024-02-01"
response = openai.Embedding.create( input="Your text string goes here",
from openai import AsyncAzureOpenAI
async def main(): client = AsyncAzureOpenAI( api_key = os.getenv("AZURE_OPENAI_API_KEY"),
- api_version = "2023-12-01-preview",
+ api_version = "2024-02-01",
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") ) response = await client.chat.completions.create(model="gpt-35-turbo", messages=[{"role": "user", "content": "Hello world"}])
from openai import AzureOpenAI
token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")
-api_version = "2023-12-01-preview"
+api_version = "2024-02-01"
endpoint = "https://my-resource.openai.azure.com" client = AzureOpenAI(
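A complete version of the Microsoft Entra ID pattern above might look like the following sketch, assuming the `azure-identity` package and the GA `2024-02-01` API version; the endpoint and deployment name are placeholders.

```python
# Hedged sketch: keyless auth with Microsoft Entra ID (openai 1.x plus azure-identity).
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    api_version="2024-02-01",
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder endpoint
    azure_ad_token_provider=token_provider,  # a fresh token is fetched automatically as needed
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # placeholder: your deployment name
    messages=[{"role": "user", "content": "Hello world"}],
)
print(response.choices[0].message.content)
```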
ai-services Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/monitoring.md
Previously updated : 11/14/2023 Last updated : 03/29/2024 # Monitoring Azure OpenAI Service
These are legacy metrics that are common to all Azure AI Services resources. We
### Azure OpenAI Metrics
+> [!NOTE]
+> The **Provisioned-managed Utilization** metric is now deprecated and is no longer recommended. This metric has been replaced by the **Provisioned-managed Utilization V2** metric.
++ The following table summarizes the current subset of metrics available in Azure OpenAI. |Metric|Category|Aggregation|Description|Dimensions|
The following table summarizes the current subset of metrics available in Azure
| `Processed FineTuned Training Hours` | Usage |Sum| Number of Training Hours Processed on an OpenAI FineTuned Model | `ApiName`, `ModelDeploymentName`,`ModelName`, `Region`| | `Processed Inference Tokens` | Usage | Sum| Number of inference tokens processed by an OpenAI model. Calculated as prompt tokens (input) + generated tokens. Applies to PayGo, PTU, and PTU-managed SKUs.|`ApiName`, `ModelDeploymentName`,`ModelName`, `Region`| | `Processed Prompt Tokens` | Usage | Sum | Total number of prompt tokens (input) processed on an OpenAI model. Applies to PayGo, PTU, and PTU-managed SKUs.|`ApiName`, `ModelDeploymentName`,`ModelName`, `Region`|
-| `Provision-managed Utilization` | Usage | Average | Provision-managed utilization is the utilization percentage for a given provisioned-managed deployment. Calculated as (PTUs consumed/PTUs deployed)*100. When utilization is at or above 100%, calls are throttled and return a 429 error code. | `ModelDeploymentName`,`ModelName`,`ModelVersion`, `Region`, `StreamType`|
+| `Provision-managed Utilization V2` | Usage | Average | Provision-managed utilization is the utilization percentage for a given provisioned-managed deployment. Calculated as (PTUs consumed/PTUs deployed)*100. When utilization is at or above 100%, calls are throttled and return a 429 error code. | `ModelDeploymentName`,`ModelName`,`ModelVersion`, `Region`, `StreamType`|
## Configure diagnostic settings
ai-services Provisioned Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-get-started.md
The inferencing code for provisioned deployments is the same a standard deployme
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"), api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2023-05-15"
+ api_version="2024-02-01"
) response = client.chat.completions.create(
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"), api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2023-05-15",
+ api_version="2024-02-01",
max_retries=5,# default is 2 )
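Because provisioned-managed deployments return a 429 error once utilization reaches 100%, it can help to pair the client-side `max_retries` setting above with explicit handling. A hedged sketch follows; the deployment name is a placeholder.

```python
# Hedged sketch: retries plus explicit 429 handling for a provisioned deployment (openai 1.x).
import os
import openai
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
    max_retries=5,  # default is 2; retries use exponential backoff
)

try:
    response = client.chat.completions.create(
        model="YOUR-PROVISIONED-DEPLOYMENT-NAME",  # placeholder
        messages=[{"role": "user", "content": "Hello world"}],
    )
    print(response.choices[0].message.content)
except openai.RateLimitError as err:
    # Raised once retries are exhausted and the deployment is still throttling (HTTP 429).
    print(f"Deployment is at capacity; try again later: {err}")
```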
ai-services Use Blocklists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-blocklists.md
Copy the cURL command below to a text editor and make the following changes:
1. Optionally replace the value of the "description" field with a custom description. ```bash
-curl --location --request PUT 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/raiBlocklists/{raiBlocklistName}?api-version=2023-10-01-preview' \
+curl --location --request PUT 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/raiBlocklists/{raiBlocklistName}?api-version=2024-03-01-preview' \
--header 'Authorization: Bearer {token}' \ --header 'Content-Type: application/json' \ --data-raw '{
To apply a **completion** blocklist to a content filter, use the following cURL
1. Replace "raiBlocklistName" in the body with a custom name for your list. Allowed characters: `0-9, A-Z, a-z, - . _ ~`. ```bash
-curl --location --request PUT 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/raiPolicies/{raiPolicyName}?api-version=2023-10-01-preview' \
+curl --location --request PUT 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/raiPolicies/{raiPolicyName}?api-version=2024-03-01-preview' \
--header 'Authorization: Bearer {token}' \ --header 'Content-Type: application/json' \ --data-raw '{
Copy the cURL command below to a text editor and make the following changes:
1. Replace the value of the `"blocking pattern"` field with the item you'd like to add to your blocklist. The maximum length of a blockItem is 1000 characters. Also specify whether the pattern is regex or exact match. ```bash
-curl --location --request PUT 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/raiBlocklists/{raiBlocklistName}/raiBlocklistItems/{raiBlocklistItemName}?api-version=2023-10-01-preview' \
+curl --location --request PUT 'https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.CognitiveServices/accounts/{accountName}/raiBlocklists/{raiBlocklistName}/raiBlocklistItems/{raiBlocklistItemName}?api-version=2024-03-01-preview' \
--header 'Authorization: Bearer {token}' \ --header 'Content-Type: application/json' \ --data-raw '{
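If you prefer scripting these management calls instead of raw cURL, the following hedged sketch issues the same PUT request from Python, obtaining the `{token}` bearer value with `azure-identity`. All resource names in the URL are placeholders, and the request body fields are an assumption based on the description above (the pattern text plus a regex/exact-match flag).

```python
# Hedged sketch: adding a blocklist item via the management REST API shown above.
import requests
from azure.identity import DefaultAzureCredential

# Acquire a bearer token for Azure Resource Manager (the {token} value in the cURL examples).
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

subscription_id = "<subscription-id>"   # placeholders throughout
resource_group = "<resource-group>"
account_name = "<account-name>"
blocklist_name = "<blocklist-name>"
item_name = "<item-name>"

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.CognitiveServices"
    f"/accounts/{account_name}/raiBlocklists/{blocklist_name}"
    f"/raiBlocklistItems/{item_name}?api-version=2024-03-01-preview"
)

# Body fields are an assumption; set the blocking pattern and whether it's a regex.
body = {"properties": {"pattern": "blocking pattern", "isRegex": False}}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()
print(response.json())
```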
ai-services Use Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-web-app.md
Previously updated : 02/23/2024 Last updated : 03/27/2024 recommendations: false
When customizing the app, we recommend:
Sample source code for the web app is available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT). Source code is provided "as is" and as a sample only. Customers are responsible for all customization and implementation of their web apps.
-### Updating the web app
-
-We recommend pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes, API version, and improvements.
+## Updating the web app
> [!NOTE] > After February 1, 2024, the web app requires the app startup command to be set to `python3 -m gunicorn app:app`. When updating an app that was published prior to February 1, 2024, you need to manually add the startup command from the **App Service Configuration** page.
+We recommend pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes, API version, and improvements. Additionally, the web app must be synchronized every time the API version being used is [retired](../api-version-deprecation.md#retiring-soon).
+
+**If you haven't customized the app:**
+* You can follow the synchronization steps below.
+
+**If you've customized or changed the app's source code:**
+* You will need to update your app's source code manually and redeploy it.
+ * If your app is hosted on GitHub, push your code changes to your repo, and use the synchronization steps below.
+ * If you're redeploying the app manually (for example Azure CLI), follow the steps for your deployment strategy.
++
+### Synchronize the web app
+
+1. If you've customized your app, update the app's source code.
+1. Navigate to your web app in the [Azure portal](https://portal.azure.com/).
+1. Select **Deployment center** in the navigation menu, under **Deployment**.
+1. Select **Sync** at the top of the screen, and confirm that the app will be redeployed.
+
+ :::image type="content" source="../media/use-your-data/sync-app.png" alt-text="A screenshot of web app synchronization button on the Azure portal." lightbox="../media/use-your-data/sync-app.png":::
++ ## Chat history You can enable chat history for your users of the web app. When you enable the feature, your users will have access to their individual previous queries and responses.
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
Azure OpenAI provides two methods for authentication. You can use either API Ke
The service APIs are versioned using the ```api-version``` query parameter. All versions follow the YYYY-MM-DD date structure. For example: ```http
-POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2024-02-01
``` ## Completions
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
#### Example request ```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2023-05-15\
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/completions?api-version=2024-02-01\
-H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d "{
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
#### Example request ```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15 \
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2024-02-01 \
-H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d "{\"input\": \"The food was delicious and the waiter...\"}"
The request body consists of a series of messages. The model will generate a res
**Text-only chat** ```console
-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15 \
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2024-02-01 \
-H "Content-Type: application/json" \ -H "api-key: YOUR_API_KEY" \ -d '{"messages":[{"role": "system", "content": "You are a helpful assistant."},{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},{"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},{"role": "user", "content": "Do other Azure AI services support this too?"}]}'
ai-services Fine Tune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/tutorials/fine-tune.md
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"), api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2023-12-01-preview" # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
+ api_version="2024-02-01" # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
) training_file_name = 'training_set.jsonl'
import os
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY") openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") openai.api_type = 'azure'
-openai.api_version = '2023-12-01-preview' # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
+openai.api_version = '2024-02-01' # This API version or later is required to access fine-tuning for turbo/babbage-002/davinci-002
training_file_name = 'training_set.jsonl' validation_file_name = 'validation_set.jsonl'
from openai import AzureOpenAI
client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"), api_key=os.getenv("AZURE_OPENAI_API_KEY"),
- api_version="2023-05-15"
+ api_version="2024-02-01"
) response = client.chat.completions.create(
import os
import openai openai.api_type = "azure" openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_version = "2023-05-15"
+openai.api_version = "2024-02-01"
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY") response = openai.ChatCompletion.create(
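After updating the API version, the rest of the tutorial's flow (upload the files, then create a fine-tuning job) uses the standard openai 1.x surface. A hedged sketch follows; the base model name is an assumption, so use a model and version that supports fine-tuning in your region.

```python
# Hedged sketch: uploading training data and starting a fine-tuning job (openai 1.x).
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-02-01",
)

# Upload the JSONL files referenced earlier in the tutorial.
training = client.files.create(file=open("training_set.jsonl", "rb"), purpose="fine-tune")
validation = client.files.create(file=open("validation_set.jsonl", "rb"), purpose="fine-tune")

# Base model name is an assumption; pick a fine-tunable model available in your region.
job = client.fine_tuning.jobs.create(
    training_file=training.id,
    validation_file=validation.id,
    model="gpt-35-turbo-0613",
)
print(job.id, job.status)
```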
ai-services Speech Synthesis Markup Pronunciation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-synthesis-markup-pronunciation.md
Usage of the `lexicon` element's attributes are described in the following table
The supported values for attributes of the `lexicon` element were [described previously](#custom-lexicon).
-After you publish your custom lexicon, you can reference it from your SSML. The following SSML example references a custom lexicon that was uploaded to `https://www.example.com/customlexicon.xml`. We support lexicon URLs from Azure Blob Storage, Azure Media Services (AMS) Storage, and GitHub. However, note that other public URLs may not be compatible.
+After you publish your custom lexicon, you can reference it from your SSML. The following SSML example references a custom lexicon that was uploaded to `https://www.example.com/customlexicon.xml`. We support lexicon URLs from Azure Blob Storage, Advanced Media Services (AMS) Storage, and GitHub. However, note that other public URLs may not be compatible.
```xml <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"
api-center Enable Api Analysis Linting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-api-analysis-linting.md
Title: Perform API linting and analysis - Azure API Center
description: Configure linting of API definitions in your API center to analyze compliance of APIs with the organization's API style guide. Previously updated : 03/11/2024 Last updated : 03/26/2024
Follow these steps to deploy the Azure Functions app that runs the linting funct
To enable the function app to access the API center, configure a managed identity for the function app. The following steps show how to enable and configure a system-assigned managed identity for the function app using the Azure portal or the Azure CLI.
-> [!NOTE]
-> In preview, this scenario requires the Contributor role to be assigned to the function app's managed identity.
-- #### [Portal](#tab/portal) 1. In the Azure portal, navigate to your function app and select **Identity** under the **Settings** section. 1. On the **System assigned** tab, set the **Status** to **On** and then select **Save**.
-Now that the managed identity is enabled, assign it the Contributor role to access the API center.
+Now that the managed identity is enabled, assign it the Azure API Center Compliance Manager role to access the API center.
-1. In the Azure portal, navigate to your API center and select **Access control (IAM)**.
+1. In the [Azure portal](https://portal.azure.com), navigate to your API center and select **Access control (IAM)**.
1. Select **+ Add > Add role assignment**.
-1. Select **Privileged administrator roles** and then select **Contributor**. Select **Next**.
+1. Select **Job function roles** and then select **Azure API Center Compliance Manager**. Select **Next**.
1. On the **Members** page, in **Assign access to**, select **Managed identity > + Select members**. 1. On the **Select managed identities** page, search for and select the managed identity of the function app. Click **Select** and then **Next**. 1. Review the role assignment, and select **Review + assign**.
Now that the managed identity is enabled, assign it the Contributor role to acce
--query "id" --output tsv) ```
-1. Assign the function app's managed identity the Contributor role in the API center using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command.
+1. Assign the function app's managed identity the Azure API Center Compliance Manager role in the API center using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command.
```azurecli #! /bin/bash az role assignment create \
- --role "Contributor" \
+ --role "Azure API Center Compliance Manager" \
--assignee-object-id $principalID \ --assignee-principal-type ServicePrincipal \ --scope $apicID
Now that the managed identity is enabled, assign it the Contributor role to acce
```azurecli # PowerShell syntax az role assignment create `
- --role "Contributor" `
+ --role "Azure API Center Compliance Manager" `
--assignee-object-id $principalID ` --assignee-principal-type ServicePrincipal ` --scope $apicID
Now create an event subscription in your API center to trigger the function app
#### [Portal](#tab/portal)
-1. Sign in to the Azure portal at [https://portal.azure.com/?Microsoft_Azure_ApiManagement=apicenterpreview](https://portal.azure.com/?Microsoft_Azure_ApiManagement=apicenterpreview). Currently for this scenario, you must access your API center in the portal at this feature flag.
-1. Navigate to your API center and select **Events**.
-1. Select **Azure Function**.
+1. In the [Azure portal](https://portal.azure.com), navigate to your API center and select **Events**.
+1. On the **Get started** tab, select **Azure Function**.
1. On the **Create Event Subscription** page, do the following: 1. Enter a descriptive **Name** for the event subscription, and select **Event Grid Schema**. 1. In **Topic details**, enter a **System topic name** of your choice.
To test the event subscription, try uploading or updating an API definition file
To confirm that the event subscription was triggered:
-1. Sign in to the Azure portal at [https://portal.azure.com/?Microsoft_Azure_ApiManagement=apicenterpreview](https://portal.azure.com/?Microsoft_Azure_ApiManagement=apicenterpreview).
1. Navigate to your API center, and select **Events** in the left menu. 1. Select the **Event Subscriptions** tab and select the event subscription for your function app. 1. Review the metrics to confirm that the event subscription was triggered and that linting was invoked successfully.
In the portal, you can also view a summary of analysis reports for all API defin
To view the analysis report for an API definition in your API center:
-1. Sign in to the Azure portal at [https://portal.azure.com/?Microsoft_Azure_ApiManagement=apicenterpreview](https://portal.azure.com/?Microsoft_Azure_ApiManagement=apicenterpreview).
-1. Navigate to the API version in your API center where you added or updated an API definition.
+1. In the portal, navigate to the API version in your API center where you added or updated an API definition.
1. Select **Definitions**, and then select the API definition file that you uploaded or updated. 1. Select the **Analysis** tab. :::image type="content" source="media/enable-api-analysis-linting/analyze-api-definition.png" alt-text="Screenshot of Analysis tab for API definition in the portal.":::
The **API Analysis Report** opens, and it displays the API definition and errors
To view a summary of analysis reports for all API definitions in your API center:
-1. Sign in to the Azure portal at [https://portal.azure.com/?Microsoft_Azure_ApiManagement=apicenterpreview](https://portal.azure.com/?Microsoft_Azure_ApiManagement=apicenterpreview).
+1. In the portal, navigate to your API center.
1. In the left-hand menu, under **Governance**, select **API Analysis**. The summary appears. :::image type="content" source="media/enable-api-analysis-linting/api-analysis-summary.png" alt-text="Screenshot of the API analysis summary in the portal.":::
app-service Manage Custom Dns Buy Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md
If launched from an app's **Custom domains** page, the App Service domain wizard
## Renew the domain
-The App Service domain you bought is valid for one year from the time of purchase. You can configure to renew your domain automatically, which will charge your payment method when your domain renews the following year. You can also manually renew your domain name up to 90 days ahead of domain expiration.
+The App Service domain you bought is valid for one year from the time of purchase. You can configure your domain to renew automatically, or you can manually renew it up to 90 days before it expires. After a successful automatic or manual renewal, you're billed for the cost of the domain, and the domain's expiration date is extended by another year.
> [!NOTE] > For .nl domains, you can only manually renew the domain starting 90 days ahead of domain expiration and up to the 20th of the month before the expiration date. You will not be able to renew the domain after this period even if the domain has not yet expired.
automation Automation Use Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-use-azure-ad.md
description: This article tells how to use Microsoft Entra ID within Azure Autom
Last updated 05/26/2023 -+ # Use Microsoft Entra ID to authenticate to Azure
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Arc resource bridge consists of an appliance VM that is deployed to the on-premi
### Private Link is unsupported
-Arc resource bridge doesn'tt support private link. All calls coming from the appliance VM shouldn't be going through your private link setup. The Private Link IPs may conflict with the appliance IP pool range, which isn't configurable on the resource bridge. Arc resource bridge reaches out to [required URLs](network-requirements.md#firewallproxy-url-allowlist) that shouldn't go through a private link connection. You must deploy Arc resource bridge on a separate network segment unrelated to the private link setup.
+Arc resource bridge doesn't support private link. All calls coming from the appliance VM shouldn't be going through your private link setup. The Private Link IPs may conflict with the appliance IP pool range, which isn't configurable on the resource bridge. Arc resource bridge reaches out to [required URLs](network-requirements.md#firewallproxy-url-allowlist) that shouldn't go through a private link connection. You must deploy Arc resource bridge on a separate network segment unrelated to the private link setup.
## Networking issues
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
You can view and configure the following settings using the **Resource Menu**. T
- [Cluster size](#cluster-size) - [Data persistence](#data-persistence) - [Identity](#identity)
- - [Alerts](#alerts)
- [Schedule updates](#schedule-updates) - [Geo-replication](#geo-replication) - [Virtual Network](#virtual-network)
You can view and configure the following settings using the **Resource Menu**. T
- [Export data](#importexport) - [Reboot](#reboot) - [Monitoring](#monitoring)
- - [Insights](#insights)
- - [Alerts](#alerts)
- - [Metrics](#metrics)
- - [Diagnostic settings](#diagnostic-settings)
- - [Advisor recommendations](#advisor-recommendations)
- - [Workbooks](#workbooks)
- Automation - [Tasks (preview)](#tasks) - [Export template](#export-template)
The **Overview** section provides you with basic information about your cache, s
### Activity log
-Select **Activity log** to view actions done to your cache. You can also use filtering to expand this view to include other resources. For more information on working with audit logs, see [Audit operations with Resource Manager](../azure-monitor/essentials/activity-log.md). For more information on monitoring Azure Cache for Redis events, see [Create alerts](cache-how-to-monitor.md#create-alerts).
+Select **Activity log** to view actions done to your cache. You can also use filtering to expand this view to include other resources. For more information on working with audit logs, see [Audit operations with Resource Manager](/azure/azure-monitor/essentials/activity-log). For more information on monitoring the activity log, see [Activity log](monitor-cache.md#azure-activity-log).
### Access control (IAM)
-The **Access control (IAM)** section provides support for Azure role-based access control (Azure RBAC) in the Azure portal. This configuration helps organizations meet their access management requirements simply and precisely. For more information, see [Azure role-based access control in the Azure portal](../role-based-access-control/role-assignments-portal.md).
+The **Access control (IAM)** section provides support for Azure role-based access control (Azure RBAC) in the Azure portal. This configuration helps organizations meet their access management requirements simply and precisely. For more information, see [Azure role-based access control in the Azure portal](/azure/role-based-access-control/role-assignments-portal).
### Tags
-The **Tags** section helps you organize your resources. For more information, see [Using tags to organize your Azure resources](../azure-resource-manager/management/tag-resources.md).
+The **Tags** section helps you organize your resources. For more information, see [Using tags to organize your Azure resources](/azure/azure-resource-manager/management/tag-resources).
### Diagnose and solve problems
Select **Diagnose and solve problems** to be provided with common issues and str
Select **Events** to add event subscriptions to your cache. Use events to build reactive, event-driven apps with the fully managed event routing service that is built into Azure.
-The Event Grid helps you build automation into your cloud infrastructure, create serverless apps, and integrate across services and clouds. For more information, see [What is Azure Event Grid](../event-grid/overview.md).
+The Event Grid helps you build automation into your cloud infrastructure, create serverless apps, and integrate across services and clouds. For more information, see [What is Azure Event Grid](/azure/event-grid/overview).
## Redis console
You can move your cache to a new subscription by selecting **Move**.
:::image type="content" source="media/cache-configure/redis-cache-move.png" alt-text="Move Azure Cache for Redis":::
-For information on moving resources from one resource group to another, and from one subscription to another, see [Move resources to new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md).
+For information on moving resources from one resource group to another, and from one subscription to another, see [Move resources to new resource group or subscription](/azure/azure-resource-manager/management/move-resource-group-and-subscription).
## Settings
Select **Properties** to view information about your cache, including the cache
### Locks
-The **Locks** section allows you to lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. For more information, see [Lock resources with Azure Resource Manager](../azure-resource-manager/management/lock-resources.md).
+The **Locks** section allows you to lock a subscription, resource group, or resource to prevent other users in your organization from accidentally deleting or modifying critical resources. For more information, see [Lock resources with Azure Resource Manager](/azure/azure-resource-manager/management/lock-resources).
## Administration settings
To reboot one or more nodes of your cache, select the desired nodes and select *
## Monitoring
-The **Monitoring** section allows you to configure diagnostics and monitoring for your Azure Cache for Redis.
-For more information on Azure Cache for Redis monitoring and diagnostics, see [How to monitor Azure Cache for Redis](cache-how-to-monitor.md).
+The **Monitoring** section allows you to configure diagnostics and monitoring for your Azure Cache for Redis instance.
+- For more information on Azure Cache for Redis monitoring and diagnostics, see [Monitor Azure Cache for Redis](monitor-cache.md).
+- For information on how to set up and use Azure Cache for Redis monitoring and diagnostics, see [How to monitor Azure Cache for Redis](cache-how-to-monitor.md).
-- [Insights](#insights)-- [Metrics](#metrics)-- [Alerts](#alerts)-- [Diagnostic settings](#diagnostic-settings)-- [Advisor recommendations](#advisor-recommendations) ### Insights
-Use **Insights** to see groups of predefined tiles and charts to use as starting point for your cache metrics.
-
-For more information, see [Use Insights for predefined charts](cache-how-to-monitor.md#use-insights-for-predefined-charts).
+Use **Insights** to see groups of predefined tiles and charts to use as starting point for your cache metrics. For more information, see [Insights](monitor-cache.md#insights).
### Metrics
-Select **Metrics** to Create your own custom chart to track the metrics you want to see for your cache. For more information, see [Create alerts](cache-how-to-monitor.md#create-alerts).
+Select **Metrics** to create your own custom chart to track the metrics you want to see for your cache. For more information, see [Create your own metrics](cache-how-to-monitor.md#create-your-own-metrics).
### Alerts
Select **Alerts** to configure alerts based on Azure Cache for Redis metrics. Fo
### Diagnostic settings
-By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-monitor/essentials/data-platform-metrics.md) and then deleted. To persist your cache metrics for longer than 30 days, select **Diagnostics settings** to [configure the storage account](cache-how-to-monitor.md#use-a-storage-account-to-export-cache-metrics) used to store cache diagnostics.
+By default, cache metrics in Azure Monitor are [stored for 30 days](/azure/azure-monitor/essentials/data-platform-metrics) and then deleted. To persist your cache metrics for longer than 30 days, select **Diagnostics settings** to [configure the storage account](monitor-cache.md#data-storage) used to store cache diagnostics.
>[!NOTE]
->In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
->
+>In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to Azure Monitor logs](/azure/azure-monitor/essentials/stream-monitoring-data-event-hubs).
### Advisor recommendations
Azure Automation delivers a cloud-based automation, operating system updates, an
Select **Tasks** to help you manage Azure Cache for Redis resources more easily. These tasks vary in number and availability, based on the resource type. Presently, you can only use the **Send monthly cost for resource** template to create a task while in preview.
-For more information, see [Manage Azure resources and monitor costs by creating automation tasks](../logic-apps/create-automation-tasks-azure-resources.md).
+For more information, see [Manage Azure resources and monitor costs by creating automation tasks](/azure/logic-apps/create-automation-tasks-azure-resources).
### Export template
-Select **Export template** to build and export a template of your deployed resources for future deployments. For more information about working with templates, see [Deploy resources with Azure Resource Manager templates](../azure-resource-manager/templates/deploy-powershell.md).
+Select **Export template** to build and export a template of your deployed resources for future deployments. For more information about working with templates, see [Deploy resources with Azure Resource Manager templates](/azure/azure-resource-manager/templates/deploy-powershell).
## Support & troubleshooting settings
The settings in the **Support + troubleshooting** section provide you with optio
### Resource health
-**Resource health** watches your resource and tells you if it's running as expected. For more information about the Azure Resource health service, see [Azure Resource health overview](../service-health/resource-health-overview.md).
+**Resource health** watches your resource and tells you if it's running as expected. For more information about the Azure Resource health service, see [Azure Resource health overview](/azure/service-health/resource-health-overview).
> [!NOTE] > Resource health is currently unable to report on the health of Azure Cache for Redis instances hosted in a virtual network. For more information, see [Do all cache features work when hosting a cache in a VNET?](cache-how-to-premium-vnet.md#do-all-cache-features-work-when-a-cache-is-hosted-in-a-virtual-network)
azure-cache-for-redis Cache How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-monitor.md
Title: Monitor Azure Cache for Redis
-description: Learn how to monitor the health and performance your Azure Cache for Redis instances.
+ Title: How to monitor Azure Cache for Redis
+description: Learn how to monitor the health and performance of your Azure Cache for Redis instances.
Previously updated : 12/02/2023 Last updated : 02/29/2024
-# Monitor Azure Cache for Redis
+# How to monitor Azure Cache for Redis
-Azure Cache for Redis uses [Azure Monitor](../azure-monitor/index.yml) to provide several options for monitoring your cache instances. Use these tools to monitor the health of your Azure Cache for Redis instances and to help you manage your caching applications.
+Azure Cache for Redis uses [Azure Monitor](/azure/azure-monitor/index) to provide several options for monitoring your cache instances. Use these tools to monitor the health of your Azure Cache for Redis instances and to help you manage your caching applications.
Use Azure Monitor to:
Use Azure Monitor to:
Metrics for Azure Cache for Redis instances are collected using the Redis [`INFO`](https://redis.io/commands/info) command. Metrics are collected approximately two times per minute and automatically stored for 30 days so they can be displayed in the metrics charts and evaluated by alert rules.
-To configure a different retention policy, see [Use a storage account to export cache metrics](#use-a-storage-account-to-export-cache-metrics).
+To configure a different retention policy, see [Data storage](monitor-cache.md#data-storage). For more information about the different `INFO` values used for each cache metric, see [Create your own metrics](#create-your-own-metrics).
-For more information about the different `INFO` values used for each cache metric, see [Create your own metrics](#create-your-own-metrics).
+For detailed information about all the monitoring options available for Azure Cache for Redis, see [Monitor Azure Cache for Redis](monitor-cache.md).
+<a name="use-a-storage-account-to-export-cache-metrics"></a>
+<a name="list-of-metrics"></a>
+<a name="monitor-azure-cache-for-redis"></a>
## View cache metrics
-The Resource menu shows some simple metrics in two places: **Overview** and **Monitoring**.
+You can view Azure Monitor metrics for Azure Cache for Redis directly from an Azure Cache for Redis resource in the [Azure portal](https://portal.azure.com).
-To view basic cache metrics, [find your cache](cache-configure.md#configure-azure-cache-for-redis-settings) in the [Azure portal](https://portal.azure.com). On the left, select **Overview**. You see the following predefined monitoring charts: **Memory Usage**, and **Redis Server Load**. These charts are useful summaries that allow you to take a quick look at the state of your cache.
+[Select your Azure Cache for Redis instance](cache-configure.md#configure-azure-cache-for-redis-settings) in the portal. The **Overview** page shows the predefined **Memory Usage** and **Redis Server Load** monitoring charts. These charts are useful summaries that allow you to take a quick look at the state of your cache.
:::image type="content" source="./media/cache-how-to-monitor/cache-overview-metrics.png" alt-text="Screen showing two charts: Memory Usage and Redis Server Load.":::
-For more in depth information, you can see more metrics under the **Monitoring** section of the Resource menu. Select **Metrics** to see, create, or customize a chart by adding metrics, removing metrics, and changing the reporting interval.
+For more in-depth information, you can monitor the following useful Azure Cache for Redis metrics from the **Monitoring** section of the Resource menu.
-
-The other options under **Monitoring**, provide other ways to view and use the metrics for your caches.
-
-|Selection | Description |
-|||
-| [**Insights**](#use-insights-for-predefined-charts) | A group of predefined tiles and charts to use as starting point for your cache metrics. |
-| [**Alerts**](#create-alerts) | Configure alerts based on metrics and activity logs. |
-| [**Metrics**](#create-your-own-metrics) | Create your own custom chart to track the metrics you want to see. |
-| [**Advisor Recommendations**](cache-configure.md#advisor-recommendations)) | Helps you follow best practices to optimize your Azure deployments. |
-| [**Workbooks**](#organize-with-workbooks) | Organize your metrics into groups so that you display metric information in a coherent and effective way. |
-
-## View metrics charts with Azure Monitor for Azure Cache for Redis
-
-Use [Azure Monitor for Azure Cache for Redis](redis-cache-insights-overview.md) for a view of the overall performance, failures, capacity, and operational health of all your Azure Cache for Redis resources. View metrics in a customizable, unified, and interactive experience that lets you drill down into details for individual resources. Azure Monitor for Azure Cache for Redis is based on the [workbooks feature of Azure Monitor](../azure-monitor/visualize/workbooks-overview.md) that provides rich visualizations for metrics and other data. To learn more, see the [Explore Azure Monitor for Azure Cache for Redis](redis-cache-insights-overview.md) article.
-
-While you can access Azure Monitor features from the Monitor menu in the Azure portal, Azure Monitor features can be accessed directly from the Resource menu for an Azure Cache for Redis resource. For more information on working with metrics using Azure Monitor, see [Overview of metrics in Microsoft Azure](../azure-monitor/data-platform.md).
-
-For scenarios where you don't need the full flexibility of Azure Monitor for Azure Cache for Redis, you can instead view metrics and create custom charts using **Metrics** from the Resource menu for your cache, and customize your chart using your preferred metrics, reporting interval, chart type, and more. For more information, see [Create your own metrics](#create-your-own-metrics).
-
-## Use Insights for predefined charts
-
-The **Monitoring** section in the Resource menu contains **Insights**. When you select **Insights**, you see groupings of three types of charts: **Overview**, **Performance**, and **Operations**.
--
-Each tab contains status tiles and charts. These tiles and charts are a starting point for your metrics. If you wish to expand beyond **Insights**, you can define your own alerts, metrics, diagnostic settings and workbooks.
-
-## Use a storage account to export cache metrics
-
-By default, cache metrics in Azure Monitor are [stored for 30 days](../azure-monitor/essentials/data-platform-metrics.md) and then deleted. To persist your cache metrics for longer than 30 days, you can use a [storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage) and specify a **Retention (days)** policy that meets your requirements.
-
-Configure a storage account to use with to store your metrics. The storage account must be in the same region as the caches. Once you create a storage account, configure the storage account for your cache metrics:
-
-1. In the **Azure Cache for Redis** page, under the **Monitoring** heading, select **Diagnostics settings**.
-
-1. Select **+ Add diagnostic setting**.
-
-1. Name the settings.
+| Azure Cache for Redis metric | More information |
+| | |
+| Network bandwidth usage |[Cache performance - available bandwidth](cache-planning-faq.yml#azure-cache-for-redis-performance) |
+| Connected clients |[Default Redis server configuration - max clients](cache-configure.md#maxclients) |
+| Server load |[Redis Server Load](monitor-cache-reference.md#azure-cache-for-redis-metrics) |
+| Memory usage |[Cache performance - size](cache-planning-faq.yml#azure-cache-for-redis-performance) |
-1. Check **Archive to a storage account**. You're charged normal data rates for storage and transactions when you send diagnostics to a storage account.
-
-1. Select **Configure** to choose the storage account in which to store the cache metrics.
-
-1. Under the table heading **metric**, check box beside the line items you want to store, such as **AllMetrics**. Specify a **Retention (days)** policy. The maximum days retention you can specify is **365 days**. However, if you want to keep the metrics data forever, set **Retention (days)** to **0**.
-
-1. Select **Save**.
-
- :::image type="content" source="./media/cache-how-to-monitor/cache-diagnostics.png" alt-text="Redis diagnostics":::
-
->[!NOTE]
->In addition to archiving your cache metrics to storage, you can also [stream them to an Event hub or send them to a Log Analytics workspace](../azure-monitor/essentials/rest-api-walkthrough.md#retrieve-metric-values).
->
-To access your metrics, you view them in the Azure portal as previously described in this article. You can also access them using the [Azure Monitor Metrics REST API](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md).
+For a complete list and description of metrics you can monitor, see [Azure Cache for Redis metrics](monitor-cache-reference.md#azure-cache-for-redis-metrics).
-> [!NOTE]
-> If you change storage accounts, the data in the previously configured storage account remains available for download, but it is not displayed in the Azure portal.
->
+The other options under **Monitoring** provide other ways to monitor your caches. For detailed information, see [Monitor Azure Cache for Redis](monitor-cache.md).
## Create your own metrics
In the Resource menu on the left, select **Metrics** under **Monitoring**. Here,
### Aggregation types
-When you're seeing the aggregation type:
--- **Count** show 2, it indicates the metric received 2 data points for your time granularity (1 minute).-- **Max** shows the maximum value of a data point in the time granularity.-- **Min** shows the minimum value of a data point in the time granularity.-- **Average** shows the average value of all data points in the time granularity.-- **Sum** shows the sum of all data points in the time granularity and might be misleading depending on the specific metric.- Under normal conditions, **Average** and **Max** are similar because only one node emits these metrics (the primary node). In a scenario where the number of connected clients changes rapidly, **Max**, **Average**, and **Min** would show different values and is also expected behavior. Generally, **Average** shows you a smooth chart of your desired metric and reacts well to changes in time granularity. **Max** and **Min** can hide large changes in the metric if the time granularity is large but can be used with a small time granularity to help pinpoint exact times when large changes occur in the metric.
The types **Count** and **Sum** can be misleading for certain metrics (connected
> [!NOTE] > Even when the cache is idle with no connected active client applications, you might see some cache activity, such as connected clients, memory usage, and operations being performed. The activity is normal in the operation of cache.
->
For nonclustered caches, we recommend using the metrics without the suffix `Instance Based`. For example, to check server load for your cache instance, use the metric _Server Load_. In contrast, for clustered caches, we recommend using the metrics with the suffix `Instance Based`. Then, add a split or filter on `ShardId`. For example, to check the server load of shard 1, use the metric **Server Load (Instance Based)**, then apply filter **ShardId = 1**.
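If you want to pull these metrics and aggregations programmatically rather than through the portal charts, the following is a hedged sketch using the `azure-monitor-query` package; the resource ID is a placeholder, and the metric name `connectedclients` is an assumption to verify against the metrics reference.

```python
# Hedged sketch: querying cache metrics with explicit aggregation types (azure-monitor-query).
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

client = MetricsQueryClient(DefaultAzureCredential())

# Full Azure resource ID of the cache; all segments are placeholders.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Cache/Redis/<cache-name>"
)

result = client.query_resource(
    resource_id,
    metric_names=["connectedclients"],  # metric name is an assumption; check the metrics reference
    aggregations=[MetricAggregationType.AVERAGE, MetricAggregationType.MAXIMUM],
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, "avg:", point.average, "max:", point.maximum)
```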
-## List of metrics
--- 99th Percentile Latency (preview)
- - Depicts the worst-case (99th percentile) latency of server-side commands. Measured by issuing `PING` commands from the load balancer to the Redis server and tracking the time to respond.
- - Useful for tracking the health of your Redis instance. Latency increases if the cache is under heavy load or if there are long running commands that delay the execution of the `PING` command.
- - This metric is only available in Standard and Premium tier caches.
- - This metric isn't available for caches that are affected by Cloud Service retirement. See more information [here](cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic)
-- Cache Latency (preview)
- - The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval.
-- Cache Misses
- - The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses don't necessarily mean there's an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item isn't there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache because of memory pressure, then there might be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`.
-- Cache Miss Rate
- - The percent of unsuccessful key lookups during the specified reporting interval. This metric isn't available in Enterprise or Enterprise Flash tier caches.
-- Cache Read
- - The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](./cache-planning-faq.yml#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.
-- Cache Write
- - The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client.
-- Connected Clients
- - The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there might still be a few instances of connected clients because of internal processes and connections.
-- Connected Clients Using Microsoft Entra Token (preview)
- - The number of client connections to the cache authenticated using Microsoft Entra token during the specified reporting interval.
-- Connections Created Per Second
- - The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. This metric isn't available in Enterprise or Enterprise Flash tier caches.
-- Connections Closed Per Second
- - The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. This metric isn't available in Enterprise or Enterprise Flash tier caches.
-- CPU
- - The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. Note: This metric can be noisy due to low priority background security processes running on the node, so we recommend monitoring Server Load metric to track load on a Redis server.
-- Errors
- - Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types, but could add more in the future. The error types represented now are as follows:
 - **Failover** – when a cache fails over (subordinate promotes to primary).
 - **Dataloss** – when there's data loss on the cache.
 - **UnresponsiveClients** – when the clients aren't reading data from the server fast enough, and specifically, when the number of bytes in the Redis server output buffer for a client goes over 1,000,000 bytes.
 - **AOF** – when there's an issue related to AOF persistence.
 - **RDB** – when there's an issue related to RDB persistence.
 - **Import** – when there's an issue related to Import RDB.
 - **Export** – when there's an issue related to Export RDB.
- - **AADAuthenticationFailure** (preview) - when there's an authentication failure using Microsoft Entra access token. Not recommended. Use **MicrosoftEntraAuthenticationFailure** instead.
- - **AADTokenExpired** (preview) - when a Microsoft Entra access token used for authentication isn't renewed and it expires. Not recommended. Use **MicrosoftEntraTokenExpired** instead.
- - **MicrosoftEntraAuthenticationFailure** (preview) - when there's an authentication failure using Microsoft Entra access token.
- - **MicrosoftEntraTokenExpired** (preview) - when a Microsoft Entra access token used for authentication isn't renewed and it expires.
-
-> [!NOTE]
-> Metrics for errors aren't available when using the Enterprise tiers.
--- Evicted Keys
- - The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit.
- - This number maps to `evicted_keys` from the Redis INFO command.
-- Expired Keys
- - The number of items expired from the cache during the specified reporting interval. This value maps to `expired_keys` from the Redis INFO command.
-
-> [!IMPORTANT]
-> Geo-replication metrics are affected by monthly internal maintenance operations. The Azure Cache for Redis service periodically patches all caches with the latest platform features and improvements. During these updates, each cache node is taken offline, which temporarily disables the geo-replication link. If your geo replication link is unhealthy, check to see if it was caused by a patching event on either the geo-primary or geo-secondary cache by using **Diagnose and Solve Problems** from the Resource menu in the portal. Depending on the amount of data in the cache, the downtime from patching can take anywhere from a few minutes to an hour. If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
->
-
-> [!NOTE]
-> The [Geo-Replication Dashboard](#organize-with-workbooks) workbook is a simple and easy way to view all Premium-tier geo-replication metrics in the same place. This dashboard will pull together metrics that are only emitted by the geo-primary or geo-secondary, so they can be viewed simultaneously.
->
--- Geo Replication Connectivity Lag
- - Depicts the time, in seconds, since the last successful data synchronization between geo-primary & geo-secondary. If the link goes down, this value continues to increase, indicating a problem.
- - This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value.
- - This metric is only available in the Premium tier for caches with geo-replication enabled.
-- Geo Replication Data Sync Offset
- - Depicts the approximate amount of data in bytes that has yet to be synchronized to geo-secondary cache.
- - This metric is only emitted _from the geo-primary_ cache instance. On the geo-secondary instance, this metric has no value.
- - This metric is only available in the Premium tier for caches with geo-replication enabled.
-- Geo Replication Full Sync Event Finished
- - Depicts the completion of full synchronization between geo-replicated caches. When you see lots of writes on geo-primary, and replication between the two caches can't keep up, then a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
- - The metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
- - This metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
- - This metric is only available in the Premium tier for caches with geo-replication enabled.
--- Geo Replication Full Sync Event Started
- - Depicts the start of full synchronization between geo-replicated caches. When there are many writes in geo-primary, and replication between the two caches can't keep up, then a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
- - The metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
- - The metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
- - The metric is only available in the Premium tier for caches with geo-replication enabled.
--- Geo Replication Healthy
- - Depicts the status of the geo-replication link between caches. There can be two possible states that the replication link can be in:
- - 0 - disconnected/unhealthy
- - 1 - healthy
- - The metric is available in the Enterprise, Enterprise Flash tiers, and Premium tier caches with geo-replication enabled.
- - In caches on the Premium tier, this metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
- - This metric might indicate a disconnected/unhealthy replication status for several reasons, including: monthly patching, host OS updates, network misconfiguration, or failed geo-replication link provisioning.
- - A value of 0 doesn't mean that data on the geo-replica is lost. It just means that the link between geo-primary and geo-secondary is unhealthy.
- - If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
--- Gets
- - Sum of the number of get commands run on the cache during the specified reporting interval. The sum is a combined total of the increases in the `cmdstat` counts reported by the Redis INFO all command for all commands in the _get_ family, including `GET`, `HGET` , `MGET`, and others. This value can differ from the total number of hits and misses because some individual commands access multiple keys. For example: `MGET key1 key2 key3` only increments the number of gets by one but increments the combined number of hits and misses by three.
-- Operations per Second
- - The total number of commands processed per second by the cache server during the specified reporting interval. This value maps to "instantaneous_ops_per_sec" from the Redis INFO command.
-- Server Load
- - The percentage of CPU cycles in which the Redis server is busy processing and _not waiting idle_ for messages. If this counter reaches 100, the Redis server has hit a performance ceiling, and the CPU can't process work any faster. You can expect a large latency effect. If you're seeing a high Redis Server Load, such as 100, because you're sending many expensive commands to the server, then you might see timeout exceptions in the client. In this case, you should consider scaling up, scaling out to a Premium cluster, or partitioning your data into multiple caches. When _Server Load_ is only moderately high, such as 50 to 80 percent, then average latency usually remains low, and timeout exceptions could have other causes than high server latency.
- - The _Server Load_ metric is sensitive to other processes on the machine using the existing CPU cycles that reduce the Redis server's idle time. For example, on the _C1_ tier, background tasks such as virus scanning cause _Server Load_ to spike higher for no obvious reason. We recommend that you pay attention to other metrics such as operations, latency, and CPU, in addition to _Server Load_.
-
-> [!CAUTION]
-> The _Server Load_ metric can present incorrect data for Enterprise and Enterprise Flash tier caches. Sometimes _Server Load_ is represented as being over 100. We are investigating this issue. We recommend using the CPU metric instead in the meantime.
--- Sets
- - Sum of the number of set commands run on the cache during the specified reporting interval. This sum is a combined total of the increases in the `cmdstat` counts reported by the Redis INFO all command for all commands in the _set_ family, including `SET`, `HSET` , `MSET`, and others.
-- Total Keys
- - The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys returns the maximum number of keys of the shard that had the maximum number of keys during the reporting interval.
-- Total Operations
- - The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub, there are no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there are `Total Operations` metrics that reflect the cache usage for pub/sub operations.
-- Used Memory
- - The amount of cache memory in MB that is used for key/value pairs in the cache during the specified reporting interval. This value maps to `used_memory` from the Redis INFO command. This value doesn't include metadata or fragmentation.
- - On the Enterprise and Enterprise Flash tier, the Used Memory value includes the memory in both the primary and replica nodes. This can make the metric appear twice as large as expected.
-- Used Memory Percentage
- - The percent of total memory that is being used during the specified reporting interval. This value references the `used_memory` value from the Redis INFO command to calculate the percentage. This value doesn't include fragmentation.
-- Used Memory RSS
- - The amount of cache memory used in MB during the specified reporting interval, including fragmentation. This value maps to `used_memory_rss` from the Redis INFO command. This metric isn't available in Enterprise or Enterprise Flash tier caches.
## Create alerts
You can configure alerts based on metrics and activity logs. Azure Monitor allows you to configure an alert to do the following when it triggers:
To configure alerts for your cache, select **Alerts** under **Monitoring** on th
:::image type="content" source="./media/cache-how-to-monitor/cache-monitoring.png" alt-text="Screenshot showing how to create an alert.":::
-For more information about configuring and using Alerts, see [Overview of Alerts](../azure-monitor/alerts/alerts-classic-portal.md).
+For more information about configuring and using alerts, see [Overview of Alerts](/azure/azure-monitor/alerts/alerts-classic-portal) and [Azure Cache for Redis alerts](monitor-cache.md#alerts).
## Organize with workbooks
The two workbooks provided are:
## Related content
-- [Azure Monitor for Azure Cache for Redis](redis-cache-insights-overview.md)
-- [Azure Monitor Metrics REST API](../azure-monitor/essentials/stream-monitoring-data-event-hubs.md)
+- [Monitor Azure Cache for Redis](monitor-cache.md)
+- [Azure Monitor Insights for Azure Cache for Redis](redis-cache-insights-overview.md)
+- [Azure Cache for Redis monitoring data reference](monitor-cache-reference.md)
+- [Azure Monitor Metrics REST API](/azure/azure-monitor/essentials/stream-monitoring-data-event-hubs)
- [`INFO`](https://redis.io/commands/info)
azure-cache-for-redis Cache Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-insights-overview.md
Title: Azure Monitor for Azure Cache for Redis | Microsoft Docs
-description: This article describes the Azure Monitor for Azure Redis Cache feature, which provides cache owners with a quick understanding of performance and utilization problems.
+ Title: Azure Monitor insights for Azure Cache for Redis | Microsoft Docs
+description: This article describes Azure Monitor insights for Azure Cache for Redis, which provides cache owners with a quick understanding of performance and utilization.
- Previously updated : 03/15/2024 Last updated : 03/25/2024
-# Explore Azure Monitor for Azure Cache for Redis
-
-For all of your Azure Cache for Redis resources, Azure Monitor for Azure Cache for Redis provides a unified, interactive view of:
-- Overall performance
-- Failures
-- Capacity
-- Operational health
-
-This article helps you understand the benefits of this new monitoring experience. It also shows how to modify and adapt the experience to fit the unique needs of your organization.
-
-## Introduction
-
-Before starting the experience, you should understand how Azure Monitor for Azure Cache for Redis visually presents information.
-
-It delivers:
-- **At scale perspective** of your Azure Cache for Redis resources in a single location across all of your subscriptions. You can selectively scope to only the subscriptions and resources you want to evaluate.
-
-- **Drill-down analysis** of a particular Azure Cache for Redis resource. You can diagnose problems and see detailed analysis of utilization, failures, capacity, and operations. Select any of these categories to see an in-depth view of relevant information.
+# Azure Monitor insights for Azure Cache for Redis
-- **Customization** of this experience, which is built atop Azure Monitor workbook templates. The experience lets you change what metrics are displayed and modify or set thresholds that align with your limits. You can save the changes in a custom workbook and then pin workbook charts to Azure dashboards.
+Azure Monitor insights for Azure Cache for Redis provide a unified, interactive view of cache performance, failures, capacity, and operational health. This article shows you how to view Azure Cache for Redis insights across all of your subscriptions, and how to modify and adapt insights to fit the unique needs of your organization.
-This feature doesn't require you to enable or configure anything. Azure Cache for Redis information is collected by default.
+For more information about Azure Monitor for Azure Cache for Redis, see [Monitor Azure Cache for Redis](monitor-cache.md). For a full list of the metric definitions that form these insights, see [Supported metrics for Microsoft.Cache/redis](monitor-cache-reference.md#supported-metrics-for-microsoftcacheredis).
->[!NOTE]
->There is no charge to access this feature. You're charged only for the Azure Monitor essential features you configure or enable, as described on the [Azure Monitor pricing details](https://azure.microsoft.com/pricing/details/monitor/) page.
+## View insights from Azure Monitor
-## View utilization and performance metrics for Azure Cache for Redis
+You can access Azure Cache for Redis insights from the **Insights Hub** of Azure Monitor.
To view the utilization and performance of your Azure Cache for Redis instances across all of your subscriptions, do the following steps:
To view the utilization and performance of your Azure Cache for Redis instances
1. Select **Azure Cache for Redis**. If this option isn't present, select **More** > **Azure Cache for Redis**.
+## Workbooks
+
+Azure Cache for Redis insights are based on the [workbooks feature of Azure Monitor](/azure/azure-monitor/visualize/workbooks-overview) that provides rich visualizations for metrics and other data. Azure Cache for Redis insights provide two workbooks by default:
+
+ :::image type="content" source="media/cache-how-to-monitor/cache-monitoring-workbook.png" alt-text="Screenshot showing the workbooks selected in the Resource menu.":::
+
+- **Azure Cache For Redis Resource Overview** combines many of the most commonly used metrics so that the health and performance of the cache instance can be viewed at a glance.
+ :::image type="content" source="media/cache-how-to-monitor/cache-monitoring-resource-overview.png" alt-text="Screenshot of graphs showing a resource overview for the cache.":::
+
+- **Geo-Replication Dashboard** pulls geo-replication health and status metrics from both the geo-primary and geo-secondary cache instances to give a complete picture of geo-replication health. Using this dashboard is recommended, as some geo-replication metrics are only emitted from either the geo-primary or geo-secondary.
+ :::image type="content" source="media/cache-how-to-monitor/cache-monitoring-geo-dashboard.png" alt-text="Screenshot showing the geo-replication dashboard with a geo-primary and geo-secondary cache set.":::
+ ### Overview On **Overview**, the table displays interactive Azure Cache for Redis metrics. You can filter the results based on the options you select from the following drop-down lists:
When you select **Failures** at the top of the page, the **Failures** table of t
:::image type="content" source="./media/cache-insights-overview/failures.png" alt-text="Screenshot of failures with a breakdown by HTTP request type.":::
-### Metric definitions
-
-For a full list of the metric definitions that form these workbooks, check out the [article on available metrics and reporting intervals](./cache-how-to-monitor.md#create-your-own-metrics).
-
-## View from an Azure Cache for Redis resource
+## View insights from an Azure Cache for Redis resource
To access Azure Monitor for Azure Cache for Redis directly from an individual resource:
To expand or collapse all views in a workbook, select the expand symbol to the l
:::image type="content" source="../cosmos-db/media/insights-overview/expand.png" alt-text="Screenshot of highlighted expand-workbook symbol.":::
-## Customize Azure Monitor for Azure Cache for Redis
+## Customize Azure Monitor insights for Azure Cache for Redis
Because this experience is built atop Azure Monitor workbook templates, you can select **Customize** > **Edit** > **Save** to save a copy of your modified version into a custom workbook.
After you save a custom workbook, go to the workbook gallery to open it.
:::image type="content" source="../cosmos-db/media/insights-overview/gallery.png" alt-text="Screenshot of a command bar with Gallery highlighted.":::
-## Troubleshooting
-
-For troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
-
-## Next steps
+## Related content
-- Configure [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [service health notifications](../service-health/alerts-activity-log-service-notifications-portal.md) to set up automated alerts that aid in detecting problems.-- Learn the scenarios that workbooks support, how to author or customize reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
+- [Create interactive reports with Azure Monitor workbooks](/azure/azure-monitor/visualize/workbooks-overview)
+- [Troubleshoot workbook-based insights](/azure/azure-monitor/insights/troubleshoot-workbooks)
+- [Configure metric alerts](/azure/azure-monitor/alerts/alerts-metric)
+- [Configure service health notifications](/azure/service-health/alerts-activity-log-service-notifications-portal)
azure-cache-for-redis Monitor Cache Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/monitor-cache-reference.md
+
+ Title: Monitoring data reference for Azure Cache for Redis
+description: This article contains important reference material you need when you monitor Azure Cache for Redis.
Last updated : 03/21/2024+++++++
+# Azure Cache for Redis monitoring data reference
++
+See [Monitor Azure Cache for Redis](monitor-cache.md) for details on the data you can collect for Azure Cache for Redis and how to use it.
++
+For more information about the supported metrics for Microsoft.Cache/redis and Microsoft.Cache/redisEnterprise, see [List of metrics](monitor-cache-reference.md#azure-cache-for-redis-metrics).
+
+### Supported metrics for Microsoft.Cache/redis
+The following table lists the metrics available for the Microsoft.Cache/redis resource type.
++
+### Supported metrics for Microsoft.Cache/redisEnterprise
+The following table lists the metrics available for the Microsoft.Cache/redisEnterprise resource type.
++
+<a name="available-metrics-and-reporting-intervals"></a>
+<a name="create-your-own-metrics"></a>
+<a name="metrics-details"></a>
+## Azure Cache for Redis metrics
+
+The following list provides details and more information about the supported Azure Monitor metrics for [Microsoft.Cache/redis](/azure/azure-monitor/reference/supported-metrics/microsoft-cache-redis-metrics) and [Microsoft.Cache/redisEnterprise](/azure/azure-monitor/reference/supported-metrics/microsoft-cache-redisenterprise-metrics).
++
+- 99th Percentile Latency (preview)
+ - Depicts the worst-case (99th percentile) latency of server-side commands. Measured by issuing `PING` commands from the load balancer to the Redis server and tracking the time to respond.
+ - Useful for tracking the health of your Redis instance. Latency increases if the cache is under heavy load or if there are long running commands that delay the execution of the `PING` command.
+ - This metric is only available in Standard and Premium tier caches.
+ - This metric isn't available for caches that are affected by Cloud Services (classic) retirement. For more information, see [Caches with a dependency on Cloud Services (classic)](cache-faq.yml#caches-with-a-dependency-on-cloud-services--classic).
+- Cache Latency (preview)
+ - The latency of the cache calculated using the internode latency of the cache. This metric is measured in microseconds, and has three dimensions: `Avg`, `Min`, and `Max`. The dimensions represent the average, minimum, and maximum latency of the cache during the specified reporting interval.
+- Cache Misses
+ - The number of failed key lookups during the specified reporting interval. This number maps to `keyspace_misses` from the Redis INFO command. Cache misses don't necessarily mean there's an issue with the cache. For example, when using the cache-aside programming pattern, an application looks first in the cache for an item. If the item isn't there (cache miss), the item is retrieved from the database and added to the cache for next time. Cache misses are normal behavior for the cache-aside programming pattern. If the number of cache misses is higher than expected, examine the application logic that populates and reads from the cache. If items are being evicted from the cache because of memory pressure, then there might be some cache misses, but a better metric to monitor for memory pressure would be `Used Memory` or `Evicted Keys`. (A minimal cache-aside code sketch appears after this metrics list.)
+- Cache Miss Rate
+ - The percent of unsuccessful key lookups during the specified reporting interval. This metric isn't available in Enterprise or Enterprise Flash tier caches.
+- Cache Read
+ - The amount of data read from the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth used by this cache. If you want to set up alerts for server-side network bandwidth limits, then create it using this `Cache Read` counter. See [this table](./cache-planning-faq.yml#azure-cache-for-redis-performance) for the observed bandwidth limits for various cache pricing tiers and sizes.
+- Cache Write
+ - The amount of data written to the cache in Megabytes per second (MB/s) during the specified reporting interval. This value is derived from the network interface cards that support the virtual machine that hosts the cache and isn't Redis specific. This value corresponds to the network bandwidth of data sent to the cache from the client.
+- Connected Clients
+ - The number of client connections to the cache during the specified reporting interval. This number maps to `connected_clients` from the Redis INFO command. Once the [connection limit](cache-configure.md#default-redis-server-configuration) is reached, later attempts to connect to the cache fail. Even if there are no active client applications, there might still be a few instances of connected clients because of internal processes and connections.
+- Connected Clients Using Microsoft Entra Token (preview)
+ - The number of client connections to the cache authenticated using Microsoft Entra token during the specified reporting interval.
+- Connections Created Per Second
+ - The number of instantaneous connections created per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. This metric isn't available in Enterprise or Enterprise Flash tier caches.
+- Connections Closed Per Second
+ - The number of instantaneous connections closed per second on the cache via port 6379 or 6380 (SSL). This metric can help identify whether clients are frequently disconnecting and reconnecting, which can cause higher CPU usage and Redis Server Load. This metric isn't available in Enterprise or Enterprise Flash tier caches.
+- CPU
+ - The CPU utilization of the Azure Cache for Redis server as a percentage during the specified reporting interval. This value maps to the operating system `\Processor(_Total)\% Processor Time` performance counter. Note that this metric can be noisy due to low-priority background security processes running on the node, so we recommend monitoring the Server Load metric to track load on a Redis server.
+- Errors
+ - Specific failures and performance issues that the cache could be experiencing during a specified reporting interval. This metric has eight dimensions representing different error types. The error types represented now are as follows:
+ - **Failover** - when a cache fails over (subordinate promotes to primary).
+ - **Dataloss** - when there's data loss on the cache.
+ - **UnresponsiveClients** - when the clients aren't reading data from the server fast enough, and specifically, when the number of bytes in the Redis server output buffer for a client goes over 1,000,000 bytes.
+ - **AOF** - when there's an issue related to AOF persistence.
+ - **RDB** - when there's an issue related to RDB persistence.
+ - **Import** - when there's an issue related to Import RDB.
+ - **Export** - when there's an issue related to Export RDB.
+ - **AADAuthenticationFailure** (preview) - when there's an authentication failure using Microsoft Entra access token.
+ - **AADTokenExpired** (preview) - when a Microsoft Entra access token used for authentication isn't renewed and it expires.
+- Evicted Keys
+ - The number of items evicted from the cache during the specified reporting interval because of the `maxmemory` limit.
+ - This number maps to `evicted_keys` from the Redis INFO command.
+- Expired Keys
+ - The number of items expired from the cache during the specified reporting interval. This value maps to `expired_keys` from the Redis INFO command.
+
+- Geo-replication metrics
+
+ Geo-replication metrics are affected by monthly internal maintenance operations. The Azure Cache for Redis service periodically patches all caches with the latest platform features and improvements. During these updates, each cache node is taken offline, which temporarily disables the geo-replication link. If your geo replication link is unhealthy, check to see if it was caused by a patching event on either the geo-primary or geo-secondary cache by using **Diagnose and Solve Problems** from the Resource menu in the portal. Depending on the amount of data in the cache, the downtime from patching can take anywhere from a few minutes to an hour. If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+
+ The [Geo-Replication Dashboard](cache-insights-overview.md#workbooks) workbook is a simple and easy way to view all Premium-tier geo-replication metrics in the same place. This dashboard pulls together metrics that are only emitted by the geo-primary or geo-secondary, so they can be viewed simultaneously.
+
+ - Geo Replication Connectivity Lag
+ - Depicts the time, in seconds, since the last successful data synchronization between geo-primary & geo-secondary. If the link goes down, this value continues to increase, indicating a problem.
+ - This metric is only emitted **from the geo-secondary** cache instance. On the geo-primary instance, this metric has no value.
+ - This metric is only available in the Premium tier for caches with geo-replication enabled.
+ - Geo Replication Data Sync Offset
+ - Depicts the approximate amount of data in bytes that has yet to be synchronized to geo-secondary cache.
+ - This metric is only emitted _from the geo-primary_ cache instance. On the geo-secondary instance, this metric has no value.
+ - This metric is only available in the Premium tier for caches with geo-replication enabled.
+ - Geo Replication Full Sync Event Finished
+ - Depicts the completion of full synchronization between geo-replicated caches. When you see lots of writes on geo-primary, and replication between the two caches can't keep up, then a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
+ - The metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
+ - This metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
+ - This metric is only available in the Premium tier for caches with geo-replication enabled.
+
+ - Geo Replication Full Sync Event Started
+ - Depicts the start of full synchronization between geo-replicated caches. When there are many writes in geo-primary, and replication between the two caches can't keep up, then a full sync is needed. A full sync involves copying the complete data from geo-primary to geo-secondary by taking an RDB snapshot rather than a partial sync that occurs on normal instances. See [this page](https://redis.io/docs/manual/replication/#how-redis-replication-works) for a more detailed explanation.
+ - The metric reports zero most of the time because geo-replication uses partial resynchronizations for any new data added after the initial full synchronization.
+ - The metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
+ - The metric is only available in the Premium tier for caches with geo-replication enabled.
+
+ - Geo Replication Healthy
+ - Depicts the status of the geo-replication link between caches. The replication link can be in one of two states:
+ - 0 - disconnected/unhealthy
+ - 1 - healthy
+ - The metric is available in the Enterprise, Enterprise Flash tiers, and Premium tier caches with geo-replication enabled.
+ - In caches on the Premium tier, this metric is only emitted _from the geo-secondary_ cache instance. On the geo-primary instance, this metric has no value.
+ - This metric might indicate a disconnected/unhealthy replication status for several reasons, including: monthly patching, host OS updates, network misconfiguration, or failed geo-replication link provisioning.
+ - A value of 0 doesn't mean that data on the geo-replica is lost. It just means that the link between geo-primary and geo-secondary is unhealthy.
+ - If the geo-replication link is unhealthy for over an hour, [file a support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
+
+- Gets
+ - The number of get operations from the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_get`, `cmdstat_hget`, `cmdstat_hgetall`, `cmdstat_hmget`, `cmdstat_mget`, `cmdstat_getbit`, and `cmdstat_getrange`, and is equivalent to the sum of cache hits and misses during the reporting interval.
+- Operations per Second
+ - The total number of commands processed per second by the cache server during the specified reporting interval. This value maps to "instantaneous_ops_per_sec" from the Redis INFO command.
+- Server Load
+ - The percentage of CPU cycles in which the Redis server is busy processing and _not waiting idle_ for messages. If this counter reaches 100, the Redis server has hit a performance ceiling, and the CPU can't process work any faster. You can expect a large latency effect. If you're seeing a high Redis Server Load, such as 100, because you're sending many expensive commands to the server, then you might see timeout exceptions in the client. In this case, you should consider scaling up, scaling out to a Premium cluster, or partitioning your data into multiple caches. When _Server Load_ is only moderately high, such as 50 to 80 percent, then average latency usually remains low, and timeout exceptions could have other causes than high server latency.
+ - The _Server Load_ metric is sensitive to other processes on the machine using the existing CPU cycles that reduce the Redis server's idle time. For example, on the _C1_ tier, background tasks such as virus scanning cause _Server Load_ to spike higher for no obvious reason. We recommend that you pay attention to other metrics such as operations, latency, and CPU, in addition to _Server Load_.
+
+ > [!CAUTION]
+ > The _Server Load_ metric can present incorrect data for Enterprise and Enterprise Flash tier caches. Sometimes _Server Load_ is represented as being over 100. We are investigating this issue. We recommend using the CPU metric instead in the meantime.
+
+- Sets
+ - The number of set operations to the cache during the specified reporting interval. This value is the sum of the following values from the Redis INFO all command: `cmdstat_set`, `cmdstat_hset`, `cmdstat_hmset`, `cmdstat_hsetnx`, `cmdstat_lset`, `cmdstat_mset`, `cmdstat_msetnx`, `cmdstat_setbit`, `cmdstat_setex`, `cmdstat_setrange`, and `cmdstat_setnx`.
+- Total Keys
+ - The maximum number of keys in the cache during the past reporting time period. This number maps to `keyspace` from the Redis INFO command. Because of a limitation in the underlying metrics system for caches with clustering enabled, Total Keys returns the maximum number of keys of the shard that had the maximum number of keys during the reporting interval.
+- Total Operations
+ - The total number of commands processed by the cache server during the specified reporting interval. This value maps to `total_commands_processed` from the Redis INFO command. When Azure Cache for Redis is used purely for pub/sub, there are no metrics for `Cache Hits`, `Cache Misses`, `Gets`, or `Sets`, but there are `Total Operations` metrics that reflect the cache usage for pub/sub operations.
+- Used Memory
+ - The amount of cache memory in MB that is used for key/value pairs in the cache during the specified reporting interval. This value maps to `used_memory` from the Redis INFO command. This value doesn't include metadata or fragmentation.
+ - On the Enterprise and Enterprise Flash tier, the Used Memory value includes the memory in both the primary and replica nodes. This can make the metric appear twice as large as expected.
+- Used Memory Percentage
+ - The percent of total memory that is being used during the specified reporting interval. This value references the `used_memory` value from the Redis INFO command to calculate the percentage. This value doesn't include fragmentation.
+- Used Memory RSS
+ - The amount of cache memory used in MB during the specified reporting interval, including fragmentation. This value maps to `used_memory_rss` from the Redis INFO command. This metric isn't available in Enterprise or Enterprise Flash tier caches.
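To make the cache-aside pattern described under **Cache Misses** concrete, the following is a minimal sketch that uses the StackExchange.Redis client. The class, key format, and `LoadProductFromDatabaseAsync` helper are illustrative placeholders rather than anything provided by Azure Cache for Redis itself.

```csharp
using System;
using System.Threading.Tasks;
using StackExchange.Redis;

// Minimal cache-aside sketch. A lookup that misses increments keyspace_misses (the
// Cache Misses metric); the subsequent StringSetAsync populates the cache for next time.
public class ProductCache
{
    private readonly IDatabase _cache;

    public ProductCache(IConnectionMultiplexer connection) => _cache = connection.GetDatabase();

    public async Task<string> GetProductAsync(string productId)
    {
        string key = $"product:{productId}";

        // 1. Look in the cache first.
        RedisValue cached = await _cache.StringGetAsync(key);
        if (cached.HasValue)
        {
            return cached.ToString(); // cache hit
        }

        // 2. On a miss, load from the system of record and add the item to the cache.
        string product = await LoadProductFromDatabaseAsync(productId);
        await _cache.StringSetAsync(key, product, expiry: TimeSpan.FromMinutes(10));
        return product;
    }

    // Placeholder for your own data access code.
    private static Task<string> LoadProductFromDatabaseAsync(string productId) =>
        Task.FromResult($"{{ \"id\": \"{productId}\" }}");
}
```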
++
+### Supported resource logs for Microsoft.Cache/redis
+
+### Supported resource logs for Microsoft.Cache/redisEnterprise/databases
++
+### Azure Cache for Redis
+microsoft.cache/redis
+- [ACRConnectedClientList](/azure/azure-monitor/reference/tables/acrconnectedclientlist)
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics)
+
+### Azure Cache for Redis Enterprise
+Microsoft.Cache/redisEnterprise
+- [REDConnectionEvents](/azure/azure-monitor/reference/tables/redconnectionevents)
+
+- [Microsoft.Cache resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftcache)
+
+## Related content
+
+- See [Monitor Azure Cache for Redis](monitor-cache.md) for a description of monitoring Azure Cache for Redis.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
azure-cache-for-redis Monitor Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/monitor-cache.md
+
+ Title: Monitor Azure Cache for Redis
+description: Start here to learn how to monitor Azure Cache for Redis.
Last updated : 03/21/2024+++++++
+# Monitor Azure Cache for Redis
+++
+Insights for Azure Cache for Redis deliver the following experience:
+
+- **At scale perspective** of your Azure Cache for Redis resources across subscriptions. You can selectively scope to only the subscriptions and resources you want to evaluate.
+- **Drill-down analysis** of an Azure Cache for Redis resource. To diagnose problems, you can see detailed analysis of utilization, failures, capacity, and operations, or see an in-depth view of relevant information.
+- **Customization** built on Azure Monitor workbook templates. You can change what metrics are displayed and modify or set thresholds that align with your limits. You can save the changes in a custom workbook and then pin workbook charts to Azure dashboards.
+
+Insights for Azure Cache for Redis don't require you to enable or configure anything. Azure Cache for Redis information is collected by default, and there's no extra charge to access insights.
+
+To learn how to view, configure, and customize insights for Azure Cache for Redis, see [Azure Monitor insights for Azure Cache for Redis](cache-insights-overview.md).
++
+For more information about the resource types for Azure Cache for Redis, see [Azure Cache for Redis monitoring data reference](monitor-cache-reference.md).
+
+<a name="use-a-storage-account-to-export-azure-cache-for-redis-metrics"></a>
++
+For a list of available metrics for Azure Cache for Redis, see [Azure Cache for Redis monitoring data reference](monitor-cache-reference.md#metrics).
++
+For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Azure Cache for Redis, see [Azure Cache for Redis monitoring data reference](monitor-cache-reference.md#resource-logs).
+## Azure Cache for Redis resource logs
+
+In Azure Cache for Redis, two logging options are available:
+
+- **Cache Metrics** ("AllMetrics") [logs metrics from Azure Monitor](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal)
+- **Connection Logs** logs connections to the cache for security and diagnostic purposes.
+
+### Cache metrics
+
+Azure Cache for Redis emits many metrics such as `Server Load` and `Connections per Second` that are useful to log. Selecting the **AllMetrics** option allows these and other cache metrics to be logged. You can configure how long to retain the metrics.
+
+### Connection logs
+
+Azure Cache for Redis uses Azure diagnostic settings to log information on client connections to your cache. Logging and analyzing this diagnostic setting helps you understand who is connecting to your caches and the timestamp of those connections. The log data could be used to identify the scope of a security breach and for security auditing purposes.
+
+The connection logs have slightly different implementations, contents, and setup procedures for the different Azure Cache for Redis tiers. For details, see [Azure Monitor diagnostic settings](cache-monitor-diagnostic-settings.md).
++++
+## Azure Cache for Redis metrics
+
+Metrics for Azure Cache for Redis instances are collected using the Redis [`INFO`](https://redis.io/commands/info) command. Metrics are collected approximately two times per minute and automatically stored for 30 days so they can be displayed in the metrics charts and evaluated by alert rules.
+
+The metrics are reported using several reporting intervals, including **Past hour**, **Today**, **Past week**, and **Custom**. Each metrics chart displays the average, minimum, and maximum values for each metric in the chart, and some metrics display a total for the reporting interval.
+
+Each metric includes two versions: one metric measures performance for the entire cache, and for caches that use clustering, a second version of the metric, which includes `(Shard 0-9)` in the name, measures performance for a single shard in a cache. For example, if a cache has four shards, `Cache Hits` is the total number of hits for the entire cache, and `Cache Hits (Shard 3)` measures just the hits for that shard of the cache.
++
+#### Aggregation types
+
+For general information about aggregation types, see [Configure aggregation](/azure/azure-monitor/essentials/analyze-metrics#configure-aggregation).
+
+Under normal cache conditions, **Average** and **Max** are similar because only the primary node emits these metrics. In a scenario where the number of connected clients changes rapidly, **Max**, **Average**, and **Min** show different values, which is also expected behavior.
+
+The types **Count** and **Sum** can be misleading for certain metrics, such as connected clients. Instead, it's best to look at the **Average** metrics and not the **Sum** metrics.
+
+> [!NOTE]
+> Even when the cache is idle with no connected active client applications, you might see some cache activity, such as connected clients, memory usage, and operations being performed. The activity is normal in the operation of the cache.
+
+For nonclustered caches, it's best to use the metrics without the suffix `Instance Based`. For example, to check server load for your cache instance, use the metric _Server Load_.
+
+In contrast, for clustered caches, use the metrics with the suffix `Instance Based`. Then, add a split or filter on `ShardId`. For example, to check the server load of shard 1, use the metric **Server Load (Instance Based)**, then apply filter **ShardId = 1**.
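As an illustration of splitting or filtering on `ShardId`, the following sketch queries a shard-level metric with the Azure Monitor Query client library. It assumes the [Azure.Monitor.Query](https://www.nuget.org/packages/Azure.Monitor.Query) and Azure.Identity packages; the resource ID and `"<metric-id>"` are placeholders, so substitute the metric ID for the instance-based metric you want (for example, the ID listed for **Server Load (Instance Based)** in the supported-metrics tables).

```csharp
using System;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

var client = new MetricsQueryClient(new DefaultAzureCredential());

// Placeholder resource ID for a clustered cache.
string resourceId =
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Cache/redis/<cache-name>";

var options = new MetricsQueryOptions
{
    TimeRange = new QueryTimeRange(TimeSpan.FromHours(1)),
    Filter = "ShardId eq '1'" // filter the instance-based metric to a single shard
};
options.Aggregations.Add(MetricAggregationType.Average);

// "<metric-id>" is a placeholder; use the metric ID from the supported-metrics tables.
var response = await client.QueryResourceAsync(resourceId, new[] { "<metric-id>" }, options);

foreach (MetricResult metric in response.Value.Metrics)
{
    foreach (MetricTimeSeriesElement series in metric.TimeSeries)
    {
        foreach (MetricValue point in series.Values)
        {
            Console.WriteLine($"{point.TimeStamp:u} average: {point.Average}");
        }
    }
}
```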
++
+For sample Kusto queries for Azure Cache for Redis connection logs, see [Connection log queries](cache-monitor-diagnostic-settings.md#log-analytics-queries).
++
+### Azure Cache for Redis common alert rules
+
+The following table lists common and recommended alert rules for Azure Cache for Redis.
+
+| Alert type | Condition | Description |
+|:|:|:|
+|Metric|99th percentile latency|Alert on the worst-case latency of server-side commands in Azure Cache for Redis instances. Latency is measured by using `PING` commands and tracking response times. Track the health of your cache instance to see if long-running commands are compromising latency performance.|
+|Metric |High `Server Load` usage or spikes |High server load means the Redis server is unable to keep up with requests, leading to timeouts or slow responses. Create alerts on server load metrics to be notified early about potential impacts.|
+|Metric |High network bandwidth usage |If the server exceeds the available bandwidth, then data isn't sent to the client as quickly. Client requests could time out because the server can't push data to the client fast enough. Set up alerts for server-side network bandwidth limits by using the `Cache Read` and `Cache Write` counters. |
++
+The following screenshot shows an advisor recommendation for an Azure Cache for Redis alert:
++
+To upgrade your cache, select **Upgrade now** to change the pricing tier and [scale](cache-configure.md#scale) your cache. For more information on choosing a pricing tier, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier).
+
+## Related content
+
+- See [Azure Cache for Redis monitoring data reference](monitor-cache-reference.md) for a reference of the metrics, logs, and other important values created for Azure Cache for Redis.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
Title: Guide for running C# Azure Functions in an isolated worker process
-description: Learn how to use a .NET isolated worker process to run your C# functions in Azure, which lets you run your functions on currently supported versions of .NET and .NET Framework.
+description: Learn how to use the .NET isolated worker model to run your C# functions in Azure, which lets you run your functions on currently supported versions of .NET and .NET Framework.
Last updated 12/13/2023
recommendations: false
#Customer intent: As a developer, I need to know how to create functions that run in an isolated worker process so that I can run my function code on current (not LTS) releases of .NET.
-# Guide for running C# Azure Functions in an isolated worker process
+# Guide for running C# Azure Functions in the isolated worker model
This article is an introduction to working with Azure Functions in .NET, using the isolated worker model. This model allows your project to target versions of .NET independently of other runtime components. For information about specific .NET versions supported, see [supported versions](#supported-versions).
The [ConfigureFunctionsWorkerDefaults] method is used to add the settings requir
Having access to the host builder pipeline means that you can also set any app-specific configurations during initialization. You can call the [ConfigureAppConfiguration] method on [HostBuilder] one or more times to add the configurations required by your function app. To learn more about app configuration, see [Configuration in ASP.NET Core](/aspnet/core/fundamentals/configuration/?view=aspnetcore-5.0&preserve-view=true).
-These configurations apply to your function app running in a separate process. To make changes to the functions host or trigger and binding configuration, you still need to use the [host.json file](functions-host-json.md).
+These configurations apply to your function app running in a separate process. To make changes to the functions host or trigger and binding configuration, you still need to use the [host.json file](functions-host-json.md).
+
+> [!NOTE]
+> Custom configuration sources cannot be used for configuration of triggers and bindings. Trigger and binding configuration must be available to the Functions platform, and not just your application code. You can provide this configuration through the [application settings](../app-service/configure-common.md#configure-app-settings), [Key Vault references](../app-service/app-service-key-vault-references.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json), or [App Configuration references](../app-service/app-service-configuration-references.md?toc=%2Fazure%2Fazure-functions%2Ftoc.json) features.
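As a minimal sketch of app-specific configuration, the following `Program.cs` adds a JSON file and environment variables as configuration sources. The `appsettings.json` file name is only an example, and the `AddJsonFile` and `AddEnvironmentVariables` extensions assume the corresponding `Microsoft.Extensions.Configuration.*` packages are referenced. As the preceding note explains, values added this way are visible to your application code but not to trigger and binding configuration.

```csharp
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureAppConfiguration(configBuilder =>
    {
        // App-specific configuration sources; the file name is illustrative.
        configBuilder.AddJsonFile("appsettings.json", optional: true);
        configBuilder.AddEnvironmentVariables();
    })
    .Build();

host.Run();
```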
### Dependency injection
This is an example of a middleware implementation that reads the `HttpRequestDat
This middleware checks for the presence of a specific request header (`x-correlationId`), and when present uses the header value to stamp a response header. Otherwise, it generates a new GUID value and uses that for stamping the response header. For a more complete example of using custom middleware in your function app, see the [custom middleware reference sample](https://github.com/Azure/azure-functions-dotnet-worker/blob/main/samples/CustomMiddleware).
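The full implementation is in the linked reference sample. As a rough sketch of the approach described here (not the sample itself), a correlation-ID middleware might look like the following; the class name and details are illustrative.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Middleware;

public sealed class StampCorrelationIdMiddleware : IFunctionsWorkerMiddleware
{
    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        // Use the incoming x-correlationId header if present; otherwise generate one.
        var request = await context.GetHttpRequestDataAsync();
        string correlationId =
            request is not null && request.Headers.TryGetValues("x-correlationId", out var values)
                ? values.First()
                : Guid.NewGuid().ToString();

        await next(context);

        // Stamp the correlation ID on the HTTP response, if the invocation produced one.
        context.GetHttpResponseData()?.Headers.Add("x-correlationId", correlationId);
    }
}
```

A middleware class like this is registered with `builder.UseMiddleware<StampCorrelationIdMiddleware>()` inside `ConfigureFunctionsWorkerDefaults` or `ConfigureFunctionsWebApplication`.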
+### Customizing JSON serialization
+
+The isolated worker model uses `System.Text.Json` by default. You can customize the behavior of the serializer by configuring services as part of your `Program.cs` file. The following example shows this using `ConfigureFunctionsWebApplication`, but it will also work for `ConfigureFunctionsWorkerDefaults`:
+
+```csharp
+var host = new HostBuilder()
+ .ConfigureFunctionsWebApplication((IFunctionsWorkerApplicationBuilder builder) =>
+ {
+ builder.Services.Configure<JsonSerializerOptions>(jsonSerializerOptions =>
+ {
+ jsonSerializerOptions.PropertyNamingPolicy = JsonNamingPolicy.CamelCase;
+ jsonSerializerOptions.DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull;
+ jsonSerializerOptions.ReferenceHandler = ReferenceHandler.Preserve;
+
+ // override the default value
+ jsonSerializerOptions.PropertyNameCaseInsensitive = false;
+ });
+ })
+ .Build();
+```
+
+You might wish to instead use JSON.NET (`Newtonsoft.Json`) for serialization. To do this, you would install the [`Microsoft.Azure.Core.NewtonsoftJson`](https://www.nuget.org/packages/Microsoft.Azure.Core.NewtonsoftJson) package. Then, in your service registration, you would reassign the `Serializer` property on the `WorkerOptions` configuration. The following example shows this using `ConfigureFunctionsWebApplication`, but it will also work for `ConfigureFunctionsWorkerDefaults`:
+
+```csharp
+var host = new HostBuilder()
+ .ConfigureFunctionsWebApplication((IFunctionsWorkerApplicationBuilder builder) =>
+ {
+ builder.Services.Configure<WorkerOptions>(workerOptions =>
+ {
+ var settings = NewtonsoftJsonObjectSerializer.CreateJsonSerializerSettings();
+ settings.ContractResolver = new CamelCasePropertyNamesContractResolver();
+ settings.NullValueHandling = NullValueHandling.Ignore;
+
+ workerOptions.Serializer = new NewtonsoftJsonObjectSerializer(settings);
+ });
+ })
+ .Build();
+```
+ ## Methods recognized as functions A function method is a public method of a public class with a `Function` attribute applied to the method and a trigger attribute applied to an input parameter, as shown in the following example:
In Visual Studio, the **Target Runtime** option in the publish profile should be
## Deploy to Azure Functions
-When running in Azure, your function code project must run in either a function app or in a Linux container. The function app and other required Azure resources must exist before you deploy your code.
+When you deploy your function code project to Azure, it must run in either a function app or in a Linux container. The function app and other required Azure resources must exist before you deploy your code.
You can also deploy your function app in a Linux container. For more information, see [Working with containers and Azure Functions](functions-how-to-custom-container.md).
azure-functions Migrate Dotnet To Isolated Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-dotnet-to-isolated-model.md
The section outlines the various changes that you need to make to your local pro
First, you'll convert the project file and update your dependencies. As you do, you will see build errors for the project. In subsequent steps, you'll make the corresponding changes to remove these errors.
-### .csproj file
+### Project file
The following example is a `.csproj` project file that uses .NET 6 on version 4.x:
var host = new HostBuilder()
host.Run(); ```
-This examples supports [ASP.NET Core integration] to use normal .NET 8 types. To use the built-in Functions HTTP types instead, replace the call to `ConfigureFunctionsWebApplication` with a call to `ConfigureFunctionsWorkerDefaults`.
+This example includes [ASP.NET Core integration] to improve performance and provide a familiar programming model when your app uses HTTP triggers. If you do not intend to use HTTP triggers, you can replace the call to `ConfigureFunctionsWebApplication` with a call to `ConfigureFunctionsWorkerDefaults`. If you do so, you can remove the reference to `Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore` from your project file. However, for the best performance, even for functions with other trigger types, you should keep the `FrameworkReference` to ASP.NET Core.
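For reference, a sketch of the alternative `Program.cs` without ASP.NET Core integration (built-in Functions HTTP types only) could be as small as the following; your app's own service registrations and configuration calls would still go on this builder.

```csharp
using Microsoft.Extensions.Hosting;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults() // built-in Functions HTTP types, no ASP.NET Core integration
    .Build();

host.Run();
```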
# [.NET Framework 4.8](#tab/netframework48)
namespace Company.FunctionApp
-The `Program.cs` file will replace any file that has the `FunctionsStartup` attribute, which is typically a `Startup.cs` file. In places where your `FunctionsStartup` code would reference `IFunctionsHostBuilder.Services`, you can instead add statements within the `.ConfigureServices()` method of the `HostBuilder` in your `Program.cs`. To learn more about working with `Program.cs`, see [Start-up and configuration](./dotnet-isolated-process-guide.md#start-up-and-configuration) in the isolated worker model guide.
-
-Once you have moved everything from any existing `FunctionsStartup` to the `Program.cs` file, you can delete the `FunctionsStartup` attribute and the class it was applied to.
### Function signature changes
When migrating from running in-process to running in an isolated worker process,
The value you have configured for `AzureWebJobsStorage` might be different. You do not need to change its value as part of the migration.
+### host.json file
+
+No changes are required to your `host.json` file. However, if this file contains Application Insights configuration that you brought over from your in-process model project, you might want to make additional changes in your `Program.cs` file. The `host.json` file only controls logging from the Functions host runtime, and in the isolated worker model, some of these logs come from your application directly, giving you more control. See [Managing log levels in the isolated worker model](./dotnet-isolated-process-guide.md#managing-log-levels) for details on how to filter these logs.
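As a minimal sketch of filtering worker logs in `Program.cs`, you can add a logging filter on the host builder; the `MyFunctionApp` category prefix below is a placeholder for your own namespace, and the same `ConfigureLogging` call also works when your builder uses `ConfigureFunctionsWebApplication`.

```csharp
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureLogging(logging =>
    {
        // Only emit Warning and above for categories that start with this prefix.
        logging.AddFilter("MyFunctionApp", LogLevel.Warning);
    })
    .Build();

host.Run();
```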
+
+### Other code changes
+
+This section highlights other code changes to consider as you work through the migration. These changes are not needed by all applications, but you should evaluate if any are relevant to your scenarios.
++ ### Example function migrations #### HTTP trigger example
azure-functions Migrate Version 1 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-1-version-4.md
Choose the tab that matches your target version of .NET and the desired process
> [!TIP] > If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
-### .csproj file
+### Project file
The following example is a `.csproj` project file that runs on version 1.x:
var host = new HostBuilder()
host.Run(); ```
+This example includes [ASP.NET Core integration] to improve performance and provide a familiar programming model when your app uses HTTP triggers. If you do not intend to use HTTP triggers, you can replace the call to `ConfigureFunctionsWebApplication` with a call to `ConfigureFunctionsWorkerDefaults`. If you do so, you can remove the reference to `Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore` from your project file. However, for the best performance, even for functions with other trigger types, you should keep the `FrameworkReference` to ASP.NET Core.
++ # [.NET 6 (in-process)](#tab/net6-in-proc) A program.cs file isn't required when running in-process.
namespace Company.FunctionApp
} ``` + ### host.json file
To run on version 4.x, you must add `"version": "2.0"` to the host.json file. Yo
:::code language="json" source="~/functions-quickstart-templates//Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/host.json":::
+The `host.json` file only controls logging from the Functions host runtime, and in the isolated worker model, some of these logs come from your application directly, giving you more control. See [Managing log levels in the isolated worker model](./dotnet-isolated-process-guide.md#managing-log-levels) for details on how to filter these logs.
+ # [.NET 6 (in-process)](#tab/net6-in-proc) :::code language="json" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp/host.json":::
To run on version 4.x, you must add `"version": "2.0"` to the host.json file. Yo
:::code language="json" source="~/functions-quickstart-templates/Functions.Templates/ProjectTemplate_v4.x/CSharp-Isolated/host.json":::
+The `host.json` file only controls logging from the Functions host runtime, and in the isolated worker model, some of these logs come from your application directly, giving you more control. See [Managing log levels in the isolated worker model](./dotnet-isolated-process-guide.md#managing-log-levels) for details on how to filter these logs.
Some key classes changed names between version 1.x and version 4.x. These change
There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings.
+### Other code changes
+
+# [.NET 8 (isolated)](#tab/net8)
+
+This section highlights other code changes to consider as you work through the migration. These changes are not needed by all applications, but you should evaluate if any are relevant to your scenarios. Make sure to check [Behavior changes after version 1.x](#behavior-changes-after-version-1x) for additional changes you might need to make to your project.
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+Make sure to check [Behavior changes after version 1.x](#behavior-changes-after-version-1x) for additional changes you might need to make to your project.
+
+# [.NET Framework 4.8](#tab/netframework48)
+
+This section highlights other code changes to consider as you work through the migration. These changes are not needed by all applications, but you should evaluate if any are relevant to your scenarios. Make sure to check [Behavior changes after version 1.x](#behavior-changes-after-version-1x) for additional changes you might need to make to your project.
++++ ### HTTP trigger template Most of the code changes between version 1.x and version 4.x can be seen in HTTP triggered functions. The HTTP trigger template for version 1.x looks like the following example:
azure-functions Migrate Version 3 Version 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-version-3-version-4.md
Choose the tab that matches your target version of .NET and the desired process
> [!TIP] > If you are moving to an LTS or STS version of .NET using the isolated worker model, the [.NET Upgrade Assistant] can be used to automatically make many of the changes mentioned in the following sections.
-### .csproj file
+### Project file
The following example is a `.csproj` project file that uses .NET Core 3.1 on version 3.x:
var host = new HostBuilder()
host.Run(); ```
+This example includes [ASP.NET Core integration] to improve performance and provide a familiar programming model when your app uses HTTP triggers. If you do not intend to use HTTP triggers, you can replace the call to `ConfigureFunctionsWebApplication` with a call to `ConfigureFunctionsWorkerDefaults`. If you do so, you can remove the reference to `Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore` from your project file. However, for the best performance, even for functions with other trigger types, you should keep the `FrameworkReference` to ASP.NET Core.
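If your app has no HTTP triggers, a minimal sketch of the alternative startup (swapping in `ConfigureFunctionsWorkerDefaults`) could look like the following; this is an illustrative outline rather than the template shipped with the tooling:

```csharp
using Microsoft.Extensions.Hosting;

// Minimal sketch for apps without HTTP triggers: no ASP.NET Core integration,
// so the Microsoft.Azure.Functions.Worker.Extensions.Http.AspNetCore reference can be removed.
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .Build();

host.Run();
```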
++ # [.NET 6 (in-process)](#tab/net6-in-proc) A program.cs file isn't required when running in-process.
namespace Company.FunctionApp
} ``` + ### local.settings.json file
When you migrate to version 4.x, make sure that your local.settings.json file ha
+### host.json file
+
+# [.NET 8 (isolated)](#tab/net8)
+
+No changes are required to your `host.json` file. However, if this file contains Application Insights configuration carried over from your in-process model project, you might want to make additional changes in your `Program.cs` file. The `host.json` file only controls logging from the Functions host runtime, and in the isolated worker model, some of these logs come from your application directly, giving you more control. See [Managing log levels in the isolated worker model](./dotnet-isolated-process-guide.md#managing-log-levels) for details on how to filter these logs.
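If you previously relied on host.json for Application Insights settings, a hedged sketch of wiring up Application Insights in the isolated worker's `Program.cs` might look like the following. It assumes the `Microsoft.Azure.Functions.Worker.ApplicationInsights` package is referenced and isn't a drop-in replacement for your in-process configuration:

```csharp
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Sketch only: registers Application Insights for the isolated worker.
// Assumes the Microsoft.Azure.Functions.Worker.ApplicationInsights package is referenced;
// adjust logging filters separately to control what the worker emits.
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults()
    .ConfigureServices(services =>
    {
        services.AddApplicationInsightsTelemetryWorkerService();
        services.ConfigureFunctionsApplicationInsights();
    })
    .Build();

host.Run();
```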
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+No changes are required to your `host.json` file.
+
+# [.NET Framework 4.8](#tab/netframework48)
+
+No changes are required to your `host.json` file. However, if this file contains Application Insights configuration carried over from your in-process model project, you might want to make additional changes in your `Program.cs` file. The `host.json` file only controls logging from the Functions host runtime, and in the isolated worker model, some of these logs come from your application directly, giving you more control. See [Managing log levels in the isolated worker model](./dotnet-isolated-process-guide.md#managing-log-levels) for details on how to filter these logs.
++++ ### Class name changes Some key classes changed names between versions. These changes are a result of either changes in .NET APIs or differences between the in-process and isolated worker process models. The following table indicates key .NET classes used by Functions that could change when migrating:
Some key classes changed names between versions. These changes are a result eith
There might also be class name differences in bindings. For more information, see the reference articles for the specific bindings. +
+### Other code changes
+
+# [.NET 8 (isolated)](#tab/net8)
+
+This section highlights other code changes to consider as you work through the migration. These changes are not needed by all applications, but you should evaluate if any are relevant to your scenarios. Make sure to check [Breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x) for additional changes you might need to make to your project.
++
+# [.NET 6 (in-process)](#tab/net6-in-proc)
+
+Make sure to check [Breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x) for additional changes you might need to make to your project.
+
+# [.NET Framework 4.8](#tab/netframework48)
+
+This section highlights other code changes to consider as you work through the migration. These changes are not needed by all applications, but you should evaluate if any are relevant to your scenarios. Make sure to check [Breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x) for additional changes you might need to make to your project.
++++ ### HTTP trigger template The differences between in-process and isolated worker process can be seen in HTTP triggered functions. The HTTP trigger template for version 3.x (in-process) looks like the following example:
namespace Company.Function
::: zone-end + ::: zone pivot="programming-language-java,programming-language-javascript,programming-language-typescript,programming-language-powershell,programming-language-python" + To update your project to Azure Functions 4.x: 1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.x.
To update your project to Azure Functions 4.x:
1. Update your app's [Azure Functions extensions bundle](functions-bindings-register.md#extension-bundles) to 2.x or above. For more information, see [breaking changes](#breaking-changes-between-3x-and-4x). ::: zone-end + ::: zone pivot="programming-language-java"
-3. If needed, move to one of the [Java versions supported on version 4.x](./functions-reference-java.md#supported-versions).
-4. Update the app's `POM.xml` file to modify the `FUNCTIONS_EXTENSION_VERSION` setting to `~4`, as in the following example:
+
+1. If needed, move to one of the [Java versions supported on version 4.x](./functions-reference-java.md#supported-versions).
+
+1. Update the app's `POM.xml` file to modify the `FUNCTIONS_EXTENSION_VERSION` setting to `~4`, as in the following example:
```xml <configuration>
To update your project to Azure Functions 4.x:
</appSettings> </configuration> ```+ ::: zone-end++ 3. If needed, move to one of the [Node.js versions supported on version 4.x](functions-reference-node.md#node-version).
-3. Take this opportunity to upgrade to PowerShell 7.2, which is recommended. For more information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
-3. If you're using Python 3.6, move to one of the [supported versions](functions-reference-python.md#python-version).
+++
+1. Take this opportunity to upgrade to PowerShell 7.2, which is recommended. For more information, see [PowerShell versions](functions-reference-powershell.md#powershell-versions).
+++
+1. If you're using Python 3.6, move to one of the [supported versions](functions-reference-python.md#python-version).
+ ::: zone-end ### Run the pre-upgrade validator
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md
The following image is an example of rendering alternative routes with specified
The Azure Maps Web SDK provides a [Service module]. This module is a helper library that makes it easy to use the Azure Maps REST APIs in web or Node.js applications, using JavaScript or TypeScript. The Service module can be used to render the returned routes on the map. The module automatically determines which API to use with GET and POST requests.
+> [!NOTE]
+>
+> **Azure Maps Web SDK Service Module retirement**
+>
+> The Azure Maps Web SDK Service Module is now deprecated and will be retired on 9/30/26. To avoid service disruptions, we recommend migrating to the Azure Maps JavaScript REST SDK by 9/30/26. For more information, see [JavaScript/TypeScript REST SDK Developers Guide (preview)](how-to-dev-guide-js-sdk.md).
+ ## Next steps To learn more, please see:
azure-maps How To Use Services Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-services-module.md
description: Learn about the Azure Maps services module. See how to load and use this helper library to access Azure Maps REST services in web or Node.js applications. Previously updated : 06/26/2023 Last updated : 03/27/2024
The Azure Maps Web SDK provides a [services module]. This module is a helper library that makes it easy to use the Azure Maps REST services in web or Node.js applications by using JavaScript or TypeScript.
+> [!NOTE]
+>
+> **Azure Maps Web SDK Service Module retirement**
+>
+> The Azure Maps Web SDK Service Module is now deprecated and will be retired on 9/30/26. To avoid service disruptions, we recommend migrating to the Azure Maps JavaScript REST SDK by 9/30/26. For more information, see [JavaScript/TypeScript REST SDK Developers Guide (preview)](how-to-dev-guide-js-sdk.md).
+ ## Use the services module in a webpage 1. Create a new HTML file.
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (latest)
+### [3.2.0] (March 29, 2024)
+
+#### Other changes (3.2.0)
+
+- Upgrade MapLibre to [V4](https://github.com/maplibre/maplibre-gl-js/releases/tag/v4.0.0).
+
+- Correct the default value of `HtmlMarkerOptions.pixelOffset` from `[0, -18]` to `[0, 0]` in the doc.
+ ### [3.1.2] (February 22, 2024) #### New features (3.1.2) -- Added `fillAntialias` option to `PolygonLayer` for enabling MSAA on polygon fills.
+- Added `fillAntialias` option to `PolygonLayer` for enabling MSAA antialiasing on polygon fills.
#### Other changes (3.1.2)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
#### New features (2.3.7) -- Added `fillAntialias` option to `PolygonLayer` for enabling MSAA on polygon fills.
+- Added `fillAntialias` option to `PolygonLayer` for enabling MSAA antialiasing on polygon fills.
- Added a new option, `enableAccessibilityLocationFallback`, to enable or disable reverse-geocoding API fallback for accessibility (screen reader). #### Other changes (2.3.7)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.2.0]: https://www.npmjs.com/package/azure-maps-control/v/3.2.0
[3.1.2]: https://www.npmjs.com/package/azure-maps-control/v/3.1.2 [3.1.1]: https://www.npmjs.com/package/azure-maps-control/v/3.1.1 [3.1.0]: https://www.npmjs.com/package/azure-maps-control/v/3.1.0
azure-maps Release Notes Spatial Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-spatial-module.md
This document contains information about new features and other changes to the Azure Maps Spatial IO Module.
-## [0.1.8] (February 22 2024)
+## [0.1.8] (February 22, 2024)
### Bug fixes (0.1.8)
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The Map Control API is a convenient client library. This API allows you to easil
This section shows how to use the Maps [Search API] to find a point of interest on your map. It's a RESTful API designed for developers to search for addresses, points of interest, and other geographical information. The Search service assigns latitude and longitude information to a specified address. The **Service Module**, explained next, can be used to search for a location using the Maps Search API.
+> [!NOTE]
+>
+> **Azure Maps Web SDK Service Module retirement**
+>
+> The Azure Maps Web SDK Service Module is now deprecated and will be retired on 9/30/26. To avoid service disruptions, we recommend migrating to the Azure Maps JavaScript REST SDK by 9/30/26. For more information, see [JavaScript/TypeScript REST SDK Developers Guide (preview)](how-to-dev-guide-js-sdk.md).
+ ### Service Module 1. In the map `ready` event handler, construct the search service URL by adding the following JavaScript code immediately after `map.layers.add(resultLayer);`:
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
When you migrate the following services, which currently use Log Analytics agent
| [Automation Hybrid Runbook Worker overview](../../automation/automation-hybrid-runbook-worker.md) | Automation Hybrid Worker Extension (no dependency on Log Analytics agents or Azure Monitor Agent) | Generally available | [Migrate to Extension based Hybrid Workers](../../automation/extension-based-hybrid-runbook-worker-install.md#migrate-an-existing-agent-based-to-extension-based-hybrid-workers) | ## Known parity gaps for solutions that may impact your migration-- ***Sentinel***: CEF and Windows firewall logs are not yet GA
+- ***Sentinel***: Windows firewall logs are not yet GA
- ***SQL Assessment Solution***: This is now part of SQL best practice assessment. The deployment policies require one Log Analytics Workspace per subscription, which is not the best practice recommended by the AMA team. - ***Microsoft Defender for Cloud***: Some features for the new agentless solution are in development. Your migration may be impacted if you use FIM, Endpoint protection discovery recommendations, OS Misconfigurations (ASB recommendations), and Adaptive Application controls. - ***Container Insights***: The Windows version is in public preview.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
When you paste the XPath query into the field on the **Add data source** screen,
> [!TIP]
-> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPath query locally on your machine first. For more information, see the tip provided in the [Windows agent-based connections](../../sentinel/connect-services-windows-based.md) instructions. The [`Get-WinEvent`](/powershell/module/microsoft.powershell.diagnostics/get-winevent) PowerShell cmdlet supports up to 23 expressions. Azure Monitor data collection rules support up to 20. Also, `>` and `<` characters must be encoded as `&gt;` and `&lt;` in your data collection rule. The following script shows an example:
+> You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPath query locally on your machine first. For more information, see the tip provided in the [Windows agent-based connections](../../sentinel/connect-services-windows-based.md) instructions. The [`Get-WinEvent`](/powershell/module/microsoft.powershell.diagnostics/get-winevent) PowerShell cmdlet supports up to 23 expressions. Azure Monitor data collection rules support up to 20. The following script shows an example:
> > ```powershell > $XPath = '*[System[EventID=1035]]'
azure-monitor Action Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/action-groups.md
Last updated 05/02/2023 -+ # Action groups
azure-monitor Availability Private Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-private-test.md
If you want to use availability tests on internal servers that run behind a fire
> [!NOTE] > If you don't want to allow any ingress to your environment, use the method in the [Disconnected or no ingress scenarios](#disconnected-or-no-ingress-scenarios) section.
- Ensure you have a public DNS record for your internal website. The test will fail if the DNS can't be resolved. For more information, see [Create a custom domain name for internal application](../../cloud-services/cloud-services-custom-domain-name-portal.md#add-an-a-record-for-your-custom-domain).
+ Ensure you have a public DNS record for your internal website. The test will fail if the target URL's hostname can't be resolved by public clients using public DNS. For more information, see [Create a custom domain name for internal application](../../cloud-services/cloud-services-custom-domain-name-portal.md#add-an-a-record-for-your-custom-domain).
Configure your firewall to permit incoming requests from our service.
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
Title: Sampling overrides (preview) - Azure Monitor Application Insights for Java
+ Title: Sampling overrides - Azure Monitor Application Insights for Java
description: Learn to configure sampling overrides in Azure Monitor Application Insights for Java. Last updated 11/15/2023
-# Sampling overrides (preview) - Azure Monitor Application Insights for Java
+# Sampling overrides - Azure Monitor Application Insights for Java
> [!NOTE] > The sampling overrides feature is in GA, starting from 3.5.0.
azure-monitor Container Insights Data Collection Configmap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-data-collection-configmap.md
This article describes how to configure data collection in Container insights using ConfigMap. [ConfigMaps](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/) are a Kubernetes mechanism that allow you to store non-confidential data such as configuration file or environment variables. The ConfigMap is primarily used to configure data collection of the container logs and environment variables of the cluster. You can individually configure the stdout and stderr logs and also enable multiline logging.-
Specific configuration you can perform with the ConfigMap includes: - Enable/disable and namespace filtering for stdout and stderr logs
Specific configuration you can perform with the ConfigMap includes:
- Enable/disable multiline logging - Ignore proxy settings
-> [!NOTE]
-> See [Configure data collection in Container insights using data collection rule](./container-insights-data-collection-dcr.md) to configure data collection using a DCR which allows you to configure different settings.
+> [!IMPORTANT]
+> Complete configuration of data collection in Container insights may require editing of both the ConfigMap and the data collection rule (DCR) for the cluster since each method allows configuration of a different set of settings.
+>
+> See [Configure data collection in Container insights using data collection rule](./container-insights-data-collection-dcr.md) for a list of settings and the process to configure data collection using the DCR.
## Prerequisites - ConfigMap is a global list and there can be only one ConfigMap applied to the agent for Container insights. Applying another ConfigMap will overrule the previous ConfigMap collection settings.
azure-monitor Container Insights Data Collection Dcr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-data-collection-dcr.md
Title: Configure Container insights data collection using data collection rule
+ Title: Configure data collection and cost optimization in Container insights using data collection rule
description: Describes how you can configure cost optimization and other data collection for Container insights using a data collection rule.
Last updated 12/19/2023
-# Configure data collection in Container insights using data collection rule
+# Configure data collection and cost optimization in Container insights using data collection rule
-This article describes how to configure data collection in Container insights using the [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) for the cluster. A DCR is created when you onboard a cluster to Container insights. This DCR is used by the containerized agent to define data collection for the cluster.
+This article describes how to configure data collection in Container insights using the [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) for your Kubernetes cluster. This includes preset configurations for optimizing your costs. A DCR is created when you onboard a cluster to Container insights. This DCR is used by the containerized agent to define data collection for the cluster.
The DCR is primarily used to configure data collection of performance and inventory data and to configure cost optimization. Specific configuration you can perform with the DCR includes: -- Enable/disable collection and namespace filtering for performance and inventory data (Use [ConfigMap](./container-insights-data-collection-configmap.md) for namespace filtering of logs.)
+- Enable/disable collection and namespace filtering for performance and inventory data.
- Define collection interval for performance and inventory data - Enable/disable Syslog collection - Select log schema
-> [!NOTE]
-> See [Configure data collection in Container insights using ConfigMap](./container-insights-data-collection-configmap.md) to configure data collection using a DCR which allows you to configure different settings.
+> [!IMPORTANT]
+> Complete configuration of data collection in Container insights may require editing of both the DCR and the ConfigMap for the cluster since each method allows configuration of a different set of settings.
+>
+> See [Configure data collection in Container insights using ConfigMap](./container-insights-data-collection-configmap.md) for a list of settings and the process to configure data collection using ConfigMap.
## Prerequisites
azure-monitor Container Insights Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md
Container insights monitors the performance of container workloads that are depl
To alert for high CPU or memory utilization, or low free disk space on cluster nodes, use the queries that are provided to create a metric alert or a metric measurement alert. Metric alerts have lower latency than log search alerts, but log search alerts provide advanced querying and greater sophistication. Log search alert queries compare a datetime to the present by using the `now` operator and going back one hour. (Container insights stores all dates in Coordinated Universal Time [UTC] format.) > [!IMPORTANT]
-> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Before you create alert rules, see the "Alert rules" section in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
+> The queries in this article depend on data collected by Container insights and stored in a Log Analytics workspace. If you've modified the default data collection settings, the queries might not return the expected results. Most notably, if you've disabled collection of performance data since you've enabled Prometheus metrics for the cluster, any queries using the `Perf` table won't return results.
+>
+> See [Configure data collection in Container insights using data collection rule](./container-insights-data-collection-dcr.md) for preset configurations including disabling performance data collection. See [Configure data collection in Container insights using ConfigMap](./container-insights-data-collection-configmap.md) for further data collection options.
+
If you aren't familiar with Azure Monitor alerts, see [Overview of alerts in Microsoft Azure](../alerts/alerts-overview.md) before you start. To learn more about alerts that use log queries, see [Log search alerts in Azure Monitor](../alerts/alerts-types.md#log-alerts). For more about metric alerts, see [Metric alerts in Azure Monitor](../alerts/alerts-metric-overview.md). ## Log query measurements
azure-monitor Container Insights Log Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-query.md
You can apply this data to scenarios that include migration planning, capacity a
For information on using these queries, see [Using queries in Azure Monitor Log Analytics](../logs/queries.md). For a complete tutorial on using Log Analytics to run queries and work with their results, see [Log Analytics tutorial](../logs/log-analytics-tutorial.md).
+> [!IMPORTANT]
+> The queries in this article depend on data collected by Container insights and stored in a Log Analytics workspace. If you've modified the default data collection settings, the queries might not return the expected results. Most notably, if you've disabled collection of performance data because you've enabled Prometheus metrics for the cluster, any queries using the `Perf` table won't return results.
+>
+> See [Configure data collection in Container insights using data collection rule](./container-insights-data-collection-dcr.md) for preset configurations including disabling performance data collection. See [Configure data collection in Container insights using ConfigMap](./container-insights-data-collection-configmap.md) for further data collection options.
+ ## Open Log Analytics There are multiple options for starting Log Analytics. Each option starts with a different [scope](../logs/scope.md). For access to all data in the workspace, on the **Monitoring** menu, select **Logs**. To limit the data to a single Kubernetes cluster, select **Logs** from that cluster's menu.
azure-monitor Kubernetes Monitoring Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md
This article describes how to enable complete monitoring of your Kubernetes clus
[Using the Azure portal](#enable-full-monitoring-with-azure-portal), you can enable all of the features at the same time. You can also enable them individually by using the Azure CLI, Azure Resource Manager template, Terraform, or Azure Policy. Each of these methods is described in this article. > [!IMPORTANT]
-> This article describes onboarding using default configuration settings including managed identity authentication. See [Configure agent data collection for Container insights](container-insights-data-collection-configmap.md) and [Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](prometheus-metrics-scrape-configuration.md) to customize your configuration to ensure that you aren't collecting more data than you require. See [Authentication for Container Insights](container-insights-authentication.md) for guidance on migrating from legacy authentication models.
+> Kubernetes clusters generate a lot of log data, which can result in significant costs if you aren't selective about the logs that you collect. Before you enable monitoring for your cluster, see the following articles to ensure that your environment is optimized for cost and that you limit your log collection to only the data that you require:
+>
+>- [Configure data collection and cost optimization in Container insights using data collection rule](./container-insights-data-collection-dcr.md)<br>Details on customizing log collection once you've enabled monitoring, including using preset cost optimization configurations.
+>- [Best practices for monitoring Kubernetes with Azure Monitor](../best-practices-containers.md)<br>Best practices for monitoring Kubernetes clusters organized by the five pillars of the [Azure Well-Architected Framework](/azure/architecture/framework/), including cost optimization.
>- [Cost optimization in Azure Monitor](../best-practices-cost.md)<br>Best practices for configuring all features of Azure Monitor to optimize your costs and limit the amount of data that you collect.
## Supported clusters
azure-netapp-files Cool Access Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md
Standard storage with cool access is supported for the following regions:
* Norway West * Qatar Central * South Central US
+* South India
* Southeast Asia * Switzerland North * Switzerland West
azure-vmware Configure External Identity Source Nsx T https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-external-identity-source-nsx-t.md
Title: Set an external identity source for VMware NSX
description: Learn how to use Azure VMware Solution to set an external identity source for VMware NSX. Previously updated : 3/22/2024 Last updated : 3/29/2024
In this article, learn how to set up an external identity source for VMware NSX
You can set up NSX to use an external Lightweight Directory Access Protocol (LDAP) directory service to authenticate users. A user can sign in by using their Windows Server Active Directory account credentials or credentials from a third-party LDAP server. Then, the account can be assigned an NSX role, like in an on-premises environment, to provide role-based access for NSX users. ## Prerequisites
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
Title: Set an external identity source for vCenter Server
description: Learn how to set Windows Server Active Directory over LDAP or LDAPS for VMware vCenter Server as an external identity source. Previously updated : 3/22/2024 Last updated : 3/29/2024
[!INCLUDE [vcenter-access-identity-description](includes/vcenter-access-identity-description.md)]
+You can set up vCenter Server to use an external Lightweight Directory Access Protocol (LDAP) directory service to authenticate users. A user can sign in by using their Windows Server Active Directory account credentials or credentials from a third-party LDAP server. Then, the account can be assigned a vCenter Server role, like in an on-premises environment, to provide role-based access for vCenter Server users.
++ In this article, you learn how to: > [!div class="checklist"]
azure-vmware Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/introduction.md
Title: Introduction
description: Learn the features and benefits of Azure VMware Solution to deploy and manage VMware-based workloads in Azure. Previously updated : 3/28/2024 Last updated : 3/29/2024
azure-vmware Set Up Backup Server For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/set-up-backup-server-for-azure-vmware-solution.md
Title: Set up Azure Backup Server for Azure VMware Solution
description: Set up your Azure VMware Solution environment to back up virtual machines using Azure Backup Server. Previously updated : 12/19/2023 Last updated : 3/29/2024
This article helps you prepare your Azure VMware Solution environment to back up
> * Set the storage replication for a Recovery Services vault. > * Add storage to Azure Backup Server.
-## Supported VMware features
+## Supported VMware vSphere features
- **Agentless backup:** Azure Backup Server doesn't require an agent to be installed on the vCenter Server or ESXi server to back up the VM. Instead, provide the IP address or fully qualified domain name (FQDN) and the sign-in credentials used to authenticate the VMware vCenter Server with Azure Backup Server. - **Cloud-integrated backup:** Azure Backup Server protects workloads to disk and the cloud. The backup and recovery workflow of Azure Backup Server helps you manage long-term retention and offsite backup.
To set up Azure Backup Server for Azure VMware Solution, you must finish the fol
Azure Backup Server is deployed as an Azure infrastructure as a service (IaaS) VM to protect Azure VMware Solution VMs. + ## Prerequisites for the Azure Backup Server environment
backup Azure File Share Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-file-share-support-matrix.md
Title: Support Matrix for Azure file share backup by using Azure Backup description: Provides a summary of support settings and limitations when backing up Azure file shares. Previously updated : 01/24/2024 Last updated : 03/29/2024
You can use the [Azure Backup service](./backup-overview.md) to back up Azure fi
## Supported regions
+**Choose a backup tier**:
+
+# [Snapshot tier](#tab/snapshot-tier)
+ Azure file shares backup is available in all regions, **except** for Germany Central (Sovereign), Germany Northeast (Sovereign), China East, China North, France South, and US Gov Iowa.
+# [Vault-standard tier (preview)](#tab/vault-tier)
+
+Vaulted backup for Azure Files (preview) is available in West Central US, Southeast Asia, UK South, East Asia, UK West, India Central.
+++ ## Supported storage accounts
+**Choose a backup tier**:
+
+# [Snapshot tier](#tab/snapshot-tier)
+ | Storage account details | Support | | | | | Account Kind | Azure Backup supports Azure file shares present in general-purpose v1, general-purpose v2, and file storage type storage accounts |
Azure file shares backup is available in all regions, **except** for Germany Cen
| Replication | Azure file shares in storage accounts with any replication type are supported | | Firewall enabled | Azure file shares in storage accounts with Firewall rules that allow Microsoft Azure Services to access storage account are supported|
+# [Vault-standard tier (preview)](#tab/vault-tier)
+
+| Storage account details | Support |
+| | |
+| Account Kind | Azure Backup supports Azure file shares present in general-purpose v2 and file storage type storage accounts. |
+
+>[!Note]
+>Storage accounts with restricted network access aren't supported.
+++ ## Supported file shares
+**Choose a backup tier**:
+
+# [Snapshot tier](#tab/snapshot-tier)
+ | File share type | Support | | -- | | | Standard | Supported |
Azure file shares backup is available in all regions, **except** for Germany Cen
| Premium | Supported | | File shares connected with Azure File Sync service | Supported |
+# [Vault-standard tier (preview)](#tab/vault-tier)
++
+| File share type | Support |
+| -- | |
+| Standard | Supported |
+| Large | Supported |
+| Premium | Supported |
+| File shares connected with Azure File Sync service | Supported |
++++ ## Protection limits | Setting | Limit |
Azure file shares backup is available in all regions, **except** for Germany Cen
## Backup limits
+**Choose a backup tier**:
+
+# [Snapshot tier](#tab/snapshot-tier)
+ | Setting | Limit | | -- | -- | | Maximum number of on-demand backups per day | 10 | | Maximum number of scheduled backups per day | 6 |
+# [Vault-standard tier (preview)](#tab/vault-tier)
+
+| Setting | Limit |
+| | |
+| Maximum size of file share | 8 TB |
+| Maximum number of files in a file share | 8 million |
+++ ## Restore limits
+**Choose a backup tier**:
+
+# [Snapshot tier](#tab/snapshot-tier)
+ | Setting | Limit | | | - | | Maximum number of restore per day | 20 |
Azure file shares backup is available in all regions, **except** for Germany Cen
| Maximum recommended restore size per restore for large file shares | 15 TiB | | Maximum duration of a restore job | 15 days
+# [Vault-standard tier (preview)](#tab/vault-tier)
+
+| Setting | Limit |
+| | |
+| Maximum size of a file | 1 TB |
+
+>[!Note]
+>Restore to file shares connected with Azure File Sync service or with restricted network access isn't supported. You can perform restore to an empty folder with the **Overwrite** option only.
+++ ## Retention limits
+**Choose a backup tier**:
+
+# [Snapshot tier](#tab/snapshot-tier)
+ | Setting | Limit | | | -- | | Maximum total recovery points per file share at any point in time | 200 |
Azure file shares backup is available in all regions, **except** for Germany Cen
| Maximum retention of monthly recovery points (snapshots) per file share | 120 months | | Maximum retention of yearly recovery points (snapshots) per file share | 10 years |
+# [Vault-standard tier (preview)](#tab/vault-tier)
++
+| Setting | Limit |
+| | |
+| Maximum retention of snapshot | 30 days |
+| Maximum retention of recovery point created by on-demand backup | 99 years |
+| Maximum retention of daily recovery points | 9999 days |
+| Maximum retention of weekly recovery points | 5163 weeks |
+| Maximum retention of monthly recovery points | 1188 months |
+| Maximum retention of yearly recovery points | 99 years |
+++ ## Supported restore methods
+**Choose a backup tier**:
+
+# [Snapshot tier](#tab/snapshot-tier)
+ | Restore method | Details | | | | | Full Share Restore | You can restore the complete file share to the original or an alternate location | | Item Level Restore | You can restore individual files and folders to the original or an alternate location |
+# [Vault-standard tier (preview)](#tab/vault-tier)
+
+| Restore method | Description |
+| | |
+| Full Share Restore | You can restore the complete file share to an alternate location. |
+
+>[!Note]
+>Original location restores (OLR) and file-level recovery aren't supported. You can perform restore to an empty folder with the **Overwrite** option only.
+++ ## Next steps * Learn how to [Back up Azure file shares](backup-afs.md)
backup Backup Azure Backup Cloud As Tape https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-cloud-as-tape.md
Title: How to replace your tape infrastructure
+ Title: Replace your tape infrastructure by using Azure Backup
description: Learn how Azure Backup provides tape-like semantics that enable you to back up and restore data in Azure- Previously updated : 04/30/2017++ Last updated : 03/29/2024 + # Move your long-term storage from tape to the Azure cloud
-Azure Backup and System Center Data Protection Manager customers can:
+This article describes how you can enable backup and retention policies. If you're using tapes to address your long-term retention needs, Azure Backup provides a powerful and viable alternative with the availability of this feature. The feature is enabled in the Azure Backup service (which is available [here](https://aka.ms/azurebackup_agent)). If you're using System Center DPM, then you must update to at least DPM 2012 R2 UR5 before using DPM with the Azure Backup service.
-* Back up data in schedules which best suit the organizational needs.
-* Retain the backup data for longer periods.
-* Make Azure a part of their long-term retention needs (instead of tape).
+Azure Backup and System Center Data Protection Manager enable you to:
-This article explains how customers can enable backup and retention policies. Customers who use tapes to address their long-term-retention needs now have a powerful and viable alternative with the availability of this feature. The feature is enabled in the latest release of the Azure Backup (which is available [here](https://aka.ms/azurebackup_agent)). System Center DPM customers must update to, at least, DPM 2012 R2 UR5 before using DPM with the Azure Backup service.
+* Back up data on schedules that best suit your organizational needs.
+* Retain the backup data for longer periods.
+* Make Azure a part of your long-term retention needs (instead of tape).
## What is the Backup Schedule? The backup schedule indicates the frequency of the backup operation. For example, the settings in the following screen indicate that backups are taken daily at 6pm and at midnight.
-![Daily Schedule](./media/backup-azure-backup-cloud-as-tape/dailybackupschedule.png)
+![Screenshot shows the daily schedule option.](./media/backup-azure-backup-cloud-as-tape/dailybackupschedule.png)
-Customers can also schedule a weekly backup. For example, the settings in the following screen indicate that backups are taken every alternate Sunday & Wednesday at 9:30AM and 1:00AM.
+You can also schedule a weekly backup. For example, the settings in the following screen indicate that backups are taken every alternate Sunday & Wednesday at 9:30AM and 1:00AM.
-![Weekly Schedule](./media/backup-azure-backup-cloud-as-tape/weeklybackupschedule.png)
+![Screenshot shows the weekly schedule option.](./media/backup-azure-backup-cloud-as-tape/weeklybackupschedule.png)
## What is the Retention Policy?
-The retention policy specifies the duration for which the backup must be stored. Rather than just specifying a ΓÇ£flat policyΓÇ¥ for all backup points, customers can specify different retention policies based on when the backup is taken. For example, the backup point taken daily, which serves as an operational recovery point, is preserved for 90 days. The backup point taken at the end of each quarter for audit purposes is preserved for a longer duration.
+The retention policy specifies the duration for which the backup must be stored. Rather than just specifying a *flat policy* for all backup points, you can specify different retention policies based on when the backup is taken. For example, the backup point taken daily, which serves as an operational recovery point, is preserved for 90 days. The backup point taken at the end of each quarter for audit purposes is preserved for a longer duration.
-![Retention Policy](./media/backup-azure-backup-cloud-as-tape/retentionpolicy.png)
+![Screenshot shows the retention policy.](./media/backup-azure-backup-cloud-as-tape/retentionpolicy.png)
The total number of "retention points" specified in this policy is 90 (daily points) + 40 (one each quarter for 10 years) = 130.
-## Example ΓÇô Putting both together
+## Example protection policy
-![Sample Screen](./media/backup-azure-backup-cloud-as-tape/samplescreen.png)
+![Screenshot shows the sample protection policy.](./media/backup-azure-backup-cloud-as-tape/samplescreen.png)
1. **Daily retention policy**: Backups taken daily are stored for seven days. 2. **Weekly retention policy**: Backups taken at midnight and 6 PM Saturday are preserved for four weeks. 3. **Monthly retention policy**: Backups taken at midnight and 6 PM on the last Saturday of each month are preserved for 12 months. 4. **Yearly retention policy**: Backups taken at midnight on the last Saturday of every March are preserved for 10 years.
-The total number of ΓÇ£retention pointsΓÇ¥ (points from which a customer can restore data) in the preceding diagram is computed as follows:
+The total number of "retention points" (points from which you can restore data) in the preceding diagram is computed as follows:
-* two points per day for seven days = 14 recovery points
-* two points per week for four weeks = 8 recovery points
-* two points per month for 12 months = 24 recovery points
-* one point per year per 10 years = 10 recovery points
+* Two points per day for seven days = 14 recovery points
+* Two points per week for four weeks = 8 recovery points
+* Two points per month for 12 months = 24 recovery points
+* One point per year per 10 years = 10 recovery points
The total number of recovery points is 56.
The total number of recovery points is 56.
## Advanced configuration
-By selecting **Modify** in the preceding screen, customers have further flexibility in specifying retention schedules.
+By selecting **Modify** in the preceding screen, you have further flexibility in specifying retention schedules.
-![Modify Policy window](./media/backup-azure-backup-cloud-as-tape/modify.png)
+![Screenshot shows the Modify Policy blade.](./media/backup-azure-backup-cloud-as-tape/modify.png)
## Next steps
backup Backup Azure Mysql Flexible Server Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mysql-flexible-server-troubleshoot.md
+
+ Title: Troubleshoot Azure Database for MySQL - Flexible Server backup using Azure Backup
+description: Troubleshooting information for backing up Azure Database for MySQL - Flexible server.
+ Last updated : 03/29/2024+++++
+# Troubleshoot Azure Database for MySQL - Flexible Server backup (preview)
+
+This article provides the recommended actions to troubleshoot the issues you might encounter during the backup or restore of Azure Database for MySQL - Flexible server.
+
+## Common errors for the backup and restore operations
++
+### MySQLFlexOperationFailedUserError
+
+**Error code**: MySQLFlexOperationFailedUserError
+
+**Inner error code**: ResourceGroupNotFound
+
+**Recommended action**: Check if the resource group of the backed-up server is deleted. If it is, we recommend that you stop protection for the backup instance to avoid failures.
+
+### MySQLFlexOperationFailedUserError
+
+**Error code**: MySQLFlexOperationFailedUserError
+
+**Inner error code**: ResourceNotFound
+
+**Recommended action**: Check if the resource being backed up is deleted. If it is, we recommend that you stop protection for the backup instance to avoid failures.
+
+### MySQLFlexOperationFailedUserError
+
+**Error code**: MySQLFlexOperationFailedUserError
+
+**Inner error code**: AuthorizationFailed
+
+**Cause**: Required permissions aren't present to perform the backup operation.
+
+**Recommended action**: Assign the [appropriate permissions](backup-azure-mysql-flexible-server-about.md#permissions-for-an-azure-database-for-mysqlflexible-server-backup).
+
+### MySQLFlexClientError
+
+**Error code**: MySQLFlexClientError
+
+**Inner error code**: BackupAlreadyRunningForServer
+
+**Cause**: A backup operation is already running on the server.
+
+**Recommended action**: Wait for the previous operation to finish before triggering the next backup operation.
+
+### UserErrorMaxConcurrentOperationLimitReached
+
+**Error code**: UserErrorMaxConcurrentOperationLimitReached
+
+**Inner error code**: UserErrorMaxConcurrentOperationLimitReached
+
+**Cause**: The number of concurrent backup operations on the server reached the maximum limit.
+
+**Recommended action**: Trigger the backup again after the currently running backup job finishes.
+
+### UserErrorMSIMissingPermissions
+
+**Error code**: UserErrorMSIMissingPermissions
+
+**Inner error code**: UserErrorMSIMissingPermissions
+
+**Cause**: The required set of permissions isn't present to perform the restore operation.
+
+**Recommended action**: Assign the [appropriate permissions](backup-azure-mysql-flexible-server-about.md#permissions-for-an-azure-database-for-mysqlflexible-server-backup) and retrigger the operation.
+
+## Next steps
+
+- [About long-term retention for Azure Database for MySQL - Flexible Server by using Azure Backup (preview)](backup-azure-mysql-flexible-server-about.md).
backup Backup Dpm Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-dpm-automation.md
Title: Use PowerShell to back up DPM workloads description: Learn how to deploy and manage Azure Backup for Data Protection Manager (DPM) using PowerShell- Previously updated : 01/23/2017 -++ Last updated : 03/29/2024+ # Deploy and manage backup to Azure for Data Protection Manager (DPM) servers using PowerShell
-This article shows you how to use PowerShell to setup Azure Backup on a DPM server, and to manage backup and recovery.
+This article describes how to use PowerShell to set up Azure Backup on a DPM server, and to manage backup and recovery.
-## Setting up the PowerShell environment
+## Set up the PowerShell environment
Before you can use PowerShell to manage backups from Data Protection Manager to Azure, you need to have the right environment in PowerShell. At the start of the PowerShell session, ensure that you run the following command to import the right modules and allow you to correctly reference the DPM cmdlets:
Bandwidth usage can also be controlled with options of ```-WorkHourBandwidth```
Set-DPMCloudSubscriptionSetting -DPMServerName "TestingServer" -SubscriptionSetting $setting -NoThrottle ```
-## Configuring the staging Area
+## Configure the staging area
The Azure Backup agent running on the DPM server needs temporary storage for data restored from the cloud (local staging area). Configure the staging area using the [Set-DPMCloudSubscriptionSetting](/powershell/module/dataprotectionmanager/set-dpmcloudsubscriptionsetting) cmdlet and the ```-StagingAreaPath``` parameter.
In the example above, the staging area will be set to *C:\StagingArea* in the Po
The backup data sent to Azure Backup is encrypted to protect the confidentiality of the data. The encryption passphrase is the "password" to decrypt the data at the time of restore. It's important to keep this information safe and secure once it's set.
-In the example below, the first command converts the string ```passphrase123456789``` to a secure string and assigns the secure string to the variable named ```$Passphrase```. the second command sets the secure string in ```$Passphrase``` as the password for encrypting backups.
+In the example below, the first command converts the string ```passphrase123456789``` to a secure string and assigns the secure string to the variable named ```$Passphrase```. The second command sets the secure string in ```$Passphrase``` as the password for encrypting backups.
```powershell $Passphrase = ConvertTo-SecureString -string "passphrase123456789" -AsPlainText -Force
Set-DPMCloudSubscriptionSetting -DPMServerName "TestingServer" -SubscriptionSett
## Protect data to Azure Backup
-In this section, you'll add a production server to DPM and then protect the data to local DPM storage and then to Azure Backup. In the examples, we'll demonstrate how to back up files and folders. The logic can easily be extended to backup any DPM-supported data source. All your DPM backups are governed by a Protection Group (PG) with four parts:
+In this section, you'll add a production server to DPM and then protect the data to local DPM storage and then to Azure Backup. In the examples, we'll demonstrate how to back up files and folders. The logic can easily be extended to back up any DPM-supported data source. All your DPM backups are governed by a Protection Group (PG) with four parts:
1. **Group members** is a list of all the protectable objects (also known as *Datasources* in DPM) that you want to protect in the same protection group. For example, you may want to protect production VMs in one protection group and SQL Server databases in another protection group as they may have different backup requirements. Before you can back up any datasource on a production server you need to make sure the DPM Agent is installed on the server and is managed by DPM. Follow the steps for [installing the DPM Agent](/system-center/dpm/deploy-dpm-protection-agent) and linking it to the appropriate DPM Server. 2. **Data protection method** specifies the target backup locations - tape, disk, and cloud. In our example, we'll protect data to the local disk and to the cloud. 3. A **backup schedule** that specifies when backups need to be taken and how often the data should be synchronized between the DPM Server and the production server. 4. A **retention schedule** that specifies how long to retain the recovery points in Azure.
-### Creating a protection group
+### Create a protection group
Start by creating a new Protection Group using the [New-DPMProtectionGroup](/powershell/module/dataprotectionmanager/new-dpmprotectiongroup) cmdlet.
The above cmdlet will create a Protection Group named *ProtectGroup01*. An exist
$MPG = Get-ModifiableProtectionGroup $PG ```
-### Adding group members to the Protection Group
+### Add group members to the Protection Group
Each DPM Agent knows the list of datasources on the server that it's installed on. To add a datasource to the Protection Group, the DPM Agent needs to first send a list of the datasources back to the DPM server. One or more datasources are then selected and added to the Protection Group. The PowerShell steps needed to achieve this are:
Add-DPMChildDatasource -ProtectionGroup $MPG -ChildDatasource $DS
Repeat this step as many times as required, until you've added all the chosen datasources to the protection group. You can also start with just one datasource, and complete the workflow for creating the Protection Group, and at a later point add more datasources to the Protection Group.
-### Selecting the data protection method
+### Select the data protection method
-Once the datasources have been added to the Protection Group, the next step is to specify the protection method using the [Set-DPMProtectionType](/powershell/module/dataprotectionmanager/set-dpmprotectiontype) cmdlet. In this example, the Protection Group is setup for local disk and cloud backup. You also need to specify the datasource that you want to protect to cloud using the [Add-DPMChildDatasource](/powershell/module/dataprotectionmanager/add-dpmchilddatasource) cmdlet with -Online flag.
+Once the datasources have been added to the Protection Group, the next step is to specify the protection method using the [Set-DPMProtectionType](/powershell/module/dataprotectionmanager/set-dpmprotectiontype) cmdlet. In this example, the Protection Group is set up for local disk and cloud backup. You also need to specify the datasource that you want to protect to cloud using the [Add-DPMChildDatasource](/powershell/module/dataprotectionmanager/add-dpmchilddatasource) cmdlet with -Online flag.
```powershell Set-DPMProtectionType -ProtectionGroup $MPG -ShortTerm Disk ΓÇôLongTerm Online Add-DPMChildDatasource -ProtectionGroup $MPG -ChildDatasource $DS ΓÇôOnline ```
-### Setting the retention range
+### Set the retention range
Set the retention for the backup points using the [Set-DPMPolicyObjective](/powershell/module/dataprotectionmanager/set-dpmpolicyobjective) cmdlet. While it might seem odd to set the retention before the backup schedule has been defined, using the ```Set-DPMPolicyObjective``` cmdlet automatically sets a default backup schedule that can then be modified. It's always possible to set the backup schedule first and the retention policy after.
So if you need to modify the weekly schedule, you need to refer to the ```$onlin
### Initial backup
-When backing up a datasource for the first time, DPM needs creates initial replica that creates a full copy of the datasource to be protected on DPM replica volume. This activity can either be scheduled for a specific time, or can be triggered manually, using the [Set-DPMReplicaCreationMethod](/powershell/module/dataprotectionmanager/set-dpmreplicacreationmethod) cmdlet with the parameter ```-NOW```.
+When you back up a datasource for the first time, DPM needs to create an initial replica, which is a full copy of the datasource to be protected, on the DPM replica volume. This activity can either be scheduled for a specific time or triggered manually by using the [Set-DPMReplicaCreationMethod](/powershell/module/dataprotectionmanager/set-dpmreplicacreationmethod) cmdlet with the ```-NOW``` parameter.
```powershell Set-DPMReplicaCreationMethod -ProtectionGroup $MPG -NOW ```
-### Changing the size of DPM Replica & recovery point volume
+### Change the size of DPM Replica & recovery point volume
You can also change the size of DPM Replica volume and Shadow Copy volume using [Set-DPMDatasourceDiskAllocation](/powershell/module/dataprotectionmanager/set-dpmdatasourcediskallocation) cmdlet as in the following example: Get-DatasourceDiskAllocation -Datasource $DS Set-DatasourceDiskAllocation -Datasource $DS -ProtectionGroup $MPG -manual -ReplicaArea (2gb) -ShadowCopyArea (2gb)
-### Committing the changes to the Protection Group
+### Commit the changes to the Protection Group
Finally, the changes need to be committed before DPM can take the backup per the new Protection Group configuration. This can be achieved using the [Set-DPMProtectionGroup](/powershell/module/dataprotectionmanager/set-dpmprotectiongroup) cmdlet.
backup Backup Mabs System State And Bmr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-system-state-and-bmr.md
Title: System state and bare-metal recovery protection
+ Title: System state and bare-metal recovery protection for Azure Backup
description: Use Azure Backup Server to back up your system state and provide bare-metal recovery (BMR) protection.- Previously updated : 05/15/2017++ Last updated : 03/29/2024 + # Back up system state and restore to bare metal by using Azure Backup Server
+This article describes how to back up system state and restore to bare metal by using Azure Backup Server.
+ Azure Backup Server backs up system state and provides bare-metal recovery (BMR) protection. * **System state backup**: Backs up operating system files. This backup allows you to recover when a computer starts, but system files and the registry are lost. A system state backup includes the following elements:
Azure Backup Server backs up system state and provides bare-metal recovery (BMR)
* Computer that runs certificate * **Bare-metal backup**: Backs up operating system files and all data on critical volumes, except for user data. By definition, a BMR backup includes a system state backup. It provides protection when a computer won't start and you have to recover everything.
+## Supported backup and restore scenarios
+ The following table summarizes what you can back up and recover. For information about app versions that system state and BMR can protect, see [What does Azure Backup Server back up?](backup-mabs-protection-matrix.md). |Backup|Issue|Recover from Azure Backup Server backup|Recover from system state backup|BMR|
The following table summarizes what you can back up and recover. For information
|SQL Server/Exchange<br /><br />Azure Backup Server app backup<br /><br />BMR/system state backup|Lost server (database/transaction logs intact)|N|N|Y| |SQL Server/Exchange<br /><br />Azure Backup Server app backup<br /><br />BMR/system state backup|Lost server (database/transaction logs lost)|N|N|Y<br /><br />BMR recovery, followed by regular Azure Backup Server recovery|
-## How system state backup works
+## System state backup workflow
When a system state backup runs, Backup Server communicates with Windows Server Backup to request a backup of the server's system state. By default, Backup Server and Windows Server Backup use the drive that has the most available free space. Information about this drive is saved in the *PSDataSourceConfig.xml* file.
You can customize the drive that Backup Server uses for the system state backup:
If a protection group is set to protect the system state of the computer, then run a consistency check. If an alert is generated, then select **Modify protection group** in the alert, and then complete the pages in the wizard. Then run another consistency check.
-If the protection server is in a cluster, a cluster drive might be selected as the drive that has the most free space. If that drive ownership is switched to another node and a system state backup runs, then the drive is unavailable and the backup fails. In this scenario, modify *PSDataSourceConfig.xml* to point to a local drive.
+If the protection server is in a cluster, a cluster drive might be selected as the drive that has the most free space. If that drive ownership is switched to another node and a system state backup runs, then the drive is unavailable and the backup fails. In this scenario, modify *PSDataSourceConfig.xml* to point to a local drive.
Next, Windows Server Backup creates a folder called *WindowsImageBackup* in the root of the restore folder. As Windows Server Backup creates the backup, all the data is placed in this folder. When the backup finishes, the file is transferred to the Backup Server computer. Note the following information:
-* This folder and its contents aren't cleaned up when the backup or transfer finishes. The best way to think of this is that the space is reserved for the next time a backup finishes.
+* This folder and its contents aren't cleaned up when the backup or transfer finishes. This space is reserved for the next time a backup job completes.
* The folder is created for every backup. The time and date stamp reflect the time of your last system state backup.
-## How BMR backup works
+## BMR backup workflow
For BMR (including a system state backup), the backup job is saved directly to a share on the Backup Server computer. It's not saved to a folder on the protected server.
When the backup finishes, the file is transferred to the Backup Server computer.
* A Backup Server computer can't protect itself for BMR.
-* Short-term protection to tape (disk to tape, or D2T) isn't supported for BMR. Long-term storage to tape (disk to disk to tape, or D2D2T) is supported.
+* Short-term protection to tape (disk to tape, or D2T) isn't supported for BMR. Long-term storage to tape (disk to disk to tape, or D2D2T) is supported.
* For BMR protection, Windows Server Backup must be installed on the protected computer. * For BMR protection, unlike for system state protection, Backup Server has no space requirements on the protected computer. Windows Server Backup directly transfers backups to the Backup Server computer. The backup transfer job doesn't appear in the Backup Server **Jobs** view.
-* Backup Server reserves 30 GB of space on the replica volume for BMR. You can change this space allotment on the **Disk Allocation** page in the Modify Protection Group Wizard. Or you can use the Get-DatasourceDiskAllocation and Set-DatasourceDiskAllocation PowerShell cmdlets. On the recovery point volume, BMR protection requires about 6 GB for a retention of five days.
+* Backup Server reserves 30 GB of space on the replica volume for BMR. You can change this space allotment on the **Disk Allocation** blade in the Modify Protection Group Wizard. Or you can use the Get-DatasourceDiskAllocation and Set-DatasourceDiskAllocation PowerShell cmdlets. On the recovery point volume, BMR protection requires about 6 GB for a retention of five days.
* You can't reduce the replica volume size to less than 15 GB. * Backup Server doesn't calculate the size of the BMR data source. It assumes 30 GB for all servers. Change the value based on the size of BMR backups that you expect in your environment. You can roughly calculate the size of a BMR backup as the sum of used space on all critical volumes. Critical volumes = boot volume + system volume + volume hosting system state data, such as Active Directory.
-* If you change from system state protection to BMR protection, then BMR protection requires less space on the *recovery point volume*. However, the extra space on the volume isn't reclaimed. You can manually shrink the volume size on the **Modify Disk Allocation** page of the Modify Protection Group Wizard. Or you can use the Get-DatasourceDiskAllocation and Set-DatasourceDiskAllocation PowerShell cmdlets.
+* If you change from system state protection to BMR protection, then BMR protection requires less space on the *recovery point volume*. However, the extra space on the volume isn't reclaimed. You can manually shrink the volume size on the **Modify Disk Allocation** blade of the Modify Protection Group Wizard. Or you can use the Get-DatasourceDiskAllocation and Set-DatasourceDiskAllocation PowerShell cmdlets.
If you change from system state protection to BMR protection, then BMR protection requires more space on the *replica volume*. The volume is automatically extended. If you want to change the default space allocations, then use the Modify-DiskAllocation PowerShell cmdlet.
When the backup finishes, the file is transferred to the Backup Server computer.
## Back up system state and bare metal
-To back up system state and bare metal:
+To back up system state and bare metal, follow these steps:
1. To open the Create New Protection Group Wizard, in the Backup Server Administrator Console, select **Protection** > **Actions** > **Create Protection Group**.
-1. On the **Select Protection Group Type** page, select **Servers**, and then select **Next**.
+1. On the **Select Protection Group Type** blade, select **Servers**, and then select **Next**.
-1. On the **Select Group Members** page, expand the computer, and then select either **BMR** or **system state**.
+1. On the **Select Group Members** blade, expand the computer, and then select either **BMR** or **system state**.
Remember that you can't protect both BMR and system state for the same computer in different groups. Also, when you select BMR, system state is automatically enabled. For more information, see [Deploy protection groups](/system-center/dpm/create-dpm-protection-groups).
-1. On the **Select Data Protection Method** page, choose how to handle short-term backup and long-term backup.
+1. On the **Select Data Protection Method** blade, choose how to handle short-term backup and long-term backup.
Short-term backup is always to disk first, with the option of backing up from the disk to Azure by using Azure Backup (short-term or long-term). An alternative to long-term backup to the cloud is to set up long-term backup to a standalone tape device or tape library that's connected to Backup Server.
-1. On the **Select Short-Term Goals** page, choose how to back up to short-term storage on disk:
+1. On the **Select Short-Term Goals** blade, choose how to back up to short-term storage on disk:
* For **Retention range**, choose how long to keep the data on disk. * For **Synchronization frequency**, choose how often to run an incremental backup to disk. If you don't want to set a backup interval, you can select **Just before a recovery point**. Backup Server will run an express full backup just before each recovery point is scheduled.
-1. If you want to store data on tape for long-term storage, then on the **Specify Long-Term Goals** page, choose how long to keep tape data (1 to 99 years).
+1. If you want to store data on tape for long-term storage, then on the **Specify Long-Term Goals** blade, choose how long to keep tape data (1 to 99 years).
1. For **Frequency of backup**, choose how often to run backup to tape. The frequency is based on the retention range you selected: * When the retention range is 1 to 99 years, you can back up daily, weekly, biweekly, monthly, quarterly, half-yearly, or yearly. * When the retention range is 1 to 11 months, you can back up daily, weekly, biweekly, or monthly. * When the retention range is 1 to 4 weeks, you can back up daily or weekly.
- 1. On the **Select Tape and Library Details** page, select the tape and library to use. Also choose whether data should be compressed and encrypted.
+ 1. On the **Select Tape and Library Details** blade, select the tape and library to use. Also choose whether data should be compressed and encrypted.
-1. On the **Review Disk Allocation** page, review the storage pool disk space that's available for the protection group.
+1. On the **Review Disk Allocation** blade, review the storage pool disk space that's available for the protection group.
* **Total Data size** is the size of the data you want to back up. * **Disk space to be provisioned on Azure Backup Server** is the space that Backup Server recommends for the protection group. Backup Server uses these settings to choose the ideal backup volume. You can edit the backup volume choices in **Disk allocation details**. * For workloads, in the drop-down menu, select the preferred storage. Your edits change the values for **Total Storage** and **Free Storage** in the **Available Disk Storage** pane. Underprovisioned space is the amount of storage that Backup Server suggests that you add to the volume to ensure smooth backups.
-1. On the **Choose Replica Creation Method** page, select how to handle the initial full-data replication.
+1. On the **Choose Replica Creation Method** blade, select how to handle the initial full-data replication.
If you choose to replicate over the network, we recommend that you choose an off-peak time. For large amounts of data or for network conditions that are less than optimal, consider replicating the data offline by using removable media.
-1. On the **Choose Consistency Check Options** page, select how to automate consistency checks.
+1. On the **Choose Consistency Check Options** blade, select how to automate consistency checks.
You can choose to run a check only when replica data becomes inconsistent, or on a schedule. If you don't want to configure automatic consistency checking, then you can run a manual check at any time. To run a manual check, in the **Protection** area of the Backup Server Administrator Console, right-click the protection group, and then select **Perform Consistency Check**.
-1. If you chose to back up to the cloud by using Azure Backup, on the **Specify Online Protection Data** page, select the workloads that you want to back up to Azure.
+1. If you chose to back up to the cloud by using Azure Backup, on the **Specify Online Protection Data** blade, select the workloads that you want to back up to Azure.
-1. On the **Specify Online Backup Schedule** page, select how often to incrementally back up to Azure.
+1. On the **Specify Online Backup Schedule** blade, select how often to incrementally back up to Azure.
You can schedule backups to run every day, week, month, and year. You can also select the time and date at which backups should run. Backups can occur up to twice a day. Each time a backup runs, a data recovery point is created in Azure from the copy of the backup data that's stored on the Backup Server disk.
-1. On the **Specify Online Retention Policy** page, select how the recovery points that are created from the daily, weekly, monthly, and yearly backups are kept in Azure.
+1. On the **Specify Online Retention Policy** blade, select how the recovery points that are created from the daily, weekly, monthly, and yearly backups are kept in Azure.
-1. On the **Choose Online Replication** page, select how the initial full replication of data occurs.
+1. On the **Choose Online Replication** blade, select how the initial full replication of data occurs.
You can replicate over the network or back up offline (offline seeding). An offline backup uses the Azure Import feature. For more information, see [Offline backup workflow in Azure Backup](offline-backup-azure-data-box.md).
-1. On the **Summary** page, review your settings. After you select **Create Group**, initial replication of the data occurs. When the data replication finishes, on the **Status** page, the protection group status is **OK**. Backups then happen according to the protection group settings.
+1. On the **Summary** blade, review your settings. After you select **Create Group**, initial replication of the data occurs. When the data replication finishes, on the **Status** blade, the protection group status is **OK**. Backups then happen according to the protection group settings.
## Recover system state or BMR
You can recover BMR or system state to a network location. If you backed up BMR,
### Restore BMR
-To run recovery on the Backup Server computer:
+To run recovery on the Backup Server computer, follow these steps:
-1. In the **Recovery** pane, find the computer that you want to recover. Then select **Bare Metal Recovery**.
+1. On the **Recovery** blade, find the computer that you want to recover. Then select **Bare Metal Recovery**.
1. Available recovery points are indicated in bold on the calendar. Select the date and time for the recovery point that you want to use.
-1. On the **Select Recovery Type** page, select **Copy to a network folder**.
+1. On the **Select Recovery Type** blade, select **Copy to a network folder**.
-1. On the **Specify Destination** page, select the destination for the copied data.
+1. On the **Specify Destination** blade, select the destination for the copied data.
Remember, the destination needs to have enough room for the data. We recommend that you create a new folder for the destination.
-1. On the **Specify Recovery Options** page, select the security settings. Then select whether to use storage area network (SAN)-based hardware snapshots, for quicker recovery. This option is available only if:
+1. On the **Specify Recovery Options** blade, select the security settings. Then select whether to use storage area network (SAN)-based hardware snapshots, for quicker recovery. This option is available only if:
* You have a SAN that provides this functionality. * You can create and split a clone to make it writable. * The protected computer and Backup Server computer are connected to the same network. 1. Set up notification options.
-1. On the **Confirmation** page, select **Recover**.
+1. On the **Confirmation** blade, select **Recover**.
To set up the share location:
To restore the system:
1. Start the computer on which you want to restore the image by using the Windows DVD for the system you're restoring.
-1. On the first page, verify the settings for language and locale. On the **Install** page, select **Repair your computer**.
+1. On the first blade, verify the settings for language and locale. On the **Install** blade, select **Repair your computer**.
-1. On the **System Recovery Options** page, select **Restore your computer using a system image that you created earlier**.
+1. On the **System Recovery Options** blade, select **Restore your computer using a system image that you created earlier**.
-1. On the **Select a system image backup** page, select **Select a system image** > **Advanced** > **Search for a system image on the network**. If a warning appears, select **Yes**. Go to the share path, enter the credentials, and then select the recovery point. The system scans for specific backups that are available in that recovery point. Select the recovery point that you want to use.
+1. On the **Select a system image backup** blade, select **Select a system image** > **Advanced** > **Search for a system image on the network**. If a warning appears, select **Yes**. Go to the share path, enter the credentials, and then select the recovery point. The system scans for specific backups that are available in that recovery point. Select the recovery point that you want to use.
-1. On the **Choose how to restore the backup** page, select **Format and repartition disks**. On the next page, verify the settings.
+1. On the **Choose how to restore the backup** blade, select **Format and repartition disks**. On the next blade, verify the settings.
1. To begin the restore, select **Finish**. A restart is required.
To run recovery in Backup Server:
1. Available recovery points are indicated in bold on the calendar. Select the date and time for the recovery point that you want to use.
-1. On the **Select Recovery Type** page, select **Copy to a network folder**.
+1. On the **Select Recovery Type** blade, select **Copy to a network folder**.
-1. On the **Specify Destination** page, select where to copy the data.
+1. On the **Specify Destination** blade, select where to copy the data.
Remember, the destination you select needs to have enough room for the data. We recommend that you create a new folder for the destination.
-1. On the **Specify Recovery Options** page, select the security settings. Then select whether to use SAN-based hardware snapshots, for quicker recovery. This option is available only if:
+1. On the **Specify Recovery Options** blade, select the security settings. Then select whether to use SAN-based hardware snapshots, for quicker recovery. This option is available only if:
* You have a SAN that provides this functionality. * You can create and split a clone to make it writable. * The protected computer and Backup Server server are connected to the same network. 1. Set up notification options.
-1. On the **Confirmation** page, select **Recover**.
+1. On the **Confirmation** blade, select **Recover**.
To run Windows Server Backup: 1. Select **Actions** > **Recover** > **This Server** > **Next**.
-1. Select **Another Server**, select the **Specify Location Type** page, and then select **Remote shared folder**. Enter the path to the folder that contains the recovery point.
+1. Select **Another Server**, select the **Specify Location Type** blade, and then select **Remote shared folder**. Enter the path to the folder that contains the recovery point.
-1. On the **Select Recovery Type** page, select **System state**.
+1. On the **Select Recovery Type** blade, select **System state**.
-1. On the **Select Location for System State Recovery** page, select **Original Location**.
+1. On the **Select Location for System State Recovery** blade, select **Original Location**.
-1. On the **Confirmation** page, select **Recover**.
+1. On the **Confirmation** blade, select **Recover**.
1. After the restore, restart the server.
backup Blob Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-overview.md
You won't incur any management charges or instance fee when using operational ba
- Retention of data because of [Soft delete for blobs](../storage/blobs/soft-delete-blob-overview.md), [Change feed support in Azure Blob Storage](../storage/blobs/storage-blob-change-feed.md), and [Blob versioning](../storage/blobs/versioning-overview.md).
-# [Vaukted backup (preview)](#tab/vaulted-backup)
+# [Vaulted backup (preview)](#tab/vaulted-backup)
You won't incur backup storage charges or instance fees during the preview. However, you'll incur the source side cost, [associated with Object replication](../storage/blobs/object-replication-overview.md#billing), on the backed-up source account.
batch Batch Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-diagnostics.md
- Title: Metrics, alerts, and diagnostic logs
-description: Learn how to record and analyze diagnostic log events for Azure Batch account resources like pools and tasks.
- Previously updated : 04/05/2023-
-# Batch metrics, alerts, and logs for diagnostic evaluation and monitoring
-
-Azure Monitor collects [metrics](../azure-monitor/essentials/data-platform-metrics.md) and [diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md) for resources in your Azure Batch account.
-
-You can collect and consume this data in various ways to monitor your Batch account and diagnose issues. You can also configure [metric alerts](../azure-monitor/alerts/alerts-overview.md) so you receive notifications when a metric reaches a specified value.
-
-## Batch metrics
-
-[Metrics](../azure-monitor/essentials/data-platform-metrics.md) are Azure data (also called performance counters) that your Azure resources emit, and the Azure Monitor service consumes that data. Examples of metrics in a Batch account are Pool Create Events, Low-Priority Node Count, and Task Complete Events. These metrics can help identify trends and can be used for data analysis.
-
-See the [list of supported Batch metrics](../azure-monitor/essentials/metrics-supported.md#microsoftbatchbatchaccounts).
-
-Metrics are:
--- Enabled by default in each Batch account without extra configuration.-- Generated every 1 minute.-- Not persisted automatically, but they have a 30-day rolling history. You can persist activity metrics as part of diagnostic logging.-
-## View Batch metrics
-
-In the Azure portal, the **Overview** page for the Batch account shows key node, core, and task metrics by default.
-
-To view other metrics for a Batch account:
-
-1. In the Azure portal, search and select **Batch accounts**, and then select the name of your Batch account.
-1. Under **Monitoring** in the left side navigation menu, select **Metrics**.
-1. Select **Add metric** and then choose a metric from the dropdown list.
-1. Select an **Aggregation** option for the metric. For count-based metrics (like "Dedicated Core Count" or "Low-Priority Node Count"), use the **Avg** aggregation. For event-based metrics (like "Pool Resize Complete Events"), use the **Count** aggregation. Avoid using the **Sum** aggregation, which adds up the values of all data points received over the period of the chart.
-1. To add other metrics, repeat steps 3 and 4.
-
- :::image type="content" source="./media/batch-diagnostics/add-metric.png" alt-text="Screenshot of the metrics page of a batch account in the Azure portal. Metrics is highlighted in the left side navigation menu. The Metric and Aggregation options for a metric are highlighted as well.":::
--
-You can also retrieve metrics programmatically with the Azure Monitor APIs. For an example, see [Retrieve Azure Monitor metrics with .NET](/samples/azure-samples/monitor-dotnet-metrics-api/monitor-dotnet-metrics-api/).
-
-> [!NOTE]
-> Metrics emitted in the last 3 minutes might still be aggregating, so values might be under-reported during this time frame. Metric delivery is not guaranteed and might be affected by out-of-order delivery, data loss, or duplication.
-
-## Batch metric alerts
-
-You can configure near real-time metric alerts that trigger when the value of a specified metric crosses a threshold that you assign. The alert generates a notification when the alert is *Activated* (when the threshold is crossed and the alert condition is met). The alert also generates an alert when it's *Resolved* (when the threshold is crossed again and the condition is no longer met).
-
-Because metric delivery can be subject to inconsistencies such as out-of-order delivery, data loss, or duplication, you should avoid alerts that trigger on a single data point. Instead, use thresholds to account for any inconsistencies such as out-of-order delivery, data loss, and duplication over a period of time.
-
-For example, you might want to configure a metric alert when your low priority core count falls to a certain level. You could then use this alert to adjust the composition of your pools. For best results, set a period of 10 or more minutes where the alert will be triggered if the average low priority core count falls lower than the threshold value for the entire period. This time period allows for metrics to aggregate so that you get more accurate results.
-
-To configure a metric alert in the Azure portal:
-
-1. In the Azure portal, search and select **Batch accounts**, and then select the name of your Batch account.
-1. Under **Monitoring** in the left side navigation menu, select **Alerts**, and then select **Create** > **Alert Rule**.
-1. On the **Condition page**, select a **Signal** from the dropdown list.
-1. Enter the logic for your **Alert Rule** in the fields specific to the **Signal** you choose. The following screenshot shows the options for **Task Fail Events**.
-
- :::image type="content" source="./media/batch-diagnostics/create-alert-rule.png" alt-text="Screenshot of the Conditions tab on the Create and alert rule page." lightbox="./media/batch-diagnostics/create-alert-rule-lightbox.png":::
-
-1. Enter the name for your alert on the **Details** page.
-1. Then select **Review + create** > **Create**.
-
-For more information about creating metric alerts, see [Types of Azure Monitor alerts](../azure-monitor/alerts/alerts-metric-overview.md) and [Create a new alert rule](../azure-monitor/alerts/alerts-metric.md).
-
-You can also configure a near real-time alert by using the [Azure Monitor REST API](/rest/api/monitor/). For more information, see [Overview of alerts in Microsoft Azure](../azure-monitor/alerts/alerts-overview.md). To include job, task, or pool-specific information in your alerts, see [Create a new alert rule](../azure-monitor/alerts/alerts-log.md).
-
-## Batch diagnostics
-
-[Diagnostic logs](../azure-monitor/essentials/platform-logs-overview.md) contain information emitted by Azure resources that describe the operation of each resource. For Batch, you can collect the following logs:
--- **ServiceLog**: [events emitted by the Batch service](#service-log-events) during the lifetime of an individual resource such as a pool or task.-- **AllMetrics**: metrics at the Batch account level.-
-You must explicitly enable diagnostic settings for each Batch account you want to monitor.
-
-### Log destination options
-
-A common scenario is to select an Azure Storage account as the log destination. To store logs in Azure Storage, create the account before enabling collection of logs. If you associated a storage account with your Batch account, you can choose that account as the log destination.
-
-Alternately, you can:
--- Stream Batch diagnostic log events to [Azure Event Hubs](../event-hubs/event-hubs-about.md). Event Hubs can ingest millions of events per second, which you can then transform and store by using any real-time analytics provider.-- Send diagnostic logs to [Azure Monitor logs](../azure-monitor/logs/log-query-overview.md), where you can analyze them or export them for analysis in Power BI or Excel.-
-> [!NOTE]
-> You might incur additional costs to store or process diagnostic log data with Azure services.
-
-### Enable collection of Batch diagnostic logs
-
-To create a new diagnostic setting in the Azure portal, use the following steps.
-
-1. In the Azure portal, search and select **Batch accounts**, and then select the name of your Batch account.
-2. Under **Monitoring** in the left side navigation menu, select **Diagnostic settings**.
-3. In **Diagnostic settings**, select **Add diagnostic setting**.
-4. Enter a name for the setting.
-5. Select a destination: **Send to Log Analytics workspace**, **Archive to a storage account**, **Stream to an event hub**, or **Send to partner solution**. If you select a storage account, you can optionally select the number of days to retain data for each log. If you don't specify the number of days for retention, data is retained during the life of the storage account.
-6. Select any options in either the **Logs** or **Metrics** section.
-7. Select **Save** to create the diagnostic setting.
-
-The following screenshot shows an example diagnostic setting called *My diagnostic setting*. It sends **allLogs** and **AllMetrics** to a Log Analytics workspace.
--
-You can also enable log collection by [creating diagnostic settings in the Azure portal](../azure-monitor/essentials/diagnostic-settings.md) by using a [Resource Manager template](../azure-monitor/essentials/resource-manager-diagnostic-settings.md). You can also use Azure PowerShell or the Azure CLI. For more information, see [Overview of Azure platform logs](../azure-monitor/essentials/platform-logs-overview.md).
-
-### Access diagnostics logs in storage
-
-If you [archive Batch diagnostic logs in a storage account](../azure-monitor/essentials/resource-logs.md#send-to-azure-storage), a storage container is created in the storage account as soon as a related event occurs. Blobs are created according to the following naming pattern:
-
-```json
-insights-{log category name}/resourceId=/SUBSCRIPTIONS/{subscription ID}/
-RESOURCEGROUPS/{resource group name}/PROVIDERS/MICROSOFT.BATCH/
-BATCHACCOUNTS/{Batch account name}/y={four-digit numeric year}/
-m={two-digit numeric month}/d={two-digit numeric day}/
-h={two-digit 24-hour clock hour}/m=00/PT1H.json
-```
-
-For example:
-
-```json
-insights-metrics-pt1m/resourceId=/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/
-RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.BATCH/
-BATCHACCOUNTS/MYBATCHACCOUNT/y=2018/m=03/d=05/h=22/m=00/PT1H.json
-```
-
-Each `PT1H.json` blob file contains JSON-formatted events that occurred within the hour specified in the blob URL (for example, `h=12`). During the present hour, events are appended to the `PT1H.json` file as they occur. The minute value (`m=00`) is always `00`, since diagnostic log events are broken into individual blobs per hour. (All times are in UTC.)
-
-The following example shows a `PoolResizeCompleteEvent` entry in a `PT1H.json` log file. It includes information about the current and target number of dedicated and low-priority nodes, as well as the start and end time of the operation:
-
-```json
-{ "Tenant": "65298bc2729a4c93b11c00ad7e660501", "time": "2019-08-22T20:59:13.5698778Z", "resourceId": "/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.BATCH/BATCHACCOUNTS/MYBATCHACCOUNT/", "category": "ServiceLog", "operationName": "PoolResizeCompleteEvent", "operationVersion": "2017-06-01", "properties": {"id":"MYPOOLID","nodeDeallocationOption":"Requeue","currentDedicatedNodes":10,"targetDedicatedNodes":100,"currentLowPriorityNodes":0,"targetLowPriorityNodes":0,"enableAutoScale":false,"isAutoPool":false,"startTime":"2019-08-22 20:50:59.522","endTime":"2019-08-22 20:59:12.489","resultCode":"Success","resultMessage":"The operation succeeded"}}
-```
-
-To access the logs in your storage account programmatically, use the [Storage APIs](/rest/api/storageservices/).
-
-### Service log events
-
-Azure Batch service logs contain events emitted by the Batch service during the lifetime of an individual Batch resource, such as a pool or task. Each event emitted by Batch is logged in JSON format. The following example shows the body of a sample **pool create event**:
-
-```json
-{
- "id": "myPool1",
- "displayName": "Production Pool",
- "vmSize": "Standard_F1s",
- "imageType": "VirtualMachineConfiguration",
- "cloudServiceConfiguration": {
- "osFamily": "3",
- "targetOsVersion": "*"
- },
- "networkConfiguration": {
- "subnetId": " "
- },
- "virtualMachineConfiguration": {
- "imageReference": {
- "publisher": " ",
- "offer": " ",
- "sku": " ",
- "version": " "
- },
- "nodeAgentId": " "
- },
- "resizeTimeout": "300000",
- "targetDedicatedNodes": 2,
- "targetLowPriorityNodes": 2,
- "taskSlotsPerNode": 1,
- "vmFillType": "Spread",
- "enableAutoScale": false,
- "enableInterNodeCommunication": false,
- "isAutoPool": false
-}
-```
-
-The Batch Service emits the following log events:
--- [Pool create](batch-pool-create-event.md)-- [Pool delete start](batch-pool-delete-start-event.md)-- [Pool delete complete](batch-pool-delete-complete-event.md)-- [Pool resize start](batch-pool-resize-start-event.md)-- [Pool resize complete](batch-pool-resize-complete-event.md)-- [Pool autoscale](batch-pool-autoscale-event.md)-- [Task start](batch-task-start-event.md)-- [Task complete](batch-task-complete-event.md)-- [Task fail](batch-task-fail-event.md)-- [Task schedule fail](batch-task-schedule-fail-event.md)-
-## Next steps
--- [Overview of Batch APIs and tools](batch-apis-tools.md)-- [Monitor Batch solutions](monitoring-overview.md)
batch Monitor Batch Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/monitor-batch-reference.md
+
+ Title: Monitoring data reference for Azure Batch
+description: This article contains important reference material you need when you monitor Azure Batch.
Last updated : 03/28/2024+++++
+# Azure Batch monitoring data reference
+
+
+See [Monitor Azure Batch](monitor-batch.md) for details on the data you can collect for Azure Batch and how to use it.
++
+### Supported metrics for Microsoft.Batch/batchaccounts
+The following table lists the metrics available for the Microsoft.Batch/batchaccounts resource type.
+++
+- poolId
+- jobId
++
+### Supported resource logs for Microsoft.Batch/batchaccounts
+
+### Service log events
+
+Batch service logs contain events emitted by the Batch service during the lifetime of an individual Batch resource, such as a pool or task. The Batch service emits the following log events:
+
+- [Pool create](batch-pool-create-event.md)
+- [Pool delete start](batch-pool-delete-start-event.md)
+- [Pool delete complete](batch-pool-delete-complete-event.md)
+- [Pool resize start](batch-pool-resize-start-event.md)
+- [Pool resize complete](batch-pool-resize-complete-event.md)
+- [Pool autoscale](batch-pool-autoscale-event.md)
+- [Task start](batch-task-start-event.md)
+- [Task complete](batch-task-complete-event.md)
+- [Task fail](batch-task-fail-event.md)
+- [Task schedule fail](batch-task-schedule-fail-event.md)
+
+Each event emitted by Batch is logged in JSON format. The following example shows the body of a sample **pool create event**:
+
+```json
+{
+ "id": "myPool1",
+ "displayName": "Production Pool",
+ "vmSize": "Standard_F1s",
+ "imageType": "VirtualMachineConfiguration",
+ "cloudServiceConfiguration": {
+ "osFamily": "3",
+ "targetOsVersion": "*"
+ },
+ "networkConfiguration": {
+ "subnetId": " "
+ },
+ "virtualMachineConfiguration": {
+ "imageReference": {
+ "publisher": " ",
+ "offer": " ",
+ "sku": " ",
+ "version": " "
+ },
+ "nodeAgentId": " "
+ },
+ "resizeTimeout": "300000",
+ "targetDedicatedNodes": 2,
+ "targetLowPriorityNodes": 2,
+ "taskSlotsPerNode": 1,
+ "vmFillType": "Spread",
+ "enableAutoScale": false,
+ "enableInterNodeCommunication": false,
+ "isAutoPool": false
+}
+```
+
+### Batch Accounts
+microsoft.batch/batchaccounts
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/AzureMetrics#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/AzureDiagnostics#columns)
+
+- [Microsoft.Batch resource provider operations](/azure/role-based-access-control/permissions/compute#microsoftbatch)
+
+## Related content
+
+- See [Monitor Batch](monitor-batch.md) for a description of monitoring Batch.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
+- Learn about the [Batch APIs and tools](batch-apis-tools.md) available for building Batch solutions.
batch Monitor Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/monitor-batch.md
+
+ Title: Monitor Azure Batch
+description: Start here to learn how to monitor Azure Batch.
Last updated : 03/28/2024+++++
+# Monitor Azure Batch
+++
+For more information about the resource types for Batch, see [Batch monitoring data reference](monitor-batch-reference.md).
++
+### Access diagnostics logs in storage
+
+If you [archive Batch diagnostic logs in a storage account](/azure/azure-monitor/essentials/resource-logs#send-to-azure-storage), a storage container is created in the storage account as soon as a related event occurs. Blobs are created according to the following naming pattern:
+
+```json
+insights-{log category name}/resourceId=/SUBSCRIPTIONS/{subscription ID}/
+RESOURCEGROUPS/{resource group name}/PROVIDERS/MICROSOFT.BATCH/
+BATCHACCOUNTS/{Batch account name}/y={four-digit numeric year}/
+m={two-digit numeric month}/d={two-digit numeric day}/
+h={two-digit 24-hour clock hour}/m=00/PT1H.json
+```
+
+For example:
+
+```json
+insights-metrics-pt1m/resourceId=/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/
+RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.BATCH/
+BATCHACCOUNTS/MYBATCHACCOUNT/y=2018/m=03/d=05/h=22/m=00/PT1H.json
+```
+
+Each *PT1H.json* blob file contains JSON-formatted events that occurred within the hour specified in the blob URL (for example, `h=12`). During the present hour, events are appended to the *PT1H.json* file as they occur. The minute value (`m=00`) is always `00`, since diagnostic log events are broken into individual blobs per hour. All times are in UTC.
+
+The following example shows a `PoolResizeCompleteEvent` entry in a *PT1H.json* log file. The entry includes information about the current and target number of dedicated and low-priority nodes and the start and end time of the operation.
+
+```json
+{ "Tenant": "65298bc2729a4c93b11c00ad7e660501", "time": "2019-08-22T20:59:13.5698778Z", "resourceId": "/SUBSCRIPTIONS/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.BATCH/BATCHACCOUNTS/MYBATCHACCOUNT/", "category": "ServiceLog", "operationName": "PoolResizeCompleteEvent", "operationVersion": "2017-06-01", "properties": {"id":"MYPOOLID","nodeDeallocationOption":"Requeue","currentDedicatedNodes":10,"targetDedicatedNodes":100,"currentLowPriorityNodes":0,"targetLowPriorityNodes":0,"enableAutoScale":false,"isAutoPool":false,"startTime":"2019-08-22 20:50:59.522","endTime":"2019-08-22 20:59:12.489","resultCode":"Success","resultMessage":"The operation succeeded"}}
+```
+
+To access the logs in your storage account programmatically, use the [Storage APIs](/rest/api/storageservices).
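+
+For example, here's a minimal sketch that uses the Az.Storage PowerShell cmdlets rather than the raw REST APIs. The storage account name, key placeholder, container name (derived from the `insights-{log category name}` pattern above), date filter, and destination folder are all assumptions for illustration:
+
+```powershell
+# Connect to the storage account that receives the archived diagnostic logs
+$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<storage-account-key>"
+
+# List the ServiceLog blobs for a given day and download them locally
+Get-AzStorageBlob -Container "insights-logs-servicelog" -Context $ctx |
+    Where-Object { $_.Name -like "*y=2024/m=03/d=05*" } |
+    Get-AzStorageBlobContent -Destination "C:\batch-logs\" -Context $ctx
+```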
++
+Examples of metrics in a Batch account are Pool Create Events, Low-Priority Node Count, and Task Complete Events. These metrics can help identify trends and can be used for data analysis.
+
+> [!NOTE]
+> Metrics emitted in the last 3 minutes might still be aggregating, so values might be underreported during this time frame. Metric delivery isn't guaranteed and might be affected by out-of-order delivery, data loss, or duplication.
+
+For a complete list of available metrics for Batch, see [Batch monitoring data reference](monitor-batch-reference.md#metrics).
++
+For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Batch, see [Batch monitoring data reference](monitor-batch-reference.md#resource-logs).
+
+You must explicitly enable diagnostic settings for each Batch account you want to monitor.
+
+For the Batch service, you can collect the following logs:
+
+- **ServiceLog**: [Events emitted by the Batch service](monitor-batch-reference.md#service-log-events) during the lifetime of an individual resource such as a pool or task.
+- **AllMetrics**: Metrics at the Batch account level.
+
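+As a hedged sketch, you can create an equivalent diagnostic setting from PowerShell, assuming the Az.Monitor module's `New-AzDiagnosticSetting` cmdlets. The setting name and resource IDs below are placeholders:
+
+```powershell
+# Placeholder resource IDs for the Batch account and the Log Analytics workspace
+$batchAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Batch/batchAccounts/<batch-account>"
+$workspaceId    = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
+
+# Send ServiceLog events and all metrics to the workspace
+$log    = New-AzDiagnosticSettingLogSettingsObject -Category ServiceLog -Enabled $true
+$metric = New-AzDiagnosticSettingMetricSettingsObject -Category AllMetrics -Enabled $true
+New-AzDiagnosticSetting -Name "BatchDiagnostics" -ResourceId $batchAccountId -WorkspaceId $workspaceId -Log $log -Metric $metric
+```
+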
+The following screenshot shows an example diagnostic setting that sends **allLogs** and **AllMetrics** to a Log Analytics workspace.
++
+When you create an Azure Batch pool, you can install any of the following monitoring-related extensions on the compute nodes to collect and analyze data:
+
+- [Azure Monitor agent for Linux](/azure/azure-monitor/agents/azure-monitor-agent-manage)
+- [Azure Monitor agent for Windows](/azure/azure-monitor/agents/azure-monitor-agent-manage)
+- [Azure Diagnostics extension for Windows VMs](/azure/virtual-machines/windows/extensions-diagnostics)
+- [Azure Monitor Logs analytics and monitoring extension for Linux](/azure/virtual-machines/extensions/oms-linux)
+- [Azure Monitor Logs analytics and monitoring extension for Windows](/azure/virtual-machines/extensions/oms-windows)
+
+For a comparison of the different extensions and agents and the data they collect, see [Compare agents](/azure/azure-monitor/agents/agents-overview#compare-to-legacy-agents).
++
+For Batch accounts specifically, the activity log collects events related to account creation and deletion and key management.
++
+When you analyze count-based Batch metrics like Dedicated Core Count or Low-Priority Node Count, use the **Avg** aggregation. For event-based metrics like Pool Resize Complete Events, use the **Count** aggregation. Avoid using the **Sum** aggregation, which adds up the values of all data points received over the period of the chart.
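+
+For instance, here's a minimal sketch of pulling both kinds of metrics with the Az.Monitor `Get-AzMetric` cmdlet. The resource ID is a placeholder, and the metric IDs (`CoreCount` for dedicated cores, `TaskCompleteEvent` for task completions) are assumptions; check the metrics reference for the exact names:
+
+```powershell
+$batchAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Batch/batchAccounts/<batch-account>"
+
+# Count-based metric: average the dedicated core count at one-minute grain
+Get-AzMetric -ResourceId $batchAccountId -MetricName "CoreCount" -TimeGrain 00:01:00 -AggregationType Average
+
+# Event-based metric: count task complete events at one-minute grain
+Get-AzMetric -ResourceId $batchAccountId -MetricName "TaskCompleteEvent" -TimeGrain 00:01:00 -AggregationType Count
+```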
+++
+### Sample queries
+
+Here are a few sample log queries for Batch:
+
+Pool resizes: Lists resize times by pool and result code (success or failure):
+
+```kusto
+AzureDiagnostics
+| where OperationName=="PoolResizeCompleteEvent"
+| summarize operationTimes=make_list(startTime_s) by poolName=id_s, resultCode=resultCode_s
+```
+
+Task durations: Gives the elapsed time of tasks in seconds, from task start to task complete.
+
+```kusto
+AzureDiagnostics
+| where OperationName=="TaskCompleteEvent"
+| extend taskId=id_s, ElapsedTime=datetime_diff('second', executionInfo_endTime_t, executionInfo_startTime_t) // For longer running tasks, consider changing 'second' to 'minute' or 'hour'
+| summarize taskList=make_list(taskId) by ElapsedTime
+```
+
+Failed tasks per job: Lists failed tasks by parent job.
+
+```kusto
+AzureDiagnostics
+| where OperationName=="TaskFailEvent"
+| summarize failedTaskList=make_list(id_s) by jobId=jobId_s, ResourceId
+```
+++
+### Batch alert rules
+
+Because metric delivery can be subject to inconsistencies such as out-of-order delivery, data loss, or duplication, you should avoid alerts that trigger on a single data point. Instead, use thresholds to account for these inconsistencies over a period of time.
+
+For example, you might want to configure a metric alert when your low priority core count falls to a certain level. You could then use this alert to adjust the composition of your pools. For best results, set a period of 10 or more minutes where the alert triggers if the average low priority core count falls lower than the threshold value for the entire period. This time period allows for metrics to aggregate so that you get more accurate results.
+
+The following table lists some alert rule triggers for Batch. These alert rules are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Batch monitoring data reference](monitor-batch-reference.md).
+
+| Alert type | Condition | Description |
+|:|:|:|
+| Metric | Unusable node count | Whenever the Unusable Node Count is greater than 0 |
+| Metric | Task Fail Events | Whenever the total number of Task Fail Events is greater than the dynamic threshold |
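+
+As a hedged sketch, a static-threshold variant of the first example can be created with the Az.Monitor cmdlets. The metric ID `UnusableNodeCount`, the rule name, resource group, and action group ID are assumptions:
+
+```powershell
+$batchAccountId = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Batch/batchAccounts/<batch-account>"
+
+# Fire when the average unusable node count over the evaluation window is greater than 0
+$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "UnusableNodeCount" -TimeAggregation Average -Operator GreaterThan -Threshold 0
+
+Add-AzMetricAlertRuleV2 -Name "UnusableNodes" -ResourceGroupName "<resource-group>" `
+    -TargetResourceId $batchAccountId -Condition $criteria `
+    -WindowSize 00:10:00 -Frequency 00:05:00 -Severity 2 `
+    -ActionGroupId "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/microsoft.insights/actionGroups/<action-group>"
+```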
++
+## Other Batch monitoring options
+
+[Batch Explorer](https://github.com/Azure/BatchExplorer) is a free, rich-featured, standalone client tool to help create, debug, and monitor Azure Batch applications. You can use [Azure Batch Insights](https://github.com/Azure/batch-insights) with Batch Explorer to get system statistics for your Batch nodes, such as virtual machine (VM) performance counters.
+
+In your Batch applications, you can use the [Batch .NET library](/dotnet/api/microsoft.azure.batch) to monitor or query the status of your resources including jobs, tasks, nodes, and pools. For example:
+
+- Monitor the [task state](/rest/api/batchservice/task/list#taskstate).
+- Monitor the [node state](/rest/api/batchservice/computenode/list#computenodestate).
+- Monitor the [pool state](/rest/api/batchservice/pool/get#poolstate).
+- Monitor [pool usage in the account](/rest/api/batchservice/pool/listusagemetrics).
+- Count [pool nodes by state](/rest/api/batchservice/account/listpoolnodecounts).
+
+You can use the Batch APIs to create list queries for Batch jobs, tasks, compute nodes, and other resources. For more information about how to filter list queries, see [Create queries to list Batch resources efficiently](batch-efficient-list-queries.md).
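+
+One way to run such a filtered list query from PowerShell is with the Az.Batch module, as in the following sketch; the account, resource group, and job names are assumptions:
+
+```powershell
+# Get a Batch account context that includes the account keys
+$context = Get-AzBatchAccountKey -AccountName "mybatchaccount" -ResourceGroupName "myresourcegroup"
+
+# List only completed tasks in a job, returning just their IDs and states (OData filter and select clauses)
+Get-AzBatchTask -JobId "myjob" -Filter "state eq 'completed'" -Select "id,state" -BatchContext $context
+```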
+
+Or, instead of potentially time-consuming list queries that return detailed information about large collections of tasks or nodes, you can use the [Get Task Counts](/rest/api/batchservice/job/gettaskcounts) and [List Pool Node Counts](/rest/api/batchservice/account/listpoolnodecounts) operations to get counts for Batch tasks and compute nodes. For more information, see [Monitor Batch solutions by counting tasks and nodes by state](batch-get-resource-counts.md).
+
+You can integrate Application Insights with your Azure Batch applications to instrument your code with custom metrics and tracing. For a detailed walkthrough of how to add Application Insights to a Batch .NET solution, instrument application code, monitor the application in the Azure portal, and build custom dashboards, see [Monitor and debug an Azure Batch .NET application with Application Insights](monitor-application-insights.md) and accompanying [code sample](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ApplicationInsights).
+
+## Related content
+
+- See [Batch monitoring data reference](monitor-batch-reference.md) for a reference of the metrics, logs, and other important values created for Batch.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
+- Learn about the [Batch APIs and tools](batch-apis-tools.md) available for building Batch solutions.
batch Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/monitoring-overview.md
- Title: Monitor Azure Batch
-description: Learn about Azure monitoring services, metrics, diagnostic logs, and other monitoring features for Azure Batch.
- Previously updated : 08/23/2021--
-# Monitor Batch solutions
-
-[Azure Monitor](../azure-monitor/overview.md) and the Batch service provide a range of services, tools, and APIs to monitor your Batch solutions. This overview article helps you choose a monitoring approach that fits your needs.
-
-## Subscription-level monitoring
-
-At the subscription level, which includes Batch accounts, the [Azure activity log](../azure-monitor/essentials/activity-log.md) collects operational event data in several categories.
-
-For Batch accounts specifically, the activity log collects events related to account creation and deletion and key management.
-
-You can view the activity log in the Azure portal, or query for events using the Azure CLI, PowerShell cmdlets, or the Azure Monitor REST API. You can also export the activity log, or configure [activity log alerts](../azure-monitor/alerts/alerts-activity-log.md).
-
-## Batch account-level monitoring
-
-Monitor each Batch account using features of [Azure Monitor](../azure-monitor/overview.md). Azure Monitor collects [metrics](../azure-monitor/essentials/data-platform-metrics.md) and optionally [resource logs](../azure-monitor/essentials/resource-logs.md) for resources within a Batch account, such as pools, jobs, and tasks. Collect and consume this data manually or programmatically to monitor activities in your Batch account and to diagnose issues. For more information, see [Batch metrics, alerts, and logs for diagnostic evaluation and monitoring](batch-diagnostics.md).
-
-> [!NOTE]
-> Metrics are available by default in your Batch account without additional configuration, and they have a 30-day rolling history. You must create a diagnostic setting for a Batch account in order to send its resource logs to a Log Analytics workspace, and you may incur additional costs to store or process resource log data.
-
-## Batch resource monitoring
-
-In your Batch applications, use the Batch APIs to monitor or query the status of your resources including jobs, tasks, nodes, and pools. For example:
--- [Count tasks and compute nodes by state](batch-get-resource-counts.md)-- [Create queries to list Batch resources efficiently](batch-efficient-list-queries.md)-- [Create task dependencies](batch-task-dependencies.md)-- Use a [job manager task](/rest/api/batchservice/job/add#jobmanagertask)-- Monitor the [task state](/rest/api/batchservice/task/list#taskstate)-- Monitor the [node state](/rest/api/batchservice/computenode/list#computenodestate)-- Monitor the [pool state](/rest/api/batchservice/pool/get#poolstate)-- Monitor [pool usage in the account](/rest/api/batchservice/pool/listusagemetrics)-- Count [pool nodes by state](/rest/api/batchservice/account/listpoolnodecounts)-
-## Additional monitoring solutions
-
-Use [Application Insights](../azure-monitor/app/app-insights-overview.md) to programmatically monitor the availability, performance, and usage of your Batch jobs and tasks. Application Insights lets you monitor performance counters from compute nodes (VMs) and retrieve custom information for the tasks that run on them.
-
-For an example, see [Monitor and debug a Batch .NET application with Application Insights](monitor-application-insights.md) and the accompanying [code sample](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ApplicationInsights).
-
-> [!NOTE]
-> You may incur additional costs to use Application Insights. See the [pricing information](https://azure.microsoft.com/pricing/details/application-insights/).
-
-[Batch Explorer](https://github.com/Azure/BatchExplorer) is a free, rich-featured, standalone client tool to help create, debug, and monitor Azure Batch applications. Download an [installation package](https://azure.github.io/BatchExplorer/) for Mac, Linux, or Windows. Optionally, use [Azure Batch Insights](https://github.com/Azure/batch-insights) to get system statistics for your Batch nodes, such as VM performance counters, in Batch Explorer.
-
-## Next steps
--- Learn about the [Batch APIs and tools](batch-apis-tools.md) available for building Batch solutions.-- Learn more about [diagnostic logging](batch-diagnostics.md) with Batch.
business-continuity-center Manage Protection Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/manage-protection-policy.md
Title: Manage protection policy for resources description: In this article, you learn how to manage backup and replication policies to protect your resources. Previously updated : 11/15/2023 Last updated : 03/29/2024 - ignite-2023
Follow these steps:
- **Non-Azure resources**: resources not managed by Azure 9. You can use **Select columns** to add or remove columns. :::image type="content" source="./media/manage-protection-policy/select-column.png" alt-text="Screenshot showing *select columns* option." lightbox="./media/manage-protection-policy/select-column.png":::+
+ You can also query information for your backup and replication policies at no additional cost using Azure Resource Graph (ARG). ARG is an Azure service designed to extend Azure Resource Management. It aims to provide efficient resource exploration with the ability to query at scale across a given set of subscriptions.
+ To get started with querying information for your backup and replication policies using ARG, you can use the sample query provided by selecting **Open query**.
+
+ :::image type="content" source="./media/manage-protection-policy/query-for-backup-and-replication-policies.png" alt-text="Screenshot shows how to check for queries to view backup and replication policies." lightbox="./media/manage-protection-policy/query-for-backup-and-replication-policies.png":::
+ ## Next steps - [Configure protection](./tutorial-configure-protection-datasource.md)
business-continuity-center Manage Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/manage-vault.md
Title: Manage vault lifecycle used for Azure Backup and Azure Site Recovery description: In this article, you'll learn how to manage the lifecycle of the vaults (Recovery Services and Backup vault) used for Azure Backup and/or Azure Site Recovery. Previously updated : 11/15/2023 Last updated : 03/29/2024 - ignite-2023
Follow these steps:
7. You can use **Select columns** to add or remove columns. :::image type="content" source="./media/manage-vault/select-columns.png" alt-text="Screenshot showing *select columns* option." lightbox="./media/manage-vault/select-columns.png":::
+You can also query information for your vaults at no additional cost using Azure Resource Graph (ARG). ARG is an Azure service designed to extend Azure Resource Management. It aims to provide efficient resource exploration with the ability to query at scale across a given set of subscriptions.
+
+To get started with querying information for your vaults using ARG, you can use the sample query provided by selecting **Open query**.
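+
+If you prefer to script the query, here's a minimal sketch that uses the Az.ResourceGraph module's `Search-AzGraph` cmdlet. The projected columns are illustrative, and the query is an assumption rather than the exact query that **Open query** opens:
+
+```powershell
+# List Recovery Services vaults and Backup vaults across the subscriptions you can access
+Search-AzGraph -Query "resources | where type in~ ('microsoft.recoveryservices/vaults', 'microsoft.dataprotection/backupvaults') | project name, type, location, resourceGroup, subscriptionId"
+```
+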
+ ## Modify security level
business-continuity-center Tutorial View Protectable Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-view-protectable-resources.md
Title: Tutorial - View protectable resources description: In this tutorial, learn how to view your resources that are currently not protected by any solution using Azure Business Continuity center. Previously updated : 11/15/2023 Last updated : 03/29/2024 - ignite-2023
In this view, you can see a list of all the resources which are not protected by
> [!NOTE] > Currently, you can only view the unprotected Azure resources under **Protectable resources**.
-
+
+You can also query information on your protectable Azure resources at no additional cost using Azure Resource Graph (ARG). ARG is an Azure service designed to extend Azure Resource Management. It aims to provide efficient resource exploration with the ability to query at scale across a given set of subscriptions.
+
+To get started with querying your protectable Azure resources using ARG, you can use the sample query provided by selecting **Open query**.
+ ## Customize the view
-By default, only Azure Virtual machines are shown in the **Protectable resources** list.You can change the filters to view other resources.
+By default, only Azure Virtual machines are shown in the **Protectable resources** list. You can change the filters to view other resources.
- To look for specific resources, you can use various filters, such as subscriptions, resource groups, location, and resource type, and more. :::image type="content" source="./media/tutorial-view-protectable-resources/filter.png" alt-text="Screenshot showing the filtering options." lightbox="./media/tutorial-view-protectable-resources/filter.png":::
business-continuity-center Tutorial View Protected Items And Perform Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/business-continuity-center/tutorial-view-protected-items-and-perform-actions.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 03/29/2024
Follow these steps to view your protected items:
:::image type="content" source="./media/tutorial-view-protected-items-and-perform-actions/protected-items-retention-table.png" alt-text="Screenshot shows the protected items in the retention table." lightbox="./media/tutorial-view-protected-items-and-perform-actions/protected-items-retention-table.png":::
+You can also query information on protection for your resources at no additional cost using Azure Resource Graph (ARG). ARG is an Azure service designed to extend Azure Resource Management. It aims to provide efficient resource exploration with the ability to query at scale across a given set of subscriptions.
+
+To get started with querying information on protection for your resources using ARG, you can use the sample query provided, by selecting **Open query**.
++ ## View Protected item details To view additional details for a specific protected item, follow these steps:
cdn Cdn Custom Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-custom-ssl.md
# Tutorial: Configure HTTPS on an Azure CDN custom domain
-This tutorial shows how to enable the HTTPS protocol for a custom domain that's associated with an Azure CDN endpoint.
+This tutorial shows how to enable the HTTPS protocol for a custom domain associated with an Azure CDN endpoint.
-The HTTPS protocol on your custom domain (for example, HTTPS:\//www.contoso.com), ensures your sensitive data is delivered securely via TLS/SSL. When your web browser is connected via HTTPS, the browser validates the web site's certificate. The browser verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
+The HTTPS protocol on your custom domain (for example, `https://www.contoso.com`) ensures that your sensitive data is delivered securely via TLS/SSL. When your web browser is connected via HTTPS, the browser validates the website's certificate. The browser verifies it's issued by a legitimate certificate authority. This process provides security and protects your web applications from attacks.
-Azure CDN supports HTTPS on a CDN endpoint hostname, by default. For example, if you create a CDN endpoint (such as HTTPS:\//Contoso.azureedge.net), HTTPS is automatically enabled.
+Azure CDN supports HTTPS on a CDN endpoint hostname, by default. For example, if you create a CDN endpoint (such as `https://contoso.azureedge.net`), HTTPS is automatically enabled.
Some of the key attributes of the custom HTTPS feature are:
To enable HTTPS on an Azure CDN custom domain, you use a TLS/SSL certificate. Yo
Azure CDN handles certificate management tasks such as procurement and renewal. After you enable the feature, the process starts immediately.
-If the custom domain is already mapped to the CDN endpoint, no further action is needed. Azure CDN will process the steps and complete your request automatically.
+If the custom domain is already mapped to the CDN endpoint, no further action is needed. Azure CDN processes the steps and completes your request automatically.
If your custom domain is mapped elsewhere, use email to validate your domain ownership.
To enable HTTPS on a custom domain, follow these steps:
> This option is available only with **Azure CDN from Microsoft** and **Azure CDN from Edgio** profiles. >
-You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure CDN uses this secure mechanism to get your certificate and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a non-allowed CA, your request will be rejected. If a certificate without complete chain is presented, the requests which involve that certificate are not guaranteed to work as expected. For Azure CDN from Edgio, any valid CA will be accepted.
+You can use your own certificate to enable the HTTPS feature. This process is done through an integration with Azure Key Vault, which allows you to store your certificates securely. Azure CDN uses this secure mechanism to get your certificate, and it requires a few extra steps. When you create your TLS/SSL certificate, you must create a complete certificate chain with an allowed certificate authority (CA) that is part of the [Microsoft Trusted CA List](https://ccadb-public.secure.force.com/microsoft/IncludedCACertificateReportForMSFT). If you use a nonallowed CA, your request is rejected. If a certificate without a complete chain is presented, requests that involve that certificate aren't guaranteed to work as expected. For Azure CDN from Edgio, any valid CA is accepted.
### Prepare your Azure Key Vault account and certificate
Your CNAME record should be in the following format:
For more information about CNAME records, see [Create the CNAME DNS record](./cdn-map-content-to-custom-domain.md).
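If your DNS zone is hosted in Azure DNS, you can also create the CNAME record from the command line. The following is an illustrative sketch only; the zone, record, and endpoint names are placeholders for your own values.

```azurecli
# Illustrative sketch: create a cdnverify CNAME record in an Azure DNS zone.
az network dns record-set cname set-record \
  --resource-group <RESOURCE_GROUP> \
  --zone-name contoso.com \
  --record-set-name cdnverify.www \
  --cname cdnverify.contoso.azureedge.net
```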
-If your CNAME record is in the correct format, DigiCert automatically verifies your custom domain name and creates a certificate for your domain. DigitCert won't send you a verification email and you won't need to approve your request. The certificate is valid for one year and will be autorenewed before it expires. Continue to [Wait for propagation](#wait-for-propagation).
+If your CNAME record is in the correct format, DigiCert automatically verifies your custom domain name and creates a certificate for your domain. DigiCert doesn't send you a verification email and you don't need to approve your request. The certificate is valid for one year and is autorenewed before it expires. Continue to [Wait for propagation](#wait-for-propagation).
Automatic validation typically takes a few hours. If you don't see your domain validated in 24 hours, open a support ticket.
After the domain name is validated, it can take up to 6-8 hours for the custom d
### Operation progress
-The following table shows the operation progress that occurs when you enable HTTPS. After you enable HTTPS, four operation steps appear in the custom domain dialog. As each step becomes active, other substep details appear under the step as it progresses. Not all of these substeps will occur. After a step successfully completes, a green check mark appears next to it.
+The following table shows the operation progress that occurs when you enable HTTPS. After you enable HTTPS, four operation steps appear in the custom domain dialog. As each step becomes active, other substep details appear under the step as it progresses. Not all of these substeps occur. After a step successfully completes, a green check mark appears next to it.
| Operation step | Operation substep details | | | | | 1 Submitting request | Submitting request | | | Your HTTPS request is being submitted. | | | Your HTTPS request has been submitted successfully. |
-| 2 Domain validation | Domain is automatically validated if it's CNAME mapped to the CDN Endpoint. Otherwise, a verification request will be sent to the email listed in your domain's registration record (WHOIS registrant).|
+| 2 Domain validation | Domain is automatically validated if it's CNAME mapped to the CDN Endpoint. Otherwise, a verification request is sent to the email listed in your domain's registration record (WHOIS registrant).|
| | Your domain ownership has been successfully validated. | | | Domain ownership validation request expired (customer likely didn't respond within 6 days). HTTPS won't be enabled on your domain. * | | | Domain ownership validation request was rejected by the customer. HTTPS won't be enabled on your domain. * |
The following table shows the operation progress that occurs when you disable HT
| 2 Certificate deprovisioning | Deleting certificate | | 3 Complete | Certificate deleted |
+#### Certificate auto rotation with Azure CDN from Edgio
+
+Managed certificates from Azure Key Vault can utilize the certificate autorotate feature, allowing Azure CDN from Edgio to automatically retrieve updated certificates and propagate them to the Edgio CDN platform. To enable this feature:
+
+1. Register Azure CDN as an application within your Microsoft Entra ID.
+
+1. Authorize the Azure CDN service to access the secrets in your Key Vault. Navigate to "Access policies" within your Key Vault to add a new policy, then grant the **Microsoft.AzureFrontDoor-Cdn** service principal a **Get secrets** permission. An Azure CLI sketch of this step follows the list.
+
+1. Set the certificate version to **Latest** under the **Certificate management type** within the **Custom domain** menu. If a specific version of the certificate is selected, manual updates are required.
+
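+The following Azure CLI sketch illustrates step 2 for a Key Vault that uses access policies. It's an example under assumptions, not the only way to grant access: `<KEY_VAULT_NAME>` is a placeholder, and `<APP_ID>` stands for the application ID of the **Microsoft.AzureFrontDoor-Cdn** service principal in your tenant.
+
+```azurecli
+# Sketch only: allow the Azure CDN service principal to read certificate secrets from Key Vault.
+az keyvault set-policy \
+  --name <KEY_VAULT_NAME> \
+  --spn <APP_ID> \
+  --secret-permissions get
+```
+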
+> [!NOTE]
+> * Be aware that it can take up to 24 hours for the certificate auto-rotate to fully complete the propagation of the new certificate.
+> * If a certificate covers multiple custom domains, you must enable certificate auto-rotate on all the custom domains that share the certificate to ensure correct operation. Otherwise, the Edgio platform might serve an incorrect version of the certificate for a custom domain that doesn't have this feature enabled.
+ ## Frequently asked questions 1. *Who is the certificate provider and what type of certificate is used?*
The following table shows the operation progress that occurs when you disable HT
6. *On June 20, 2018, Azure CDN from Edgio started using a dedicated certificate with SNI TLS/SSL by default. What happens to my existing custom domains using Subject Alternative Names (SAN) certificate and IP-based TLS/SSL?*
- Your existing domains will be gradually migrated to single certificate in the upcoming months if Microsoft analyzes that only SNI client requests are made to your application.
+ Your existing domains are gradually migrated to single certificate in the upcoming months if Microsoft analyzes that only SNI client requests are made to your application.
If non-SNI clients are detected, your domains stay in the SAN certificate with IP-based TLS/SSL. Requests to your service or clients that are non-SNI, are unaffected.
cloud-services-extended-support Deploy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-portal.md
Title: Deploy a Azure Cloud Service (extended support) - Azure portal
+ Title: Deploy Azure Cloud Services (extended support) - Azure portal
description: Deploy an Azure Cloud Service (extended support) using the Azure portal
Last updated 10/13/2020
-# Deploy a Azure Cloud Services (extended support) using the Azure portal
+# Deploy Azure Cloud Services (extended support) using the Azure portal
This article explains how to use the Azure portal to create a Cloud Service (extended support) deployment. ## Before you begin
communication-services Subscribe Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/router-sdk/subscribe-events.md
dotnet run
| [`RouterWorkerOfferRevoked`](#microsoftcommunicationrouterworkerofferrevoked) | `Worker` | An offer to a worker was revoked | | [`RouterWorkerOfferExpired`](#microsoftcommunicationrouterworkerofferexpired) | `Worker` | An offer to a worker has expired | | [`RouterWorkerRegistered`](#microsoftcommunicationrouterworkerregistered) | `Worker` | A worker has been registered (status changed from inactive/draining to active) |
+| [`RouterWorkerUpdated`](#microsoftcommunicationrouterworkerupdated) | `Worker` | One of the following worker properties has been updated: `AvailableForOffers`, `TotalCapacity`, `QueueAssignments`, `ChannelConfigurations`, `Labels`, `Tags` |
| [`RouterWorkerDeregistered`](#microsoftcommunicationrouterworkerderegistered) | `Worker` | A worker has been deregistered (status changed from active to inactive/draining) | ### Microsoft.Communication.RouterJobReceived
dotnet run
| channelConfigurations| `List<ChannelConfiguration>` | ❌ | | tags | `Dictionary<string, object>` | ✔️ | | Based on user input
+### Microsoft.Communication.RouterWorkerUpdated
+
+[Back to Event Catalog](#events-catalog)
+
+```json
+{
+ "id": "1027db4a-17fe-4a7f-ae67-276c3120a29f",
+ "topic": "/subscriptions/{subscription-id}/resourceGroups/{group-name}/providers/Microsoft.Communication/communicationServices/{communication-services-resource-name}",
+ "subject": "worker/{worker-id}",
+ "data": {
+ "workerId": "worker3",
+ "availableForOffers": true,
+ "totalCapacity": 100,
+ "queueAssignments": [
+ {
+ "id": "MyQueueId2",
+ "name": "Queue 3",
+ "labels": {
+ "Language": "en",
+ "Product": "Office",
+ "Geo": "NA"
+ }
+ }
+ ],
+ "labels": {
+ "x": "111",
+ "y": "111"
+ },
+ "channelConfigurations": [
+ {
+ "channelId": "FooVoiceChannelId",
+ "capacityCostPerJob": 10,
+ "maxNumberOfJobs": 5
+ }
+ ],
+ "tags": {
+ "Locale": "en-us",
+ "Segment": "Enterprise",
+ "Token": "FooToken"
+ },
+ "updatedWorkerProperties": [
+ "TotalCapacity",
+ "Labels",
+ "Tags",
+ "ChannelConfigurations",
+ "AvailableForOffers",
+ "QueueAssignments"
+ ]
+ },
+ "eventType": "Microsoft.Communication.RouterWorkerUpdated",
+ "dataVersion": "1.0",
+ "metadataVersion": "1",
+ "eventTime": "2022-02-17T00:55:25.1736293Z"
+}
+```
+
+#### Attribute list
+
+| Attribute | Type | Nullable | Description | Notes |
+|: |:--:|:-:|-|-|
+| workerId | `string` | ❌ |
+| totalCapacity | `int` | ❌ |
+| queueAssignments | `List<QueueDetails>` | ❌ |
+| labels | `Dictionary<string, object>` | ✔️ | | Based on user input
+| channelConfigurations| `List<ChannelConfiguration>` | ❌ |
+| tags | `Dictionary<string, object>` | ✔️ | | Based on user input
+| updatedWorkerProperties | `List<UpdateWorkerProperty>` | ❌ | Worker Properties updated including AvailableForOffers, QueueAssignments, ChannelConfigurations, TotalCapacity, Labels, and Tags
+ ### Microsoft.Communication.RouterWorkerDeregistered [Back to Event Catalog](#events-catalog)
public class ChannelConfiguration
} ```
+### UpdatedWorkerProperty
+
+```csharp
+public enum UpdatedWorkerProperty
+{
+ AvailableForOffers,
+ Capacity,
+ QueueAssignments,
+ Labels,
+ Tags,
+ ChannelConfigurations
+}
+```
+ ### WorkerSelector ```csharp
container-apps Certificates Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/certificates-overview.md
+
+ Title: Certificates in Azure Container Apps
+description: Learn about the options available for using and managing secure certificates in Azure Container Apps.
++++ Last updated : 03/28/2024+++
+# Certificates in Azure Container Apps
+
+You can add digital security certificates to secure custom DNS names in Azure Container Apps to support secure communication among your apps.
+
+## Options
+
+The following table lists the options available to add certificates in Container Apps:
+
+| Option | Description |
+|||
+| [Create a free Azure Container Apps managed certificate](./custom-domains-managed-certificates.md) | A private certificate that's free of charge and easy to use if you just need to secure your custom domain in Container Apps. |
+| Import a certificate from Key Vault | Useful if you use [Azure Key Vault](../key-vault/index.yml) to manage your [PKCS12 certificates](https://wikipedia.org/wiki/PKCS_12). |
+| [Upload a private certificate](./custom-domains-certificates.md) | You can upload a private certificate if you already have one. |
+
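+As a rough illustration of the managed certificate option, binding a custom hostname with the Azure CLI requests and attaches a free managed certificate. The following is a sketch only; command availability and flags can vary by CLI version, and the names are placeholders.
+
+```azurecli
+# Sketch only: bind a custom domain to a container app and let Container Apps manage the certificate.
+az containerapp hostname bind \
+  --name <APP_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --environment <ENVIRONMENT_NAME> \
+  --hostname www.contoso.com
+```
+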
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Set up custom domain with existing certificate](custom-domains-certificates.md)
container-apps Opentelemetry Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/opentelemetry-agents.md
+
+ Title: Collect and read OpenTelemetry data in Azure Container Apps (preview)
+description: Learn to record and query data collected using OpenTelemetry in Azure Container Apps.
+++ Last updated : 03/08/2024++++
+# Collect and read OpenTelemetry data in Azure Container Apps (preview)
+
+Using an [OpenTelemetry](https://opentelemetry.io/) data agent with your Azure Container Apps environment, you can choose to send observability data in an OpenTelemetry format by:
+
+- Piping data from an agent into a desired endpoint. Destination options include Azure Monitor Application Insights, Datadog, and any OpenTelemetry Protocol (OTLP)-compatible endpoint.
+
+- Easily changing destination endpoints without having to reconfigure how they emit data, and without having to manually run an OpenTelemetry agent.
+
+This article shows you how to set up and configure an OpenTelemetry agent for your container app.
+
+## Configure an OpenTelemetry agent
+
+OpenTelemetry agents live within your container app environment. You configure agent settings via an ARM template or Bicep calls to the environment, or through the CLI.
+
+Each endpoint type (Azure Monitor Application Insights, DataDog, and OTLP) has specific configuration requirements.
++
+## Prerequisites
+
+Enabling the managed OpenTelemetry agent in your environment doesn't automatically mean the agent collects data. Agents only send data based on your configuration settings and when your code is instrumented correctly.
+
+### Configure source code
+
+Prepare your application to collect data by installing the [OpenTelemetry SDK](https://opentelemetry.io/ecosystem/integrations/) and following the OpenTelemetry guidelines to instrument [metrics](https://opentelemetry.io/docs/concepts/signals/metrics/), [logs](https://opentelemetry.io/docs/concepts/signals/logs/), or [traces](https://opentelemetry.io/docs/concepts/signals/traces/).
+
+### Initialize endpoints
+
+Before you can send data to a collection destination, you first need to create an instance of the destination service. For example, if you want to send data to Azure Monitor Application Insights, you need to create an Application Insights instance ahead of time.
+
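+For example, an Application Insights destination could be created ahead of time with the Azure CLI. This is a sketch under assumptions: it uses the `application-insights` CLI extension, a workspace-based resource, and placeholder names.
+
+```azurecli
+# Sketch only: create an Application Insights resource and read its connection string for the agent configuration.
+az extension add --name application-insights
+az monitor app-insights component create \
+  --app <APP_INSIGHTS_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --location <LOCATION> \
+  --workspace <LOG_ANALYTICS_WORKSPACE_RESOURCE_ID>
+az monitor app-insights component show \
+  --app <APP_INSIGHTS_NAME> \
+  --resource-group <RESOURCE_GROUP> \
+  --query connectionString --output tsv
+```
+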
+The managed OpenTelemetry agent accepts the following destinations:
+
+- Azure Monitor Application Insights
+- Datadog
+- Any OTLP endpoint (For example: New Relic or Honeycomb)
+
+The following table shows you what type of data you can send to each destination:
+
+| Destination | Logs | Metrics | Traces |
+||||--|
+| [Azure App Insights](/azure/azure-monitor/app/app-insights-overview) | Yes | Yes | Yes |
+| [Datadog](https://datadoghq.com/) | No | Yes | Yes |
+| [OpenTelemetry](https://opentelemetry.io/) protocol (OTLP) configured endpoint | Yes | Yes | Yes |
+
+## Azure Monitor Application Insights
+
+The only configuration detail required from Application Insights is the connection string. Once you have the connection string, you can configure the agent via your container app's ARM template or with Azure CLI commands.
+
+# [ARM template](#tab/arm)
+
+Before you deploy this template, replace placeholders surrounded by `<>` with your values.
+
+```json
+{
+ ...
+ "properties": {
+ "appInsightsConfiguration ": {ΓÇ»
+ "connectionString": "<YOUR_APP_INSIGHTS_CONNECTION_STRING>"
+ }
+ "openTelemetryConfiguration": {
+ ...
+ "tracesConfiguration":{
+ "destinations": ["appInsights"]
+ },
+ "logsConfiguration": {
+ "destinations": ["apInsights"]
+ }
+ }
+ }
+}
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+Before you run this command, replace placeholders surrounded by `<>` with your values.
+
+```azurecli
+az containerapp env telemetry app-insights set \
+ --connection-string <YOUR_APP_INSIGHTS_CONNECTION_STRING> \
+ --EnableOpenTelemetryTraces true \
+ --EnableOpenTelemetryLogs true
+```
+++
+## Datadog
+
+The Datadog agent configuration requires a value for `site` and `key` from your Datadog instance. Gather these values from your Datadog instance according to this table:
+
+| Datadog agent property | Container Apps configuration property |
+|||
+| `DD_SITE` | `site` |
+| `DD_API_KEY` | `key` |
+
+Once you have these configuration details, you can configure the agent via your container app's ARM template or with Azure CLI commands.
+
+# [ARM template](#tab/arm)
+
+Before you deploy this template, replace placeholders surrounded by `<>` with your values.
+
+```json
+{
+ ...
+ "properties": {
+ ...
+ "openTelemetryConfiguration": {
+ ...
+ "destinationsConfiguration":{
+ ...
+ "dataDogConfiguration":{
+ "site": "<YOUR_DATADOG_SUBDOMAIN>.datadoghq.com",
+ "key": "<YOUR_DATADOG_KEY>"
+ }
+ },
+ "tracesConfiguration":{
+ "destinations": ["dataDog"]
+ },
+ "metricsConfiguration": {
+ "destinations": ["dataDog"]
+ }
+ }
+ }
+}
+```
++
+# [Azure CLI](#tab/azure-cli)
+
+Before you run this command, replace placeholders surrounded by `<>` with your values.
+
+```azurecli
+az containerapp env telemetry data-dog set \
+ --site "<YOUR_DATADOG_SUBDOMAIN>.datadoghq.com" \
+ --key <YOUR_DATADOG_KEY> \
+ --EnableOpenTelemetryTraces true \
+ --EnableOpenTelemetryMetrics true
+```
+++
+## OTLP endpoint
+
+An OpenTelemetry protocol (OTLP) endpoint is a telemetry data destination that consumes OpenTelemetry data. In your application configuration, you can add multiple OTLP endpoints. The following example adds two endpoints and sends the following data to these endpoints.
+
+| Endpoint name | Data sent to endpoint |
+|||
+| `otlp1` | Metrics and/or traces |
+| `otlp2` | Logs and/or traces |
+
+While you can set up as many OTLP-configured endpoints as you like, each endpoint must have a distinct name.
+
+# [ARM template](#tab/arm)
+
+```json
+{
+ "properties": {
+ "appInsightsConfiguration": {},
+ "openTelemetryConfiguration": {
+ "destinationsConfiguration":{
+ "otlpConfiguration": [
+ {
+ "name": "otlp1",
+ "endpoint": "ENDPOINT_URL_1",
+ "insecure": false,
+ "headers": "api-key-1=key"
+ },
+ {
+ "name": "otlp2",
+ "endpoint": "ENDPOINT_URL_2",
+ "insecure": true
+ }
+ ]
+ },
+ "logsConfiguration": {
+ "destinations": ["otlp2"]
+ },
+ "tracesConfiguration":{
+ "destinations": ["otlp1", "otlp2"]
+ },
+ "metricsConfiguration": {
+ "destinations": ["otlp1"]
+ }
+ }
+ }
+}
+
+```
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az containerapp env telemetry otlp add \
+ --name "otlp1"
+ --endpoint "ENDPOINT_URL_1" \
+ --insecure false \
+ --headers "api-key-1=key" \
+ --EnableOpenTelemetryTraces true \
+ --EnableOpenTelemetryMetrics true
+az containerapp env telemetry otlp add \
+ --name "otlp2"
+ --endpoint "ENDPOINT_URL_2" \
+ --insecure true \
+ --EnableOpenTelemetryTraces true \
+ --EnableOpenTelemetryLogs true
+```
+++
+| Name | Description |
+|||
+| `name` | A name you select to identify your OTLP-configured endpoint. |
+| `endpoint` | The URL of the destination that receives collected data. |
+| `insecure` | Default true. Defines whether to enable client transport security for the exporter's gRPC connection. If false, the `headers` parameter is required. |
+| `headers` | Space-separated values, in 'key=value' format, that provide required information for the OTLP endpoints' security. Example: `"api-key=key other-config-value=value"`. |
+
+## Configure Data Destinations
+
+To configure an agent, use the `destinations` array to define which destinations your application sends data to. Valid keys are `appInsights`, `dataDog`, or the name of your custom OTLP endpoint. You can control how an agent behaves based on data type and endpoint-related options.
+
+### By data type
+
+| Option | Example |
+|||
+| Select a data type. | You can configure logs, metrics, and/or traces individually. |
+| Enable or disable any data type. | You can choose to send only traces and no other data. |
+| Send one data type to multiple endpoints. | You can send logs to both DataDog and an OTLP-configured endpoint. |
+| Send different data types to different locations. | You can send traces to an OTLP endpoint and metrics to DataDog. |
+| Disable sending all data types. | You can choose to not send any data through the OpenTelemetry agent. |
+
+### By endpoint
+
+- You can set up only one Application Insights endpoint and one Datadog endpoint at a time.
+- While you can define more than one OTLP-configured endpoint, each one must have a distinct name.
++
+The following example shows how to use an OTLP endpoint named `customDashboard`. It sends:
+- traces to app insights and `customDashboard`
+- logs to app insights and `customDashboard`
+- metrics to DataDog and `customDashboard`
+
+```json
+{
+ ...
+ "properties": {
+ ...
+ "openTelemetryConfiguration": {
+ ...
+ "tracesConfiguration": {
+ "destinations": [
+ "appInsights",
+ "customDashboard"
+ ]
+ },
+ "logsConfiguration": {
+ "destinations": [
+ "appInsights",
+ "customDashboard"
+ ]
+ },
+ "metricsConfiguration": {
+ "destinations": [
+ "dataDog",
+ "customDashboard"
+ ]
+ }
+ }
+ }
+}
+```
+
+## Example OpenTelemetry configuration
+
+The following example ARM template shows how you might configure your container app to collect telemetry data using Azure Monitor Application Insights, Datadog, and a custom OTLP-configured endpoint named `customDashboard`.
+
+Before you deploy this template, replace placeholders surrounded by `<>` with your values.
+
+```json
+{
+ "location": "eastus",
+ "properties": {
+ "appInsightsConfiguration": {
+ "connectionString": "<APP_INSIGHTS_CONNECTION_STRING>"
+ },
+ "openTelemetryConfiguration": {
+ "destinationsConfiguration": {
+ "dataDogConfiguration": {
+ "site": "datadoghq.com",
+ "key": "<YOUR_DATADOG_KEY>"
+ },
+ "otlpConfigurations": [
+ {
+ "name": "customDashboard",
+ "endpoint": "<OTLP_ENDPOINT_URL>",
+ "insecure": true
+ }
+ ]
+ },
+ "tracesConfiguration": {
+ "destinations": [
+ "appInsights",
+ "customDashboard"
+ ]
+ },
+ "logsConfiguration": {
+ "destinations": [
+ "appInsights",
+ "customDashboard"
+ ]
+ },
+ "metricsConfiguration": {
+ "destinations": [
+ "dataDog",
+ "customDashboard"
+ ]
+ }
+ }
+ }
+}
+```
+
+## Environment variables
+
+The OpenTelemetry agent automatically injects a set of environment variables into your application at runtime.
+
+The first two environment variables follow standard OpenTelemetry exporter configuration and are used in OTLP standard software development kits. If you explicitly set the environment variable in the container app specification, your value overwrites the automatically injected value.
+
+To learn more about OTLP exporter configuration, see [OTLP Exporter Configuration](https://opentelemetry.io/docs/languages/sdk-configuration/otlp-exporter/).
+
+| Name | Description |
+|||
+| `OTEL_EXPORTER_OTLP_ENDPOINT` | A base endpoint URL for any signal type, with an optionally specified port number. This setting is helpful when you're sending more than one signal to the same endpoint and want one environment variable to control the endpoint. Example: `http://otel.service.k8se-apps:4317/` |
+| `OTEL_EXPORTER_OTLP_PROTOCOL` | Specifies the OTLP transport protocol used for all telemetry data. The managed agent only supports `grpc`. Value: `grpc`. |
+
+The other three environment variables are specific to Azure Container Apps, and are always injected. These variables hold the agent's endpoint URLs for each specific data type (logs, metrics, traces).
+
+These variables are only necessary if you're using both the managed OpenTelemetry agent and another OpenTelemetry agent. Using these variables gives you control over how to route data between the different OpenTelemetry agents.
+
+| Name | Description | Example |
+||||
+| `CONTAINERAPP_OTEL_TRACING_GRPC_ENDPOINT` | Endpoint URL for trace data only. | `http://otel.service.k8se-apps:43178/v1/traces/` |
+| `CONTAINERAPP_OTEL_LOGGING_GRPC_ENDPOINT` | Endpoint URL for log data only. | `http://otel.service.k8se-apps:43178/v1/logs/ ` |
+| `CONTAINERAPP_OTEL_METRIC_GRPC_ENDPOINT` | Endpoint URL for metric data only. | `http://otel.service.k8se-apps:43178/v1/metrics/` |
+
+## OpenTelemetry agent costs
+
+You are [billed](./billing.md) for the underlying compute of the agent.
+
+See the destination service for their billing structure and terms. For example, if you send data to both Azure Monitor Application Insights and Datadog, you're responsible for the charges applied by both services.
+
+## Known limitations
+
+- OpenTelemetry agents are in preview.
+- System data, such as system logs or Container Apps standard metrics, isn't available to be sent to the OpenTelemetry agent.
+- The Application Insights endpoint doesn't accept metrics.
+- The Datadog endpoint doesn't accept logs.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about monitoring and health](observability.md)
container-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/overview.md
To begin working with Container Apps, select the description that best describes
| | Description | Resource | ||||
-| **I'm new to containers**| Start here if you have yet to build your first container, but are curious how containers can serve your development needs. | [Learn more about containers](start-containers.md) |
-| **I'm using serverless containers** | Container Apps provides automatic scaling, reduces operational complexity, and allows you to focus on your application rather than infrastructure.<br><br>Start here if you're interested in management, scalability, and pay-per-use features of cloud computing. | [Learn more about serverless containers](start-serverless-containers.md) |
+| **I'm new to containers**| Start here if you have yet to build your first container but are curious how containers can serve your development needs. | [Learn more about containers](start-containers.md) |
+| **I'm using serverless containers** | Container Apps provides automatic scaling, reduces operational complexity, and allows you to focus on your application rather than infrastructure.<br><br>Start here if you're interested in the management, scalability, and pay-per-use features of cloud computing. | [Learn more about serverless containers](start-serverless-containers.md) |
## Features
With Azure Container Apps, you can:
- [**Monitor logs**](log-monitoring.md) using Azure Log Analytics. -- [**Generous quotas**](quotas.md) which can be overridden to increase limits on a per-account basis.
+- [**Generous quotas**](quotas.md), which can be overridden to increase limits on a per-account basis.
<sup>1</sup> Applications that [scale on CPU or memory load](scale-app.md) can't scale to zero.
container-registry Container Registry Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-geo-replication.md
Last updated 10/31/2023
# Geo-replication in Azure Container Registry
-Companies that want a local presence, or a hot backup, choose to run services from multiple Azure regions. As a best practice, placing a container registry in each region where images are run allows network-close operations, enabling fast, reliable image layer transfers. Geo-replication enables an Azure container registry to function as a single registry, serving multiple regions with multi-primary regional registries.
+Companies that want a local presence or a hot backup choose to run services from multiple Azure regions. As a best practice, placing a container registry in each region where images are run allows network-close operations, enabling fast, reliable image layer transfers. Geo-replication enables an Azure container registry to function as a single registry, serving multiple regions with multi-primary regional registries.
A geo-replicated registry provides the following benefits:
A geo-replicated registry provides the following benefits:
* Registry resilience if a regional outage occurs > [!NOTE]
-> * If you need to maintain copies of container images in more than one Azure container registry, Azure Container Registry also supports [image import](container-registry-import-images.md). For example, in a DevOps workflow, you can import an image from a development registry to a production registry, without needing to use Docker commands.
+> * If you need to maintain copies of container images in more than one Azure container registry, Azure Container Registry also supports [image import](container-registry-import-images.md). For example, in a DevOps workflow, you can import an image from a development registry to a production registry without needing to use Docker commands.
> * If you want to move a registry to a different Azure region, instead of geo-replicating the registry, see [Manually move a container registry to another region](manual-regional-move.md). ## Prerequisites
-* The user requires following permissions (at registry level) to create/delete replications:
+* The user requires the following permissions (at the registry level) to create/delete replications:
| Permission | Description | |||
A geo-replicated registry provides the following benefits:
| Microsoft.ContainerRegistry/registries/replications/write | Delete a replication | ## Example use case
-Contoso runs a public presence website located across the US, Canada, and Europe. To serve these markets with local and network-close content, Contoso runs [Azure Kubernetes Service](../aks/index.yml) (AKS) clusters in West US, East US, Canada Central, and West Europe. The website application, deployed as a Docker image, utilizes the same code and image across all regions. Content, local to that region, is retrieved from a database, which is provisioned uniquely in each region. Each regional deployment has its unique configuration for resources like the local database.
+Contoso runs a public presence website located across the US, Canada, and Europe. To serve these markets with local and network-close content, Contoso runs [Azure Kubernetes Service](../aks/index.yml) (AKS) clusters in West US, East US, Canada Central, and West Europe. The website application, deployed as a Docker image, utilizes the same code and image across all regions. Content local to that region is retrieved from a database, which is provisioned uniquely in each region. Each regional deployment has its unique configuration for resources like the local database.
-The development team is located in Seattle WA, utilizing the West US data center.
+The development team is located in Seattle, WA, and utilizes the West US data center.
![Pushing to multiple registries](media/container-registry-geo-replication/before-geo-replicate.png)<br />*Pushing to multiple registries*
Typical challenges of multiple registries include:
![Pulling from a geo-replicated registry](media/container-registry-geo-replication/after-geo-replicate-pull.png)
-The geo-replication feature of Azure Container Registry has following benefits:
+The geo-replication feature of Azure Container Registry has the following benefits:
* Manage a single registry across all regions: `contoso.azurecr.io` * Manage a single configuration of image deployments as all regions use the same image URL: `contoso.azurecr.io/public/products/web:1.2`
-* Push to a single registry, while ACR automatically manages the geo-replication. ACR only replicates unique layers, reducing data transfer across regions.
+* Push to a single registry while ACR automatically manages the geo-replication. ACR only replicates unique layers, reducing data transfer across regions.
* Configure regional [webhooks](container-registry-webhook.md) to notify you of events in specific replicas. * Provide a highly available registry that is resilient to regional outages.
ACR begins syncing images across the configured replicas. Once complete, the por
## Considerations for using a geo-replicated registry * Each region in a geo-replicated registry is independent once set-up. Azure Container Registry SLAs apply to each geo-replicated region.
-* For every push or pull image operations on a geo-replicated registry, Azure Traffic Manager in the background sends a request to the registry closest location in the region to maintain network latency.
+* For every push or pull image operation on a geo-replicated registry, Azure Traffic Manager in the background sends a request to the registry's closest location in the region to maintain network latency.
* After you push an image or tag update to the closest region, it takes some time for Azure Container Registry to replicate the manifests and layers to the remaining regions you opted into. Larger images take longer to replicate than smaller ones. Images and tags are synchronized across the replication regions with an eventual consistency model. * To manage workflows that depend on push updates to a geo-replicated registry, we recommend that you configure [webhooks](container-registry-webhook.md) to respond to the push events. You can set up regional webhooks within a geo-replicated registry to track push events as they complete across the geo-replicated regions. * To serve blobs representing content layers, Azure Container Registry uses data endpoints. You can enable [dedicated data endpoints](container-registry-firewall-access-rules.md#enable-dedicated-data-endpoints) for your registry in each of your registry's geo-replicated regions. These endpoints allow configuration of tightly scoped firewall access rules. For troubleshooting purposes, you can optionally [disable routing to a replication](#temporarily-disable-routing-to-replication) while maintaining replicated data.
ACR begins syncing images across the configured replicas. Once complete, the por
* For high availability and resiliency, we recommend creating a registry in a region that supports enabling [zone redundancy](zone-redundancy.md). Enabling zone redundancy in each replica region is also recommended. * If an outage occurs in the registry's home region (the region where it was created) or one of its replica regions, a geo-replicated registry remains available for data plane operations such as pushing or pulling container images.
-* If the registry's home region becomes unavailable, you may be unable to carry out registry management operations including configuring network rules, enabling availability zones, and managing replicas.
+* If the registry's home region becomes unavailable, you may be unable to carry out registry management operations, including configuring network rules, enabling availability zones, and managing replicas.
* To plan for high availability of a geo-replicated registry encrypted with a [customer-managed key](tutorial-enable-customer-managed-keys.md) stored in an Azure key vault, review the guidance for key vault [failover and redundancy](../key-vault/general/disaster-recovery-guidance.md). ## Delete a replica
After you've configured a replica for your registry, you can delete it at any ti
To delete a replica in the Azure portal:
-1. Navigate to your Azure Container Registry, and select **Replications**.
-1. Select the name of a replica, and select **Delete**. Confirm that you want to delete the replica.
+1. Navigate to your Azure Container Registry and select **Replications**.
+1. Select the name of a replica and select **Delete**. Confirm that you want to delete the replica.
To use the Azure CLI to delete a replica of *myregistry* in the East US region:
az acr replication delete --name eastus --registry myregistry
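
To add a replica with the Azure CLI, a matching command is available. For example (illustrative only; the registry name and region are placeholders for your own values):

```azurecli
# Sketch only: create a replica of myregistry in the West Europe region.
az acr replication create --registry myregistry --location westeurope
```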
Geo-replication is a feature of the [Premium service tier](container-registry-skus.md) of Azure Container Registry. When you replicate a registry to your desired regions, you incur Premium registry fees for each region.
-In the preceding example, Contoso consolidated two registries down to one, adding replicas to East US, Canada Central, and West Europe. Contoso would pay four times Premium per month, with no additional configuration or management. Each region now pulls their images locally, improving performance, reliability without network egress fees from West US to Canada and East US.
+In the preceding example, Contoso consolidated two registries down to one, adding replicas to East US, Canada Central, and West Europe. Contoso would pay four times the Premium fee per month, with no additional configuration or management. Each region now pulls its images locally, improving performance and reliability without network egress fees from West US to Canada and East US.
## Troubleshoot push operations with geo-replicated registries
az acr replication update --name westus \
## Creating replication for a Private Endpoint enabled registry
-When creating a new registry replication for the primary registry enabled with Private Endpoint, we recommend validating User Identity has valid Private Endpoint creation permissions. Otherwise, the operation gets stuck in the provisioning state while creating the replication.
+When creating a new registry replication for the primary registry enabled with Private Endpoint, we recommend validating that the User Identity has valid Private Endpoint creation permissions. Otherwise, the operation gets stuck in the provisioning state while creating the replication.
Follow the steps below if you get stuck in the provisioning state while creating the registry replication:
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/quickstart-dotnet.md
The client library is available through NuGet, as the `Microsoft.Azure.Cosmos` p
cd ./src/web ```
-1. If not already installed, install the `Microsoft.Azure.Cosmos` package using `dotnet add package`.
+1. If not already installed, install the `Azure.Data.Tables` package using `dotnet add package`.
```bash
- dotnet add package Microsoft.Azure.Cosmos
+ dotnet add package Azure.Data.Tables
``` 1. Also, install the `Azure.Identity` package if not already installed.
cosmos-db Vector Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md
Title: Vector database
-description: Vector database and retrieval augmented generation (RAG) implementation.
+description: Vector database
Previously updated : 12/11/2023 Last updated : 03/29/2024 # Vector database
Last updated 12/11/2023
Vector databases are used in numerous domains and situations across analytical and generative AI, including natural language processing, video and image recognition, recommendation system, search, etc.
-An increasingly popular use case is augmenting your applications with large language models (LLMs) and vector databases that can access your own data through retrieval-augmented generation (RAG). This approach allows you to:
+Many AI-enhanced systems that emerged in 2023 use standalone vector databases that are distinct from "traditional" databases in their tech stacks. Instead of adding a separate vector database, you can use our integrated vector database when working with multi-modal data. By doing so, you avoid the extra cost of moving data to a separate database. Moreover, this architecture keeps your vector embeddings and original data together, and you can better achieve data consistency, scale, and performance. The latter reason is why OpenAI built its ChatGPT service on top of Azure Cosmos DB.
-- Generate contextually relevant and accurate responses to user prompts from AI models-- Overcome ChatGPT, GPT-3.5, or GPT-4's token limits-- Reduce the costs from frequent fine-tuning on updated data-
-Some RAG implementation tutorials demonstrate integrating vector databases that are distinct from traditional databases. Instead of adding a separate vector database, you can use our integrated vector database when working with multi-modal data. By doing so, you avoid the extra cost of moving data to a separate database. Moreover, this keeps your vector embeddings and original data together, and you can better achieve data consistency, scale, and performance. The latter reason is why OpenAI built its ChatGPT service on top of Azure Cosmos DB.
-
-Here's how to implement our integrated vector database:
+Here's how to implement our integrated vector database, thereby taking advantage of its single-digit millisecond response times, automatic and instant scalability, and guaranteed speed at any scale:
| | Description | | | |
-| **[Azure Cosmos DB for Mongo DB vCore](#implement-vector-database-functionalities-using-our-api-for-mongodb-vcore)** | Store your application data and vector embeddings together in a single MongoDB-compatible service featuring natively integrated vector database. |
-| **[Azure Cosmos DB for PostgreSQL](#implement-vector-database-functionalities-using-our-api-for-postgresql)** | Store your data and vectors together in a scalable PostgreSQL offering with natively integrated vector database. |
-| **[Azure Cosmos DB for NoSQL with Azure AI Search](#implement-vector-database-functionalities-using-our-nosql-api-and-ai-search)** | Augment your Azure Cosmos DB data with semantic and vector search capabilities of Azure AI Search. |
+| **[Azure Cosmos DB for Mongo DB vCore](#how-to-implement-vector-database-functionalities-using-our-api-for-mongodb-vcore)** | Store your application data and vector embeddings together in a single MongoDB-compatible service featuring natively integrated vector database. |
+| **[Azure Cosmos DB for PostgreSQL](#how-to-implement-vector-database-functionalities-using-our-api-for-postgresql)** | Store your data and vectors together in a scalable PostgreSQL offering with natively integrated vector database. |
+| **[Azure Cosmos DB for NoSQL with Azure AI Search](#how-to-implement-vector-database-functionalities-using-our-nosql-api-and-ai-search)** | Augment your Azure Cosmos DB data with semantic and vector search capabilities of Azure AI Search. |
## What is a vector database? A vector database is a database designed to store and manage [vector embeddings](#embeddings), which are mathematical representations of data in a high-dimensional space. In this space, each dimension corresponds to a feature of the data, and tens of thousands of dimensions might be used to represent sophisticated data. A vector's position in this space represents its characteristics. Words, phrases, or entire documents, and images, audio, and other types of data can all be vectorized. These vector embeddings are used in similarity search, multi-modal search, recommendations engines, large languages models (LLMs), etc.
-It's increasingly popular to use the [vector search](#vector-search) feature in a vector database to enable [retrieval-augmented generation](#retrieval-augmented-generation) that harnesses LLMs and custom data or domain-specific information. This process involves extracting pertinent information from a custom data source and integrating it into the model request through prompt engineering.
+In a vector database, embeddings are indexed and queried through [vector search](#vector-search) algorithms based on their vector distance or similarity. A robust mechanism is necessary to identify the most relevant data. Some well-known vector search algorithms include Hierarchical Navigable Small World (HNSW), Inverted File (IVF), DiskANN, etc.
+
+Besides the above functionalities of a typical vector database, our integrated vector database also converts the existing raw data in your account into embeddings and stores them as vectors. This way, you avoid the extra cost of moving data to a separate vector database. Moreover, this architecture keeps your vector embeddings and original data together, and you can better achieve data consistency, scale, and performance.
+
+## What are some vector database use cases?
+
+Vector databases are used in numerous domains and situations across analytical and generative AI, including natural language processing, video and image recognition, recommendation system, search, etc. For example, you can use a vector database to:
-A robust mechanism is necessary to identify the most relevant data from the custom source that can be passed to the LLM. Our integrated vector database converts the data in your database into embeddings and store them as vectors for future use. The vector search captures the semantic meaning of the text and going beyond mere keywords to comprehend the context. Moreover, this mechanism allows you to optimize for the LLM's limit on the number of [tokens](#tokens) per request.
+- identify similar images, documents, and songs based on their contents, themes, sentiments, and styles
+- identify similar products based on their characteristics, features, and user groups
+- recommend contents, products, or services based on individuals' preferences
+- recommend contents, products, or services based on user groups' similarities
+- identify the best-fit potential options from a large pool of choices to meet complex requirements
+- identify data anomalies or fraudulent activities that are dissimilar from predominant or normal patterns
+- implement persistent memory for AI agents
-Prior to sending a request to the LLM, the user input/query/request is also transformed into an embedding, and vector search techniques are employed to locate the most similar embeddings within the database. This technique enables the identification of the most relevant data records in the database. These retrieved records are then supplied as input to the LLM request using [prompt engineering](#prompts-and-prompt-engineering).
+Besides these typical use cases for vector database, our integrated vector database is also an ideal solution for production-level LLM caching thanks to its low latency, high scalability, and high availability.
-Here are multiple ways to implement RAG on your data by using our vector database functionalities.
+It's especially popular to use vector databases to enable [retrieval-augmented generation (RAG)](#retrieval-augmented-generation) that harnesses LLMs and custom data or domain-specific information. This approach allows you to:
-## Implement vector database functionalities using our API for MongoDB vCore
+- Generate contextually relevant and accurate responses to user prompts from AI models
+- Overcome LLMs' [tokens](#tokens) limits
+- Reduce the costs from frequent fine-tuning on updated data
+
+This process involves extracting pertinent information from a custom data source and integrating it into the model request through prompt engineering. Before sending a request to the LLM, the user input/query/request is also transformed into an embedding, and vector search techniques are employed to locate the most similar embeddings within the database. This technique enables the identification of the most relevant data records in the database. These retrieved records are then supplied as input to the LLM request using [prompt engineering](#prompts-and-prompt-engineering).
+
+Here are multiple ways to implement RAG on your data by using our vector database functionalities:
+
+## How to implement vector database functionalities using our API for MongoDB vCore
Use the natively integrated vector database in [Azure Cosmos DB for MongoDB vCore](mongodb/vcore/vector-search.md), which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications.
Use the natively integrated vector database in [Azure Cosmos DB for MongoDB vCor
- [Python - LlamaIndex integration](https://docs.llamaindex.ai/en/stable/examples/vector_stores/AzureCosmosDBMongoDBvCoreDemo.html) - [Python - Semantic Kernel memory integration](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/memory/azure_cosmosdb)
-## Implement vector database functionalities using our API for PostgreSQL
+## How to implement vector database functionalities using our API for PostgreSQL
Use the natively integrated vector database in [Azure Cosmos DB for PostgreSQL](postgresql/howto-use-pgvector.md), which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications.
Use the natively integrated vector database in [Azure Cosmos DB for PostgreSQL](
- Python: [Python notebook tutorial - food review chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-PostgreSQL_CognitiveSearch)
-## Implement vector database functionalities using our NoSQL API and AI Search
+## How to implement vector database functionalities using our NoSQL API and AI Search
The natively integrated vector database in our NoSQL API will become available in mid-2024. In the meantime, you may implement RAG patterns with Azure Cosmos DB for NoSQL and [Azure AI Search](../search/vector-search-overview.md). This approach enables powerful integration of your data residing in the NoSQL API into your AI-oriented applications.
The process of creating good prompts for a scenario is called prompt engineering
### Tokens Tokens are small chunks of text generated by splitting the input text into smaller segments. These segments can either be words or groups of characters, varying in length from a single character to an entire word. For instance, the word hamburger would be divided into tokens such as ham, bur, and ger while a short and common word like pear would be considered a single token. LLMs like ChatGPT, GPT-3.5, or GPT-4 break words into tokens for processing.+
+## Related content
+
+- [Azure Cosmos DB for MongoDB vCore Integrated Vector Database](mongodb/vcore/vector-search.md)
+- [Azure PostgreSQL Server pgvector Extension](../postgresql/flexible-server/how-to-use-pgvector.md)
+- [Azure AI Search](../search/search-what-is-azure-search.md)
+- [Open Source Vector Database List](/semantic-kernel/memories/vector-db#available-connectors-to-vector-databases)
data-factory Airflow Sync Github Repository https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-sync-github-repository.md
In this article, you learn how to synchronize your GitHub repository in Azure Da
To sync your GitHub repository by using the Workflow Orchestration Manager UI: 1. Ensure that your repository contains the necessary folders and files:
- - **Dags/**: For Apache Airflow directed acyclic graphs (DAGs) (required).
+ - **dags/**: For Apache Airflow directed acyclic graphs (DAGs) (required).
- **Plugins/**: For integrating external features to Airflow. :::image type="content" source="media/airflow-git-sync-repository/airflow-folders.png" alt-text="Screenshot that shows the Airflow folders structure in GitHub.":::
data-factory Enable Aad Authentication Azure Ssis Ir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/enable-aad-authentication-azure-ssis-ir.md
ms.devlang: powershell
-+ Last updated 07/17/2023
data-manager-for-agri Concepts Farm Operations Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-farm-operations-data.md
Title: Working with Farm Activities data in Azure Data Manager for Agriculture
-description: Learn how to integrate with Farm Activities data providers and ingest data into ADMA
+ Title: Work with farm activities data in Azure Data Manager for Agriculture
+description: Learn how to integrate with data providers for farm activities and ingest data into Azure Data Manager for Agriculture.
Last updated 08/14/2023
-# Working with Farm Activities data in Azure Data Manager for Agriculture
-Farm Activities data is one of the most important ground truth datasets in precision agriculture. It's these machine-generated reports that preserve the record of what exactly happened where and when that is used to both improve in-field practice and the downstream values chain analytics cases
+# Work with farm activities data in Azure Data Manager for Agriculture
-The Data Manager for Agriculture supports both
-* summary data - entered as properties in the operation data item directly
-* precision data - (for example, a .shp, .dat, .isoxml) uploaded as an attachment file and reference linked to the operation data item.
+Data about farm activities is one of the most important ground-truth datasets in precision agriculture. These machine-generated reports preserve the record of exactly what happened and when. That record can help improve in-field practice and the downstream value-chain analytics.
+
+Azure Data Manager for Agriculture supports both:
+
+* **Summary data**: Entered as properties directly in the operation data item.
+* **Precision data**: Uploaded as an attachment file (for example, .shp, .dat, or .isoxml) and reference linked to the operation data item.
+
+New operation data can be pushed into the service via the APIs for operation and attachment creation. Or, if the desired source is in the supported list of original equipment manufacturer (OEM) connectors, data can be synced automatically from providers like Climate FieldView with an ingestion job for farm operations.
-New operation data can be pushed into the service via the APIs for operation and attachment creation. Or, if the desired source is in the supported list of OEM connectors, data can be synced automatically from providers like Climate FieldView with a farm operation ingestion job.
[!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)]
-* Azure Data Manager for Agriculture supports a range of Farm Activities data that can be found [here](/rest/api/data-manager-for-agri/#farm-activities)
-## Integration with farm equipment manufacturers
-Azure Data Manager for Agriculture fetches the associated Farm Activities data (planting, application, tillage & harvest) from the data provider (Ex: Climate FieldView) by creating a Farm Activities data ingestion job. Look [here](./how-to-ingest-and-egress-farm-operations-data.md) for more details.
+Azure Data Manager for Agriculture supports a range of data about farm activities. For more information, see [What is Azure Data Manager for Agriculture?](/rest/api/data-manager-for-agri).
+
+## Integration with manufacturers of farm equipment
+
+Azure Data Manager for Agriculture fetches the associated data about farm activities (planting, application, tillage, and harvest) from the data provider (for example, Climate FieldView) by creating a data ingestion job for farm activities. For more information, see [Working with farm activities and activity data in Azure Data Manager for Agriculture](./how-to-ingest-and-egress-farm-operations-data.md).
## Next steps
-* Test our APIs [here](/rest/api/data-manager-for-agri).
+* [Test the Azure Data Manager for Agriculture REST APIs](/rest/api/data-manager-for-agri)
data-manager-for-agri Concepts Hierarchy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-hierarchy-model.md
Title: Hierarchy model in Azure Data Manager for Agriculture
-description: Provides information on the data model to organize your agriculture data.
+description: Get information on the data model to organize your agriculture-related data.
Last updated 08/22/2023
-# Hierarchy model to organize agriculture related data
+# Hierarchy model in Azure Data Manager for Agriculture
[!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)]
-To generate actionable insights data related to growers, farms, and fields should be organized in a well defined manner. Firms operating in the agriculture industry often perform longitudinal studies and need high quality data to generate insights. Data Manager for Agriculture organizes agronomic data in the below manner.
+To generate actionable insights, data related to growers, farms, and fields should be organized in a well-defined way. Firms that operate in the agriculture industry often perform longitudinal studies and need high-quality data to generate insights. Azure Data Manager for Agriculture organizes agronomic data in the following hierarchy.
-## Understanding farm hierarchy
+## Farm hierarchy
-### Party
-* Party is the owner and custodian of any data related to their farm. You could imagine Party to be the legal entity that is running the business.
-* The onus of defining the Party entity is with the customer setting up Data Manager for Agriculture.
+### Party
+
+A party is the owner and custodian of any data related to a farm. You can think of a party as the legal entity that runs the business.
+
+The customer who sets up Azure Data Manager for Agriculture defines the party entity.
### Farm
-* Farms are logical entities. A farm is a collection of fields.
-* Farms don't have any geometry associated with them. Farm entity helps you organize your growing operations. For example, Contoso Inc is the Party that has farms in Oregon and Idaho.
+
+Farms are logical entities. A farm is a collection of fields.
+
+Farms don't have any geometry associated with them. A farm entity helps you organize your growing operations. For example, Contoso Ltd. is the party that has farms in Oregon and Idaho.
### Field
-* Fields denote a stable geometry that is in general agnostic to seasons and other temporal constructs. For example, field could be the geometry denoted in government records.
-* Fields are multi-polygon. For example, a road might divide the farm in two or more parts.
+
+Fields denote a stable geometry that's generally agnostic to seasons and other temporal constructs. For example, a field could be the geometry denoted in government records.
+
+Fields are multipolygons. For example, a road might divide the farm into two or more parts.
### Seasonal field
-* Seasonal field is the most important construct in the farming world. A seasonal fields definition includes the following things
- * geometry
- * Season
- * Crop
-* A seasonal field is associated with a field or a farm
-* In Data Manager for Agriculture, seasonal fields are mono crop entities. In cases where farmers are cultivating different crops simultaneously, they have to create one seasonal field per crop.
-* A seasonal field is associated with one season. If a farmer cultivates across multiple seasons, they have to create one seasonal field per season.
-* It's multi-polygon. Same crop can be planted in different areas within the farm.
+
+A seasonal field is the most important construct in the farming world. A seasonal field's definition includes:
+
+* Geometry
+* Season
+* Crop
+
+A seasonal field is:
+
+* Associated with a field or a farm.
+* A monocrop entity in Azure Data Manager for Agriculture. If farmers cultivate multiple crops simultaneously, they have to create one seasonal field per crop.
+* Associated with one season. If farmers cultivate across multiple seasons, they have to create one seasonal field per season.
+* A multipolygon. The same crop can be planted in various areas within the farm.
### Season
-* Season represents the temporal aspect of farming. It's a function of local agronomic practices, procedures and weather.
+
+The season represents the temporal aspect of farming. It's a function of local agronomic practices, procedures, and weather.
### Crop
-* Crop entity provides the phenotypic details of the planted crop.
+
+A crop entity provides the phenotypic details of the planted crop.
### Crop product
-* Crop Product entity refers to the commercial variety (brand, product) of the planted seeds. A seasonal field can contain information about various varieties of seeds planted (belonging to the same crop).
+
+A crop product is the commercial variety (brand and product) of the planted seeds. A seasonal field can contain information about multiple seed varieties planted, as long as they belong to the same crop.
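To make the hierarchy concrete, here's a minimal sketch that creates a party, a farm, a field, and a seasonal field through the REST APIs. The endpoint paths, API version, and property names are illustrative assumptions; the REST reference defines the exact routes and schemas.

```python
import requests

BASE_URL = "https://<your-adma-instance>.farmbeats.azure.net"   # placeholder instance URL
API_VERSION = "2023-11-01-preview"
HEADERS = {"Authorization": "Bearer <access-token>", "Content-Type": "application/json"}

def create_or_update(path: str, body: dict) -> dict:
    """PUT an entity and return the service response."""
    resp = requests.put(f"{BASE_URL}{path}?api-version={API_VERSION}", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

party_id, farm_id, field_id = "contoso", "contoso-oregon", "north-field"

# Party -> farm -> field -> seasonal field, mirroring the hierarchy described above.
create_or_update(f"/parties/{party_id}", {"name": "Contoso Ltd."})
create_or_update(f"/parties/{party_id}/farms/{farm_id}", {"name": "Oregon farm"})
create_or_update(f"/parties/{party_id}/fields/{field_id}", {"farmId": farm_id, "name": "North field"})
create_or_update(
    f"/parties/{party_id}/seasonal-fields/{field_id}-corn-2024",
    {
        "fieldId": field_id,
        "seasonId": "season-2024",   # one seasonal field per season
        "cropId": "corn",            # one seasonal field per crop (monocrop)
    },
)
```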
## Next steps
-* Test our APIs [here](/rest/api/data-manager-for-agri).
+* [Test the Azure Data Manager for Agriculture REST APIs](/rest/api/data-manager-for-agri)
data-manager-for-agri Concepts Ingest Satellite Imagery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-satellite-imagery.md
Title: Ingesting satellite data in Azure Data Manager for Agriculture
-description: Provides step by step guidance to ingest Satellite data
+ Title: Ingest satellite data in Azure Data Manager for Agriculture
+description: Get step-by-step guidance on how to ingest satellite data.
show_latex: true
-# Using satellite imagery in Azure Data Manager for Agriculture
-Satellite imagery makes up a foundational pillar of agriculture data. To support scalable ingestion of geometry-clipped imagery, we partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. This BYOL experience allows you to manage your own costs. This capability helps you with storing your field-clipped historical and up to date imagery in the linked context of the relevant fields.
+# Ingest satellite imagery in Azure Data Manager for Agriculture
+
+Satellite imagery is a foundational pillar of agriculture data. To support scalable ingestion of geometry-clipped imagery, Microsoft partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience for Azure Data Manager for Agriculture. You can use this BYOL experience to manage your own costs. This capability helps you with storing your field-clipped historical and up-to-date imagery in the linked context of the relevant fields.
## Prerequisites
-* To search and ingest imagery, you need a user account that has suitable subscription entitlement with Sentinel Hub: https://www.sentinel-hub.com/pricing/
-* Read the Sinergise Sentinel Hub terms of service and privacy policy: https://www.sentinel-hub.com/tos/
-* Have your providerClientId and providerClientSecret ready
+
+* To search for and ingest imagery, you need a user account that has suitable subscription entitlement with [Sentinel Hub](https://www.sentinel-hub.com/pricing/).
+* Read the [Sinergise Sentinel Hub terms of service and privacy policy](https://www.sentinel-hub.com/tos/).
+* Have your `providerClientId` and `providerClientSecret` values ready.
## Ingesting geometry-clipped imagery
-Using satellite data in Data Manager for Agriculture involves following steps:
+Using satellite data in Azure Data Manager for Agriculture involves the following steps:
+
[!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)]

## Consumption visibility and logging
-As all ingest data is under a BYOL model, transparency into the cost of a given job is needed. Our data manager offers built-in logging to provide transparency on PU consumption for calls to our upstream partner Sentinel Hub. The information appears under the "SatelliteLogs" Category of the standard data manager Logging found [here](how-to-set-up-audit-logs.md).
-## STAC search for available imagery
-Our data manager supports the industry standard [STAC](https://stacspec.org/en) search interface to find metadata on imagery in the sentinel collection prior to committing to downloading pixels. To do so, the search endpoint accepts a location in the form of a point, polygon or multipolygon plus a start and end date time. Alternatively, if you already have the unique "Item ID," it can be provided as an array, of up 5, to retrieve those specific items directly
+Because all ingested data is under a BYOL model, the cost of a job is transparent. Azure Data Manager for Agriculture offers built-in logging to provide transparency on processing unit (PU) consumption for calls to upstream partner Sentinel Hub. The information appears under the `SatelliteLogs` category of the [standard Azure Data Manager logging](how-to-set-up-audit-logs.md).
+
+## STAC search for available imagery
+
+Azure Data Manager for Agriculture supports the industry-standard [SpatioTemporal Asset Catalogs (STAC)](https://stacspec.org/en) search interface to find metadata on imagery in the Sentinel Hub collection before committing to downloading pixels. To do so, the search endpoint accepts a location in the form of a point, polygon, or multipolygon, plus a start and end date/time. Alternatively, if you already have unique item IDs, you can provide them as an array of up to five IDs to retrieve those specific items directly.
> [!IMPORTANT]
-> To be consistent with STAC syntax "Feature ID" is renamed to "Item ID" from the 2023-11-01-preview API version.
-> If an "Item ID" is provided, any location and time parameters in the request will be ignored.
+> To be consistent with STAC syntax, *feature ID* is renamed to *item ID* from the 2023-11-01-preview API version.
+>
+> If you provide an item ID, any location and time parameters in the request are ignored.
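Here's a minimal sketch of the two search patterns described above: by geometry plus a time window, or by an array of up to five item IDs. The endpoint path, field names, and API version are illustrative assumptions; check the satellite REST reference for the exact request schema.

```python
import requests

BASE_URL = "https://<your-adma-instance>.farmbeats.azure.net"   # placeholder instance URL
API_VERSION = "2023-11-01-preview"
HEADERS = {"Authorization": "Bearer <access-token>", "Content-Type": "application/json"}

# Option 1: search by geometry and time window (illustrative field names).
search_by_geometry = {
    "intersects": {
        "type": "Polygon",
        "coordinates": [[[-121.5, 45.1], [-121.4, 45.1], [-121.4, 45.2], [-121.5, 45.2], [-121.5, 45.1]]],
    },
    "startDateTime": "2024-01-01T00:00:00Z",
    "endDateTime": "2024-01-31T23:59:59Z",
}

# Option 2: search by known item IDs (maximum of five); location and time are then ignored.
search_by_ids = {"ids": ["<item-id-1>", "<item-id-2>"]}

# Illustrative endpoint path; see the REST reference for the exact route.
url = f"{BASE_URL}/scenes/stac-collections/Sentinel2/:search?api-version={API_VERSION}"
resp = requests.post(url, json=search_by_geometry, headers=HEADERS)
resp.raise_for_status()

for item in resp.json().get("features", []):
    print(item.get("id"), item.get("properties", {}).get("datetime"))
```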
-## Single tile source control
-Published tiles overlap space on the earth to ensure full spatial coverage. If the queried geometry lies in a space where more than one tile matches for a reasonable time frame, the provider automatically mosaics the returned image with selected pixels from the range of candidate tiles. The provider produces the "best" resulting image.
+## Single-tile source control
-In some cases, it isn't desirable and traceability to a single tile source is preferred. To support this strict source control, our data manager supports specifying a single item ID in the ingest-job.
+Published tiles overlap space on the earth to ensure full spatial coverage. If the queried geometry lies in a space where more than one tile matches for a reasonable time frame, the provider automatically mosaics the returned image with selected pixels from the range of candidate tiles. The provider produces the best resulting image.
+
+In some cases, using more than one tile isn't desirable and traceability to a single tile source is preferred. To support this strict source control, Azure Data Manager for Agriculture supports specifying a single item ID in the ingest job.
> [!NOTE]
-> This functionality is only available from the 2023-11-01-preview API version.
-> If an "Item ID" is provided for which the geometry only has partial coverage (eg the geometry spans more than one tile), the returned images will only reflect the pixels that are present in the specified itemΓÇÖs tile and will result in a partial image.
+> This functionality is available only from the 2023-11-01-preview API version.
+>
+> If the geometry for a provided item ID has partial coverage (for example, the geometry spans more than one tile), the returned images reflect only the pixels that are present in the specified item's tile and result in a partial image.
## Reprojection
-> [!IMPORTANT]
-> This functionality has been changed from the 2023-11-01-preview API version, however it will be immediately applicable to all versions. Older versions used a static conversion of 10m*10m set at the equator, so imagery ingested prior to this release may have a difference in size to those ingested after this release .
-Data Manager for Agriculture uses the WSG84 (EPSG: 4326), a flat coordinate system, whereas Sentinel-2 imagery is presented in UTM, a ground projection system that approximates the round earth.
+> [!IMPORTANT]
+> Reprojection functionality has changed from the 2023-11-01-preview API version, but it's immediately applicable to all versions. Older versions used a static conversion of 10 m * 10 m set at the equator. Imagery ingested before this release might have a difference in size from imagery ingested after this release.
+
+Azure Data Manager for Agriculture uses WGS84 (EPSG: 4326), a flat coordinate system. Sentinel-2 imagery is presented in UTM, a ground projection system that approximates the round earth.
-Translating between a flat image and a round earth involves an approximation translation. The accuracy of this translation is set to equal at the equator (10 m^2) and increases in error margin as the point in question moves away from the equator to the poles.
-For consistency, our data manager uses the following formula at 10 m^2 base for all Sentinel-2 calls:
-
+Translating between a flat image and a round earth involves an approximation translation. This translation is exact at the equator (10 m^2), and the error margin increases as the point in question moves away from the equator toward the poles.
+
+For consistency, Azure Data Manager for Agriculture uses the following formula at 10 m^2 base for all Sentinel-2 calls:
$$ Latitude = \frac{10 m}{111320}
$$
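Evaluated, that expression gives the per-pixel latitude step, in degrees, that corresponds to 10 m at the equator:

$$ Latitude = \frac{10\ m}{111320} \approx 8.98 \times 10^{-5}\ degrees $$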
## Caching
+
> [!IMPORTANT]
-> This functionality is only available from the 2023-11-01-preview api version. Item caching is only applicable for "Item ID" based retrieval. For a typical geometry and time search, the returned items will not be cached.
+> Caching functionality is available only from the 2023-11-01-preview API version. Item caching is applicable only for retrieval that's based on item ID. For a typical geometry and time search, the returned items aren't cached.
-Our data manager optimizes performance and costing of highly repeated calls to the same item. It caches recent STAC items when retrieved by "Item ID" for five days in the customer's instance and enables local retrieval.
+Azure Data Manager for Agriculture optimizes performance and costing of highly repeated calls to the same item. It caches recent STAC items retrieved by item ID for five days in the customer's instance and enables local retrieval.
-For the first call to the search endpoint, our data manager brokers the request and triggers a request to the upstream provider to retrieve the matching or intersecting data items, incurring any provider fees. Any subsequent search first directs to the cache for a match. If found, data is served from the cache directly and doesn't result in a call to the upstream provider, thus saving any more provider fees. If no match is found, or if it after the five day retention period, then a subsequent call for the data will be passed to the upstream provider. And treated as another first call with the results being cached.
+For the first call to the search endpoint, Azure Data Manager for Agriculture brokers the request and triggers a request to the upstream provider to retrieve the matching or intersecting data items. The request incurs any provider fees.
-If an ingestion job is for an identical geometry, referenced by the same resource ID, and with overlapping time to an already retrieved scene, then the locally stored image is used. It isn't redownloaded from the upstream provider. There's no expiration for this pixel-level caching.
+Any subsequent search first directs to the cache for a match. If there's a match, data is served from the cache directly. This process doesn't result in a call to the upstream provider, so it doesn't incur more provider fees. If there's no match, or if the five-day retention period elapses, a subsequent call for the data is passed to the upstream provider. That call is treated as another first call, so the results are cached.
-## Satellite sources supported by Azure Data Manager for Agriculture
-In our public preview, we support ingesting data from Sentinel-2 constellation.
+If an ingestion job is for an identical geometry, referenced by the same resource ID, and with overlapping time to an already retrieved scene, Azure Data Manager for Agriculture uses the locally stored image. The image isn't downloaded again from the upstream provider. There's no expiration for this pixel-level caching.
+
+## Satellite sources that Azure Data Manager for Agriculture supports
+
+While Azure Data Manager for Agriculture is in preview, it supports ingesting data from the Sentinel-2 constellation.
### Sentinel-2
-[Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) is a satellite constellation launched by 'European Space Agency' (ESA) under the Copernicus mission. This constellation has a pair of satellites and carries a Multi-Spectral Instrument (MSI) payload that samples 13 spectral bands: four bands at 10 m, six bands at 20 m and three bands at 60-m spatial resolution.
-> [!TIP]
-> Sentinel-2 has two products: Level 1 (top of the atmosphere) data and its atmospherically corrected variant Level 2 (bottom of the atmosphere) data. We support ingesting and retrieving Sentinel_2_L2A and Sentinel_2_L1C data from Sentinel 2.
+[Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) is a satellite constellation that the European Space Agency (ESA) launched under the Copernicus mission. This constellation has a pair of satellites and carries a multispectral instrument (MSI) payload that samples 13 spectral bands: four bands at 10 m, six bands at 20 m, and three bands at 60-m spatial resolution.
+
+Sentinel-2 has two products:
+
+* Level 1 data for the top of the atmosphere.
+* Level 2 data for the bottom of the atmosphere. This variant is atmospherically corrected.
+
+Azure Data Manager for Agriculture supports ingesting and retrieving Sentinel_2_L2A and Sentinel_2_L1C data from Sentinel 2.
### Image names and resolutions
-The image names and resolutions supported by APIs used to ingest and read satellite data (for Sentinel-2) in our service:
-| Category | Image Name | Description | Native resolution |
+APIs that you use to ingest and read satellite data (for Sentinel-2) in Azure Data Manager for Agriculture support the following image names and resolutions:
+
+| Category | Image name | Description | Native resolution |
|:--:|:-:|:-:|:-:|
-|Raw bands| B01 | Coastal aerosol | 60 m |
-|Raw bands| B02 | Blue| 10 m |
-|Raw bands| B03 | Green | 10 m |
-|Raw bands| B04 | Red | 10 m |
-|Raw bands| B05 | Vegetation red edge | 20 m |
-|Raw bands| B06 | Vegetation red edge | 20 m |
-|Raw bands| B07 | Vegetation red edge | 20 m |
-|Raw bands| B08 | NIR | 10 m |
-|Raw bands| B8A | Narrow NIR | 20 m |
-|Raw bands| B09 | Water vapor | 60 m |
-|Raw bands| B11 | SWIR | 20 m |
-|Raw bands| B12 | SWIR | 20 m |
-|Sen2Cor processor output| AOT | Aerosol optical thickness map | 10 m |
-|Sen2Cor processor output| SCL | Scene classification data | 20 m |
-|Sen2Cor processor output| SNW | Snow probability| 20 m |
-|Sen2Cor processor output| CLD | Cloud probability| 20 m |
-|Derived Indices| NDVI | Normalized difference vegetation index | 10 m/20 m/60 m (user defined) |
-|Derived Indices| NDWI | Normalized difference water index | 10 m/20 m/60 m (user defined) |
-|Derived Indices| EVI | Enhanced vegetation index | 10 m/20 m/60 m (user defined) |
-|Derived Indices| LAI | Leaf Area Index | 10 m/20 m/60 m (user defined) |
-|Derived Indices| LAIMask | Leaf Area Index Mask | 10 m/20 m/60 m (user defined) |
-|CLP| Cloud probability, based on [s2cloudless](https://github.com/sentinel-hub/sentinel2-cloud-detector). | Values range from 0 (no clouds) to 255 (clouds). | 10 m/20 m/60 m (user defined)|
-|CLM| Cloud masks based on [s2cloudless](https://github.com/sentinel-hub/sentinel2-cloud-detector) | Value of 1 represents clouds, 0 represents no clouds and 255 represents no data. | 10 m/20 m/60 m (user defined)|
-|dataMask | Binary mask to denote availability of data | 0 represents non availability of data OR pixels lying outside the 'Area of interest' | Not applicable, per pixel value|
+|Raw bands| B01 | Coastal aerosol | 60 m |
+|Raw bands| B02 | Blue| 10 m |
+|Raw bands| B03 | Green | 10 m |
+|Raw bands| B04 | Red | 10 m |
+|Raw bands| B05 | Vegetation red edge | 20 m |
+|Raw bands| B06 | Vegetation red edge | 20 m |
+|Raw bands| B07 | Vegetation red edge | 20 m |
+|Raw bands| B08 | Near infrared (NIR) | 10 m |
+|Raw bands| B8A | Narrow NIR | 20 m |
+|Raw bands| B09 | Water vapor | 60 m |
+|Raw bands| B11 | Short-wave infrared (SWIR) | 20 m |
+|Raw bands| B12 | SWIR | 20 m |
+|Sen2Cor processor output| AOT | Aerosol optical thickness map | 10 m |
+|Sen2Cor processor output| SCL | Scene classification data | 20 m |
+|Sen2Cor processor output| SNW | Snow probability| 20 m |
+|Sen2Cor processor output| CLD | Cloud probability| 20 m |
+|Derived indices| NDVI | Normalized difference vegetation index | 10 m/20 m/60 m (user defined) |
+|Derived indices| NDWI | Normalized difference water index | 10 m/20 m/60 m (user defined) |
+|Derived indices| EVI | Enhanced vegetation index | 10 m/20 m/60 m (user defined) |
+|Derived indices| LAI | Leaf area index | 10 m/20 m/60 m (user defined) |
+|Derived indices| LAIMask | Leaf area index mask | 10 m/20 m/60 m (user defined) |
+|CLP| Cloud probability based on [s2cloudless](https://github.com/sentinel-hub/sentinel2-cloud-detector) | Values range from `0` (no clouds) to `255` (clouds). | 10 m/20 m/60 m (user defined)|
+|CLM| Cloud masks based on [s2cloudless](https://github.com/sentinel-hub/sentinel2-cloud-detector) | Value of `1` represents clouds, `0` represents no clouds, and `255` represents no data. | 10 m/20 m/60 m (user defined)|
+|dataMask | Binary mask to denote availability of data | Value of `0` represents unavailability of data or pixels lying outside the area of interest. | Not applicable, per pixel value|
## Points to note
-* We use CRS EPSG: 4326 for Sentinel-2 data. The resolutions quoted in the APIs are at the equator.
-* For preview:
- * A maximum of five satellite jobs can be run concurrently, per tenant.
- * A satellite job can ingest data for a maximum of one year in a single API call.
- * Only TIFs are supported.
- * Only 10 m, 20 m and 60-m images are supported.
+
+Azure Data Manager for Agriculture uses CRS EPSG: 4326 for Sentinel-2 data. The resolutions quoted in the APIs are at the equator.
+
+For the preview:
+
+* A maximum of five satellite jobs can run concurrently, per tenant.
+* A satellite job can ingest data for a maximum of one year in a single API call.
+* Only TIFs are supported.
+* Only 10-m, 20-m, and 60-m images are supported.
## Next steps
-* Test our APIs [here](/rest/api/data-manager-for-agri).
+* [Test the Azure Data Manager for Agriculture REST APIs](/rest/api/data-manager-for-agri)
data-manager-for-agri Concepts Ingest Sensor Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-sensor-data.md
Title: Ingesting sensor data in Azure Data Manager for Agriculture
-description: Provides step by step guidance to ingest Sensor data.
+ Title: Ingest sensor data in Azure Data Manager for Agriculture
+description: Get step-by-step guidance for ingesting sensor data.
Last updated 06/19/2023
-# Ingesting sensor data
+# Ingest sensor data in Azure Data Manager for Agriculture
-Smart agriculture, also known as precision agriculture, allows growers to maximize yields using minimal resources such as water, fertilizer, and seeds, etc. By deploying sensors, growers and research organization can begin to understand crops at a micro-scale, conserve resources, reduce impact on the environment and ultimately maximize crop yield. Sensors enable important ground truth data (soil moisture, rainfall, wind speed etc.) and this data in turn improves accuracy of recommendations.
-
-> [!NOTE]
-> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see the [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
+Smart agriculture, also known as precision agriculture, allows growers to maximize yields by using minimal resources such as water, fertilizer, and seeds. By deploying sensors, growers and research organizations can begin to understand crops at a micro scale, conserve resources, reduce impact on the environment, and maximize crop yield. Sensors enable important ground-truth data (such as soil moisture, rainfall, and wind speed). This data, in turn, improves the accuracy of recommendations.
Sensors are of various types:
-* Location-sensor (determines lat/long & altitude)
-* Electrochemical sensor (determines pH, soil nutrients)
-* Soil moisture sensor
-* Airflow sensor (determines the pressure required to push a pre-determined amount of air into the ground at a prescribed depth)
-* Weather sensor
-There's a large ecosystem of sensor providers that help growers to monitor and optimize crop performance. Sensor based data also enables an understanding of the changing environmental factors.
+* Location sensors, which determine latitude, longitude, and altitude
+* Electrochemical sensors, which determine pH and soil nutrients
+* Soil moisture sensors
+* Airflow sensors, which determine the pressure required to push a predetermined amount of air into the ground at a prescribed depth
+* Weather sensors
+
+There's a large ecosystem of sensor providers that help growers to monitor and optimize crop performance. Sensor-based data also enables an understanding of the changing environmental factors.
+ ## How sensors work
-Sensors are placed in the field based on its characteristics. Sensors record measurements and transfer the data to the connected node. Each node has one or more sensors connected to it. Nodes equipped with internet connectivity can directly push the data to cloud. Other nodes use an IOT agent to transfer data the gateway.
+Sensors are placed in a field based on its characteristics. Sensors record measurements and transfer the data to the connected node. Each node has one or more sensors connected to it. Nodes equipped with internet connectivity can push data directly to the cloud. Other nodes use an Internet of Things (IoT) agent to transfer data to the gateway.
-Gateways collect all essential data from the nodes and push it securely to the cloud via either cellular connectivity, Wi-Fi, or Ethernet. Once the data resides in a sensor partner cloud, the sensor partner pushes the relevant sensors data to the dedicated IOTHub endpoint provided by Data Manager for Agriculture.
+Gateways collect all essential data from the nodes and push it securely to the cloud via cellular connectivity, Wi-Fi, or Ethernet. After the data resides in a sensor partner's cloud, the sensor partner pushes the relevant sensor data to the dedicated Azure IoT Hub endpoint that Azure Data Manager for Agriculture provides.
-In addition to the above approach, IOT devices (sensors/nodes/gateway) can directly push the data to IOTHub endpoint. In both cases, the data first reaches the IOTHub, post that the next set of processing happens.
+In addition to the preceding approach, IoT devices (sensors, nodes, and gateway) can push the data directly to the IoT Hub endpoint. In both cases, the data first reaches IoT Hub, where the next set of processing happens.
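As a rough sketch of the device-to-cloud path, the following Python example uses the `azure-iot-device` SDK to push a single telemetry message from a node. The connection string and payload fields are placeholders; the real target endpoint and message schema come from the sensor partner integration.

```python
import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string for a device (node) registered with the IoT hub
# endpoint that the sensor partner integration points to.
CONNECTION_STRING = "HostName=<iot-hub-name>.azure-devices.net;DeviceId=<node-id>;SharedAccessKey=<key>"

# Illustrative payload; the actual schema is defined by the sensor partner.
payload = {
    "sensorId": "soil-moisture-01",
    "timestamp": "2024-03-01T06:00:00Z",
    "soilMoisture": 0.27,
    "latitude": 45.12,
    "longitude": -121.45,
}

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
client.send_message(Message(json.dumps(payload)))   # one device-to-cloud telemetry message
client.shutdown()
```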
## Sensor topology
-The following diagram depicts the topology of a sensor in Azure Data Manager for Agriculture. Each geometry under a party has a set of devices placed within it. A device can be either be a node or a gateway and each device has a set of sensors associated with it. Sensors send the recordings via gateway to the cloud. Sensors are tagged with GPS coordinates helping in creating a geospatial time series for all measured data.
+The following diagram depicts the topology of a sensor in Azure Data Manager for Agriculture. Each geometry under a party has a set of devices placed within it. A device can be either a node or a gateway, and each device has a set of sensors associated with it. Sensors send the recordings via gateway to the cloud. Sensors are tagged with GPS coordinates to help in creating a geospatial time series for all measured data.
## Next steps
-How to [get started when you push and consume sensor data](./how-to-set-up-sensor-as-customer-and-partner.md).
-
-How to [get started as a customer](./how-to-set-up-sensors-customer.md) to consume sensor data from a supported sensor partner like Davis Instruments.
-
-How to [get started as a sensor partner](./how-to-set-up-sensors-partner.md) to push sensor data into Data Manager for Agriculture Service.
+* Learn how to [get started with pushing and consuming sensor data](./how-to-set-up-sensor-as-customer-and-partner.md).
+* Learn how to [get started as a customer](./how-to-set-up-sensors-customer.md) to consume sensor data from a supported sensor partner like Davis Instruments.
+* Learn how to [get started as a sensor partner](./how-to-set-up-sensors-partner.md) to push sensor data into Azure Data Manager for Agriculture.
data-manager-for-agri Concepts Ingest Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-weather-data.md
Title: Ingesting weather forecast data in Azure Data Manager for Agriculture
-description: Learn how to fetch weather data from various weather data providers through extensions and provider Agnostic APIs.
+ Title: Ingest weather forecast data in Azure Data Manager for Agriculture
+description: Learn how to fetch weather data from various weather data providers through extensions and provider-agnostic APIs.
Last updated 02/14/2023
-# Weather data overview
+# Ingest weather forecast data in Azure Data Manager for Agriculture
-Weather is a highly democratized service in the agriculture industry. Data Manager for Agriculture offers customers the ability to work with the weather provider of their choice.
+Weather is a highly democratized service in the agriculture industry. Azure Data Manager for Agriculture offers customers the ability to work with the weather provider of their choice.
-Data Manager for Agriculture provides weather current and forecast data through an extension-based and provider agnostic approach. Customers can work with a provider of their choice by following the steps [here](./how-to-write-weather-extension.md).
+Azure Data Manager for Agriculture provides current and forecast weather data through an extension-based and provider-agnostic approach. You can work with a provider of your choice by following the [steps for writing a weather extension](./how-to-write-weather-extension.md).
## Design overview
-Data Manager for Agriculture provides weather data through provider agnostic approach where the user doesn't have to be familiar with the provider's APIs. Instead, they can use the same Data Manager for Agriculture APIs irrespective of the provider.
+Because Azure Data Manager for Agriculture provides weather data through a provider-agnostic approach, you don't have to be familiar with a provider's APIs. Instead, you can use the same Azure Data Manager for Agriculture APIs irrespective of the provider.
-## Behavior of provider agnostic APIs
+Here are some notes about the behavior of provider-agnostic APIs:
-* Request weather data for up to 50 locations in a single call.
-* Forecast data provided isn't older than 15 mins and the current conditions data isn't older than 10 mins.
-* Once the initial call is made for a location, the data gets cached for the TTL defined.
-* To keep the cache warm, you can use the parameter called `apiFreshnessTimeInMinutes` in extension. The platform will keep a job running for the amount of time defined and update the cache. The default value is be zero that means the cache won't be kept warm by default
+* You can request weather data for up to 50 locations in a single call.
+* Forecast data isn't older than 15 minutes. Data for current conditions isn't older than 10 minutes.
+* After the initial call is made for a location, the data is cached for the defined time to live (TTL).
+* To keep the cache warm, you can use the `apiFreshnessTimeInMinutes` parameter in the weather extension. The platform keeps a job running for the defined amount of time and updates the cache. The default value is zero, which means the cache isn't kept warm by default.
-The steps to fetch weather data and ingest into Data Manager for Agriculture platform.
+The following sections provide the commands to fetch weather data and ingest it into Azure Data Manager for Agriculture.
-## Step 1: Install weather extension
+## Step 1: Install the weather extension
-Run the install command through Azure Resource Manager ARM Client tool. The command to install the extension is given here:
+To install the extension, run the following command by using the Azure Resource Manager ARMClient tool.
+
+Replace all values within angle brackets (`<>`) with your respective environment values. The extension ID that's currently supported is `IBM.TWC`.
-### Install command
```azurepowershell-interactive
armclient PUT /subscriptions/<subscriptionid>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<farmbeats-resource-name>/extensions/<extensionid>?api-version=2020-05-12-preview '{}'
```
-> [!NOTE]
-> All values within < > is to be replaced with your respective environment values. The extension ID supported today is 'IBM.TWC'
->
-### Sample output
+Here's sample output for the installation command:
+ ```json { "id": "/subscriptions/<subscriptionid>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<farmbeats-resource-name>/extensions/<extensionid>",
armclient PUT /subscriptions/<subscriptionid>/resourceGroups/<resource-group-nam
} ```
-You can ingest weather date after completing the extension installation.
+After you finish installing the extension, you can ingest weather data.
-If you would like to update the `apiFreshnessTimeInMinutes` update the extension using below PowerShell command
+If you want to update `apiFreshnessTimeInMinutes`, update the extension by using the following PowerShell command. Replace all values within angle brackets with your respective environment values.
-### Update command
```azurepowershell-interactive
armclient put /subscriptions/<subscriptionid>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<farmbeats-resource-name>/<extensionid>?api-version=2021-09-01-preview '{"additionalApiProperties": {""15-day-daily-forecast"": {"apiFreshnessTimeInMinutes": <time>}, ""currents-on-demand"": {"apiFreshnessTimeInMinutes": <time>},""15-day-hourly-forecast"":{"apiFreshnessTimeInMinutes": <time>}}}'
```
-> [!NOTE]
-> All values within < > is to be replaced with your respective environment values.
-> The above update command does merge patch operation which means it updates Freshness Time only for the API mentioned in the command and retains the Freshness Time values for other APIs as they were before.
+The preceding update command performs a merge patch operation: it updates the freshness time for only the API mentioned in the command and retains the freshness time values for the other APIs as they were before.
+
+Here's sample output for the update command:
-### Sample output
```json { "id": "/subscriptions/<subscriptionid>/resourceGroups/<resource-group-name>/providers/Microsoft.AgFoodPlatform/farmBeats/<farmbeats-resource-name>/extensions/<extensionid>",
armclient put /subscriptions/<subscriptionid>/resourceGroups/<resource-group-nam
## Step 2: Fetch weather data
-Once the credentials required to access the APIs is obtained, you need to call the fetch weather data API [here](/rest/api/data-manager-for-agri/dataplane-version2022-11-01-preview/weather-data) to fetch weather data.
+After you get the credentials that are required to access the APIs, you need to call the [Weather Data API](/rest/api/data-manager-for-agri/dataplane-version2022-11-01-preview/weather-data) to fetch weather data.
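Here's a minimal sketch of such a call with Python, assuming an illustrative endpoint path, request fields, and API version; the Weather Data API reference defines the exact parameter names for the extension, locations, and units.

```python
import requests

BASE_URL = "https://<your-adma-instance>.farmbeats.azure.net"   # placeholder instance URL
API_VERSION = "2022-11-01-preview"
HEADERS = {"Authorization": "Bearer <access-token>", "Content-Type": "application/json"}

# Illustrative request body: up to 50 locations can be sent in a single call.
body = {
    "extensionId": "IBM.TWC",
    "extensionApiName": "15-day-daily-forecast",
    "units": "metric",
    "locations": [
        {"type": "LatLong", "value": "45.12,-121.45"},
        {"type": "LatLong", "value": "44.90,-120.98"},
    ],
}

# Illustrative endpoint path; see the Weather Data API reference for the exact route.
resp = requests.post(f"{BASE_URL}/weather-data/:fetch?api-version={API_VERSION}", json=body, headers=HEADERS)
resp.raise_for_status()

for location in resp.json().get("locations", []):
    print(location)
```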
data-manager-for-agri Concepts Isv Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-isv-solutions.md
Title: ISV solution framework in Azure Data Manager for Agriculture
-description: Provides information on using solutions from ISVs
+description: Learn about solutions that ISVs build on top of Azure Data Manager for Agriculture.
Last updated 02/14/2023
-# What is our Solution Framework?
+# ISV solution framework in Azure Data Manager for Agriculture
-In this article, you learn how Azure Data Manager for Agriculture provides a framework for customer to use solutions built by Bayer and other ISV Partners.
+In this article, you learn how Azure Data Manager for Agriculture provides a framework for customers to use solutions built by Bayer and other independent software vendor (ISV) partners.
[!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)]

## Overview
-The agriculture industry is going through a significant technology transformation where technology is playing a key role towards building sustainable agriculture. With the increase in adoption of technology like drones, satellite imagery, IOT devices – there are large volumes of data generated from these source systems and stored in cloud. Today, companies are looking at ways to efficiently manage this data and derive actionable insights that are provided to the user timely and help achieve more with less. Data Manager for Agriculture provides core technology platform that hides all the technical complexity and helps customers focus on their building their core business logic and drive business value.
+The agriculture industry is going through a significant technology transformation. Technology is playing a key role in building sustainable agriculture.
-The solution framework is built on top of Data Manager for Agriculture that provides extensibility capabilities. It enables our Independent Software Vendor (ISV) partners to apply their deep domain knowledge and develop specialized domain specific industry solutions to top of the core platform. The solution framework provides below capabilities:
+The adoption of technology like drones, satellite imagery, and Internet of Things (IoT) devices has increased. These source systems generate large volumes of data that's stored in the cloud. Companies want to efficiently manage this data and derive actionable insights that they can use to achieve more with less.
+Azure Data Manager for Agriculture provides a core technology platform that hides all the technical complexity and helps customers focus on building their core business logic and drive business value.
-* Enables ISV Partners to easily build industry specific solutions to top of Data Manager for Agriculture.
-* Helps ISVs generate revenue by monetizing their solution and publishing it on the Azure Marketplace
* Provides simplified onboarding experience for ISV Partners and customers.
-* Asynchronous Application Programming Interface (API) based integration
-* Data privacy complaint ensuring the right level of access to customers and ISV Partners.
-* Hides all the technical complexity of the platform and allows ISVs and customers to focus on the core business logic
+The solution framework is built on top of Azure Data Manager for Agriculture to provide extensibility.
++
+The solution framework:
+
+* Enables ISV partners to apply their deep domain knowledge and build industry-specific solutions on top of Azure Data Manager for Agriculture.
+* Helps ISV partners generate revenue by monetizing their solutions and publishing them on Azure Marketplace.
+* Provides a simplified onboarding experience for ISV partners and customers.
+* Offers integration that's based on asynchronous APIs.
+* Complies with data privacy standards to help ensure that ISV partners and customers have the right level of access.
## Use cases
- Following are some of the examples of use cases on how an ISV partner could use the solution framework to build an industry specific solution.
+Here are a few examples of how an ISV partner could use the solution framework to build an industry-specific solution:
-* Yield Prediction Model: An ISV partner can build a yield model using historical data for a specific geometry and track periodic progress. The ISV can then enable forecast of estimated yield for the upcoming season.
-* Carbon Emission Model: An ISV partner can estimate the amount of carbon emitted from the field based upon the imagery, sensors data for a particular farm.
-* Crop Identification: Use imagery data to identify crop growing in an area of interest.
+* **Yield prediction model**: Build a yield model by using historical data for a specific geometry, forecast estimated crop yield for the upcoming season, and track progress.
+* **Carbon emission model**: Estimate the amount of carbon emitted from a field based on imagery and sensor data for a particular farm.
+* **Crop identification**: Use imagery data to identify crops growing in an area of interest.
-The above list has only a few examples but an ISV partner can come with their own specific scenario and build a solution.
+An ISV partner can come up with its own specific scenario and build a solution.
## Bayer AgPowered Services
-Additionally, Bayer has built the below Solutions in partnership with Microsoft and can be installed on top of customer's ADMA instance.
+Bayer built the following solutions in partnership with Microsoft. A customer can install them on top of an Azure Data Manager for Agriculture instance.
+ * Growing Degree Days * Crop Water Usage Maps * Biomass Variability
-To install the above Solutions, please refer to [this](./how-to-set-up-isv-solution.md) article.
+To install the preceding solutions, see the [article about working with ISV solutions](./how-to-set-up-isv-solution.md).
## Next steps
-* Test our APIs [here](/rest/api/data-manager-for-agri).
+* [Test the Azure Data Manager for Agriculture REST APIs](/rest/api/data-manager-for-agri)
data-manager-for-agri Concepts Llm Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-llm-apis.md
Title: Using generative AI in Data Manager for Agriculture
-description: Provides information on using generative AI feature in Azure Data Manager for Agriculture
+ Title: Generative AI in Azure Data Manager for Agriculture
+description: Learn how to use generative AI features in Azure Data Manager for Agriculture.
Last updated 3/19/2024
-# About Generative AI and Data Manager for Agriculture
+# Generative AI in Azure Data Manager for Agriculture
-The copilot templates for agriculture enable seamless retrieval of data stored in Data Manager for Agriculture so that farming-related context and insights can be queried in conversational context. These capabilities enable customers and partners to build their own agriculture copilots. Customers and partners can deliver insights to users around disease, yield, harvest windows and more, using actual planning, and observational data. While Data Manager for Agriculture isn't required to operationalize copilot templates for agriculture, the Data Manager enables customers to more easily integrate generative AI scenarios for their users.
+The copilot templates for agriculture enable seamless retrieval of data stored in Azure Data Manager for Agriculture so that farming-related context and insights can be queried in a conversational context. These capabilities enable customers and partners to build their own agriculture copilots.
-Many customers have proprietary data outside of our data manager, for example Agronomy PDFs, market price data etc. These customers can benefit from our orchestration framework that allows for plugins, embedded data structures, and sub processes to be selected as part of the query flow.
+Customers and partners can deliver insights to users around disease, yield, harvest windows, and more, by using actual planning and observational data. Although Azure Data Manager for Agriculture isn't required to operationalize copilot templates for agriculture, it enables customers to more easily integrate generative AI scenarios for their users.
-Customers with farm operations data in our data manager can use our plugins that enable seamless selection of APIs mapped to farm operations today. In the time to come we'll add the capability to select APIs mapped to soil sensors, weather, and imagery type of data. Our data manager focused plugin allows for a combination of results, calculation of area, ranking, summarizing to help serve customer prompts.
+Many customers have proprietary data outside Azure Data Manager for Agriculture; for example, agronomy PDFs or market price data. These customers can benefit from an orchestration framework that allows for plugins, embedded data structures, and subprocesses to be selected as part of the query flow.
-Our copilot templates for agriculture make generative AI in agriculture a reality.
+Customers who have farm operations data in Azure Data Manager for Agriculture can use plugins that enable seamless selection of APIs mapped to farm operations. These plugins allow for a combination of results, calculation of area, ranking, and summarizing to help serve customer prompts.
+
+The copilot templates for agriculture make generative AI in agriculture a reality.
> [!NOTE]
->Azure might include preview, beta, or other pre-release features, services, software, or regions offered by Microsoft for optional evaluation ("Previews"). Previews are licensed to you as part of [**your agreement**](https://azure.microsoft.com/support) governing use of Azure, and subject to terms applicable to "Previews".
->
->The Azure Data Manager for Agriculture (Preview) and related Microsoft Generative AI Services Previews of Azure Data Manager for Agriculture are subject to additional terms set forth at [**Preview Terms Of Use | Microsoft Azure**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/)
+> Azure might include preview, beta, or other prerelease features, services, software, or regions offered by Microsoft for optional evaluation. Previews are licensed to you as part of [your agreement](https://azure.microsoft.com/support) governing the use of Azure, and are subject to terms applicable to previews.
>
->These Previews are made available to you pursuant to these additional terms, which supplement your agreement governing your use of Azure. If you do not agree to these terms, do not use the Preview(s).
+> The preview of Azure Data Manager for Agriculture and related Microsoft generative AI services are subject to [additional terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). These additional terms supplement your agreement governing your use of Azure. If you don't agree to these terms, don't use the previews.
## Prerequisites
+
- An instance of [Azure Data Manager for Agriculture](quickstart-install-data-manager-for-agriculture.md)
- An instance of [Azure OpenAI](../ai-services/openai/how-to/create-resource.md) created in your Azure subscription
- You need [Azure Key Vault](../key-vault/general/quick-create-portal.md)
- You need [Azure Container Registry](../container-registry/container-registry-get-started-portal.md)
+- An instance of [Azure OpenAI Service](../ai-services/openai/how-to/create-resource.md) created in your Azure subscription
+- [Azure Key Vault](../key-vault/general/quick-create-portal.md)
+- [Azure Container Registry](../container-registry/container-registry-get-started-portal.md)
-> [!TIP]
->To get started with testing our Azure Data Manager for Agriculture LLM Plugin APIs please fill in this onboarding [**form**](https://forms.office.com/r/W4X381q2rd). In case you need help then reach out to us at madma@microsoft.com.
+## High-level architecture
-## High level architecture
-The customer has full control as key component deployment is within the customer tenant. Our feature is available to customers via a docker container, which needs to be deployed to the customers Azure App Service.
+You have full control because deployment of key components is within your tenant. The copilot templates for agriculture are available via a Docker container, which is deployed to your Azure App Service instance.
-We recommend that you apply content and safety filters on your Azure OpenAI instance. Taking this step ensures that the generative AI capability is aligned with guidelines from Microsoft's Office of Responsible AI. Follow instructions on how to use content filters with Azure OpenAI service at this [link](../ai-services/openai/how-to/content-filters.md) to get started.
+We recommend that you apply content and safety filters on your Azure OpenAI instance. Taking this step helps ensure that the generative AI capability is aligned with guidelines from Microsoft's Office of Responsible AI. To get started, follow the [instructions on how to use content filters with Azure OpenAI](../ai-services/openai/how-to/content-filters.md).
-## Current farm operations related uses cases
+## Use cases for farm operations
-We support seamless selection of APIs mapped to farm operations today. This enables use cases that are based on tillage, planting, applications, and harvesting type of farm operations. Here's a sample list of queries that you can test and use:
+Azure Data Manager for Agriculture supports seamless selection of APIs mapped to farm operations. This support enables use cases that are based on tillage, planting, applications, and harvesting types of farm operations. Here's a sample list of queries that you can test and use:
-* Show me active fields
-* What crop was planted in my field (use field name)
-* Tell me the application details for my field (use field name)
-* Give me a list of all fields with planting dates
-* Give me a list of all fields with application dates
-* What is the delta between planted and harvested fields
-* Which farms were harvested
-* What is the area of harvested fields
-* Convert area to acres/hectares
-* What is the average yield for my field (use field name) with crop (use crop name)
-* What is the effect of planting dates on yield for crop (use crop name)
+- Show me active fields
+- What crop was planted in my field (use field name)
+- Tell me the application details for my field (use field name)
+- Give me a list of all fields with planting dates
+- Give me a list of all fields with application dates
+- What is the delta between planted and harvested fields
+- Which farms were harvested
+- What is the area of harvested fields
+- Convert area to acres/hectares
+- What is the average yield for my field (use field name) with crop (use crop name)
+- What is the effect of planting dates on yield for crop (use crop name)
-These use cases help input providers to plan equipment, seeds, applications, and related services and engage better with the farmer.
+These use cases can help input providers to plan equipment, seeds, applications, and related services and engage better with the farmer.
## Next steps
-* Fill this onboarding [**form**](https://forms.office.com/r/W4X381q2rd) to get started with testing our copilot templates feature.
-* View our Azure Data Manager for Agriculture APIs [here](/rest/api/data-manager-for-agri).
+- Fill in [this onboarding form](https://forms.office.com/r/W4X381q2rd) to get started with testing the copilot templates feature.
+- Test the [Azure Data Manager for Agriculture REST APIs](/rest/api/data-manager-for-agri).
data-manager-for-agri Concepts Understanding Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-understanding-throttling.md
Title: APIs throttling guidance for customers using Azure Data Manager for Agriculture
-description: Provides information on APIs throttling limits to plan usage.
--
+ Title: API throttling guidance for Azure Data Manager for Agriculture
+description: This article provides information on API throttling limits to plan usage in Azure Data Manager for Agriculture.
++ Last updated 11/15/2023
-# APIs throttling guidance for Azure Data Manager for Agriculture
-The REST APIs throttling in Azure Data Manager for Agriculture allows more consistent performance within a time span for customers calling our service APIs.
-- Throttling limits, the number of requests to our service in a time span to prevent overuse of resources.
- Azure Data Manager for Agriculture is designed to handle a high volume of requests, if an overwhelming number of requests occur by few customers, throttling helps maintain optimal performance and reliability for all customers.
- Throttling limits are contingent on selected version and the specific capabilities of the product being used. Now, we support two distinct versions: **Standard** (recommended) and **Basic** (suitable for prototyping requirements). These limits operate within three different time windows (per 1 minute, per 5 minutes, and per one month) to safeguard against sudden surges in traffic.
+# API throttling guidance for Azure Data Manager for Agriculture
-This article shows you how to track the number of requests that remain before reaching the limit, and how to respond when you reach the limit. These [APIs](/rest/api/data-manager-for-agri/#data-plane-rest-apis), falling under the purview of the throttling limits.
-
-## Classification of APIs
-We categorize all our APIs into three main parts for better understanding:
-- **Write operations** - Comprising APIs utilizing REST API methods like `PATCH`, `POST`, and `DELETE` for altering data.
- **Read operations** - Encompassing APIs that use REST API method type `GET` to retrieve data including search APIs of method type `POST`.
- **Long running job operations** - Involving Long running asynchronous job APIs using the REST API method type `PUT`.
+Throttling limits the number of requests to a service in a time span to prevent the overuse of resources. The throttling of the REST API in Azure Data Manager for Agriculture allows more consistent performance within a time span for customers who call the service's APIs.
-The overall available quota units as explained in the following table, are shared among these categories. For instance, using up the entire quota on write operations means no remaining quota for other operations. Each operation consumes a specific unit of quota, detailed in the table, helping tracking the remaining quota for further use.
+Azure Data Manager for Agriculture can handle a high volume of requests. If an overwhelming number of requests occur from a few customers, throttling helps maintain optimal performance and reliability for all customers.
-Operation | Units cost for each request|
+Throttling limits are contingent on the selected version and the capabilities of the product that a customer is using. Azure Data Manager for Agriculture supports two distinct versions:
+
+- **Standard**: The version that we generally recommend.
+- **Basic**: Suitable for prototyping requirements.
+
+These limits operate within three time windows (per one minute, per five minutes, and per one month) to safeguard against sudden surges in traffic.
+
+This article shows you how to track the number of requests that remain before you reach the limit, and how to respond when you reach the limit. Throttling limits apply to [these APIs](/rest/api/data-manager-for-agri/#data-plane-rest-apis).
+
+## Classification of APIs
+
+Azure Data Manager for Agriculture APIs fall into three main categories:
+
+- **Write operations**: APIs that use REST API methods like `PATCH`, `POST`, and `DELETE` for altering data.
+- **Read operations**: APIs that use the REST API method type `GET` to retrieve data, including search APIs of the method type `POST`.
+- **Long-running job operations**: Long-running asynchronous job APIs that use the REST API method type `PUT`.
+
+The overall available quota units, as explained in the following table, are shared among these categories. For instance, using the entire quota on write operations means no remaining quota for other operations. Each operation consumes a specific unit of quota, which helps you track the remaining quota for further use.
+
+Operation | Unit cost for each request|
-| -- |
Write | 5 |
Read| 1 <sup>1</sup>|
-Long running job [Solution inference](/rest/api/data-manager-for-agri/#solution-and-model-inferences) | 5 |
-Long running job [Farm operation](/rest/api/data-manager-for-agri/#farm-operation-job) | 5 |
-Long running job [Image rasterize](/rest/api/data-manager-for-agri/#image-rasterize-job) | 2 |
-Long running job (Cascade delete of an entity) | 2 |
-Long running job [Weather ingestion](/rest/api/data-manager-for-agri/#weather) | 1 |
-Long running job [Satellite ingestion](/rest/api/data-manager-for-agri/#satellite-data-ingestion-job) | 1 |
-
-<sup>1</sup>An extra unit cost is taken into account for each item returned in the response when more than one item is being retrieved.
-
-## Basic version API limits
-
-### Total available units per category
-Operation | Throttling time window | Units reset after each time window.|
+Long-running job: [solution inference](/rest/api/data-manager-for-agri/#solution-inferences) | 5 |
+Long-running job: [farm operation](/rest/api/data-manager-for-agri/#farm-operations) | 5 |
+Long-running job: image rasterization | 2 |
+Long-running job: cascading deletion of an entity | 2 |
+Long-running job: [weather ingestion](/rest/api/data-manager-for-agri/#weather) | 1 |
+Long-running job: [satellite ingestion](/rest/api/data-manager-for-agri/#satellite) | 1 |
+
+<sup>1</sup>An extra unit cost is taken into account for each item returned in the response when you're retrieving more than one item.
+
+## API limits for the Basic version
+
+The following table lists the total available units per category for the Basic version:
+
+Operation | Throttling time window | Units reset after each time window|
-| -- | |
-Write/Read| per 1 Minute | 25,000 |
-Write/Read| per 5 Minutes| 100,000|
-Write/Read| per one Month| 5,000,000 |
-Long running job| per 5 Minutes| 1000|
-Long running job| per one Month| 100,000 |
+Write/read| Per one minute | 25,000 |
+Write/read| Per five minutes| 100,000|
+Write/read| Per one month| 5,000,000 |
+Long-running job| Per five minutes| 1000|
+Long-running job| Per one month| 100,000 |
-## Standard version API limits
-Standard version offers a five times increase in API quota per month compared to the Basic version, while all other quota limits remain unchanged.
+## API limits for the Standard version
-### Total available units per category
-Operation | Throttling time window | Units reset after each time window.|
+The Standard version offers a fivefold increase in API quota per month, compared to the Basic version. All other quota limits remain unchanged.
+
+The following table lists the total available units per category for the Standard version:
+
+Operation | Throttling time window | Units reset after each time window|
-| -- | |
-Write/Read| per 1 Minute | 25,000 |
-Write/Read| per 5 Minutes| 100,000|
-Write/Read| per one Month| 25,000,000 <sup>1</sup>
-Long running job| per 5 Minutes| 1000|
-Long running job| per one Month| 500,000 <sup>2</sup>|
+Write/read| Per one minute | 25,000 |
+Write/read| Per five minutes| 100,000|
+Write/read| Per one month| 25,000,000 <sup>1</sup> |
+Long-running job| Per five minutes| 1000|
+Long-running job| Per one month| 500,000 <sup>1</sup>|
-<sup>1</sup>This limit is five times the Basic version limit.
+<sup>1</sup>This limit is five times the Basic version's limit.
-<sup>2</sup>This limit is five times the Basic version limit.
-
## Error code
-When you reach the limit, you receive the HTTP status code **429 Too many requests**. The response includes a **Retry-After** value, which specifies the number of seconds your application should wait (or sleep) before sending the next request. If you send a request before the retry value elapses, your request isn't processed and a new retry value is returned.
-After the specified time elapses, you can make requests again to the Azure Data Manager for Agriculture. Attempting to establish a TCP connection or using different user authentication methods doesn't bypass these limits, as they're specific to each tenant.
-## Frequently asked questions (FAQs)
+When you reach the limit, you receive the HTTP status code **429 Too many requests**. The response includes a **Retry-After** value, which specifies the number of seconds your application should wait (or sleep) before it sends the next request.
+
+If you send a request before the retry value elapses, your request isn't processed and a new retry value is returned. After the specified time elapses, you can make requests again to Azure Data Manager for Agriculture. Trying to establish a TCP connection or using different user authentication methods doesn't bypass these limits, because they're specific to each tenant.
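
The following sketch shows one way a client might honor the **Retry-After** value. It isn't part of any official SDK or sample; the endpoint host, path, API version, and access token are placeholder assumptions, and the `requests` library is used only as a generic HTTP client.

```python
# Minimal retry sketch (illustrative only): repeat a request when the service answers
# 429 Too many requests, sleeping for the number of seconds given in Retry-After.
import time
import requests

def call_with_retry(url, access_token, max_attempts=5):
    headers = {"Authorization": f"Bearer {access_token}"}
    response = None
    for _ in range(max_attempts):
        response = requests.get(url, headers=headers)
        if response.status_code != 429:
            return response
        # The service specifies how long to wait before the next attempt.
        wait_seconds = int(response.headers.get("Retry-After", "1"))
        time.sleep(wait_seconds)
    return response

# Hypothetical usage; replace the host, path, API version, and token with your own values.
# response = call_with_retry(
#     "https://<your-resource>.farmbeats.azure.net/parties?api-version=<api-version>",
#     access_token="<access-token>",
# )
```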
+
+## Frequently asked questions
+
+### If I exhaust the allocated API quota entirely for write operations within a per-minute time window, can I successfully make requests for read operations within the same time window?
+
+The quota limits are shared among the listed operation categories. Using the entire quota for write operations implies no remaining quota for other operations. This article details the specific quota units consumed for each operation.
-### 1. If I exhaust the allocated API quota entirely for write operations within a per-minute time window, can I successfully make requests for read operations within the same time window?
-The quota limits are shared among the listed operation categories. Using the entire quota for write operations implies no remaining quota for other operations. The specific quota units consumed for each operation are detailed in this article.
+### How can I calculate the total number of successful requests allowed for a particular time window?
-### 2. How can I calculate the total number of successful requests allowed for a particular time window?
-The total allowed number of successful API requests depends on the specific version provisioned and the time window in which requests are made. For instance, with the Standard version, you can make 25,000 (Units reset after each time window) / 5 (Units cost for each request) = 5,000 write operation APIs within a 1-minute time window. Or combination of 4000 write operations & 5000 read operations which results in total 4000 * 5 + 5000 * 1 = 25000 total units consumption. Similarly, for the Basic version, you can perform 5,000,000 (Units reset after each time window) / 1 (Units cost for each request) = 5,000,000 read operation APIs within a one month time window.
+The total allowed number of successful API requests depends on the version that you provisioned and the time window in which you make requests.
+
+For instance, with the Standard version, you can make 25,000 (units reset after each time window) / 5 (unit cost for each request) = 5,000 write operation APIs within a one-minute time window. Or you can use a combination of 4,000 write operations and 5,000 read operations, which results in 4,000 * 5 + 5,000 * 1 = 25,000 total units of consumption.
+
+Similarly, for the Basic version, you can perform 5,000,000 (units reset after each time window) / 1 (unit cost for each request) = 5,000,000 read operation APIs within a one-month time window.
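
As a small illustration of this arithmetic (it uses only the unit costs listed earlier in this article and isn't an official formula), you can compute the units that a mixed workload consumes in one time window:

```python
# Unit costs from the table in this article: write = 5 units, read = 1 unit.
UNIT_COST = {"write": 5, "read": 1}

def units_consumed(request_counts):
    """Total quota units used by a mix of requests, for example {"write": 4000, "read": 5000}."""
    return sum(UNIT_COST[operation] * count for operation, count in request_counts.items())

# 4,000 writes and 5,000 reads consume 4,000 * 5 + 5,000 * 1 = 25,000 units,
# which exhausts a 25,000-unit per-minute window.
print(units_consumed({"write": 4000, "read": 5000}))  # 25000
```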
+
+### What is the maximum number of sensor events that a customer can ingest?
+
+The system allows a maximum of 100,000 event ingestions per hour. Although new events are continually accepted, there might be a delay in processing. The delay might mean that these events aren't immediately available for real-time egress scenarios alongside the ingestion.
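
If you want to stay under that hourly cap on the client side, a simple pacing loop is one option. The sketch below is purely illustrative; `send_batch` is a hypothetical function, not part of the sensor ingestion API.

```python
# Illustrative client-side pacing: spread sensor event submissions so the total
# stays under an assumed cap of 100,000 events per hour.
import time

MAX_EVENTS_PER_HOUR = 100_000
SECONDS_PER_EVENT = 3600 / MAX_EVENTS_PER_HOUR  # 0.036 seconds of budget per event

def paced_send(batches, send_batch):
    """Send each batch, then sleep long enough to keep the hourly rate under the cap."""
    for batch in batches:
        send_batch(batch)  # hypothetical function that posts one batch of events
        time.sleep(len(batch) * SECONDS_PER_EVENT)
```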
-### 3. How many sensor events can a customer ingest as the maximum number?
-The system allows a maximum limit of 100,000 event ingestions per hour. While new events are continually accepted, there might be a delay in processing, resulting in these events not being immediately available for real-time egress scenarios alongside the ingestion.
-
## Next steps
-* See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md).
-* Understand our APIs [here](/rest/api/data-manager-for-agri).
-* Also look at common API [response headers](/rest/api/data-manager-for-agri/common-rest-response-headers).
+
+- [Learn about the hierarchy model and how to create and organize your agriculture data](./concepts-hierarchy-model.md)
+- [Test the Azure Data Manager for Agriculture REST APIs](/rest/api/data-manager-for-agri)
+- [Learn about common API response headers](/rest/api/data-manager-for-agri/common-rest-response-headers)
data-manager-for-agri Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md
Title: Release notes for Microsoft Azure Data Manager for Agriculture Preview
+ Title: Release notes for Microsoft Azure Data Manager for Agriculture Preview
description: This article provides release notes for Azure Data Manager for Agriculture Preview releases, improvements, bug fixes, and known issues.
Last updated 11/16/2023
-# Release Notes for Azure Data Manager for Agriculture Preview
+# Release notes for Azure Data Manager for Agriculture Preview
-Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To stay up to date with the most recent developments, this article provides you with information about:
+Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To keep you informed about recent developments, this article provides information about:
- The latest releases - Known issues
Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st
- Deprecated functionality - Plans for changes
- We provide information on latest releases, bug fixes, & deprecated functionality for Azure Data Manager for Agriculture Preview monthly.
- [!INCLUDE [public-preview-notice.md](includes/public-preview-notice.md)] ## March 2024
-### Copilot Templates for Agriculture
-Our copilot templates for agriculture enable seamless retrieval of data stored in our data manager and customers own data. Many customers have proprietary data outside of our data manager, for example Agronomy PDFs, market price data etc. Such customers can benefit from our orchestration framework that allows for plugins, embedded data structures, and sub processes to be selected as part of the query flow. While Data Manager for Agriculture isn't required to operationalize copilot templates for agriculture, the data manager enables customers to more easily integrate generative AI scenarios for their users. Learn more about this [here](concepts-llm-apis.md).
+### Copilot templates for agriculture
+
+Copilot templates for agriculture enable seamless retrieval of data stored in Azure Data Manager for Agriculture and customers' own data. Many customers have proprietary data outside Azure Data Manager for Agriculture; for example, agronomy PDFs or market price data. Such customers can benefit from an orchestration framework that allows for plugins, embedded data structures, and subprocesses to be selected as part of the query flow.
+
+Although Azure Data Manager for Agriculture isn't required to operationalize copilot templates for agriculture, it enables customers to more easily integrate generative AI scenarios for their users. Learn more in [Generative AI in Azure Data Manager for Agriculture](concepts-llm-apis.md).
## November 2023
-### LLM capability
-Our LLM capability enables seamless selection of APIs mapped to farm operations today. This enables use cases that are based on tillage, planting, applications, and harvesting type of farm operations. In the time to come we'll add the capability to select APIs mapped to soil sensors, weather, and imagery type of data. The skills in our LLM capability allow for a combination of results, calculation of area, ranking, summarizing to help serve customer prompts. These capabilities enable others to build their own agriculture copilots that deliver insights to farmers. Learn more about this [here](concepts-llm-apis.md).
+### Generative AI capability
+
+The generative AI capability in Azure Data Manager for Agriculture enables seamless selection of APIs mapped to farm operations. This support enables use cases that are based on tillage, planting, applications, and harvesting types of farm operations.
+
+Plugins in the generative AI capability allow for a combination of results, calculation of area, ranking, and summarizing to help serve customer prompts. These capabilities enable customers and partners to build their own agriculture copilots that deliver insights to farmers. Learn more in [Generative AI in Azure Data Manager for Agriculture](concepts-llm-apis.md).
### Imagery enhancements
-We improved our satellite ingestion service. The improvements include:
+
+We improved the satellite ingestion capability. The improvements include:
+ - Search caching. - Pixel source control to a single tile by specifying the item ID.-- Improved the reprojection method to more accurately reflect on the ground dimensions across the globe.-- Adapted nomenclature to better converge with standards.
+- Improvements to the reprojection method to more accurately reflect on-the-ground dimensions across the globe.
+- Nomenclature adaptations to better converge with standards.
-These improvements might require changes in how you consume services to ensure continuity. More details on the satellite service and these changes found [here](concepts-ingest-satellite-imagery.md).
+These improvements might require changes in how you consume services to ensure continuity. You can find more details on the satellite service and these changes in [Ingest satellite imagery in Azure Data Manager for Agriculture](concepts-ingest-satellite-imagery.md).
### Farm activity records
-Listing of activities by party ID and by activity ID is consolidated into a more powerful common search endpoint. Read more about [here](how-to-ingest-and-egress-farm-operations-data.md).
+
+Listing of activities by party ID and by activity ID is consolidated into a more powerful, common search endpoint. Read more in [Work with farm activities and activity data in Azure Data Manager for Agriculture](how-to-ingest-and-egress-farm-operations-data.md).
## October 2023
-### Azure portal experience enhancement
-We released a new user friendly experience to install ISV solutions that are available for Azure Data Manager for Agriculture users. You can now go to your Azure Data Manager for Agriculture instance on the Azure portal, view, and install available solutions in a seamless user experience. Today the ISV solutions available are from Bayer AgPowered services, you can see the marketplace listing [here](https://azuremarketplace.microsoft.com/marketplace/apps?search=bayer&page=1). You can learn more about installing ISV solutions [here](how-to-set-up-isv-solution.md).
+### Enhancement of the Azure portal experience
+
+We released a new user-friendly experience to install independent software vendor (ISV) solutions that are available for Azure Data Manager for Agriculture. You can now go to your Azure Data Manager for Agriculture instance on the Azure portal, view available solutions, and install them in a seamless experience.
+
+Currently, the available ISV solutions are from Bayer AgPowered Services. You can view the listing in [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?search=bayer&page=1). You can learn more about installing ISV solutions in [Work with an ISV solution](how-to-set-up-isv-solution.md).
## July 2023
-### Weather API update
-We deprecated the old weather APIs from API version 2023-07-01. The old weather APIs are replaced with new simple yet powerful provider agnostic weather APIs. Have a look at the API documentation [here](/rest/api/data-manager-for-agri/#weather).
+### Weather API update
+
+We deprecated the old weather APIs from API version 2023-07-01. We replaced them with simple yet powerful provider-agnostic weather APIs. See the [API documentation](/rest/api/data-manager-for-agri/#weather).
### New farm operations connector
-We added support for Climate FieldView as a built-in data source. You can now auto sync planting, application, and harvest activity files from FieldView accounts directly into Azure Data Manager for Agriculture. Learn more about this [here](concepts-farm-operations-data.md).
-### Common Data Model now with geo-spatial support
-We updated our data model to improve flexibility. The boundary object is deprecated in favor of a geometry property that is now supported in nearly all data objects. This change brings consistency to how space is handled across hierarchy, activity, and observation themes. It allows for more flexible integration when ingesting data from a provider with strict hierarchy requirements. You can now sync data that might not perfectly align with an existing hierarchy definition and resolve the conflicts with spatial overlap queries. Learn more [here](concepts-hierarchy-model.md).
+We added support for Climate FieldView as a built-in data source. You can now automatically sync planting, application, and harvest activity files from FieldView accounts directly into Azure Data Manager for Agriculture. Learn more about this capability in [Work with farm activities data in Azure Data Manager for Agriculture](concepts-farm-operations-data.md).
+
+### Geospatial support in the data model
+
+We updated our data model to improve flexibility. The boundary object is deprecated in favor of a geometry property that's now supported in nearly all data objects. This change brings consistency to how space is handled across hierarchy, activity, and observation themes. It allows for more flexible integration when you're ingesting data from a provider that has strict hierarchy requirements.
+
+You can now sync data that might not perfectly align with an existing hierarchy definition and resolve the conflicts with spatial overlap queries. Learn more in [Hierarchy model in Azure Data Manager for Agriculture](concepts-hierarchy-model.md).
## June 2023 ### Use your license keys via key vault
-Azure Data Manager for Agriculture supports a range of data ingress connectors. These connections require customer keys in a Bring Your Own License (BYOL) model. You can use your license keys safely by storing your secrets in the Azure Key Vault, enabling system identity and providing read access to our Data Manager. Details are available [here](concepts-byol-and-credentials.md).
+
+Azure Data Manager for Agriculture supports a range of data ingress connectors. These connections require customer keys in a bring your own license (BYOL) model. You can use your license keys safely by storing your secrets in a key vault, enabling system identity, and providing read access to Azure Data Manager for Agriculture. Details are available in [Store and use your own license keys in Azure Data Manager for Agriculture](concepts-byol-and-credentials.md).
### Sensor integration as both partner and customer
-Now you can start pushing data from your own sensors into Data Manager for Agriculture. It's useful in case your sensor provider doesn't want to take steps to onboard their sensors or if you don't have such support from your sensor provider. Details are available [here](how-to-set-up-sensor-as-customer-and-partner.md).
+
+You can start pushing data from your own sensors into Azure Data Manager for Agriculture. This capability is useful if your sensor provider doesn't want to take steps to onboard its sensors or if you don't have such support from your sensor provider. Details are available in [Sensor integration as both partner and customer in Azure Data Manager for Agriculture](how-to-set-up-sensor-as-customer-and-partner.md).
## May 2023
-### Understanding throttling
-Azure Data Manager for Agriculture implements API throttling to ensure consistent performance by limiting the number of requests within a specified time frame. Throttling prevents resource overuse and maintains optimal performance and reliability for all customers. Details are available [here](concepts-understanding-throttling.md).
+### API throttling
+
+Azure Data Manager for Agriculture implements API throttling to help ensure consistent performance by limiting the number of requests within a specified time frame. Throttling prevents resource overuse and maintains optimal performance and reliability for all customers. Details are available in [API throttling guidance for Azure Data Manager for Agriculture](concepts-understanding-throttling.md).
## April 2023 ### Audit logs
-In Azure Data Manager for Agriculture Preview, you can monitor how and when your resources are accessed, and by whom. You can also debug reasons for failure for data-plane requests. [Audit Logs](how-to-set-up-audit-logs.md) are now available for your use.
+
+In Azure Data Manager for Agriculture Preview, you can monitor how and when your resources are accessed, and by whom. You can also debug reasons for failure of data-plane requests. [Audit logs](how-to-set-up-audit-logs.md) are now available for your use.
### Private links
-You can connect to Azure Data Manager for Agriculture service from your virtual network via a private endpoint. You can then limit access to your Azure Data Manager for Agriculture Preview instance over these private IP addresses. [Private Links](how-to-set-up-private-links.md) are now available for your use.
+
+You can connect to Azure Data Manager for Agriculture Preview from your virtual network via a private endpoint. You can then limit access to your Azure Data Manager for Agriculture instance over these private IP addresses. [Private links](how-to-set-up-private-links.md) are now available for your use.
### BYOL for satellite imagery
-To support scalable ingestion of geometry-clipped imagery, we partnered with Sentinel Hub by Sinergise to provide a seamless bring your own license (BYOL) experience. Read more about our satellite connector [here](concepts-ingest-satellite-imagery.md).
+
+To support scalable ingestion of geometry-clipped imagery, we partnered with Sentinel Hub by Sinergise to provide a seamless BYOL experience. Read more about the satellite connector in [Ingest satellite imagery in Azure Data Manager for Agriculture](concepts-ingest-satellite-imagery.md).
## March 2023
-### Key Announcement: Preview Release
-Azure Data Manager for Agriculture is now available in preview. See our blog post [here](https://azure.microsoft.com/blog/announcing-microsoft-azure-data-manager-for-agriculture-accelerating-innovation-across-the-agriculture-value-chain/).
+### Key announcement: Preview release
+
+Azure Data Manager for Agriculture is now available in preview. See the [blog post](https://azure.microsoft.com/blog/announcing-microsoft-azure-data-manager-for-agriculture-accelerating-innovation-across-the-agriculture-value-chain/).
## Next steps
-* See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md).
-* Understand our APIs [here](/rest/api/data-manager-for-agri).
+
+- [Learn about the hierarchy model and how to create and organize your agriculture data](./concepts-hierarchy-model.md)
+- [Test the Azure Data Manager for Agriculture REST APIs](/rest/api/data-manager-for-agri)
deployment-environments Concept Azure Developer Cli With Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-azure-developer-cli-with-deployment-environments.md
At scale, using ADE and `azd` together enables you to provide a way for develope
The Azure Developer CLI commands are designed to work with standardized templates. Each template is a code repository that adheres to specific file and folder conventions. The templates contain the assets `azd` needs to provision an Azure Deployment Environment environment. When you run a command like `azd up`, the tool uses the template assets to execute various workflow steps, such as provisioning or deploying resources to Azure.
-The following is a typical `azd` template structure:
+The following is a typical template structure:
```txt ├── infra [ Contains infrastructure as code files ]
The following is a typical `azd` template structure:
All `azd` templates include the following assets: -- *infra folder* - Contains all of the Bicep or Terraform infrastructure as code files for the azd template. The infra folder is not used in `azd` with ADE. ADE provides the infrastructure as code files for the `azd` template. You don't need to include these files in your `azd` template.
+- *infra folder* - The infra folder is not used in `azd` with ADE. It contains all of the Bicep or Terraform infrastructure as code files for the azd template. ADE provides the infrastructure as code files for the `azd` template. You don't need to include these files in your `azd` template.
- *azure.yaml file* - A configuration file that defines one or more services in your project and maps them to Azure resources for deployment. For example, you might define an API service and a web front-end service, each with attributes that map them to different Azure resources for deployment.
When the dev center feature is enabled, the default behavior of some common azd
- [Add and configure an environment definition](./configure-environment-definition.md) - [Create an environment by using the Azure Developer CLI](./how-to-create-environment-with-azure-developer.md)-- [Make your project compatible with Azure Developer CLI](/azure/developer/azure-developer-cli/make-azd-compatible?pivots=azd-create)
+- [Make your project compatible with Azure Developer CLI](/azure/developer/azure-developer-cli/make-azd-compatible?pivots=azd-create)
deployment-environments Concept Environments Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environments-key-concepts.md
An environment is a collection of Azure resources on which your application is d
## Identities
-in Azure Deployment Environments, you use [managed identities](../active-directory/managed-identities-azure-resources/overview.md) to provide elevation-of-privilege capabilities. Identities can help you provide self-serve capabilities to your development teams without giving them access to the target subscriptions in which the Azure resources are created.
+In Azure Deployment Environments, you use [managed identities](../active-directory/managed-identities-azure-resources/overview.md) to provide elevation-of-privilege capabilities. Identities can help you provide self-serve capabilities to your development teams without giving them access to the target subscriptions in which the Azure resources are created.
The managed identity that's attached to the dev center needs to be granted appropriate access to connect to the catalogs. You should grant Contributor and User Access Administrator access to the target deployment subscriptions that are configured at the project level. The Azure Deployment Environments service uses the specific managed identity to perform the deployment on behalf of the developer.
Project environment types allow you to automatically apply the right set of poli
## Catalogs
-Catalogs help you provide a set of curated IaC templates for your development teams to create environments. Microsoft provides a [*quick start* catalog](https://github.com/microsoft/devcenter-catalog) that contains a set of sample environment defintions. You can attach the quick start catalog to a dev center to make these environment defintions available to all the projects associated with the dev center. You can modify the sample environment definitions to suit your needs.
+Catalogs help you provide a set of curated IaC templates for your development teams to create environments. Microsoft provides a [*quick start* catalog](https://github.com/microsoft/devcenter-catalog) that contains a set of sample environment definitions. You can attach the quick start catalog to a dev center to make these environment definitions available to all the projects associated with the dev center. You can modify the sample environment definitions to suit your needs.
Alternately, you can attach your own catalog. You can attach either a [GitHub repository](https://docs.github.com/repositories/creating-and-managing-repositories/about-repositories) or an [Azure DevOps Services repository](/azure/devops/repos/get-started/what-is-repos) as a catalog.
An environment definition is a combination of an IaC template and an environment
To learn about the structure of an ARM template, the sections of a template, and the properties that are available in those sections, see [Understand the structure and syntax of Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md).
+## Built-in roles
+
+Azure Deployment Environments supports three [built-in roles](../role-based-access-control/built-in-roles.md):
+
+- **Dev Center Project Admin**: Creates environments and manages the environment types for a project.
+- **Deployment Environments User**: Creates environments based on appropriate access.
+- **Deployment Environments Reader**: Reads environments that other users created.
+ ## Resources shared with Microsoft Dev Box Azure Deployment Environments and Microsoft Dev Box are complementary services that share certain architectural components. Dev centers and projects are common to both services, and they help organize resources in an enterprise. You can configure projects for Deployment Environments and projects for Dev Box resources in the same dev center.
deployment-environments Overview What Is Azure Deployment Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/overview-what-is-azure-deployment-environments.md
Title: What is Azure Deployment Environments?
-description: Enable developer teams to spin up infrastructure for deploying apps with project-based templates, while adding governance for Azure resource types, security, and cost.
+description: Enable developer teams to spin up infrastructure for deploying apps with templates, adding governance for Azure resource types, security, and cost.
Previously updated : 04/25/2023 Last updated : 03/28/2024+
+#customer intent: As a customer, I want to understand the purpose and capabilities of Azure Deployment Environments so that I can determine if the service will benefit my developers.
# What is Azure Deployment Environments? Azure Deployment Environments empowers development teams to quickly and easily spin up app infrastructure with project-based templates that establish consistency and best practices while maximizing security. This on-demand access to secure environments accelerates the stages of the software development lifecycle in a compliant and cost-efficient way.
-A deployment environment is a preconfigured collection of Azure resources deployed in predefined subscriptions. Azure governance is applied to those subscriptions based on the type of environment, such as sandbox, testing, staging, or production.
+A [*deployment environment*](./concept-environments-key-concepts.md#environments) is a collection of Azure infrastructure resources defined in a template called an [*environment definition*](./concept-environments-key-concepts.md#environment-definitions). Developers can deploy infrastructure defined in the templates in subscriptions where they have access, and build their applications on the infrastructure. For example, you can define a deployment environment that includes a web app, a database, and a storage account. Your web developer can begin coding the web app without worrying about the underlying infrastructure.
+Platform engineers can create and manage environment definitions. To specify which environment definitions are available to developers, platform engineers can associate environment definitions with projects, and assign permissions to developers. They can also apply Azure governance based on the type of environment, such as sandbox, testing, staging, or production.
-With Azure Deployment Environments, your platform engineer can enforce enterprise security policies and provide a curated set of predefined infrastructure as code (IaC) templates.
+The following diagram shows an overview of Azure Deployment Environments capabilities. Platform engineers define infrastructure templates and configure subscriptions, identity, and permissions. Developers create environments based on the templates, and build and deploy applications on the infrastructure. Environments can support different scenarios, like on-demand environments, sandbox environments for testing, and CI/CD pipelines for continuous integration and continuous deployment.
->[!NOTE]
-> Azure Deployment Environments currently supports only Azure Resource Manager (ARM) templates.
+
+Azure Deployment Environments currently supports only Azure Resource Manager (ARM) templates.
You can [learn more about the key concepts for Azure Deployment Environments](./concept-environments-key-concepts.md). ## Usage scenarios
-Azure Deployment Environments enables usage [scenarios](./concept-environments-scenarios.md) for both DevOps teams and developers. Common scenarios include:
--- Quickly create on-demand Azure environments by using reusable IaC templates.-- Create [sandbox environments](concept-environments-scenarios.md#sandbox-environments-for-investigations) to test your code.-- Preconfigure various types of environments and seamlessly integrate with your continuous integration and continuous delivery (CI/CD) pipeline.-- Create preconfigured environments for trainings and demos.-
-### Developer scenarios
-
-Developers have the following self-service experience when working with [environments](./concept-environments-key-concepts.md#environments).
--- Deploy a preconfigured environment for any stage of the development cycle.-- Spin up a sandbox environment to explore Azure.-- Create platform as a service (PaaS) and infrastructure as a service (IaaS) environments quickly and easily by following a few simple steps.-- Deploy environments right from where they work.-
-Developers create and manage environments for Azure Deployment Environments through the [developer portal](./quickstart-create-access-environments.md), with the [Azure CLI](./how-to-create-access-environments.md) or with the [Azure Developer CLI](./how-to-create-environment-with-azure-developer.md).
+Common [scenarios](./concept-environments-scenarios.md) for Azure Deployment Environments include:
### Platform engineering scenarios
-Azure Deployment Environments helps your platform engineer apply the right set of policies and settings on various types of environments, control the resource configuration that developers can create, and track environments across projects. They perform the following tasks:
+Azure Deployment Environments helps platform engineers apply the right set of policies and settings on various types of environments, control the resource configuration that developers can create, and track environments across projects. They perform the following tasks:
- Provide a project-based, curated set of reusable IaC templates. - Define specific Azure deployment configurations per project and per environment type. - Provide a self-service experience without giving control over subscriptions. - Track costs and ensure compliance with enterprise governance policies.
-Azure Deployment Environments supports three [built-in roles](../role-based-access-control/built-in-roles.md):
+### Developer scenarios
-- **Dev Center Project Admin**: Creates environments and manages the environment types for a project.-- **Deployment Environments User**: Creates environments based on appropriate access.-- **Deployment Environments Reader**: Reads environments that other users created.
+Developers can create environments whenever they need them, and develop their applications on the infrastructure. They can use Azure Deployment Environments to do the following tasks:
+- Deploy a preconfigured environment for any stage of the development cycle.
+- Spin up a sandbox environment to explore Azure.
+- Create and manage environments through the [developer portal](./quickstart-create-access-environments.md), with the [Azure CLI](./how-to-create-access-environments.md) or with the [Azure Developer CLI](./how-to-create-environment-with-azure-developer.md).
## Benefits
Capture and share IaC templates in source control within your team or organizati
Platform engineering teams can curate environment definitions to enforce enterprise security policies and map projects to Azure subscriptions, identities, and permissions by environment types. - **Project-based configurations**:
-Create and organize environment definitions by the types of applications that development teams are working on, rather than using an unorganized list of templates or a traditional IaC setup.
+Organize environment definitions by the type of application that development teams are working on, rather than using an unorganized list of templates or a traditional IaC setup.
- **Worry-free self-service**: Enable your development teams to quickly and easily create app infrastructure (PaaS, serverless, and more) resources by using a set of preconfigured templates. You can also track costs on these resources to stay within your budget.
Use APIs to provision environments directly from your preferred CI tool, integra
When configuring Deployment Environments, you might see Dev Box resources and components. You might even see informational messages regarding Dev Box features. If you're not configuring any Dev Box features, you can safely ignore these messages.
-## Next steps
-Start using Azure Deployment Environments:
+## Related content
-- [Key concepts for Azure Deployment Environments](./concept-environments-key-concepts.md) - [Azure Deployment Environments scenarios](./concept-environments-scenarios.md)-- [Quickstart: Create dev center and project (Azure Resource Manager)](./quickstart-create-dev-center-project-azure-resource-manager.md) - [Quickstart: Create and configure a dev center](./quickstart-create-and-configure-devcenter.md)-- [Quickstart: Create and access environments](./quickstart-create-access-environments.md)
+- [Quickstart: Create dev center and project (Azure Resource Manager)](./quickstart-create-dev-center-project-azure-resource-manager.md)
+
dev-box Concept Dev Box Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-architecture.md
To determine the best region to host the dev boxes, you can let dev box users ta
### Microsoft Intune integration
-Microsoft Intune is used to manage your dev boxes. After a dev box is provisioned, you can manage it like any other Windows device in Microsoft Intune. For example, you can create [device configuration profiles](/mem/intune/configuration/device-profiles) to turn different settings on and off in Windows, or push apps and updates to your users' dev boxes.
+Microsoft Intune is used to manage your dev boxes. Every Dev Box user needs one Microsoft Intune license and can create multiple dev boxes. After a dev box is provisioned, you can manage it like any other Windows device in Microsoft Intune. For example, you can create [device configuration profiles](/mem/intune/configuration/device-profiles) to turn different settings on and off in Windows, or push apps and updates to your users' dev boxes.
Microsoft Intune and associated Windows components have various [network endpoints](/mem/intune/fundamentals/intune-endpoints) that must be allowed through the Virtual Network. Apple and Android endpoints can be safely ignored if you don't use Microsoft Intune for managing those device types.
dev-box How To Configure Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-azure-compute-gallery.md
In this article, you learn how to configure and attach an Azure compute gallery to a dev center in Microsoft Dev Box. With Azure Compute Gallery, you can give developers customized images for their dev box.
-Azure Compute Gallery is a service for managing and sharing images. A gallery is a repository that's stored in your Azure subscription and helps you build structure and organization around your image resources.
+Azure Compute Gallery is a service for managing and sharing images. A gallery is a repository that's stored in your Azure subscription and helps you build structure and organization around your image resources. Dev Box supports GitHub, Azure Repos, and Bitbucket repositories to provide an image gallery.
After you attach a compute gallery to a dev center in Microsoft Dev Box, you can create dev box definitions based on images stored in the compute gallery.
When you create a virtual machine (VM) image, select an image from the Azure Mar
- [Visual Studio 2019](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftvisualstudio.visualstudio2019plustools?tab=Overview) - [Visual Studio 2022](https://azuremarketplace.microsoft.com/marketplace/apps/microsoftvisualstudio.visualstudioplustools?tab=Overview) +
+### Image version requirements
+ The image version must meet the following requirements: - Generation 2 - Hyper-V v2 - Windows OS
- - Windows 10 Enterprise version 20H2 or later
- - Windows 11 Enterprise 21H2 or later
+ - Windows 10 Enterprise version 20H2 or later
+ - Windows 11 Enterprise 21H2 or later
- Generalized VM image
- - You must create the image by using these three sysprep options: `/generalize /oobe /mode:vm`. For more information, see [Sysprep Command-Line Options](/windows-hardware/manufacture/desktop/sysprep-command-line-options?view=windows-11#modevm&preserve-view=true).
- - To speed up the dev box creation time:
- - Disable the reserved storage state feature in the image by using the following command: `DISM.exe /Online /Set-ReservedStorageState /State:Disabled`. For more information, see [DISM Storage reserve command-line options](/windows-hardware/manufacture/desktop/dism-storage-reserve?view=windows-11#set-reservedstoragestate&preserve-view=true).
- - Run `defrag` and `chkdsk` during image creation, wait for them to finish. And disable `chkdisk` and `defrag` scheduled task.
-- Single-session VM images (Multiple-session VM images aren't supported.)
+ - For more information about creating a generalized image, see [Reduce provisioning and startup times](#reduce-provisioning-and-startup-times).
+- Single-session VM image (Multiple-session VM images aren't supported.)
- No recovery partition
- For information about how to remove a recovery partition, see the [Windows Server command: delete partition](/windows-server/administration/windows-commands/delete-partition).
+ - For information about how to remove a recovery partition, see the [Windows Server command: delete partition](/windows-server/administration/windows-commands/delete-partition).
- Default 64-GB OS disk size
- The OS disk size is automatically adjusted to the size specified in the SKU description of the Windows 365 license.
+ - The OS disk size is automatically adjusted to the size specified in the SKU description of the Windows 365 license.
- The image definition must have [trusted launch enabled as the security type](../virtual-machines/trusted-launch.md). You configure the security type when you create the image definition. :::image type="content" source="media/how-to-configure-azure-compute-gallery/image-definition.png" alt-text="Screenshot that shows Windows 365 image requirement settings.":::
The image version must meet the following requirements:
> - Microsoft Dev Box image requirements exceed [Windows 365 image requirements](/windows-365/enterprise/device-images) and include settings to optimize dev box creation time and performance. > - Any image that doesn't meet Windows 365 requirements isn't shown in the list of images that are available for creation.
+### Reduce provisioning and startup times
+
+When you create a generalized VM to capture to an image, the following practices help reduce provisioning and startup times:
+
+1. Create the image by using these three sysprep options: `/generalize /oobe /mode:vm`.
+ - These options prevent a lengthy search for and installation of drivers during the first boot. For more information, see [Sysprep Command-Line Options](/windows-hardware/manufacture/desktop/sysprep-command-line-options?view=windows-11#modevm&preserve-view=true).
+
+1. Enable the Read/Write cache on the OS disk.
+ - To verify the cache is enabled, open the Azure portal and navigate to the image. Select **JSON view**, and make sure `properties.storageProfile.osDisk.caching` value is `ReadWrite`.
+
+1. Enable nested virtualization in your base image:
+ - In the UI, open **Turn Windows features on or off** and select **Virtual Machine Platform**.
+ - Or run the following PowerShell command: `Enable-WindowsOptionalFeature -FeatureName VirtualMachinePlatform -Online`
+
+1. Disable the reserved storage state feature in the image by using the following command: `DISM.exe /Online /Set-ReservedStorageState /State:Disabled`.
+ - For more information, see [DISM Storage reserve command-line options](/windows-hardware/manufacture/desktop/dism-storage-reserve?view=windows-11#set-reservedstoragestate&preserve-view=true).
+
+1. Run `defrag` and `chkdsk` during image creation, then disable the `chkdsk` and `defrag` scheduled tasks.
+ ## Provide permissions for services to access a gallery When you use an Azure Compute Gallery image to create a dev box definition, the Windows 365 service validates the image to ensure that it meets the requirements to be provisioned for a dev box. Microsoft Dev Box replicates the image to the regions specified in the attached network connections, so the images are present in the region required for dev box creation.
expressroute Expressroute Erdirect About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-erdirect-about.md
- ignite-2023 Previously updated : 02/05/2024 Last updated : 03/28/2024
Each peering location has access to the Microsoft global network and can access
The functionality in most scenarios is equivalent to circuits that use an ExpressRoute service provider to operate. To support further granularity and new capabilities offered using ExpressRoute Direct, there are certain key capabilities that exist only with ExpressRoute Direct circuits.
-You can enable or disable rate limiting (preview) for ExpressRoute Direct circuits at the circuit level. For more information, see [Rate limiting for ExpressRoute Direct circuits (Preview)](rate-limit.md).
+You can enable or disable rate limiting for ExpressRoute Direct circuits at the circuit level. For more information, see [Rate limiting for ExpressRoute Direct circuits](rate-limit.md).
## Circuit SKUs
For details on how ExpressRoute Direct is billed, see [ExpressRoute FAQ](express
## Next steps - Learn how to [configure ExpressRoute Direct](expressroute-howto-erdirect.md).-- Learn how to [Enable Rate limiting for ExpressRoute Direct circuits (Preview)](rate-limit.md).
+- Learn how to [Enable Rate limiting for ExpressRoute Direct circuits](rate-limit.md).
hdinsight-aks Assign Kafka Topic Event Message To Azure Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2.md
Title: Write event messages into Azure Data Lake Storage Gen2 with Apache Flink
description: Learn how to write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API. Previously updated : 03/14/2024 Last updated : 03/29/2024 # Write event messages into Azure Data Lake Storage Gen2 with Apache Flink® DataStream API
public class KafkaSinkToGen2 {
We are using Maven to package a jar onto local and submitting to Flink, and using Kafka to sink into ADLS Gen2. + :::image type="content" source="./media/assign-kafka-topic-event-message-to-azure-data-lake-storage-gen2/submit-the-job-flink-ui.png" alt-text="Screenshot showing jar submission to Flink dashboard.":::+ **Validate streaming data on ADLS Gen2**
hdinsight-aks Flink Catalog Delta Hive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/flink-catalog-delta-hive.md
Title: Table API and SQL - Use Delta Catalog type with Hive with Apache Flink®
description: Learn about how to create Delta Catalog with Apache Flink® on Azure HDInsight on AKS Previously updated : 03/14/2024 Last updated : 03/29/2024 # Create Delta Catalog with Apache Flink® on Azure HDInsight on AKS
Using the delta catalog
We use arrival data of flights from a sample data, you can choose a table of your choice. ```sql
- CREATE TABLE flightsintervaldata1 (arrivalAirportCandidatesCount INT, estArrivalHour INT) PARTITIONED BY (estArrivalHour) WITH ('connector' = 'delta', 'table-path' = 'abfs://container@storage_account.dfs.core.windows.net'/delta-output);
+ CREATE TABLE flightsintervaldata1 (arrivalAirportCandidatesCount INT, estArrivalHour INT) PARTITIONED BY (estArrivalHour) WITH ('connector' = 'delta', 'table-path' = 'abfs://container@storage_account.dfs.core.windows.net/delta-output');
``` > [!NOTE] > In the above step, the container and storage account *need not be same* as specified during the cluster creation. In case you want to specify another storage account, you can update `core-site.xml` with `fs.azure.account.key.<account_name>.dfs.core.windows.net: <azure_storage_key>` using configuration management.
hdinsight-aks Monitor Changes Postgres Table Flink https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/monitor-changes-postgres-table-flink.md
Title: Change Data Capture (CDC) of PostgreSQL table using Apache Flink®
description: Learn how to perform CDC on PostgreSQL table using Apache Flink® Previously updated : 03/28/2024 Last updated : 03/29/2024 # Change Data Capture (CDC) of PostgreSQL table using Apache Flink®
Now, let's learn how to monitor changes on PostgreSQL table using Flink-SQL CDC.
'hostname' = 'flinkpostgres.postgres.database.azure.com', 'port' = '5432', 'username' = 'username',
- ....
+ 'password' = 'password',
'database-name' = 'postgres', 'schema-name' = 'public', 'table-name' = 'shipments',
hdinsight-aks Use Hive Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-hive-catalog.md
Title: Use Hive Catalog, Hive Read & Write demo on Apache Flink®
description: Learn how to use Hive Catalog, Hive Read & Write demo on Apache Flink® on HDInsight on AKS Previously updated : 03/18/2023 Last updated : 03/29/2024 # How to use Hive Catalog with Apache Flink® on HDInsight on AKS
wget https://repo1.maven.org/maven2/org/apache/flink/flink-connector-kafka/1.17.
**Moving the planner jar**
-Move the jar flink-table-planner_2.12-1.16.0-0.0.18.jar located in webssh pod's /opt to /lib and move out the jar flink-table-planner-loader-1.16.0-0.0.18.jar from /lib. Refer to [issue](https://issues.apache.org/jira/browse/FLINK-25128) for more details. Perform the following steps to move the planner jar.
+Move the jar flink-table-planner_2.12-1.17.0-*.*.*.*.jar from the webssh pod's /opt directory to /lib, and move the jar flink-table-planner-loader-1.17.0-*.*.*.*.jar out of /lib to /opt/flink-webssh/opt/. Refer to [issue](https://issues.apache.org/jira/browse/FLINK-25128) for more details. Perform the following steps to move the planner jar.
```
-mv /opt/flink-webssh/lib/flink-table-planner-loader-1.17.0-1.1.1.3.jar /opt/flink-webssh/opt/
-mv /opt/flink-webssh/opt/flink-table-planner_2.12-1.17.0-1.1.1.3.jar /opt/flink-webssh/lib/
+mv /opt/flink-webssh/lib/flink-table-planner-loader-1.17.0-*.*.*.*.jar /opt/flink-webssh/opt/
+mv /opt/flink-webssh/opt/flink-table-planner_2.12-1.17.0-*.*.*.*.jar /opt/flink-webssh/lib/
``` > [!NOTE]
hdinsight-aks Use Hive Metastore Datastream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/flink/use-hive-metastore-datastream.md
Title: Use Hive Metastore with Apache Flink® DataStream API
description: Use Hive Metastore with Apache Flink® DataStream API Previously updated : 03/22/2024 Last updated : 03/29/2024 # Use Hive Metastore with Apache Flink® DataStream API
public class hiveDemo {
``` On Webssh pod, move the planner jar
-Move the jar `flink-table-planner_2.12-1.16.0-0.0.18.jar` located in webssh pod's `/opt to /lib` and move out the jar `flink-table-planner-loader-1.16.0-0.0.18.jar` from `lib`. Refer to issue for more details. Perform the following steps to move the planner jar.
+Move the jar `flink-table-planner_2.12-1.17.0-*.*.*.jar` from the webssh pod's `/opt` directory to `/lib`, and move the jar `flink-table-planner-loader-1.17.0-*.*.*.jar` out of `/lib`. Refer to the issue for more details. Perform the following steps to move the planner jar.
``` mv /opt/flink-webssh/lib/flink-table-planner-loader-1.17.0-1.1.8.jar /opt/flink-webssh/opt/
hdinsight-aks In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/in-place-upgrade.md
upgraded with cluster pool), and the notification updates reflect the success of
1. The upgrade pane on the right side shows the details of the upgrade on AKS patch versions (current and upgrade path).
- :::image type="content" source="./media/in-place-upgrade/type-of-upgrade.png" alt-text="Screenshot showing the type of the upgrade." border="true" lightbox="./media/in-place-upgrade/type-of-upgrade.png"
+ :::image type="content" source="./media/in-place-upgrade/upgrade-cluster.png" alt-text="Screenshot showing the type of the upgrade as cluster upgrade." border="true" lightbox="./media/in-place-upgrade/upgrade-cluster.png"
1. Once the upgrade commences, the notification icon shows the cluster upgrade is in progress
hdinsight Apache Domain Joined Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/domain-joined/apache-domain-joined-architecture.md
Title: Azure HDInsight architecture with Enterprise Security Package
description: Learn how to plan Azure HDInsight security with Enterprise Security Package. -+ Last updated 05/11/2023
healthcare-apis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md
the link in the **Version** column to view the source on the
## Azure API for FHIR
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure API for FHIR should use a customer-managed key to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F051cba44-2429-45b9-9649-46cec11c7119) |Use a customer-managed key to control the encryption at rest of the data stored in Azure API for FHIR when this is a regulatory or compliance requirement. Customer-managed keys also deliver double encryption by adding a second layer of encryption on top of the default one done with service-managed keys. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_EnableByok_Audit.json) |
+|[Azure API for FHIR should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ee56206-5dd1-42ab-b02d-8aae8b1634ce) |Azure API for FHIR should have at least one approved private endpoint connection. Clients in a virtual network can securely access resources that have private endpoint connections through private links. For more information, visit: [https://aka.ms/fhir-privatelink](https://aka.ms/fhir-privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_PrivateLink_Audit.json) |
+|[CORS should not allow every domain to access your API for FHIR](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fea8f8a-4169-495d-8307-30ec335f387d) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your API for FHIR. To protect your API for FHIR, remove access for all domains and explicitly define the domains allowed to connect. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_RestrictCORSAccess_Audit.json) |
+ ## Next steps - See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). - Review the [Azure Policy definition structure](../../governance/policy/concepts/definition-structure.md). - Review [Understanding policy effects](../../governance/policy/concepts/effects.md).-- -- FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Azure Health Data Services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-health-data-services-policy-reference.md
+
+ Title: Built-in policy definitions for Azure Health Data Services
+description: Lists Azure Policy built-in policy definitions for Azure Health Data Services. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated : 03/26/2024+++++++
+# Azure Policy built-in definitions for Azure Health Data Services
+
+This page is an index of [Azure Policy](./../../articles/governance/policy/overview.md) built-in policy
+definitions for Azure Health Data Services. For additional Azure Policy built-ins for other services, see
+[Azure Policy built-in definitions](./../../articles/governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Health Data Services workspace should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F64528841-2f92-43f6-a137-d52e5c3dbeac) |Health Data Services workspace should have at least one approved private endpoint connection. Clients in a virtual network can securely access resources that have private endpoint connections through private links. For more information, visit: [https://aka.ms/healthcareapisprivatelink](https://aka.ms/healthcareapisprivatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Health%20Data%20Services%20workspace/PrivateLink_Audit.json) |
+|[CORS should not allow every domain to access your FHIR Service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffe1c9040-c46a-4e81-9aea-c7850fbb3aa6) |Cross-Origin Resource Sharing (CORS) should not allow all domains to access your FHIR Service. To protect your FHIR Service, remove access for all domains and explicitly define the domains allowed to connect. |audit, Audit, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Healthcare%20APIs/FHIR_Service_RestrictCORSAccess_Audit.json) |
+|[DICOM Service should use a customer-managed key to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F14961b63-a1eb-4378-8725-7e84ca8db0e6) |Use a customer-managed key to control the encryption at rest of the data stored in Azure Health Data Services DICOM Service when this is a regulatory or compliance requirement. Customer-managed keys also deliver double encryption by adding a second layer of encryption on top of the default one done with service-managed keys. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Healthcare%20APIs/DICOM_Service_CMK_Enabled.json) |
+|[FHIR Service should use a customer-managed key to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc42dee8c-0202-4a12-bd8e-3e171cbf64dd) |Use a customer-managed key to control the encryption at rest of the data stored in Azure Health Data Services FHIR Service when this is a regulatory or compliance requirement. Customer-managed keys also deliver double encryption by adding a second layer of encryption on top of the default one done with service-managed keys. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Healthcare%20APIs/FHIR_Service_CMK_Enabled.json) |
+
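If you want to put one of these definitions into effect, you can assign it with the Azure CLI. The following is a minimal sketch, not part of the built-in definitions themselves: the assignment name, subscription ID, and resource group are placeholders, and the GUID is the definition ID of the private link policy from the preceding table.

```bash
# Assign the "Azure Health Data Services workspace should use private link"
# built-in definition (GUID from the table above) at resource group scope.
# The assignment name, subscription ID, and resource group name are placeholders.
az policy assignment create \
  --name audit-healthdata-workspace-privatelink \
  --policy 64528841-2f92-43f6-a137-d52e5c3dbeac \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```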
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](./../../articles/governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](./../../articles/governance/policy/concepts/effects.md).
+
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Data Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/data-partitions.md
+
+ Title: Enable data partitioning for the DICOM service in Azure Health Data Services
+description: Learn how to enable data partitioning for efficient storage and management of medical images for the DICOM service in Azure Health Data Services.
++++ Last updated : 03/26/2024+++
+# Enable data partitioning
+
+Data partitioning allows you to set up a lightweight data partition scheme to store multiple copies of the same image with the same unique identifier (UID) in a single DICOM service instance.
+
+Although UIDs should be [unique across all contexts](http://dicom.nema.org/dicom/2013/output/chtml/part05/chapter_9.html), it's common practice for healthcare providers to write DICOM files to portable storage media and then give them to a patient. The patient then gives the files to another healthcare provider, who then transfers the files into a new DICOM storage system. Therefore, multiple copies of one DICOM file do commonly exist in isolated DICOM systems. Data partitioning provides an on-ramp for your existing data stores and workflows.
+
+## Limitations
+
+- The partitions feature can't be turned off after you turn it on.
+- Querying across partitions isn't supported.
+- Updating and deleting partitions is also not supported.
+
+## Enable data partitions during initial deployment
+
+1. Select **Enable data partitions** when you deploy a new DICOM service. After data partitioning is turned on, it can't be turned off. In addition, data partitions can't be turned on for any DICOM service that is already deployed.
+
+ After the data partitions setting is turned on, the capability modifies the API surface of the DICOM server and makes any previous data accessible under the `Microsoft.Default` partition.
+
+ :::image type="content" source="media/enable-data-partitions/enable-data-partitions.png" alt-text="Screenshot showing the Enable data partitions option on the Create DICOM service page." lightbox="media/enable-data-partitions/enable-data-partitions.png":::
+
+> [!IMPORTANT]
+> Data partitions can't be disabled if partitions other than `Microsoft.Default` are present. When this situation happens, the system throws a `DataPartitionsFeatureCannotBeDisabledException` error on startup.
+
+## API changes
+
+### List all data partitions
+The following request lists all data partitions:
+
+```http
+GET /partitions
+```
+
+### Request header
+
+| Name | Required | Type | Description |
+| | | | - |
+| Content-Type | false | string | `application/json` is supported |
+
+### Responses
+
+| Name | Type | Description |
+| -- | -- | - |
+| 200 (OK) | `[Partition] []` | A list of partitions is returned. |
+| 204 (No Content) | | No partitions exist. |
+| 400 (Bad Request) | | Data partitions capability is disabled. |
+
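You can try this endpoint with any HTTP client. The following is a minimal sketch using curl and the Azure CLI; the service URL format and token resource shown here are assumptions, so substitute the values for your own DICOM service.

```bash
# List all data partitions (sketch; workspace and service names are placeholders).
SERVICE_URL="https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1"
TOKEN=$(az account get-access-token --resource https://dicom.healthcareapis.azure.com --query accessToken -o tsv)

curl -s -H "Authorization: Bearer $TOKEN" "$SERVICE_URL/partitions"
```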
+### STOW, WADO, QIDO, delete, export, update, and worklist APIs
+
+After partitions are enabled, STOW, WADO, QIDO, delete, export, update, and worklist requests must include a data partition URI segment after the base URI, with the form `/partitions/{partitionName}`, where `partitionName` is:
+
+ - Up to 64 characters long.
+ - Any combination of alphanumeric characters, `.`, `-`, and `_`, to allow both DICOM UID and GUID formats, as well as human-readable identifiers.
+
+| Action | Example URI |
+| - | - |
+| STOW | `POST /partitions/myPartition-1/studies` |
+| WADO | `GET /partitions/myPartition-1/studies/2.25.0000` |
+| QIDO | `GET /partitions/myPartition1/studies?StudyInstanceUID=2.25.0000` |
+| Delete | `DELETE /partitions/myPartition1/studies/2.25.0000` |
+| Export | `POST /partitions/myPartition1/export` |
+| Update | `POST /partitions/myPartition-1/studies/$bulkUpdate` |
+
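As an illustration, a QIDO query scoped to a single partition looks like the following sketch; the service URL, token resource, partition name, and study UID are all placeholders.

```bash
# Query studies within one partition (sketch; all values are placeholders).
SERVICE_URL="https://<workspace-name>-<dicom-service-name>.dicom.azurehealthcareapis.com/v1"
TOKEN=$(az account get-access-token --resource https://dicom.healthcareapis.azure.com --query accessToken -o tsv)

curl -s -H "Authorization: Bearer $TOKEN" -H "Accept: application/dicom+json" \
  "$SERVICE_URL/partitions/myPartition-1/studies?StudyInstanceUID=2.25.0000"
```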
+### New responses
+
+| Name | Message |
+| -- | |
+| 400 (Bad Request) | Data partitions capability is disabled. |
+| 400 (Bad Request) | `PartitionName` value is missing in the route segment. |
+| 400 (Bad Request) | Specified `PartitionName {PartitionName}` doesn't exist. |
+
+### Other APIs
+All other APIs, including extended query tags, operations, and the change feed, continue to be accessed at the base URI.
+
+### Manage data partitions
+
+The only management operation supported for partitions is an implicit creation during STOW and workitem create requests. If the partition specified in the URI doesn't exist, the system creates it implicitly and the response returns a retrieve URI including the partition path.
+
+### Partition definitions
+
+A partition is a unit of logical isolation and data uniqueness.
+
+| Name | Type | Description |
+| - | | -- |
+| PartitionKey | int | System-assigned identifier. |
+| PartitionName | string | Client-assigned unique name, up to 64 alphanumeric characters, `.`, `-`, or `_`. |
+| CreatedDate | string | The date and time when the partition was created. |
healthcare-apis Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/import-data.md
Content-Type:application/fhir+json
#### Body
-| Parameter name | Description | Cardinality | Accepted values |
+| Parameter name | Description | Cardinality | Accepted values |
| -- | -- | -- | -- |
-| `inputFormat` | String that represents the name of the data source format. Only FHIR NDJSON files are supported. | 1..1 | `application/fhir+ndjson` |
-| `mode` | Import mode value. | 1..1 | For an initial-mode import, use the `InitialLoad` mode value. For incremental-mode import, use the `IncrementalLoad` mode value. If you don't provide a mode value, the `IncrementalLoad` mode value is used by default. |
-| `input` | Details of the input files. | 1..* | A JSON array with the three parts described in the following table. |
+| `inputFormat`| String that represents the name of the data source format. Only FHIR NDJSON files are supported. | 1..1 | `application/fhir+ndjson` |
+| `mode`| Import mode value. | 1..1 | For an initial-mode import, use the `InitialLoad` mode value. For incremental-mode import, use the `IncrementalLoad` mode value. If you don't provide a mode value, the `IncrementalLoad` mode value is used by default. |
+| `input`| Details of the input files. | 1..* | A JSON array with the three parts described in the following table. |
| Input part name | Description | Cardinality | Accepted values | | -- | -- | -- | -- |
-| `type` | Resource type of the input file. | 1..1 | A valid [FHIR resource type](https://www.hl7.org/fhir/resourcelist.html) that matches the input file. |
-|`url` | Azure storage URL of the input file. | 1..1 | URL value of the input file. The value can't be modified. |
-| `etag` | ETag of the input file in the Azure storage. It's used to verify that the file content isn't changed after `import` registration. | 0..1 | ETag value of the input file. |
+| `type`| Resource type of the input file. | 0..1 | A valid [FHIR resource type](https://www.hl7.org/fhir/resourcelist.html) that matches the input file. |
+|`url`| Azure storage URL of the input file. | 1..1 | URL value of the input file. The value can't be modified. |
+| `etag`| ETag of the input file in the Azure storage. It's used to verify that the file content isn't changed after `import` registration. | 0..1 | ETag value of the input file.|
```json {
iot-dps Tutorial Custom Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-allocation-policies.md
Title: Tutorial - Use custom allocation policies with Azure IoT Hub Device Provisioning Service
+ Title: Tutorial - Assign devices to multiple hubs using DPS
description: This tutorial shows how to provision devices using a custom allocation policy in your Azure IoT Hub Device Provisioning Service (DPS) instance. Previously updated : 09/13/2022 Last updated : 03/21/2024
# Tutorial: Use custom allocation policies with Device Provisioning Service (DPS)
-Custom allocation policies give you more control over how devices are assigned to your IoT hubs. With custom allocation policies, you can define your own allocation policies when the policies provided by the Azure IoT Hub Device Provisioning Service (DPS) don't meet the requirements of your scenario. A custom allocation policy is implemented in a webhook hosted in [Azure functions](../azure-functions/functions-overview.md) and configured on one or more individual enrollments and/or enrollment groups. When a device registers with DPS using a configured enrollment entry, DPS calls the webhook to find out which IoT hub the device should be registered to and, optionally, its initial state. To learn more, see [Understand custom allocation policies](concepts-custom-allocation.md).
+Custom allocation policies give you more control over how devices are assigned to your IoT hubs. With custom allocation policies, you can define your own allocation policies when the policies provided by the Azure IoT Hub Device Provisioning Service (DPS) don't meet the requirements of your scenario. A custom allocation policy is implemented in a webhook hosted in [Azure Functions](../azure-functions/functions-overview.md) and configured on one or more individual enrollments and/or enrollment groups. When a device registers with DPS using a configured enrollment entry, DPS calls the webhook to find out which IoT hub the device should be registered to and, optionally, its initial state. To learn more, see [Understand custom allocation policies](concepts-custom-allocation.md).
This tutorial demonstrates a custom allocation policy using an Azure Function written in C#. Devices are assigned to one of two IoT hubs representing a *Contoso Toasters Division* and a *Contoso Heat Pumps Division*. Devices requesting provisioning must have a registration ID with one of the following suffixes to be accepted for provisioning: * **-contoso-tstrsd-007** for the Contoso Toasters Division * **-contoso-hpsd-088** for the Contoso Heat Pumps Division
-Devices will be simulated using a provisioning sample included in the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
+Devices are simulated using a provisioning sample included in the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
In this tutorial, you'll do the following: > [!div class="checklist"]
-> * Use the Azure CLI to create a DPS instance and to create and link two Contoso division IoT hubs (**Contoso Toasters Division** and **Contoso Heat Pumps Division**) to it
-> * Create an Azure Function that implements the custom allocation policy
-> * Create a new enrollment group uses the Azure Function for the custom allocation policy
-> * Create device symmetric keys for two simulated devices
-> * Set up the development environment for the Azure IoT C SDK
-> * Simulate the devices and verify that they are provisioned according to the example code in the custom allocation policy
+> * Use the Azure CLI to create a DPS instance and to create and link two Contoso division IoT hubs (**Contoso Toasters Division** and **Contoso Heat Pumps Division**) to it.
+> * Create an Azure Function that implements the custom allocation policy.
+> * Create a new enrollment group that uses the Azure Function for the custom allocation policy.
+> * Create device symmetric keys for two simulated devices.
+> * Set up the development environment for the Azure IoT C SDK.
+> * Simulate the devices and verify that they are provisioned according to the example code in the custom allocation policy.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
The following prerequisites are for a Windows development environment. For Linux
- [Visual Studio](https://visualstudio.microsoft.com/vs/) 2022 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported. -- Latest version of [Git](https://git-scm.com/download/) installed.
+- Git installed. For more information, see [Git downloads](https://git-scm.com/download/).
+- Azure CLI installed. For more information, see [How to install the Azure CLI](/cli/azure/install-azure-cli). Or, you can run the commands in this tutorial in the Bash environment in [Azure Cloud Shell](/azure/cloud-shell/overview).
-## Create the provisioning service and two divisional IoT hubs
+## Create the provisioning service and two IoT hubs
In this section, you use the Azure Cloud Shell to create a provisioning service and two IoT hubs representing the **Contoso Toasters Division** and the **Contoso Heat Pumps division**.
-> [!TIP]
-> The commands used in this tutorial create the provisioning service and other resources in the West US location. We recommend that you create your resources in the region nearest you that supports Device Provisioning Service. You can view a list of available locations by running the command `az provider show --namespace Microsoft.Devices --query "resourceTypes[?resourceType=='ProvisioningServices'].locations | [0]" --out table` or by going to the [Azure Status](https://azure.microsoft.com/status/) page and searching for "Device Provisioning Service". In commands, locations can be specified either in one word or multi-word format; for example: westus, West US, WEST US, etc. The value is not case sensitive. If you use multi-word format to specify location, enclose the value in quotes; for example, `-- location "West US"`.
->
-
-1. Use the Azure Cloud Shell to create a resource group with the [az group create](/cli/azure/group#az-group-create) command. An Azure resource group is a logical container into which Azure resources are deployed and managed.
+1. First, set environment variables in your workspace to simplify the commands in this tutorial.
- The following example creates a resource group named *contoso-us-resource-group* in the *westus* region. We recommend that you use this group for all resources created in this tutorial. This approach will make clean up easier after you're finished.
+ The DPS and IoT Hub names must be globally unique. Replace the `SUFFIX` placeholder with your own value.
- ```azurecli-interactive
- az group create --name contoso-us-resource-group --location westus
- ```
+ Also, the Azure Function code you create later in this tutorial looks for IoT hubs that have either `-toasters-` or `-heatpumps-` in their names. If you change the suggested values, make sure to use names that contain the required substrings.
-2. Use the Azure Cloud Shell to create a device provisioning service (DPS) with the [az iot dps create](/cli/azure/iot/dps#az-iot-dps-create) command. The provisioning service will be added to *contoso-us-resource-group*.
+ ```bash
+ #!/bin/bash
+ export RESOURCE_GROUP="contoso-us-resource-group"
+ export LOCATION="westus"
+ export DPS="contoso-provisioning-service-SUFFIX"
+ export TOASTER_HUB="contoso-toasters-hub-SUFFIX"
+ export HEATPUMP_HUB="contoso-heatpumps-hub-SUFFIX"
+ ```
- The following example creates a provisioning service named *contoso-provisioning-service-1098* in the *westus* location. You must use a unique service name. Make up your own suffix in the service name in place of **1098**.
+ ```powershell
+ # PowerShell
+ $env:RESOURCE_GROUP = "contoso-us-resource-group"
+ $env:LOCATION = "westus"
+ $env:DPS = "contoso-provisioning-service-SUFFIX"
+ $env:TOASTER_HUB = "contoso-toasters-hub-SUFFIX"
+ $env:HEATPUMP_HUB = "contoso-heatpumps-hub-SUFFIX"
+ ```
- ```azurecli-interactive
- az iot dps create --name contoso-provisioning-service-1098 --resource-group contoso-us-resource-group --location westus
- ```
+ > [!TIP]
+ > The commands used in this tutorial create resources in the West US location by default. We recommend that you create your resources in the region nearest you that supports Device Provisioning Service. You can view a list of available locations by going to the [Azure Status](https://azure.microsoft.com/status/) page and searching for "Device Provisioning Service". In commands, locations can be specified either in one word or multi-word format; for example: westus, West US, WEST US, etc. The value is not case sensitive.
- This command may take a few minutes to complete.
+1. Use the [az group create](/cli/azure/group#az-group-create) command to create an Azure resource group. An Azure resource group is a logical container into which Azure resources are deployed and managed.
-3. Use the Azure Cloud Shell to create the **Contoso Toasters Division** IoT hub with the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command. The IoT hub will be added to *contoso-us-resource-group*.
+ The following example creates a resource group. We recommend that you use a single group for all resources created in this tutorial. This approach will make clean up easier after you're finished.
- The following example creates an IoT hub named *contoso-toasters-hub-1098* in the *westus* location. You must use a unique hub name. Make up your own suffix in the hub name in place of **1098**.
+ ```azurecli-interactive
+ az group create --name $RESOURCE_GROUP --location $LOCATION
+ ```
- > [!CAUTION]
- > The example Azure Function code for the custom allocation policy requires the substring `-toasters-` in the hub name. Make sure to use a name containing the required toasters substring.
+1. Use the [az iot dps create](/cli/azure/iot/dps#az-iot-dps-create) command to create an instance of the Device Provisioning Service (DPS). The provisioning service is added to *contoso-us-resource-group*.
```azurecli-interactive
- az iot hub create --name contoso-toasters-hub-1098 --resource-group contoso-us-resource-group --location westus --sku S1
+ az iot dps create --name $DPS --resource-group $RESOURCE_GROUP --location $LOCATION
```
- This command may take a few minutes to complete.
+ This command might take a few minutes to complete.
-4. Use the Azure Cloud Shell to create the **Contoso Heat Pumps Division** IoT hub with the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command. This IoT hub will also be added to *contoso-us-resource-group*.
-
- The following example creates an IoT hub named *contoso-heatpumps-hub-1098* in the *westus* location. You must use a unique hub name. Make up your own suffix in the hub name in place of **1098**.
-
- > [!CAUTION]
- > The example Azure Function code for the custom allocation policy requires the substring `-heatpumps-` in the hub name. Make sure to use a name containing the required heatpumps substring.
+1. Use the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create the **Contoso Toasters Division** IoT hub. The IoT hub is added to *contoso-us-resource-group*.
```azurecli-interactive
- az iot hub create --name contoso-heatpumps-hub-1098 --resource-group contoso-us-resource-group --location westus --sku S1
+ az iot hub create --name $TOASTER_HUB --resource-group $RESOURCE_GROUP --location $LOCATION --sku S1
```
- This command may take a few minutes to complete.
-
-5. The IoT hubs must be linked to the DPS resource.
+ This command might take a few minutes to complete.
- Run the following two commands to get the connection strings for the hubs you created. Replace the hub resource names with the names you chose in each command:
+1. Use the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create the **Contoso Heat Pumps Division** IoT hub. This IoT hub also is added to *contoso-us-resource-group*.
- ```azurecli-interactive
- hubToastersConnectionString=$(az iot hub connection-string show --hub-name contoso-toasters-hub-1098 --key primary --query connectionString -o tsv)
- hubHeatpumpsConnectionString=$(az iot hub connection-string show --hub-name contoso-heatpumps-hub-1098 --key primary --query connectionString -o tsv)
+ ```azurecli-interactive
+ az iot hub create --name $HEATPUMP_HUB --resource-group $RESOURCE_GROUP --location $LOCATION --sku S1
```
- Run the following commands to link the hubs to the DPS resource. Replace the DPS resource name with the name you chose in each command:
+ This command might take a few minutes to complete.
- ```azurecli-interactive
- az iot dps linked-hub create --dps-name contoso-provisioning-service-1098 --resource-group contoso-us-resource-group --connection-string $hubToastersConnectionString --location westus
- az iot dps linked-hub create --dps-name contoso-provisioning-service-1098 --resource-group contoso-us-resource-group --connection-string $hubHeatpumpsConnectionString --location westus
- ```
+1. Run the following two commands to get the connection strings for the hubs you created.
+ ```azurecli-interactive
+ az iot hub connection-string show --hub-name $TOASTER_HUB --key primary --query connectionString -o tsv
+ az iot hub connection-string show --hub-name $HEATPUMP_HUB --key primary --query connectionString -o tsv
+ ```
+1. Run the following commands to link the hubs to the DPS resource. Replace the placeholders with the hub connection strings from the previous step.
+ ```azurecli-interactive
+ az iot dps linked-hub create --dps-name $DPS --resource-group $RESOURCE_GROUP --location $LOCATION --connection-string <toaster_hub_connection_string>
+ az iot dps linked-hub create --dps-name $DPS --resource-group $RESOURCE_GROUP --location $LOCATION --connection-string <heatpump_hub_connection_string>
+ ```
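If you'd rather not paste the connection strings by hand, you can capture them in shell variables and pass those to the link commands. This is a Bash sketch; the variable names are illustrative.

```bash
# Capture the hub connection strings, then link both hubs to the DPS instance.
toasterConnectionString=$(az iot hub connection-string show --hub-name $TOASTER_HUB --key primary --query connectionString -o tsv)
heatpumpConnectionString=$(az iot hub connection-string show --hub-name $HEATPUMP_HUB --key primary --query connectionString -o tsv)

az iot dps linked-hub create --dps-name $DPS --resource-group $RESOURCE_GROUP --location $LOCATION --connection-string "$toasterConnectionString"
az iot dps linked-hub create --dps-name $DPS --resource-group $RESOURCE_GROUP --location $LOCATION --connection-string "$heatpumpConnectionString"
```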
## Create the custom allocation function In this section, you create an Azure function that implements your custom allocation policy. This function decides which divisional IoT hub a device should be registered to based on whether its registration ID contains the string **-contoso-tstrsd-007** or **-contoso-hpsd-088**. It also sets the initial state of the device twin based on whether the device is a toaster or a heat pump.
-1. Sign in to the [Azure portal](https://portal.azure.com). From your home page, select **+ Create a resource**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the *Search the Marketplace* search box, type "Function App". From the drop-down list select **Function App**, and then select **Create**.
+1. In the search box, search for and select **Function App**.
-3. On the **Function App** create page, under the **Basics** tab, enter the following settings for your new function app and select **Review + create**:
+1. Select **Create** or **Create Function App**.
- **Resource Group**: Select the **contoso-us-resource-group** to keep all resources created in this tutorial together.
+1. On the **Function App** create page, under the **Basics** tab, enter the following settings for your new function app and select **Review + create**:
- **Function App name**: Enter a unique function app name. This example uses **contoso-function-app-1098**.
+ | Parameter | Value |
+ |--|-|
+ | **Subscription** | Make sure that the subscription where you created the resources for this tutorial is selected. |
+ | **Resource Group** | Select the resource group that you created in the previous section. The default provided in the previous section is **contoso-us-resource-group**. |
+ | **Function App name** | Provide a name for your function app.|
+ | **Do you want to deploy code or container image?** | **Code** |
+ | **Runtime Stack** | **.NET** |
+ | **Version** | Select any **in-process model** version. |
+ | **Region** | Select a region close to you. |
- **Publish**: Verify that **Code** is selected.
+ > [!NOTE]
+ > By default, Application Insights is enabled. Application Insights is not necessary for this tutorial, but it might help you understand and investigate any issues you encounter with the custom allocation. If you prefer, you can disable Application Insights by selecting the **Monitoring** tab and then selecting **No** for **Enable Application Insights**.
- **Runtime Stack**: Select **.NET** from the drop-down.
+ :::image type="content" source="./media/tutorial-custom-allocation-policies/create-function-app.png" alt-text="Screenshot that shows the Create Function App form in the Azure portal.":::
- **Version**: Select **3.1** from the drop-down.
+1. On the **Review + create** tab, select **Create** to create the function app.
- **Region**: Select the same region as your resource group. This example uses **West US**.
+1. Deployment might take several minutes. When it completes, select **Go to resource**.
- > [!NOTE]
- > By default, Application Insights is enabled. Application Insights is not necessary for this tutorial, but it might help you understand and investigate any issues you encounter with the custom allocation. If you prefer, you can disable Application Insights by selecting the **Monitoring** tab and then selecting **No** for **Enable Application Insights**.
+1. On the left pane of the function app **Overview** page, select **Create function**.
- ![Create an Azure Function App to host the custom allocation function](./media/tutorial-custom-allocation-policies/create-function-app.png)
+ :::image type="content" source="./media/tutorial-custom-allocation-policies/create-function-in-portal.png" alt-text="Screenshot that shows selecting the option to create function in the Azure portal.":::
-4. On the **Summary** page, select **Create** to create the function app. Deployment may take several minutes. When it completes, select **Go to resource**.
+1. On the **Create function** page, select the **HTTP Trigger** template then select **Next**.
-5. On the left pane of the function app **Overview** page, select **Functions** and then **+ Create** to add a new function.
+1. On the **Template details** tab, select **Anonymous** as the **Authorization level** then select **Create**.
-6. On the **Create function** page, make sure that **Development environment** is set to **Develop in portal**. Then select the **HTTP Trigger** template followed by the **Create** button.
+ :::image type="content" source="./media/tutorial-custom-allocation-policies/function-authorization-level.png" alt-text="Screenshot that shows setting the authorization level as anonymous.":::
-7. When the **HttpTrigger1** function opens, select **Code + Test** on the left pane. This allows you to edit the code for the function. The **run.csx** code file should be opened for editing.
+ >[!TIP]
+ >If you keep the authorization level as **Function**, then you'll need to configure your DPS enrollments with the function API key. For more information, see [Azure Functions HTTP trigger](../azure-functions/functions-bindings-http-webhook-trigger.md).
-8. Reference required NuGet packages. To create the initial device twin, the custom allocation function uses classes that are defined in two NuGet packages that must be loaded into the hosting environment. With Azure Functions, NuGet packages are referenced using a *function.proj* file. In this step, you save and upload a *function.proj* file for the required assemblies. For more information, see [Using NuGet packages with Azure Functions](../azure-functions/functions-reference-csharp.md#using-nuget-packages).
+1. When the **HttpTrigger1** function opens, select **Code + Test** on the left pane. This allows you to edit the code for the function. The **run.csx** code file should be opened for editing.
+
+1. Reference required NuGet packages. To create the initial device twin, the custom allocation function uses classes that are defined in two NuGet packages that must be loaded into the hosting environment. With Azure Functions, NuGet packages are referenced using a *function.proj* file. In this step, you save and upload a *function.proj* file for the required assemblies. For more information, see [Using NuGet packages with Azure Functions](../azure-functions/functions-reference-csharp.md#using-nuget-packages).
1. Copy the following lines into your favorite editor and save the file on your computer as *function.proj*.
In this section, you create an Azure function that implements your custom alloca
</Project> ```
- 2. Select the **Upload** button located above the code editor to upload your *function.proj* file. After uploading, select the file in the code editor using the drop-down box to verify the contents.
+ 1. Select the **Upload** button located above the code editor to upload your *function.proj* file. After uploading, select the file in the code editor using the drop-down box to verify the contents.
- 3. Select the *function.proj* file in the code editor and verify its contents. If the *function.proj* file is empty copy the lines above into the file and save it. (Sometimes the upload will create the file without uploading the contents.)
+ 1. Select the *function.proj* file in the code editor and verify its contents. If the *function.proj* file is empty copy the lines above into the file and save it. (Sometimes the upload creates the file without uploading the contents.)
-9. Make sure *run.csx* for **HttpTrigger1** is selected in the code editor. Replace the code for the **HttpTrigger1** function with the following code and select **Save**:
+1. Make sure *run.csx* for **HttpTrigger1** is selected in the code editor. Replace the code for the **HttpTrigger1** function with the following code and select **Save**:
```csharp #r "Newtonsoft.Json"
In this section, you create an Azure function that implements your custom alloca
## Create the enrollment
-In this section, you'll create a new enrollment group that uses the custom allocation policy. For simplicity, this tutorial uses [Symmetric key attestation](concepts-symmetric-key-attestation.md) with the enrollment. For a more secure solution, consider using [X.509 certificate attestation](concepts-x509-attestation.md) with a chain of trust.
+In this section, you create a new enrollment group that uses the custom allocation policy. For simplicity, this tutorial uses [Symmetric key attestation](concepts-symmetric-key-attestation.md) with the enrollment. For a more secure solution, consider using [X.509 certificate attestation](concepts-x509-attestation.md) with a chain of trust.
1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Device Provisioning Service instance.
In this section, you'll create a new enrollment group that uses the custom alloc
1. On the **Review + create** tab, verify all of your values then select **Create**.
-After saving the enrollment, reopen it and make a note of the **Primary key**. You must save the enrollment first to have the keys generated. This key will be used to generate unique device keys for simulated devices later.
+After saving the enrollment, reopen it and make a note of the **Primary key**. You must save the enrollment first to have the keys generated. This key is used to generate unique device keys for simulated devices in the next section.
## Derive unique device keys
-Devices don't use the enrollment group's primary symmetric key directly. Instead, you use the primary key to derive a device key for each device. In this section, you create two unique device keys. One key will be used for a simulated toaster device. The other key will be used for a simulated heat pump device.
+Devices don't use the enrollment group's primary symmetric key directly. Instead, you use the primary key to derive a device key for each device. In this section, you create two unique device keys. One key is used for a simulated toaster device. The other key is used for a simulated heat pump device.
-To derive the device key, you use the enrollment group **Primary Key** you noted earlier to compute the [HMAC-SHA256](https://wikipedia.org/wiki/HMAC) of the device registration ID for each device and convert the result into Base64 format. For more information on creating derived device keys with enrollment groups, see the group enrollments section of [Symmetric key attestation](concepts-symmetric-key-attestation.md).
+To derive the device key, you use the enrollment group **Primary Key** you noted earlier to compute the [HMAC-SHA256](https://wikipedia.org/wiki/HMAC) of the device registration ID for each device and convert the result into Base 64 format. For more information on creating derived device keys with enrollment groups, see the group enrollments section of [Symmetric key attestation](concepts-symmetric-key-attestation.md).
For the example in this tutorial, use the following two device registration IDs and compute a device key for both devices. Both registration IDs have a valid suffix to work with the example code for the custom allocation policy: * **breakroom499-contoso-tstrsd-007** * **mainbuilding167-contoso-hpsd-088**
-# [Azure CLI](#tab/azure-cli)
- The IoT extension for the Azure CLI provides the [`iot dps enrollment-group compute-device-key`](/cli/azure/iot/dps/enrollment-group#az-iot-dps-enrollment-group-compute-device-key) command for generating derived device keys. This command can be used on Windows-based or Linux systems, from PowerShell or a Bash shell. Replace the value of `--key` argument with the **Primary Key** from your enrollment group. ```azurecli
-az iot dps enrollment-group compute-device-key --key oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA== --registration-id breakroom499-contoso-tstrsd-007
-
-"JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs="
+az iot dps enrollment-group compute-device-key --key <ENROLLMENT_GROUP_KEY> --registration-id breakroom499-contoso-tstrsd-007
``` ```azurecli
-az iot dps compute-device-key --key oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA== --registration-id mainbuilding167-contoso-hpsd-088
-
-"6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg="
+az iot dps enrollment-group compute-device-key --key <ENROLLMENT_GROUP_KEY> --registration-id mainbuilding167-contoso-hpsd-088
``` > [!NOTE]
az iot dps compute-device-key --key oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOt
> az iot dps enrollment-group compute-device-key -g contoso-us-resource-group --dps-name contoso-provisioning-service-1098 --enrollment-id contoso-custom-allocated-devices --registration-id breakroom499-contoso-tstrsd-007 > ```
-# [PowerShell](#tab/powershell)
-
-If you're using a Windows-based workstation, you can use PowerShell to generate your derived device key as shown in the following example.
-
-Replace the value of **KEY** with the **Primary Key** you noted earlier.
-
-```powershell
-$KEY='oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA=='
-
-$REG_ID1='breakroom499-contoso-tstrsd-007'
-$REG_ID2='mainbuilding167-contoso-hpsd-088'
-
-$hmacsha256 = New-Object System.Security.Cryptography.HMACSHA256
-$hmacsha256.key = [Convert]::FromBase64String($KEY)
-$sig1 = $hmacsha256.ComputeHash([Text.Encoding]::ASCII.GetBytes($REG_ID1))
-$sig2 = $hmacsha256.ComputeHash([Text.Encoding]::ASCII.GetBytes($REG_ID2))
-$derivedkey1 = [Convert]::ToBase64String($sig1)
-$derivedkey2 = [Convert]::ToBase64String($sig2)
-
-echo "`n`n$REG_ID1 : $derivedkey1`n$REG_ID2 : $derivedkey2`n`n"
-```
-
-```powershell
-breakroom499-contoso-tstrsd-007 : JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=
-mainbuilding167-contoso-hpsd-088 : 6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=
-```
-
-# [Bash](#tab/bash)
-
-If you're using a Linux workstation, you can use openssl to generate your derived device keys as shown in the following example.
-
-Replace the value of **KEY** with the **Primary Key** you noted earlier.
-
-```bash
-KEY=oiK77Oy7rBw8YB6IS6ukRChAw+Yq6GC61RMrPLSTiOOtdI+XDu0LmLuNm11p+qv2I+adqGUdZHm46zXAQdZoOA==
-
-REG_ID1=breakroom499-contoso-tstrsd-007
-REG_ID2=mainbuilding167-contoso-hpsd-088
-
-keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
-devkey1=$(echo -n $REG_ID1 | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64)
-devkey2=$(echo -n $REG_ID2 | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64)
-
-echo -e $"\n\n$REG_ID1 : $devkey1\n$REG_ID2 : $devkey2\n\n"
-```
-
-```bash
-breakroom499-contoso-tstrsd-007 : JC8F96eayuQwwz+PkE7IzjH2lIAjCUnAa61tDigBnSs=
-mainbuilding167-contoso-hpsd-088 : 6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=
-```
---
-The simulated devices will use the derived device keys with each registration ID to perform symmetric key attestation.
+The simulated devices use the derived device keys with each registration ID to perform symmetric key attestation.
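If you want to verify what the CLI computes, the following Bash sketch performs the same derivation locally with openssl: an HMAC-SHA256 of the registration ID keyed with the enrollment group primary key, encoded as Base64. The key value is a placeholder.

```bash
# Derive a device key locally (sketch; <ENROLLMENT_GROUP_KEY> is a placeholder).
KEY='<ENROLLMENT_GROUP_KEY>'
REG_ID='breakroom499-contoso-tstrsd-007'

keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
echo -n $REG_ID | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64
```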
## Prepare an Azure IoT C SDK development environment
This section is oriented toward a Windows-based workstation. For a Linux example
If `cmake` doesn't find your C++ compiler, you might see build errors while running the command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
- Once the build succeeds, the last few output lines will look similar to the following output:
+ Once the build succeeds, the last few output lines look similar to the following output:
```cmd/sh $ cmake -Dhsm_type_symm_key:BOOL=ON -Duse_prov_client:BOOL=ON ..
This section is oriented toward a Windows-based workstation. For a Linux example
In this section, you update a provisioning sample named **prov\_dev\_client\_sample** located in the Azure IoT C SDK you set up previously.
-This sample code simulates a device boot sequence that sends the provisioning request to your Device Provisioning Service instance. The boot sequence will cause the toaster device to be recognized and assigned to the IoT hub using the custom allocation policy.
+This sample code simulates a device boot sequence that sends the provisioning request to your Device Provisioning Service instance. The boot sequence causes the toaster device to be recognized and assigned to the IoT hub using the custom allocation policy.
-1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service and note down the **_ID Scope_** value.
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service and note down the **ID Scope** value.
- ![Extract Device Provisioning Service endpoint information from the portal blade](./media/quick-create-simulated-device-x509/copy-id-scope.png)
+ ![Extract Device Provisioning Service endpoint information from the portal blade](./media/quick-create-simulated-device-x509/copy-id-scope.png)
-2. In Visual Studio, open the **azure_iot_sdks.sln** solution file that was generated by running CMake earlier. The solution file should be in the following location:
-
- ```
- azure-iot-sdk-c\cmake\azure_iot_sdks.sln
- ```
+2. In Visual Studio, open the **azure_iot_sdks.sln** solution file that was generated by running CMake earlier. The solution file should be in the following location: `azure-iot-sdk-c\cmake\azure_iot_sdks.sln`.
3. In Visual Studio's *Solution Explorer* window, navigate to the **Provision\_Samples** folder. Expand the sample project named **prov\_dev\_client\_sample**. Expand **Source Files**, and open **prov\_dev\_client\_sample.c**.
-4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier.
+4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier.
```c static const char* id_scope = "0ne00002193";
This sample code simulates a device boot sequence that sends the provisioning re
2022-08-03T20:34:41.399 [Information] Executed 'Functions.HttpTrigger1' (Succeeded, Id=12950752-6d75-4f41-844b-c253a6653d4f, Duration=227ms) ``` -- ### Simulate the Contoso heat pump device 1. To simulate the heat pump device, update the call to `prov_dev_set_symmetric_key_info()` in **prov\_dev\_client\_sample.c** again with the heat pump registration ID and derived device key you generated earlier. The key value **6uejA9PfkQgmYylj8Zerp3kcbeVrGZ172YLa7VSnJzg=** shown below is also only given as an example.
This sample code simulates a device boot sequence that sends the provisioning re
Press enter key to exit: ```
-## Troubleshooting custom allocation policies
+## Troubleshoot custom allocation policies
The following table shows expected scenarios and the resulting error codes you might receive. Use this table to help troubleshoot custom allocation policy failures with your Azure Functions.
The steps here assume you created all resources in this tutorial as instructed i
> [!IMPORTANT] > Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you don't accidentally delete the wrong resource group or resources. If you created the IoT Hub inside an existing resource group that contains resources you want to keep, only delete the IoT Hub resource itself instead of deleting the resource group.
->
To delete the resource group by name: 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
-2. In the **Filter by name...** textbox, type the name of the resource group containing your resources, **contoso-us-resource-group**.
+2. In the **Filter by name...** textbox, type the name of the resource group containing your resources, **contoso-us-resource-group**.
3. To the right of your resource group in the result list, select **...** then **Delete resource group**.
iot-edge How To Store Data Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-store-data-blob.md
sudo chmod -R 700 <blob-dir>
## Configure log files
-For information on configuring log files for your module, see these [production best practices](./production-checklist.md#set-up-logs-and-diagnostics).
+The default output log level is `Info`. To change it, set the `LogLevel` environment variable for this module in the deployment manifest (a configuration sketch follows this list). `LogLevel` accepts the following values:
+
+* Critical
+* Error
+* Warning
+* Info
+* Debug
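A minimal sketch of where this setting goes, assuming the blob storage module in your deployment manifest is named `azureblobstorageoniotedge` and that you deploy the manifest with the Azure CLI (the device ID, hub name, and manifest path are placeholders):

```bash
# In deployment.json, under the module (assumed name "azureblobstorageoniotedge"),
# add an env section such as:
#   "env": { "LogLevel": { "value": "Debug" } }
# Then push the updated manifest to the device:
az iot edge set-modules --device-id <device-id> --hub-name <hub-name> --content ./deployment.json
```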
+For information on configuring log files for your module, see these [production best practices](./production-checklist.md#set-up-logs-and-diagnostics).
## Connect to your blob storage module You can use the account name and account key that you configured for your module to access the blob storage on your IoT Edge device.
load-testing Resource Jmeter Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/resource-jmeter-support.md
See the Azure Load Testing overview to learn [how Azure Load Testing works](./ov
## Supported Apache JMeter version
-Azure Load Testing uses Apache JMeter version 5.5 for running load tests.
+Azure Load Testing uses Apache JMeter version 5.6.3 for running load tests.
## Apache JMeter support details
machine-learning Concept Model Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-monitoring.md
Model monitoring is the last step in the machine learning end-to-end lifecycle.
Unlike traditional software systems, the behavior of machine learning systems is governed not just by rules specified in code, but also by model behavior learned from data. Therefore, data distribution changes, training-serving skew, data quality issues, shifts in environments, or consumer behavior changes can all cause a model to become stale. When a model becomes stale, its performance can degrade to the point that it fails to add business value or starts to cause serious compliance issues in highly regulated environments.
+## Limitations of model monitoring in Azure Machine Learning
+
+Azure Machine Learning model monitoring supports only credential-based authentication (for example, a SAS token) to access data contained in datastores. To learn more about datastores and authentication modes, see [Data administration](how-to-administrate-data-authentication.md).
+ ## How model monitoring works in Azure Machine Learning To implement monitoring, Azure Machine Learning acquires monitoring signals by performing statistical computations on streamed production inference data and reference data. The reference data can be historical training data, validation data, or ground truth data. On the other hand, the production inference data refers to the model's input and output data collected in production.
machine-learning How To Access Resources From Endpoints Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md
Previously updated : 04/07/2022 Last updated : 03/18/2024 #Customer intent: As a data scientist, I want to securely access Azure resources for my machine learning model deployment with an online endpoint and managed identity.
Learn how to access Azure resources from your scoring script with an online endpoint and either a system-assigned managed identity or a user-assigned managed identity.
-Both managed endpoints and Kubernetes endpoints allow Azure Machine Learning to manage the burden of provisioning your compute resource and deploying your machine learning model. Typically your model needs to access Azure resources such as the Azure Container Registry or your blob storage for inferencing; with a managed identity you can access these resources without needing to manage credentials in your code. [Learn more about managed identities](../active-directory/managed-identities-azure-resources/overview.md).
+Both managed endpoints and Kubernetes endpoints allow Azure Machine Learning to manage the burden of provisioning your compute resource and deploying your machine learning model. Typically your model needs to access Azure resources such as the Azure Container Registry or your blob storage for inferencing; with a managed identity, you can access these resources without needing to manage credentials in your code. [Learn more about managed identities](../active-directory/managed-identities-azure-resources/overview.md).
-This guide assumes you don't have a managed identity, a storage account or an online endpoint. If you already have these components, skip to the [give access permission to the managed identity](#give-access-permission-to-the-managed-identity) section.
+This guide assumes you don't have a managed identity, a storage account, or an online endpoint. If you already have these components, skip to the [Give access permission to the managed identity](#give-access-permission-to-the-managed-identity) section.
## Prerequisites
This guide assumes you don't have a managed identity, a storage account or an on
* Install and configure the Azure CLI and ML (v2) extension. For more information, see [Install, set up, and use the 2.0 CLI](how-to-configure-cli.md).
-* An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
+* An Azure resource group, in which you (or the service principal you use) need to have *User Access Administrator* and *Contributor* access. You have such a resource group if you configured your ML extension per the preceding article.
-* An Azure Machine Learning workspace. You'll have a workspace if you configured your ML extension per the above article.
+* An Azure Machine Learning workspace. You already have a workspace if you configured your ML extension per the preceding article.
-* A trained machine learning model ready for scoring and deployment. If you are following along with the sample, a model is provided.
+* A trained machine learning model ready for scoring and deployment. If you're following along with the sample, a model is provided.
* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
This guide assumes you don't have a managed identity, a storage account or an on
az configure --defaults gitworkspace=<Azure Machine Learning workspace name> group=<resource group> ```
-* To follow along with the sample, clone the samples repository
+* To follow along with the sample, clone the samples repository and then change directory to *cli*.
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
This guide assumes you don't have a managed identity, a storage account or an on
* Install and configure the Azure CLI and ML (v2) extension. For more information, see [Install, set up, and use the 2.0 CLI](how-to-configure-cli.md).
-* An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
+* An Azure resource group, in which you (or the service principal you use) need to have *User Access Administrator* and *Contributor* access. You have such a resource group if you configured your ML extension per the preceding article.
-* An Azure Machine Learning workspace. You'll have a workspace if you configured your ML extension per the above article.
+* An Azure Machine Learning workspace. You already have a workspace if you configured your ML extension per the preceding article.
-* A trained machine learning model ready for scoring and deployment. If you are following along with the sample, a model is provided.
+* A trained machine learning model ready for scoring and deployment. If you're following along with the sample, a model is provided.
* If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
This guide assumes you don't have a managed identity, a storage account or an on
az configure --defaults gitworkspace=<Azure Machine Learning workspace name> group=<resource group> ```
-* To follow along with the sample, clone the samples repository
+* To follow along with the sample, clone the samples repository and then change directory to *cli*.
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
This guide assumes you don't have a managed identity, a storage account or an on
* Install and configure the Azure Machine Learning Python SDK (v2). For more information, see [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install).
-* An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
+* An Azure resource group, in which you (or the service principal you use) need to have *User Access Administrator* and *Contributor* access. You have such a resource group if you configured your ML extension per the preceding article.
-* An Azure Machine Learning workspace. You'll have a workspace if you configured your ML extension per the above article.
+* An Azure Machine Learning workspace. You already have a workspace if you configured your ML extension per the preceding article.
-* A trained machine learning model ready for scoring and deployment. If you are following along with the sample, a model is provided.
+* A trained machine learning model ready for scoring and deployment. If you're following along with the sample, a model is provided.
-* Clone the samples repository.
+* Clone the samples repository, and then change to the sample directory.
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1 cd azureml-examples/sdk/endpoints/online/managed/managed-identities ```
-* To follow along with this notebook, access the companion [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb) within in the `sdk/endpoints/online/managed/managed-identities` directory.
-* Additional Python packages are required for this example:
+* To follow along with this notebook, access the companion [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb) in the *sdk/endpoints/online/managed/managed-identities* directory.
- * Microsoft Azure Storage Management Client
+* Other Python packages are required for this example:
+ * Microsoft Azure Storage Management Client
* Microsoft Azure Authorization Management Client Install them with the following code:
This guide assumes you don't have a managed identity, a storage account or an on
* To use Azure Machine Learning, you must have an Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* Role creation permissions for your subscription or the Azure resources accessed by the User-assigned identity.
+* Role creation permissions for your subscription or the Azure resources accessed by the user-assigned identity.
* Install and configure the Azure Machine Learning Python SDK (v2). For more information, see [Install and set up SDK (v2)](https://aka.ms/sdk-v2-install).
-* An Azure Resource group, in which you (or the service principal you use) need to have `User Access Administrator` and `Contributor` access. You'll have such a resource group if you configured your ML extension per the above article.
+* An Azure Resource group, in which you (or the service principal you use) need to have *User Access Administrator* and *Contributor* access. You have such a resource group if you configured your ML extension per the preceding article.
-* An Azure Machine Learning workspace. You'll have a workspace if you configured your ML extension per the above article.
+* An Azure Machine Learning workspace. You already have a workspace if you configured your ML extension per the preceding article.
-* A trained machine learning model ready for scoring and deployment. If you are following along with the sample, a model is provided.
+* A trained machine learning model ready for scoring and deployment. If you're following along with the sample, a model is provided.
* Clone the samples repository.
This guide assumes you don't have a managed identity, a storage account or an on
git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples/sdk/endpoints/online/managed/managed-identities
```
-* To follow along with this notebook, access the companion [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb) within in the `sdk/endpoints/online/managed/managed-identities` directory.
-* Additional Python packages are required for this example:
+* To follow along with this notebook, access the companion [example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb) in the *sdk/endpoints/online/managed/managed-identities* directory.
- * Microsoft Azure Msi Management Client
+* Other Python packages are required for this example:
+ * Microsoft Azure MSI Management Client
* Microsoft Azure Storage Client
* Microsoft Azure Authorization Management Client

Install them with the following code:
This guide assumes you don't have a managed identity, a storage account or an on
## Limitations
-* The identity for an endpoint is immutable. During endpoint creation, you can associate it with a system-assigned identity (default) or a user-assigned identity. You can't change the identity after the endpoint has been created.
-* If your ARC and blob storage are configured as private, i.e. behind a Vnet, then access from the Kubernetes endpoint should be over the private link regardless of whether your workspace is public or private. More details about private link setting, please refer to [How to secure workspace vnet](./how-to-secure-workspace-vnet.md#azure-container-registry).
-
+* The identity for an endpoint is immutable. During endpoint creation, you can associate it with a system-assigned identity (default) or a user-assigned identity. You can't change the identity after the endpoint is created.
+* If your Azure Container Registry (ACR) and blob storage are configured as private, that is, behind a virtual network, then access from the Kubernetes endpoint should be over the private link regardless of whether your workspace is public or private. For more information about private link settings, see [How to secure workspace vnet](./how-to-secure-workspace-vnet.md#azure-container-registry).
## Configure variables for deployment
The following code exports those values as environment variables:
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-sai.sh" id="configure_storage_names" :::
-After these variables are exported, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the system-assigned managed identity that's generated upon endpoint creation.
+After these variables are exported, create a text file locally. When the endpoint is deployed, the scoring script accesses this text file using the system-assigned managed identity that's generated upon endpoint creation.
# [User-assigned (CLI)](#tab/user-identity-cli)
-Decide on the name of your endpoint, workspace, workspace location and export that value as an environment variable:
+Decide on the name of your endpoint, workspace, and workspace location, then export those values as environment variables:
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="set_variables" :::
Next, specify what you want to name your blob storage account, blob container, a
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="configure_storage_names" :::
-After these variables are exported, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the user-assigned managed identity used in the endpoint.
+After these variables are exported, create a text file locally. When the endpoint is deployed, the scoring script accesses this text file using the user-assigned managed identity used in the endpoint.
Decide on the name of your user identity, and export that value as an environment variable:
Next, specify what you want to name your blob storage account, blob container, a
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=1-specify-storage-details)]
-After these variables are assigned, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the system-assigned managed identity that's generated upon endpoint creation.
+After these variables are assigned, create a text file locally. When the endpoint is deployed, the scoring script accesses this text file using the system-assigned managed identity that's generated upon endpoint creation.
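The notebook cell for this step isn't reproduced here. If you want to create the file by hand, a minimal sketch such as the following works; the file name and contents are placeholders, not values taken from the sample:

```python
# minimal sketch: write the local text file that the scoring script reads at runtime
# (the name and contents below are placeholders, not the sample's actual values)
file_name = "hello.txt"

with open(file_name, "w") as f:
    f.write("Hello from the managed identity example!")
```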
Now, get a handle to the workspace and retrieve its location: [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=1-retrieve-workspace-location)]
-We will use this value to create a storage account.
-
+Use this value to create a storage account.
# [User-assigned (Python)](#tab/user-identity-python) - Assign values for the workspace and deployment-related variables: [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=1-assign-variables)]
Next, specify what you want to name your blob storage account, blob container, a
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=1-specify-storage-details)]
-After these variables are assigned, create a text file locally. When the endpoint is deployed, the scoring script will access this text file using the user-assigned managed identity that's generated upon endpoint creation.
+After these variables are assigned, create a text file locally. When the endpoint is deployed, the scoring script accesses this text file using the user-assigned managed identity that's generated upon endpoint creation.
Decide on the name of your user identity: [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=1-decide-name-user-identity)]
Now, get a handle to the workspace and retrieve its location:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=1-retrieve-workspace-location)]
-We will use this value to create a storage account.
+Use this value to create a storage account.
## Define the deployment configuration - # [System-assigned (CLI)](#tab/system-identity-cli) To deploy an online endpoint with the CLI, you need to define the configuration in a YAML file. For more information on the YAML schema, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md) document. The YAML files in the following examples are used to create online endpoints.
-The following YAML example is located at `endpoints/online/managed/managed-identities/1-sai-create-endpoint`. The file,
+The following YAML example is located at *endpoints/online/managed/managed-identities/1-sai-create-endpoint*. The file,
* Defines the name by which you want to refer to the endpoint, `my-sai-endpoint`. * Specifies the type of authorization to use to access the endpoint, `auth-mode: key`. :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/managed/managed-identities/1-sai-create-endpoint.yml":::
-This YAML example, `2-sai-deployment.yml`,
+This YAML example, *2-sai-deployment.yml*,
* Specifies that the type of endpoint you want to create is an `online` endpoint. * Indicates that the endpoint has an associated deployment called `blue`.
To deploy an online endpoint with the CLI, you need to define the configuration
The YAML files in the following examples are used to create online endpoints.
-The following YAML example is located at `endpoints/online/managed/managed-identities/1-uai-create-endpoint`. The file,
+The following YAML example is located at *endpoints/online/managed/managed-identities/1-uai-create-endpoint*. The file,
* Defines the name by which you want to refer to the endpoint, `my-uai-endpoint`. * Specifies the type of authorization to use to access the endpoint, `auth-mode: key`.
The following YAML example is located at `endpoints/online/managed/managed-ident
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/managed/managed-identities/1-uai-create-endpoint.yml":::
-This YAML example, `2-sai-deployment.yml`,
+This YAML example, *2-uai-deployment.yml*,
* Specifies that the type of endpoint you want to create is an `online` endpoint. * Indicates that the endpoint has an associated deployment called `blue`.
This YAML example, `2-sai-deployment.yml`,
# [System-assigned (Python)](#tab/system-identity-python)
-To deploy an online endpoint with the Python SDK (v2), objects can be used to define the configuration as below. Alternatively, YAML files can be loaded using the `.load` method.
+To deploy an online endpoint with the Python SDK (v2), you can use objects to define the following configuration. Alternatively, you can load YAML files by using the `.load` method.
The following Python endpoint object:
-* Assigns the name by which you want to refer to the endpoint to the variable `endpoint_name.
+* Assigns the name by which you want to refer to the endpoint to the variable `endpoint_name`.
* Specifies the type of authorization to use to access the endpoint `auth-mode="key"`. [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=2-define-endpoint-configuration)]
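The notebook cell referenced above isn't reproduced in this changelog. As a rough sketch (not the notebook's exact code), the endpoint object can look like the following, assuming `endpoint_name` was assigned in an earlier step:

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint

# key-based auth; the endpoint identity defaults to system-assigned
endpoint = ManagedOnlineEndpoint(
    name=endpoint_name,  # assumed to be set in an earlier step
    description="Endpoint that reads a blob by using its system-assigned identity",
    auth_mode="key",
)
```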
This deployment object:
* Specifies that the type of deployment you want to create is a `ManagedOnlineDeployment` via the class. * Indicates that the endpoint has an associated deployment called `blue`. * Configures the details of the deployment such as the `name` and `instance_count`
-* Defines additional objects inline and associates them with the deployment for `Model`,`CodeConfiguration`, and `Environment`.
+* Defines extra objects inline and associates them with the deployment for `Model`, `CodeConfiguration`, and `Environment`.
* Includes environment variables needed for the system-assigned managed identity to access storage. - [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=2-define-deployment-configuration)] # [User-assigned (Python)](#tab/user-identity-python)
-To deploy an online endpoint with the Python SDK (v2), objects can be used to define the configuration as below. Alternatively, YAML files can be loaded using the `.load` method.
+To deploy an online endpoint with the Python SDK (v2), you can use objects to define the following configuration. Alternatively, you can load YAML files by using the `.load` method.
-For a user-assigned identity, we will define the endpoint configuration below once the User-Assigned Managed Identity has been created.
+For a user-assigned identity, you define the endpoint configuration after the user-assigned managed identity is created.
This deployment object: * Specifies that the type of deployment you want to create is a `ManagedOnlineDeployment` via the class. * Indicates that the endpoint has an associated deployment called `blue`. * Configures the details of the deployment such as the `name` and `instance_count`
-* Defines additional objects inline and associates them with the deployment for `Model`,`CodeConfiguration`, and `Environment`.
+* Defines more objects inline and associates them with the deployment for `Model`, `CodeConfiguration`, and `Environment`.
* Includes environment variables needed for the user-assigned managed identity to access storage.
-* Adds a placeholder environment variable for `UAI_CLIENT_ID`, which will be added after creating one and before actually deploying this configuration.
-
+* Adds a placeholder environment variable for `UAI_CLIENT_ID`, which you fill in after you create the user-assigned managed identity and before you deploy this configuration.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=2-define-deployment-configuration)]
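As a rough sketch of the deployment object described in the preceding list (paths, names, and variables are placeholders rather than the notebook's exact values), the configuration can look like this:

```python
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    Model,
)

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=endpoint_name,               # assumed from an earlier step
    model=Model(path="../../model-1/model/"),  # hypothetical model path
    code_configuration=CodeConfiguration(
        code="../../model-1/onlinescoring/",   # hypothetical code folder
        scoring_script="score_managedidentity.py",
    ),
    environment=Environment(
        conda_file="../../model-1/environment/conda.yml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
    environment_variables={
        "STORAGE_ACCOUNT_NAME": storage_account_name,    # assumed variables from earlier steps
        "STORAGE_CONTAINER_NAME": storage_container_name,
        "FILE_NAME": file_name,
        "UAI_CLIENT_ID": "uai_client_id_placeholder",    # filled in after the identity is created
    },
)
```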
+## Create the managed identity
-## Create the managed identity
To access Azure resources, create a system-assigned or user-assigned managed identity for your online endpoint. # [System-assigned (CLI)](#tab/system-identity-cli)
When you [create an online endpoint](#create-an-online-endpoint), a system-assig
# [User-assigned (CLI)](#tab/user-identity-cli)
-To create a user-assigned managed identity, use the following:
+To create a user-assigned managed identity, use the following command:
::: code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-access-resource-uai.sh" id="create_user_identity" :::
Then, create the identity:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=3-create-identity)]
-Now, retrieve the identity object, which contains details we will use below:
+Now, retrieve the identity object, which contains details you'll use in later steps:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=3-retrieve-identity-object)]
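The referenced cells rely on the MSI management client. A minimal sketch, assuming `credential`, `subscription_id`, `resource_group`, `workspace_location`, and a hypothetical `uai_name` variable were set earlier:

```python
from azure.mgmt.msi import ManagedServiceIdentityClient

msi_client = ManagedServiceIdentityClient(credential, subscription_id)

# create (or update) the user-assigned identity, then read the IDs used in later steps
uai_identity = msi_client.user_assigned_identities.create_or_update(
    resource_group_name=resource_group,
    resource_name=uai_name,
    parameters={"location": workspace_location},
)

uai_principal_id = uai_identity.principal_id  # used for role assignments
uai_client_id = uai_identity.client_id        # used to fill the UAI_CLIENT_ID placeholder
```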
Now, retrieve the identity object, which contains details we will use below:
## Create storage account and container
-For this example, create a blob storage account and blob container, and then upload the previously created text file to the blob container.
-This is the storage account and blob container that you'll give the online endpoint and managed identity access to.
+For this example, create a blob storage account and blob container, and then upload the previously created text file to the blob container. You give the online endpoint and managed identity access to this storage account and blob container.
# [System-assigned (CLI)](#tab/system-identity-cli)
First, get a handle to the `StorageManagementClient`:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=4-get-handle)] - Then, create a storage account: [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=4-create-storage-account)]
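The notebook's exact cells aren't shown here. A minimal sketch of the management-plane calls, assuming the variables from the earlier configuration step (`credential`, `subscription_id`, `resource_group`, `workspace_location`, `storage_account_name`, and `storage_container_name`):

```python
from azure.mgmt.storage import StorageManagementClient

storage_client = StorageManagementClient(credential, subscription_id)

# create the storage account and wait for the operation to finish
poller = storage_client.storage_accounts.begin_create(
    resource_group,
    storage_account_name,
    {
        "location": workspace_location,
        "kind": "StorageV2",
        "sku": {"name": "Standard_LRS"},
    },
)
storage_account = poller.result()

# create the blob container that will hold the text file
storage_client.blob_containers.create(
    resource_group, storage_account_name, storage_container_name, {}
)
```

Uploading the text file itself is a data-plane operation, typically done with the `azure-storage-blob` package after retrieving the account keys.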
If you encounter any issues, see [Troubleshooting online endpoints deployment an
The following Python endpoint object:
-* Assigns the name by which you want to refer to the endpoint to the variable `endpoint_name.
+* Assigns the name by which you want to refer to the endpoint to the variable `endpoint_name`.
* Specifies the type of authorization to use to access the endpoint `auth-mode="key"`.
-* Defines its identity as a ManagedServiceIdentity and specifies the Managed Identity created above as user-assigned.
+* Defines its identity as a `ManagedServiceIdentity` and specifies the managed identity as user-assigned.
Define and deploy the endpoint: [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=5-create-online-endpoint)] - Check the status of the endpoint via the details of the deployed endpoint object with the following code: [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=5-get-details)]
Give the user-assigned managed identity permission to access the default workspace storage.
# [System-assigned (Python)](#tab/system-identity-python)
-First, make an `AuthorizationManagementClient` to list Role Definitions:
+First, make an `AuthorizationManagementClient` to list role definitions:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=6-get-role-definitions-client)]
-Now, initialize one to make Role Assignments:
+Now, initialize one to make role assignments:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=6-get-role-assignments-client)] -
-Then, get the Principal ID of the System-assigned managed identity:
+Then, get the principal ID of the system-assigned managed identity:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=6-get-sai-details)]
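A minimal sketch of what that lookup amounts to, assuming `ml_client` and `endpoint_name` from earlier steps:

```python
# the endpoint object returned by the SDK carries the system-assigned identity,
# including the principal ID used when assigning roles
endpoint = ml_client.online_endpoints.get(endpoint_name)
system_principal_id = endpoint.identity.principal_id
```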
-Next, assign the `Storage Blob Data Reader` role to the endpoint. The Role Definition is retrieved by name and passed along with the Principal ID of the endpoint. The role is applied at the scope of the storage account created above and allows the endpoint to read the file.
+Next, assign the *Storage Blob Data Reader* role to the endpoint. The role definition is retrieved by name and passed along with the principal ID of the endpoint. The role is applied at the scope of the storage account created earlier and allows the endpoint to read the file.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-sai.ipynb?name=6-give-permission-user-storage-account)] - # [User-assigned (Python)](#tab/user-identity-python)
-First, make an `AuthorizationManagementClient` to list Role Definitions:
+First, make an `AuthorizationManagementClient` to list role definitions:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=6-get-role-definitions-client)]
-Now, initialize one to make Role Assignments:
+Now, initialize one to make role assignments:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=6-get-role-assignments-client)]
-Then, get the Principal ID and Client ID of the User-assigned managed identity. To assign roles, we only need the Principal ID. However, we will use the Client ID to fill the `UAI_CLIENT_ID` placeholder environment variable before creating the deployment.
+Then, get the principal ID and client ID of the user-assigned managed identity. To assign roles, you only need the principal ID. However, you use the client ID to fill the `UAI_CLIENT_ID` placeholder environment variable before creating the deployment.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=6-get-uai-details)]
-Next, assign the `Storage Blob Data Reader` role to the endpoint. The Role Definition is retrieved by name and passed along with the Principal ID of the endpoint. The role is applied at the scope of the storage account created above to allow the endpoint to read the file.
+Next, assign the *Storage Blob Data Reader* role to the endpoint. The role definition is retrieved by name and passed along with the principal ID of the endpoint. The role is applied at the scope of the storage account created earlier to allow the endpoint to read the file.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=6-give-permission-user-storage-account)]
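A rough sketch of the role assignment, assuming `credential`, `subscription_id`, the `storage_account` object, and the `uai_principal_id` value from the earlier sketches; the notebook's exact code may differ:

```python
import uuid

from azure.mgmt.authorization import AuthorizationManagementClient

auth_client = AuthorizationManagementClient(credential, subscription_id)

# scope the assignment to the storage account created earlier
scope = storage_account.id
role_name = "Storage Blob Data Reader"
role_definition = next(
    iter(auth_client.role_definitions.list(scope, filter=f"roleName eq '{role_name}'"))
)

# role assignment names must be GUIDs; the body mirrors RoleAssignmentCreateParameters
auth_client.role_assignments.create(
    scope,
    str(uuid.uuid4()),
    {
        "role_definition_id": role_definition.id,
        "principal_id": uai_principal_id,
    },
)
```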
-For the next two permissions, we'll need the workspace and container registry objects:
+For the next two permissions, you need the workspace and container registry objects:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=6-retrieve-workspace-acr)]
-Next, assign the `AcrPull` role to the User-assigned identity. This role allows images to be pulled from an Azure Container Registry. The scope is applied at the level of the container registry associated with the workspace.
+Next, assign the *AcrPull* role to the user-assigned identity. This role allows images to be pulled from an Azure Container Registry. The scope is applied at the level of the container registry associated with the workspace.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=6-give-permission-container-registry)]
-Finally, assign the `Storage Blob Data Reader` role to the endpoint at the workspace storage account scope. This role assignment will allow the endpoint to read blobs in the workspace storage account as well as the newly created storage account.
+Finally, assign the *Storage Blob Data Reader* role to the endpoint at the workspace storage account scope. This role assignment allows the endpoint to read blobs in the workspace storage account as well as the newly created storage account.
The role has the same name and capabilities as the first role assigned earlier; however, it's applied at a different scope and has a different ID.
Now that the deployment is confirmed, set the traffic to 100%:
# [User-assigned (Python)](#tab/user-identity-python)
-Before we deploy, update the `UAI_CLIENT_ID` environment variable placeholder.
+Before you deploy, update the `UAI_CLIENT_ID` environment variable placeholder.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=7-update-uai-client-id)]
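Conceptually, this is a one-line update on the deployment object from the earlier sketch, assuming `uai_client_id` was captured when the identity was created:

```python
# replace the placeholder with the real client ID before creating the deployment
deployment.environment_variables["UAI_CLIENT_ID"] = uai_client_id
```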
Now that the deployment is confirmed, set the traffic to 100%:
-When your deployment completes, the model, the environment, and the endpoint are registered to your Azure Machine Learning workspace.
+When your deployment completes, the model, the environment, and the endpoint are registered to your Azure Machine Learning workspace.
## Test the endpoint
To call your endpoint, run:
# [User-assigned (Python)](#tab/user-identity-python) - [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=8-confirm-endpoint-deployed-successfully)]
Delete the endpoint:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=9-delete-endpoint)] - Delete the storage account: [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/managed-identities/online-endpoints-managed-identity-uai.ipynb?name=9-delete-storage-account)]
Delete the User-assigned managed identity:
-## Next steps
+## Related content
-* [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md).
-* For more on deployment, see [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md).
-* For more information on using the CLI, see [Use the CLI extension for Azure Machine Learning](how-to-configure-cli.md).
-* To see which compute resources you can use, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
-* For more on costs, see [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md).
-* For information on monitoring endpoints, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md).
-* For limitations for managed online endpoint and Kubernetes online endpoint, see [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints).
+* [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
+* [Perform safe rollout of new deployments for real-time inference](how-to-safely-rollout-online-endpoints.md)
+* [Install and set up the CLI (v2)](how-to-configure-cli.md)
+* [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md)
+* [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)
+* [Monitor online endpoints](how-to-monitor-online-endpoints.md)
+* For limitations of managed online endpoint and Kubernetes online endpoint, see [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints)
machine-learning How To Collect Production Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-collect-production-data.md
You can enable data collection for new or existing online endpoint deployments.
If you're interested in collecting production inference data for an MLflow model that is deployed to a real-time endpoint, see [Data collection for MLflow models](#collect-data-for-mlflow-models). - ## Prerequisites # [Azure CLI](#tab/azure-cli)
To begin, add custom logging code to your scoring script (`score.py`). For custo
> [!NOTE] > Currently, the `collect()` API logs only pandas DataFrames. If the data is not in a DataFrame when passed to `collect()`, it won't get logged to storage and an error will be reported.
-The following code is an example of a full scoring script (`score.py`) that uses the custom logging Python SDK. In this example, a third `Collector` called `inputs_outputs_collector` logs a joined DataFrame of the `model_inputs` and the `model_outputs`. This joined DataFrame enables more monitoring signals such as feature attribution drift. If you're not interested in these monitoring signals, you can remove this `Collector`.
+The following code is an example of a full scoring script (`score.py`) that uses the custom logging Python SDK.
```python import pandas as pd
def predict(input_df):
return output_df ```
+### Update your scoring script to log custom unique IDs
+
+In addition to logging pandas DataFrames directly within your scoring script, you can log data with unique IDs of your choice. These IDs can come from your application or an external system, or you can generate them yourself. If you don't provide a custom ID as described in this section, the data collector autogenerates a unique `correlationid` to help you correlate your model's inputs and outputs later. If you supply a custom ID, the `correlationid` field in the logged data contains the value of your supplied ID.
+
+1. First complete the steps in the previous section, then import the `azureml.ai.monitoring.context` package by adding the following line to your scoring script:
+
+ ```python
+ from azureml.ai.monitoring.context import BasicCorrelationContext
+ ```
+
+1. In your scoring script, instantiate a `BasicCorrelationContext` object and pass in the `id` you wish to log for that row. We recommend that this `id` be a unique ID from your system, so that you can uniquely identify each logged row from your Blob Storage. Pass this object into your `collect()` API call as a parameter:
+
+ ```python
+ # create a context with a custom unique id
+ artificial_context = BasicCorrelationContext(id='test')
+
+ # collect inputs data, store correlation_context
+ context = inputs_collector.collect(input_df, artificial_context)
+ ```
+
+1. Ensure that you pass the context into your `outputs_collector` so that your model inputs and outputs have the same unique ID logged with them and can be easily correlated later:
+
+ ```python
+ # collect outputs data, pass in context so inputs and outputs data can be correlated later
+ outputs_collector.collect(output_df, context)
+ ```
+
+The following code is an example of a full scoring script (`score.py`) that logs custom unique IDs.
+
+```python
+import pandas as pd
+import json
+from azureml.ai.monitoring import Collector
+from azureml.ai.monitoring.context import BasicCorrelationContext
+
+def init():
+    global inputs_collector, outputs_collector
+
+ # instantiate collectors with appropriate names, make sure align with deployment spec
+ inputs_collector = Collector(name='model_inputs')
+ outputs_collector = Collector(name='model_outputs')
+
+def run(data):
+ # json data: { "data" : { "col1": [1,2,3], "col2": [2,3,4] } }
+ pdf_data = preprocess(json.loads(data))
+
+ # tabular data: { "col1": [1,2,3], "col2": [2,3,4] }
+ input_df = pd.DataFrame(pdf_data)
+
+ # create a context with a custom unique id
+ artificial_context = BasicCorrelationContext(id='test')
+
+ # collect inputs data, store correlation_context
+ context = inputs_collector.collect(input_df, artificial_context)
+
+ # perform scoring with pandas Dataframe, return value is also pandas Dataframe
+ output_df = predict(input_df)
+
+ # collect outputs data, pass in context so inputs and outputs data can be correlated later
+ outputs_collector.collect(output_df, context)
+
+ return output_df.to_dict()
+
+def preprocess(json_data):
+ # preprocess the payload to ensure it can be converted to pandas DataFrame
+ return json_data["data"]
+
+def predict(input_df):
+ # process input and return with outputs
+ ...
+
+ return output_df
+```
+
+#### Collect data for model performance monitoring
+
+If you want to use your collected data for model performance monitoring, it's important that each logged row has a unique `correlationid` that can be used to correlate the data with ground truth data, when such data becomes available. The data collector will autogenerate a unique `correlationid` for each logged row and include this autogenerated ID in the `correlationid` field in the JSON object. For more information on the JSON schema, see [store collected data in blob storage](#store-collected-data-in-blob-storage).
+
+If you want to use your own unique ID for logging with your production data, we recommend that you log this ID as a separate column in your pandas DataFrame, since the [data collector batches requests](#data-collector-batching) that are in close proximity to one another. Logging the `correlationid` as a separate column makes it readily available downstream for integration with ground truth data.
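A minimal sketch of that approach; the DataFrame contents and IDs are made up for illustration, and `inputs_collector` is assumed to be the `Collector` created in your scoring script:

```python
import pandas as pd

from azureml.ai.monitoring import Collector

inputs_collector = Collector(name="model_inputs")

# hypothetical data: your own unique IDs logged as a regular DataFrame column
input_df = pd.DataFrame({"col1": [1, 2, 3], "col2": [2, 3, 4]})
input_df["correlationid"] = ["id-001", "id-002", "id-003"]

context = inputs_collector.collect(input_df)
```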
+ ### Update your dependencies
-Before you can create your deployment with the updated scoring script, you need to create your environment with the base image `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04` and the appropriate conda dependencies. Thereafter, you can build the environment using the specification in the following YAML.
+Before you can create your deployment with the updated scoring script, you need to create your environment with the base image `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04` and the appropriate conda dependencies. Thereafter, you can build the environment using the specification in the following YAML.
```yml channels:
For more information on how to format your deployment YAML for data collection w
For more information on how to format your deployment YAML for data collection with managed online endpoints, see [CLI (v2) managed online deployment YAML schema](reference-yaml-deployment-managed-online.md).
-### Store collected data in a blob
+## Perform payload logging
+
+In addition to custom logging with the provided Python SDK, you can collect request and response HTTP payload data directly without the need to augment your scoring script (`score.py`).
+
+1. To enable payload logging, in your deployment YAML, use the names `request` and `response`:
+
+ ```yml
+ $schema: http://azureml/sdk-2-0/OnlineDeployment.json
+
+ endpoint_name: my_endpoint
+ name: blue
+ model: azureml:my-model-m1:1
+ environment: azureml:env-m1:1
+ data_collector:
+ collections:
+ request:
+ enabled: 'True'
+ response:
+ enabled: 'True'
+ ```
+
+1. Deploy the model with payload logging enabled:
+
+ ```bash
+ $ az ml online-deployment create -f deployment.YAML
+ ```
+
+With payload logging, the collected data is not guaranteed to be in tabular format. Therefore, if you want to use collected payload data with model monitoring, you'll be required to provide a preprocessing component to make the data tabular. If you're interested in a seamless model monitoring experience, we recommend using the [custom logging Python SDK](#perform-custom-logging-for-model-monitoring).
+
+As your deployment is used, the collected data flows to your workspace Blob storage. The following JSON code is an example of an HTTP _request_ collected:
+
+```json
+{"specversion":"1.0",
+"id":"19790b87-a63c-4295-9a67-febb2d8fbce0",
+"source":"/subscriptions/d511f82f-71ba-49a4-8233-d7be8a3650f4/resourceGroups/mire2etesting/providers/Microsoft.MachineLearningServices/workspaces/mirmasterenvws/onlineEndpoints/localdev-endpoint/deployments/localdev",
+"type":"azureml.inference.request",
+"datacontenttype":"application/json",
+"time":"2022-05-25T08:59:48Z",
+"data":{"data": [ [1,2,3,4,5,6,7,8,9,10], [10,9,8,7,6,5,4,3,2,1]]},
+"path":"/score",
+"method":"POST",
+"contentrange":"bytes 0-59/*",
+"correlationid":"f6e806c9-1a9a-446b-baa2-901373162105","xrequestid":"f6e806c9-1a9a-446b-baa2-901373162105"}
+```
+
+And the following JSON code is another example of an HTTP _response_ collected:
+
+```json
+{"specversion":"1.0",
+"id":"bbd80e51-8855-455f-a719-970023f41e7d",
+"source":"/subscriptions/d511f82f-71ba-49a4-8233-d7be8a3650f4/resourceGroups/mire2etesting/providers/Microsoft.MachineLearningServices/workspaces/mirmasterenvws/onlineEndpoints/localdev-endpoint/deployments/localdev",
+"type":"azureml.inference.response",
+"datacontenttype":"application/json",
+"time":"2022-05-25T08:59:48Z",
+"data":[11055.977245525679, 4503.079536107787],
+"contentrange":"bytes 0-38/39",
+"correlationid":"f6e806c9-1a9a-446b-baa2-901373162105","xrequestid":"f6e806c9-1a9a-446b-baa2-901373162105"}
+```
+
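If you do need to make payload data tabular yourself (see the note about providing a preprocessing component earlier in this section), a minimal sketch along these lines works for the request schema shown above; the file name is a hypothetical local copy of a collected .jsonl file:

```python
import json

import pandas as pd

rows = []
with open("collected_requests.jsonl") as f:
    for line in f:
        event = json.loads(line)
        rows.extend(event["data"]["data"])  # matches the request example above

df = pd.DataFrame(rows)
print(df.head())
```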
+## Store collected data in blob storage
+
+Data collection allows you to log production inference data to a Blob storage destination of your choice. The data destination settings are configurable at the `collection_name` level.
__Blob storage output/format__:
The collected data follows the following JSON schema. The collected data is avai
> [!TIP] > Line breaks are shown only for readability. In your collected .jsonl files, there won't be any line breaks. - #### Store large payloads
-If the payload of your data is greater than 256 KB, there will be an event in the `{instance_id}.jsonl` file contained within the `{endpoint_name}/{deployment_name}/request/.../{instance_id}.jsonl` path that points to a raw file path, which should have the following path: `blob_url/{blob_container}/{blob_path}/{endpoint_name}/{deployment_name}/{rolled_time}/{instance_id}.jsonl`. The collected data will exist at this path.
+If the payload of your data is greater than 4 MB, the event in the `{instance_id}.jsonl` file (under the `{endpoint_name}/{deployment_name}/request/.../{instance_id}.jsonl` path) points to a raw file path of the form `blob_url/{blob_container}/{blob_path}/{endpoint_name}/{deployment_name}/{rolled_time}/{instance_id}.jsonl`. The collected data exists at this path.
#### Store binary data
With collected binary data, we show the raw file directly, with `instance_id` as
} ```
+#### Data collector batching
+
+If requests are sent within short time intervals of one another, the data collector batches them together into the same JSON object. For example, if you run a script to send sample data to your endpoint, and the deployment has data collection enabled, some of the requests can get batched together, depending on the time interval between them. If you're using data collection with [Azure Machine Learning model monitoring](concept-model-monitoring.md), the model monitoring service handles each request independently. However, if you expect each logged row of data to have its own unique `correlationid`, you can include the `correlationid` as a column in the pandas DataFrame you're logging with the data collector. For more information on how you can include your unique `correlationid` as a column in the pandas DataFrame, see [Collect data for model performance monitoring](#collect-data-for-model-performance-monitoring).
+
+Here is an example of two logged requests that are batched together:
+
+```json
+{"specversion":"1.0",
+"id":"720b8867-54a2-4876-80eb-1fd6a8975770",
+"source":"/subscriptions/79a1ba0c-35bb-436b-bff2-3074d5ff1f89/resourceGroups/rg-bozhlinmomoignite/providers/Microsoft.MachineLearningServices/workspaces/momo-demo-ws/onlineEndpoints/credit-default-mdc-testing-4/deployments/main2",
+"type":"azureml.inference.model_inputs",
+"datacontenttype":"application/json",
+"time":"2024-03-05T18:16:25Z",
+"data":[{"LIMIT_BAL":502970,"AGE":54,"BILL_AMT1":308068,"BILL_AMT2":381402,"BILL_AMT3":442625,"BILL_AMT4":320399,"BILL_AMT5":322616,"BILL_AMT6":397534,"PAY_AMT1":17987,"PAY_AMT2":78764,"PAY_AMT3":26067,"PAY_AMT4":24102,"PAY_AMT5":-1155,"PAY_AMT6":2154,"SEX":2,"EDUCATION":2,"MARRIAGE":2,"PAY_0":0,"PAY_2":0,"PAY_3":0,"PAY_4":0,"PAY_5":0,"PAY_6":0},{"LIMIT_BAL":293458,"AGE":35,"BILL_AMT1":74131,"BILL_AMT2":-71014,"BILL_AMT3":59284,"BILL_AMT4":98926,"BILL_AMT5":110,"BILL_AMT6":1033,"PAY_AMT1":-3926,"PAY_AMT2":-12729,"PAY_AMT3":17405,"PAY_AMT4":25110,"PAY_AMT5":7051,"PAY_AMT6":1623,"SEX":1,"EDUCATION":3,"MARRIAGE":2,"PAY_0":-2,"PAY_2":-2,"PAY_3":-2,"PAY_4":-2,"PAY_5":-1,"PAY_6":-1}],
+"contentrange":"bytes 0-6794/6795",
+"correlationid":"test",
+"xrequestid":"test",
+"modelversion":"default",
+"collectdatatype":"pandas.core.frame.DataFrame",
+"agent":"azureml-ai-monitoring/0.1.0b4"}
+```
+ #### View the data in the studio UI To view the collected data in Blob Storage from the studio UI:
-1. Go to thee **Data** tab in your Azure Machine Learning workspace:
+1. Go to the **Data** tab in your Azure Machine Learning workspace:
:::image type="content" source="./media/how-to-collect-production-data/datastores.png" alt-text="Screenshot highlights Data page in Azure Machine Learning workspace" lightbox="media/how-to-collect-production-data/datastores.png":::
To view the collected data in Blob Storage from the studio UI:
:::image type="content" source="./media/how-to-collect-production-data/data-view.png" alt-text="Screenshot highlights tree structure of data in Datastore" lightbox="media/how-to-collect-production-data/data-view.png":::
-## Log payload
-
-In addition to custom logging with the provided Python SDK, you can collect request and response HTTP payload data directly without the need to augment your scoring script (`score.py`).
-
-1. To enable payload logging, in your deployment YAML, use the names `request` and `response`:
-
- ```yml
- $schema: http://azureml/sdk-2-0/OnlineDeployment.json
-
- endpoint_name: my_endpoint
- name: blue
- model: azureml:my-model-m1:1
- environment: azureml:env-m1:1
- data_collector:
- collections:
- request:
- enabled: 'True'
- response:
- enabled: 'True'
- ```
-
-1. Deploy the model with payload logging enabled:
-
- ```bash
- $ az ml online-deployment create -f deployment.YAML
- ```
-
-With payload logging, the collected data is not guaranteed to be in tabular format. Therefore, if you want to use collected payload data with model monitoring, you'll be required to provide a preprocessing component to make the data tabular. If you're interested in a seamless model monitoring experience, we recommend using the [custom logging Python SDK](#perform-custom-logging-for-model-monitoring).
-
-As your deployment is used, the collected data flows to your workspace Blob storage. The following JSON code is an example of an HTTP _request_ collected:
-
-```json
-{"specversion":"1.0",
-"id":"19790b87-a63c-4295-9a67-febb2d8fbce0",
-"source":"/subscriptions/d511f82f-71ba-49a4-8233-d7be8a3650f4/resourceGroups/mire2etesting/providers/Microsoft.MachineLearningServices/workspaces/mirmasterenvws/onlineEndpoints/localdev-endpoint/deployments/localdev",
-"type":"azureml.inference.request",
-"datacontenttype":"application/json",
-"time":"2022-05-25T08:59:48Z",
-"data":{"data": [ [1,2,3,4,5,6,7,8,9,10], [10,9,8,7,6,5,4,3,2,1]]},
-"path":"/score",
-"method":"POST",
-"contentrange":"bytes 0-59/*",
-"correlationid":"f6e806c9-1a9a-446b-baa2-901373162105","xrequestid":"f6e806c9-1a9a-446b-baa2-901373162105"}
-```
-
-And the following JSON code is another example of an HTTP _response_ collected:
-
-```json
-{"specversion":"1.0",
-"id":"bbd80e51-8855-455f-a719-970023f41e7d",
-"source":"/subscriptions/d511f82f-71ba-49a4-8233-d7be8a3650f4/resourceGroups/mire2etesting/providers/Microsoft.MachineLearningServices/workspaces/mirmasterenvws/onlineEndpoints/localdev-endpoint/deployments/localdev",
-"type":"azureml.inference.response",
-"datacontenttype":"application/json",
-"time":"2022-05-25T08:59:48Z",
-"data":[11055.977245525679, 4503.079536107787],
-"contentrange":"bytes 0-38/39",
-"correlationid":"f6e806c9-1a9a-446b-baa2-901373162105","xrequestid":"f6e806c9-1a9a-446b-baa2-901373162105"}
-```
- ## Collect data for MLflow models If you're deploying an MLflow model to an Azure Machine Learning online endpoint, you can enable production inference data collection with a single toggle in the studio UI. If data collection is toggled on, Azure Machine Learning auto-instruments your scoring script with custom logging code to ensure that the production data is logged to your workspace Blob Storage. Your model monitors can then use the data to monitor the performance of your MLflow model in production.
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
- sdkv2 - build-2023 - ignite-2023
+ - update-code
# Create and run machine learning pipelines using components with the Azure Machine Learning SDK v2
Fashion-MNIST is a dataset of fashion images divided into 10 classes. Each image
To define the input data of a job that references the Web-based data, run:
-```
-[!notebook-python[] (~/azureml-examples-main/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=define-input)]
-```
+
+[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=define-input)]
+ By defining an `Input`, you create a reference to the data source location. The data remains in its existing location, so no extra storage cost is incurred.
machine-learning How To Create Component Pipelines Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-ui.md
This example uses `train.yml` [in the directory](https://github.com/Azure/azurem
>[!Note] > When you register components in the UI, `code` defined in the component YAML file can only point to the folder where the YAML file is located, or to its subfolders, which means you can't specify `../` for `code` because the UI can't recognize the parent directory. > `additional_includes` can only point to the current folder or a subfolder.
+> Currently, the UI only supports registering components of the `command` type.
2. Select Upload from **Folder**, and select the `1b_e2e_registered_components` folder to upload. Select `train.yml` from the drop-down list.
machine-learning How To Debug Managed Online Endpoints Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code.md
Title: Debug online endpoints locally in VS Code (preview)
+ Title: Debug online endpoints locally in Visual Studio Code
description: Learn how to use Visual Studio Code to test and debug online endpoints locally before deploying them to Azure.
Previously updated : 11/03/2021 Last updated : 03/20/2024 ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-Learn how to use the Visual Studio Code (VS Code) debugger to test and debug online endpoints locally before deploying them to Azure.
+Learn how to use the Microsoft Visual Studio Code debugger to test and debug online endpoints locally before deploying them to Azure.
Azure Machine Learning local endpoints help you test and debug your scoring script, environment configuration, code configuration, and machine learning model locally. [!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
-## Online endpoint local debugging
-
-Debugging endpoints locally before deploying them to the cloud can help you catch errors in your code and configuration earlier. You have different options for debugging endpoints locally with VS Code.
+Debugging endpoints locally before deploying them to the cloud can help you catch errors in your code and configuration earlier. You have different options for debugging endpoints locally with Visual Studio Code.
- [Azure Machine Learning inference HTTP server](how-to-inference-server-http.md) - Local endpoint
This guide focuses on local endpoints.
The following table provides an overview of scenarios to help you choose what works best for you.
-| Scenario | Inference HTTP Server | Local endpoint |
+| Scenario | Inference HTTP server | Local endpoint |
|--|--|--| | Update local Python environment, **without** Docker image rebuild | Yes | No | | Update scoring script | Yes | Yes | | Update deployment configurations (deployment, environment, code, model) | No | Yes |
-| VS Code Debugger integration | Yes | Yes |
+| Visual Studio Code debugger integration | Yes | Yes |
## Prerequisites
The following table provides an overview of scenarios to help you choose what wo
This guide assumes you have the following items installed locally on your PC. - [Docker](https://docs.docker.com/engine/install/)-- [VS Code](https://code.visualstudio.com/#alt-downloads)
+- [Visual Studio Code](https://code.visualstudio.com/#alt-downloads)
- [Azure CLI](/cli/azure/install-azure-cli)-- [Azure CLI `ml` extension (v2)](how-to-configure-cli.md)
+- [Azure CLI ml extension (v2)](how-to-configure-cli.md)
For more information, see the guide on [how to prepare your system to deploy online endpoints](how-to-deploy-online-endpoints.md#prepare-your-system).
-The examples in this article are based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to the `cli` directory in the repo:
+The examples in this article are based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) GitHub repository. To run the commands locally without having to copy/paste YAML and other files, clone the repo and then change directories to *azureml-examples/cli*:
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
-cd azureml-examples
-cd cli
+cd azureml-examples/cli
``` If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, use the following commands. Replace the following parameters with values for your specific configuration:
If you haven't already set the defaults for the Azure CLI, save your default set
* Replace `<resource-group>` with the Azure resource group that contains your workspace. * Replace `<location>` with the Azure region that contains your workspace.
-> [!TIP]
-> You can see what your current defaults are by using the `az configure -l` command.
```azurecli
az account set --subscription <subscription>
az configure --defaults workspace=<workspace> group=<resource-group> location=<location>
```
+> [!TIP]
+> You can see what your current defaults are by using the `az configure -l` command.
+ # [Python](#tab/python)+ [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)] This guide assumes you have the following items installed locally on your PC. - [Docker](https://docs.docker.com/engine/install/)-- [VS Code](https://code.visualstudio.com/#alt-downloads)
+- [Visual Studio Code](https://code.visualstudio.com/#alt-downloads)
- [Azure CLI](/cli/azure/install-azure-cli)-- [Azure CLI `ml` extension (v2)](how-to-configure-cli.md)
+- [Azure CLI ml extension (v2)](how-to-configure-cli.md)
- [Azure Machine Learning Python SDK (v2)](https://aka.ms/sdk-v2-install)
+- [Windows Subsystem for Linux (WSL)](/windows/wsl/install)
For more information, see the guide on [how to prepare your system to deploy online endpoints](how-to-deploy-online-endpoints.md#prepare-your-system).
-The examples in this article can be found in the [Debug online endpoints locally in Visual Studio Code](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb) notebook within the[azureml-examples](https://github.com/azure/azureml-examples) repository. To run the code locally, clone the repo and then change directories to the notebook's parent directory `sdk/endpoints/online/managed`.
+The examples in this article can be found in the Jupyter notebook called [Debug online endpoints locally in Visual Studio Code](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb) within the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the code locally, clone the repo and then change directories to the notebook's parent directory *sdk/endpoints/online/managed*.
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
cd azureml-examples
cd sdk/python/endpoints/online/managed ```
-Import the required modules:
+Open the Jupyter notebook and import the required modules:
```python from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
Environment, ) from azure.identity import DefaultAzureCredential
-```
+```
-Set up variables for the workspace and endpoint:
+Set up variables for the workspace and endpoint:
-```python
+```python
subscription_id = "<SUBSCRIPTION_ID>" resource_group = "<RESOURCE_GROUP>" workspace_name = "<AML_WORKSPACE_NAME>" endpoint_name = "<ENDPOINT_NAME>"
-```
+```
-
+ ## Launch development container # [Azure CLI](#tab/cli)
-Azure Machine Learning local endpoints use Docker and VS Code development containers (dev container) to build and configure a local debugging environment. With dev containers, you can take advantage of VS Code features from inside a Docker container. For more information on dev containers, see [Create a development container](https://code.visualstudio.com/docs/remote/create-dev-container).
+Azure Machine Learning local endpoints use Docker and Visual Studio Code development containers (dev containers) to build and configure a local debugging environment. With dev containers, you can take advantage of Visual Studio Code features from inside a Docker container. For more information on dev containers, see [Create a development container](https://code.visualstudio.com/docs/remote/create-dev-container).
-To debug online endpoints locally in VS Code, use the `--vscode-debug` flag when creating or updating and Azure Machine Learning online deployment. The following command uses a deployment example from the examples repo:
+To debug online endpoints locally in Visual Studio Code, use the `--vscode-debug` flag when creating or updating an Azure Machine Learning online deployment. The following command uses a deployment example from the examples repo:
```azurecli az ml online-deployment create --file endpoints/online/managed/sample/blue-deployment.yml --local --vscode-debug ``` > [!IMPORTANT]
-> On Windows Subsystem for Linux (WSL), you'll need to update your PATH environment variable to include the path to the VS Code executable or use WSL interop. For more information, see [Windows interoperability with Linux](/windows/wsl/interop).
+> On Windows Subsystem for Linux (WSL), you'll need to update your PATH environment variable to include the path to the Visual Studio Code executable or use WSL interop. For more information, see [Windows interoperability with Linux](/windows/wsl/interop).
A Docker image is built locally. Any environment configuration or model file errors are surfaced at this stage of the process. > [!NOTE]
-> The first time you launch a new or updated dev container it can take several minutes.
+> The first time you launch a new or updated dev container, it can take several minutes.
-Once the image successfully builds, your dev container opens in a VS Code window.
+Once the image successfully builds, your dev container opens in a Visual Studio Code window.
-You'll use a few VS Code extensions to debug your deployments in the dev container. Azure Machine Learning automatically installs these extensions in your dev container.
+You'll use a few Visual Studio Code extensions to debug your deployments in the dev container. Azure Machine Learning automatically installs these extensions in your dev container.
- Inference Debug - [Pylance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
You'll use a few VS Code extensions to debug your deployments in the dev contain
- [Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python) > [!IMPORTANT]
-> Before starting your debug session, make sure that the VS Code extensions have finished installing in your dev container.
-
+> Before starting your debug session, make sure that the Visual Studio Code extensions have finished installing in your dev container.
# [Python](#tab/python)
-Azure Machine Learning local endpoints use Docker and VS Code development containers (dev container) to build and configure a local debugging environment. With dev containers, you can take advantage of VS Code features from inside a Docker container. For more information on dev containers, see [Create a development container](https://code.visualstudio.com/docs/remote/create-dev-container).
+Azure Machine Learning local endpoints use Docker and Visual Studio Code development containers (dev containers) to build and configure a local debugging environment. With dev containers, you can take advantage of Visual Studio Code features from inside a Docker container. For more information on dev containers, see [Create a development container](https://code.visualstudio.com/docs/remote/create-dev-container).
-Get a handle to the workspace:
+Get a handle to the workspace:
-```python
+```python
credential = DefaultAzureCredential() ml_client = MLClient( credential,
ml_client = MLClient(
resource_group_name=resource_group, workspace_name=workspace_name, )
-```
+```
-To debug online endpoints locally in VS Code, set the `vscode-debug` and `local` flags when creating or updating an Azure Machine Learning online deployment. The following code mirrors a deployment example from the examples repo:
+To debug online endpoints locally in Visual Studio Code, set the `vscode-debug` and `local` flags when creating or updating an Azure Machine Learning online deployment. The following code mirrors a deployment example from the examples repo:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb?name=launch-container-4)] > [!IMPORTANT]
-> On Windows Subsystem for Linux (WSL), you'll need to update your PATH environment variable to include the path to the VS Code executable or use WSL interop. For more information, see [Windows interoperability with Linux](/windows/wsl/interop).
+> On Windows Subsystem for Linux (WSL), you'll need to update your PATH environment variable to include the path to the Visual Studio Code executable or use WSL interop. For more information, see [Windows interoperability with Linux](/windows/wsl/interop).
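For orientation, the call made by the linked notebook looks roughly like the following. This is a minimal sketch under assumed names (the endpoint, model path, and environment are placeholders), not the notebook's exact code:

```python
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
)

# Assumed: `ml_client` is an authenticated MLClient and the endpoint
# "my-endpoint" was already created locally.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-endpoint",
    model="azureml:my-model:1",
    code_configuration=CodeConfiguration(code="./src", scoring_script="score.py"),
    environment=Environment(
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
        conda_file="./env/conda.yaml",
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)

# local=True builds and runs the deployment in local Docker;
# vscode_debug=True opens the resulting dev container in Visual Studio Code.
ml_client.online_deployments.begin_create_or_update(
    deployment, local=True, vscode_debug=True
)
```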
A Docker image is built locally. Any environment configuration or model file errors are surfaced at this stage of the process. > [!NOTE]
-> The first time you launch a new or updated dev container it can take several minutes.
+> It can take several minutes to launch a new or updated dev container for the first time.
-Once the image successfully builds, your dev container opens in a VS Code window.
+Once the image successfully builds, your dev container opens in a Visual Studio Code window.
-You'll use a few VS Code extensions to debug your deployments in the dev container. Azure Machine Learning automatically installs these extensions in your dev container.
+You'll use a few Visual Studio Code extensions to debug your deployments in the dev container. Azure Machine Learning automatically installs these extensions in your dev container.
- Inference Debug - [Pylance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance)
You'll use a few VS Code extensions to debug your deployments in the dev contain
- [Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python) > [!IMPORTANT]
-> Before starting your debug session, make sure that the VS Code extensions have finished installing in your dev container.
----
+> Before starting your debug session, make sure that the Visual Studio Code extensions have finished installing in your dev container.
## Start debug session
-Once your environment is set up, use the VS Code debugger to test and debug your deployment locally.
+Once your environment is set up, use the Visual Studio Code debugger to test and debug your deployment locally.
1. Open your scoring script in Visual Studio Code. > [!TIP]
- > The score.py script used by the endpoint deployed earlier is located at `azureml-samples/cli/endpoints/online/managed/sample/score.py` in the repository you cloned. However, the steps in this guide work with any scoring script.
+ > The *score.py* script used by the endpoint deployed earlier is located at *azureml-samples/cli/endpoints/online/managed/sample/score.py* in the repository you cloned. However, the steps in this guide work with any scoring script.
1. Set a breakpoint anywhere in your scoring script. - To debug startup behavior, place your breakpoint(s) inside the `init` function. - To debug scoring behavior, place your breakpoint(s) inside the `run` function.
-1. Select the VS Code Job view.
-1. In the Run and Debug dropdown, select **AzureML: Debug Local Endpoint** to start debugging your endpoint locally.
+1. Select the Visual Studio Code Job view.
+
+1. In the **Run and Debug** dropdown, select **AzureML: Debug Local Endpoint** to start debugging your endpoint locally.
In the **Breakpoints** section of the Run view, check that: - **Raised Exceptions** is **unchecked** - **Uncaught Exceptions** is **checked**
- :::image type="content" source="media/how-to-debug-managed-online-endpoints-visual-studio-code/configure-debug-profile.png" alt-text="Configure Azure Machine Learning Debug Local Environment debug profile":::
+ :::image type="content" source="media/how-to-debug-managed-online-endpoints-visual-studio-code/configure-debug-profile.png" alt-text="Screenshot showing how to configure Azure Machine Learning Debug Local Environment debug profile." lightbox="media/how-to-debug-managed-online-endpoints-visual-studio-code/configure-debug-profile.png":::
-1. Select the play icon next to the Run and Debug dropdown to start your debugging session.
+1. Select the play icon next to the **Run and Debug** dropdown to start your debugging session.
At this point, any breakpoints in your `init` function are caught. Use the debug actions to step through your code. For more information on debug actions, see the [debug actions guide](https://code.visualstudio.com/Docs/editor/debugging#_debug-actions).
-For more information on the VS Code debugger, see [Debugging in VS Code](https://code.visualstudio.com/Docs/editor/debugging)
+For more information on the Visual Studio Code debugger, see [Debugging](https://code.visualstudio.com/Docs/editor/debugging).
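For reference, here's a minimal sketch of the `init`/`run` shape that online-endpoint scoring scripts follow, so it's clear where the breakpoints described above land. The model file name is an assumption for illustration; it isn't the sample's *score.py*.

```python
import json
import os

import joblib

model = None

def init():
    # Runs once when the container starts; breakpoints here catch startup behavior.
    global model
    # AZUREML_MODEL_DIR points at the mounted model folder in the deployment.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pkl")  # assumed file name
    model = joblib.load(model_path)

def run(raw_data):
    # Runs for every scoring request; breakpoints here catch scoring behavior.
    data = json.loads(raw_data)["data"]
    predictions = model.predict(data)
    return predictions.tolist()
```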
## Debug your endpoint
In this case, `<REQUEST-FILE>` is a JSON file that contains input data samples f
At this point, any breakpoints in your `run` function are caught. Use the debug actions to step through your code. For more information on debug actions, see the [debug actions guide](https://code.visualstudio.com/Docs/editor/debugging#_debug-actions). - # [Python](#tab/python) Now that your application is running in the debugger, try making a prediction to debug your scoring script.
In this case, `<REQUEST-FILE>` is a JSON file that contains input data samples f
At this point, any breakpoints in your `run` function are caught. Use the debug actions to step through your code. For more information on debug actions, see the [debug actions guide](https://code.visualstudio.com/Docs/editor/debugging#_debug-actions). -
-
-+ ## Edit your endpoint
As you debug and troubleshoot your application, there are scenarios where you ne
To apply changes to your code:
-1. Update your code
+1. Update your code.
1. Restart your debug session using the `Developer: Reload Window` command in the command palette. For more information, see the [command palette documentation](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette). > [!NOTE] > Since the directory containing your code and endpoint assets is mounted onto the dev container, any changes you make in the dev container are synced with your local file system.
-For more extensive changes involving updates to your environment and endpoint configuration, use the `ml` extension `update` command. Doing so will trigger a full image rebuild with your changes.
+For more extensive changes involving updates to your environment and endpoint configuration, use the `ml` extension `update` command. Doing so triggers a full image rebuild with your changes.
```azurecli az ml online-deployment update --file <DEPLOYMENT-YAML-SPECIFICATION-FILE> --local --vscode-debug ```
-Once the updated image is built and your development container launches, use the VS Code debugger to test and troubleshoot your updated endpoint.
+Once the updated image is built and your development container launches, use the Visual Studio Code debugger to test and troubleshoot your updated endpoint.
# [Python](#tab/python)
As you debug and troubleshoot your application, there are scenarios where you ne
To apply changes to your code:
-1. Update your code
+1. Update your code.
1. Restart your debug session using the `Developer: Reload Window` command in the command palette. For more information, see the [command palette documentation](https://code.visualstudio.com/docs/getstarted/userinterface#_command-palette). > [!NOTE] > Since the directory containing your code and endpoint assets is mounted onto the dev container, any changes you make in the dev container are synced with your local file system.
-For more extensive changes involving updates to your environment and endpoint configuration, use your `MLClient`'s `online_deployments.update` module/method. Doing so will trigger a full image rebuild with your changes.
+For more extensive changes involving updates to your environment and endpoint configuration, use your `MLClient`'s `online_deployments.update` module/method. Doing so triggers a full image rebuild with your changes.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/online/managed/debug-online-endpoints-locally-in-visual-studio-code.ipynb?name=edit-endpoint-1)]
-Once the updated image is built and your development container launches, use the VS Code debugger to test and troubleshoot your updated endpoint.
+Once the updated image is built and your development container launches, use the Visual Studio Code debugger to test and troubleshoot your updated endpoint.
+
+## Related content
-
-
-## Next steps
--- [Deploy and score a machine learning model by using an online endpoint)](how-to-deploy-online-endpoints.md)-- [Troubleshooting managed online endpoints deployment and scoring)](how-to-troubleshoot-managed-online-endpoints.md)
+- [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
+- [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md)
machine-learning How To Deploy Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md
Previously updated : 10/13/2022 Last updated : 03/26/2024 ms.devlang: azurecli
ms.devlang: azurecli
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-Learn how to use a custom container for deploying a model to an online endpoint in Azure Machine Learning.
+Learn how to use a custom container to deploy a model to an online endpoint in Azure Machine Learning.
Custom container deployments can use web servers other than the default Python Flask server used by Azure Machine Learning. Users of these deployments can still take advantage of Azure Machine Learning's built-in monitoring, scaling, alerting, and authentication. The following table lists various [deployment examples](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container) that use custom containers such as TensorFlow Serving, TorchServe, Triton Inference Server, Plumber R package, and Azure Machine Learning Inference Minimal image.
-|Example|Script (CLI)|Description|
+|Example|Script (CLI)|Description|
|-||| |[minimal/multimodel](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/multimodel)|[deploy-custom-container-minimal-multimodel](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-minimal-multimodel.sh)|Deploy multiple models to a single deployment by extending the Azure Machine Learning Inference Minimal image.| |[minimal/single-model](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/single-model)|[deploy-custom-container-minimal-single-model](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-minimal-single-model.sh)|Deploy a single model by extending the Azure Machine Learning Inference Minimal image.| |[mlflow/multideployment-scikit](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/mlflow/multideployment-scikit)|[deploy-custom-container-mlflow-multideployment-scikit](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-mlflow-multideployment-scikit.sh)|Deploy two MLFlow models with different Python requirements to two separate deployments behind a single endpoint using the Azure Machine Learning Inference Minimal Image.| |[r/multimodel-plumber](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/r/multimodel-plumber)|[deploy-custom-container-r-multimodel-plumber](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-r-multimodel-plumber.sh)|Deploy three regression models to one endpoint using the Plumber R package|
-|[tfserving/half-plus-two](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/tfserving/half-plus-two)|[deploy-custom-container-tfserving-half-plus-two](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-tfserving-half-plus-two.sh)|Deploy a simple Half Plus Two model using a TensorFlow Serving custom container using the standard model registration process.|
-|[tfserving/half-plus-two-integrated](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/tfserving/half-plus-two-integrated)|[deploy-custom-container-tfserving-half-plus-two-integrated](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-tfserving-half-plus-two-integrated.sh)|Deploy a simple Half Plus Two model using a TensorFlow Serving custom container with the model integrated into the image.|
+|[tfserving/half-plus-two](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/tfserving/half-plus-two)|[deploy-custom-container-tfserving-half-plus-two](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-tfserving-half-plus-two.sh)|Deploy a Half Plus Two model using a TensorFlow Serving custom container using the standard model registration process.|
+|[tfserving/half-plus-two-integrated](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/tfserving/half-plus-two-integrated)|[deploy-custom-container-tfserving-half-plus-two-integrated](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-tfserving-half-plus-two-integrated.sh)|Deploy a Half Plus Two model using a TensorFlow Serving custom container with the model integrated into the image.|
|[torchserve/densenet](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/torchserve/densenet)|[deploy-custom-container-torchserve-densenet](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-torchserve-densenet.sh)|Deploy a single model using a TorchServe custom container.| |[torchserve/huggingface-textgen](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/torchserve/huggingface-textgen)|[deploy-custom-container-torchserve-huggingface-textgen](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-torchserve-huggingface-textgen.sh)|Deploy Hugging Face models to an online endpoint and follow along with the Hugging Face Transformers TorchServe example.| |[triton/single-model](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/triton/single-model)|[deploy-custom-container-triton-single-model](https://github.com/Azure/azureml-examples/blob/main/cli/deploy-custom-container-triton-single-model.sh)|Deploy a Triton model using a custom container|
The following table lists various [deployment examples](https://github.com/Azure
This article focuses on serving a TensorFlow model with TensorFlow (TF) Serving. > [!WARNING]
-> Microsoft may not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you may be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image.
+> Microsoft might not be able to help troubleshoot problems caused by a custom image. If you encounter problems, you might be asked to use the default image or one of the images Microsoft provides to see if the problem is specific to your image.
## Prerequisites [!INCLUDE [cli & sdk](includes/machine-learning-cli-sdk-v2-prereqs.md)]
-* You, or the service principal you use, must have `Contributor` access to the Azure Resource Group that contains your workspace. You'll have such a resource group if you configured your workspace using the quickstart article.
+* You, or the service principal you use, must have *Contributor* access to the Azure resource group that contains your workspace. You have such a resource group if you configured your workspace using the quickstart article.
-* To deploy locally, you must have [Docker engine](https://docs.docker.com/engine/install/) running locally. This step is **highly recommended**. It will help you debug issues.
+* To deploy locally, you must have [Docker engine](https://docs.docker.com/engine/install/) running locally. This step is **highly recommended**. It helps you debug issues.
## Download source code
-To follow along with this tutorial, download the source code below.
+To follow along with this tutorial, clone the source code from GitHub.
# [Azure CLI](#tab/cli)
cd azureml-examples/cli
```azurecli git clone https://github.com/Azure/azureml-examples --depth 1
-cd azureml-examples/sdk
+cd azureml-examples/cli
```
-See also [the example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/custom-container/online-endpoints-custom-container.ipynb) but note that `3. Test locally` section in the notebook assumes to run under the `azureml-examples/sdk` directory.
+See also [the example notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/custom-container/online-endpoints-custom-container.ipynb), but note that the `3. Test locally` section in the notebook assumes that it runs under the `azureml-examples/sdk` directory.
Use docker to run your image locally for testing:
### Check that you can send liveness and scoring requests to the image
-First, check that the container is "alive," meaning that the process inside the container is still running. You should get a 200 (OK) response.
+First, check that the container is *alive*, meaning that the process inside the container is still running. You should get a 200 (OK) response.
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-custom-container-tfserving-half-plus-two.sh" id="check_liveness_locally":::
Then, check that you can get predictions about unlabeled data:
### Stop the image
-Now that you've tested locally, stop the image:
+Now that you tested locally, stop the image:
:::code language="azurecli" source="~/azureml-examples-main/cli/deploy-custom-container-tfserving-half-plus-two.sh" id="stop_image"::: ## Deploy your online endpoint to Azure+ Next, deploy your online endpoint to Azure. # [Azure CLI](#tab/cli)
__tfserving-deployment.yml__
# [Python SDK](#tab/python) ### Connect to Azure Machine Learning workspace
-Connect to Azure Machine Learning Workspace, configure workspace details, and get a handle to the workspace as follows:
+
+Connect to your Azure Machine Learning workspace, configure workspace details, and get a handle to the workspace as follows:
1. Import the required libraries:
endpoint = ManagedOnlineEndpoint(
### Configure online deployment
-A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class.
+A deployment is a set of resources required for hosting the model that does the actual inferencing. Create a deployment for your endpoint using the `ManagedOnlineDeployment` class.
> [!TIP] > - `name` - Name of the deployment.
There are a few important concepts to notice in this YAML/Python parameter:
#### Readiness route vs. liveness route
-An HTTP server defines paths for both _liveness_ and _readiness_. A liveness route is used to check whether the server is running. A readiness route is used to check whether the server is ready to do work. In machine learning inference, a server could respond 200 OK to a liveness request before loading a model. The server could respond 200 OK to a readiness request only after the model has been loaded into memory.
+An HTTP server defines paths for both _liveness_ and _readiness_. A liveness route is used to check whether the server is running. A readiness route is used to check whether the server is ready to do work. In machine learning inference, a server could respond 200 OK to a liveness request before loading a model. The server could respond 200 OK to a readiness request only after the model is loaded into memory.
-Review the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) for more information about liveness and readiness probes.
+For more information about liveness and readiness probes, see the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).
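As an illustration of how these routes surface in the deployment definition, the following sketch shows a Python SDK v2 `Environment` for the TF Serving scenario. The image name, port, and paths are assumptions drawn from the Half Plus Two example, not the repository's exact values:

```python
from azure.ai.ml.entities import Environment

# Sketch: a custom TF Serving environment that tells Azure Machine Learning
# which routes to probe for liveness/readiness and where to send scoring traffic.
env = Environment(
    name="tfserving-env",
    image="docker.io/tensorflow/serving:latest",  # assumed image
    inference_config={
        "liveness_route": {"port": 8501, "path": "/v1/models/half_plus_two"},
        "readiness_route": {"port": 8501, "path": "/v1/models/half_plus_two"},
        "scoring_route": {"port": 8501, "path": "/v1/models/half_plus_two:predict"},
    },
)
```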
Notice that this deployment uses the same path for both liveness and readiness, since TF Serving only defines a liveness route.

#### Locating the mounted model
-When you deploy a model as an online endpoint, Azure Machine Learning _mounts_ your model to your endpoint. Model mounting enables you to deploy new versions of the model without having to create a new Docker image. By default, a model registered with the name *foo* and version *1* would be located at the following path inside of your deployed container: `/var/azureml-app/azureml-models/foo/1`
+When you deploy a model as an online endpoint, Azure Machine Learning _mounts_ your model to your endpoint. Model mounting allows you to deploy new versions of the model without having to create a new Docker image. By default, a model registered with the name *foo* and version *1* would be located at the following path inside of your deployed container: */var/azureml-app/azureml-models/foo/1*
-For example, if you have a directory structure of `/azureml-examples/cli/endpoints/online/custom-container` on your local machine, where the model is named `half_plus_two`:
+For example, if you have a directory structure of */azureml-examples/cli/endpoints/online/custom-container* on your local machine, where the model is named *half_plus_two*:
:::image type="content" source="./media/how-to-deploy-custom-container/local-directory-structure.png" alt-text="Diagram showing a tree view of the local directory structure."::: # [Azure CLI](#tab/cli)
-and `tfserving-deployment.yml` contains:
+And *tfserving-deployment.yml* contains:
```yaml model:
model:
# [Python SDK](#tab/python)
-and `Model` class contains:
+And the `Model` class contains:
```python model = Model(name="tfserving-mounted", version="1", path="half_plus_two")
model = Model(name="tfserving-mounted", version="1", path="half_plus_two")
-then your model will be located under `/var/azureml-app/azureml-models/tfserving-deployment/1` in your deployment:
+Then your model is located under */var/azureml-app/azureml-models/tfserving-deployment/1* in your deployment:
:::image type="content" source="./media/how-to-deploy-custom-container/deployment-location.png" alt-text="Diagram showing a tree view of the deployment directory structure.":::
-You can optionally configure your `model_mount_path`. It enables you to change the path where the model is mounted.
+You can optionally configure your `model_mount_path`. It lets you change the path where the model is mounted.
> [!IMPORTANT] > The `model_mount_path` must be a valid absolute path in Linux (the OS of the container image). # [Azure CLI](#tab/cli)
-For example, you can have `model_mount_path` parameter in your _tfserving-deployment.yml_:
+For example, you can have the `model_mount_path` parameter in your *tfserving-deployment.yml*:
```YAML name: tfserving-deployment
blue_deployment = ManagedOnlineDeployment(
-then your model will be located at `/var/tfserving-model-mount/tfserving-deployment/1` in your deployment. Note that it's no longer under `azureml-app/azureml-models`, but under the mount path you specified:
+Then your model is located at */var/tfserving-model-mount/tfserving-deployment/1* in your deployment. Note that it's no longer under *azureml-app/azureml-models*, but under the mount path you specified:
:::image type="content" source="./media/how-to-deploy-custom-container/mount-path-deployment-location.png" alt-text="Diagram showing a tree view of the deployment directory structure when using mount_model_path.":::
then your model will be located at `/var/tfserving-model-mount/tfserving-deploym
# [Azure CLI](#tab/cli)
-Now that you've understood how the YAML was constructed, create your endpoint.
+Now that you understand how the YAML was constructed, create your endpoint.
```azurecli az ml online-endpoint create --name tfserving-endpoint -f endpoints/online/custom-container/tfserving-endpoint.yml ```
-Creating a deployment might take few minutes.
+Creating a deployment might take a few minutes.
```azurecli az ml online-deployment create --name tfserving-deployment -f endpoints/online/custom-container/tfserving-deployment.yml --all-traffic ``` -- # [Python SDK](#tab/python)
-Using the `MLClient` created earlier, we'll now create the Endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.
+Using the `MLClient` created earlier, create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while the endpoint creation continues.
```python ml_client.begin_create_or_update(endpoint) ```
-Create the deployment by running as well.
+Create the deployment by running:
```python ml_client.begin_create_or_update(blue_deployment)
Once your deployment completes, see if you can make a scoring request to the dep
# [Python SDK](#tab/python)
-Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
+Using the `MLClient` created earlier, you get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
- `endpoint_name` - Name of the endpoint - `request_file` - File with request data - `deployment_name` - Name of the specific deployment to test in an endpoint
-We'll send a sample request using a json file. The sample json is in the [example repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/custom-container).
+Send a sample request using a JSON file. The sample JSON is in the [example repository](https://github.com/Azure/azureml-examples/tree/main/sdk/python/endpoints/online/custom-container).
```python # test the blue deployment with some sample data
ml_client.online_endpoints.invoke(
### Delete the endpoint
-Now that you've successfully scored with your endpoint, you can delete it:
+Now that you successfully scored with your endpoint, you can delete it:
# [Azure CLI](#tab/cli)
ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
-## Next steps
+## Related content
- [Safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md) - [Troubleshooting online endpoints deployment](./how-to-troubleshoot-online-endpoints.md)
machine-learning How To Deploy Model Custom Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-custom-output.md
Previously updated : 10/10/2022 Last updated : 03/18/2024
[!INCLUDE [ml v2](includes/machine-learning-dev-v2.md)]
-Sometimes you need to execute inference having a higher control of what is being written as output of the batch job. Those cases include:
+This guide explains how to create deployments that generate custom outputs and files. Sometimes you need more control over what's written as output from batch inference jobs. These cases include the following situations:
> [!div class="checklist"]
-> * You need to control how the predictions are being written in the output. For instance, you want to append the prediction to the original data (if data is tabular).
-> * You need to write your predictions in a different file format from the one supported out-of-the-box by batch deployments.
+> * You need to control how predictions are written in the output. For instance, you want to append the prediction to the original data if the data is tabular.
+> * You need to write your predictions in a different file format than the one supported out-of-the-box by batch deployments.
> * Your model is a generative model that can't write the output in a tabular format. For instance, models that produce images as outputs.
-> * Your model produces multiple tabular files instead of a single one. This is the case for instance of models that perform forecasting considering multiple scenarios.
+> * Your model produces multiple tabular files instead of a single one. For example, models that perform forecasting by considering multiple scenarios.
-In any of those cases, Batch Deployments allow you to take control of the output of the jobs by allowing you to write directly to the output of the batch deployment job. In this tutorial, we'll see how to deploy a model to perform batch inference and writes the outputs in `parquet` format by appending the predictions to the original input data.
+Batch deployments allow you to take control of the output of the jobs by letting you write directly to the output of the batch deployment job. In this tutorial, you learn how to deploy a model to perform batch inference and write the outputs in *parquet* format by appending the predictions to the original input data.
## About this sample
-This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. This example uses a model based on the [UCI Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but we are using a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It is integer valued from 0 (no presence) to 1 (presence).
+This example shows how you can deploy a model to perform batch inference and customize how your predictions are written in the output. The model is based on the [UCI Heart Disease dataset](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). The database contains 76 attributes, but this example uses a subset of 14 of them. The model tries to predict the presence of heart disease in a patient. It's integer valued from 0 (no presence) to 1 (presence).
-The model has been trained using an `XGBBoost` classifier and all the required preprocessing has been packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
+The model was trained using an `XGBoost` classifier and all the required preprocessing was packaged as a `scikit-learn` pipeline, making this model an end-to-end pipeline that goes from raw data to predictions.
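Purely for illustration, a pipeline of that shape could be assembled roughly like the following sketch. The feature names are assumptions based on the public dataset, and this isn't the training code used to build the sample model:

```python
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from xgboost import XGBClassifier

# Assumed feature split for the 14-attribute heart-disease subset.
numeric_features = ["age", "trestbps", "chol", "thalach", "oldpeak"]
categorical_features = ["sex", "cp", "fbs", "restecg", "exang", "slope", "ca", "thal"]

preprocess = ColumnTransformer(
    transformers=[
        ("num", StandardScaler(), numeric_features),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_features),
    ]
)

# End-to-end pipeline: raw columns in, 0/1 prediction out.
model = Pipeline(steps=[("preprocess", preprocess), ("classifier", XGBClassifier())])
# model.fit(X_train, y_train)                # training data assumed
# mlflow.sklearn.log_model(model, "model")   # after fitting, log as an MLflow model
```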
[!INCLUDE [machine-learning-batch-clone](includes/azureml-batch-clone-samples.md)]
The files for this example are in:
cd endpoints/batch/deploy-models/custom-outputs-parquet ```
-### Follow along in Jupyter Notebooks
+### Follow along in a Jupyter notebook
-You can follow along this sample in a Jupyter Notebook. In the cloned repository, open the notebook: [custom-output-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb).
+There's a Jupyter notebook that you can use to follow this example. In the cloned repository, open the notebook called [custom-output-batch.ipynb](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb).
## Prerequisites [!INCLUDE [machine-learning-batch-prereqs](includes/azureml-batch-prereqs.md)]
-## Creating a batch deployment with a custom output
+## Create a batch deployment with a custom output
-In this example, we are going to create a deployment that can write directly to the output folder of the batch deployment job. The deployment will use this feature to write custom parquet files.
+In this example, you create a deployment that can write directly to the output folder of the batch deployment job. The deployment uses this feature to write custom parquet files.
-### Registering the model
+### Register the model
+
+You can only deploy registered models using a batch endpoint. In this case, you already have a local copy of the model in the repository, so you only need to publish the model to the registry in the workspace. You can skip this step if the model you're trying to deploy is already registered.
-Batch Endpoint can only deploy registered models. In this case, we already have a local copy of the model in the repository, so we only need to publish the model to the registry in the workspace. You can skip this step if the model you are trying to deploy is already registered.
-
# [Azure CLI](#tab/cli) :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="register_model" :::
Batch Endpoint can only deploy registered models. In this case, we already have
# [Python](#tab/python) [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=register_model)]+
-### Creating a scoring script
+### Create a scoring script
-We need to create a scoring script that can read the input data provided by the batch deployment and return the scores of the model. We are also going to write directly to the output folder of the job. In summary, the proposed scoring script does as follows:
+You need to create a scoring script that can read the input data provided by the batch deployment and return the scores of the model. You're also going to write directly to the output folder of the job. In summary, the proposed scoring script does the following:
1. Reads the input data as CSV files. 2. Runs an MLflow model `predict` function over the input data.
__code/batch_driver.py__
__Remarks:__ * Notice how the environment variable `AZUREML_BI_OUTPUT_PATH` is used to get access to the output path of the deployment job.
-* The `init()` function is populating a global variable called `output_path` that can be used later to know where to write.
-* The `run` method returns a list of the processed files. It is required for the `run` function to return a `list` or a `pandas.DataFrame` object.
+* The `init()` function populates a global variable called `output_path` that can be used later to know where to write.
+* The `run` method returns a list of the processed files. It's required for the `run` function to return a `list` or a `pandas.DataFrame` object.
> [!WARNING]
-> Take into account that all the batch executors will have write access to this path at the same time. This means that you need to account for concurrency. In this case, we are ensuring each executor writes its own file by using the input file name as the name of the output folder.
+> Take into account that all the batch executors have write access to this path at the same time. This means that you need to account for concurrency. In this case, ensure that each executor writes its own file by using the input file name as the name of the output folder.
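To make the pattern concrete, here's a condensed sketch of a scoring script that follows these remarks. It's an illustration under assumed names (model folder layout, column name), not the repository's *batch_driver.py*:

```python
import os
from pathlib import Path

import mlflow
import pandas as pd

model = None
output_path = None

def init():
    global model, output_path
    # The registered MLflow model is mounted under AZUREML_MODEL_DIR
    # (depending on how the model was registered, a subfolder such as "model" may be needed).
    model = mlflow.pyfunc.load_model(os.environ["AZUREML_MODEL_DIR"])
    # AZUREML_BI_OUTPUT_PATH is the output folder of the batch deployment job.
    output_path = os.environ["AZUREML_BI_OUTPUT_PATH"]

def run(mini_batch):
    processed = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        data["prediction"] = model.predict(data)

        # Writing one folder per input file avoids concurrent writes to the same file.
        file_name = Path(file_path).stem
        folder = Path(output_path) / file_name
        folder.mkdir(parents=True, exist_ok=True)
        data.to_parquet(folder / f"{file_name}.parquet")

        processed.append(file_name)
    # run() must return a list or a pandas.DataFrame.
    return processed
```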
-## Creating the endpoint
+## Create the endpoint
-We are going to create a batch endpoint named `heart-classifier-batch` where to deploy the model.
+You now create a batch endpoint named `heart-classifier-batch` where the model is deployed.
-1. Decide on the name of the endpoint. The name of the endpoint will end-up in the URI associated with your endpoint. Because of that, __batch endpoint names need to be unique within an Azure region__. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
+1. Decide on the name of the endpoint. The name of the endpoint appears in the URI associated with your endpoint, so *batch endpoint names need to be unique within an Azure region*. For example, there can be only one batch endpoint with the name `mybatchendpoint` in `westus2`.
# [Azure CLI](#tab/cli)
- In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+ In this case, place the name of the endpoint in a variable so you can easily reference it later.
:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="name_endpoint" ::: # [Python](#tab/python)
- In this case, let's place the name of the endpoint in a variable so we can easily reference it later.
+ In this case, place the name of the endpoint in a variable so you can easily reference it later.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=name_endpoint)]
-1. Configure your batch endpoint
+1. Configure your batch endpoint.
# [Azure CLI](#tab/cli)
We are going to create a batch endpoint named `heart-classifier-batch` where to
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=create_endpoint)]
-### Creating the deployment
+### Create the deployment
Follow the next steps to create a deployment using the previous scoring script:
-1. First, let's create an environment where the scoring script can be executed:
+1. First, create an environment where the scoring script can be executed:
# [Azure CLI](#tab/cli)
- No extra step is required for the Azure Machine Learning CLI. The environment definition will be included in the deployment file.
+ No extra step is required for the Azure Machine Learning CLI. The environment definition is included in the deployment file.
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml" range="7-10"::: # [Python](#tab/python)
- Let's get a reference to the environment:
+ Get a reference to the environment:
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=configure_environment)]
-2. Create the deployment. Notice that now `output_action` is set to `SUMMARY_ONLY`.
+2. Create the deployment. Notice that `output_action` is now set to `SUMMARY_ONLY`.
> [!NOTE]
- > This example assumes you have aa compute cluster with name `batch-cluster`. Change that name accordinly.
+ > This example assumes you have a compute cluster named `batch-cluster`. Change that name accordingly.
# [Azure CLI](#tab/cli)
- To create a new deployment under the created endpoint, create a `YAML` configuration like the following. You can check the [full batch endpoint YAML schema](reference-yaml-endpoint-batch.md) for extra properties.
+ To create a new deployment under the created endpoint, create a YAML configuration like the following. You can check the [full batch endpoint YAML schema](reference-yaml-endpoint-batch.md) for extra properties.
:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml":::
Follow the next steps to create a deployment using the previous scoring script:
3. At this point, our batch endpoint is ready to be used.
-## Testing out the deployment
+## Test the deployment
-For testing our endpoint, we are going to use a sample of unlabeled data located in this repository and that can be used with the model. Batch endpoints can only process data that is located in the cloud and that is accessible from the Azure Machine Learning workspace. In this example, we are going to upload it to an Azure Machine Learning data store. Particularly, we are going to create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple type of locations.
+To test your endpoint, use a sample of unlabeled data located in this repository that can be used with the model. Batch endpoints can only process data that's located in the cloud and is accessible from the Azure Machine Learning workspace. In this example, you upload it to an Azure Machine Learning data store. Specifically, you create a data asset that can be used to invoke the endpoint for scoring. However, notice that batch endpoints accept data that can be placed in multiple types of locations.
-1. Let's invoke the endpoint with data from a storage account:
+1. Invoke the endpoint with data from a storage account:
# [Azure CLI](#tab/cli) :::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="start_batch_scoring_job" ::: > [!NOTE]
- > The utility `jq` may not be installed on every installation. You can get instructions in [this link](https://stedolan.github.io/jq/download/).
+ > The `jq` utility might not be installed on your system. You can [get installation instructions](https://jqlang.github.io/jq/download) on GitHub.
# [Python](#tab/python)
For testing our endpoint, we are going to use a sample of unlabeled data located
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=get_job)]
-## Analyzing the outputs
+## Analyze the outputs
-The job generates a named output called `score` where all the generated files are placed. Since we wrote into the directory directly, one file per each input file, then we can expect to have the same number of files. In this particular example we decided to name the output files the same as the inputs, but they will have a parquet extension.
+The job generates a named output called `score` where all the generated files are placed. Because the script writes one file per input file directly to that directory, you can expect the same number of output files as input files. In this particular example, the output files are named the same as the inputs, but they have a parquet extension.
> [!NOTE]
-> Notice that a file `predictions.csv` is also included in the output folder. This file contains the summary of the processed files.
+> Notice that a file *predictions.csv* is also included in the output folder. This file contains the summary of the processed files.
You can download the results of the job by using the job name:
To download the predictions, use the following command:
# [Python](#tab/python) [!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=download_outputs)]+ Once the file is downloaded, you can open it using your favorite tool. The following example loads the predictions using `Pandas` dataframe.
The output looks as follows:
# [Azure CLI](#tab/cli)
-Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs aren't deleted.
::: code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/custom-outputs-parquet/deploy-and-run.sh" ID="delete_endpoint" ::: # [Python](#tab/python)
-Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs won't be deleted.
+Run the following code to delete the batch endpoint and all the underlying deployments. Batch scoring jobs aren't deleted.
[!notebook-python[] (~/azureml-examples-main/sdk/python/endpoints/batch/deploy-models/custom-outputs-parquet/custom-output-batch.ipynb?name=delete_endpoint)]
+## Related content
-## Next steps
-
-* [Using batch deployments for image file processing](how-to-image-processing-batch.md)
-* [Using batch deployments for NLP processing](how-to-nlp-processing-batch.md)
+* [Image processing with batch model deployments](how-to-image-processing-batch.md)
+* [Deploy language models in batch endpoints](how-to-nlp-processing-batch.md)
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
tags = run.data.tags
>[!NOTE] > The metrics dictionary returned by `mlflow.get_run` or `mlflow.search_runs` only returns the most recently logged value for a given metric name. For example, if you log a metric called `iteration` multiple times with values, *1*, then *2*, then *3*, then *4*, only *4* is returned when calling `run.data.metrics['iteration']`. >
-> To get all metrics logged for a particular metric name, you can use `MlFlowClient.get_metric_history()` as explained in the example [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
+> To get all metrics logged for a particular metric name, you can use `MlFlowClient.get_metric_history()` as explained in the example [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#get-params-and-metrics-from-a-run).
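For example, a sketch of retrieving the full metric history with MLflow (assumes `run_id` holds the ID of the run you're inspecting):

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# run_id is assumed to come from an earlier call, for example mlflow.search_runs().
# get_metric_history returns every logged value for the metric, not just the latest one.
history = client.get_metric_history(run_id, "iteration")
values = [(m.step, m.value) for m in history]
print(values)  # for example: [(0, 1.0), (1, 2.0), (2, 3.0), (3, 4.0)]
```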
<a name="view-the-experiment-in-the-web-portal"></a>
This method lists all the artifacts logged in the run, but they remain stored in
file_path = client.download_artifacts("<RUN_ID>", path="feature_importance_weight.png") ```
-For more information, please refer to [Getting metrics, parameters, artifacts and models](how-to-track-experiments-mlflow.md#getting-metrics-parameters-artifacts-and-models).
+For more information, please refer to [Getting metrics, parameters, artifacts and models](how-to-track-experiments-mlflow.md#get-metrics-parameters-artifacts-and-models).
## View jobs/runs information in the studio
machine-learning How To Manage Inputs Outputs Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-inputs-outputs-pipeline.md
Last updated 08/27/2023 -+ # Manage inputs and outputs of component and pipeline
machine-learning How To Monitor Model Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-model-performance.md
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-In this article, you learn to perform out-of box and advanced monitoring setup for models that are deployed to Azure Machine Learning online endpoints. You also learn to set up monitoring for models that are deployed outside Azure Machine Learning or deployed to Azure Machine Learning batch endpoints.
+Learn to use Azure Machine Learning's model monitoring to continuously track the performance of machine learning models in production. Model monitoring provides you with a broad view of monitoring signals and alerts you to potential issues. When you monitor signals and performance metrics of models in production, you can critically evaluate the inherent risks associated with them and identify blind spots that could adversely affect your business.
-Once a machine learning model is in production, it's important to critically evaluate the inherent risks associated with it and identify blind spots that could adversely affect your business. Azure Machine Learning's model monitoring continuously tracks the performance of models in production by providing a broad view of monitoring signals and alerting you to potential issues.
+In this article, you learn to perform the following tasks:
+
+> [!div class="checklist"]
+> * Set up out-of-box and advanced monitoring for models that are deployed to Azure Machine Learning online endpoints
+> * Monitor performance metrics for models in production
+> * Monitor models that are deployed outside Azure Machine Learning or deployed to Azure Machine Learning batch endpoints
+> * Set up model monitoring with custom signals and metrics
+> * Interpret monitoring results
+> * Integrate Azure Machine Learning model monitoring with Azure Event Grid
## Prerequisites
from azure.ai.ml.constants import (
) from azure.ai.ml.entities import ( AlertNotification,
+ BaselineDataRange,
DataDriftSignal, DataQualitySignal, PredictionDriftSignal, DataDriftMetricThreshold, DataQualityMetricThreshold,
+ FeatureAttributionDriftMetricThreshold,
+ FeatureAttributionDriftSignal,
PredictionDriftMetricThreshold, NumericalDriftMetrics, CategoricalDriftMetrics,
from azure.ai.ml.entities import (
RecurrencePattern, RecurrenceTrigger, ServerlessSparkCompute,
- ReferenceData
+ ReferenceData,
+ ProductionData
) # get a handle to the workspace
monitoring_target = MonitoringTarget(
endpoint_deployment_id="azureml:credit-default:main" )
+# specify a lookback window size and offset, or omit this to use the defaults, which are specified in the documentation
+data_window = BaselineDataRange(lookback_window_size="P1D", lookback_window_offset="P0D")
+
+production_data = ProductionData(
+ input_data=Input(
+ type="uri_folder",
+ path="azureml:credit-default-main-model_inputs:1"
+ ),
+ data_window=data_window,
+ data_context=MonitorDatasetContext.MODEL_INPUTS,
+)
+ # training data to be used as reference dataset reference_data_training = ReferenceData( input_data=Input( type="mltable", path="azureml:credit-default-reference:1" ),
- target_column_name="DEFAULT_NEXT_MONTH",
+ data_column_names={
+ "target_column":"DEFAULT_NEXT_MONTH"
+ },
data_context=MonitorDatasetContext.TRAINING, )
advanced_data_quality = DataQualitySignal(
alert_enabled=False )
+# create feature attribution drift signal
+metric_thresholds = FeatureAttributionDriftMetricThreshold(normalized_discounted_cumulative_gain=0.9)
+
+feature_attribution_drift = FeatureAttributionDriftSignal(
+ reference_data=reference_data_training,
+ metric_thresholds=metric_thresholds,
+ alert_enabled=False
+)
+ # put all monitoring signals in a dictionary monitoring_signals = { 'data_drift_advanced':advanced_data_drift,
- 'data_quality_advanced':advanced_data_quality
+ 'data_quality_advanced':advanced_data_quality,
+ 'feature_attribution_drift':feature_attribution_drift,
} # create alert notification object
To set up advanced monitoring:
1. Select **Next** to open the **Configure data asset** page of the **Advanced settings** section. 1. **Add** a dataset to be used as the reference dataset. We recommend that you use the model training data as the comparison reference dataset for data drift and data quality. Also, use the model validation data as the comparison reference dataset for prediction drift.
- :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-config-data.png" alt-text="Screenshot showing how to add datasets for the monitoring signals to use." lightbox="media/how-to-monitor-models/model-monitoring-advanced-config-data.png":::
+ :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-configuration-data.png" alt-text="Screenshot showing how to add datasets for the monitoring signals to use." lightbox="media/how-to-monitor-models/model-monitoring-advanced-configuration-data.png":::
1. Select **Next** to go to the **Select monitoring signals** page. On this page, you see some monitoring signals already added (if you selected an Azure Machine Learning online deployment earlier). The signals (data drift, prediction drift, and data quality) use recent, past production data as the comparison reference dataset and use smart defaults for metrics and thresholds. :::image type="content" source="media/how-to-monitor-models/model-monitoring-monitoring-signals.png" alt-text="Screenshot showing default monitoring signals." lightbox="media/how-to-monitor-models/model-monitoring-monitoring-signals.png"::: 1. Select **Edit** next to the data drift signal.
-1. In the data drift **Edit signal** window, configure the following:
+Configure the data drift signal in the **Edit signal** window as follows:
- 1. For the production data asset, select your model inputs with the desired lookback window size.
- 1. Select your training dataset to use as the reference dataset.
- 1. Select the target (output) column.
- 1. Select to monitor drift for the top N most important features, or monitor drift for a specific set of features.
- 1. Select your preferred metrics and thresholds.
+ 1. In step 1, for the production data asset, select your model inputs dataset. Also, make the following selection:
+ - Select the desired lookback window size.
+ 1. In step 2, for the reference data asset, select your training dataset. Also, make the following selection:
+ - Select the target (output) column.
+ 1. In step 3, select to monitor drift for the top N most important features, or monitor drift for a specific set of features.
+ 1. In step 4, select your preferred metric and thresholds to use for numerical features.
+ 1. In step 5, select your preferred metric and thresholds to use for categorical features.
:::image type="content" source="media/how-to-monitor-models/model-monitoring-configure-signals.png" alt-text="Screenshot showing how to configure selected monitoring signals." lightbox="media/how-to-monitor-models/model-monitoring-configure-signals.png":::
To set up advanced monitoring:
1. Select **Add** to open the **Edit Signal** window. 1. Select **Feature attribution drift (preview)** to configure the feature attribution drift signal as follows:
- 1. Select the production data asset with your model inputs and the desired lookback window size.
- 1. Select the production data asset with your model outputs.
- 1. Select the common column between these data assets to join them on. If the data was collected with the [data collector](how-to-collect-production-data.md), the common column is `correlationid`.
- 1. (Optional) If you used the data collector to collect data where your model inputs and outputs are already joined, select the joined dataset as your production data asset and **Remove** step 2 in the configuration panel.
- 1. Select your training dataset to use as the reference dataset.
- 1. Select the target (output) column for your training dataset.
- 1. Select your preferred metric and threshold.
+ 1. In step 1, select the production data asset that has your model inputs.
+ - Also, select the desired lookback window size.
+ 1. In step 2, select the production data asset that has your model outputs.
+ - Also, select the common column between these data assets to join them on. If the data was collected with the [data collector](how-to-collect-production-data.md), the common column is `correlationid`.
+ 1. (Optional) If you used the data collector to collect data that has your model inputs and outputs already joined, select the joined dataset as your production data asset (in step 1).
+ - Also, **Remove** step 2 in the configuration panel.
+ 1. In step 3, select your training dataset to use as the reference dataset.
+ - Also, select the target (output) column for your training dataset.
+ 1. In step 4, select your preferred metric and threshold.
:::image type="content" source="media/how-to-monitor-models/model-monitoring-configure-feature-attribution-drift.png" alt-text="Screenshot showing how to configure feature attribution drift signal." lightbox="media/how-to-monitor-models/model-monitoring-configure-feature-attribution-drift.png":::
To set up advanced monitoring:
1. On the **Notifications** page, enable alert notifications for each signal and select **Next**. 1. Review your settings on the **Review monitoring settings** page.
- :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-config-review.png" alt-text="Screenshot showing review page of the advanced configuration for model monitoring." lightbox="media/how-to-monitor-models/model-monitoring-advanced-config-review.png":::
+ :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-configuration-review.png" alt-text="Screenshot showing review page of the advanced configuration for model monitoring." lightbox="media/how-to-monitor-models/model-monitoring-advanced-configuration-review.png":::
1. Select **Create** to create your advanced model monitor.
-## Set up model monitoring by bringing your production data to Azure Machine Learning
+## Set up model performance monitoring
+
+Azure Machine Learning model monitoring enables you to track the performance of your models in production by calculating their performance metrics. The following model performance metrics are currently supported:
+
+For classification models:
+
+- Precision
+- Accuracy
+- Recall
+
+For regression models:
+
+- Mean Absolute Error (MAE)
+- Mean Squared Error (MSE)
+- Root Mean Squared Error (RMSE)
+
+### More prerequisites for model performance monitoring
+
+You must satisfy the following requirements to configure your model performance signal:
+
+* Have output data for the production model (the model's predictions) with a unique ID for each row. If you collect production data with the [Azure Machine Learning data collector](how-to-collect-production-data.md), a `correlation_id` is provided for each inference request for you. With the data collector, you also have the option to log your own unique ID from your application.
+
+ > [!NOTE]
+ >
+ > For Azure Machine Learning model performance monitoring, we recommend that you log your unique ID in its own column, using the [Azure Machine Learning data collector](how-to-collect-production-data.md).
+
+* Have ground truth data (actuals) with a unique ID for each row. The unique ID for a given row should match the unique ID for the model outputs for that particular inference request. This unique ID is used to join your ground truth dataset with the model outputs.
+
+  Without ground truth data, you can't perform model performance monitoring. Because ground truth data is encountered at the application level, it's your responsibility to collect it as it becomes available. You should also maintain a data asset in Azure Machine Learning that contains this ground truth data.
+
+* (Optional) Have a pre-joined tabular dataset with model outputs and ground truth data already joined together.
+
+### Requirements for model performance monitoring when using the data collector
+
+If you use the [Azure Machine Learning data collector](concept-data-collection.md) to collect production inference data without supplying your own unique ID for each row as a separate column, a `correlationid` is autogenerated for you and included in the logged JSON object. However, the data collector [batches rows](how-to-collect-production-data.md#data-collector-batching) that are sent within short time intervals of each other. Batched rows fall within the same JSON object and therefore share the same `correlationid`.
+
+To differentiate between the rows in the same JSON object, Azure Machine Learning model performance monitoring uses indexing to determine the order of the rows in the JSON object. For example, if three rows are batched together and the `correlationid` is `test`, row one has an ID of `test_0`, row two has an ID of `test_1`, and row three has an ID of `test_2`. To ensure that your ground truth dataset contains unique IDs that match the collected production inference model outputs, make sure you index each `correlationid` appropriately. If your logged JSON object has only one row, its ID is the `correlationid` with `_0` appended (for example, `test_0`).
+
+To avoid this indexing, we recommend that you log your unique ID in its own column within the pandas DataFrame that you're logging with the [Azure Machine Learning data collector](how-to-collect-production-data.md). Then, in your model monitoring configuration, you specify the name of this column to join your model output data with your ground truth data. As long as the IDs for each row in both datasets are the same, Azure Machine Learning can perform model performance monitoring.
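+
+The following scoring-script sketch illustrates this recommendation. It assumes the `Collector` class from the `azureml-ai-monitoring` package described in the [data collector article](how-to-collect-production-data.md); the `load_fraud_model()` helper and the `transaction_id` column are hypothetical placeholders for your own model and your application-supplied ID.
+
+```python
+import pandas as pd
+from azureml.ai.monitoring import Collector
+
+def init():
+    global model, outputs_collector
+    model = load_fraud_model()  # hypothetical helper that loads your model
+    # Collector that logs model output rows for monitoring
+    outputs_collector = Collector(name="model_outputs")
+
+def run(raw_data):
+    input_df = pd.DataFrame(raw_data)
+    output_df = pd.DataFrame({"is_fraud": model.predict(input_df)})
+
+    # Log your own unique ID in a dedicated column so that ground truth rows
+    # can be joined later without relying on the autogenerated correlationid.
+    output_df["transaction_id"] = input_df["transaction_id"].values
+
+    outputs_collector.collect(output_df)
+    return output_df["is_fraud"].tolist()
+```
+
+In your monitoring configuration, you would then specify `transaction_id` as the join column for both the model output data and the ground truth data asset.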
+
+### Example workflow for monitoring model performance
+
+To understand the concepts associated with model performance monitoring, consider this example workflow. Suppose you deploy a model to predict whether credit card transactions are fraudulent. You can follow these steps to monitor the model's performance (a minimal join-and-scoring sketch follows these steps):
+
+1. Configure your deployment to use the data collector to collect the model's production inference data (input and output data). Let's say that the output data is stored in a column `is_fraud`.
+1. For each row of the collected inference data, log a unique ID. The unique ID can come from your application, or you can use the `correlationid` that Azure Machine Learning uniquely generates for each logged JSON object.
+1. Later, when the ground truth (or actual) `is_fraud` data becomes available, it also gets logged and mapped to the same unique ID that was logged with the model's outputs.
+1. This ground truth `is_fraud` data is also collected, maintained, and registered to Azure Machine Learning as a data asset.
+1. Create a model performance monitoring signal that joins the model's production inference and ground truth data assets, using the unique ID columns.
+1. Finally, compute the model performance metrics.
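+
+The join and the metric computation happen inside the monitoring job, but conceptually they amount to something like the following sketch. The column names are illustrative, and scikit-learn is used here only to show the metric calculation:
+
+```python
+import pandas as pd
+from sklearn.metrics import accuracy_score, precision_score, recall_score
+
+# Model outputs collected in production: predictions keyed by a unique ID
+model_outputs = pd.DataFrame({
+    "transaction_id": ["a1", "a2", "a3", "a4"],
+    "is_fraud": [0, 1, 0, 1],
+})
+
+# Ground truth collected later at the application level, keyed by the same ID
+ground_truth = pd.DataFrame({
+    "transaction_id": ["a1", "a2", "a3", "a4"],
+    "is_fraud": [0, 1, 1, 1],
+})
+
+# Join the two datasets on the unique ID column before computing metrics
+joined = model_outputs.merge(
+    ground_truth, on="transaction_id", suffixes=("_pred", "_actual")
+)
+
+print("accuracy:", accuracy_score(joined["is_fraud_actual"], joined["is_fraud_pred"]))
+print("precision:", precision_score(joined["is_fraud_actual"], joined["is_fraud_pred"]))
+print("recall:", recall_score(joined["is_fraud_actual"], joined["is_fraud_pred"]))
+```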
+
+# [Azure CLI](#tab/azure-cli)
+
+Once you've satisfied the [prerequisites for model performance monitoring](#more-prerequisites-for-model-performance-monitoring), you can set up model monitoring with the following CLI command and YAML definition:
+
+```azurecli
+az ml schedule create -f ./model-performance-monitoring.yaml
+```
+
+The following YAML contains the definition for model monitoring with production inference data that you've collected.
+
+```YAML
+$schema: http://azureml/sdk-2-0/Schedule.json
+name: model_performance_monitoring
+display_name: Credit card fraud model performance
+description: Credit card fraud model performance
+
+trigger:
+ type: recurrence
+ frequency: day
+ interval: 7
+ schedule:
+ hours: 10
+ minutes: 15
+
+create_monitor:
+ compute:
+ instance_type: standard_e8s_v3
+ runtime_version: "3.3"
+ monitoring_target:
+ ml_task: classification
+ endpoint_deployment_id: azureml:loan-approval-endpoint:loan-approval-deployment
+
+ monitoring_signals:
+ fraud_detection_model_performance:
+ type: model_performance
+ production_data:
+ data_column_names:
+ prediction: is_fraud
+ correlation_id: correlation_id
+ reference_data:
+ input_data:
+ path: azureml:my_model_ground_truth_data:1
+ type: mltable
+ data_column_names:
+ actual: is_fraud
+ correlation_id: correlation_id
+ data_context: actuals
+ alert_enabled: true
+ metric_thresholds:
+ tabular_classification:
+ accuracy: 0.95
+ precision: 0.8
+ alert_notification:
+ emails:
+ - abc@example.com
+```
+
+# [Python SDK](#tab/python)
+
+Once you've satisfied the [prerequisites for model performance monitoring](#more-prerequisites-for-model-performance-monitoring), you can set up model monitoring with the following Python code:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.ai.ml import Input, MLClient
+from azure.ai.ml.constants import (
+ MonitorDatasetContext,
+)
+from azure.ai.ml.entities import (
+ AlertNotification,
+ BaselineDataRange,
+ ModelPerformanceMetricThreshold,
+ ModelPerformanceSignal,
+ ModelPerformanceClassificationThresholds,
+ MonitoringTarget,
+ MonitorDefinition,
+ MonitorSchedule,
+ RecurrencePattern,
+ RecurrenceTrigger,
+ ServerlessSparkCompute,
+ ReferenceData,
+ ProductionData
+)
+
+# get a handle to the workspace
+ml_client = MLClient(
+ DefaultAzureCredential(),
+ subscription_id="subscription_id",
+ resource_group_name="resource_group_name",
+ workspace_name="workspace_name",
+)
+
+# create your compute
+spark_compute = ServerlessSparkCompute(
+ instance_type="standard_e4s_v3",
+ runtime_version="3.3"
+)
+
+# define the monitoring target; ml_task is required, and you can optionally reference your endpoint and deployment with endpoint_deployment_id
+monitoring_target = MonitoringTarget(
+ ml_task="classification",
+)
+
+# MDC-generated production data with data column names
+production_data = ProductionData(
+ input_data=Input(
+ type="uri_folder",
+ path="azureml:credit-default-main-model_outputs:1"
+ ),
+ data_column_names={
+ "target_column": "DEFAULT_NEXT_MONTH",
+ "join_column": "correlationid"
+ },
+ data_window=BaselineDataRange(
+ lookback_window_offset="P0D",
+ lookback_window_size="P10D",
+ )
+)
+
+# ground truth reference data
+reference_data_ground_truth = ReferenceData(
+ input_data=Input(
+ type="mltable",
+ path="azureml:credit-ground-truth:1"
+ ),
+ data_column_names={
+ "target_column": "ground_truth",
+ "join_column": "correlationid"
+ },
+ data_context=MonitorDatasetContext.GROUND_TRUTH_DATA,
+)
+
+# create the model performance signal
+metric_thresholds = ModelPerformanceMetricThreshold(
+ classification=ModelPerformanceClassificationThresholds(
+ accuracy=0.50,
+ precision=0.50,
+ recall=0.50
+ ),
+)
+
+model_performance = ModelPerformanceSignal(
+ production_data=production_data,
+ reference_data=reference_data_ground_truth,
+ metric_thresholds=metric_thresholds
+)
+
+# put all monitoring signals in a dictionary
+monitoring_signals = {
+ 'model_performance':model_performance,
+}
+
+# create alert notification object
+alert_notification = AlertNotification(
+ emails=['abc@example.com', 'def@example.com']
+)
+
+# finally, create the monitor definition
+monitor_definition = MonitorDefinition(
+ compute=spark_compute,
+ monitoring_target=monitoring_target,
+ monitoring_signals=monitoring_signals,
+ alert_notification=alert_notification
+)
+
+recurrence_trigger = RecurrenceTrigger(
+ frequency="day",
+ interval=1,
+ schedule=RecurrencePattern(hours=3, minutes=15)
+)
+
+model_monitor = MonitorSchedule(
+ name="credit_default_model_performance",
+ trigger=recurrence_trigger,
+ create_monitor=monitor_definition
+)
+
+poller = ml_client.schedules.begin_create_or_update(model_monitor)
+created_monitor = poller.result()
+```
+
+# [Studio](#tab/azure-studio)
+
+To set up model performance monitoring:
+
+1. Complete the entries on the **Basic settings** page as described earlier in the [Set up out-of-box model monitoring](#set-up-out-of-box-model-monitoring) section.
+1. Select **Next** to open the **Configure data asset** page of the **Advanced settings** section.
+1. Select **+ Add** to add a dataset for use as the ground truth dataset.
+
+ Ensure that your model outputs dataset is also included in the list of added datasets. The ground truth dataset you add should have a unique ID column.
+ The values in the unique ID column for both the ground truth dataset and the model outputs dataset must match in order for both datasets to be joined together prior to metric computation.
+
+ :::image type="content" source="media/how-to-monitor-models/model-monitoring-advanced-configuration-data-2.png" alt-text="Screenshot showing how to add datasets to use for model performance monitoring." lightbox="media/how-to-monitor-models/model-monitoring-advanced-configuration-data-2.png":::
+
+ :::image type="content" source="media/how-to-monitor-models/model-monitoring-added-ground-truth-dataset.png" alt-text="Screenshot showing the ground truth dataset and the model outputs and inputs datasets for the monitoring signals to connect to." lightbox="media/how-to-monitor-models/model-monitoring-added-ground-truth-dataset.png":::
+
+1. Select **Next** to go to the **Select monitoring signals** page. If you selected an Azure Machine Learning online deployment earlier, some monitoring signals are already added on this page.
+1. Delete the existing monitoring signals on the page, because you're only interested in creating a model performance monitoring signal.
+1. Select **Add** to open the **Edit Signal** window.
+1. Select **Model performance (preview)** to configure the model performance signal as follows:
+
+ 1. In step 1, for the production data asset, select your model outputs dataset. Also, make the following selections:
+ - Select the appropriate target column (for example, `is_fraud`).
+ - Select the desired lookback window size and lookback window offset.
+ 1. In step 2, for the reference data asset, select the ground truth data asset that you added earlier. Also, make the following selections:
+ - Select the appropriate target column.
+ - Select the column on which to join with the model outputs dataset. The join column must be common to both datasets and contain a unique ID for each row (for example, `correlationid`).
+ 1. In step 3, select your desired performance metrics and specify their respective thresholds.
+
+ :::image type="content" source="media/how-to-monitor-models/model-monitoring-configure-model-performance.png" alt-text="Screenshot showing how to configure a model performance signal." lightbox="media/how-to-monitor-models/model-monitoring-configure-model-performance.png":::
+
+1. Select **Save** to return to the **Select monitoring signals** page.
+
+ :::image type="content" source="media/how-to-monitor-models/model-monitoring-configured-model-performance-signal.png" alt-text="Screenshot showing the configured model performance signal." lightbox="media/how-to-monitor-models/model-monitoring-configured-model-performance-signal.png":::
+
+1. Select **Next** to go to the **Notifications** page.
+1. On the **Notifications** page, enable alert notification for the model performance signal and select **Next**.
+1. Review your settings on the **Review monitoring settings** page.
+
+ :::image type="content" source="media/how-to-monitor-models/model-monitoring-review-monitoring-details.png" alt-text="Screenshot showing review page that includes the configured model performance signal." lightbox="media/how-to-monitor-models/model-monitoring-review-monitoring-details.png":::
+
+1. Select **Create** to create your model performance monitor.
+++
+## Set up model monitoring by bringing in your production data to Azure Machine Learning
You can also set up model monitoring for models deployed to Azure Machine Learning batch endpoints or deployed outside of Azure Machine Learning. If you don't have a deployment, but you have production data, you can use the data to perform continuous model monitoring. To monitor these models, you must be able to:
Once you've configured your monitor with the CLI or SDK, you can view the monito
## Set up model monitoring with custom signals and metrics
-With Azure Machine Learning model monitoring, you can define your own custom signal and implement any metric of your choice to monitor your model. You can register this signal as an Azure Machine Learning component. When your Azure Machine Learning model monitoring job runs on the specified schedule, it computes the metric(s) you've defined within your custom signal, just as it does for the prebuilt signals (data drift, prediction drift, and data quality).
+With Azure Machine Learning model monitoring, you can define a custom signal and implement any metric of your choice to monitor your model. You can register this custom signal as an Azure Machine Learning component. When your Azure Machine Learning model monitoring job runs on the specified schedule, it computes the metric(s) you've defined within your custom signal, just as it does for the prebuilt signals (data drift, prediction drift, and data quality).
To set up a custom signal to use for model monitoring, you must first define the custom signal and register it as an Azure Machine Learning component. The Azure Machine Learning component must have these input and output signatures:
machine-learning How To Share Models Pipelines Across Workspaces With Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-share-models-pipelines-across-workspaces-with-registries.md
For more information on components, see the following articles:
* [How to use components in pipelines (CLI)](how-to-create-component-pipelines-cli.md) * [How to use components in pipelines (SDK)](how-to-create-component-pipeline-python.md)
+ > [!IMPORTANT]
+ > Registries only support named assets (data, models, components, and environments). To reference an asset in a registry, you must first create it in the registry. In particular, if a pipeline component references another component or an environment, you must create that component or environment in the registry before you reference it.
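+
+As a minimal Python SDK v2 sketch of this order, assuming a registry-scoped `MLClient`, you might create the environment and component in the registry first and only then retrieve them for use in a pipeline. The registry name, file paths, and base image below are placeholders.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.ai.ml import MLClient, load_component
+from azure.ai.ml.entities import Environment
+
+# Client scoped to the registry (replace <REGISTRY_NAME> with your registry)
+ml_client_registry = MLClient(credential=DefaultAzureCredential(), registry_name="<REGISTRY_NAME>")
+
+# 1. Create the environment in the registry first
+sklearn_env = Environment(
+    name="SKLearnEnv",
+    version="1",
+    conda_file="env/conda.yml",  # placeholder path
+    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",
+)
+ml_client_registry.environments.create_or_update(sklearn_env)
+
+# 2. Then create the component that references that environment
+train_component = load_component(source="train.yml")
+ml_client_registry.components.create_or_update(train_component)
+
+# 3. Only now can a pipeline or pipeline component reference them from the registry
+train_from_registry = ml_client_registry.components.get(
+    name=train_component.name, version=train_component.version
+)
+```
+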
+ # [Azure CLI](#tab/cli) Make sure you are in the folder `cli/jobs/pipelines-with-components/nyc_taxi_data_regression`. You'll find the component definition file `train.yml` that packages a Scikit Learn training script `train_src/train.py` and the [curated environment](resource-curated-environments.md) `AzureML-sklearn-0.24-ubuntu18.04-py37-cpu`. We'll use the Scikit Learn environment created in the previous step instead of the curated environment. You can edit the `environment` field in `train.yml` to refer to your Scikit Learn environment. The resulting component definition file `train.yml` will be similar to the following example:
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
Title: Query & compare experiments and runs with MLflow
-description: Explains how to use MLflow for managing experiments and runs in Azure Machine Learning
+description: Learn how to use MLflow for managing experiments and runs in Azure Machine Learning.
Previously updated : 06/08/2022 Last updated : 03/20/2024
Experiments and jobs (or runs) in Azure Machine Learning can be queried using ML
MLflow allows you to:
-* Create, query, delete and search for experiments in a workspace.
+* Create, query, delete, and search for experiments in a workspace.
* Query, delete, and search for runs in a workspace.
-* Track and retrieve metrics, parameters, artifacts and models from runs.
+* Track and retrieve metrics, parameters, artifacts, and models from runs.
-See [Support matrix for querying runs and experiments in Azure Machine Learning](#support-matrix-for-querying-runs-and-experiments) for a detailed comparison between MLflow Open-Source and MLflow when connected to Azure Machine Learning.
+For a detailed comparison between open-source MLflow and MLflow when connected to Azure Machine Learning, see [Support matrix for querying runs and experiments in Azure Machine Learning](#support-matrix-for-querying-runs-and-experiments).
> [!NOTE] > The Azure Machine Learning Python SDK v2 does not provide native logging or tracking capabilities. This applies not just for logging but also for querying the metrics logged. Instead, use MLflow to manage experiments and runs. This article explains how to use MLflow to manage experiments and runs in Azure Machine Learning.
-### REST API
+You can also query and search experiments and runs by using the MLflow REST API. See [Using MLflow REST with Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb) for an example about how to consume it.
-Query and searching experiments and runs is also available using the MLflow REST API. See [Using MLflow REST with Azure Machine Learning](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/using-rest-api/using_mlflow_rest_api.ipynb) for an example about how to consume it.
-
-### Prerequisites
+## Prerequisites
[!INCLUDE [mlflow-prereqs](includes/machine-learning-mlflow-prereqs.md)]
Use MLflow to search for experiments inside of your workspace. See the following
``` > [!NOTE]
- > In legacy versions of MLflow (<2.0) use method `mlflow.list_experiments()` instead.
+ > In legacy versions of MLflow (<2.0), use method `mlflow.list_experiments()` instead.
* Get all the experiments, including archived:
Use MLflow to search for experiments inside of your workspace. See the following
mlflow.get_experiment('1234-5678-90AB-CDEFG') ```
-### Searching experiments
+### Search experiments
-The `search_experiments()` method available since Mlflow 2.0 allows searching experiment matching a criteria using `filter_string`.
+The `search_experiments()` method, available since MLflow 2.0, lets you search for experiments that match criteria using `filter_string`.
* Retrieve multiple experiments based on their IDs:
The `search_experiments()` method available since Mlflow 2.0 allows searching ex
## Query and search runs
-MLflow allows searching runs inside of any experiment, including multiple experiments at the same time. The method `mlflow.search_runs()` accepts the argument `experiment_ids` and `experiment_name` to indicate on which experiments you want to search. You can also indicate `search_all_experiments=True` if you want to search across all the experiments in the workspace:
+MLflow lets you search for runs inside any experiment, including multiple experiments at the same time. The method `mlflow.search_runs()` accepts the arguments `experiment_ids` and `experiment_names` to indicate which experiments to search. You can also set `search_all_experiments=True` if you want to search across all the experiments in the workspace:
* By experiment name:
MLflow allows searching runs inside of any experiment, including multiple experi
mlflow.search_runs(filter_string="params.num_boost_round='100'", search_all_experiments=True) ```
-Notice that `experiment_ids` supports providing an array of experiments, so you can search runs across multiple experiments if required. This may be useful in case you want to compare runs of the same model when it is being logged in different experiments (by different people, different project iterations, etc.).
+Notice that `experiment_ids` supports providing an array of experiments, so you can search runs across multiple experiments if necessary. This might be useful if you want to compare runs of the same model when it's logged in different experiments (for example, by different people or different project iterations).
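+
+For instance, a minimal call that spans two experiments might look like the following sketch; the experiment IDs and the `model_type` parameter are placeholders:
+
+```python
+import mlflow
+
+# Compare runs of the same model that were logged in two different experiments
+runs = mlflow.search_runs(
+    experiment_ids=["1234-5678-90AB-CDEFG", "5678-90AB-CDEF-1234"],
+    filter_string="params.model_type = 'xgboost'",
+)
+```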
> [!IMPORTANT]
-> If `experiment_ids`, `experiment_names`, or `search_all_experiments` are not indicated, then MLflow will search by default in the current active experiment. You can set the active experiment using `mlflow.set_experiment()`
+> If `experiment_ids`, `experiment_names`, or `search_all_experiments` aren't specified, then MLflow searches by default in the current active experiment. You can set the active experiment using `mlflow.set_experiment()`.
By default, MLflow returns the data in Pandas `Dataframe` format, which makes it handy for further processing or analysis of the runs. Returned data includes columns with:
By default, MLflow returns the data in Pandas `Dataframe` format, which makes it
- Parameters with column's name `params.<parameter-name>`. - Metrics (last logged value of each) with column's name `metrics.<metric-name>`.
-All metrics and parameters are also returned when querying runs. However, for metrics containing multiple values (for instance, a loss curve, or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, uses `mlflow.get_metric_history` method. See [Getting params and metrics from a run](#getting-params-and-metrics-from-a-run) for an example.
+All metrics and parameters are also returned when querying runs. However, for metrics that contain multiple values (for instance, a loss curve or a PR curve), only the last value of the metric is returned. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. See [Get params and metrics from a run](#get-params-and-metrics-from-a-run) for an example.
-### Ordering runs
+### Order runs
-By default, experiments are ordered descending by `start_time`, which is the time the experiment was queue in Azure Machine Learning. However, you can change this default by using the parameter `order_by`.
+By default, runs are returned in descending order of `start_time`, which is the time the run was queued in Azure Machine Learning. However, you can change this default by using the `order_by` parameter.
* Order runs by attributes, like `start_time`:
By default, experiments are ordered descending by `start_time`, which is the tim
``` > [!TIP]
- > `attributes.duration` is not present in MLflow OSS, but provided in Azure Machine Learning for convenience.
+ > `attributes.duration` isn't present in MLflow OSS, but is provided in Azure Machine Learning for convenience.
* Order runs by metric's values:
By default, experiments are ordered descending by `start_time`, which is the tim
``` > [!WARNING]
- > Using `order_by` with expressions containing `metrics.*`, `params.*`, or `tags.*` in the parameter `order_by` is not supported by the moment. Please use `order_values` method from Pandas as shown in the example.
+ > Using expressions containing `metrics.*`, `params.*`, or `tags.*` in the `order_by` parameter isn't currently supported. Instead, use the `order_values` method from Pandas as shown in the example.
-
-### Filtering runs
+### Filter runs
-You can also look for a run with a specific combination in the hyperparameters using the parameter `filter_string`. Use `params` to access run's parameters, `metrics` to access metrics logged in the run, and `attributes` to access run information details. MLflow supports expressions joined by the AND keyword (the syntax does not support OR):
+You can also look for runs with a specific combination of hyperparameters by using the `filter_string` parameter. Use `params` to access a run's parameters, `metrics` to access metrics logged in the run, and `attributes` to access run information details. MLflow supports expressions joined by the AND keyword (the syntax doesn't support OR):
* Search runs based on a parameter's value:
You can also look for a run with a specific combination in the hyperparameters u
filter_string="attributes.user_id = 'John Smith'") ```
-* Search runs that have failed. See [Filter runs by status](#filter-runs-by-status) for possible values:
+* Search runs that failed. See [Filter runs by status](#filter-runs-by-status) for possible values:
```python mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
You can also look for a run with a specific combination in the hyperparameters u
``` > [!TIP]
- > Notice that for the key `attributes`, values should always be strings and hence encoded between quotes.
+ > For the key `attributes`, values should always be strings and therefore enclosed in quotes.
-* Search runs taking longer than one hour:
+* Search runs that take longer than one hour:
```python duration = 3600 * 1000 # one hour, in milliseconds
You can also look for a run with a specific combination in the hyperparameters u
``` > [!TIP]
- > `attributes.duration` is not present in MLflow OSS, but provided in Azure Machine Learning for convenience.
+ > `attributes.duration` isn't present in MLflow OSS, but is provided in Azure Machine Learning for convenience.
-* Search runs having the ID in a given set:
+* Search runs that have the ID in a given set:
```python mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
You can also look for a run with a specific combination in the hyperparameters u
### Filter runs by status
-When filtering runs by status, notice that MLflow uses a different convention to name the different possible status of a run compared to Azure Machine Learning. The following table shows the possible values:
+When you filter runs by status, note that MLflow uses a different naming convention for the possible statuses of a run than Azure Machine Learning does. The following table shows the possible values:
-| Azure Machine Learning Job status | MLFlow's `attributes.status` | Meaning |
-| :-: | :-: | :- |
+| Azure Machine Learning job status | MLFlow's `attributes.status` | Meaning |
+| :-: | :-: | :-: |
| Not started | `Scheduled` | The job/run was received by Azure Machine Learning. | | Queued | `Scheduled` | The job/run is scheduled for running, but it hasn't started yet. |
-| Preparing | `Scheduled` | The job/run has not started yet, but a compute has been allocated for its execution and it's preparing the environment and its inputs. |
+| Preparing | `Scheduled` | The job/run hasn't started yet, but a compute was allocated for its execution and it's preparing the environment and its inputs. |
| Running | `Running` | The job/run is currently under active execution. |
-| Completed | `Finished` | The job/run has been completed without errors. |
-| Failed | `Failed` | The job/run has been completed with errors. |
-| Canceled | `Killed` | The job/run has been canceled by the user or terminated by the system. |
+| Completed | `Finished` | The job/run was completed without errors. |
+| Failed | `Failed` | The job/run was completed with errors. |
+| Canceled | `Killed` | The job/run was canceled by the user or terminated by the system. |
Example:
Example:
mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ], filter_string="attributes.status = 'Failed'") ```
-
-## Getting metrics, parameters, artifacts and models
-The method `search_runs` returns a Pandas `Dataframe` containing a limited amount of information by default. You can get Python objects if needed, which may be useful to get details about them. Use the `output_format` parameter to control how output is returned:
+## Get metrics, parameters, artifacts, and models
+
+The method `search_runs` returns a Pandas `Dataframe` that contains a limited amount of information by default. You can get Python objects if needed, which might be useful to get details about them. Use the `output_format` parameter to control how output is returned:
```python runs = mlflow.search_runs(
last_run = runs[-1]
print("Last run ID:", last_run.info.run_id) ```
-### Getting params and metrics from a run
+### Get params and metrics from a run
When runs are returned using `output_format="list"`, you can easily access parameters using the key `data`:
client = mlflow.tracking.MlflowClient()
client.get_metric_history("1234-5678-90AB-CDEFG", "log_loss") ```
-### Getting artifacts from a run
+### Get artifacts from a run
-Any artifact logged by a run can be queried by MLflow. Artifacts can't be accessed using the run object itself and the MLflow client should be used instead:
+MLflow can query any artifact logged by a run. Artifacts can't be accessed through the run object itself; use the MLflow client instead:
```python client = mlflow.tracking.MlflowClient() client.list_artifacts("1234-5678-90AB-CDEFG") ```
-The method above will list all the artifacts logged in the run, but they will remain stored in the artifacts store (Azure Machine Learning storage). To download any of them, use the method `download_artifact`:
+The preceding method lists all the artifacts logged in the run, but they remain stored in the artifacts store (Azure Machine Learning storage). To download any of them, use the method `download_artifact`:
```python file_path = mlflow.artifacts.download_artifacts(
file_path = mlflow.artifacts.download_artifacts(
> [!NOTE] > In legacy versions of MLflow (<2.0), use the method `MlflowClient.download_artifacts()` instead.
-### Getting models from a run
+### Get models from a run
-Models can also be logged in the run and then retrieved directly from it. To retrieve it, you need to know the artifact's path where it is stored. The method `list_artifacts` can be used to find artifacts that are representing a model since MLflow models are always folders. You can download a model by indicating the path where the model is stored using the `download_artifact` method:
+Models can also be logged in the run and then retrieved directly from it. To retrieve a model, you need to know the path to the artifact where it's stored. The method `list_artifacts` can be used to find artifacts that represent a model since MLflow models are always folders. You can download a model by specifying the path where the model is stored, using the `download_artifact` method:
```python artifact_path="classifier"
You can then load the model back from the downloaded artifacts using the typical
model = mlflow.xgboost.load_model(model_local_path) ```
-MLflow also allows you to both operations at once and download and load the model in a single instruction. MLflow will download the model to a temporary folder and load it from there. The method `load_model` uses an URI format to indicate from where the model has to be retrieved. In the case of loading a model from a run, the URI structure is as follows:
+MLflow also lets you perform both operations at once and download and load the model in a single instruction. MLflow downloads the model to a temporary folder and loads it from there. The `load_model` method uses a URI format to indicate where to retrieve the model from. When you load a model from a run, the URI structure is as follows:
```python model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}") ``` > [!TIP]
-> For query and loading models registered in the Model Registry, view [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
+> To query and load models registered in the model registry, see [Manage models registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md).
-## Getting child (nested) runs
+## Get child (nested) runs
-MLflow supports the concept of child (nested) runs. They are useful when you need to spin off training routines requiring being tracked independently from the main training process. Hyper-parameter tuning optimization processes or Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
+MLflow supports the concept of child (nested) runs. These runs are useful when you need to spin off training routines that must be tracked independently from the main training process. Hyper-parameter tuning optimization processes or Azure Machine Learning pipelines are typical examples of jobs that generate multiple child runs. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
```python hyperopt_run = mlflow.last_active_run()
To compare and evaluate the quality of your jobs and models in Azure Machine Lea
The [MLflow with Azure Machine Learning notebooks](https://github.com/Azure/azureml-examples/tree/main/sdk/python/using-mlflow) demonstrate and expand upon concepts presented in this article.
- * [Training and tracking a classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments using MLflow, log models and combine multiple flavors into pipelines.
- * [Manage experiments and runs with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/runs-management/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters and artifacts from Azure Machine Learning using MLflow.
+ * [Train and track a classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments using MLflow, log models, and combine multiple flavors into pipelines.
+ * [Manage experiments and runs with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/runs-management/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters, and artifacts from Azure Machine Learning using MLflow.
## Support matrix for querying runs and experiments The MLflow SDK exposes several methods to retrieve runs, including options to control what is returned and how. Use the following table to learn about which of those methods are currently supported in MLflow when connected to Azure Machine Learning: | Feature | Supported by MLflow | Supported by Azure Machine Learning |
-| :- | :-: | :-: |
+| :-: | :-: | :-: |
| Ordering runs by attributes | **&check;** | **&check;** | | Ordering runs by metrics | **&check;** | <sup>1</sup> | | Ordering runs by parameters | **&check;** | <sup>1</sup> |
The MLflow SDK exposes several methods to retrieve runs, including options to co
| Renaming experiments | **&check;** | | > [!NOTE]
-> - <sup>1</sup> Check the section [Ordering runs](#ordering-runs) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.
+> - <sup>1</sup> Check the section [Order runs](#order-runs) for instructions and examples on how to achieve the same functionality in Azure Machine Learning.
> - <sup>2</sup> `!=` for tags not supported.
-## Next steps
+## Related content
-* [Manage your models with MLflow](how-to-manage-models.md).
-* [Deploy models with MLflow](how-to-deploy-mlflow-models.md).
+* [Manage your models with MLflow](how-to-manage-models.md)
+* [Deploy models with MLflow](how-to-deploy-mlflow-models.md)
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
Previously updated : 10/13/2021 Last updated : 03/25/2024
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-In this article, you'll learn how to train an object detection model to detect small objects in high-resolution images with [automated ML](concept-automated-ml.md) in Azure Machine Learning.
+In this article, you learn how to train an object detection model to detect small objects in high-resolution images with [automated ML](concept-automated-ml.md) in Azure Machine Learning.
-Typically, computer vision models for object detection work well for datasets with relatively large objects. However, due to memory and computational constraints, these models tend to under-perform when tasked to detect small objects in high-resolution images. Because high-resolution images are typically large, they are resized before input into the model, which limits their capability to detect smaller objects--relative to the initial image size.
+Typically, computer vision models for object detection work well for datasets with relatively large objects. However, due to memory and computational constraints, these models tend to under-perform when tasked to detect small objects in high-resolution images. Because high-resolution images are typically large, they're resized before input into the model, which limits their capability to detect smaller objects--relative to the initial image size.
To help with this problem, automated ML supports tiling as part of the computer vision capabilities. The tiling capability in automated ML is based on the concepts in [The Power of Tiling for Small Object Detection](https://openaccess.thecvf.com/content_CVPRW_2019/papers/UAVision/Unel_The_Power_of_Tiling_for_Small_Object_Detection_CVPRW_2019_paper.pdf).
Small object detection using tiling is supported for all models supported by Aut
## Enable tiling during training
-To enable tiling, you can set the `tile_grid_size` parameter to a value like '3x2'; where 3 is the number of tiles along the width dimension and 2 is the number of tiles along the height dimension. When this parameter is set to '3x2', each image is split into a grid of 3 x 2 tiles. Each tile overlaps with the adjacent tiles, so that any objects that fall on the tile border are included completely in one of the tiles. This overlap can be controlled by the `tile_overlap_ratio` parameter, which defaults to 25%.
+To enable tiling, you can set the `tile_grid_size` parameter to a value like '3x2', where 3 is the number of tiles along the width dimension and 2 is the number of tiles along the height dimension. When this parameter is set to '3x2', each image is split into a grid of 3 x 2 tiles. Each tile overlaps with the adjacent tiles, so that any objects that fall on the tile border are included completely in one of the tiles. This overlap is controlled by the `tile_overlap_ratio` parameter, which defaults to 25%.
When tiling is enabled, the entire image and the tiles generated from it are passed through the model. These images and tiles are resized according to the `min_size` and `max_size` parameters before feeding to the model. The computation time increases proportionally because of processing this extra data.
-For example, when the `tile_grid_size` parameter is '3x2', the computation time would be approximately seven times higher than without tiling.
+For example, when the `tile_grid_size` parameter is '3x2', the computation time is approximately seven times higher than without tiling.
You can specify the value for `tile_grid_size` in your training parameters as a string.
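
For example, with the Python SDK v2 the setting might be applied as in the following sketch. The job creation arguments are placeholders, and `set_training_parameters` is assumed to accept the tiling settings described in this article:

```python
from azure.ai.ml import automl, Input

# Placeholder job definition; replace the data asset, compute, and column name with your own
image_object_detection_job = automl.image_object_detection(
    compute="gpu-cluster",
    experiment_name="small-object-detection",
    training_data=Input(type="mltable", path="azureml:small-objects-train:1"),
    target_column_name="label",
)

# Enable tiling: split each image into a 3x2 grid of overlapping tiles
image_object_detection_job.set_training_parameters(
    tile_grid_size="3x2",
    tile_overlap_ratio=0.25,
)
```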
When a model trained with tiling is deployed, tiling also occurs during inferenc
You also have the option to enable tiling only during inference without enabling it in training. To do so, set the `tile_grid_size` parameter only during inference, not for training.
-Doing so, may improve performance for some datasets, and won't incur the extra cost that comes with tiling at training time.
+Doing so might improve performance for some datasets, without incurring the extra cost that comes with tiling at training time.
## Tiling hyperparameters
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
print(metrics, params, tags)
``` > [!TIP]
-> For metrics, the previous example code will only return the last value of a given metric. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. For more information on retrieving values of a metric, see [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#getting-params-and-metrics-from-a-run).
+> For metrics, the previous example code will only return the last value of a given metric. If you want to retrieve all the values of a given metric, use the `mlflow.get_metric_history` method. For more information on retrieving values of a metric, see [Getting params and metrics from a run](how-to-track-experiments-mlflow.md#get-params-and-metrics-from-a-run).
To __download__ artifacts you've logged, such as files and models, use [mlflow.artifacts.download_artifacts()](https://www.mlflow.org/docs/latest/python_api/mlflow.artifacts.html#mlflow.artifacts.download_artifacts).
machine-learning How To Use Parallel Job In Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-parallel-job-in-pipeline.md
Last updated 03/13/2023-+ # How to use parallel job in pipeline (V2)
machine-learning How To Use Pipeline Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-component.md
- ignite-2023
-# How to use pipeline component to build nested pipeline job (V2) (preview)
+# How to use pipeline component to build nested pipeline job (V2)
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
By using a pipeline component, the author can focus on developing sub-tasks and
In this article, you'll learn how to use pipeline component in Azure Machine Learning pipeline. - ## Prerequisites - Understand how to use Azure Machine Learning pipeline with [CLI v2](how-to-create-component-pipelines-cli.md) and [SDK v2](how-to-create-component-pipeline-python.md).
machine-learning Reference Yaml Component Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-command.md
-+ Last updated 08/08/2022
machine-learning Reference Yaml Component Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-pipeline.md
Last updated 04/12/2023
-# CLI (v2) pipeline component YAML schema (preview)
+# CLI (v2) pipeline component YAML schema
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
[!INCLUDE [schema note](includes/machine-learning-preview-old-json-schema-note.md)] - ## YAML syntax | Key | Type | Description | Allowed values | Default value |
machine-learning Reference Yaml Core Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-core-syntax.md
-+
machine-learning Reference Yaml Job Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md
-+ Last updated 03/06/2024
machine-learning Reference Yaml Job Sweep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-sweep.md
-+ Last updated 03/05/2024
machine-learning Reference Yaml Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-monitor.md
-- Previously updated : 09/15/2023++ Last updated : 02/26/2024 reviewer: msakande
reviewer: msakande
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
-The YAML syntax detailed in this document is based on the JSON schema for the latest version of the ML CLI v2 extension. This syntax is guaranteed only to work with the latest version of the ML CLI v2 extension.
+The YAML syntax detailed in this document is based on the JSON schema for the latest version of the ML CLI v2 extension. This syntax is guaranteed only to work with the latest version of the ML CLI v2 extension. The comprehensive JSON schema can be viewed at [https://azuremlschemas.azureedge.net/latest/monitorSchedule.schema.json](https://azuremlschemas.azureedge.net/latest/monitorSchedule.schema.json).
You can find the schemas for older extension versions at [https://azuremlschemasprod.azureedge.net/](https://azuremlschemasprod.azureedge.net/). ## YAML syntax
Recurrence schedule defines the recurrence pattern, containing `hours`, `minutes
| | --| -- | -- | -| | `compute` | Object | **Required**. Description of compute resources for Spark pool to run monitoring job. | | | | `compute.instance_type` | String |**Required**. The compute instance type to be used for Spark pool. | 'standard_e4s_v3', 'standard_e8s_v3', 'standard_e16s_v3', 'standard_e32s_v3', 'standard_e64s_v3' | n/a |
-| `compute.runtime_version` | String | **Optional**. Defines Spark runtime version. | `3.1`, `3.2` | `3.2`|
+| `compute.runtime_version` | String | **Optional**. Defines Spark runtime version. | `3.3` | `3.3` |
| `monitoring_target` | Object | Azure Machine Learning asset(s) associated with model monitoring. | | | | `monitoring_target.ml_task` | String | Machine learning task for the model. | Allowed values are: `classification`, `regression`, `question_answering`| | | `monitoring_target.endpoint_deployment_id` | String | **Optional**. The associated Azure Machine Learning endpoint/deployment ID in format of `azureml:myEndpointName:myDeploymentName`. This field is required if your endpoint/deployment has enabled model data collection to be used for model monitoring. | | |
Recurrence schedule defines the recurrence pattern, containing `hours`, `minutes
As the data used to train the model evolves in production, the distribution of the data can shift, resulting in a mismatch between the training data and the real-world data that the model is being used to predict. Data drift is a phenomenon that occurs in machine learning when the statistical properties of the input data used to train the model change over time. - | Key | Type | Description | Allowed values | Default value | | | - | | | - | | `type` | String | **Required**. Type of monitoring signal. Prebuilt monitoring signal processing component is automatically loaded according to the `type` specified here. | `data_drift` | `data_drift` | | `production_data` | Object | **Optional**. Description of production data to be analyzed for monitoring signal. | | | | `production_data.input_data` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `production_data.data_context` | String | The context of data, it refers model production data and could be model inputs or model outputs | `model_inputs` | |
-| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | |
-| `production_data.data_window_size` | ISO8601 format |**Optional**. Data window size in days with ISO8601 format, for example `P7D`. This is the production data window to be computed for data drift. | By default the data window size is the last monitoring period. | |
+| `production_data.data_window` | Object | **Optional**. Data window of the production data to be analyzed for the monitoring signal. | Allow either rolling data window or fixed data window only. For using rolling data window, please specify `production_data.data_window.lookback_window_offset` and `production_data.data_window.lookback_window_size` properties. For using fixed data windows, please specify `production_data.data_window.window_start` and `production_data.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
| `reference_data` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | | | `reference_data.input_data` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `reference_data.data_context` | String | The context of data, it refers to the context that dataset was used before | `model_inputs`, `training`, `test`, `validation` | |
-| `reference_data.target_column_name` | Object | **Optional**. If the 'reference_data' is training data, this property is required for monitoring top N features for data drift. | | |
-| `reference_data.data_window` | Object | **Optional**. Data window of the reference data to be used as comparison baseline data. | Allow either rolling data window or fixed data window only. For using rolling data window, please specify `reference_data.data_window.trailing_window_offset` and `reference_data.data_window.trailing_window_size` properties. For using fixed data windows, please specify `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format | |
-| `reference_data_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is **required** if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | |
+| `reference_data.data_column_names.target_column` | Object | **Optional**. If the `reference_data` is training data, this property is required for monitoring top N features for data drift. | | |
+| `reference_data.data_window` | Object | **Optional**. Data window of the reference data to be used as comparison baseline data. | Allow either rolling data window or fixed data window only. For using rolling data window, please specify `reference_data.data_window.lookback_window_offset` and `reference_data.data_window.lookback_window_size` properties. For using fixed data windows, please specify `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `reference_data_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is **required** if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
| `features` | Object | **Optional**. Target features to be monitored for data drift. Some models might have hundreds or thousands of features, it's always recommended to specify interested features for monitoring. | One of following values: list of feature names, `features.top_n_feature_importance`, or `all_features` | Default `features.top_n_feature_importance = 10` if `production_data.data_context` is `training`, otherwise, default is `all_features` | | `alert_enabled` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | List of metrics and thresholds properties for the monitoring signal. When threshold is exceeded and `alert_enabled` is `true`, user will receive alert notification. | | | | `metric_thresholds.numerical` | Object | Optional. List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed numerical metric names: `jensen_shannon_distance`, `normalized_wasserstein_distance`, `population_stability_index`, `two_sample_kolmogorov_smirnov_test`| |
-| `metric_thresholds.categorical` | Object | Optional. List of metrics and thresholds in 'key:value' format, 'key' is the metric name, 'value' is the threshold. | Allowed `categorical` metric names: `jensen_shannon_distance`, `chi_squared_test`, `population_stability_index`| |
-
+| `metric_thresholds.categorical` | Object | Optional. List of metrics and thresholds in 'key:value' format, 'key' is the metric name, 'value' is the threshold. | Allowed categorical metric names: `jensen_shannon_distance`, `chi_squared_test`, `population_stability_index`| |
#### Prediction drift Prediction drift tracks changes in the distribution of a model's prediction outputs by comparing it to validation or test labeled data or recent past production data. | Key | Type | Description | Allowed values | Default value |
-| | | | --| -|
-| `type` | String | **Required**. Type of monitoring signal. Prebuilt monitoring signal processing component is automatically loaded according to the `type` specified here | `prediction_drift` | `prediction_drift`|
+| | - | | | - |
+| `type` | String | **Required**. Type of monitoring signal. Prebuilt monitoring signal processing component is automatically loaded according to the `type` specified here. | `prediction_drift` | `prediction_drift` |
| `production_data` | Object | **Optional**. Description of production data to be analyzed for monitoring signal. | | |
-| `production_data.input_data` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification.| | |
+| `production_data.input_data` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | |
| `production_data.data_context` | String | The context of data, it refers model production data and could be model inputs or model outputs | `model_outputs` | |
-| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | |
-| `production_data.data_window_size` | ISO8601 format |**Optional**. Data window size in days with ISO8601 format, for example `P7D`. This is the production data window to be computed for prediction drift. | By default the data window size is the last monitoring period.| |
-| `reference_data` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use validation or testing data as comparison baseline. | | |
+| `production_data.data_window` | Object | **Optional**. Data window of the production data to be analyzed for the monitoring signal. | Allow either rolling data window or fixed data window only. For using rolling data window, please specify `production_data.data_window.lookback_window_offset` and `production_data.data_window.lookback_window_size` properties. For using fixed data windows, please specify `production_data.data_window.window_start` and `production_data.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`. For more information, see the [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
+| `reference_data` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | |
| `reference_data.input_data` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | |
-| `reference_data.data_context` | String | The context of data, it refers to the context that dataset come from. | `model_outputs`, `testing`, `validation` | |
-| `reference_data.target_column_name` | String | The name of target column, **Required** if the `reference_data.data_context` is `testing` or `validation` | | |
-| `reference_data.data_window` | Object | **Optional**. Data window of the reference data to be used as comparison baseline data. | Allow either rolling data window or fixed data window only. For using rolling data window, please specify `reference_data.data_window.trailing_window_offset` and `reference_data.data_window.trailing_window_size` properties. For using fixed data windows, please specify `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format | |
-| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. **Required** if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | |
+| `reference_data.data_context` | String | The context of the data. It refers to the context in which the dataset was previously used. | `model_inputs`, `training`, `test`, `validation` | |
+| `reference_data.data_column_names.target_column` | Object | **Optional**. If `reference_data` is training data, this property is required for monitoring the top N features. | | |
+| `reference_data.data_window` | Object | **Optional**. Data window of the reference data to be used as comparison baseline data. | Either a rolling or a fixed data window. For a rolling window, specify the `reference_data.data_window.lookback_window_offset` and `reference_data.data_window.lookback_window_size` properties. For a fixed window, specify the `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is **required** if `reference_data.input_data.type` is `uri_folder`. For more information, see the [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
+| `features` | Object | **Optional**. Target features to be monitored for data drift. Some models might have hundreds or thousands of features, so it's recommended that you specify the features of interest for monitoring. | One of the following values: list of feature names, `features.top_n_feature_importance`, or `all_features` | Default `features.top_n_feature_importance = 10` if `reference_data.data_context` is `training`, otherwise, the default is `all_features` |
| `alert_enabled` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | List of metrics and thresholds properties for the monitoring signal. When threshold is exceeded and `alert_enabled` is `true`, user will receive alert notification. | | |
-| `metric_thresholds.numerical` | Object | Optional. List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed numerical metric names: `jensen_shannon_distance`, `normalized_wasserstein_distance`, `population_stability_index`, `two_sample_kolmogorov_smirnov_test`| |
-| `metric_thresholds.categorical` | Object | Optional. List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed `categorical` metric names: `jensen_shannon_distance`, `chi_squared_test`, `population_stability_index`| |
-
+| `metric_thresholds.numerical` | Object | Optional. List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed numerical metric names: `jensen_shannon_distance`, `normalized_wasserstein_distance`, `population_stability_index`, `two_sample_kolmogorov_smirnov_test`| |
+| `metric_thresholds.categorical` | Object | Optional. List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed categorical metric names: `jensen_shannon_distance`, `chi_squared_test`, `population_stability_index`| |
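
To see how these keys fit together, the following fragment is a minimal sketch of a `prediction_drift` entry under the monitoring schedule's `monitoring_signals` section, assembled from the properties in the preceding table. The signal name, data asset paths, and threshold values are placeholders rather than values from a real deployment, and the surrounding schedule definition (trigger, monitoring target, compute) is omitted. For complete, working definitions, see the monitoring examples repository linked in the Examples section.

```yaml
monitoring_signals:
  my_prediction_drift:                      # placeholder signal name
    type: prediction_drift
    production_data:
      input_data:
        path: azureml:my_model_outputs:1    # placeholder data asset
        type: mltable
      data_context: model_outputs
      data_window:                          # rolling window, ISO8601 durations
        lookback_window_size: P7D
        lookback_window_offset: P0D
    reference_data:
      input_data:
        path: azureml:my_validation_data:1  # placeholder data asset
        type: mltable
      data_context: validation
    metric_thresholds:
      numerical:
        normalized_wasserstein_distance: 0.1
      categorical:
        jensen_shannon_distance: 0.1
    alert_enabled: true
```

Once a signal like this is defined in a schedule file, the monitor is created or updated with the `az ml schedule` command described in the Remarks section.
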
#### Data quality
Data quality signal tracks data quality issues in production by comparing to tra
| `production_data` | Object | **Optional**. Description of production data to be analyzed for monitoring signal. | | | | `production_data.input_data` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification.| | | | `production_data.data_context` | String | The context of data, it refers model production data and could be model inputs or model outputs | `model_inputs`, `model_outputs` | |
-| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | |
-| `production_data.data_window_size` | ISO8601 format |**Optional**. Data window size in days with ISO8601 format, for example `P7D`. This is the production data window to be computed for data quality issues. | By default the data window size is the last monitoring period.| |
+| `production_data.data_window` | Object | **Optional**. Data window of the production data to be analyzed for the monitoring signal. | Either a rolling or a fixed data window. For a rolling window, specify the `production_data.data_window.lookback_window_offset` and `production_data.data_window.lookback_window_size` properties. For a fixed window, specify the `production_data.data_window.window_start` and `production_data.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`. For more information, see the [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
| `reference_data` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | | | `reference_data.input_data` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `reference_data.data_context` | String | The context of data, it refers to the context that dataset was used before | `model_inputs`, `model_outputs`, `training`, `test`, `validation` | |
-| `reference_data.target_column_name` | Object | **Optional**. If the 'reference_data' is training data, this property is required for monitoring top N features for data drift. | | |
-| `reference_data.data_window` | Object | **Optional**. Data window of the reference data to be used as comparison baseline data. | Allow either rolling data window or fixed data window only. For using rolling data window, please specify `reference_data.data_window.trailing_window_offset` and `reference_data.data_window.trailing_window_size` properties. For using fixed data windows, please specify `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format | |
-| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | |
+| `reference_data.data_column_names.target_column` | Object | **Optional**. If `reference_data` is training data, this property is required for monitoring the top N features. | | |
+| `reference_data.data_window` | Object | **Optional**. Data window of the reference data to be used as comparison baseline data. | Either a rolling or a fixed data window. For a rolling window, specify the `reference_data.data_window.lookback_window_offset` and `reference_data.data_window.lookback_window_size` properties. For a fixed window, specify the `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `reference_data.input_data.type` is `uri_folder`. For more information, see the [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
| `features` | Object | **Optional**. Target features to be monitored for data quality. Some models might have hundreds or thousands of features. It's always recommended to specify interested features for monitoring. | One of following values: list of feature names, `features.top_n_feature_importance`, or `all_features` | Default to `features.top_n_feature_importance = 10` if `reference_data.data_context` is `training`, otherwise default is `all_features` | | `alert_enabled` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | List of metrics and thresholds properties for the monitoring signal. When threshold is exceeded and `alert_enabled` is `true`, user will receive alert notification. | | | | `metric_thresholds.numerical` | Object | **Optional** List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed numerical metric names: `data_type_error_rate`, `null_value_rate`, `out_of_bounds_rate`| |
-| `metric_thresholds.categorical` | Object | **Optional** List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed `categorical` metric names: `data_type_error_rate`, `null_value_rate`, `out_of_bounds_rate`| |
+| `metric_thresholds.categorical` | Object | **Optional** List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed categorical metric names: `data_type_error_rate`, `null_value_rate`, `out_of_bounds_rate`| |
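
The data quality signal follows the same overall shape as the prediction drift sketch shown earlier. The next fragment is a minimal sketch that assumes the signal `type` value is `data_quality` (matching this section's heading) and uses a fixed reference data window plus the data quality metrics from the preceding table; the signal name, data asset paths, dates, column name, and thresholds are placeholders.

```yaml
monitoring_signals:
  my_data_quality:                          # placeholder signal name
    type: data_quality
    production_data:
      input_data:
        path: azureml:my_model_inputs:1     # placeholder data asset
        type: mltable
      data_context: model_inputs
    reference_data:
      input_data:
        path: azureml:my_training_data:1    # placeholder data asset
        type: mltable
      data_context: training
      data_column_names:
        target_column: my_target_column     # placeholder column name
      data_window:                          # fixed window, ISO8601 dates (placeholders)
        window_start: "2024-01-01"
        window_end: "2024-01-08"
    features:
      top_n_feature_importance: 10
    metric_thresholds:
      numerical:
        null_value_rate: 0.05
      categorical:
        data_type_error_rate: 0.05
    alert_enabled: true
```
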
-#### Feature attribution drift
+#### Feature attribution drift (preview)
The feature attribution of a model may change over time due to changes in the distribution of data, changes in the relationships between features, or changes in the underlying problem being solved. Feature attribution drift is a phenomenon that occurs in machine learning models when the importance or contribution of features to the prediction output changes over time.
The feature attribution of a model may change over time due to changes in the di
| `type` | String | **Required**. Type of monitoring signal. Prebuilt monitoring signal processing component is automatically loaded according to the `type` specified here | `feature_attribution_drift` | `feature_attribution_drift` | | `production_data` | Array | **Optional**, default to collected data associated with Azure Machine Learning endpoint if this is not provided. The `production_data` is a list of dataset and its associated meta data, it must include both model inputs and model outputs data. It could be a single dataset with both model inputs and outputs, or it could be two separate datasets containing one model inputs and one model outputs.| | | | `production_data.input_data` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification.| | |
+| `production_data.input_data.data_column_names` | Object | Correlation column name and prediction column names in `key:value` format, needed for data joining. | Allowed keys are: `correlation_id`, `target_column` | |
| `production_data.data_context` | String | The context of data. It refers to production model inputs data. | `model_inputs`, `model_outputs`, `model_inputs_outputs` | |
-| `production_data.data_column_names` | Object | Correlation column name and prediction column names in `key:value` format, needed for data joining. | Allowed keys are: `correlation_id`, `prediction`, `prediction_probability` |
-| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | |
+| `production_data.data_window` | Object | **Optional**. Data window of the production data to be analyzed for the monitoring signal. | Either a rolling or a fixed data window. For a rolling window, specify the `production_data.data_window.lookback_window_offset` and `production_data.data_window.lookback_window_size` properties. For a fixed window, specify the `production_data.data_window.window_start` and `production_data.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`. For more information, see the [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
| `production_data.data_window_size` | String |**Optional**. Data window size in days with ISO8601 format, for example `P7D`. This is the production data window to be computed for data quality issues. | By default the data window size is the last monitoring period.| | | `reference_data` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | | | `reference_data.input_data` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | | | `reference_data.data_context` | String | The context of data, it refers to the context that dataset was used before. For feature attribution drift, only `training` data is allowed. | `training` | |
-| `reference_data.target_column_name` | String | **Required**. | | |
-| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `reference_data.input_data.type` is `uri_folder`, see [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-your-production-data-to-azure-machine-learning). | | |
+| `reference_data.data_column_names.target_column` | String | **Required**. | | |
+| `reference_data.data_window` | Object | **Optional**. Data window of the reference data to be used as comparison baseline data. | Either a rolling or a fixed data window. For a rolling window, specify the `reference_data.data_window.lookback_window_offset` and `reference_data.data_window.lookback_window_size` properties. For a fixed window, specify the `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `reference_data.input_data.type` is `uri_folder`. For more information, see the [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
| `alert_enabled` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | | | `metric_thresholds` | Object | Metric name and threshold for feature attribution drift in `key:value` format, where `key` is the metric name, and `value` is the threshold. When threshold is exceeded and `alert_enabled` is on, user will receive alert notification. | Allowed metric name: `normalized_discounted_cumulative_gain` | |
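
Because `production_data` is a list for this signal, the model inputs and model outputs can be supplied as two separate datasets joined on a correlation column. The following fragment is a minimal sketch assembled from the keys in the preceding table; the signal name, data asset paths, and column names are placeholders.

```yaml
monitoring_signals:
  my_feature_attribution_drift:             # placeholder signal name
    type: feature_attribution_drift
    production_data:                        # list of datasets: model inputs and outputs
      - input_data:
          path: azureml:my_model_inputs:1   # placeholder data asset
          type: mltable
          data_column_names:
            correlation_id: correlationid   # placeholder join column
        data_context: model_inputs
      - input_data:
          path: azureml:my_model_outputs:1  # placeholder data asset
          type: mltable
          data_column_names:
            correlation_id: correlationid
            target_column: my_prediction    # placeholder prediction column
        data_context: model_outputs
    reference_data:
      input_data:
        path: azureml:my_training_data:1    # placeholder data asset
        type: mltable
      data_context: training                # only training data is allowed here
      data_column_names:
        target_column: my_target_column     # placeholder column name
    metric_thresholds:
      normalized_discounted_cumulative_gain: 0.9
    alert_enabled: true
```
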
+#### Custom monitoring signal
+
+A custom monitoring signal is processed through a custom Azure Machine Learning component that you provide.
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | | | - |
+| `type` | String | **Required**. Type of monitoring signal. Prebuilt monitoring signal processing component is automatically loaded according to the `type` specified here. | `custom` | `custom` |
+| `component_id` | String | **Required**. The Azure Machine Learning component ID corresponding to your custom signal. For example `azureml:mycustomcomponent:1` | | |
+| `input_data` | Object | **Optional**. Description of the input data to be analyzed by the monitoring signal, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | |
+| `input_data.<data_name>.data_context` | String | The context of the data. It refers to model production data and can be model inputs or model outputs. | `model_inputs` | |
+| `input_data.<data_name>.data_window` | Object | **Optional**. Data window of the input data to be analyzed by the monitoring signal. | Either a rolling or a fixed data window. For a rolling window, specify the `input_data.<data_name>.data_window.lookback_window_offset` and `input_data.<data_name>.data_window.lookback_window_size` properties. For a fixed window, specify the `input_data.<data_name>.data_window.window_start` and `input_data.<data_name>.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `input_data.<data_name>.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `input_data.<data_name>.input_data.type` is `uri_folder`. For more information, see the [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
+| `alert_enabled` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | |
+| `metric_thresholds.metric_name` | Object | Name of the custom metric. | | |
+| `threshold` | Object | Acceptable threshold for the custom metric. | | |
+
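The following fragment is a minimal sketch of a `custom` signal entry based on the keys in the preceding table. The signal name, component ID, the `<data_name>` entry (shown here as `production_data`), the data asset path, the metric name, and the threshold are all placeholders defined by your custom component, and the `metric_thresholds` entries are shown as a list of `metric_name`/`threshold` pairs following the last two rows of the table.

```yaml
monitoring_signals:
  my_custom_signal:                           # placeholder signal name
    type: custom
    component_id: azureml:mycustomcomponent:1  # placeholder custom component
    input_data:
      production_data:                        # placeholder <data_name>
        input_data:
          path: azureml:my_model_inputs:1     # placeholder data asset
          type: mltable
        data_context: model_inputs
        data_window:                          # rolling window, ISO8601 durations
          lookback_window_size: P7D
          lookback_window_offset: P0D
    metric_thresholds:
      - metric_name: my_custom_metric         # placeholder metric produced by the component
        threshold: 0.5
    alert_enabled: true
```
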
+#### Model performance (preview)
+
+Model performance tracks the objective performance of a model's output in production by comparing it to collected ground truth data.
+
+| Key | Type | Description | Allowed values | Default value |
+| | | | --| -|
+| `type` | String | **Required**. Type of monitoring signal. Prebuilt monitoring signal processing component is automatically loaded according to the `type` specified here | `model_performance` | `model_performance` |
+| `production_data` | Array | **Optional**. Defaults to the collected data associated with the Azure Machine Learning endpoint if this isn't provided. The `production_data` value is a list of datasets and their associated metadata, and it must include both model input and model output data. It can be a single dataset with both model inputs and outputs, or two separate datasets, one containing model inputs and one containing model outputs.| | |
+| `production_data.input_data` | Object | **Optional**. Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification.| | |
+| `production_data.input_data.data_column_names` | Object | Correlation column name and prediction column names in `key:value` format, needed for data joining. | Allowed keys are: `correlation_id`, `target_column` | |
+| `production_data.data_context` | String | The context of data. It refers to production model inputs data. | `model_inputs`, `model_outputs`, `model_inputs_outputs` | |
+| `production_data.data_window` | Object | **Optional**. Data window of the production data to be analyzed for the monitoring signal. | Either a rolling or a fixed data window. For a rolling window, specify the `production_data.data_window.lookback_window_offset` and `production_data.data_window.lookback_window_size` properties. For a fixed window, specify the `production_data.data_window.window_start` and `production_data.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `production_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `production_data.input_data.type` is `uri_folder`. For more information, see the [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
+| `production_data.data_window_size` | String |**Optional**. Data window size in days with ISO8601 format, for example `P7D`. This is the production data window to be computed for the monitoring signal. | By default the data window size is the last monitoring period.| |
+| `reference_data` | Object | **Optional**. Recent past production data is used as comparison baseline data if this isn't specified. Recommendation is to use training data as comparison baseline. | | |
+| `reference_data.input_data` | Object | Description of input data source, see [job input data](./reference-yaml-job-command.md#job-inputs) specification. | | |
+| `reference_data.data_context` | String | The context of the data. It refers to the context in which the dataset was previously used. For model performance, only `training` data is allowed. | `training` | |
+| `reference_data.data_column_names.target_column` | String | **Required**. | | |
+| `reference_data.data_window` | Object | **Optional**. Data window of the reference data to be used as comparison baseline data. | Either a rolling or a fixed data window. For a rolling window, specify the `reference_data.data_window.lookback_window_offset` and `reference_data.data_window.lookback_window_size` properties. For a fixed window, specify the `reference_data.data_window.window_start` and `reference_data.data_window.window_end` properties. All property values must be in ISO8601 format. | |
+| `reference_data.pre_processing_component` | String | Component ID in the format of `azureml:myPreprocessing@latest` for a registered component. This is required if `reference_data.input_data.type` is `uri_folder`. For more information, see the [preprocessing component specification](./how-to-monitor-model-performance.md#set-up-model-monitoring-by-bringing-in-your-production-data-to-azure-machine-learning). | | |
+| `alert_enabled` | Boolean | Turn on/off alert notification for the monitoring signal. `True` or `False` | | |
+| `metric_thresholds.classification` | Object | **Optional** List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed `classification` metric names: `accuracy`, `precision`, `recall`| |
+| `metric_thresholds.regression` | Object | **Optional** List of metrics and thresholds in `key:value` format, `key` is the metric name, `value` is the threshold. | Allowed `regression` metric names: `mae`, `mse`, `rmse`| |
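
The `production_data` and `reference_data` blocks for this signal follow the same shape as the feature attribution drift sketch shown earlier. The following abbreviated sketch highlights the signal-specific `metric_thresholds` block; the signal name and metric values are placeholders, and the elided blocks are marked with comments.

```yaml
monitoring_signals:
  my_model_performance:         # placeholder signal name
    type: model_performance
    # production_data: same list shape as the feature attribution drift sketch
    # reference_data: training data with data_column_names.target_column (ground truth)
    metric_thresholds:
      classification:           # use `regression` (mae, mse, rmse) for regression models
        accuracy: 0.9
        precision: 0.8
    alert_enabled: true
```
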
## Remarks
The `az ml schedule` command can be used for managing Azure Machine Learning mod
## Examples
-Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/schedules). A couple are as follows:
+Monitoring CLI examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/monitoring). A couple are as follows:
-## YAML: Schedule with recurrence pattern
+## YAML: Out-of-box monitor
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
-## YAML: Schedule with cron expression
+## YAML: Advanced monitor
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] ## Appendix
Current schedule supports the following timezones. The key can be used directly
| UTC +12:45 | CHATHAM_ISLANDS_STANDARD_TIME | "Chatham Islands Standard Time" | | UTC +13:00 | TONGA__STANDARD_TIME | "Tonga Standard Time" | | UTC +13:00 | SAMOA_STANDARD_TIME | "Samoa Standard Time" |
-| UTC +14:00 | LINE_ISLANDS_STANDARD_TIME | "Line Islands Standard Time" |
-
+| UTC +14:00 | LINE_ISLANDS_STANDARD_TIME | "Line Islands Standard Time" |
machine-learning Tutorial Network Isolation For Feature Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-network-isolation-for-feature-store.md
Last updated 03/20/2024 -+ #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
notification-hubs Monitor Notification Hubs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/monitor-notification-hubs-reference.md
+
+ Title: Monitoring data reference for Azure Notification Hubs
+description: This article contains important reference material you need when you monitor Azure Notification Hubs.
Last updated : 03/21/2024+++++++
+# Azure Notification Hubs monitoring data reference
++
+See [Monitor Notification Hubs](monitor-notification-hubs.md) for details on the data you can collect for Azure Notification Hubs and how to use it.
++
+### Supported metrics for Microsoft.NotificationHubs/namespaces/notificationHubs
+The following table lists the metrics available for the Microsoft.NotificationHubs/namespaces/notificationHubs resource type.
++++
+### Supported resource logs for Microsoft.NotificationHubs/namespaces
+
+### Supported resource logs for Microsoft.NotificationHubs/namespaces/notificationHubs
+
+<!-- No table(s) at https://learn.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype. -->
+
+Azure Notification Hubs supports operational logs, which capture management operations that are performed on the Notification Hubs namespace. All logs are stored in JavaScript Object Notation (JSON) format in the following two locations:
+
+- **AzureActivity**: Displays logs from operations and actions that are conducted against the namespace in the Azure portal or through Azure Resource Manager template deployments.
+- **AzureDiagnostics**: Displays logs from operations and actions that are conducted against the namespace by using the API, or through management clients on the language SDK.
+
+Diagnostic log JSON strings include the elements listed in the following table:
+
+| Name | Description |
+| - | - |
+| time | UTC timestamp of the log |
+| resourceId | Relative path to the Azure resource |
+| operationName | Name of the management operation |
+| category | Log category. Valid values: `OperationalLogs` |
+| callerIdentity | Identity of the caller who initiated the management operation |
+| resultType | Status of the management operation. Valid values: `Succeeded` or `Failed` |
+| resultDescription | Description of the management operation |
+| correlationId | Correlation ID of the management operation (if specified) |
+| callerIpAddress | The caller IP address. Empty for calls that originated from the Azure portal |
++
+Operational logs capture all management operations that are performed on the Azure Notification Hubs namespace. Data operations aren't captured, because of the high volume of data operations that are conducted on notification hubs.
+
+[Microsoft.NotificationHubs resource provider operations](/azure/role-based-access-control/permissions/integration#microsoftnotificationhubs) lists all the management operations that are captured in operational logs.
+
+## Related content
+
+- See [Monitor Notification Hubs](monitor-notification-hubs.md) for a description of monitoring Notification Hubs.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
notification-hubs Monitor Notification Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/monitor-notification-hubs.md
+
+ Title: Monitor Azure Notification Hubs
+description: Start here to learn how to monitor Azure Notification Hubs.
Last updated : 03/21/2024+++++++
+# Monitor Azure Notification Hubs
+
+<!-- Intro. Required. -->
+
+For more information about the resource types for Azure Notification Hubs, see [Notification Hubs monitoring data reference](monitor-notification-hubs-reference.md).
+++
+For a list of available metrics for Notification Hubs, see [Notification Hubs monitoring data reference](monitor-notification-hubs-reference.md#metrics).
++
+### Notification Hubs logs
+
+Notification Hubs supports activity and operational logs, which capture management operations that are performed on the Notification Hubs namespace. Data operations aren't captured, because of the high volume of data operations that are conducted on notification hubs.
+
+You can archive the diagnostic logs to a storage account or stream them to an event hub. Sending the logs to a Log Analytics workspace isn't currently supported.
+
+- For more information about the logs and instructions for enabling log collection, see [Enable diagnostics logs for Notification Hubs](notification-hubs-diagnostic-logs.md).
+
+- For the available resource log categories, associated Log Analytics tables, and the management operations captured in operational logs, see [Notification Hubs monitoring data reference](monitor-notification-hubs-reference.md#resource-logs).
++
+## Azure Notification Hubs REST APIs
+
+The [Notification Hubs REST APIs](/rest/api/notificationhubs) fall into the following categories:
+
+- **Azure Resource
+- **Notification Hubs service:** APIs that enable operations directly on the Notification Hubs service, and have `<namespaceName>.servicebus.windows.net/` in the request URI.
+
+The [Get notification message telemetry](/rest/api/notificationhubs/get-notification-message-telemetry) API helps monitor push notifications sent from a hub by providing telemetry on the finished states of outgoing push notifications. The Notification ID that this API uses can be retrieved from the HTTP Location header included in the response of the REST API used to send the notification.
++++
+### Sample Kusto queries
+
+Failed operations:
+
+```kusto
+// List all reports of failed operations over the past hour.
+AzureActivity
+| where TimeGenerated > ago(1h)
+| where ActivityStatus == "Failed"
+```
+
+Errors:
+
+```kusto
+// List all the errors for the past 7 days.
+AzureDiagnostics
+| where TimeGenerated > ago(7d)
+| where ResourceProvider =="MICROSOFT.NOTIFICATIONHUBS"
+| where Category == "OperationalLogs"
+| summarize count() by OperationName, _ResourceId
+```
++
+### Notification Hubs alert rules
+
+The following table lists some suggested alert rules for Notification Hubs. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry that's listed in the [Notification Hubs monitoring data reference](monitor-notification-hubs-reference.md).
+
+| Alert type | Condition | Description |
+|:|:|:|
+| Platform metric | Payload Errors | Whenever the count of pushes that failed because the push notification service (PNS) returned a bad payload error is greater than a dynamic threshold |
+| Activity log | Delete Namespace (Namespace) | Whenever the Activity Log has an event with Category='Administrative', Signal name='Delete Namespace (Namespace)' |
++
+## Related content
+
+- See [Notification Hubs monitoring data reference](monitor-notification-hubs-reference.md) for a reference of the metrics, logs, and other important values created for Notification Hubs.
+- See [Enable diagnostics logs for Notification Hubs](notification-hubs-diagnostic-logs.md) for information about diagnostic logs for Notification Hubs and how to enable them.
+- See [Get notification message telemetry](/rest/api/notificationhubs/get-notification-message-telemetry) for information about using the API to monitor push notification success.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
notification-hubs Notification Hubs Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-diagnostic-logs.md
Title: Azure Notification Hubs diagnostics logs | Microsoft Docs
-description: This article provides an overview of all the operational and diagnostics logs that are available for Azure Notification Hubs.
+description: Learn about the operational and diagnostics logs that are available for Azure Notification Hubs, and how to enable diagnostic logging.
Previously updated : 10/23/2023 Last updated : 03/12/2024 # Enable diagnostics logs for Notification Hubs
For calls originating from the Azure portal the `identity` field is empty. The l
} ```
-For calls made through Azure Resource Manager the `identity` field will contain the username of the logged in user.
+For calls made through Azure Resource Manager the `identity` field contains the username of the logged in user.
```json {
For calls made through Azure Resource Manager the `identity` field will contain
} ```
-For calls to the Notification Hubs REST API the `identity` field will contain the name of the access policy used to generate the SharedAccessSignature token.
+For calls to the Notification Hubs REST API the `identity` field contains the name of the access policy used to generate the SharedAccessSignature token.
```json {
For calls to the Notification Hubs REST API the `identity` field will contain th
## Events and operations captured in operational logs
-Operational logs capture all management operations that are performed on the Azure Notification Hubs namespace. Data operations are not captured, because of the high volume of data operations that are conducted on Azure Notification Hubs.
-
-The following management operations are captured in operational logs:
-
-| Scope | Operation Name | Operation Description |
-| :-- | :-- | :-- |
-| Namespace | Microsoft.NotificationHubs/Namespaces/authorizationRules/action | List Authorization Rules |
-| Namespace | Microsoft.NotificationHubs/Namespaces/authorizationRules/delete | Delete Authorization Rule |
-| Namespace | Microsoft.NotificationHubs/Namespaces/authorizationRules/listkeys/action | List Keys |
-| Namespace | Microsoft.NotificationHubs/Namespaces/authorizationRules/read | Get Authorization Rule |
-| Namespace | Microsoft.NotificationHubs/Namespaces/authorizationRules/regenerateKeys/action | Regenerate Keys |
-| Namespace | Microsoft.NotificationHubs/Namespaces/authorizationRules/write | Create or Update Authorization Rule |
-| Namespace | Microsoft.NotificationHubs/Namespaces/delete | Delete Namespace |
-| Namespace | Microsoft.NotificationHubs/Namespaces/read | Get Namespace |
-| Namespace | Microsoft.NotificationHubs/Namespaces/write | Create or Update Namespace |
-| Notification Hub | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/action | List Authorization Rules |
-| Notification Hub | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/delete | Delete Authorization Rule |
-| Notification Hub | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/listkeys/action | List Keys |
-| Notification Hub | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/read | Read Authorization Rule |
-| Notification Hub | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/regenerateKeys/action | Regenerate Keys |
-| Notification Hub | Microsoft.NotificationHubs/Namespaces/NotificationHubs/authorizationRules/write | Create or Update Authorization Rule |
-| Notification Hub | Microsoft.NotificationHubs/Namespaces/NotificationHubs/delete | Delete Notification Hub |
-| Notification Hub | Microsoft.NotificationHubs/Namespaces/NotificationHubs/pnsCredentials/action | Create, Update, or Get PNS Credentials |
-| Notification Hub | Microsoft.NotificationHubs/Namespaces/NotificationHubs/read | Get Notification Hub |
-| Notification Hub | Microsoft.NotificationHubs/Namespaces/NotificationHubs/write | Create or Update Notification Hub |
-
-## Enable operational logs
+Operational logs capture all management operations that are performed on the Azure Notification Hubs namespace. Data operations aren't captured, because of the high volume of data operations that are conducted on notification hubs.
+
+For a list of the management operations that are captured in operational logs, see [Microsoft.NotificationHubs resource provider operations](/azure/role-based-access-control/permissions/integration#microsoftnotificationhubs).
+
+### Enable operational logs
Operational logs are disabled by default. To enable logs, do the following:
Operational logs are disabled by default. To enable logs, do the following:
![The "Diagnostic settings" link](./media/notification-hubs-diagnostic-logs/image-1.png)
-1. In the **Diagnostics settings** pane, select **Add diagnostic setting**.
+1. In the **Diagnostics settings** pane, select **Add diagnostic setting**.
![The "Add diagnostic setting" link](./media/notification-hubs-diagnostic-logs/image-2.png)
Operational logs are disabled by default. To enable logs, do the following:
The new settings take effect in about 10 minutes. The logs are displayed in the configured archival target, in the **Diagnostics logs** pane.
-## Next steps
+## Related content
To learn more about configuring diagnostics settings, see: * [Overview of Azure diagnostics logs](../azure-monitor/essentials/platform-logs-overview.md).
openshift Howto Infrastructure Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-infrastructure-nodes.md
Use this procedure for any additional ingress controllers you may have in the cl
grafana-599d4b948c-btlp2 3/3 Running 0 2m48s 10.131.4.10 cz-cluster-hsmtw-infra-aro-machinesets-eastus-1-vr56r <none> <none> kube-state-metrics-574c5bfdd7-f7fjk 3/3 Running 0 2m49s 10.131.4.8 cz-cluster-hsmtw-infra-aro-machinesets-eastus-1-vr56r <none> <none> ```+
+### DNS
+
+1. Allow the DNS pods to run on the infrastructure nodes.
+
+ ```
+ oc edit dns.operator/default
+ ```
+
+ ```
+ apiVersion: operator.openshift.io/v1
+ kind: DNS
+ metadata:
+ name: default
+ spec:
+ nodePlacement:
+ tolerations:
+ - operator: Exists
+ ```
+1. Verify that DNS pods are scheduled onto all infra nodes.
+
+```
+oc get ds/dns-default -n openshift-dns
+NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
+dns-default 7 7 7 7 7 kubernetes.io/os=linux 35d
+```
+
operator-5g-core Concept Observability Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-5g-core/concept-observability-analytics.md
Previously updated : 02/21/2024 Last updated : 03/29/2024
Azure Operator 5G Core uses the following open source components for observabili
|Logs |Elasticsearch, Fluentd, and Kibana (EFK); Elastalert | |Tracing |Jaeger, OpenTelemetry Collector |
-## EFK logging framework
+## Logging framework
Elasticsearch, Fluentd, and Kibana (EFK) provide a distributed logging system used for collecting and visualizing the logs to troubleshoot microservices. ### Architecture
The following diagram shows EFK architecture:
[:::image type="content" source="media/concept-observability-analytics/elasticsearch-fluentd-kibana-architecture.png" alt-text="Diagram of text boxes showing the Elasticsearch, Fluentd, and Kibana (EFK) distributed logging system used to troubleshoot microservices in Azure Operator 5G Core.":::](media/concept-observability-analytics/elasticsearch-fluentd-kibana-architecture-expanded.png#lightbox) > [!NOTE]
-> The linked content is available only to customers with a current Affirmed Networks support agreement. To access the content, you must have Affirmed Networks login credentials. If you need assistance, please speak to the Affirmed Networks Support Team.
+> Sections of the following linked content are available only to customers with a current Affirmed Networks support agreement. To access the content, you must have Affirmed Networks login credentials. If you need assistance, please speak to the Affirmed Networks Support Team.
-The EFK logging framework includes the following components:
+The logging framework includes the following components:
- **Fluentd** - Fluentd is an open-source log collector. Fluentd allows you to unify data collection and consumption for better use and understanding of the data. Fluentd is deployed as a DaemonSet in the Kubernetes cluster. It collects the logs in each K8s node and streams the logs to Elasticsearch. See [Logs supported by Fluentd](https://manuals.metaswitch.com/UC/4.3.0/UnityCloud_Overview/Content/PaaS_Components/EFK_logging_FrameWork/Fluentd-logs-supported.htm).+ - **Elasticsearch** - Elasticsearch is an open source, distributed, real-time search back-end. Elasticsearch stores the logs securely and offers an HTTP web interface for log analysis. - **Kibana** - Kibana is used to visualize the logs stored in Elasticsearch. Kibana pulls the logs from Elasticsearch.
The EFK logging framework includes the following components:
### Features
-The EFK logging framework provides the following features:
+The logging framework provides the following features:
- **Log collection and streaming** - Fluentd collects and streams the logs to Elasticsearch.
This section describes the observability features (dashboards, statistics, logs,
#### Dashboards
-EFK supports various dashboard options, including:
+Various dashboards are supported, including:
- Grafana dashboards (see [Logging framework dashboards](https://manuals.metaswitch.com/UC/4.3.0/UnityCloud_Overview/Content/PaaS_Components/EFK_logging_FrameWork/EFK_Dashboards.htm)) - Kibana dashboards (see [Kibana dashboard overview](https://manuals.metaswitch.com/UC/4.3.0/UnityCloud_Overview/Content/PaaS_Components/EFK_logging_FrameWork/Kibana_Dashboards.htm))
For information about Elastic events, see [Elastic events](https://manuals.metas
#### Log visualization
-The EFK framework aggregates logs from nodes and applications running inside your Azure Operator 5G Core installation. When logging is enabled, the EFK framework uses Fluentd to aggregate event logs from all applications and nodes into Elasticsearch. The EFK framework also provides a centralized Kibana web UI where users can view the logs or create rich visualizations and dashboards with the aggregated data.
+The framework aggregates logs from nodes and applications running inside your Azure Operator 5G Core installation. When logging is enabled, the EFK framework uses Fluentd to aggregate event logs from all applications and nodes into Elasticsearch. The EFK framework also provides a centralized Kibana web UI where users can view the logs or create rich visualizations and dashboards with the aggregated data.
## Metrics framework
The core components of the metrics framework are:
- **Prometheus server** - The Prometheus server collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and triggers alerts if certain conditions are true. Azure Operator 5G Core supports integration with the Prometheus server out of the box, with minimal required configuration. - **Client libraries** - Client libraries instrument the application code. -- **Alertmanager** - Alertmanager handles alerts sent by client applications such as the Prometheus server. It handles deduplicating, grouping, and routing alerts to the correct receiver integrations (email, slack, etc.). Alertmanager also supports silencing and inhibition of alerts.
+- **AlertManager** - AlertManager handles alerts sent by client applications such as the Prometheus server. It handles deduplicating, grouping, and routing alerts to the correct receiver integrations (email, slack, etc.). AlertManager also supports silencing and inhibition of alerts.
- **Grafana** - Grafana provides an out of the box set of dashboards rich with 3GPP and other KPIs to query, visualize, and understand the collected data. The Grafana audit feature provides a mechanism to restore or recreate dashboards in the Grafana server when Grafana server pod restarts. The audit feature also helps to delete any stale dashboards from the Grafana server.
The metrics framework supports the following features:
- Multiple modes of graphing and dashboarding support. For more information about Prometheus, see [Prometheus documentation](https://prometheus.io/docs/introduction/overview/).
-For more information about Grafana, see [Grafana open source documentation](https://grafana.com/docs/grafana/latest/)
+For more information about Grafana, see [Grafana open source documentation](https://grafana.com/docs/grafana/latest/).
### Observability
IstioHTTPRequestLatencyTooHigh: Requests are taking more than the &lt;configured
- **HTTPClientRespRcvd5xxPercentageTooHigh** - HTTP client response received with 5xx error and the received error percentage is more than the &lt;configured_value&gt; %. - **HTTPClientRespRcvd4xxPercentageTooHigh** - HTTP client response received with 4xx error and the received error percentage is more than the &lt;configured_value&gt; %.
+## Tracing framework
+ #### Jaeger tracing with OpenTelemetry Protocol Azure Operator 5G Core uses the OpenTelemetry Protocol (OTLP) in Jaeger tracing. OTLP replaces the Jaeger agent in fed-paas-helpers. Azure Operator 5G Core deploys the fed-otel_collector federation. The OpenTelemetry (OTEL) Collector runs as part of the fed-otel_collector namespace:
operator-5g-core Quickstart Configure Extension For Status Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-5g-core/quickstart-configure-extension-for-status-monitoring.md
+
+ Title: Configure the Azure Operator 5G Core extension for status monitoring
+description: Learn how to ensure your deployment is running at its highest capacity by performing health checks post-deployment.
+++++ Last updated : 02/21/2024++
+#CustomerIntent: As a < type of user >, I want < what? > so that < why? >.
+
+# Configure the Azure Operator 5G Core Preview extension for status monitoring
+
+After Azure Operator 5G Core Preview is deployed, you can perform health and configuration checks on the deployment. You must enable an Azure Arc extension to monitor your deployment.
+
+## Set up the Azure CLI
+
+1. Sign in using the `az login --use-device-code` command. Complete the sign-in process with your user account.
+1. Set the subscription: `az account set -s <subscriptionName>`
+1. Run the following commands to install the CLI extensions:
+
+```azurecli
+ az extension add --yes --name connectedk8s
+ az extension add --yes --name k8s-configuration
+ az extension add --yes --name k8s-extension
+```
+
+## Configure Azure Arc for the Kubernetes/Azure Kubernetes Service cluster
+
+Enter the following command to configure Azure Arc:
+
+```azurecli
+az connectedk8s connect --name <ConnectedK8sName> --resource-group <ResourceGroupName>
+```
+
+## Deploy the Azure Operator 5G Core Preview extension
+
+1. Enter the following commands to deploy the Azure Operator 5G Core extension:
+
+ ```azurecli
+ az k8s-extension create \
+ --name ao5gc-monitor \
+ --cluster-name <ConnectedK8sName> \
+ --resource-group <ResourceGroupName> \
+ --cluster-type connectedClusters \
+ --extension-type "Microsoft.AO5GC" \
+ --release-train <dev or preview or stable>\
+ --auto-upgrade true
+ ```
+
+2. Run the following command to create a **name=ao5gc-monitor** label for the newly created **ao5gc-monitor** namespace:
+
+ ```azurecli
+ kubectl label namespace ao5gc-monitor name=ao5gc-monitor
+ ```
+ The namespace and all necessary Azure Operator 5G Core extension pods, configuration maps, and services are created within the namespace.
+
+To delete the Azure Operator 5G Core extension, you can run the following command:
+
+```azurecli
+az k8s-extension delete \
+--name ao5gc-monitor \
+--cluster-name <ConnectedK8sName> \
+--resource-group <ResourceGroupName> \
+--cluster-type connectedClusters
+```
+## Related content
+
+- [Monitor the status of your Azure Operator 5G Core Preview deployment](quickstart-monitor-deployment-status.md)
+- [Observability and analytics in Azure Operator 5G Core Preview](concept-observability-analytics.md)
operator-5g-core Quickstart Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-5g-core/quickstart-subscription.md
Access is currently limited. For now, we're working with customers that have an
[What is Azure Operator 5G Core?](overview-product.md) [Deploy Azure Operator 5G Core](quickstart-deploy-5g-core.md)
-[Deployment order for clusters, network functions, and observability](concept-deployment-order.md)
+[Deployment order for clusters, network functions, and observability.](concept-deployment-order.md)
operator-nexus Howto Kubernetes Cluster Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-dual-stack.md
+
+ Title: Create dual-stack Azure Operator Nexus Kubernetes cluster
+description: Learn how to create dual-stack Azure Operator Nexus Kubernetes cluster.
++++ Last updated : 03/28/2024+++
+# Create dual-stack Azure Operator Nexus Kubernetes cluster
+
+In this article, you learn how to create a dual-stack Nexus Kubernetes cluster. The dual-stack networking feature helps to enable both IPv4 and IPv6 communication in a Kubernetes cluster, allowing for greater flexibility and scalability in network communication. The focus of this guide is on the configuration aspects, providing examples to help you understand the process. By following this guide, you're able to effectively create a dual-stack Nexus Kubernetes cluster.
+
+In a dual-stack Kubernetes cluster, both the nodes and the pods are configured with an IPv4 and an IPv6 network address. This means that any pod that runs on a dual-stack cluster is assigned both IPv4 and IPv6 addresses, and the cluster nodes' CNI (Container Network Interface) interface is also assigned both an IPv4 and an IPv6 address. However, any attached Multus interfaces, such as SR-IOV/DPDK, are the responsibility of the application owner and must be configured accordingly.
+
+<!-- Network Address Translation (NAT) is configured to enable pods to access resources within the local network infrastructure. The source IP address of the traffic from the pods (either IPv4 or IPv6) is translated to the node's primary IP address corresponding to the same IP family (IPv4 to IPv4 and IPv6 to IPv6). This setup ensures seamless connectivity and resource access for the pods within the on-premises environment. -->
+
+## Prerequisites
+
+Before proceeding with this how-to guide, it's recommended that you:
+
+* Refer to the Nexus Kubernetes cluster [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-bicep.md) for a comprehensive overview and steps involved.
+* Ensure that you meet the outlined prerequisites to ensure smooth implementation of the guide.
+* Have knowledge of Kubernetes concepts, including deployments and services.
+* Ensure the Layer 3 (L3) network used for the `cniNetworkId` has both IPv4 and IPv6 addresses.
+
+## Limitations
+
+* Single stack IPv6-only isn't supported for node or pod IP addresses. Workload Pods and services can use dual-stack (IPv4/IPv6).
+* Kubernetes administration API access to the cluster is IPv4 only. Any kubeconfig must use IPv4 because kube-vip for the Kubernetes API server only sets up an IPv4 address.
+* Network Address Translation for IPv6 is disabled by default. If you need NAT for IPv6, you must enable it manually.
+
+## Configuration options
+
+Operator Nexus Kubernetes dual-stack networking relies on the pod and service CIDR to enable both IPv4 and IPv6 communication. Before configuring the dual-stack networking, it's important to understand the various configuration options available. These options allow you to define the behavior and parameters of the dual-stack networking according to your specific requirements. Let's explore the configuration options for dual-stack networking.
+
+### Required parameters
+
+To configure dual-stack networking in your Operator Nexus Kubernetes cluster, you need to define the `Pod` and `Service` CIDRs. These configurations are essential for defining the IP address range for Pods and Kubernetes services in the cluster.
+
+* The `podCidrs` parameter takes a list of CIDR notation IP ranges to assign pod IPs from. For example, `["10.244.0.0/16", "fd12:3456:789a::/64"]`.
+* The `serviceCidrs` parameter takes a list of CIDR notation IP ranges to assign service IPs from. For example, `["10.96.0.0/16", "fd12:3456:789a:1::/108"]`.
+* The IPv6 subnet assigned to `serviceCidrs` can be no larger than a `/108`.
+
+## Bicep template parameters for dual-stack configuration
+
+The following JSON snippet shows the parameters required for creating dual-stack cluster in the [QuickStart Bicep template](./quickstarts-kubernetes-cluster-deployment-bicep.md).
+
+```json
+ "podCidrs": {
+ "value": ["10.244.0.0/16", "fd12:3456:789a::/64"]
+ },
+ "serviceCidrs": {
+ "value": ["10.96.0.0/16", "fd12:3456:789a:1::/108"]
+ },
+```
+
+To create a dual-stack cluster, you need to update the `kubernetes-deploy-parameters.json` file that you created during the [QuickStart](./quickstarts-kubernetes-cluster-deployment-bicep.md). Include the Pod and Service CIDR configuration in this file according to your desired settings, and change the cluster name to ensure that a new cluster is created with the updated configuration.
+
+After updating the Pod and Service CIDR configuration in your parameter file, you can proceed with deploying the Bicep template. This action sets up your new dual-stack cluster with the specified Pod and Service CIDR configuration.
+
+By following these instructions, you can create a dual-stack Nexus Kubernetes cluster with the desired IP pool configuration and take advantage of the dual-stack in your cluster services.
+
+To enable dual-stack `LoadBalancer` services in your cluster, you must ensure that the [IP pools are configured](./howto-kubernetes-service-load-balancer.md) with both IPv4 and IPv6 addresses. This allows the LoadBalancer service to allocate IP addresses from the specified IP pools for the services, enabling effective communication between the services and the external network.
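
For reference, a `LoadBalancer` Service that requests both IP families looks like the following sketch. It's the declarative equivalent of the `kubectl expose` override used later in this article; the Service name, selector label, and port are placeholders.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx                  # placeholder Service name
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: nginx                 # placeholder selector label
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
```
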
+
+### Example parameters
+
+This parameter file is intended to be used with the [QuickStart guide](./quickstarts-kubernetes-cluster-deployment-bicep.md) Bicep template for creating a dual-stack cluster. It contains the necessary configuration settings to set up the dual-stack cluster with BGP load balancer functionality. By using this parameter file with the Bicep template, you can create a dual-stack cluster with the desired BGP load balancer capabilities.
+
+> [!IMPORTANT]
+> These instructions are for creating a new Operator Nexus Kubernetes cluster. Avoid applying the Bicep template to an existing cluster, as Pod and Service CIDR configurations are immutable.
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "kubernetesClusterName":{
+ "value": "dual-stack-cluster"
+ },
+ "adminGroupObjectIds": {
+ "value": [
+ "00000000-0000-0000-0000-000000000000"
+ ]
+ },
+ "cniNetworkId": {
+ "value": "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
+ },
+ "cloudServicesNetworkId": {
+ "value": "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
+ },
+ "extendedLocation": {
+ "value": "/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
+ },
+ "location": {
+ "value": "eastus"
+ },
+ "sshPublicKey": {
+ "value": "ssh-rsa AAAAB...."
+ },
+ "podCidrs": {
+ "value": ["10.244.0.0/16", "fd12:3456:789a::/64"]
+ },
+ "serviceCidrs": {
+ "value": ["10.96.0.0/16", "fd12:3456:789a:1::/108"]
+ },
+ "ipAddressPools": {
+ "value": [
+ {
+ "addresses": ["<IPv4>/<CIDR>", "<IPv6>/<CIDR>"],
+ "name": "<pool-name>",
+ "autoAssign": "True",
+ "onlyUseHostIps": "True"
+ }
+ ]
+ }
+ }
+}
+```
+
+## Inspect the nodes to see both IP families
+
+* Once the cluster is provisioned, confirm that the nodes are configured with dual-stack networking by using the `kubectl get nodes` command.
+
+ ```azurecli
+ kubectl get nodes -o=custom-columns="NAME:.metadata.name,ADDRESSES:.status.addresses[?(@.type=='InternalIP')].address"
+ ```
+
+The output from the `kubectl get nodes` command shows that the nodes have addresses and pod IP assignment space from both IPv4 and IPv6.
+
+ ```output
+ NAME ADDRESSES
+ dual-stack-cluster-374cc36c-agentpool1-md-6ff45 10.14.34.20,fda0:d59c:da0a:e22:a8bb:ccff:fe6d:9e2a,fda0:d59c:da0a:e22::11,fe80::a8bb:ccff:fe6d:9e2a
+ dual-stack-cluster-374cc36c-agentpool1-md-dpmqv 10.14.34.22,fda0:d59c:da0a:e22:a8bb:ccff:fe80:f66f,fda0:d59c:da0a:e22::13,fe80::a8bb:ccff:fe80:f66f
+ dual-stack-cluster-374cc36c-agentpool1-md-tcqpf 10.14.34.21,fda0:d59c:da0a:e22:a8bb:ccff:fed5:a3fb,fda0:d59c:da0a:e22::12,fe80::a8bb:ccff:fed5:a3fb
+ dual-stack-cluster-374cc36c-control-plane-gdmz8 10.14.34.19,fda0:d59c:da0a:e22:a8bb:ccff:fea8:5a37,fda0:d59c:da0a:e22::10,fe80::a8bb:ccff:fea8:5a37
+ dual-stack-cluster-374cc36c-control-plane-smrxl 10.14.34.18,fda0:d59c:da0a:e22:a8bb:ccff:fe7b:cfa9,fda0:d59c:da0a:e22::f,fe80::a8bb:ccff:fe7b:cfa9
+ dual-stack-cluster-374cc36c-control-plane-tjfc8 10.14.34.17,10.14.34.14,fda0:d59c:da0a:e22:a8bb:ccff:feaf:21ec,fda0:d59c:da0a:e22::c,fe80::a8bb:ccff:feaf:21ec
+ ```
+
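+You can also confirm that each node received pod CIDR allocations from both address families. The following is a minimal check; depending on the CNI in use, the `.spec.podCIDRs` field might not be populated:
+
+```bash-interactive
+# .spec.podCIDRs lists the per-node IPv4 and IPv6 pod ranges when the CNI uses node-scoped allocation
+kubectl get nodes -o custom-columns="NAME:.metadata.name,POD_CIDRS:.spec.podCIDRs[*]"
+```
+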
+## Create an example workload
+
+Once the cluster has been created, you can deploy your workloads. This article walks you through an example workload deployment of an NGINX web server.
+
+### Deploy an NGINX web server
+
+1. Create an NGINX web server using the `kubectl create deployment nginx` command.
+
+ ```bash-interactive
+ kubectl create deployment nginx --image=mcr.microsoft.com/cbl-mariner/base/nginx:1.22 --replicas=3
+ ```
+
+2. View the pod resources using the `kubectl get pods` command.
+
+ ```bash-interactive
+ kubectl get pods -o custom-columns="NAME:.metadata.name,IPs:.status.podIPs[*].ip,NODE:.spec.nodeName,READY:.status.conditions[?(@.type=='Ready')].status"
+ ```
+
+ The output shows the pods have both IPv4 and IPv6 addresses. The pods don't show IP addresses until they're ready.
+
+ ```output
+ NAME IPs NODE READY
+ nginx-7d566f5967-gtqm8 10.244.31.200,fd12:3456:789a:0:9ca3:8a54:6c22:1fc8 dual-stack-cluster-374cc36c-agentpool1-md-6ff45 True
+ nginx-7d566f5967-sctn2 10.244.106.73,fd12:3456:789a:0:1195:f83e:f6bd:4809 dual-stack-cluster-374cc36c-agentpool1-md-tcqpf True
+ nginx-7d566f5967-wh2rp 10.244.100.196,fd12:3456:789a:0:c296:3da:b545:aa04 dual-stack-cluster-374cc36c-agentpool1-md-dpmqv True
+ ```
+
+### Expose the workload via a `LoadBalancer` type service
+
+1. Expose the NGINX deployment using the `kubectl expose deployment nginx` command.
+
+ ```bash-interactive
+ kubectl expose deployment nginx --name=nginx --port=80 --type=LoadBalancer --overrides='{"spec":{"ipFamilyPolicy": "PreferDualStack", "ipFamilies": ["IPv4", "IPv6"]}}'
+ ```
+
+ You receive an output that shows the services have been exposed.
+
+ ```output
+ service/nginx exposed
+ ```
+
+2. Once the deployment is exposed and the `LoadBalancer` services are fully provisioned, get the IP addresses of the services using the `kubectl get services` command.
+
+ ```bash-interactive
+ kubectl get services
+ ```
+
+ ```output
+ NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+ nginx LoadBalancer 10.96.119.27 10.14.35.240,fda0:d59c:da0a:e23:ffff:ffff:ffff:fffc 80:30122/TCP 10s
+ ```
+
+ ```bash-interactive
+ kubectl get services nginx -ojsonpath='{.spec.clusterIPs}'
+ ```
+
+ ```output
+ ["10.96.119.27","fd12:3456:789a:1::e6bb"]
+ ```
+
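+As a quick connectivity check, you can request the NGINX welcome page over each address family. The external IPs below are the example values from the preceding output; substitute the addresses returned for your own service:
+
+```bash-interactive
+# IPv4 external IP of the LoadBalancer service
+curl http://10.14.35.240
+
+# IPv6 external IP; the address must be bracketed, and -g stops curl from interpreting the brackets
+curl -g "http://[fda0:d59c:da0a:e23:ffff:ffff:ffff:fffc]"
+```
+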
+## Next steps
+
+You can try deploying a network function (NF) within your Nexus Kubernetes cluster by using the dual-stack addresses.
operator-nexus Quickstarts Kubernetes Cluster Deployment Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-arm.md
Once you have reviewed and saved the template file named ```kubernetes-deploy.js
--parameters @kubernetes-deploy-parameters.json ```
+If there isn't enough capacity to deploy the requested cluster nodes, an error message appears. However, the message doesn't provide any details about the available capacity; it only states that cluster creation can't proceed due to insufficient capacity.
+
+> [!NOTE]
+> The capacity calculation takes into account the entire platform cluster, rather than being limited to individual racks. Therefore, if an agent pool is created in a zone (where a rack equals a zone) with insufficient capacity, but another zone has enough capacity, the cluster creation continues but will eventually time out. This approach to capacity checking only makes sense if a specific zone isn't specified during the creation of the cluster or agent pool.
+ ## Review deployed resources [!INCLUDE [quickstart-review-deployment-cli](./includes/kubernetes-cluster/quickstart-review-deployment-cli.md)]
operator-nexus Quickstarts Kubernetes Cluster Deployment Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-bicep.md
Once you have reviewed and saved the template file named ```kubernetes-deploy.bi
--parameters @kubernetes-deploy-parameters.json ```
+If there isn't enough capacity to deploy the requested cluster nodes, an error message appears. However, the message doesn't provide any details about the available capacity; it only states that cluster creation can't proceed due to insufficient capacity.
+
+> [!NOTE]
+> The capacity calculation takes into account the entire platform cluster, rather than being limited to individual racks. Therefore, if an agent pool is created in a zone (where a rack equals a zone) with insufficient capacity, but another zone has enough capacity, the cluster creation continues but will eventually time out. This approach to capacity checking only makes sense if a specific zone isn't specified during the creation of the cluster or agent pool.
+ ## Review deployed resources [!INCLUDE [quickstart-review-deployment-cli](./includes/kubernetes-cluster/quickstart-review-deployment-cli.md)]
operator-nexus Quickstarts Kubernetes Cluster Deployment Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-cli.md
az networkcloud kubernetescluster create \
dns-service-ip="${DNS_SERVICE_IP}" ```
+If there isn't enough capacity to deploy the requested cluster nodes, an error message appears. However, the message doesn't provide any details about the available capacity; it only states that cluster creation can't proceed due to insufficient capacity.
+
+> [!NOTE]
+> The capacity calculation takes into account the entire platform cluster, rather than being limited to individual racks. Therefore, if an agent pool is created in a zone (where a rack equals a zone) with insufficient capacity, but another zone has enough capacity, the cluster creation continues but will eventually time out. This approach to capacity checking only makes sense if a specific zone isn't specified during the creation of the cluster or agent pool.
+ After a few minutes, the command completes and returns information about the cluster. For more advanced options, see [Quickstart: Deploy an Azure Nexus Kubernetes cluster using Bicep](./quickstarts-kubernetes-cluster-deployment-bicep.md). ## Review deployed resources
operator-nexus Quickstarts Kubernetes Cluster Deployment Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-powershell.md
New-AzNetworkCloudKubernetesCluster -KubernetesClusterName $CLUSTER_NAME `
-NetworkConfigurationServiceCidr $DNS_SERVICE_IP ```
+If there isn't enough capacity to deploy the requested cluster nodes, an error message appears. However, the message doesn't provide any details about the available capacity; it only states that cluster creation can't proceed due to insufficient capacity.
+
+> [!NOTE]
+> The capacity calculation takes into account the entire platform cluster, rather than being limited to individual racks. Therefore, if an agent pool is created in a zone (where a rack equals a zone) with insufficient capacity, but another zone has enough capacity, the cluster creation continues but will eventually time out. This approach to capacity checking only makes sense if a specific zone isn't specified during the creation of the cluster or agent pool.
+ After a few minutes, the command completes and returns information about the cluster. For more advanced options, see [Quickstart: Deploy an Azure Nexus Kubernetes cluster using Bicep](./quickstarts-kubernetes-cluster-deployment-bicep.md). ## Review deployed resources
oracle Database Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/database-overview.md
Billing and payment for the service is done through Azure. Payment for Oracle Da
Oracle Database@Azure is available in the following locations. Oracle Database@Azure infrastructure resources must be provisioned in the Azure regions listed.
-### North America (NA)
+### United States
|Azure region|
||
|East US (Virginia)|
+### Germany
+
+|Azure region|
+||
+|Germany West Central (Frankfurt)|
+ ## Azure Support scope and contact information See [Contact Microsoft Azure Support](https://support.microsoft.com/topic/contact-microsoft-azure-support-2315e669-8b1f-493b-5fb1-d88a8736ffe4) in the Azure documentation for information on Azure support. For SLA information about the service offering, see the [Oracle PaaS and IaaS Public Cloud Services Pillar Document](https://www.oracle.com/contracts/docs/paas_iaas_pub_cld_srvs_pillar_4021422.pdf)
postgresql Concepts Networking Ssl Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-ssl-tls.md
There are many connection parameters for configuring the client for SSL. Few imp
**Certificate Authorities (CAs)** are the institutions responsible for issuing certificates. A trusted certificate authority is an entity that's entitled to verify someone is who they say they are. In order for this model to work, all participants must agree on a set of trusted CAs. All operating systems and most web browsers ship with a set of trusted CAs. > [!NOTE]
-> Using verify-ca and verify-full **sslmode** configuration settings can also be known as **[certificate pinning](../../security/fundamentals/certificate-pinning.md#how-to-address-certificate-pinning-in-your-application)**. In this case root CA certificates on the PostgreSQL server have to match certificate signature and even host name against certificate on the client. Important to remember, you might periodically need to update client stored certificates when Certificate Authorities change or expire on PostgreSQL server certificates.
+> Using the verify-ca and verify-full **sslmode** configuration settings is also known as **[certificate pinning](../../security/fundamentals/certificate-pinning.md#how-to-address-certificate-pinning-in-your-application)**. In this case, the root CA certificates stored on the client must validate the certificate signature, and for verify-full even the host name, of the certificate presented by the PostgreSQL server. Keep in mind that you might periodically need to update the certificates stored on the client when the certificate authorities for the PostgreSQL server certificates change or expire. To determine whether you're pinning CAs, see [Certificate pinning and Azure services](../../security/fundamentals/certificate-pinning.md#how-to-address-certificate-pinning-in-your-application).
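+
+For example, a client can pin the server's root CA by pointing libpq at a locally stored CA file and using `sslmode=verify-full`. The following is a minimal sketch; the server name, user name, database name, and certificate path are placeholders to replace with your own values:
+
+```bash
+# verify-full checks both the certificate chain (against the pinned root CA) and the host name
+psql "host=<server-name>.postgres.database.azure.com port=5432 dbname=postgres user=<user-name> sslmode=verify-full sslrootcert=/path/to/root-ca.pem"
+```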
For more on SSL\TLS configuration on the client, see [PostgreSQL documentation](https://www.postgresql.org/docs/current/ssl-tcp.html#SSL-CLIENT-CERTIFICATES).
System.setProperty("javax.net.ssl.trustStorePassword","password");
``` 6. Replace the original root CA pem file with the combined root CA file and restart your application/client.
+> [!NOTE]
+> Azure Database for PostgreSQL - Flexible Server doesn't support [certificate-based authentication](https://www.postgresql.org/docs/current/auth-cert.html) at this time.
+
+### Get list of trusted certificates in Java Key Store
+
+As stated above, Java, by default, stores the trusted certificates in a special file named *cacerts* that is located inside the Java installation folder on the client.
+The following example first reads *cacerts* and loads it into a *KeyStore* object:
+```java
+private KeyStore loadKeyStore() throws Exception {
+    // cacerts is located under <java.home>/lib/security/cacerts on the client
+    String relativeCacertsPath = "/lib/security/cacerts".replace("/", File.separator);
+    String filename = System.getProperty("java.home") + relativeCacertsPath;
+    FileInputStream is = new FileInputStream(filename);
+    KeyStore keystore = KeyStore.getInstance(KeyStore.getDefaultType());
+    // Default cacerts password; replace it if your administrator changed it
+    String password = "changeit";
+    keystore.load(is, password.toCharArray());
+
+    return keystore;
+}
+```
+The default password for *cacerts* is *changeit*, but it should be different on a real client, because administrators recommend changing the password immediately after Java installation.
+Once we've loaded the *KeyStore* object, we can use the *PKIXParameters* class to read the certificates present:
+```java
+public void whenLoadingCacertsKeyStore_thenCertificatesArePresent() throws Exception {
+    KeyStore keyStore = loadKeyStore();
+    // PKIXParameters exposes the trust anchors (trusted root CAs) contained in the key store
+    PKIXParameters params = new PKIXParameters(keyStore);
+    Set<TrustAnchor> trustAnchors = params.getTrustAnchors();
+    List<Certificate> certificates = trustAnchors.stream()
+        .map(TrustAnchor::getTrustedCert)
+        .collect(Collectors.toList());
+
+    // The default cacerts file ships with trusted CAs, so the list shouldn't be empty
+    assertFalse(certificates.isEmpty());
+}
+```
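+
+Alternatively, you can perform an equivalent check from the command line with the JDK's `keytool` utility. This is a quick sketch that assumes a standard Java installation layout and the default *changeit* password:
+
+```bash
+# List every trusted root CA in the default Java trust store (cacerts)
+keytool -list -keystore "$JAVA_HOME/lib/security/cacerts" -storepass changeit
+```
+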
+### Updating Root certificates when using clients in Azure App Services with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
+
+For Azure App Service apps connecting to Azure Database for PostgreSQL, there are two possible scenarios for updating client certificates, depending on how you're using SSL with your application deployed to Azure App Service.
+
+* Usually, new certificates are added to App Service at the platform level before changes are made in Azure Database for PostgreSQL - Flexible Server. If you're using the SSL certificates included on the App Service platform in your application, then no action is needed. For more information, see the [Azure App Service documentation](../../app-service/configure-ssl-certificate.md).
+* If you're explicitly including the path to an SSL certificate file in your code, you need to download the new certificate and update the code to use it. A good example of this scenario is when you use custom containers in App Service, as described in the [App Service documentation](../../app-service/tutorial-multi-container-app.md#configure-database-variables-in-wordpress).
+
+### Updating Root certificates when using clients in Azure Kubernetes Service (AKS) with Azure Database for PostgreSQL - Flexible Server for certificate pinning scenarios
+If you're trying to connect to Azure Database for PostgreSQL by using applications hosted in Azure Kubernetes Service (AKS) and pinning certificates, the process is similar to access from a dedicated customer host environment. Refer to the steps [here](../../aks/ingress-tls.md).
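+
+One common pattern (a sketch, not the only approach; the secret and file names below are assumptions) is to package the combined root CA file as a Kubernetes secret and mount it into the pods that connect to the server as the `sslrootcert` file:
+
+```bash
+# Store the combined root CA file in a secret that pods can mount and reference as sslrootcert
+kubectl create secret generic pg-root-ca --from-file=root.crt=./combined-root-ca.pem
+```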
## Cipher Suites
private-link Create Private Link Service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-cli.md
Previously updated : 02/03/2023 Last updated : 03/28/2024 ms.devlang: azurecli
private-link Disable Private Link Service Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-link-service-network-policy.md
Previously updated : 02/02/2023 Last updated : 03/28/2024 ms.devlang: azurecli
reliability Migrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-sql-database.md
To create a geo-replica of the database:
1. To clean up, consider removing the original non-zone redundant database from the geo replica relationship. You can choose to delete it. + ## Disable zone-redundancy
-To disable zone-redundancy, you can use the portal or ARM API. For Hyperscale service tier, you can simply reverse the steps documented in [Redeployment (Hyperscale)](#redeployment-hyperscale).
+To disable zone-redundancy for a single database or an elastic pool, you can use the portal or ARM API.
+
+To disable zone-redundancy for the Hyperscale service tier, you can reverse the steps documented in [Redeployment (Hyperscale)](#redeployment-hyperscale).
+# [Elastic pool](#tab/pool)
**To disable zone-redundancy with Azure portal:**
-1. Go to the [Azure portal](https://portal.azure.com) to find and select the elastic pool that you want to migrate.
+1. Go to the [Azure portal](https://portal.azure.com) to find and select the elastic pool that you no longer want to be zone-redundant.
1. Select **Settings**, and then select **Configure**.
To disable zone-redundancy, you can use the portal or ARM API. For Hyperscale se
1. Select **Save**.
+**To disable zone-redundancy with ARM,** see [Databases - Create Or Update in ARM](/rest/api/sql/elastic-pools/create-or-update?tabs=HTTP) and use the `properties.zoneRedundant` property.
+
+# [Single database](#tab/single)
++
+**To disable zone-redundancy with Azure portal:**
+
+1. Go to the [Azure portal](https://portal.azure.com) to find and select the database that you no longer want to be zone-redundant.
+
+1. Select **Settings**, and then select **Configure**.
+
+1. Select **No** for **Would you like to make this database zone redundant?**
+
+1. Select **Save**.
++ **To disable zone-redundancy with ARM,** see [Databases - Create Or Update in ARM](/rest/api/sql/2022-05-01-preview/databases/create-or-update?tabs=HTTP) and use the `properties.zoneRedundant` property.
+
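+As an alternative to the portal and the ARM API, the zone-redundant property of a single database can also be changed with the Azure CLI. This is a sketch; the resource group, server, and database names are placeholders:
+
+```azurecli
+az sql db update \
+  --resource-group <resource-group-name> \
+  --server <server-name> \
+  --name <database-name> \
+  --zone-redundant false
+```
+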
++ ## Next steps
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
Steps to create an active-active architecture for your web app in App Service ar
1. Deploy code to both the web apps with [continuous deployment](../app-service/deploy-continuous-deployment.md).
-[Tutorial: Create a highly available multi-region app in Azure App Service](../app-service/tutorial-multi-region-app.md) shows you how to set up an *active-passive* architecture. The same steps with minimal changes (setting priority to ΓÇ£1ΓÇ¥ for both origin groups in Azure Front Door) give you an *active-active* architecture.
+[Tutorial: Create a highly available multi-region app in Azure App Service](../app-service/tutorial-multi-region-app.md) shows you how to set up an *active-passive* architecture. The same steps with minimal changes (setting priority to "1" for both origins in the origin group in Azure Front Door) give you an active-active architecture.
##### Active-passive architecture
sentinel Siem Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/siem-migration.md
Last updated 3/11/2024
+appliesto:
+- Microsoft Sentinel in the Azure portal
#customer intent: As an SOC administrator, I want to use the SIEM migration experience so I can migrate to Microsoft Sentinel.
-# Migrate to Microsoft Sentinel with the SIEM migration experience (preview)
+# Migrate to Microsoft Sentinel with the SIEM migration experience
Migrate your SIEM to Microsoft Sentinel for all your security monitoring use cases. Automated assistance from the SIEM Migration experience simplifies your migration.
You need the following from the source SIEM:
You need the following on the target, Microsoft Sentinel: - The SIEM migration experience deploys analytics rules. This capability requires the **Microsoft Sentinel Contributor** role. For more information, see [Permissions in Microsoft Sentinel](roles.md). -- Ingest security data previously used in your source SIEM into Microsoft Sentinel by enabling an out-of-the-box (OOTB) data connector.
- - If the data connector isn't installed yet, find the relevant solution in **Content hub**.
+- Ingest security data previously used in your source SIEM into Microsoft Sentinel. Install and enable out-of-the-box (OOTB) data connectors to match your security monitoring estate from your source SIEM.
+ - If the data connectors aren't installed yet, find the relevant solutions in **Content hub**.
- If no data connector exists, create a custom ingestion pipeline.<br>For more information, see [Discover and manage Microsoft Sentinel out-of-the-box content](sentinel-solutions-deploy.md) or [Custom data ingestion and transformation](data-transformation.md). ## Translate Splunk detection rules At the core of Splunk detection rules is the Search Processing Language (SPL). The SIEM migration experience systematically translates SPL to Kusto query language (KQL) for each Splunk rule. Carefully review translations and make adjustments to ensure migrated rules function as intended in your Microsoft Sentinel workspace. For more information on the concepts important in translating detection rules, see [migrate Splunk detection rules](migration-splunk-detection-rules.md).
-Capabilities in public preview:
+Current capabilities:
- Translate simple queries with a single data source - Direct translations listed in the article, [Splunk to Kusto cheat sheet](/azure/data-explorer/kusto/query/splunk-cheat-sheet) - Review translated query error feedback with edit capability to save time in the detection rule translation process
+- Translated queries feature a completeness status with translation states
Here are some of the priorities that are important to us as we continue to develop the translation technology: - Splunk Common Information Model (CIM) to Microsoft Sentinel's Advanced Security Information Model (ASIM) translation support-- Translated queries feature a completeness status with translation states -- Multiple data sources and index-- Rule correlations-- Support for macros-- Support for lookups -- Complex queries with joins
+- Support for Splunk macros
+- Support for Splunk lookups
+- Translation of complex correlation logic that queries and correlates events across multiple data sources
## Start the SIEM migration experience 1. Navigate to Microsoft Sentinel in the [Azure portal](https://portal.azure.com), under **Content management**, select **Content hub**.
-1. Select **SIEM Migration (Preview)**.
+1. Select **SIEM Migration**.
:::image type="content" source="media/siem-migration/siem-migration-experience.png" alt-text="Screenshot showing content hub with menu item for the SIEM migration experience.":::
Here are some of the priorities that are important to us as we continue to devel
1. Review the analysis of the Splunk export. - **Name** is the original Splunk detection rule name.
- - **Compatibility** indicates if a Sentinel OOTB analytics rule matches the Splunk detection logic.
+ - **Translation Type** indicates if a Sentinel OOTB analytics rule matches the Splunk detection logic.
+ - **Translation State** has the following values:
+    - **Fully Translated**: queries in this rule were fully translated to KQL
+    - **Partially Translated**: queries in this rule weren't fully translated to KQL
+    - **Not Translated**: indicates an error in translation
+    - **Manually Translated**: applied when a rule is reviewed and saved
:::image type="content" source="media/siem-migration/configure-rules.png" alt-text="Screenshot showing the results of the automatic rule mapping." lightbox="media/siem-migration/configure-rules.png"::: > [!NOTE] > Check the schema of the data types and fields used in the rule logic. Microsoft Sentinel Analytics require that the data type be present in the Log Analytics Workspace before the rule is enabled. It's also important the fields used in the query are accurate for the defined data type schema.
+1. Highlight a rule to resolve its translation and select **Edit**. When you're satisfied with the results, select **Save Changes**.
+
+1. Switch on the **Ready to deploy** toggle for Analytics rules you want to deploy.
+ 1. When the review is complete, select **Review and migrate**. ## Deploy the Analytics rules
-1. Select **Deploy** to start the deployment of analytics rules to your Microsoft Sentinel workspace.
+1. Select **Deploy**.
+
+ | Translation Type | Resource deployed |
+ |:-|:|
+ | Out of the box | The corresponding solutions from **Content hub** that contain the matched analytics rule templates are installed. The matched rules are deployed as active analytics rules in the disabled state. <br><br>For more information, see [Manage Analytics rule templates](manage-analytics-rule-templates.md). |
+ | Custom | Rules are deployed as active analytics rules in the disabled state. |
- The following resources are deployed:
- - For all OOTB matches, the corresponding solutions with the matched analytics rule are installed, and the matched rules are deployed as active analytics rules.
- - All custom rules translated to Sentinel analytics rules are deployed as active analytics rules.
+1. (Optional) Choose Analytics rules and select **Export Templates** to download them as ARM templates for use in your CI/CD or custom deployment processes.
+
+ :::image type="content" source="media/siem-migration/export-templates.png" alt-text="Screenshot showing the Review and Migrate tab highlighting the Export Templates button.":::
+
+1. Before exiting the SIEM Migration experience, select **Download Migration Summary** to keep a summary of the Analytics deployment.
+
+ :::image type="content" source="media/siem-migration/download-migration-summary.png" alt-text="Screenshot showing the Download Migration Summary button from the Review and Migrate tab.":::
+
+## Validate and enable rules
1. View the properties of deployed rules from Microsoft Sentinel **Analytics**.
Here are some of the priorities that are important to us as we continue to devel
`triggerThreshold`<br> `suppressionDuration`
-1. Enable rules you've reviewed and verified.
+1. Enable rules after you review and verify them.
+
+ :::image type="content" source="media/siem-migration/enable-deployed-translated-rules.png" alt-text="Screenshot showing Analytics rules with deployed Splunk rules highlighted ready to be enabled.":::
## Next step
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## March 2024
+- [SIEM migration experience now generally available (GA)](#siem-migration-experience-now-generally-available-ga)
- [Amazon Web Services S3 connector now generally available (GA)](#amazon-web-services-s3-connector-now-generally-available-ga) - [Codeless Connector builder (preview)](#codeless-connector-builder-preview)-- [SIEM migration experience (preview)](#siem-migration-experience-preview) - [Data connectors for Syslog and CEF based on Azure Monitor Agent now generally available (GA)](#data-connectors-for-syslog-and-cef-based-on-azure-monitor-agent-now-generally-available-ga)
+### SIEM migration experience now generally available (GA)
+
+At the beginning of the month, we announced the SIEM migration preview. Now at the end of the month, it's already GA! The new Microsoft Sentinel Migration experience helps customers and partners automate the process of migrating their security monitoring use cases hosted in non-Microsoft products into Microsoft Sentinel.
+- This first version of the tool supports migrations from Splunk
+
+For more information, see [Migrate to Microsoft Sentinel with the SIEM migration experience](siem-migration.md)
+
+Join our Security Community for a [webinar](https://forms.office.com/pages/responsepage.aspx?id=v4j5cvGGr0GRqy180BHbR_0A4IaJRDNBnp8pjCkWnwhUM1dFNFpVQlZJREdEQjkwQzRaV0RZRldEWC4u) showcasing the SIEM migration experience on May 2nd, 2024.
+ ### Amazon Web Services S3 connector now generally available (GA) Microsoft Sentinel has released the AWS S3 data connector to general availability (GA). You can use this connector to ingest logs from several AWS services to Microsoft Sentinel using an S3 bucket and AWS's simple message queuing service.
See our blog post for more details, [Create Codeless Connectors with the Codeles
For more information on the CCP, see [Create a codeless connector for Microsoft Sentinel (Public preview)](create-codeless-connector.md).
-### SIEM migration experience (preview)
-
-The new Microsoft Sentinel Migration experience helps customers and partners to automate the process of migrating their security monitoring use cases hosted in non-Microsoft products into Microsoft Sentinel.
-- This first version of the tool supports migrations from Splunk-
-For more information, see [Migrate to Microsoft Sentinel with the SIEM migration experience](siem-migration.md)
### Data connectors for Syslog and CEF based on Azure Monitor Agent now generally available (GA)
site-recovery Azure To Azure Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-quickstart.md
To disable replication, perform these steps:
## Next steps
-In this quickstart, you replicated a single VM to a secondary region. Next, set up replication for multiple Azure VMs.
+In this quickstart, you replicated a single VM to a secondary region. Next, [set up replication for multiple Azure VMs](azure-to-azure-tutorial-enable-replication.md).
-> [!div class="nextstepaction"]
-> [Set up disaster recovery for Azure VMs](azure-to-azure-tutorial-enable-replication.md)
site-recovery Azure To Azure Replicate After Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-replicate-after-migration.md
Install the [Azure Linux VM](../virtual-machines/extensions/agent-linux.md) agen
## Next steps
-[Review troubleshooting](site-recovery-extension-troubleshoot.md) for the Site Recovery extension on the Azure VM agent.
-[Quickly replicate](azure-to-azure-quickstart.md) an Azure VM to a secondary region.
+- [Review troubleshooting](site-recovery-extension-troubleshoot.md) for the Site Recovery extension on the Azure VM agent.
+- [Quickly replicate](azure-to-azure-quickstart.md) an Azure VM to a secondary region.
site-recovery Azure To Azure Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md
To resolve this issue, wait till system time crosses the skewed future time. Ano
## Next steps
-[Replicate Azure VMs to another Azure region](azure-to-azure-how-to-enable-replication.md)
+[Replicate Azure VMs to another Azure region](azure-to-azure-how-to-enable-replication.md).
site-recovery Azure To Azure Troubleshoot Network Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-network-connectivity.md
To allow [the required URLs](azure-to-azure-about-networking.md#outbound-connect
## Next steps
-[Replicate Azure VMs to another Azure region](azure-to-azure-how-to-enable-replication.md)
+[Replicate Azure VMs to another Azure region](azure-to-azure-how-to-enable-replication.md).
site-recovery Azure To Azure Troubleshoot Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-replication.md
Previously updated : 01/03/2024 Last updated : 03/29/2024
Restart the following
- VSS service. - Azure Site Recovery VSS Provider. - VDS service.+
+## Next steps
+
+[Replicate Azure VMs to another Azure region](azure-to-azure-how-to-enable-replication.md).
site-recovery Azure To Azure Tutorial Dr Drill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-dr-drill.md
Title: Tutorial to run an Azure VM disaster recovery drill with Azure Site Reco
description: In this tutorial, run an Azure VM disaster recovery drill to another region using Site Recovery. Previously updated : 11/05/2020 Last updated : 03/29/2024 #Customer intent: As an Azure admin, I want to run a drill to check that VM disaster recovery is working.
Before you start this tutorial, you must enable disaster recovery for one or mor
## Next steps
-In this tutorial, you ran a disaster recovery drill to check that failover works as expected. Now you can try out a full failover.
-
-> [!div class="nextstepaction"]
-> [Run a production failover](azure-to-azure-tutorial-failover-failback.md)
+In this tutorial, you ran a disaster recovery drill to check that failover works as expected. Now you can try to [run a production failover](azure-to-azure-tutorial-failover-failback.md).
site-recovery Azure To Azure Tutorial Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-failback.md
Title: Tutorial to fail back Azure VMs to a primary region during disaster recov
description: Tutorial to learn about failing back Azure VMs to a primary region with Azure Site Recovery. Previously updated : 08/01/2023 Last updated : 03/29/2024
If you completely disable replication after failing back, Site Recovery cleans u
In this tutorial, you failed VMs back from the secondary region to the primary. This is the last step in the process that includes enabling replication for a VM, trying out a disaster recovery drill, failing over from the primary region to the secondary, and finally failing back.
-> [!div class="nextstepaction"]
-> Now, try out disaster recovery to Azure for an [on-premises VM](vmware-azure-tutorial-prepare-on-premises.md)
+Now, try out disaster recovery to Azure for an [on-premises VM](vmware-azure-tutorial-prepare-on-premises.md).
site-recovery Azure To Azure Tutorial Failover Failback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-tutorial-failover-failback.md
Title: Tutorial to fail over Azure VMs to a secondary region for disaster recovery with Azure Site Recovery. description: Tutorial to learn how to fail over and reprotect Azure VMs replicated to a secondary Azure region for disaster recovery, with the Azure Site Recovery service. Previously updated : 11/05/2020 Last updated : 03/29/2024
After failover, you reprotect the VM in the secondary region, so that it replica
## Next steps
-In this tutorial, you failed over from the primary region to the secondary, and started replicating VMs back to the primary region. Now you can fail back from the secondary region to the primary.
+In this tutorial, you failed over from the primary region to the secondary, and started replicating VMs back to the primary region. Now you can [fail back from the secondary region to the primary](azure-to-azure-tutorial-failback.md).
-> [!div class="nextstepaction"]
-> [Fail back to the primary region](azure-to-azure-tutorial-failback.md)
site-recovery Azure Vm Disaster Recovery With Accelerated Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking.md
If you have enabled Accelerated Networking on the source virtual machine after e
The above process should also be followed for existing replicated virtual machines that didn't previously have Accelerated Networking enabled automatically by Site Recovery. ## Next steps+ - Learn more about [benefits of Accelerated Networking](../virtual-network/accelerated-networking-overview.md#benefits). - Learn more about limitations and constraints of Accelerated Networking for [Windows virtual machines](../virtual-network/accelerated-networking-overview.md#limitations-and-constraints) and [Linux virtual machines](../virtual-network/accelerated-networking-overview.md#limitations-and-constraints). - Learn more about [recovery plans](site-recovery-create-recovery-plans.md) to automate application failover.
site-recovery Concepts Azure To Azure High Churn Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-azure-to-azure-high-churn-support.md
Previously updated : 07/14/2023 Last updated : 03/29/2024
The following table summarizes Site Recovery limits:
## Cost Implications - **High Churn** uses *Premium Block Blob* storage accounts which may have higher cost implications as compared to **Normal Churn** which uses *Standard* storage accounts. For more information, see [pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).-- For High churn VMs, more data changes may get replicated to target for **High churn** compared to **Normal churn**. This may lead to more network cost.
+- For high-churn VMs, more data changes might be replicated to the target with **High Churn** than with **Normal Churn**, which can lead to higher network costs.
+
+## Next steps
+
+[Set up disaster recovery for Azure VMs](azure-to-azure-tutorial-enable-replication.md).
site-recovery Concepts Expressroute With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-expressroute-with-site-recovery.md
If you are already using ExpressRoute to connect from your on-premises datacente
You can replicate Azure virtual machines to any Azure region within the same geographic cluster as detailed [here](../site-recovery/azure-to-azure-support-matrix.md#region-support). If the chosen target Azure region is not within the same geopolitical region as the source, you might need to enable ExpressRoute Premium. For more details, check [ExpressRoute locations](../expressroute/expressroute-locations.md) and [ExpressRoute pricing](https://azure.microsoft.com/pricing/details/expressroute/). ## Next steps+ - Learn more about [ExpressRoute circuits](../expressroute/expressroute-circuit-peerings.md). - Learn more about [ExpressRoute routing domains](../expressroute/expressroute-circuit-peerings.md#peeringcompare). - Learn more about [ExpressRoute locations](../expressroute/expressroute-locations.md).
site-recovery Concepts Network Security Group With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-network-security-group-with-site-recovery.md
Considering the [example scenario](concepts-network-security-group-with-site-rec
Once the NSGs are created and configured, we recommend running a [test failover](azure-to-azure-tutorial-dr-drill.md) to verify scripted NSG associations and post-failover VM connectivity. ## Next steps+ - Learn more about [Network Security Groups](../virtual-network/network-security-groups-overview.md#network-security-groups). - Learn more about NSG [security rules](../virtual-network/network-security-groups-overview.md#security-rules). - Learn more about [effective security rules](../virtual-network/diagnose-network-traffic-filter-problem.md) for an NSG.
site-recovery Concepts On Premises To Azure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-on-premises-to-azure-networking.md
In this scenario, the Azure VM gets a new IP address after failover. To setup a
Site Recovery will now honor these settings and ensure that the virtual machine on failover is connected to the selected resource via the corresponding IP Address, if it is available in the target IP range. In this scenario, there's no need to failover the entire subnet. A DNS update will be required to update records for failed over machine to point to the new IP address of the virtual machine. ## Next steps+ [Learn about](site-recovery-active-directory.md) replicating on-premises Active Directory and DNS to Azure.
site-recovery Concepts Traffic Manager With Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/concepts-traffic-manager-with-site-recovery.md
You can additionally optimize the DNS Time to Live (TTL) value for the Traffic M
The TTL experienced by the client also does not increase if the number of DNS resolvers between the client and the authoritative DNS server increases. DNS resolvers 'count down' the TTL and only pass on a TTL value that reflects the elapsed time since the record was cached. This ensures that the DNS record gets refreshed at the client after the TTL, irrespective of the number of DNS Resolvers in the chain. ## Next steps+ - Learn more about Traffic Manager [routing methods](../traffic-manager/traffic-manager-routing-methods.md). - Learn more about [nested Traffic Manager profiles](../traffic-manager/traffic-manager-nested-profiles.md). - Learn more about [endpoint monitoring](../traffic-manager/traffic-manager-monitoring.md).
site-recovery Delete Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/delete-vault.md
Title: Delete an Azure Site Recovery vault description: Learn how to delete a Recovery Services vault configured for Azure Site Recovery - Previously updated : 11/05/2019 Last updated : 03/29/2024
$vault = Get-AzRecoveryServicesVault -Name "vaultname"
Remove-AzRecoveryServicesVault -Vault $vault ```
-Learn more about [Get-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/get-azrecoveryservicesvault), and [Remove-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/remove-azrecoveryservicesvault).
+## Next steps
+
+Learn more about:
+
+- [Get-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/get-azrecoveryservicesvault)
+- [Remove-AzRecoveryServicesVault](/powershell/module/az.recoveryservices/remove-azrecoveryservicesvault).
site-recovery Monitor Site Recovery Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitor-site-recovery-reference.md
+
+ Title: Monitoring data reference for Azure Site Recovery
+description: This article contains important reference material you need when you monitor Azure Site Recovery.
Last updated : 03/21/2024+++++++
+# Azure Site Recovery monitoring data reference
++
+See [Monitor Azure Site Recovery](monitor-site-recovery.md) for details on the data you can collect for Azure Site Recovery and how to use it.
+
+## Metrics
+
+There are no automatically collected metrics for Azure Site Recovery. All the automatically collected metrics for the `Microsoft.RecoveryServices/Vaults` namespace are for the Azure Backup service. For information about Azure Backup metrics, see [Monitor Azure Backup](/azure/backup/backup-azure-monitoring-built-in-monitor).
++
+### Supported resource logs for Microsoft.RecoveryServices/Vaults
+
+Note that some of the following logs apply to Azure Backup and others apply to Azure Site Recovery, as noted in the **Category display name** column.
+++
+### Recovery Services Vaults
+Microsoft.RecoveryServices/Vaults
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns)
+- [ASRJobs](/azure/azure-monitor/reference/tables/ASRJobs#columns)
+- [ASRReplicatedItems](/azure/azure-monitor/reference/tables/ASRReplicatedItems#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/AzureDiagnostics#columns)
+
+- [Microsoft.RecoveryServices](/azure/role-based-access-control/permissions/management-and-governance#microsoftrecoveryservices)
+
+## Related content
+
+- See [Monitor Site Recovery](monitor-site-recovery.md) for a description of monitoring Site Recovery.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
site-recovery Monitor Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/monitor-site-recovery.md
+
+ Title: Monitor Azure Site Recovery
+description: Start here to learn how to monitor Azure Site Recovery.
Last updated : 03/21/2024+++++++
+# Monitor Azure Site Recovery
++
+## Built-in monitoring for Azure Site Recovery
+
+An Azure Recovery Services vault supports both Azure Site Recovery and Azure Backup services and features. In the Azure portal, the **Site Recovery** tab of the Recovery Services vault **Overview** page provides a dashboard that shows the following monitoring information:
+
+- Replication health
+- Failover health
+- Configuration issues
+- Recovery plans
+- Errors
+- Jobs
+- Infrastructure view of machines replicating to Azure
+
+For a detailed description of how to monitor Azure Site Recovery in the Azure portal by using the Recovery Services dashboard, see [Monitor in the dashboard](site-recovery-monitor-and-troubleshoot.md#monitor-in-the-dashboard).
+
+Azure Backup Center also provides at-scale monitoring and management capabilities for Azure Site Recovery. For more information, see [About Backup center for Azure Backup and Azure Site Recovery](/azure/backup/backup-center-overview).
+
+### Monitor churn rate
+
+High data change rates (churn) are a common source of replication issues. You can use various tools, including Azure Monitor Logs, to monitor churn patterns on virtual machines. For more information, see [Monitor churn patterns on virtual machines](monitoring-high-churn.md).
++
+Azure Site Recovery shares the `Microsoft.RecoveryServices/Vaults` namespace with Azure Backup. For more information, see [Azure Site Recovery monitoring data reference](monitor-site-recovery-reference.md).
+++
+There are no automatically collected platform metrics for Azure Site Recovery. All the automatically collected metrics for the `Microsoft.RecoveryServices/Vaults` namespace pertain to the Azure Backup service. For information about Azure Backup metrics, see [Monitor the health of your backups using Azure Backup Metrics (preview)](/azure/backup/metrics-overview).
++
+### Azure Site Recovery resource logs
+
+Using Azure Monitor Logs with Azure Site Recovery is supported for **Azure to Azure** replication and **VMware VM/physical server to Azure** replication.
+
+You can use Azure Monitor Logs to monitor:
+
+- Replication health
+- Test failover status
+- Site Recovery events
+- Recovery point objectives (RPOs) for protected machines
+- Disk/data change rates (churn)
+
+For detailed instructions on using diagnostic settings to collect and route Site Recovery logs and events, see [Monitor Site Recovery with Azure Monitor Logs](monitor-log-analytics.md).
+
+To get churn data and upload rate logs for VMware and physical machines, you need to install a Microsoft monitoring agent on the process server. This agent sends the logs of the replicating machines to the workspace.
+
+For instructions, see [Configure Microsoft monitoring agent on the process server to send churn and upload rate logs](monitor-log-analytics.md#configure-microsoft-monitoring-agent-on-the-process-server-to-send-churn-and-upload-rate-logs). For more information about monitoring the process server and the health alerts it generates, see [Monitor the process server](vmware-physical-azure-monitor-process-server.md).
+
+- For more information about the resource logs collected for Site Recovery, see [Common questions about Azure Site Recovery monitoring](monitoring-common-questions.md#azure-monitor-logging).
+- For the available resource log categories, associated Log Analytics tables, and logs schemas for Azure Site Recovery, see [Site Recovery monitoring data reference](monitor-site-recovery-reference.md#resource-logs).
+++++
+### Example queries
+
+For example Kusto queries you can use for Site Recovery monitoring, see [Query the logs - examples](monitor-log-analytics.md#query-the-logsexamples).
++
+You can set up alerts for any log entry listed in the [Azure Site Recovery monitoring data reference](monitor-site-recovery-reference.md). For example, you can configure alerts for machine health, test failover status, or Site Recovery job status.
+
+For detailed query examples and scenarios you can use for setting up Site Recovery alerts, see [Set up alerts - examples](monitor-log-analytics.md#set-up-alertsexamples).
+
+### Built-in Azure Monitor alerts for Azure Site Recovery
+
+Azure Site Recovery provides default alerts via Azure Monitor as a preview feature. Once you register this feature, Azure Site Recovery surfaces a default alert via Azure Monitor whenever any of the following critical events occur:
+
+- Enable disaster recovery failure alerts for Azure VM, Hyper-V, and VMware replication.
+- Replication health critical alerts for Azure VM, Hyper-V, and VMware replication.
+- Azure Site Recovery agent version expiry alerts for Azure VM and Hyper-V replication.
+- Azure Site Recovery agent not reachable alerts for Hyper-V replication.
+- Failover failure alerts for Azure VM, Hyper-V, and VMware replication.
+- Auto certification expiry alerts for Azure VM replication.
+
+For detailed instructions on enabling and configuring these built-in alerts, see [Built-in Azure Monitor alerts for Azure Site Recovery (preview)](site-recovery-monitor-and-troubleshoot.md#built-in-azure-monitor-alerts-for-azure-site-recovery-preview). Also see [Common questions about built-in Azure Monitor alerts for Azure Site Recovery](monitoring-common-questions.md#built-in-azure-monitor-alerts-for-azure-site-recovery).
++
+## Related content
+
+- See [Site Recovery monitoring data reference](monitor-site-recovery-reference.md) for a reference of the metrics, logs, and other important values created for Site Recovery.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
site-recovery Site Recovery Monitor And Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-monitor-and-troubleshoot.md
Title: Monitor Azure Site Recovery | Microsoft Docs
-description: Monitor and troubleshoot Azure Site Recovery replication issues and operations using the portal
+ Title: Azure Site Recovery dashboard and built-in alerts
+description: Monitor and troubleshoot Azure Site Recovery replication issues and operations, and enable built-in alerts, by using the portal.
Previously updated : 03/13/2024 Last updated : 03/22/2024
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-application-configuration-service.md
Previously updated : 02/28/2024 Last updated : 03/27/2024
The **Refresh Interval** specifies the frequency (in seconds) for checking updat
The following table describes the properties for each repository entry:
-| Property | Required? | Description |
-||--|-|
-| `Name` | Yes | A unique name to label each Git repository. |
-| `Patterns` | Yes | Patterns to search in Git repositories. For each pattern, use a format such as *{application}* or *{application}/{profile}* rather than *{application}-{profile}.yml*. Separate the patterns with commas. For more information, see the [Pattern](./how-to-enterprise-application-configuration-service.md#pattern) section of this article. |
-| `URI` | Yes | A Git URI (for example, `https://github.com/Azure-Samples/piggymetrics-config` or `git@github.com:Azure-Samples/piggymetrics-config`) |
-| `Label` | Yes | The branch name to search in the Git repository. |
-| `Search path` | No | Optional search paths, separated by commas, for searching subdirectories of the Git repository. |
+| Property | Required? | Description |
+||--|-|
+| `Name` | Yes | A unique name to label each Git repository. |
+| `Patterns` | Yes | The patterns to search for in Git repositories. For each pattern, use a format such as *{application}* or *{application}/{profile}* rather than *{application}-{profile}.yml*. Separate the patterns with commas. For more information, see the [Pattern](#pattern) section of this article. |
+| `URI` | Yes | A Git URI (for example, `https://github.com/Azure-Samples/piggymetrics-config` or `git@github.com:Azure-Samples/piggymetrics-config`) |
+| `Label` | Yes | The branch name to search for in the Git repository. |
+| `Search path` | No | Optional search paths, separated by commas, for searching subdirectories of the Git repository. |
### Pattern
The Application Configuration Service also supports polyglot apps like dotNET, G
When you modify and commit your configurations in a Git repository, several steps are involved before these changes are reflected in your applications. This process, though automated, involves the following distinct stages and components, each with its own timing and behavior: - Polling by Application Configuration Service: The Application Configuration Service regularly polls the backend Git repositories to detect any changes. This polling occurs at a set frequency, defined by the refresh interval. When a change is detected, Application Configuration Service updates the Kubernetes `ConfigMap`.-- ConfigMap update and interaction with kubelet cache: In Azure Spring Apps, this `ConfigMap` is mounted as a data volume to the relevant application. However, there's a natural delay in this process due to the frequency at which the kubelet refreshes its cache to recognize changes in `ConfigMap`.
+- `ConfigMap` update and interaction with kubelet cache: In Azure Spring Apps, this `ConfigMap` is mounted as a data volume to the relevant application. However, there's a natural delay in this process due to the frequency at which the kubelet refreshes its cache to recognize changes in `ConfigMap`.
- Application reads updated configuration: Your application running in the Azure Spring Apps environment can access the updated configuration values. The existing beans in the Spring Context aren't automatically refreshed to use the updated configurations. These stages are summarized in the following diagram:
az spring application-configuration-service delete \
+## Examine configuration file in ConfigMap
+
+The following section shows you how to examine the content of the configuration file pulled by Application Configuration Service from upstream Git repositories in the related Kubernetes `ConfigMap`. For more information, see the [Refresh strategies](#refresh-strategies) section of this article.
+
+### Assign an Azure role
+
+First, you must have the Azure role `Azure Spring Apps Application Configuration Service Config File Pattern Reader Role` assigned to you.
+
+#### [Azure portal](#tab/azure-Portal)
+
+Use the following steps to assign an Azure role:
+
+1. Open the [Azure portal](https://portal.azure.com) and go to your Azure Spring Apps service instance.
+
+1. In the navigation pane, select **Access Control (IAM)**.
+
+1. On the **Access Control (IAM)** page, select **Add**, and then select **Add role assignment**.
+
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service/add-role-assignment.png" alt-text="Screenshot of the Azure portal that shows the Access Control (IAM) page for an Azure Spring Apps instance with the Add role assignment option highlighted." lightbox="media/how-to-enterprise-application-configuration-service/add-role-assignment.png":::
+
+1. On the **Add role assignment** page, in the **Name** list, search for and select the target role, and then select **Next**.
+
+ :::image type="content" source="media/how-to-enterprise-application-configuration-service\application-configuration-service-config-pattern-file-reader-role.png" alt-text="Screenshot of the Azure portal that shows the Add role assignment page for an Azure Spring Apps instance with the Azure Spring Apps Application Configuration Service Config File Pattern Reader Role name highlighted." lightbox="media/how-to-enterprise-application-configuration-service\application-configuration-service-config-pattern-file-reader-role.png":::
+
+1. Select **Members** and then search for and select your username.
+
+1. Select **Review + assign**.
+
+#### [Azure CLI](#tab/azure-CLI)
+
+Use the following command to assign an Azure role:
+
+```azurecli
+az role assignment create \
+ --role "Azure Spring Apps Application Configuration Service Config File Pattern Reader Role" \
+ --scope "<service-instance-resource-id>" \
+ --assignee "<your-identity>"
+```
+++
+### Examine configuration file with Azure CLI
+
+Use the following command to view the content of the configuration file by [Pattern](#pattern):
+
+```azurecli
+az spring application-configuration-service config show \
+ --resource-group <resource-group-name> \
+ --service <Azure-Spring-Apps-instance-name> \
+ --config-file-pattern <pattern>
+```
+
+This command produces JSON output similar to the following example:
+
+```json
+{
+ "configurationFiles": {
+ "application.properties": [
+ "example.property.application.name: example-service",
+ "example.property.cloud: Azure"
+ ]
+ }
+}
+```
+
+You can also use this command with the `--export-path {/path/to/target/folder}` parameter to export the configuration file to the specified folder. It supports both relative paths and absolute paths. If you don't specify the path, the command uses the path of the current directory by default.
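+
+For example, the following minimal sketch exports the configuration file for a pattern to a local folder. The resource names and the `./exported-config` folder are placeholders; replace them with your own values:
+
+```azurecli
+# Export the configuration file for the specified pattern to ./exported-config
+az spring application-configuration-service config show \
+    --resource-group <resource-group-name> \
+    --service <Azure-Spring-Apps-instance-name> \
+    --config-file-pattern <pattern> \
+    --export-path ./exported-config
+```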
+
+## Examine configuration file in the app
+
+After you bind the app to the Application Configuration Service and set the [Pattern](#pattern) for the app deployment, as described in the [Use Application Configuration Service with applications](#use-application-configuration-service-with-applications) section of this article, the `ConfigMap` containing the configuration file for the pattern should be mounted to the application container. Use the following steps to check the configuration files in each instance of the app deployment:
+
+1. Connect to one of the application instances. For more information, see [Connect to an app instance for troubleshooting](./how-to-connect-to-app-instance-for-troubleshooting.md).
+
+1. Use the `echo $AZURE_SPRING_APPS_CONFIG_FILE_PATH` command to find the folders containing the configuration files. A list of locations shows up separated by commas, as shown in the following example:
+
+ ```output
+ $ echo $AZURE_SPRING_APPS_CONFIG_FILE_PATH
+ /etc/azure-spring-cloud/configmap/acs-default-payment-default-e9d46,/etc/azure-spring-cloud/configmap/acs-default-catalog-default-616f4
+ ```
+
+1. Check the content of the configuration file using commands such as `cat`.
+ ## Check logs The following sections show you how to view application logs by using either the Azure CLI or the Azure portal.
If the latest changes aren't reflected in the applications, check the following
- Confirm that the branch of the desired config file changes is updated.
- Confirm that the pattern configured in the Application Configuration Service matches the updated config files.
- Confirm that the application is bound to the Application Configuration Service.
-- Confirm that the `ConfigMap` of the app is updated. If it isn't updated, raise a ticket.
-- Confirm that the `ConfigMap` is mounted to the application as a file by using `web shell`. If the file isn't updated, wait for the Kubernetes refresh interval (1 minute), or force a refresh by restarting the application.
+- Confirm that the `ConfigMap` containing the configuration file for the [Pattern](#pattern) used by the application is updated, as described in the [Examine configuration file in ConfigMap](#examine-configuration-file-in-configmap) section of this article. If it isn't updated, raise a ticket.
+- Confirm that the `ConfigMap` is mounted to the application as a file, as described in the [Examine configuration file in the app](#examine-configuration-file-in-the-app) section of this article. If the file isn't updated, wait for the Kubernetes refresh interval (1 minute), or force a refresh by restarting the application, as shown in the sketch following this list.
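+
+The following Azure CLI sketch shows one way to force such a refresh by restarting the app. The app, service instance, and resource group names are placeholders:
+
+```azurecli
+# Restart the app so it remounts the updated ConfigMap and rereads the configuration
+az spring app restart \
+    --name <app-name> \
+    --service <Azure-Spring-Apps-instance-name> \
+    --resource-group <resource-group-name>
+```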
After checking these items, the applications should be able to read the updated configurations. If the applications still aren't updated, raise a ticket.
storage Archive Rehydrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/archive-rehydrate-overview.md
During the blob rehydration operation, you can call the [Get Blob Properties](/r
Rehydration of an archived blob may take up to 15 hours, and it is inefficient to repeatedly poll **Get Blob Properties** to determine whether rehydration is complete. Microsoft recommends that you use [Azure Event Grid](../../event-grid/overview.md) to capture the event that fires when rehydration is complete for better performance and cost optimization.
-Azure Event Grid raises one of the following two events on blob rehydration, depending on which operation was used to rehydrate the blob:
+Azure Event Grid raises the **Microsoft.Storage.BlobTierChanged** event when blob rehydration is complete:
-- The **Microsoft.Storage.BlobCreated** event fires when a blob is created. In the context of blob rehydration, this event fires when a [Copy Blob](/rest/api/storageservices/copy-blob) operation creates a new destination blob in either the hot or cool tier and the blob's data is fully rehydrated from the archive tier. If the account has the **hierarchical namespace** feature enabled on it, the `CopyBlob` operation works a little differently. In that case, the **Microsoft.Storage.BlobCreated** event is triggered when the `CopyBlob` operation is **initiated** and not when the Block Blob is completely committed.
-
-- The **Microsoft.Storage.BlobTierChanged** event fires when a blob's tier is changed. In the context of blob rehydration, this event fires when a [Set Blob Tier](/rest/api/storageservices/set-blob-tier) operation successfully changes an archived blob's tier to the hot or cool tier.
+- The **Microsoft.Storage.BlobTierChanged** event fires when a blob's tier is changed. In the context of blob rehydration, this event fires when the access tier of a destination blob is successfully changed from the archive tier to an online tier (hot, cool, or cold). You can use the Set Blob Tier operation to change the access tier of an archived blob, or use the Copy Blob operation to copy an archived blob to a new destination blob in an online tier.
To learn how to capture an event on rehydration and send it to an Azure Function event handler, see [Run an Azure Function in response to a blob rehydration event](archive-rehydrate-handle-event.md).
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md
# Assign an Azure role for access to blob data
+<!-- replaycheck-task id="cb105ef6" -->
+<!-- replaycheck-task id="e3ce9356" -->
+<!-- replaycheck-task id="2de8753c" -->
+<!-- replaycheck-task id="542306be" -->
+<!-- replaycheck-task id="57011072" -->
+<!-- replaycheck-task id="c0f2f9d5" -->
Microsoft Entra authorizes access rights to secured resources through [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). Azure Storage defines a set of Azure built-in roles that encompass common sets of permissions used to access blob data. When an Azure role is assigned to a Microsoft Entra security principal, Azure grants access to those resources for that security principal. A Microsoft Entra security principal can be a user, a group, an application service principal, or a [managed identity for Azure resources](../../active-directory/managed-identities-azure-resources/overview.md).
To assign a role scoped to a storage account, specify a string containing the sc
The following example assigns the **Storage Blob Data Contributor** role to a user. The role assignment is scoped to the level of the container. Make sure to replace the sample values and the placeholder values in brackets (`<>`) with your own values:
+<!-- replaycheck-task id="fee1778" -->
```powershell
New-AzRoleAssignment -SignInName <email> `
    -RoleDefinitionName "Storage Blob Data Contributor" `
New-AzRoleAssignment -SignInName <email> `
The following example assigns the **Storage Blob Data Reader** role to a user by specifying the object ID. The role assignment is scoped to the level of the storage account. Make sure to replace the sample values and the placeholder values in brackets (`<>`) with your own values:
+<!-- replaycheck-task id="3361d580" -->
```powershell
New-AzRoleAssignment -ObjectID "ab12cd34-ef56-ab12-cd34-ef56ab12cd34" `
    -RoleDefinitionName "Storage Blob Data Reader" `
To assign a role scoped to a container, specify a string containing the scope of
The following example assigns the **Storage Blob Data Contributor** role to a user. The role assignment is scoped to the level of the container. Make sure to replace the sample values and the placeholder values in brackets (`<>`) with your own values:
+<!-- replaycheck-task id="60f1639b" -->
```azurecli-interactive
az role assignment create \
    --role "Storage Blob Data Contributor" \
az role assignment create \
The following example assigns the **Storage Blob Data Reader** role to a user by specifying the object ID. To learn more about the `--assignee-object-id` and `--assignee-principal-type` parameters, see [az role assignment](/cli/azure/role/assignment). In this example, the role assignment is scoped to the level of the storage account. Make sure to replace the sample values and the placeholder values in brackets (`<>`) with your own values:
-<!-- replaycheck-task id="66526dae" -->
+<!-- replaycheck-task id="8cdad632" -->
```azurecli-interactive
az role assignment create \
    --role "Storage Blob Data Reader" \
storage Blob Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-inventory.md
Previously updated : 10/23/2023 Last updated : 03/28/2024
An object replication policy can prevent an inventory job from writing inventory
You can't configure an inventory policy in the account if support for version-level immutability is enabled on that account, or if support for version-level immutability is enabled on the destination container that is defined in the inventory policy.
+### Reports might exclude soft-deleted blobs in accounts that have a hierarchical namespace
+
+If a container or directory is deleted with soft-delete enabled, then the container or directory and all its contents are marked as soft-deleted. However, only the container or directory (reported as a zero-length blob) appears in an inventory report, not the soft-deleted blobs in that container or directory, even if you set the `includeDeleted` field of the policy to **true**. This behavior can lead to a difference between the capacity metrics that you see in the Azure portal and what an inventory report shows.
+
+Only blobs that are explicitly deleted appear in reports. Therefore, to obtain a complete listing of all soft-deleted blobs (the directory and all child blobs), workloads should delete each blob in a directory before deleting the directory itself, as shown in the following sketch.
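+
+The following Azure CLI sketch illustrates that approach for an account that has a hierarchical namespace. The account, container (file system), and directory names are placeholders, and the sketch assumes the directory contains only files:
+
+```azurecli
+# Delete each blob (file) in the directory first so each one is individually soft-deleted
+for f in $(az storage fs file list \
+    --account-name <storage-account> \
+    --file-system <container> \
+    --path <directory> \
+    --auth-mode login \
+    --query "[].name" --output tsv)
+do
+    az storage fs file delete \
+        --account-name <storage-account> \
+        --file-system <container> \
+        --path "$f" \
+        --auth-mode login --yes
+done
+
+# Then delete the now-empty directory itself
+az storage fs directory delete \
+    --account-name <storage-account> \
+    --file-system <container> \
+    --name <directory> \
+    --auth-mode login --yes
+```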
+ ## Next steps - [Enable Azure Storage blob inventory reports](blob-inventory-how-to.md)
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
All geo-redundant offerings support Microsoft-managed failover. In addition, som
The following features and services aren't supported for account failover: -- Azure File Sync doesn't support storage account failover. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync shouldn't be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files.
+- Azure File Sync doesn't support customer-initiated storage account failover. Storage accounts containing Azure file shares being used as cloud endpoints in Azure File Sync shouldn't be failed over. Doing so will cause sync to stop working and may also cause unexpected data loss in the case of newly tiered files. For more information, see [Best practices for disaster recovery with Azure File Sync](../file-sync/file-sync-disaster-recovery-best-practices.md#geo-redundancy).
- A storage account containing premium block blobs can't be failed over. Storage accounts that support premium block blobs don't currently support geo-redundancy.
- Customer-managed failover isn't supported for either the source or the destination account in an [object replication policy](../blobs/object-replication-overview.md).
- To fail over an account with SSH File Transfer Protocol (SFTP) enabled, you must first [disable SFTP for the account](../blobs/secure-file-transfer-protocol-support-how-to.md#disable-sftp-support), as shown in the sketch following this list. If you want to resume using SFTP after the failover is complete, simply [re-enable it](../blobs/secure-file-transfer-protocol-support-how-to.md#enable-sftp-support).
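
The following Azure CLI sketch shows that sequence with placeholder names; it disables SFTP on the account and then initiates the customer-managed failover:

```azurecli
# Disable SFTP support before starting the failover
az storage account update \
    --name <storage-account> \
    --resource-group <resource-group> \
    --enable-sftp false

# Initiate the account failover to the secondary region
az storage account failover \
    --name <storage-account> \
    --resource-group <resource-group>
```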
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
description: Learn about new features and enhancements in Azure Files and Azure
Previously updated : 02/01/2024 Last updated : 03/29/2024
Azure Files and Azure File Sync are updated regularly to offer new features and
### 2024 quarter 1 (January, February, March)
+#### Azure Files geo-redundancy for standard large file shares is generally available
+
+Standard SMB file shares that are geo-redundant (GRS and GZRS) can now scale up to 100 TiB of capacity with significantly improved IOPS and throughput limits. For more information, see the [blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/general-availability-azure-files-geo-redundancy-for-standard/ba-p/4097935) and [documentation](geo-redundant-storage-for-large-file-shares.md).
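+
+As a minimal sketch (with placeholder names), you can create a geo-zone-redundant standard storage account that supports large file shares by using the Azure CLI:
+
+```azurecli
+# Create a GZRS storage account with large file share support enabled
+az storage account create \
+    --name <storage-account> \
+    --resource-group <resource-group> \
+    --location <region> \
+    --sku Standard_GZRS \
+    --kind StorageV2 \
+    --enable-large-file-share
+```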
++
#### Metadata caching for premium SMB file shares is in public preview

Metadata caching is an enhancement for SMB Azure premium file shares aimed at reducing metadata latency, increasing available IOPS, and boosting network throughput. [Learn more](smb-performance.md#metadata-caching-for-premium-smb-file-shares).
storage Geo Redundant Storage For Large File Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/geo-redundant-storage-for-large-file-shares.md
description: Azure Files geo-redundancy for large file shares significantly impr
Previously updated : 03/26/2024 Last updated : 03/29/2024
Azure Files geo-redundancy for large file shares is generally available in the m
| Canada East | Preview | | Central India | Preview | | Central US | GA |
-| China East | Preview |
+| China East | GA |
| China East 2 | Preview | | China East 3 | GA |
-| China North | Preview |
+| China North | GA |
| China North 2 | Preview | | China North 3 | GA | | East Asia | GA |
Azure Files geo-redundancy for large file shares is generally available in the m
| UK West | GA | | US DoD Central | GA | | US DoD East | GA |
-| US Gov Arizona | Preview |
-| US Gov Texas | Preview |
-| US Gov Virginia | Preview |
+| US Gov Arizona | GA |
+| US Gov Texas | GA |
+| US Gov Virginia | GA |
| West Central US | GA | | West Europe | Preview | | West India | Preview |
stream-analytics Capture Event Hub Data Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-delta-lake.md
Use the following steps to configure a Stream Analytics job to capture data in A
1. Enter a **name** to identify your Stream Analytics job. Select **Create**. :::image type="content" source="./media/capture-event-hub-data-delta-lake/new-stream-analytics-job-name.png" alt-text="Screenshot showing the New Stream Analytics job window where you enter the job name." lightbox="./media/capture-event-hub-data-delta-lake/new-stream-analytics-job-name.png" :::
-1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job will use to connect to Event Hubs. Then select **Connect**.
+1. Specify the **Serialization** type of your data in the Event Hubs and the **Authentication method** that the job uses to connect to Event Hubs. Then select **Connect**.
:::image type="content" source="./media/capture-event-hub-data-delta-lake/event-hub-configuration.png" alt-text="Screenshot showing the Event Hubs connection configuration." lightbox="./media/capture-event-hub-data-delta-lake/event-hub-configuration.png" :::
-1. When the connection is established successfully, you'll see:
+1. When the connection is established successfully, you see:
- Fields that are present in the input data. You can choose **Add field** or you can select the three dot symbol next to a field to optionally remove, rename, or change its name. - A live sample of incoming data in the **Data preview** table under the diagram view. It refreshes periodically. You can select **Pause streaming preview** to view a static view of the sample input. :::image type="content" source="./media/capture-event-hub-data-delta-lake/edit-fields.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/capture-event-hub-data-delta-lake/edit-fields.png" ::: 1. Select the **Azure Data Lake Storage Gen2** tile to edit the configuration. 1. On the **Azure Data Lake Storage Gen2** configuration page, follow these steps:
- 1. Select the subscription, storage account name and container from the drop-down menu.
+ 1. Select the subscription, storage account name, and container from the drop-down menu.
1. Once the subscription is selected, the authentication method and storage account key should be automatically filled in.
- 1. For **Delta table path**, it's used to specify the location and name of your Delta Lake table stored in Azure Data Lake Storage Gen2. You can choose to use one or more path segments to define the path to the delta table and the delta table name. To learn more, see to [Write to Delta Lake table (Public Preview)](./write-to-delta-lake.md).
+ 1. For **Delta table path**, it's used to specify the location and name of your Delta Lake table stored in Azure Data Lake Storage Gen2. You can choose to use one or more path segments to define the path to the delta table and the delta table name. To learn more, see to [Write to Delta Lake table](./write-to-delta-lake.md).
1. Select **Connect**. :::image type="content" source="./media/capture-event-hub-data-delta-lake/blob-configuration.png" alt-text="First screenshot showing the Blob window where you edit a blob's connection configuration." lightbox="./media/capture-event-hub-data-delta-lake/blob-configuration.png" :::
-1. When the connection is established, you'll see fields that are present in the output data.
+1. When the connection is established, you see fields that are present in the output data.
1. Select **Save** on the command bar to save your configuration. 1. Select **Start** on the command bar to start the streaming flow to capture data. Then in the Start Stream Analytics job window: 1. Choose the output start time.
Use the following steps to configure a Stream Analytics job to capture data in A
:::image type="content" source="./media/capture-event-hub-data-delta-lake/start-job.png" alt-text="Screenshot showing the Start Stream Analytics job window where you set the output start time, streaming units, and error handling." lightbox="./media/capture-event-hub-data-delta-lake/start-job.png" :::
-1. After you select **Start**, the job starts running within two minutes, and the metrics will be open in tab section below.
+1. After you select **Start**, the job starts running within two minutes, and the metrics open in the tab section, as shown in the following image.
:::image type="content" source="./media/capture-event-hub-data-delta-lake/metrics-chart-in-tab-section.png" alt-text="Screenshot showing the metrics chart." lightbox="./media/capture-event-hub-data-delta-lake/metrics-chart-in-tab-section.png" ::: 1. The new job can be seen on the **Stream Analytics jobs** tab.
Use the following steps to configure a Stream Analytics job to capture data in A
## Verify output

Verify that the Parquet files with Delta Lake format are generated in the Azure Data Lake Storage container.

## Next steps
stream-analytics No Code Power Bi Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-power-bi-tutorial.md
Title: Build real-time dashboard with Azure Stream Analytics no-code editor, Synapse Analytics and Power BI
-description: Use no code editor to compute aggregations and write to Azure Synapse Analytics and build real-time dashboards using Power BI
+ Title: Build real-time dashboard with Azure Stream Analytics no-code editor, Synapse Analytics, and Power BI
+description: Use no code editor to compute aggregations and write to Azure Synapse Analytics and build real-time dashboards using Power BI.
Previously updated : 02/23/2023 Last updated : 03/29/2024 # Build real-time Power BI dashboards with Stream Analytics no code editor
Before you start, make sure you've completed the following steps:
> [!NOTE]
> If you'd like to build the real-time Power BI dashboard directly without capturing the data into the database, you can skip steps 3 and 4, and then go to this guide to [<u>build real-time dashboard with Power BI dataset produced by Stream Analytics job</u>](./no-code-build-power-bi-dashboard.md).
-4. Create a table named `carsummary` using your Dedicated SQL pool. You can do it by running the following SQL script:
+4. [Create a table](../synapse-analytics/sql/get-started-visual-studio.md) named `carsummary` using your Dedicated SQL pool. You can do it by running the following SQL script:
    ```SQL
    CREATE TABLE carsummary (
- Make nvarchar(20),
- CarCount int,
- times datetime
- )
+ Make nvarchar(20),
+ CarCount int,
+ times datetime
+ )
    WITH ( CLUSTERED COLUMNSTORE INDEX ) ;
    ```
Before you start, make sure you've completed the following steps:
:::image type="content" source="./media/stream-analytics-no-code/job-name.png" alt-text="Screenshot of the New Stream Analytics job page." lightbox="./media/stream-analytics-no-code/job-name.png"::: 1. On the **event hub** configuration page, confirm the following settings, and then select **Connect**.
- - *Consumer Group*: Default
- - *Serialization type* of your input data: JSON
- - *Authentication mode* that the job will use to connect to your event hub: Connection string.
+ 1. For **Consumer group**, select **Use existing**, and then select **Default**.
+ 1. For **Serialization type**, confirm that **JSON** is selected.
+    1. For **Authentication mode**, confirm that **Connection String** is used to connect to your event hub.
:::image type="content" source="./media/stream-analytics-no-code/event-hub-configuration.png" alt-text="Screenshot of the configuration page for your event hub." lightbox="./media/stream-analytics-no-code/event-hub-configuration.png":::
-1. Within few seconds, you'll see sample input data and the schema. You can choose to drop fields, rename fields or change data type if you want.
+1. Within a few seconds, you see sample input data and the schema. You can choose to drop fields, rename fields, or change the data type if you want.
:::image type="content" source="./media/stream-analytics-no-code/data-preview-fields.png" alt-text="Screenshot showing the preview of data in the event hub and the fields." lightbox="./media/stream-analytics-no-code/data-preview-fields.png":::
+1. Select **Operations** on the command bar and then select **Group by**.
+
+ :::image type="content" source="./media/stream-analytics-no-code/select-operations-group-by.png" alt-text="Screenshot showing the Operations menu with Group by selected on the command bar.":::
1. Select the **Group by** tile on the canvas and connect it to the event hub tile. :::image type="content" source="./media/stream-analytics-no-code/connect-group.png" alt-text="Screenshot showing the Group tile connected to the Event Hubs tile." lightbox="./media/stream-analytics-no-code/connect-group.png"::: 1. Configure the **Group by** tile by specifying: 1. Aggregation as **Count**. 1. Field as **Make** which is a nested field inside **CarModel**.
- 1. Select **Save**.
- 1. In the **Group by** settings, select **Make** and **Tumbling window** of **3 minutes**
-
- :::image type="content" source="./media/stream-analytics-no-code/group-settings.png" alt-text="Screenshot of the Group by configuration page." lightbox="./media/stream-analytics-no-code/group-settings.png":::
-1. Select **Add field** on the **Manage fields** page, and add the **Make** field as shown in the following image, and then select **Save**.
-
- :::image type="content" source="./media/stream-analytics-no-code/add-make-field.png" alt-text="Screenshot showing the addition of the Make field." lightbox="./media/stream-analytics-no-code/add-make-field.png":::
-1. Select **Manage fields** on the command bar. Connect the **Manage Fields** tile to the **Group by tile** on canvas. Select **Add all fields** on the **Manage fields** configuration page.
+ 1. Select **Add**.
+
+ :::image type="content" source="./media/stream-analytics-no-code/group-by-aggregations.png" alt-text="Screenshot of the Aggregations setting in the Group by configuration page." :::
+ 1. In the **Settings** section:
+ 1. For **Group aggregations by**, select **Make**.
+ 1. For **Time window**, confirm that the value is set to **Tumbling**.
+ 1. For **Duration**, enter **3 minutes**
+ 1. Select **Done** at the bottom of the page.
+
+ :::image type="content" source="./media/stream-analytics-no-code/group-settings.png" alt-text="Screenshot of the Group by configuration page." lightbox="./media/stream-analytics-no-code/group-settings.png":::
+1. Select **Group by**, and notice the grouped data in the **Data preview** tab at the bottom of the page.
+
+ :::image type="content" source="./media/stream-analytics-no-code/group-by-data-preview.png" alt-text="Screenshot that shows the Data Preview tab for the Group by operation." lightbox="./media/stream-analytics-no-code/group-by-data-preview.png":::
+1. On the command bar, select **Operations** and then **Manage fields**.
+1. Connect **Group by** and **Manage fields** tiles.
+1. On the **Manage fields** page, follow these steps:
+ 1. Add the **Make** field as shown in the following image, and then select **Add**.
+
+ :::image type="content" source="./media/stream-analytics-no-code/add-make-field.png" alt-text="Screenshot showing the addition of the Make field." lightbox="./media/stream-analytics-no-code/add-make-field.png":::
+ 2. Select **Add** button.
+
+ :::image type="content" source="./media/stream-analytics-no-code/add-make-field-button.png" alt-text="Screenshot showing the Add button on the Manage fields page.":::
+1. Select **Add all fields** on the **Manage fields** configuration page.
:::image type="content" source="./media/stream-analytics-no-code/manage-fields.png" alt-text="Screenshot of the Manage fields page." lightbox="./media/stream-analytics-no-code/manage-fields.png"::: 1. Select **...** next to the fields, and select **Edit** to rename them.
Before you start, make sure you've completed the following steps:
- **Window_End_Time** to **times** :::image type="content" source="./media/stream-analytics-no-code/rename-fields.png" alt-text="Screenshot of the Manage fields page with the fields renamed." lightbox="./media/stream-analytics-no-code/rename-fields.png":::
-1. The **Manage fields** page should look as shown in the following image.
+1. Select **Done** on the **Manage fields** page. The **Manage fields** page should look as shown in the following image.
:::image type="content" source="./media/stream-analytics-no-code/manage-fields-page.png" alt-text="Screenshot of the Manage fields page with three fields." lightbox="./media/stream-analytics-no-code/manage-fields-page.png":::
-1. Select **Synapse** on the command bar. Connect the **Synapse** tile to the **Manage fields** tile on your canvas.
-1. Configure Azure Synapse Analytics by specifying:
- * Subscription where your Azure Synapse Analytics is located
- * Database of the Dedicated SQL pool that you used to create the `carsummary` table in the previous section.
- * Username and password to authenticate
- * Table name as `carsummary`
- * Select **Connect**. You'll see sample results that will be written to your Synapse SQL table.
+1. Select the **Manage fields** tile, and see the data flowing into the operation in the **Data preview** tab at the bottom of the page.
+
+ :::image type="content" source="./media/stream-analytics-no-code/manage-fields-data-preview.png" alt-text="Screenshot that shows the Data Preview tab for the Managed Fields operation." lightbox="./media/stream-analytics-no-code/manage-fields-data-preview.png":::
+1. On the command bar, select **Outputs**, and then select **Synapse**.
+
+ :::image type="content" source="./media/stream-analytics-no-code/select-output-synapse.png" alt-text="Screenshot of command bar with Outputs, Synapse selected.":::
+1. Connect the **Synapse** tile to the **Manage fields** tile on your canvas.
+1. On the **Synapse** settings page, follow these steps:
+ 1. If the **Job storage account** isn't already set, select the Azure Data Lake Storage account in the resource group. It's the storage account that is used by Synapse SQL to load data into your data warehouse.
+
+ :::image type="content" source="./media/stream-analytics-no-code/select-synapse-storage-account.png" alt-text="Screenshot that shows the Synapse with selection of storage account.":::
+ 1. Select the Azure subscription where your Azure Synapse Analytics is located.
+ 1. Select the database of the Dedicated SQL pool that you used to create the `carsummary` table in the previous section.
+ 1. Enter username and password to authenticate.
+ 1. Enter table name as `carsummary`.
+ 1. Select **Connect**. You see sample results that will be written to your Synapse SQL table.
:::image type="content" source="./media/stream-analytics-no-code/synapse-settings.png" alt-text="Screenshot of the Synapse tile settings." lightbox="./media/stream-analytics-no-code/synapse-settings.png":::
-1. Select **Save** in the top ribbon to save your job and then select **Start**. Set Streaming Unit count to 3 and then select **Start** to run your job. Specify the storage account that will be used by Synapse SQL to load data into your data warehouse.
+1. Select the **Synapse** tile and see the **Data preview** tab at the bottom of the page. You see the data flowing into the dedicated SQL pool.
+
+ :::image type="content" source="./media/stream-analytics-no-code/synapse-data-preview.png" alt-text="Screenshot that shows Data Preview for the Synapse tile." lightbox="./media/stream-analytics-no-code/synapse-data-preview.png":::
+1. Select **Save** in the top ribbon to save your job and then select **Start**.
+ :::image type="content" source="./media/stream-analytics-no-code/start-job-button.png" alt-text="Screenshot that shows the Start button selected on the command bar." lightbox="./media/stream-analytics-no-code/start-job-button.png":::
+1. On the **Start Stream Analytics Job** page, select **Start** to run your job.
:::image type="content" source="./media/stream-analytics-no-code/start-analytics-job.png" alt-text="Screenshot of the Start Stream Analytics Job page." lightbox="./media/stream-analytics-no-code/start-analytics-job.png":::
-1. You'll then see a list of all Stream Analytics jobs created using the no code editor. And within two minutes, your job will go to a **Running** state. Select the **Refresh** button on the page to see the status changing from Created -> Starting -> Running.
+1. You then see a list of all Stream Analytics jobs created using the no code editor. And within two minutes, your job goes to a **Running** state. Select the **Refresh** button on the page to see the status changing from Created -> Starting -> Running.
:::image type="content" source="./media/stream-analytics-no-code/job-list.png" alt-text="Screenshot showing the list of jobs." lightbox="./media/stream-analytics-no-code/job-list.png"::: ## Create a Power BI visualization 1. Download the latest version of [Power BI desktop](https://powerbi.microsoft.com/desktop).
-2. Use the Power BI connector for Azure Synapse SQL to connect to your database with **DirectQuery**.
-3. Use this query to fetch data from your database
+2. Use the Power BI connector for Azure Synapse SQL.
+
+ :::image type="content" source="./media/stream-analytics-no-code/power-bi-get-data-azure-synapse.png" alt-text="Screenshot that shows the Power BI Desktop with Azure and Synapse Analytics SQL selected." lightbox="./media/stream-analytics-no-code/power-bi-get-data-azure-synapse.png":::
+1. Connect to your database with **DirectQuery**, and use this query to fetch data from your database
+ ```SQL SELECT [Make],[CarCount],[times] FROM [dbo].[carsummary] WHERE times >= DATEADD(day, -1, GETDATE()) ```
-4. You can then create a line chart with
+
+ :::image type="content" source="./media/stream-analytics-no-code/power-bi-direct-query.png" alt-text="Screenshot that shows the configuration of Power BI Destop to connect to Azure Synapse SQL Database." lightbox="./media/stream-analytics-no-code/power-bi-direct-query.png":::
+
+ Switch to **Database** tab, and enter your credentials (user name and password) to connect to the database and run the query.
+1. Select **Load** to load data into the Power BI.
+1. You can then create a line chart with
* X-axis as times * Y-axis as CarCount * Legend as Make
stream-analytics No Code Stream Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-stream-processing.md
A Stream Analytics job is built on three main components: _streaming inputs_, _t
To access the no-code editor for building your stream analytics job, there are two approaches:
-1. **Through Azure Stream Analytics portal (preview)**: Create a Stream Analytics job, and then select the no-code editor in the **Get started** tab in **Overview** blade, or select **No-code editor** in the left panel.
+1. **Through Azure Stream Analytics portal (preview)**: Create a Stream Analytics job, and then select the no-code editor in the **Get started** tab in **Overview** page, or select **No-code editor** in the left panel.
- :::image type="content" source="./media/no-code-stream-processing/no-code-on-asa-portal.png" alt-text="Screenshot that shows no-code on ASA portal locations." lightbox="./media/no-code-stream-processing/no-code-on-asa-portal.png" :::
+ :::image type="content" source="./media/no-code-stream-processing/no-code-on-asa-portal.png" alt-text="Screenshot that shows no-code on Azure Stream Analytics portal locations." lightbox="./media/no-code-stream-processing/no-code-on-asa-portal.png" :::
-2. **Through Azure Event Hubs portal**: Open an Event Hubs instance. Select **Process Data**, and then select any pre-defined template.
+2. **Through Azure Event Hubs portal**: Open an Event Hubs instance. Select **Process Data**, and then select any predefined template.
:::image type="content" source="./media/no-code-stream-processing/new-stream-analytics-job.png" alt-text="Screenshot that shows selections to create a new Stream Analytics job." lightbox="./media/no-code-stream-processing/new-stream-analytics-job.png" :::
- The pre-defined templates can assist you in developing and running a job to address various scenarios, including:
+ The predefined templates can assist you in developing and running a job to address various scenarios, including:
- [Build real-time dashboard with Power BI dataset](./no-code-build-power-bi-dashboard.md) - [Capture data from Event Hubs in Delta Lake format (preview)](./capture-event-hub-data-delta-lake.md)
To access the no-code editor for building your stream analytics job, there are t
- [Transform and store data to Azure SQL database](./no-code-transform-filter-ingest-sql.md) - [Filter and ingest to Azure Data Explorer](./no-code-filter-ingest-data-explorer.md)
-The following screenshot shows a completed Stream Analytics job. It highlights all the sections available to you while you author.
+The following screenshot shows a completed Stream Analytics job. It highlights all the sections available to you as you author.
:::image type="content" source="./media/no-code-stream-processing/created-stream-analytics-job.png" alt-text="Screenshot that shows the authoring interface sections." lightbox="./media/no-code-stream-processing/created-stream-analytics-job.png" :::
-1. **Ribbon**: On the ribbon, sections follow the order of a classic analytics process: an event hub as input (also known as a data source), transformations (streaming ETL operations), outputs, a button to save your progress, and a button to start the job.
+1. **Ribbon**: On the ribbon, sections follow the order of a classic analytics process: an event hub as input (also known as a data source), transformations (streaming Extract, Transform, and Load operations), outputs, a button to save your progress, and a button to start the job.
2. **Diagram view**: This is a graphical representation of your Stream Analytics job, from input to operations to outputs.
-3. **Side pane**: Depending on which component you selected in the diagram view, you'll have settings to modify input, transformation, or output.
-4. **Tabs for data preview, authoring errors, runtime logs, and metrics**: For each tile, the data preview will show you results for that step (live for inputs; on demand for transformations and outputs). This section also summarizes any authoring errors or warnings that you might have in your job when it's being developed. Selecting each error or warning will select that transform. It also provides the job metrics for you to monitor the running job's health.
+3. **Side pane**: Depending on which component you selected in the diagram view, you see settings to modify input, transformation, or output.
+4. **Tabs for data preview, authoring errors, runtime logs, and metrics**: For each tile, the data preview shows you results for that step (live for inputs; on demand for transformations and outputs). This section also summarizes any authoring errors or warnings that you might have in your job when it's being developed. Selecting each error or warning selects that transform. It also provides the job metrics for you to monitor the running job's health.
## Streaming data input
If your event hub is in the Basic tier, you can use only the existing **$Default
![Screenshot that shows consumer group selection while setting up an event hub.](./media/no-code-stream-processing/consumer-group-nocode.png)
-When you're connecting to the event hub, if you select **Managed Identity** as the authentication mode, the Azure Event Hubs Data Owner role will be granted to the managed identity for the Stream Analytics job. To learn more about managed identities for an event hub, see [Use managed identities to access an event hub from an Azure Stream Analytics job](event-hubs-managed-identity.md).
+When you're connecting to the event hub, if you select **Managed Identity** as the authentication mode, the Azure Event Hubs Data Owner role is granted to the managed identity for the Stream Analytics job. To learn more about managed identities for an event hub, see [Use managed identities to access an event hub from an Azure Stream Analytics job](event-hubs-managed-identity.md).
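+
+The no-code editor normally creates this role assignment for you. If you ever need to grant the role yourself, for example for a managed identity configured outside the editor, a minimal Azure CLI sketch with placeholder values looks like this:
+
+```azurecli
+# Assign the Azure Event Hubs Data Owner role to the job's managed identity
+az role assignment create \
+    --role "Azure Event Hubs Data Owner" \
+    --assignee-object-id <job-managed-identity-principal-id> \
+    --assignee-principal-type ServicePrincipal \
+    --scope <event-hubs-namespace-resource-id>
+```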
Managed identities eliminate the limitations of user-based authentication methods. These limitations include the need to reauthenticate because of password changes or user token expirations that occur every 90 days.
Managed identities eliminate the limitations of user-based authentication method
After you set up your event hub's details and select **Connect**, you can add fields manually by using **+ Add field** if you know the field names. To instead detect fields and data types automatically based on a sample of the incoming messages, select **Autodetect fields**. Selecting the gear symbol allows you to edit the credentials if needed.
-When Stream Analytics jobs detect the fields, you'll see them in the list. You'll also see a live preview of the incoming messages in the **Data Preview** table under the diagram view.
+When Stream Analytics jobs detect the fields, you see them in the list. You also see a live preview of the incoming messages in the **Data Preview** table under the diagram view.
#### Modify input data You can edit the field names, or remove field, or change the data type, or change the event time (**Mark as event time**: TIMESTAMP BY clause if a datetime type field), by selecting the three-dot symbol next to each field. You can also expand, select, and edit any nested fields from the incoming messages, as shown in the following image. > [!TIP]
-> This applies to the input data from Azure IoT Hub and ADLS Gen2 as well.
+> This applies to the input data from Azure IoT Hub and Azure Data Lake Storage Gen2 as well.
:::image type="content" source="./media/no-code-stream-processing/event-hub-schema.png" alt-text="Screenshot that shows selections for adding, removing, and editing the fields for an event hub." lightbox="./media/no-code-stream-processing/event-hub-schema.png" :::
Managed identities eliminate the limitations of user-based authentication method
:::image type="content" source="./media/no-code-stream-processing/exactly-once-delivery-adls.png" alt-text="Screenshot that shows the exactly once configuration in ADLS Gen2 output." lightbox="./media/no-code-stream-processing/exactly-once-delivery-adls.png" :::
-**Write to Delta Lake table (preview)** is supported in the ADLS Gen2 as no code editor output. You can access this option in section **Serialization** in ADLS Gen2 configuration. For more information about this feature, see [Write to Delta Lake table (Public Preview)](./write-to-delta-lake.md).
+**Write to Delta Lake table (preview)** is supported as a no-code editor output to ADLS Gen2. You can access this option in the **Serialization** section of the ADLS Gen2 configuration. For more information about this feature, see [Write to Delta Lake table](./write-to-delta-lake.md).
:::image type="content" source="./media/no-code-stream-processing/delta-lake-format-output-in-adls.png" alt-text="Screenshot that shows the delta lake configuration in ADLS Gen2 output." lightbox="./media/no-code-stream-processing/delta-lake-format-output-in-adls.png" :::
To configure Azure SQL Database as output, select **SQL Database** under the **O
For more information about Azure SQL Database output for a Stream Analytics job, see [Azure SQL Database output from Azure Stream Analytics](./sql-database-output.md).
-### Event Hub
+### Event Hubs
-With the real-time data coming through event hub to ASA, no-code editor can transform, enrich the data and then output the data to another event hub as well. You can choose the **Event Hub** output when you configure your Azure Stream Analytics job.
+With real-time data coming through to ASA, the no-code editor can transform and enrich the data, and then output it to another event hub. You can choose the **Event Hubs** output when you configure your Azure Stream Analytics job.
To configure Event Hubs as output, select **Event Hub** under the Outputs section on the ribbon. Then fill in the needed information to connect your event hub that you want to write data to.
stream-analytics Stream Analytics Documentdb Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-documentdb-output.md
Title: Azure Stream Analytics output to Azure Cosmos DB
-description: This article describes how to use Azure Stream Analytics to save output to Azure Cosmos DB for JSON output, for data archiving and low-latency queries on unstructured JSON data.
--
+description: This article describes how to use Azure Stream Analytics to save output to Azure Cosmos DB for JSON output.
++ Previously updated : 09/15/2022 Last updated : 03/29/2024 # Azure Stream Analytics output to Azure Cosmos DB
-Azure Stream Analytics can target [Azure Cosmos DB](https://azure.microsoft.com/services/documentdb/) for JSON output, enabling data archiving and low-latency queries on unstructured JSON data. This document covers some best practices for implementing this configuration. We recommend that you set your job to compatability level 1.2 when using Azure Cosmos DB as output.
-
-If you're unfamiliar with Azure Cosmos DB, see the [Azure Cosmos DB documentation](../cosmos-db/index.yml) to get started.
+Azure Stream Analytics can output data in JSON format to [Azure Cosmos DB](https://azure.microsoft.com/services/documentdb/). It enables data archiving and low-latency queries on unstructured JSON data. This article covers some best practices for implementing this configuration (Stream Analytics to Cosmos DB). If you're unfamiliar with Azure Cosmos DB, see the [Azure Cosmos DB documentation](../cosmos-db/index.yml) to get started.
> [!Note]
-> At this time, Stream Analytics supports connection to Azure Cosmos DB only through the *SQL API*.
-> Other Azure Cosmos DB APIs are not yet supported. If you point Stream Analytics to Azure Cosmos DB accounts created with other APIs, the data might not be properly stored.
+> - At this time, Stream Analytics supports connection to Azure Cosmos DB only through the *SQL API*. Other Azure Cosmos DB APIs are not yet supported. If you point Stream Analytics to Azure Cosmos DB accounts created with other APIs, the data might not be properly stored.
+> - We recommend that you set your job to compatibility level 1.2 when using Azure Cosmos DB as output.
## Basics of Azure Cosmos DB as an output target
-The Azure Cosmos DB output in Stream Analytics enables writing your stream processing results as JSON output into your Azure Cosmos DB containers.
-
-Stream Analytics doesn't create containers in your database. Instead, it requires you to create them beforehand. You can then control the billing costs of Azure Cosmos DB containers. You can also tune the performance, consistency, and capacity of your containers directly by using the [Azure Cosmos DB APIs](/rest/api/cosmos-db/).
--
-The following sections detail some of the container options for Azure Cosmos DB.
+The Azure Cosmos DB output in Stream Analytics enables writing your stream processing results as JSON output into your Azure Cosmos DB containers. Stream Analytics doesn't create containers in your database. Instead, it requires you to create them beforehand. You can then control the billing costs of Azure Cosmos DB containers. You can also tune the performance, consistency, and capacity of your containers directly by using the [Azure Cosmos DB APIs](/rest/api/cosmos-db/). The following sections detail some of the container options for Azure Cosmos DB.
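+
+Because the containers must already exist, a minimal Azure CLI sketch (with placeholder names) for creating the target database and container up front might look like this; `/region` is used here only as an example of a top-level partition key:
+
+```azurecli
+# Create the database and container that the Stream Analytics output will write to
+az cosmosdb sql database create \
+    --account-name <cosmos-account> \
+    --resource-group <resource-group> \
+    --name <database-name>
+
+az cosmosdb sql container create \
+    --account-name <cosmos-account> \
+    --resource-group <resource-group> \
+    --database-name <database-name> \
+    --name <container-name> \
+    --partition-key-path /region \
+    --throughput 400
+```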
## Tuning consistency, availability, and latency To match your application requirements, Azure Cosmos DB allows you to fine-tune the database and containers and make trade-offs between consistency, availability, latency, and throughput.
-Depending on what levels of read consistency your scenario needs against read and write latency, you can choose a consistency level on your database account. You can improve throughput by scaling up Request Units (RUs) on the container.
-
-Also by default, Azure Cosmos DB enables synchronous indexing on each CRUD operation to your container. This is another useful option to control write/read performance in Azure Cosmos DB.
-
-For more information, review the [Change your database and query consistency levels](../cosmos-db/consistency-levels.md) article.
+Depending on the level of read consistency your scenario needs, weighed against read and write latency, you can choose a consistency level on your database account. You can improve throughput by scaling up Request Units (RUs) on the container. Also by default, Azure Cosmos DB enables synchronous indexing on each CRUD operation to your container. This option is another useful way to control write/read performance in Azure Cosmos DB. For more information, review the [Change your database and query consistency levels](../cosmos-db/consistency-levels.md) article.
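+
+For example, the following Azure CLI sketch (with placeholder names) scales up the provisioned throughput on a container that uses manual throughput:
+
+```azurecli
+# Increase the container's provisioned throughput to 20,000 RU/s
+az cosmosdb sql container throughput update \
+    --account-name <cosmos-account> \
+    --resource-group <resource-group> \
+    --database-name <database-name> \
+    --name <container-name> \
+    --throughput 20000
+```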
## Upserts from Stream Analytics
-Stream Analytics integration with Azure Cosmos DB allows you to insert or update records in your container based on a given **Document ID** column. This is also called an *upsert*.
-
-Stream Analytics uses an optimistic upsert approach. Updates happen only when an insert fails with a document ID conflict.
+Stream Analytics integration with Azure Cosmos DB allows you to insert or update records in your container based on a given **Document ID** column. This operation is also called an *upsert*. Stream Analytics uses an optimistic upsert approach. Updates happen only when an insert fails with a document ID conflict.
With compatibility level 1.0, Stream Analytics performs this update as a PATCH operation, so it enables partial updates to the document. Stream Analytics adds new properties or replaces an existing property incrementally. However, changes in the values of array properties in your JSON document result in overwriting the entire array. That is, the array isn't merged.
If the incoming JSON document has an existing ID field, that field is automatica
If you want to save *all* documents, including the ones that have a duplicate ID, rename the ID field in your query (by using the **AS** keyword). Let Azure Cosmos DB create the ID field or replace the ID with another column's value (by using the **AS** keyword or by using the **Document ID** setting). ## Data partitioning in Azure Cosmos DB
-Azure Cosmos DB automatically scales partitions based on your workload. So we recommend [unlimited](../cosmos-db/partitioning-overview.md) containers as the approach for partitioning your data. When Stream Analytics writes to unlimited containers, it uses as many parallel writers as the previous query step or input partitioning scheme.
+Azure Cosmos DB automatically scales partitions based on your workload. So we recommend that you use [unlimited](../cosmos-db/partitioning-overview.md) containers for partitioning your data. When Stream Analytics writes to unlimited containers, it uses as many parallel writers as the previous query step or input partitioning scheme.
> [!NOTE] > Azure Stream Analytics supports only unlimited containers with partition keys at the top level. For example, `/region` is supported. Nested partition keys (for example, `/region/name`) are not supported.
Depending on your choice of partition key, you might receive this _warning_:
`CosmosDB Output contains multiple rows and just one row per partition key. If the output latency is higher than expected, consider choosing a partition key that contains at least several hundred records per partition key.`
-It's important to choose a partition key property that has a number of distinct values, and that lets you distribute your workload evenly across these values. As a natural artifact of partitioning, requests that involve the same partition key are limited by the maximum throughput of a single partition.
+It's important to choose a partition key property that has many distinct values, and that lets you distribute your workload evenly across these values. As a natural artifact of partitioning, requests that involve the same partition key are limited by the maximum throughput of a single partition.
-The storage size for documents that belong to the same partition key value is limited to 20 GB (the [physical partition size limit](../cosmos-db/partitioning-overview.md) is 50 GB). An [ideal partition key](../cosmos-db/partitioning-overview.md#choose-partitionkey) is one that appears frequently as a filter in your queries and has sufficient cardinality to ensure that your solution is scalable.
+The storage size for documents that belong to the same partition key value is limited to 20 GB (the [physical partition size limit](../cosmos-db/partitioning-overview.md) is 50 GB). An [ideal partition key](../cosmos-db/partitioning-overview.md#choose-partitionkey) is the one that appears frequently as a filter in your queries and has sufficient cardinality to ensure that your solution is scalable.
-Partition keys used for Stream Analytics queries and Azure Cosmos DB don't need to be identical. Fully parallel topologies recommend using *Input Partition key*, `PartitionId`, as the Stream Analytics query's partition key but that may not be the recommended choice for an Azure Cosmos DB container's partition key.
+Partition keys used for Stream Analytics queries and Azure Cosmos DB don't need to be identical. Fully parallel topologies recommend using *Input Partition key*, `PartitionId`, as the Stream Analytics query's partition key but that might not be the recommended choice for an Azure Cosmos DB container's partition key.
A partition key is also the boundary for transactions in stored procedures and triggers for Azure Cosmos DB. You should choose the partition key so that documents that occur together in transactions share the same partition key value. The article [Partitioning in Azure Cosmos DB](../cosmos-db/partitioning-overview.md) gives more details on choosing a partition key.
With compatibility level 1.2, Stream Analytics supports native integration to bu
The improved writing mechanism is available under a new compatibility level because of a difference in upsert behavior. With levels before 1.2, the upsert behavior is to insert or merge the document. With 1.2, upsert behavior is modified to insert or replace the document.
-With levels before 1.2, Stream Analytics uses a custom stored procedure to bulk upsert documents per partition key into Azure Cosmos DB. There, a batch is written as a transaction. Even when a single record hits a transient error (throttling), the whole batch has to be retried. This makes scenarios with even reasonable throttling relatively slow.
+With levels before 1.2, Stream Analytics uses a custom stored procedure to bulk upsert documents per partition key into Azure Cosmos DB. There, a batch is written as a transaction. Even when a single record hits a transient error (throttling), the whole batch has to be retried. This behavior makes scenarios with even reasonable throttling relatively slow.
The following example shows two identical Stream Analytics jobs reading from the same Azure Event Hubs input. Both Stream Analytics jobs are [fully partitioned](./stream-analytics-parallelization.md#embarrassingly-parallel-jobs) with a passthrough query and write to identical Azure Cosmos DB containers. Metrics on the left are from the job configured with compatibility level 1.0. Metrics on the right are configured with 1.2. An Azure Cosmos DB container's partition key is a unique GUID that comes from the input event.
-![Comparison of Stream Analytics metrics](media/stream-analytics-documentdb-output/stream-analytics-documentdb-output-3.png)
-The incoming event rate in Event Hubs is two times higher than Azure Cosmos DB containers (20,000 RUs) are configured to take in, so throttling is expected in Azure Cosmos DB. However, the job with 1.2 is consistently writing at a higher throughput (output events per minute) and with a lower average SU% utilization. In your environment, this difference will depend on few more factors. These factors include choice of event format, input event/message size, partition keys, and query.
+The incoming event rate in Event Hubs is two times higher than Azure Cosmos DB containers (20,000 RUs) are configured to take in, so throttling is expected in Azure Cosmos DB. However, the job with 1.2 is consistently writing at a higher throughput (output events per minute) and with a lower average SU% utilization. In your environment, this difference depends on a few more factors. These factors include choice of event format, input event/message size, partition keys, and query.
-![Azure Cosmos DB metrics comparison](media/stream-analytics-documentdb-output/stream-analytics-documentdb-output-2.png)
-With 1.2, Stream Analytics is more intelligent in utilizing 100 percent of the available throughput in Azure Cosmos DB with very few resubmissions from throttling or rate limiting. This provides a better experience for other workloads like queries running on the container at the same time. If you want to see how Stream Analytics scales out with Azure Cosmos DB as a sink for 1,000 to 10,000 messages per second, try [this Azure sample project](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-cosmosdb).
+With 1.2, Stream Analytics is more intelligent in utilizing 100 percent of the available throughput in Azure Cosmos DB with few resubmissions from throttling or rate limiting. This behavior provides a better experience for other workloads like queries running on the container at the same time. If you want to see how Stream Analytics scales out with Azure Cosmos DB as a sink for 1,000 to 10,000 messages per second, try [this Azure sample project](https://github.com/Azure-Samples/streaming-at-scale/tree/main/eventhubs-streamanalytics-cosmosdb).
Throughput of Azure Cosmos DB output is identical with 1.0 and 1.1. We *strongly recommend* that you use compatibility level 1.2 in Stream Analytics with Azure Cosmos DB.
Throughput of Azure Cosmos DB output is identical with 1.0 and 1.1. We *strongly
Using Azure Cosmos DB as an output in Stream Analytics generates the following prompt for information.
-![Information fields for an Azure Cosmos DB output stream](media/stream-analytics-documentdb-output/stream-analytics-documentdb-output-1.png)
|Field | Description| |- | -|
Using Azure Cosmos DB as an output in Stream Analytics generates the following p
|Account key | The shared access key for the Azure Cosmos DB account.| |Database | The Azure Cosmos DB database name.| |Container name | The container name, such as `MyContainer`. One container named `MyContainer` must exist. |
-|Document ID | Optional. The column name in output events used as the unique key on which insert or update operations must be based. If you leave it empty, all events will be inserted, with no update option.|
+|Document ID | Optional. The column name in output events used as the unique key on which insert or update operations must be based. If you leave it empty, all events are inserted, with no update option.|
After you configure the Azure Cosmos DB output, you can use it in the query as the target of an [INTO statement](/stream-analytics-query/into-azure-stream-analytics). When you're using an Azure Cosmos DB output that way, [a partition key needs to be set explicitly](./stream-analytics-parallelization.md#partitions-in-inputs-and-outputs).
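If you prefer to script the output instead of configuring it in the portal, the following hedged PowerShell sketch creates an Azure Cosmos DB output from a JSON definition file. It assumes the Az.StreamAnalytics module; the account, database, container, document ID, and partition key values are placeholders, and the datasource shape follows the Stream Analytics REST API's DocumentDB output, so verify the property names against the current API before relying on them.

```azurepowershell
# Hedged sketch: define an Azure Cosmos DB output in JSON and attach it to a job.
# All names, keys, and the file path are placeholders to adapt.
$outputDefinition = @'
{
  "properties": {
    "datasource": {
      "type": "Microsoft.Storage/DocumentDB",
      "properties": {
        "accountId": "myCosmosDbAccount",
        "accountKey": "<account-key>",
        "database": "myDatabase",
        "collectionNamePattern": "MyContainer",
        "documentId": "eventId",
        "partitionKey": "deviceId"
      }
    }
  }
}
'@
Set-Content -Path '.\cosmosdb-output.json' -Value $outputDefinition

New-AzStreamAnalyticsOutput -ResourceGroupName 'myResourceGroup' -JobName 'myJob' `
    -Name 'cosmosOutput' -File '.\cosmosdb-output.json'
```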
If a transient failure, service unavailability, or throttling happens while Stre
1. A unique index constraint is added to the collection and the output data from Stream Analytics violates this constraint. Ensure the output data from Stream Analytics doesn't violate unique constraints or remove constraints. For more information, see [Unique key constraints in Azure Cosmos DB](../cosmos-db/unique-keys.md).
-2. The `PartitionKey` column does not exists.
+2. The `PartitionKey` column doesn't exist.
-3. The `Id` column does not exist.
+3. The `Id` column doesn't exist.
## Next steps
stream-analytics Stream Analytics With Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-with-azure-functions.md
Title: Tutorial - Run Azure Functions in Azure Stream Analytics jobs description: "In this tutorial, you learn how to configure Azure Functions as an output sink to Stream Analytics jobs."--++ Previously updated : 02/27/2023 Last updated : 03/29/2024 #Customer intent: As an IT admin/developer I want to run Azure Functions with Stream Analytics jobs. # Tutorial: Run Azure Functions from Azure Stream Analytics jobs
+In this tutorial, you create an Azure Stream Analytics job that reads events from Azure Event Hubs, runs a query on the event data, and then invokes an Azure function, which writes to an Azure Cache for Redis instance.
-You can run Azure Functions from Azure Stream Analytics by configuring Functions as one of the sinks (outputs) to the Stream Analytics job. Functions are an event-driven, compute-on-demand experience that lets you implement code that is triggered by events occurring in Azure or third-party services. This ability of Functions to respond to triggers makes it a natural output to Stream Analytics jobs.
-
-Stream Analytics invokes Functions through HTTP triggers. The Functions output adapter allows users to connect Functions to Stream Analytics, such that the events can be triggered based on Stream Analytics queries.
> [!NOTE]
-> Connection to Azure Functions inside a virtual network (VNet) from an Stream Analytics job that is running in a multi-tenant cluster is not supported.
+> - You can run Azure Functions from Azure Stream Analytics by configuring Functions as one of the sinks (outputs) to the Stream Analytics job. Functions are an event-driven, compute-on-demand experience that lets you implement code that is triggered by events occurring in Azure or third-party services. This ability of Functions to respond to triggers makes it a natural output to Stream Analytics jobs.
+> - Stream Analytics invokes Functions through HTTP triggers. The Functions output adapter allows users to connect Functions to Stream Analytics, such that the events can be triggered based on Stream Analytics queries.
+> - Connection to Azure Functions inside a virtual network (VNet) from a Stream Analytics job that is running in a multi-tenant cluster is not supported.
In this tutorial, you learn how to: > [!div class="checklist"]
-> * Create and run a Stream Analytics job
+> * Create an Azure Event Hubs instance
> * Create an Azure Cache for Redis instance > * Create an Azure Function
+> * Create a Stream Analytics job
+> * Configure event hub as input and function as output
+> * Run the Stream Analytics job
> * Check Azure Cache for Redis for results If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-## Configure a Stream Analytics job to run a function
-
-This section demonstrates how to configure a Stream Analytics job to run a function that writes data to Azure Cache for Redis. The Stream Analytics job reads events from Azure Event Hubs, and runs a query that invokes the function. This function reads data from the Stream Analytics job, and writes it to Azure Cache for Redis.
-
-![Diagram showing relationships among the Azure services](./media/stream-analytics-with-azure-functions/image1.png)
-- ## Prerequisites Before you start, make sure you've completed the following steps:
Before you start, make sure you've completed the following steps:
4. Open your Stream Analytics job, and update the query to the following. > [!IMPORTANT]
- > If you didn't name your output sink **saop1**, remember to change it in the query.
+ > The following sample script assumes that you used **CallStream** for input name and **saop1** for the output name. If you used different names, DON'T forget to update the query.
```sql SELECT
If a failure occurs while sending events to Azure Functions, Stream Analytics re
> [!NOTE] > The timeout for HTTP requests from Stream Analytics to Azure Functions is set to 100 seconds. If your Azure Functions app takes more than 100 seconds to process a batch, Stream Analytics errors out and retries the batch.
-Retrying for timeouts may result in duplicate events written to the output sink. When Stream Analytics retries for a failed batch, it retries for all the events in the batch. For example, consider a batch of 20 events that are sent to Azure Functions from Stream Analytics. Assume that Azure Functions takes 100 seconds to process the first 10 events in that batch. After 100 seconds, Stream Analytics suspends the request since it hasn't received a positive response from Azure Functions, and another request is sent for the same batch. The first 10 events in the batch are processed again by Azure Functions, which causes a duplicate.
+Retrying for timeouts might result in duplicate events written to the output sink. When Stream Analytics retries for a failed batch, it retries for all the events in the batch. For example, consider a batch of 20 events that are sent to Azure Functions from Stream Analytics. Assume that Azure Functions takes 100 seconds to process the first 10 events in that batch. After 100 seconds, Stream Analytics suspends the request since it hasn't received a positive response from Azure Functions, and another request is sent for the same batch. The first 10 events in the batch are processed again by Azure Functions, which causes a duplicate.
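Because a retried batch is resent in full, a common mitigation is to make the function idempotent: write each event under a deterministic key so that a retry overwrites the earlier write instead of adding a duplicate. The following is only a minimal sketch of that idea as an HTTP-triggered PowerShell function. It isn't the tutorial's function, and the event field names used to build the key (`dspl` and `time`) are assumptions to replace with fields from your own events.

```powershell
# run.ps1 - sketch of an idempotent HTTP-triggered PowerShell function.
# Illustrative only: it shows keying each write on a deterministic value so a
# retried batch overwrites earlier writes instead of duplicating them.
using namespace System.Net

param($Request, $TriggerMetadata)

# Stream Analytics posts a JSON array of events in the request body.
$records = $Request.Body

foreach ($record in $records) {
    # Assumption: each event carries fields that uniquely identify it.
    $key = "$($record.dspl):$($record.time)"

    # Replace this with your real sink (for example, a cache or a database).
    # The important part is that the same event always maps to the same key.
    Write-Host "Upserting event under key $key"
}

# Return 200 so Stream Analytics treats the batch as successfully delivered.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = "Processed $($records.Count) events"
})
```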
## Known issues
stream-analytics Write To Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/write-to-delta-lake.md
Title: Azure Stream Analytics - Writing to Delta Lake table (Public Preview)
+ Title: Azure Stream Analytics - Writing to Delta Lake table
description: This article describes how to write data to a delta lake table stored in Azure Data Lake Storage Gen2.
# Azure Stream Analytics - write to Delta Lake table
-Delta Lake is an open format that brings reliability, quality and performance to data lakes. Azure Stream Analytics allows you to directly write streaming data to your delta lake tables without writing a single line of code.
+Delta Lake is an open format that brings reliability, quality, and performance to data lakes. Azure Stream Analytics allows you to directly write streaming data to your delta lake tables without writing a single line of code.
-A stream analytics job can be configured to write through a native delta lake output connector, either to a new or a pre-created Delta table in an Azure Data Lake Storage Gen2 account. This connector is optimized for high-speed ingestion to delta tables in append mode and also provides exactly once semantics, which guarantees that no data is lost or duplicated. Ingesting real-time data streams from Azure Event Hubs into Delta tables allows you to perform ad-hoc interactive or batch analytics.
+A stream analytics job can be configured to write through a native delta lake output connector, either to a new or a precreated Delta table in an Azure Data Lake Storage Gen2 account. This connector is optimized for high-speed ingestion to delta tables in append mode and also provides exactly once semantics, which guarantees that no data is lost or duplicated. Ingesting real-time data streams from Azure Event Hubs into Delta tables allows you to perform ad-hoc interactive or batch analytics.
## Delta Lake configuration
To write data in Delta Lake, you need to connect to an Azure Data Lake Storage G
|Property Name |Description | |-|--|
-|Event Serialization Format|Serialization format for output data. JSON, CSV, AVRO, Parquet are supported. Delta Lake is listed as an option here. The data will be in Parquet format if Delta Lake is selected. |
-|Delta path name| The path that is used to write your delta lake table within the specified container. It includes the table name. More details in the section below |
+|Event Serialization Format|Serialization format for output data. JSON, CSV, AVRO, Parquet are supported. Delta Lake is listed as an option here. The data is in Parquet format if Delta Lake is selected. |
+|Delta path name| The path that is used to write your delta lake table within the specified container. It includes the table name. More details in the next section. |
|Partition Column |Optional. The {field} name from your output data to partition. Only one partition column is supported. The column's value must be of string type. | To see the full list of ADLS Gen2 configuration options, see [ADLS Gen2 Overview](blob-storage-azure-data-lake-gen2-output.md).
The Delta Path Name is used to specify the location and name of your Delta Lake
You can choose to use one or more path segments to define the path to the delta table and the delta table name. A path segment is the string between consecutive delimiter characters (for example, the forward slash `/`) that corresponds to the name of a virtual directory.
-The segment name is alphanumeric and can include spaces, hyphens, and underscores. The last path segment will be used as the table name.
+The segment name is alphanumeric and can include spaces, hyphens, and underscores. The last path segment is used as the table name.
Restrictions on Delta Path name include the following ones: -- Field names aren't case-sensitive. For example, the service can't differentiate between column "ID" and "id".-- No dynamic {field} name is allowed. For example, {ID} will be treated as text {ID}.
+- Field names aren't case-sensitive. For example, the service can't differentiate between column `ID` and `id`.
+- No dynamic {field} name is allowed. For example, {ID} is treated as text {ID}.
- The number of path segments comprising the name can't exceed 254. ### Examples
Examples for Delta path name:
Example output files:
-1. Under the chosen container, directory path would be `WestEurope/CA/factory1`, delta table folder name would be **device-table**.
-2. Under the chosen container, directory path would be `Test`, delta table folder name would be **demo**.
+1. Under the chosen container, directory path would be `WestEurope/CA/factory1` and delta table folder name would be **device-table**.
+2. Under the chosen container, directory path would be `Test` and delta table folder name would be **demo**.
3. Under the chosen container, delta table folder name would be **mytable**. ## Creating a new table
-If there is not already a Delta Lake table with the same name and in the location specified by the Delta Path name, by default, Azure Stream Analaytics will create a new Delta Table. This new table will be created with the following configuration:
+If there isn't already a Delta Lake table with the same name and in the location specified by the Delta Path name, by default, Azure Stream Analytics creates a new Delta Table. This new table is created with the following configuration:
- [Writer Version 2 ](https://github.com/delta-io/delt#writer-version-requirements) - [Reader Version 1](https://github.com/delta-io/delt#reader-version-requirements)-- The table will be [Append-Only](https://github.com/delta-io/delt#append-only-tables)-- The table schema will be created with the schema of the first record encountered.
+- The table is [Append-Only](https://github.com/delta-io/delt#append-only-tables)
+- The table schema is created with the schema of the first record encountered.
## Writing to the table
The transaction log enables Delta Lake to guarantee exactly once processing. Azu
Schema enforcement means that all new writes to a table are enforced to be compatible with the target table's schema at write time, to ensure data quality.
-All records of output data are projected to the schema of the existing table. If the output is being written to a new delta table, the table schema will be created with the first record. If the incoming data has one extra column compared to the existing table schema, it will be written in the table without the extra column. If the incoming data is missing one column compared to the existing table schema, it will be written in the table with the column being null.
+All records of output data are projected to the schema of the existing table. If the output is being written to a new delta table, the table schema is created with the first record. If the incoming data has one extra column compared to the existing table schema, it's written in the table without the extra column. If the incoming data is missing one column compared to the existing table schema, it's written in the table with the column being null.
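As a local illustration of that projection rule (this is not the connector's code), the following PowerShell sketch projects an incoming record onto a fixed table schema: a column that isn't in the table schema is dropped, and a column missing from the record is written as null. The column and record names are made up.

```powershell
# Illustration only: mimic the projection behavior described above.
$tableColumns = @('deviceId', 'temperature', 'eventTime')                       # existing table schema
$incoming     = @{ deviceId = 'dev-01'; temperature = 21.5; extra = 'dropped' } # record from the job

$projected = [ordered]@{}
foreach ($column in $tableColumns) {
    # Columns missing from the record become null; extra record fields are ignored.
    $projected[$column] = if ($incoming.ContainsKey($column)) { $incoming[$column] } else { $null }
}
$projected
```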
-If there is no intersection between the schema of the delta table and the schema of a record of the streaming job, this will be considered an instance of schema conversion failure. Please note that this is not the only case that would be considered schema conversion failure.
+If there's no intersection between the schema of the delta table and the schema of a record of the streaming job, it's considered an instance of schema conversion failure. It isn't the only case that would be considered schema conversion failure.
-At the failure of schema conversion, the job behavior will follow the [output data error handing policy](stream-analytics-output-error-policy.md) configured at the job level.
+If schema conversion fails, the job behavior follows the [output data error handling policy](stream-analytics-output-error-policy.md) configured at the job level.
### Delta Log checkpoints
-The Stream Analytics job will create [Delta Log checkpoints](https://github.com/delta-io/delt#checkpoints-1) periodically in the V1 format. Delta Log checkpoints are snapshots of the Delta Table and will typically contain the name of the data file generated by the Stream Analytics job. If the amount of data files is large, then this will lead to large checkpoints which can cause memory issues in the Stream Analytics Job.
+The Stream Analytics job creates [Delta Log checkpoints](https://github.com/delta-io/delt#checkpoints-1) periodically in the V1 format. Delta Log checkpoints are snapshots of the Delta Table and typically contain the name of the data file generated by the Stream Analytics job. If the number of data files is large, then it leads to large checkpoints, which can cause memory issues in the Stream Analytics Job.
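If you suspect checkpoint growth, you can inspect the table's `_delta_log` folder directly in the storage account. The following is a minimal sketch that assumes the Az.Storage module; the storage account name, container (file system), and table path are placeholders based on the `Test/demo` example earlier in this article.

```azurepowershell
# Minimal sketch: list the Delta transaction log, including checkpoint files,
# for a table the job writes to. Names and paths are placeholders.
$ctx = New-AzStorageContext -StorageAccountName 'mydatalake' -UseConnectedAccount
Get-AzDataLakeGen2ChildItem -Context $ctx -FileSystem 'mycontainer' -Path 'Test/demo/_delta_log' |
    Select-Object Path, Length
```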
## Limitations - Dynamic partition key (specifying the name of a column of the record schema in the Delta Path) isn't supported.-- Multiple partition columns are not supported. If multiple partition columns are desired, the recommendation is to use a composite key in the query and then specify it as the partition column.
+- Multiple partition columns aren't supported. If multiple partition columns are desired, the recommendation is to use a composite key in the query and then specify it as the partition column.
- A composite key can be created in the query for example: "SELECT concat (col1, col2) AS compositeColumn INTO [blobOutput] FROM [input]". - Writing to Delta Lake is append only. - Schema checking in query testing isn't available.-- Small file compaction is not performed by Stream Analytics.-- All data files will be created without compression.-- The [Date and Decimal types](https://github.com/delta-io/delt#valid-feature-names-in-table-features) are not supported.-- Writing to existing tables of Writer Version 7 or above with writer features will fail.
- - Example: Writing to existing tables with [Deletion Vectors](https://github.com/delta-io/delt#deletion-vectors) enabled will fail.
+- Small file compaction isn't performed by Stream Analytics.
+- All data files are created without compression.
+- The [Date and Decimal types](https://github.com/delta-io/delt#valid-feature-names-in-table-features) aren't supported.
+- Writing to existing tables of Writer Version 7 or above with writer features fails.
+  - Example: Writing to existing tables with [Deletion Vectors](https://github.com/delta-io/delt#deletion-vectors) enabled fails.
- The exceptions here are the [changeDataFeed and appendOnly Writer Features](https://github.com/delta-io/delt#valid-feature-names-in-table-features). - When a Stream Analytics job writes a batch of data to a Delta Lake, it can generate multiple [Add File Action](https://github.com/delta-io/delt#add-file-and-remove-file). When there are too many Add File Actions generated for a single batch, a Stream Analytics Job can be stuck.
- - The number of Add File Actions generated are determined by a number of factors:
- - Size of the batch. This is determined by the data volume and the batching parameters [Minimum Rows and Maximum Time](https://learn.microsoft.com/azure/stream-analytics/blob-storage-azure-data-lake-gen2-output#output-configuration)
- - Cardinality of the [Partition Column values](https://learn.microsoft.com/azure/stream-analytics/write-to-delta-lake#delta-lake-configuration) of the batch.
+  - The number of Add File Actions generated is determined by many factors:
+ - Size of the batch. It's determined by the data volume and the batching parameters [Minimum Rows and Maximum Time](blob-storage-azure-data-lake-gen2-output.md#output-configuration)
+ - Cardinality of the [Partition Column values](#delta-lake-configuration) of the batch.
- To reduce the number of Add File Actions generated for a batch the following steps can be taken:
- - Reduce the batching configurations [Minimum Rows and Maximum Time](https://learn.microsoft.com/azure/stream-analytics/blob-storage-azure-data-lake-gen2-output#output-configuration)
- - Reduce the cardinality of the [Partition Column values](https://learn.microsoft.com/azure/stream-analytics/write-to-delta-lake#delta-lake-configuration) by tweaking the input data or choosing a different partition column
-- Stream Analytics jobs can only read and write single part V1 Checkpoints. Multi-part checkpoints and the Checkpoint V2 format are not supported.
+ - Reduce the batching configurations [Minimum Rows and Maximum Time](blob-storage-azure-data-lake-gen2-output.md#output-configuration)
+ - Reduce the cardinality of the [Partition Column values](#delta-lake-configuration) by tweaking the input data or choosing a different partition column
+- Stream Analytics jobs can only read and write single part V1 Checkpoints. Multi-part checkpoints and the Checkpoint V2 format aren't supported.
## Next steps
-* [Create a Stream Analytics job writing to Delta Lake Table in ADLS Gen2](write-to-delta-lake.md)
+* [Capture data from Event Hubs in Delta Lake format](capture-event-hub-data-delta-lake.md)
synapse-analytics Setup Environment Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/setup-environment-cognitive-services.md
To get started on Azure Kubernetes Service, follow these steps:
1. [Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](../../aks/learn/quick-kubernetes-deploy-portal.md)
-1. [Install the Apache Spark 2.4.0 helm chart](https://hub.helm.sh/charts/microsoft/spark) - warning: [Spark 2.4](../spark/apache-spark-24-runtime.md) is retired and out of the support.
+1. [Install the Apache Spark 2.4.0 helm chart](https://artifacthub.io/packages/helm/microsoft/spark) - warning: [Spark 2.4](../spark/apache-spark-24-runtime.md) is retired and out of support.
1. [Install an Azure AI container using Helm](../../ai-services/computer-vision/deploy-computer-vision-on-premises.md)
virtual-desktop Deploy Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/deploy-diagnostics.md
Title: Deploy the diagnostics tool for Azure Virtual Desktop (classic) - Azure
description: How to deploy the diagnostics UX tool for Azure Virtual Desktop (classic). -+ Last updated 12/15/2020
virtual-desktop Manage Resources Using Ui Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/virtual-desktop-fall-2019/manage-resources-using-ui-powershell.md
Last updated 03/30/2020 -+
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
The following JSON shows the schema for the Application Health extension. The ex
```json {
- "type": "extensions",
- "name": "HealthExtension",
- "apiVersion": "2018-10-01",
- "location": "<location>",
- "properties": {
- "publisher": "Microsoft.ManagedServices",
- "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
- "autoUpgradeMinorVersion": true,
- "typeHandlerVersion": "1.0",
- "settings": {
- "protocol": "<protocol>",
- "port": <port>,
- "requestPath": "</requestPath>",
- "intervalInSeconds": 5,
- "numberOfProbes": 1
- }
+  "extensionProfile" : {
+    "extensions" : [
+      {
+        "name": "HealthExtension",
+        "properties": {
+          "publisher": "Microsoft.ManagedServices",
+          "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
+          "autoUpgradeMinorVersion": true,
+          "typeHandlerVersion": "1.0",
+          "settings": {
+            "protocol": "<protocol>",
+            "port": <port>,
+            "requestPath": "</requestPath>",
+            "intervalInSeconds": 5,
+            "numberOfProbes": 1
+          }
+        }
+      }
+    ]
}
-}
+}
``` ### Property values
The following JSON shows the schema for the Rich Health States extension. The ex
```json {
- "type": "extensions",
- "name": "HealthExtension",
- "apiVersion": "2018-10-01",
- "location": "<location>",
- "properties": {
- "publisher": "Microsoft.ManagedServices",
- "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
- "autoUpgradeMinorVersion": true,
- "typeHandlerVersion": "2.0",
- "settings": {
- "protocol": "<protocol>",
- "port": <port>,
- "requestPath": "</requestPath>",
- "intervalInSeconds": 5,
- "numberOfProbes": 1,
- "gracePeriod": 600
- }
+  "extensionProfile" : {
+    "extensions" : [
+      {
+        "name": "HealthExtension",
+        "properties": {
+          "publisher": "Microsoft.ManagedServices",
+          "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
+          "autoUpgradeMinorVersion": true,
+          "typeHandlerVersion": "2.0",
+          "settings": {
+            "protocol": "<protocol>",
+            "port": <port>,
+            "requestPath": "</requestPath>",
+            "intervalInSeconds": 5,
+            "numberOfProbes": 1,
+            "gracePeriod": 600
+          }
+        }
+      }
+    ]
}
-}
+}
``` ### Property values
virtual-machine-scale-sets Virtual Machine Scale Sets Upgrade Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md
Some properties may be changed, with exceptions depending on the current value.
- imageReferenceOffer - Availability Zones (Preview)
-#### Examples
+#### Example 1
To update your scale set to use a different OS version, you need to set all the updated properties in a single call. In this example, we are changing from Ubuntu Server 20.04 to 22.04. ```azurecli
az vmss update \
--set virtualMachineProfile.storageProfile.imageReference.version=latest ```
+#### Example 2
+To update your scale set to use a different OS version, you need to set all the updated properties in a single call. In this example, we are changing from Windows Server 2016 to Windows Server 2019.
+
+```powershell
+$VMSS = Get-AzVmss -ResourceGroupName "myResourceGroup" -VMScaleSetName "myScaleSet"
+
+Set-AzVmssStorageProfile $VMSS `
+ -OsDiskCreateOption "FromImage" `
+ -ImageReferencePublisher "MicrosoftWindowsServer" `
+ -ImageReferenceOffer "WindowsServer" `
+ -ImageReferenceSku "2019-datacenter" `
+ -ImageReferenceVersion "latest"
+
+Update-AzVmss -ResourceGroupName "myResourceGroup" -Name "myScaleSet" -VirtualMachineScaleSet $VMSS
+```
++ ### Properties that require deallocation to change Some properties may only be changed to certain values if the VMs in the scale set are deallocated. These properties include:
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
A regional Virtual Machine Scale Set is when the zone assignment isn't explicitl
In the rare case of a full zonal outage, any or all instances within the scale set may be impacted. ### Fault domains and availability zones
-A fault domain a fault isolation group within an availability zone or datacenter of hardware nodes that share the same power, networking, cooling, and platform maintenance schedule. VM instances that are on different fault domains are not likely to be impacted by the same planned or unplanned outage. You can specify how instances are spread across fault domains within a region or zone.
+A fault domain is a fault isolation group within an availability zone or datacenter of hardware nodes that share the same power, networking, cooling, and platform maintenance schedule. VM instances that are on different fault domains are not likely to be impacted by the same planned or unplanned outage. You can specify how instances are spread across fault domains within a region or zone.
- Max spreading (platformFaultDomainCount = 1) - Static fixed spreading (platformFaultDomainCount = 5)
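To check which spreading option an existing scale set uses, you can read the fault domain count back with PowerShell. The following minimal sketch assumes the Az.Compute module; the resource names are placeholders, and the property name is the one exposed by the module's scale set object.

```azurepowershell
# Minimal sketch: check how an existing scale set spreads instances across fault domains.
$scaleSet = Get-AzVmss -ResourceGroupName 'myResourceGroup' -VMScaleSetName 'myScaleSet'

# 1 = max spreading, 5 = static fixed spreading (property name is an assumption to verify).
$scaleSet.PlatformFaultDomainCount
```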
virtual-machines Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/health-extension.md
The following JSON shows the schema for the Application Health extension. The ex
```json {
- "type": "extensions",
- "name": "HealthExtension",
- "apiVersion": "2018-10-01",
- "location": "<location>",
- "properties": {
- "publisher": "Microsoft.ManagedServices",
- "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
- "autoUpgradeMinorVersion": true,
- "typeHandlerVersion": "1.0",
- "settings": {
- "protocol": "<protocol>",
- "port": <port>,
- "requestPath": "</requestPath>",
- "intervalInSeconds": 5,
- "numberOfProbes": 1
- }
+  "extensionProfile" : {
+    "extensions" : [
+      {
+        "name": "HealthExtension",
+        "properties": {
+          "publisher": "Microsoft.ManagedServices",
+          "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
+          "autoUpgradeMinorVersion": true,
+          "typeHandlerVersion": "1.0",
+          "settings": {
+            "protocol": "<protocol>",
+            "port": <port>,
+            "requestPath": "</requestPath>",
+            "intervalInSeconds": 5,
+            "numberOfProbes": 1
+          }
+        }
+      }
+    ]
}
-}
+}
``` ### Property values
The following JSON shows the schema for the Rich Health States extension. The ex
```json {
- "type": "extensions",
- "name": "HealthExtension",
- "apiVersion": "2018-10-01",
- "location": "<location>",
- "properties": {
- "publisher": "Microsoft.ManagedServices",
- "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
- "autoUpgradeMinorVersion": true,
- "typeHandlerVersion": "2.0",
- "settings": {
- "protocol": "<protocol>",
- "port": <port>,
- "requestPath": "</requestPath>",
- "intervalInSeconds": 5,
- "numberOfProbes": 1,
- "gracePeriod": 600
- }
+  "extensionProfile" : {
+    "extensions" : [
+      {
+        "name": "HealthExtension",
+        "properties": {
+          "publisher": "Microsoft.ManagedServices",
+          "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
+          "autoUpgradeMinorVersion": true,
+          "typeHandlerVersion": "2.0",
+          "settings": {
+            "protocol": "<protocol>",
+            "port": <port>,
+            "requestPath": "</requestPath>",
+            "intervalInSeconds": 5,
+            "numberOfProbes": 1,
+            "gracePeriod": 600
+          }
+        }
+      }
+    ]
}
-}
+}
``` ### Property values
virtual-machines Network Watcher Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-windows.md
Title: Network Watcher Agent VM extension - Windows
+ Title: Manage Network Watcher Agent VM extension - Windows
description: Learn about the Network Watcher Agent virtual machine extension on Windows virtual machines and how to deploy it. - Previously updated : 03/25/2024+ Last updated : 03/29/2024 +
+#CustomerIntent: As an Azure administrator, I want to learn about Network Watcher Agent VM extension so that I can use Network watcher features to diagnose and monitor my virtual machines (VMs).
-# Network Watcher Agent virtual machine extension for Windows
+# Manage Network Watcher Agent virtual machine extension for Windows
-[Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) is a network performance monitoring, diagnostic, and analytics service that allows monitoring for Azure networks. The Network Watcher Agent virtual machine extension is a requirement for some of the Network Watcher features on Azure virtual machines (VMs), such as capturing network traffic on demand, and other advanced functionality.
+[Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) is a network performance monitoring, diagnostic, and analytics service that allows monitoring for Azure networks. The Network Watcher Agent virtual machine extension is a requirement for some of the Network Watcher features on Azure virtual machines (VMs). For more information, see [Network Watcher Agent FAQ](../../network-watcher/frequently-asked-questions.yml#network-watcher-agent).
-This article details the supported platforms and deployment options for the Network Watcher Agent VM extension for Windows. Installation of the agent doesn't disrupt, or require a reboot of the virtual machine. You can install the extension on virtual machines that you deploy. If the virtual machine is deployed by an Azure service, check the documentation for the service to determine whether or not it permits installing extensions in the virtual machine.
+In this article, you learn about the supported platforms and deployment options for the Network Watcher Agent VM extension for Windows. Installing the agent doesn't disrupt the virtual machine or require a reboot. You can install the extension on virtual machines that you deploy. If the virtual machine is deployed by an Azure service, check the documentation for that service to determine whether it permits installing extensions in the virtual machine.
## Prerequisites
-### Operating system
+# [**Portal**](#tab/portal)
+
+- An Azure Windows virtual machine (VM). For more information, see [Supported Windows versions](#supported-operating-systems).
+
+- Internet connectivity: some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. For example, without the ability to establish outgoing connections, the Network Watcher Agent can't upload packet captures to your storage account. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+
+# [**PowerShell**](#tab/powershell)
+
+- An Azure Windows virtual machine (VM). For more information, see [Supported Windows versions](#supported-operating-systems).
+
+- Internet connectivity: some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. For example, without the ability to establish outgoing connections, the Network Watcher Agent can't upload packet captures to your storage account. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+
+- Azure Cloud Shell or Azure PowerShell.
+
+ The steps in this article run the Azure PowerShell cmdlets interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code and then paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+ You can also [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. This article requires the Azure PowerShell `Az` module. To find the installed version, run `Get-Module -ListAvailable Az`. If you run PowerShell locally, sign in to Azure using the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet.
+
+# [**Azure CLI**](#tab/cli)
+
+- An Azure Windows virtual machine (VM). For more information, see [Supported Windows versions](#supported-operating-systems).
-The Network Watcher Agent extension for Windows can be configured for Windows Server 2012, 2012 R2, 2016, 2019 and 2022 releases. Currently, Nano Server isn't supported.
+- Internet connectivity: some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. For example, without the ability to establish outgoing connections, the Network Watcher Agent can't upload packet captures to your storage account. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+
+- Azure Cloud Shell or Azure CLI.
+
+ The steps in this article run the Azure CLI commands interactively in [Azure Cloud Shell](/azure/cloud-shell/overview). To run the commands in the Cloud Shell, select **Open Cloud Shell** at the upper-right corner of a code block. Select **Copy** to copy the code, and paste it into Cloud Shell to run it. You can also run the Cloud Shell from within the Azure portal.
+
+ You can also [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. If you run Azure CLI locally, sign in to Azure using the [az login](/cli/azure/reference-index#az-login) command.
+
+# [**Resource Manager**](#tab/arm)
+
+- An Azure Windows virtual machine (VM). For more information, see [Supported Windows versions](#supported-operating-systems).
+
+- Internet connectivity: some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. For example, without the ability to establish outgoing connections, the Network Watcher Agent can't upload packet captures to your storage account. For more information, see [Packet capture overview](../../network-watcher/packet-capture-overview.md).
+
+- Azure PowerShell or Azure CLI installed locally to deploy the template.
+
+ - You can [install Azure PowerShell locally](/powershell/azure/install-azure-powershell) to run the cmdlets. Use [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) cmdlet to sign in to Azure.
+
+ - You can [install Azure CLI locally](/cli/azure/install-azure-cli) to run the commands. Use [az login](/cli/azure/reference-index#az-login) command to sign in to Azure.
++
-### Internet connectivity
+## Supported operating systems
-Some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. Without the ability to establish outgoing connections, the Network Watcher Agent can't upload packet captures to your storage account. For more information, please see the [Network Watcher documentation](../../network-watcher/index.yml).
+The Network Watcher Agent extension for Windows can be installed on Windows Server 2012, 2012 R2, 2016, 2019, and 2022 releases. Currently, Nano Server isn't supported.
## Extension schema The following JSON shows the schema for the Network Watcher Agent extension. The extension doesn't require, or support, any user-supplied settings, and relies on its default configuration. - ```json {
- "type": "Microsoft.Compute/virtualMachines/extensions",
- "apiVersion": "[variables('apiVersion')]",
"name": "[concat(parameters('vmName'), '/AzureNetworkWatcherExtension')]",
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2023-03-01",
"location": "[resourceGroup().location]", "dependsOn": [
- "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
+ "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
], "properties": { "autoUpgradeMinorVersion": true,
The following JSON shows the schema for the Network Watcher Agent extension. The
"typeHandlerVersion": "1.4" } }
+```
+## List installed extensions
+
+# [**Portal**](#tab/portal)
+
+From the virtual machine page in the Azure portal, you can view the installed extension by following these steps:
+
+1. Under **Settings**, select **Extensions + applications**.
+
+1. In the **Extensions** tab, you can see all installed extensions on the virtual machine. If the list is long, you can use the search box to filter the list.
+
+ :::image type="content" source="./media/network-watcher/list-vm-extensions.png" alt-text="Screenshot that shows how to view installed extensions on a VM in the Azure portal." lightbox="./media/network-watcher/list-vm-extensions.png":::
+
+# [**PowerShell**](#tab/powershell)
+
+Use [Get-AzVMExtension](/powershell/module/az.compute/get-azvmextension) cmdlet to list all installed extensions on the virtual machine:
+```azurepowershell-interactive
+# List the installed extensions on the virtual machine.
+Get-AzVMExtension -VMName 'myVM' -ResourceGroupName 'myResourceGroup' | format-table Name, Publisher, ExtensionType, EnableAutomaticUpgrade
```
-### Property values
+The output of the cmdlet lists the installed extensions:
-| Name | Value / Example |
-| - | - |
-| apiVersion | 2023-03-01 |
-| publisher | Microsoft.Azure.NetworkWatcher |
-| type | NetworkWatcherAgentWindows |
-| typeHandlerVersion | 1.4 |
+```output
+Name                         Publisher                      ExtensionType              EnableAutomaticUpgrade
+----                         ---------                      -------------              ----------------------
+AzureNetworkWatcherExtension Microsoft.Azure.NetworkWatcher NetworkWatcherAgentWindows True
+AzurePolicyforWindows        Microsoft.GuestConfiguration   ConfigurationforWindows    True
+```
-## Template deployment
+# [**Azure CLI**](#tab/cli)
-You can deploy Azure VM extensions with an Azure Resource Manager template (ARM template) using the previous JSON [schema](#extension-schema).
+Use [az vm extension list](/cli/azure/vm/extension#az-vm-extension-list) command to list all installed extensions on the virtual machine:
-## PowerShell deployment
+```azurecli
+# List the installed extensions on the virtual machine.
+az vm extension list --resource-group 'myResourceGroup' --vm-name 'myVM' --out table
+```
-Use the `Set-AzVMExtension` command to deploy the Network Watcher Agent virtual machine extension to an existing virtual machine:
+The output of the command lists the installed extensions:
-```powershell
-Set-AzVMExtension `
- -ResourceGroupName "myResourceGroup" `
- -Location "WestUS" `
- -VMName "myVM" `
- -Name "networkWatcherAgent" `
- -Publisher "Microsoft.Azure.NetworkWatcher" `
- -Type "NetworkWatcherAgentWindows" `
- -TypeHandlerVersion "1.4"
+```output
+Name                          ProvisioningState    Publisher                       Version    AutoUpgradeMinorVersion
+----------------------------  -------------------  ------------------------------  ---------  -------------------------
+AzureNetworkWatcherExtension  Succeeded            Microsoft.Azure.NetworkWatcher  1.4        True
+AzurePolicyforWindows         Succeeded            Microsoft.GuestConfiguration    1.1        True
```
-## Troubleshooting
+# [**Resource Manager**](#tab/arm)
+
+N/A
+++
+## Install Network Watcher Agent VM extension
+
+# [**Portal**](#tab/portal)
+
+From the virtual machine page in the Azure portal, you can install the Network Watcher Agent VM extension by following these steps:
+
+1. Under **Settings**, select **Extensions + applications**.
+
+1. Select **+ Add** and search for **Network Watcher Agent** and install it. If the extension is already installed, you can see it in the list of extensions.
+
+ :::image type="content" source="./media/network-watcher/vm-extensions.png" alt-text="Screenshot that shows the VM's extensions page in the Azure portal." lightbox="./media/network-watcher/vm-extensions.png":::
-You can retrieve data about the state of extension deployments from the Azure portal and PowerShell. To see the deployment state of extensions for a given VM, run the following command using the Azure PowerShell module:
+1. In the search box of **Install an Extension**, enter *Network Watcher Agent for Windows*. Select the extension from the list and select **Next**.
-```powershell
-Get-AzVMExtension -ResourceGroupName myResourceGroup1 -VMName myVM1 -Name networkWatcherAgent
+ :::image type="content" source="./media/network-watcher/install-extension-windows.png" alt-text="Screenshot that shows how to install Network Watcher Agent for Windows in the Azure portal." lightbox="./media/network-watcher/install-extension-windows.png":::
+
+1. Select **Review + create** and then select **Create**.
+
+# [**PowerShell**](#tab/powershell)
+
+Use [Set-AzVMExtension](/powershell/module/az.compute/set-azvmextension) cmdlet to install Network Watcher Agent VM extension on the virtual machine:
+
+```azurepowershell-interactive
+# Install Network Watcher Agent for Windows on the virtual machine.
+Set-AzVMExtension -Name 'AzureNetworkWatcherExtension' -Publisher 'Microsoft.Azure.NetworkWatcher' -ExtensionType 'NetworkWatcherAgentWindows' -EnableAutomaticUpgrade 1 -TypeHandlerVersion '1.4' -ResourceGroupName 'myResourceGroup' -VMName 'myVM'
+```
+
+Once the installation is successfully completed, you see the following output:
+
+```output
+RequestId IsSuccessStatusCode StatusCode ReasonPhrase
+--------- ------------------- ---------- ------------
+                         True         OK
+```
+
+# [**Azure CLI**](#tab/cli)
+
+Use [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) command to install Network Watcher Agent VM extension on the virtual machine:
+
+```azurecli
+# Install Network Watcher Agent for Windows on the virtual machine.
+az vm extension set --name 'NetworkWatcherAgentWindows' --extension-instance-name 'AzureNetworkWatcherExtension' --publisher 'Microsoft.Azure.NetworkWatcher' --enable-auto-upgrade 'true' --version '1.4' --resource-group 'myResourceGroup' --vm-name 'myVM'
```
-Extension execution output is logged to files found in the following directory:
+# [**Resource Manager**](#tab/arm)
-```cmd
-C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.NetworkWatcher.NetworkWatcherAgentWindows\
+Use the following Azure Resource Manager template (ARM template) to install Network Watcher Agent VM extension on a Windows virtual machine:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "vmName": {
+ "type": "string"
+ }
+ },
+ "variables": {},
+ "resources": [
+ {
+ "name": "[parameters('vmName')]",
+ "type": "Microsoft.Compute/virtualMachines",
+ "apiVersion": "2023-03-01",
+ "location": "[resourceGroup().location]",
+ "properties": {
+ }
+ },
+ {
+ "name": "[concat(parameters('vmName'), '/AzureNetworkWatcherExtension')]",
+ "type": "Microsoft.Compute/virtualMachines/extensions",
+ "apiVersion": "2023-03-01",
+ "location": "[resourceGroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.Compute/virtualMachines/', parameters('vmName'))]"
+ ],
+ "properties": {
+ "autoUpgradeMinorVersion": true,
+ "publisher": "Microsoft.Azure.NetworkWatcher",
+ "type": "NetworkWatcherAgentWindows",
+ "typeHandlerVersion": "1.4"
+ }
+ }
+ ],
+ "outputs": {}
+}
```
+You can use either Azure PowerShell or Azure CLI to deploy the Resource Manager template:
+
+```azurepowershell
+# Deploy the JSON template file using Azure PowerShell.
+New-AzResourceGroupDeployment -ResourceGroupName 'myResourceGroup' -TemplateFile 'agent.json'
+```
+
+```azurecli
+# Deploy the JSON template file using the Azure CLI.
+az deployment group create --resource-group 'myResourceGroup' --template-file 'agent.json'
+```
+++
+## Uninstall Network Watcher Agent VM extension
+
+# [**Portal**](#tab/portal)
+
+From the virtual machine page in the Azure portal, you can uninstall the Network Watcher Agent VM extension by following these steps:
+
+1. Under **Settings**, select **Extensions + applications**.
+
+1. Select **AzureNetworkWatcherExtension** from the list of extensions, and then select **Uninstall**.
+
+ :::image type="content" source="./media/network-watcher/uninstall-extension-windows.png" alt-text="Screenshot that shows how to uninstall Network Watcher Agent for Windows in the Azure portal." lightbox="./media/network-watcher/uninstall-extension-windows.png":::
+
+ > [!NOTE]
+ > In the list of extensions, you might see Network Watcher Agent VM extension named differently than **AzureNetworkWatcherExtension**.
+
+# [**PowerShell**](#tab/powershell)
+
+Use [Remove-AzVMExtension](/powershell/module/az.compute/remove-azvmextension) cmdlet to remove Network Watcher Agent VM extension from the virtual machine:
+
+```azurepowershell-interactive
+# Uninstall Network Watcher Agent VM extension.
+Remove-AzVMExtension -Name 'AzureNetworkWatcherExtension' -ResourceGroupName 'myResourceGroup' -VMName 'myVM'
+```
+
+# [**Azure CLI**](#tab/cli)
+
+Use [az vm extension delete](/cli/azure/vm/extension#az-vm-extension-delete) command to remove Network Watcher Agent VM extension from the virtual machine:
+
+```azurecli-interactive
+# Uninstall Network Watcher Agent VM extension.
+az vm extension delete --name 'AzureNetworkWatcherExtension' --resource-group 'myResourceGroup' --vm-name 'myVM'
+```
+
+# [**Resource Manager**](#tab/arm)
+
+N/A
++++ ## Related content
+- [Update Azure Network Watcher extension to the latest version](network-watcher-update.md).
- [Network Watcher documentation](../../network-watcher/index.yml). - [Microsoft Q&A - Network Watcher](/answers/topics/azure-network-watcher.html).
virtual-machines Monitor Vm Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/monitor-vm-reference.md
Title: 'Reference: Monitoring Azure virtual machine data'
-description: This article covers important reference material for monitoring Azure virtual machines.
-- Previously updated : 12/03/2022
+ Title: Monitoring data reference for Azure Virtual Machines
+description: This article contains important reference material you need when you monitor Azure Virtual Machines.
Last updated : 03/27/2024+ +
-# Reference: Monitoring Azure virtual machine data
-
-For more information about collecting and analyzing monitoring data for Azure virtual machines (VMs), see [Monitoring Azure virtual machines](monitor-vm.md).
-
-## Metrics
+# Azure Virtual Machines monitoring data reference
-This section lists the platform metrics that are collected for Azure virtual machines and Virtual Machine Scale Sets.
-| Metric type | Resource provider / type namespace<br/> and link to individual metrics |
-|-|--|
-| Virtual machines | [Microsoft.Compute/virtualMachines](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachines) |
-| Virtual Machine Scale Sets | [Microsoft.Compute/virtualMachineScaleSets](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesets)|
-| Virtual Machine Scale Sets and virtual machines | [Microsoft.Compute/virtualMachineScaleSets/virtualMachines](../azure-monitor/essentials/metrics-supported.md#microsoftcomputevirtualmachinescalesetsvirtualmachines)|
+See [Monitor Azure Virtual Machines](monitor-vm.md) for details on the data you can collect for Azure Virtual Machines and how to use it.
-For more information, see a list of [platform metrics that are supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
-## Metric dimensions
+>[!IMPORTANT]
+>Metrics for the guest operating system (guest OS) that runs in a virtual machine (VM) aren't listed here. Guest OS metrics must be collected through one or more agents that run on or as part of the guest operating system. Guest OS metrics include performance counters that track guest CPU percentage or memory usage, both of which are frequently used for autoscaling or alerting.
+>
+>Host OS metrics are available and listed in the following tables. Host OS metrics relate to the Hyper-V session that's hosting your guest OS session. For more information, see [Guest OS and host OS metrics](/azure/azure-monitor/reference/supported-metrics/metrics-index#guest-os-and-host-os-metrics).
-For more information about metric dimensions, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
+### Supported metrics for Microsoft.Compute/virtualMachines
+The following table lists the metrics available for the Microsoft.Compute/virtualMachines resource type.
-Azure virtual machines and Virtual Machine Scale Sets have the following dimensions that are associated with their metrics.
-| Dimension name | Description |
-| - | -- |
-| LUN | Logical unit number |
-| VMName | Used with Virtual Machine Scale Sets |
-
-## VM availability metric (preview)
+### VM availability metric (preview)
The VM availability metric is currently in public preview. This metric value indicates whether a machine is currently running and available. You can use the metric to trend availability over time and to alert if the machine is stopped. VM availability has the values in the following table. | Value | Description | |:|:| | 1 | VM is running and available. |
-| 0 | VM is unavailable. The VM could be stopped or rebooting. If you shutdown a VM from within the VM, it will emit this value. |
-| Null | State of the VM is unknown. If you stop a VM from the Azure portal, CLI, or PowerShell, it will immediately stop emitting the availability metric, and you will see null values. |
--
-## Azure Monitor Logs tables
-
-This section refers to all the Azure Monitor Logs tables that are relevant to virtual machines and Virtual Machine Scale Sets and available for query by Log Analytics.
-
-|Resource type | Notes |
-|-|--|
-| [Virtual machines](/azure/azure-monitor/reference/tables/tables-resourcetype#virtual-machines) | |
-| [Virtual Machine Scale Sets](/azure/azure-monitor/reference/tables/tables-resourcetype#virtual-machine-scale-sets) | |
-
-For reference documentation about Azure Monitor Logs and Log Analytics tables, see the [Azure Monitor Logs table reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
-
-## Activity log
-
-The following table lists a few example operations that relate to creating virtual machines in the activity log. For a complete list of possible log entries, see [Microsoft.Compute Resource Provider options](../role-based-access-control/resource-provider-operations.md#compute).
+| 0 | VM is unavailable. The VM could be stopped or rebooting. If you shut down a VM from within the VM, it emits this value. |
+| Null | State of the VM is unknown. If you stop a VM from the Azure portal, CLI, or PowerShell, it immediately stops emitting the availability metric, and you see null values. |
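As a quick way to look at this metric outside the portal, the following PowerShell sketch queries it for a single VM over the last hour. It assumes the Az.Monitor module; the resource ID is a placeholder, and because the metric is in preview, verify the exact metric name in the portal's metrics experience before relying on it.

```azurepowershell
# Minimal sketch: pull the VM availability metric for one VM over the last hour.
# The resource ID is a placeholder; the metric name may differ while in preview.
$vmId = '/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM'

Get-AzMetric -ResourceId $vmId -MetricName 'VmAvailabilityMetric' -TimeGrain 00:01:00 `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) -AggregationType Average
```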
++
+The dimension Logical Unit Number (`LUN`) is associated with some of the preceding metrics.
++
+### Supported resource logs for Microsoft.Compute/virtualMachines
+
+> [!IMPORTANT]
+> For Azure VMs, all the important data is collected by the Azure Monitor agent. The resource log categories available for Azure VMs aren't important and aren't available for collection from the Azure portal. For detailed information about how the Azure Monitor agent collects VM log data, see [Monitor virtual machines with Azure Monitor: Collect data](/azure/azure-monitor/vm/monitor-virtual-machine-data-collection).
++
+| Table | Categories | Solutions|[Supports basic log plan](/azure/azure-monitor/logs/basic-logs-configure?tabs=portal-1#compare-the-basic-and-analytics-log-data-plans)| Queries|
+||||||
+| [ADAssessmentRecommendation](/azure/azure-monitor/reference/tables/ADAssessmentRecommendation)<br>Recommendations generated by AD assessments that are started through a scheduled task. When you schedule the assessment it runs by default every seven days and uploads the data into Azure Log Analytics. | workloads | ADAssessment, ADAssessmentPlus, AzureResources | No| [Yes](/azure/azure-monitor/reference/queries/adassessmentrecommendation)|
+| [ADReplicationResult](/azure/azure-monitor/reference/tables/ADReplicationResult)<br>The AD Replication Status solution regularly monitors your Active Directory environment for any replication failures. | workloads | ADReplication, AzureResources | No| -|
+| [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity)<br>Entries from the Azure Activity log that provides insight into any subscription-level or management group level events that have occurred in Azure. | resources, audit, security | LogManagement | No| [Yes](/azure/azure-monitor/reference/queries/azureactivity)|
+| [AzureMetrics](/azure/azure-monitor/reference/tables/AzureMetrics)<br>Metric data emitted by Azure services that measure their health and performance. | resources | LogManagement | No| [Yes](/azure/azure-monitor/reference/queries/azuremetrics)|
+| [CommonSecurityLog](/azure/azure-monitor/reference/tables/CommonSecurityLog)<br>This table is for collecting events in the Common Event Format, that are most often sent from different security appliances such as Check Point, Palo Alto and more. | security | Security, SecurityInsights | No| [Yes](/azure/azure-monitor/reference/queries/commonsecuritylog)|
+| [ComputerGroup](/azure/azure-monitor/reference/tables/ComputerGroup)<br>Computer groups that can be used to scope log queries to a set of computers. Includes the computers in each group. | monitor, virtualmachines, management | LogManagement | No| -|
+| [ConfigurationChange](/azure/azure-monitor/reference/tables/ConfigurationChange)<br>View changes to in-guest configuration data such as Files Software Registry Keys Windows Services and Linux Daemons | management | ChangeTracking | No| [Yes](/azure/azure-monitor/reference/queries/configurationchange)|
+| [ConfigurationData](/azure/azure-monitor/reference/tables/ConfigurationData)<br>View the last reported state for in-guest configuration data such as Files Software Registry Keys Windows Services and Linux Daemons | management | ChangeTracking | No| [Yes](/azure/azure-monitor/reference/queries/configurationdata)|
+| [ContainerLog](/azure/azure-monitor/reference/tables/ContainerLog)<br>Log lines collected from stdout and stderr streams for containers. | container, applications | AzureResources, ContainerInsights, Containers | No| [Yes](/azure/azure-monitor/reference/queries/containerlog)|
+| [DnsEvents](/azure/azure-monitor/reference/tables/DnsEvents) | network | DnsAnalytics, SecurityInsights | No| [Yes](/azure/azure-monitor/reference/queries/dnsevents)|
+| [DnsInventory](/azure/azure-monitor/reference/tables/DnsInventory) | network | DnsAnalytics, SecurityInsights | No| -|
+| [Event](/azure/azure-monitor/reference/tables/Event)<br>Events from Windows Event Log on Windows computers using the Log Analytics agent. | virtualmachines | LogManagement | No| [Yes](/azure/azure-monitor/reference/queries/event)|
+| [HealthStateChangeEvent](/azure/azure-monitor/reference/tables/HealthStateChangeEvent)<br>Workload Monitor Health. This data represents state transitions of a health monitor. | undefined | AzureResources, VMInsights | No| -|
+| [Heartbeat](/azure/azure-monitor/reference/tables/Heartbeat)<br>Records logged by Log Analytics agents once per minute to report on agent health. | virtualmachines, container, management | LogManagement | No| [Yes](/azure/azure-monitor/reference/queries/heartbeat)|
+| [InsightsMetrics](/azure/azure-monitor/reference/tables/InsightsMetrics)<br>Table that stores metrics. The 'Perf' table also stores many metrics, and over time they will all converge to InsightsMetrics for Azure Monitor solutions. | virtualmachines, container, resources | AzureResources, ContainerInsights, InfrastructureInsights, LogManagement, ServiceMap, VMInsights | No| [Yes](/azure/azure-monitor/reference/queries/insightsmetrics)|
+| [Perf](/azure/azure-monitor/reference/tables/Perf)<br>Performance counters from Windows and Linux agents that provide insight into the performance of hardware components, operating systems, and applications. | virtualmachines, container | LogManagement | No| [Yes](/azure/azure-monitor/reference/queries/perf)|
+| [ProtectionStatus](/azure/azure-monitor/reference/tables/ProtectionStatus)<br>Antimalware installation info and security health status of the machine. | security | AntiMalware, Security, SecurityCenter, SecurityCenterFree | No| [Yes](/azure/azure-monitor/reference/queries/protectionstatus)|
+| [SQLAssessmentRecommendation](/azure/azure-monitor/reference/tables/SQLAssessmentRecommendation)<br>Recommendations generated by SQL assessments that are started through a scheduled task. When you schedule the assessment, it runs by default every seven days and uploads the data into Azure Log Analytics. | workloads | AzureResources, SQLAssessment, SQLAssessmentPlus | No| [Yes](/azure/azure-monitor/reference/queries/sqlassessmentrecommendation)|
+| [SecurityBaseline](/azure/azure-monitor/reference/tables/SecurityBaseline) | security | Security, SecurityCenter, SecurityCenterFree | No| -|
+| [SecurityBaselineSummary](/azure/azure-monitor/reference/tables/SecurityBaselineSummary) | security | Security, SecurityCenter, SecurityCenterFree | No| -|
+| [SecurityEvent](/azure/azure-monitor/reference/tables/SecurityEvent)<br>Security events collected from Windows machines by Azure Security Center or Azure Sentinel. | security | Security, SecurityInsights | No| [Yes](/azure/azure-monitor/reference/queries/securityevent)|
+| [Syslog](/azure/azure-monitor/reference/tables/Syslog)<br>Syslog events on Linux computers using the Log Analytics agent. | virtualmachines, security | LogManagement | No| [Yes](/azure/azure-monitor/reference/queries/syslog)|
+| [Update](/azure/azure-monitor/reference/tables/Update)<br>Details for each update schedule run. Includes information such as which updates were available and which were installed. | management, security | Security, SecurityCenter, SecurityCenterFree, Updates | No| [Yes](/azure/azure-monitor/reference/queries/update)|
+| [UpdateRunProgress](/azure/azure-monitor/reference/tables/UpdateRunProgress)<br>Breaks down each run of your update schedule by the patches available at the time with details on the installation status of each patch. | management | Updates | No| [Yes](/azure/azure-monitor/reference/queries/updaterunprogress)|
+| [UpdateSummary](/azure/azure-monitor/reference/tables/UpdateSummary)<br>Summary for each update schedule run. Includes information such as how many updates weren't installed. | virtualmachines | Security, SecurityCenter, SecurityCenterFree, Updates | No| [Yes](/azure/azure-monitor/reference/queries/updatesummary)|
+| [VMBoundPort](/azure/azure-monitor/reference/tables/VMBoundPort)<br>Traffic for open server ports on the monitored machine. | virtualmachines | AzureResources, InfrastructureInsights, ServiceMap, VMInsights | No| -|
+| [VMComputer](/azure/azure-monitor/reference/tables/VMComputer)<br>Inventory data for servers collected by the Service Map and VM insights solutions using the Dependency agent and Log Analytics agent. | virtualmachines | AzureResources, ServiceMap, VMInsights | No| -|
+| [VMConnection](/azure/azure-monitor/reference/tables/VMConnection)<br>Traffic for inbound and outbound connections to and from monitored computers. | virtualmachines | AzureResources, InfrastructureInsights, ServiceMap, VMInsights | No| -|
+| [VMProcess](/azure/azure-monitor/reference/tables/VMProcess)<br>Process data for servers collected by the Service Map and VM insights solutions using the Dependency agent and Log Analytics agent. | virtualmachines | AzureResources, ServiceMap, VMInsights | No| -|
+| [W3CIISLog](/azure/azure-monitor/reference/tables/W3CIISLog)<br>Internet Information Server (IIS) log on Windows computers using the Log Analytics agent. | management, virtualmachines | LogManagement | No| [Yes](/azure/azure-monitor/reference/queries/w3ciislog)|
+| [WindowsFirewall](/azure/azure-monitor/reference/tables/WindowsFirewall) | security | Security, WindowsFirewall | No| -|
+| [WireData](/azure/azure-monitor/reference/tables/WireData)<br>Network data collected by the WireData solution using the Dependency agent and Log Analytics agent. | virtualmachines, security | WireData, WireData2 | No| [Yes](/azure/azure-monitor/reference/queries/wiredata)|
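Once these tables are populated in a Log Analytics workspace, you can explore them with Kusto queries. As an illustration only, and assuming Windows event collection is already configured, the following sketch summarizes recent error events from the `Event` table:

```kusto
// Count error-level Windows events from the last day, grouped by source and computer.
Event
| where TimeGenerated > ago(1d)
| where EventLevelName == "Error"
| summarize ErrorCount = count() by Source, Computer
| order by ErrorCount desc
```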
++
+The following table lists a few example operations that relate to creating VMs in the activity log. For a complete list of operations, see [Microsoft.Compute resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftcompute).
| Operation | Description |
|:---|:---|
| Microsoft.Compute/virtualMachines/extensions/write | Creates a new virtual machine extension or updates an existing one |
| Microsoft.Compute/virtualMachineScaleSets/write | Starts the instances of the virtual machine scale set |
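If the activity log is routed to a Log Analytics workspace through a diagnostic setting, these operations appear in the `AzureActivity` table. A minimal sketch of a query that lists recent VM write operations, assuming that routing is already in place:

```kusto
// List recent virtual machine write operations recorded in the activity log.
AzureActivity
| where TimeGenerated > ago(7d)
| where OperationNameValue startswith "Microsoft.Compute/virtualMachines"
| project TimeGenerated, OperationNameValue, ActivityStatusValue, Caller, ResourceGroup
| order by TimeGenerated desc
```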
-For more information about the schema of activity log entries, see [Activity log schema](../azure-monitor/essentials/activity-log-schema.md).
--
-## See also
+## Related content
-For a description of monitoring Azure virtual machines, see [Monitoring Azure virtual machines](../virtual-machines/monitor-vm.md).
+- See [Monitor Virtual Machines](monitor-vm.md) for a description of monitoring Virtual Machines.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
virtual-machines Monitor Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/monitor-vm.md
Title: Monitoring Azure virtual machines
-description: This article discusses how to monitor Azure virtual machines.
+ Title: Monitor Azure Virtual Machines
+description: Start here to learn how to monitor Azure Virtual Machines and Virtual Machine Scale Sets.
+ Last updated : 03/27/2024
- Previously updated : 06/07/2023
+#customer intent: As a cloud administrator, I want to understand how to monitor Azure virtual machines so that I can ensure the health and performance of my virtual machines and applications.
-# Monitor Azure virtual machines
+# Monitor Azure Virtual Machines
-When you have critical applications and business processes that rely on Azure resources, it's important to monitor those resources for their availability, performance, and operation. This article describes the monitoring data that's generated by Azure virtual machines (VMs), and it discusses how to use the features of [Azure Monitor](../azure-monitor/overview.md) to analyze and alert you about this data.
-> [!NOTE]
-> This article provides basic information to help you get started with monitoring your VMs. For a complete guide to monitoring your entire environment of Azure and hybrid virtual machines, see [Monitor virtual machines with Azure Monitor](../azure-monitor/vm/monitor-virtual-machine.md).
+This article provides an overview of how to monitor the health and performance of Azure virtual machines (VMs).
-## What is Azure Monitor?
-[Azure Monitor](../azure-monitor/overview.md) is a full stack monitoring service that provides a complete set of features to monitor your Azure resources. You don't need to directly interact with Azure Monitor, though, to perform a variety of monitoring tasks, because its features are integrated with the Azure portal for the Azure services that it monitors. For a tutorial with an overview of how Azure Monitor works with Azure resources, see [Monitor Azure resources by using Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+>[!NOTE]
+>This article provides basic information to help you get started with monitoring Azure Virtual Machines. For a complete guide to monitoring your entire environment of Azure and hybrid virtual machines, see the [Monitor virtual machines deployment guide](/azure/azure-monitor/vm/monitor-virtual-machine).
-## Monitoring virtual machine data
+## Overview: Monitor VM host and guest metrics and logs
-Azure virtual machines collect the same kinds of monitoring data as other Azure resources, which are described in [Monitoring data from Azure resources](/azure/azure-monitor/insights/monitor-azure-resource#monitoring-data). For detailed information about the metrics and logs that are created by Azure virtual machines, see [Reference: Monitoring Azure virtual machine data](monitor-vm-reference.md).
+You can collect metrics and logs from the **VM host**, which is the physical server and hypervisor that creates and manages the VM, and from the **VM guest**, which includes the operating system and applications that run inside the VM.
-## Overview page
-To begin exploring Azure Monitor, go to the **Overview** page for your virtual machine, and then select the **Monitoring** tab. You can see the number of active alerts on the tab.
+VM host and guest data is useful in different scenarios:
-The **Alerts** pane shows you the alerts fired in the last 24 hours, along with important statistics about those alerts. If there are no alerts configured for your VM, there is a link to help you quickly create new alerts for your VM.
+| Data type | Scenarios | Data collection | Available data |
+|-|-|-|-|
+| **VM host data** | Monitor the stability, health, and efficiency of the physical host on which the VM is running.<br>(Optional) [Scale up or scale down](/azure/azure-monitor/autoscale/autoscale-overview) based on the load on your application.| Available by default without any additional setup. |[Host performance metrics](#azure-monitor-platform-metrics)<br><br>[Activity logs](#azure-activity-log)<br><br>[Boot diagnostics](#boot-diagnostics)|
+| **VM guest data: overview** | Analyze and troubleshoot performance and operational efficiency of workloads running in your Azure environment. | Install [Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview) on the VM and set up a [data collection rule (DCR)](#data-collection-rules). |See various levels of data in the following rows.|
+|**Basic VM guest data**|[VM insights](#vm-insights) is a quick and easy way to start monitoring your VM clients, especially useful for exploring overall VM usage and performance when you don't yet know the metric of primary interest.|[Enable VM insights](/azure/azure-monitor/vm/vminsights-enable-overview) to automatically install Azure Monitor Agent and create a predefined DCR.|[Guest performance counters](/azure/azure-monitor/vm/vminsights-performance)<br><br>[Dependencies between application components running on the VM](/azure/azure-monitor/vm/vminsights-maps)|
+|**VM operating system monitoring data**|Monitor application performance and events, resource consumption by specific applications and processes, and operating system-level performance and events. Valuable for troubleshooting application-specific issues, optimizing resource usage within VMs, and ensuring optimal performance for workloads running inside VMs.|Install [Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview) on the VM and set up a [DCR](#data-collection-rules).|[Guest performance counters](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent)<br><br>[Windows events](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent)<br><br>[Syslog events](/azure/azure-monitor/agents/data-collection-syslog)|
+|**Advanced/custom VM guest data**|Monitoring of web servers, Linux appliances, and any type of data you want to collect from a VM. |Install [Azure Monitor Agent](/azure/azure-monitor/agents/agents-overview) on the VM and set up a [DCR](#data-collection-rules).|[IIS logs](/azure/azure-monitor/agents/data-collection-iis)<br><br>[SNMP traps](/azure/azure-monitor/agents/data-collection-snmp-data)<br><br>[Any data written to a text or JSON file](/azure/azure-monitor/agents/data-collection-text-log)|
+### VM insights
-The **Key Metrics** pane includes charts that show key health metrics, such as average CPU and network utilization. At the top of the pane, you can select a duration to change the time range for the charts, or select a chart to open the **Metrics** pane to drill down further or to create an alert rule.
+VM insights monitors your Azure and hybrid virtual machines in a single interface. VM insights provides the following benefits for monitoring VMs in Azure Monitor:
-## Activity log
-The [Activity log](../azure-monitor/essentials/activity-log.md) displays recent activity by the virtual machine, including any configuration changes and when it was stopped and started. View the Activity log in the Azure portal, or create a [diagnostic setting to send it to a Log Analytics workspace](../azure-monitor/essentials/activity-log.md#send-to-log-analytics-workspace), where you can view events over time or analyze them with other collected data.
+- Simplified onboarding of the Azure Monitor agent and the Dependency agent, so that you can monitor a virtual machine (VM) guest operating system and workloads.
+- Predefined data collection rules that collect the most common set of performance data.
+- Predefined trending performance charts and workbooks, so that you can analyze core performance metrics from the virtual machine's guest operating system.
+- The Dependency map, which displays processes that run on each virtual machine and the interconnected components with other machines and external sources.
-## Azure Monitor agent
-Azure Monitor starts automatically collecting metric data for your virtual machine host when you create the VM. To collect logs and performance data from the guest operating system of the virtual machine, though, you must install the [Azure Monitor agent](../azure-monitor/agents/azure-monitor-agent-overview.md). You can install the agent and configure collection using either [VM insights](#vm-insights) or by [creating a data collection rule](#create-data-collection-rule) as described below.
-## VM insights
-Some services in Azure display customized monitoring experiences in Azure Monitor. These experiences are called *insights*, and they include pre-built workbooks and other specialized features for that particular service.
+For a tutorial on enabling VM insights for a virtual machine, see [Enable monitoring with VM insights for Azure virtual machine](/azure/azure-monitor/vm/tutorial-monitor-vm-enable-insights). For general information about enabling insights and a variety of methods for onboarding VMs, see [Enable VM insights overview](/azure/azure-monitor/vm/vminsights-enable-overview).
-VM insights is designed to monitor your Azure and hybrid virtual machines in a single interface. VM insights provides the following benefits beyond other features for monitoring VMs in Azure Monitor:
+If you enable VM insights, the Azure Monitor agent is installed and starts sending a predefined set of performance data to Azure Monitor Logs. You can create other data collection rules to collect events and other performance data. To learn how to install the Azure Monitor agent and create a data collection rule (DCR) that defines the data to collect, see [Tutorial: Collect guest logs and metrics from an Azure virtual machine](/azure/azure-monitor/vm/tutorial-monitor-vm-guest).
-- Simplified onboarding of the Azure Monitor agent and the Dependency agent, so that you can monitor a virtual machine guest operating system and workloads.
-- Pre-defined data collection rules that collect the most common set of performance data.
-- Pre-defined trending performance charts and workbooks, so that you can analyze core performance metrics from the virtual machine's guest operating system.
-- The Dependency map, which displays processes that run on each virtual machine and the interconnected components with other machines and external sources.
+For more information about the resource types for Virtual Machines, see [Azure Virtual Machines monitoring data reference](monitor-vm-reference.md).
+Platform metrics for Azure VMs include important *host metrics* such as CPU, network, and disk utilization. Host OS metrics relate to the Hyper-V session that's hosting a guest operating system (guest OS) session.
+
+Metrics for the *guest OS* that runs in a VM must be collected through one or more agents, such as the [Azure Monitor agent](/azure/azure-monitor/agents/azure-monitor-agent-overview), that run on or as part of the guest OS. Guest OS metrics include performance counters that track guest CPU percentage or memory usage, both of which are frequently used for autoscaling or alerting. For more information, see [Guest OS and host OS metrics](/azure/azure-monitor/reference/supported-metrics/metrics-index#guest-os-and-host-os-metrics).
+
+For detailed information about how the Azure Monitor agent collects VM monitoring data, see [Monitor virtual machines with Azure Monitor: Collect data](/azure/azure-monitor/vm/monitor-virtual-machine-data-collection).
+
+For a list of available metrics for Virtual Machines, see [Virtual Machines monitoring data reference](monitor-vm-reference.md#metrics).
++
+- For the available resource log categories, their associated Log Analytics tables, and the log schemas for Virtual Machines, see [Virtual Machines monitoring data reference](monitor-vm-reference.md).
+
+> [!IMPORTANT]
+> For Azure VMs, all the important data is collected by the Azure Monitor agent. The resource log categories available for Azure VMs aren't important and aren't available for collection from the Azure portal. For detailed information about how the Azure Monitor agent collects VM log data, see [Monitor virtual machines with Azure Monitor: Collect data](/azure/azure-monitor/vm/monitor-virtual-machine-data-collection).
+
-For a tutorial on enabling VM insights for a virtual machine, see [Enable monitoring with VM insights for Azure virtual machine](../azure-monitor/vm/tutorial-monitor-vm-enable-insights.md). For general information about enabling insights and a variety of methods for onboarding virtual machines, see [Enable VM insights overview](../azure-monitor/vm/vminsights-enable-overview.md).
+## Data collection rules
-## Create data collection rule
-If you enable [VM insights](#vm-insights), the Azure Monitor agent is installed and starts sending a predefined set of performance data to Azure Monitor Logs. You can create additional data collection rules to collect events and other performance data. To learn how to install the Azure Monitor agent and create a data collection rule that defines the data to collect, see [Tutorial: Collect guest logs and metrics from an Azure virtual machine](../azure-monitor/vm/tutorial-monitor-vm-guest.md).
+[Data collection rules (DCRs)](/azure/azure-monitor/essentials/data-collection-rule-overview) define data collection from the Azure Monitor Agent and are stored in your Azure subscription. For VMs, DCRs define data such as events and performance counters to collect, and specify locations such as Log Analytics workspaces to send the data. A single VM can be associated with multiple DCRs, and a single DCR can be associated with multiple VMs.
+### VM insights DCR
-## Analyze metrics
-Metrics are numerical values that describe some aspect of a system at a particular point in time. Although platform metrics for the virtual machine host are collected automatically, you must install the Azure Monitor agent and [create a data collection rule](#create-data-collection-rule) to collect guest metrics.
+VM insights creates a DCR that collects common performance counters for the client operating system and sends them to the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table in the Log Analytics workspace. For a list of performance counters collected, see [How to query logs from VM insights](/azure/azure-monitor/vm/vminsights-log-query#performance-records). You can use this DCR with other VMs instead of creating a new DCR for each VM.
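As a quick check that this DCR is flowing data, a sketch like the following queries the `InsightsMetrics` performance records for average guest CPU utilization; the namespace and counter name follow the VM insights performance records linked above:

```kusto
// Average guest CPU utilization collected by VM insights, in 15-minute bins.
InsightsMetrics
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AvgCpuPercent = avg(Val) by bin(TimeGenerated, 15m), Computer
| order by TimeGenerated desc
```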
-The **Overview** pane includes the most common host metrics, and you can access others by using the **Metrics** pane. With this tool, you can create charts from metric values and visually correlate trends. You can also create a metric alert rule or pin a chart to an Azure dashboard. For a tutorial on using this tool, see [Analyze metrics for an Azure resource](../azure-monitor/essentials/tutorial-metrics.md).
+You can also optionally enable collection of processes and dependencies, which populates the following tables and enables the VM insights Map feature.
+- [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport): Traffic for open server ports on the machine
+- [VMComputer](/azure/azure-monitor/reference/tables/vmcomputer): Inventory data for the machine
+- [VMConnection](/azure/azure-monitor/reference/tables/vmconnection): Traffic for inbound and outbound connections to and from the machine
+- [VMProcess](/azure/azure-monitor/reference/tables/vmprocess): Processes running on the machine
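For example, once processes and dependencies are being collected, a sketch like this one over `VMConnection` shows the remote addresses each monitored machine connects to most often (the one-day window is an arbitrary choice):

```kusto
// Top outbound connection targets per monitored machine over the last day.
VMConnection
| where TimeGenerated > ago(1d)
| where Direction == "outbound"
| summarize LinksEstablished = sum(LinksEstablished) by Computer, RemoteIp
| top 20 by LinksEstablished desc
```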
-For a list of the available metrics, see [Reference: Monitoring Azure virtual machine data](monitor-vm-reference.md#metrics).
+### Collect performance counters
-## Analyze logs
-Event data in Azure Monitor Logs is stored in a Log Analytics workspace, where it's separated into tables, each with its own set of unique properties.
+VM insights collects a common set of performance counters in Logs to support its performance charts. If you aren't using VM insights, or want to collect other counters or send them to other destinations, you can create other DCRs. You can quickly create a DCR by using the most common counters.
-VM insights stores the data it collects in Logs, and the insights provide performance and map views that you can use to interactively analyze the data. You can work directly with this data to drill down further or perform custom analyses. For more information and to get sample queries for this data, see [How to query logs from VM insights](../azure-monitor/vm/vminsights-log-query.md).
+You can send performance data from the client to either Azure Monitor Metrics or Azure Monitor Logs. VM insights sends performance data to the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table. Other DCRs send performance data to the [Perf](/azure/azure-monitor/reference/tables/perf) table. For guidance on creating a DCR to collect performance counters, see [Collect events and performance counters from virtual machines with Azure Monitor Agent](/azure/azure-monitor/agents/data-collection-rule-azure-monitor-agent).
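When a custom DCR sends counters to the `Perf` table instead, an equivalent CPU query looks like the following sketch; the object, counter, and instance names depend on which counters your DCR actually collects:

```kusto
// Average "% Processor Time" per computer from counters collected into the Perf table.
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total"
| summarize AvgCpuPercent = avg(CounterValue) by bin(TimeGenerated, 15m), Computer
| order by TimeGenerated desc
```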
-To analyze other log data that you collect from your virtual machines, use [log queries](../azure-monitor/logs/get-started-queries.md) in [Log Analytics](../azure-monitor/logs/log-analytics-tutorial.md). Several [built-in queries](../azure-monitor/logs/queries.md) for virtual machines are available to use, or you can create your own. You can interactively work with the results of these queries, include them in a workbook to make them available to other users, or generate alerts based on their results.
++
+### Query logs from VM insights
+
+VM insights stores the data it collects in Azure Monitor Logs, and the insights provide performance and map views that you can use to interactively analyze the data. You can work directly with this data to drill down further or perform custom analyses. For more information and to get sample queries for this data, see [How to query logs from VM insights](/azure/azure-monitor/vm/vminsights-log-query).
++
+To analyze log data that you collect from your VMs, you can use [log queries](/azure/azure-monitor/logs/get-started-queries) in [Log Analytics](/azure/azure-monitor/logs/log-analytics-tutorial). Several [built-in queries](/azure/azure-monitor/logs/queries) for VMs are available to use, or you can create your own queries. You can interactively work with the results of these queries, include them in a workbook to make them available to other users, or generate alerts based on their results.
+
+To access built-in Kusto queries for your VM, select **Logs** in the **Monitoring** section of the left navigation on your VM's Azure portal page. On the **Logs** page, select the **Queries** tab, and then select the query to run.
:::image type="content" source="media/monitor-vm/log-analytics-query.png" lightbox="media/monitor-vm/log-analytics-query.png" alt-text="Screenshot of the 'Logs' pane displaying Log Analytics query results.":::
-## Alerts
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. These alerts can help you identify and address issues in your system before your customers notice them. You can set alerts on [metrics](/azure/azure-monitor/platform/alerts-metric-overview), [logs](/azure/azure-monitor/platform/alerts-unified-log), and the [activity log](/azure/azure-monitor/platform/activity-log-alerts).
+
+You can create a single multi-resource alert rule that applies to all VMs in a particular resource group or subscription within the same region. See [Create availability alert rule for Azure virtual machine (preview)](/azure/azure-monitor/vm/tutorial-monitor-vm-alert-availability) for a tutorial using the availability metric.
++
+Recommended alert rules for Azure VMs include the [VM availability metric](monitor-vm-reference.md#vm-availability-metric-preview), which alerts when a VM stops running.
+
+For more information, see [Tutorial: Enable recommended alert rules for Azure virtual machine](/azure/azure-monitor/vm/tutorial-monitor-vm-alert-recommended).
+
+### Common alert rules
+
+To see common VM log alert rules in the Azure portal, go to the **Queries** pane in Log Analytics. For **Resource type**, enter **Virtual machines**, and for **Type**, enter **Alerts**.
+
+For a list and discussion of common Virtual Machines alert rules, see [Common alert rules](/azure/azure-monitor/vm/monitor-virtual-machine-alerts#common-alert-rules).
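A common pattern for a VM log alert rule is a query that returns machines that have stopped sending heartbeats. A minimal sketch, with a 15-minute threshold chosen only as an example you would tune:

```kusto
// Computers that haven't reported a heartbeat in the last 15 minutes.
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
| where LastHeartbeat < ago(15m)
```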
+
-### Recommended alerts
-Start by enabling recommended alerts. These are a predefined set of alert rules based on host metrics for the VM. You can quickly enable and customize each of these rules with a few clicks in the Azure portal. See [Tutorial: Enable recommended alert rules for Azure virtual machine](../azure-monitor/vm/tutorial-monitor-vm-alert-recommended.md). This includes the [VM availability metric](monitor-vm-reference.md#vm-availability-metric-preview) which alerts when the VM stops running.
+## Other VM monitoring options
-### Multi-resource metric alerts
-Using recommended alerts, a separate alert rule is created for each VM. You can choose to instead use a [multi-resource alert rule](../azure-monitor/alerts/alerts-types.md#monitor-multiple-resources-with-one-alert-rule) to use a single alert rule that applies to all VMs in a particular resource group or subscription (within the same region). See [Create availability alert rule for Azure virtual machine (preview)](../azure-monitor/vm/tutorial-monitor-vm-alert-availability.md) for a tutorial using the availability metric.
+Azure VMs offer the following monitoring options outside of Azure Monitor:
-### Other alert rules
-For more information about the various alerts for Azure virtual machines, see the following resources:
+### Boot diagnostics
-- See [Monitor virtual machines with Azure Monitor: Alerts](../azure-monitor/vm/monitor-virtual-machine-alerts.md) for common alert rules for virtual machines.
-- See [Create a log query alert for an Azure resource](../azure-monitor/alerts/tutorial-log-alert.md) for a tutorial on creating a log query alert rule.
-- For common log alert rules, go to the **Queries** pane in Log Analytics. For **Resource type**, enter **Virtual machines**, and for **Type**, enter **Alerts**.
+Boot diagnostics is a debugging feature for Azure VMs that allows you to diagnose VM boot failures by collecting serial log information and screenshots of a VM as it boots up. When you create a VM in the Azure portal, boot diagnostics is enabled by default. For more information, see [Azure boot diagnostics](boot-diagnostics.md).
+### Troubleshoot performance issues
+[The Performance Diagnostics tool](/troubleshoot/azure/virtual-machines/performance-diagnostics?toc=/azure/azure-monitor/toc.json) helps troubleshoot performance issues on Windows or Linux virtual machines by quickly diagnosing and providing insights on issues it currently finds on your machines. The tool doesn't analyze historical monitoring data you collect, but rather checks the current state of the machine for known issues, implementation of best practices, and complex problems that involve slow VM performance or high usage of CPU, disk space, or memory.
-## Next steps
+## Related content
-For documentation about the logs and metrics that are generated by Azure virtual machines, see [Reference: Monitoring Azure virtual machine data](monitor-vm-reference.md).
+- For a reference of the metrics, logs, and other important values for Virtual Machines, see [Virtual Machines monitoring data reference](monitor-vm-reference.md).
+- For general details about monitoring Azure resources, see [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource).
+- For guidance based on the five pillars of the Azure Well-Architected Framework, see [Best practices for monitoring virtual machines in Azure Monitor](/azure/azure-monitor/best-practices-vm).
+- To get started with VM insights, see [Overview of VM insights](/azure/azure-monitor/vm/vminsights-overview).
+- To learn how to collect and analyze VM host and client metrics and logs, see the training course [Monitor your Azure virtual machines with Azure Monitor](/training/modules/monitor-azure-vm-using-diagnostic-data).
+- For a complete guide to monitoring Azure and hybrid VMs, see the [Monitor virtual machines deployment guide](/azure/azure-monitor/vm/monitor-virtual-machine).
virtual-network Public Ip Basic Upgrade Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-basic-upgrade-guidance.md
We recommend the following approach to upgrade to Standard SKU public IP address
| Virtual Machine or Virtual Machine Scale Sets (flex model) | Disassociate IP(s) and utilize the upgrade options detailed after the table. For virtual machines, you can use the [upgrade script](public-ip-upgrade-vm.md). |
| Load Balancer (Basic SKU) | New LB SKU required. Use the upgrade script [Upgrade Basic Load Balancer to Standard SKU](../../load-balancer/upgrade-basic-standard-with-powershell.md) to upgrade to Standard Load Balancer |
-| VPN Gateway (VpnGw1-5 SKU using Basic IPs) |At this time, it's not necessary to upgrade. When an upgrade is necessary, we'll update this decision path with migration information. |
+| VPN Gateway (using Basic IPs) |At this time, it's not necessary to upgrade. When an upgrade is necessary, we'll update this decision path with migration information and send out a service health alert. |
| ExpressRoute Gateway (using Basic IPs) | New ExpressRoute Gateway required. Create a [new ExpressRoute Gateway with a Standard SKU IP](../../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md). For non-production workloads, use this [migration script (Preview)](../../expressroute/gateway-migration.md). |
| Application Gateway (v1 SKU) | New AppGW SKU required. Use this [migration script to migrate from v1 to v2](../../application-gateway/migrate-v1-v2.md). |