Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md | The following best practices and recommendations cover some of the primary aspec | Choose user flows for most scenarios | The Identity Experience Framework of Azure AD B2C is the core strength of the service. Policies fully describe identity experiences such as sign-up, sign-in, or profile editing. To help you set up the most common identity tasks, the Azure AD B2C portal includes predefined, configurable policies called user flows. With user flows, you can create great user experiences in minutes, with just a few clicks. [Learn when to use user flows vs. custom policies](user-flow-overview.md#comparing-user-flows-and-custom-policies).| | App registrations | Every application (web, native) and API that is being secured must be registered in Azure AD B2C. If an app has both a web and native version of iOS and Android, you can register them as one application in Azure AD B2C with the same client ID. Learn how to [register OIDC, SAML, web, and native apps](./tutorial-register-applications.md?tabs=applications). Learn more about [application types that can be used in Azure AD B2C](./application-types.md). | | Move to monthly active users billing | Azure AD B2C has moved from monthly active authentications to monthly active users (MAU) billing. Most customers will find this model cost-effective. [Learn more about monthly active users billing](https://azure.microsoft.com/updates/mau-billing/). |+| Follow Security best practices | There are continuous and evolving threats and attacks, and like all owned resources, your Azure AD B2C deployment should follow best practices for security, including guidance on implementing WAFs (defense against threats such as DDOS and Bots) and other defense in depth best guidance [B2C Security Architecture](/azure/active-directory-b2c/security-architecture). | ## Planning and design |
advisor | Advisor Assessments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-assessments.md | Title: Use Well Architected Framework assessments in Azure Advisor description: Azure Advisor offers Well Architected Framework assessments (curated and focused Advisor optimization reports) through the Assessments entry in the left menu of the Azure Advisor Portal.-- Last updated 02/18/2024 |
advisor | Advisor Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-get-started.md | Title: Azure Advisor portal basics description: Learn how to get started with Azure Advisor through the Azure portal, get and manage recommendations, and configure Advisor settings.-- Last updated 03/07/2024 |
advisor | Advisor How To Performance Resize High Usage Vm Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-how-to-performance-resize-high-usage-vm-recommendations.md | -- Title: Improve the performance of highly used VMs using Azure Advisor description: Use Azure Advisor to improve the performance of your Azure virtual machines with consistent high utilization. |
advisor | Advisor Reference Cost Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md | Title: Cost recommendations description: Full list of available cost recommendations in Advisor. -- Last updated 10/15/2023 |
advisor | Advisor Reference Operational Excellence Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md | Title: Operational excellence recommendations description: Operational excellence recommendations -- Last updated 10/05/2023 |
advisor | Advisor Reference Performance Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md | Title: Performance recommendations description: Full list of available performance recommendations in Advisor. -- Last updated 6/24/2024 |
advisor | Advisor Reference Reliability Recommendations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md | Title: Reliability recommendations description: Full list of available reliability recommendations in Advisor.-- Last updated 12/11/2023 |
advisor | Advisor Resiliency Reviews | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-resiliency-reviews.md | Title: Azure Advisor resiliency reviews description: Optimize resource resiliency with custom recommendation reviews.-- Last updated 03/8/2024 |
ai-services | App Schema Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/app-schema-definition.md | When you import and export the app, choose either `.json` or `.lu`. * Review labels * Suggest endpoint queries for entities * Suggest endpoint queries for intents- For more information, see the [LUIS reference documentation](/rest/api/cognitiveservices-luis/authoring/features?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true). + For more information, see the [LUIS reference documentation](/rest/api/luis/operation-groups). ```json { "luis_schema_version": "7.0.0", |
ai-services | Utterances | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/concepts/utterances.md | If you turn on a normalization setting, scores in the **Test** pane, batch tes When you clone a version in the LUIS portal, the version settings are kept in the new cloned version. -Set your app's version settings using the LUIS portal by selecting **Manage** from the top navigation menu, in the **Application Settings** page. You can also use the [Update Version Settings API](/rest/api/cognitiveservices-luis/authoring/versions/update?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true). See the [Reference](../luis-reference-application-settings.md) documentation for more information. +Set your app's version settings using the LUIS portal by selecting **Manage** from the top navigation menu, in the **Application Settings** page. You can also use the [Update Version Settings API](/rest/api/luis/settings/update). See the [Reference](../luis-reference-application-settings.md) documentation for more information. ## Word forms If you want to ignore specific words or punctuation in patterns, use a [pattern] ## Training with all utterances -Training is nondeterministic: utterance prediction can vary slightly across versions or apps. You can remove nondeterministic training by updating the [version settings](/rest/api/cognitiveservices-luis/authoring/settings/update?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) API with the UseAllTrainingData name/value pair to use all training data. +Training is nondeterministic: utterance prediction can vary slightly across versions or apps. You can remove nondeterministic training by updating the [version settings](/rest/api/luis/settings/update) API with the UseAllTrainingData name/value pair to use all training data. ## Testing utterances |
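A minimal sketch of updating the `UseAllTrainingData` version setting over REST, as described above. The v3.0-preview authoring route (`/luis/authoring/v3.0-preview/apps/{appId}/versions/{versionId}/settings`), the name/value payload shape, and every placeholder value are assumptions to verify against the Update Version Settings reference.

```python
import requests

# Placeholder values, replace with your own authoring resource details.
AUTHORING_ENDPOINT = "https://<your-authoring-resource>.cognitiveservices.azure.com"
AUTHORING_KEY = "<your-authoring-key>"
APP_ID = "<your-app-id>"
VERSION_ID = "0.1"

# Assumed v3.0-preview route; the payload is a list of name/value pairs.
url = f"{AUTHORING_ENDPOINT}/luis/authoring/v3.0-preview/apps/{APP_ID}/versions/{VERSION_ID}/settings"
headers = {"Ocp-Apim-Subscription-Key": AUTHORING_KEY}
body = [{"name": "UseAllTrainingData", "value": "true"}]

response = requests.put(url, headers=headers, json=body)
response.raise_for_status()
print("Version settings updated:", response.status_code)
```

Remember that changing version settings resets the version's training status, so retrain before publishing.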
ai-services | Developer Reference Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/developer-reference-resource.md | Both authoring and prediction endpoint APIS are available from REST APIs: |Type|Version| |--|--|-|Authoring|[V2](https://go.microsoft.com/fwlink/?linkid=2092087)<br>[preview V3](/rest/api/cognitiveservices-luis/authoring/operation-groups)| -|Prediction|[V2](https://go.microsoft.com/fwlink/?linkid=2092356)<br>[V3](/rest/api/cognitiveservices-luis/runtime/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)| +|Authoring|[V2](/rest/api/luis/operation-groups?view=rest-luis-v2.0)<br>[preview V3](/rest/api/luis/operation-groups?view=rest-luis-v3.0-preview)| +|Prediction|[V2](/rest/api/luis/operation-groups?view=rest-luis-v2.0)<br>[V3](/rest/api/luis/prediction?view=rest-luis-v3.0)| ### REST Endpoints |
ai-services | Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/faq.md | Yes, it is good to train your **None** intent with utterances, especially as y ## How do I edit my LUIS app programmatically? -To edit your LUIS app programmatically, use the [Authoring API](https://go.microsoft.com/fwlink/?linkid=2092087). See [Call LUIS authoring API](get-started-get-model-rest-apis.md) and [Build a LUIS app programmatically using Node.js](luis-tutorial-node-import-utterances-csv.md) for examples of how to call the Authoring API. The Authoring API requires that you use an [authoring key](luis-how-to-azure-subscription.md) rather than an endpoint key. Programmatic authoring allows up to 1,000,000 calls per month and five transactions per second. For more info on the keys you use with LUIS, see [Manage keys](luis-how-to-azure-subscription.md). +To edit your LUIS app programmatically, use the [Authoring API](/rest/api/luis/operation-groups). See [Call LUIS authoring API](get-started-get-model-rest-apis.md) and [Build a LUIS app programmatically using Node.js](luis-tutorial-node-import-utterances-csv.md) for examples of how to call the Authoring API. The Authoring API requires that you use an [authoring key](luis-how-to-azure-subscription.md) rather than an endpoint key. Programmatic authoring allows up to 1,000,000 calls per month and five transactions per second. For more info on the keys you use with LUIS, see [Manage keys](luis-how-to-azure-subscription.md). ## Should variations of an example utterance include punctuation? To get the same top intent between all the apps, make sure the intent prediction When training these apps, make sure to [train with all data](how-to/train-test.md). -Designate a single main app. Any utterances that are suggested for review should be added to the main app, then moved back to all the other apps. This is either a full export of the app, or loading the labeled utterances from the main app to the other apps. Loading can be done from either the [LUIS](./luis-reference-regions.md?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) website or the authoring API for a [single utterance](/rest/api/cognitiveservices-luis/authoring/examples/add) or for a [batch](/rest/api/cognitiveservices-luis/authoring/examples/batch?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true). +Designate a single main app. Any utterances that are suggested for review should be added to the main app, then moved back to all the other apps. This is either a full export of the app, or loading the labeled utterances from the main app to the other apps. Loading can be done from either the [LUIS](./luis-reference-regions.md?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) website or the authoring API for a [single utterance](/rest/api/luis/examples/add) or for a [batch](/rest/api/luis/examples/batch?). Schedule a periodic review, such as every two weeks, of [endpoint utterances](how-to/improve-application.md) for active learning, then retrain and republish the app. |
ai-services | Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/sign-in.md | There are a couple of ways to create a LUIS app. You can create a LUIS app in th * Import a LUIS app from a .lu or .json file that already contains intents, utterances, and entities. **Using the authoring APIs** You can create a new app with the authoring APIs in a couple of ways:-* [Add application](/rest/api/cognitiveservices-luis/authoring/apps/add?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) - start with an empty app and create intents, utterances, and entities. -* [Add prebuilt application](/rest/api/cognitiveservices-luis/authoring/apps/add-custom-prebuilt-domain?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) - start with a prebuilt domain, including intents, utterances, and entities. +* [Add application](/rest/api/luis/apps/add) - start with an empty app and create intents, utterances, and entities. +* [Add prebuilt application](/rest/api/luis/apps/add-custom-prebuilt-domain) - start with a prebuilt domain, including intents, utterances, and entities. ## Create new app in LUIS using portal 1. On **My Apps** page, select your **Subscription** , and **Authoring resource** then select **+ New App**. |
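For illustration, a hedged sketch of creating an empty app with the Add application authoring API. The v3.0-preview route and the `name`, `culture`, and `initialVersionId` fields are assumptions based on the authoring reference; the response is treated here as the new app ID.

```python
import requests

# Placeholder authoring resource details.
AUTHORING_ENDPOINT = "https://<your-authoring-resource>.cognitiveservices.azure.com"
AUTHORING_KEY = "<your-authoring-key>"

# Assumed "Add application" route and payload shape.
url = f"{AUTHORING_ENDPOINT}/luis/authoring/v3.0-preview/apps/"
headers = {"Ocp-Apim-Subscription-Key": AUTHORING_KEY}
body = {
    "name": "Pizza ordering",                  # display name of the new app
    "description": "App created programmatically",
    "culture": "en-us",                        # language of the example utterances
    "initialVersionId": "0.1",
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
app_id = response.json()                       # assumed to return the new app's ID
print("Created app:", app_id)
```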
ai-services | Train Test | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to/train-test.md | To train your app in the LUIS portal, you only need to select the **Train** butt Training with the REST APIs is a two-step process. -1. Send an HTTP POST [request for training](/rest/api/cognitiveservices-luis/authoring/train/train-version?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true). -2. Request the [training status](/rest/api/cognitiveservices-luis/authoring/train/get-status?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) with an HTTP GET request. +1. Send an HTTP POST [request for training](/rest/api/luis/train/train-version). +2. Request the [training status](/rest/api/luis/train/get-status) with an HTTP GET request. In order to know when training is complete, you must poll the status until all models are successfully trained. Inspect the test result details in the **Inspect** panel. ## Change deterministic training settings using the version settings API -Use the [Version settings API](/rest/api/cognitiveservices-luis/authoring/settings/update?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) with the UseAllTrainingData set to *true* to turn off deterministic training. +Use the [Version settings API](/rest/api/luis/settings/update) with the UseAllTrainingData set to *true* to turn off deterministic training. ## Change deterministic training settings using the LUIS portal |
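A short sketch of the two-step flow described above: POST to start training, then GET the status until no model is still queued or in progress. The v3.0-preview authoring route and the status strings checked here are assumptions, so adjust them against the train and get-status references.

```python
import time
import requests

AUTHORING_ENDPOINT = "https://<your-authoring-resource>.cognitiveservices.azure.com"
AUTHORING_KEY = "<your-authoring-key>"
APP_ID = "<your-app-id>"
VERSION_ID = "0.1"

headers = {"Ocp-Apim-Subscription-Key": AUTHORING_KEY}
train_url = f"{AUTHORING_ENDPOINT}/luis/authoring/v3.0-preview/apps/{APP_ID}/versions/{VERSION_ID}/train"

# Step 1: send the training request.
requests.post(train_url, headers=headers).raise_for_status()

# Step 2: poll the training status until every model reports a terminal state.
while True:
    status = requests.get(train_url, headers=headers)
    status.raise_for_status()
    states = {item["details"]["status"] for item in status.json()}
    if not states & {"Queued", "InProgress"}:   # assumed non-terminal status strings
        break
    time.sleep(2)

print("Training finished with states:", states)
```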
ai-services | Luis Concept Data Alteration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-alteration.md | Use [Bing Spell Check](../../cognitive-services/bing-spell-check/overview.md) to ### Prior to V3 runtime -LUIS uses [Bing Spell Check API V7](../../cognitive-services/bing-spell-check/overview.md) to correct spelling errors in the utterance. LUIS needs the key associated with that service. Create the key, then add the key as a querystring parameter at the [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356). +LUIS uses [Bing Spell Check API V7](../../cognitive-services/bing-spell-check/overview.md) to correct spelling errors in the utterance. LUIS needs the key associated with that service. Create the key, then add the key as a querystring parameter at the [endpoint](/rest/api/luis/operation-groups). The endpoint requires two params for spelling corrections to work: In V3, the `datetimeReference` determines the timezone offset. The timezone is corrected by adding the user's timezone to the endpoint using the `timezoneOffset` parameter based on the API version. The value of the parameter should be the positive or negative number, in minutes, to alter the time. #### V2 prediction daylight savings example-If you need the returned prebuilt datetimeV2 to adjust for daylight savings time, you should use the querystring parameter with a +/- value in minutes for the [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) query. +If you need the returned prebuilt datetimeV2 to adjust for daylight savings time, you should use the querystring parameter with a +/- value in minutes for the [endpoint](/rest/api/luis/operation-groups) query. Add 60 minutes: |
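As a hedged illustration of those query-string parameters, the sketch below calls a V2 prediction endpoint with spell check enabled and a +60 minute timezone offset. The host, app ID, and the exact parameter names (`spellCheck`, `bing-spell-check-subscription-key`, `timezoneOffset`) are assumptions drawn from the V2 behavior described in the row above.

```python
import requests

# Placeholder values for a published V2 app.
PREDICTION_HOST = "https://<region>.api.cognitive.microsoft.com"
APP_ID = "<your-app-id>"
PREDICTION_KEY = "<your-prediction-key>"
BING_SPELL_CHECK_KEY = "<your-bing-spell-check-key>"

url = f"{PREDICTION_HOST}/luis/v2.0/apps/{APP_ID}"
params = {
    "subscription-key": PREDICTION_KEY,
    "q": "book 2 tickets to pariss",            # deliberately misspelled query
    "spellCheck": "true",                        # assumed spell-check switch
    "bing-spell-check-subscription-key": BING_SPELL_CHECK_KEY,
    "timezoneOffset": "60",                      # add 60 minutes for datetimeV2 resolution
}

response = requests.get(url, params=params)
response.raise_for_status()
print(response.json())
```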
ai-services | Luis Concept Devops Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-testing.md | When LUIS is training a model, such as an intent, it needs both positive data - The result of this non-deterministic training is that you may get a slightly [different prediction response between different training sessions](./luis-concept-prediction-score.md), usually for intents and/or entities where the [prediction score](./luis-concept-prediction-score.md) is not high. -If you want to disable non-deterministic training for those LUIS app versions that you're building for the purpose of testing, use the [Version settings API](/rest/api/cognitiveservices-luis/authoring/versions?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true) with the `UseAllTrainingData` setting set to `true`. +If you want to disable non-deterministic training for those LUIS app versions that you're building for the purpose of testing, use the [Version settings API](/rest/api/luis/versions) with the `UseAllTrainingData` setting set to `true`. ## Next steps |
ai-services | Luis Container Howto | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-howto.md | You can get your authoring key from the [LUIS portal](https://www.luis.ai/) by c Authoring APIs for packaged apps: -* [Published package API](/rest/api/cognitiveservices-luis/authoring/apps/package-published-application-as-gzip?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) -* [Not-published, trained-only package API](/rest/api/cognitiveservices-luis/authoring/apps/package-trained-application-as-gzip?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) +* [Published package API](/rest/api/luis/apps/package-published-application-as-gzip) +* [Not-published, trained-only package API](/rest/api/luis/apps/package-trained-application-as-gzip) ### The host computer Use the [`docker pull`](https://docs.docker.com/engine/reference/commandline/pul docker pull mcr.microsoft.com/azure-cognitive-services/language/luis:latest ``` -For a full description of available tags, such as `latest` used in the preceding command, see [LUIS](https://go.microsoft.com/fwlink/?linkid=2043204) on Docker Hub. +For a full description of available tags, such as `latest` used in the preceding command, see [LUIS](https://hub.docker.com/r/microsoft/azure-cognitive-services-language-luis) on Docker Hub. [!INCLUDE [Tip for using docker list](../includes/cognitive-services-containers-docker-list-tip.md)] Once the container is on the [host computer](#the-host-computer), use the follow 1. When you are done with the container, [import the endpoint logs](#import-the-endpoint-logs-for-active-learning) from the output mount in the LUIS portal and [stop](#stop-the-container) the container. 1. Use LUIS portal's [active learning](how-to/improve-application.md) on the **Review endpoint utterances** page to improve the app. -The app running in the container can't be altered. In order to change the app in the container, you need to change the app in the LUIS service using the [LUIS](https://www.luis.ai) portal or use the LUIS [authoring APIs](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true). Then train and/or publish, then download a new package and run the container again. +The app running in the container can't be altered. In order to change the app in the container, you need to change the app in the LUIS service using the [LUIS](https://www.luis.ai) portal or use the LUIS [authoring APIs](/rest/api/luis/operation-groups). Then train and/or publish, then download a new package and run the container again. The LUIS app inside the container can't be exported back to the LUIS service. Only the query logs can be uploaded. 
In this article, you learned concepts and workflow for downloading, installing, * Use more [Azure AI containers](../cognitive-services-container-support.md) <!-- Links - external -->-[download-published-package]: /rest/api/cognitiveservices-luis/authoring/apps/package-published-application-as-gzip?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true -[download-versioned-package]: /rest/api/cognitiveservices-luis/authoring/apps/package-trained-application-as-gzip?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true +[download-published-package]: /rest/api/luis/apps/package-published-application-as-gzip +[download-versioned-package]: /rest/api/luis/apps/package-trained-application-as-gzip [unsupported-dependencies]: luis-container-limitations.md#unsupported-dependencies-for-latest-container |
ai-services | Luis Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-glossary.md | Learn more about authoring your app programmatically from the [Developer referen ### Prediction endpoint -The LUIS prediction endpoint URL is where you submit LUIS queries after the [LUIS app](#application-app) is authored and published. The endpoint URL contains the region or custom subdomain of the published app as well as the app ID. You can find the endpoint on the **[Azure resources](luis-how-to-azure-subscription.md)** page of your app, or you can get the endpoint URL from the [Get App Info](/rest/api/cognitiveservices-luis/authoring/apps/get?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) API. +The LUIS prediction endpoint URL is where you submit LUIS queries after the [LUIS app](#application-app) is authored and published. The endpoint URL contains the region or custom subdomain of the published app as well as the app ID. You can find the endpoint on the **[Azure resources](luis-how-to-azure-subscription.md)** page of your app, or you can get the endpoint URL from the [Get App Info](/rest/api/luis/apps/get) API. Your access to the prediction endpoint is authorized with the LUIS prediction key. |
ai-services | Luis How To Azure Subscription | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-azure-subscription.md | An authoring resource lets you create, manage, train, test, and publish your app * 1 million authoring transactions * 1,000 testing prediction endpoint requests per month. -You can use the [v3.0-preview LUIS Programmatic APIs](/rest/api/cognitiveservices-luis/authoring/apps?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true) to manage authoring resources. +You can use the [v3.0-preview LUIS Programmatic APIs](/rest/api/luis/apps) to manage authoring resources. ## Prediction resource A prediction resource lets you query your prediction endpoint beyond the 1,000 r * The free (F0) prediction resource, which gives you 10,000 prediction endpoint requests monthly. * Standard (S0) prediction resource, which is the paid tier. -You can use the [v3.0-preview LUIS Endpoint API](/rest/api/cognitiveservices-luis/runtime/operation-groups?view=rest-cognitiveservices-luis-runtime-v3.0&preserve-view=true) to manage prediction resources. +You can use the [v3.0-preview LUIS Endpoint API](/rest/api/luis/operation-groups?view=rest-luis-v3.0-preview) to manage prediction resources. > [!Note] > * You can also use a [multi-service resource](../multi-service-resource.md?pivots=azcli) to get a single endpoint you can use for multiple Azure AI services. For automated processes like CI/CD pipelines, you can automate the assignment of az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv ``` -1. Use the token to request the LUIS runtime resources across subscriptions. Use the API to [get the LUIS Azure account](/rest/api/cognitiveservices-luis/authoring/azure-accounts/get-assigned?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) that your user account has access to. +1. Use the token to request the LUIS runtime resources across subscriptions. Use the API to [get the LUIS Azure account](/rest/api/luis/azure-accounts/get-assigned) that your user account has access to. This POST API requires the following values: For automated processes like CI/CD pipelines, you can automate the assignment of The API returns an array of JSON objects that represent your LUIS subscriptions. Returned values include the subscription ID, resource group, and resource name, returned as `AccountName`. Find the item in the array that's the LUIS resource that you want to assign to the LUIS app. -1. Assign the token to the LUIS resource by using the [Assign a LUIS Azure accounts to an application](/rest/api/cognitiveservices-luis/authoring/azure-accounts/assign-to-app?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) API. +1. Assign the token to the LUIS resource by using the [Assign a LUIS Azure accounts to an application](/rest/api/luis/azure-accounts/assign-to-app) API. This POST API requires the following values: When you unassign a resource, it's not deleted from Azure. It's only unlinked fr az account get-access-token --resource=https://management.core.windows.net/ --query accessToken --output tsv ``` -1. Use the token to request the LUIS runtime resources across subscriptions. Use the [Get LUIS Azure accounts API](/rest/api/cognitiveservices-luis/authoring/azure-accounts/get-assigned?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true), which your user account has access to. +1. 
Use the token to request the LUIS runtime resources across subscriptions. Use the [Get LUIS Azure accounts API](/rest/api/luis/azure-accounts/get-assigned), which your user account has access to. This POST API requires the following values: When you unassign a resource, it's not deleted from Azure. It's only unlinked fr The API returns an array of JSON objects that represent your LUIS subscriptions. Returned values include the subscription ID, resource group, and resource name, returned as `AccountName`. Find the item in the array that's the LUIS resource that you want to assign to the LUIS app. -1. Assign the token to the LUIS resource by using the [Unassign a LUIS Azure account from an application](/rest/api/cognitiveservices-luis/authoring/azure-accounts/remove-from-app?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) API. +1. Assign the token to the LUIS resource by using the [Unassign a LUIS Azure account from an application](/rest/api/luis/azure-accounts/remove-from-app) API. This DELETE API requires the following values: |
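A hedged sketch of the assignment step described above: an ARM bearer token (for example, the output of `az account get-access-token`) plus the authoring key are sent to an assumed `/azureaccounts` route along with the subscription ID, resource group, and account name of the prediction resource. Treat the path and field names as placeholders to confirm against the Assign a LUIS Azure account reference.

```python
import requests

AUTHORING_ENDPOINT = "https://<your-authoring-resource>.cognitiveservices.azure.com"
AUTHORING_KEY = "<your-authoring-key>"
APP_ID = "<your-app-id>"
ARM_TOKEN = "<token from az account get-access-token>"

# Assumed route for assigning a prediction (runtime) resource to an app.
url = f"{AUTHORING_ENDPOINT}/luis/authoring/v3.0-preview/apps/{APP_ID}/azureaccounts"
headers = {
    "Ocp-Apim-Subscription-Key": AUTHORING_KEY,
    "Authorization": f"Bearer {ARM_TOKEN}",
}
body = {
    "azureSubscriptionId": "<subscription-id>",
    "resourceGroup": "<resource-group-name>",
    "accountName": "<prediction-resource-name>",   # returned as AccountName by the list call
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print("Assignment result:", response.status_code)
```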
ai-services | Luis How To Manage Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-manage-versions.md | You can import a `.json` or a `.lu` version of your application. See the following links to view the REST APIs for importing and exporting applications: -* [Importing applications](/rest/api/cognitiveservices-luis/authoring/versions/import?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) -* [Exporting applications](/rest/api/cognitiveservices-luis/authoring/versions/export?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) +* [Importing applications](/rest/api/luis/versions/import) +* [Exporting applications](/rest/api/luis/versions/export) |
ai-services | Luis Reference Application Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-application-settings.md | Last updated 01/19/2024 [!INCLUDE [deprecation notice](./includes/deprecation-notice.md)] -These settings are stored in the [exported](/rest/api/cognitiveservices-luis/authoring/versions/export?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&tabs=HTTP&preserve-view=true) app and updated with the REST APIs or LUIS portal. +These settings are stored in the [exported](/rest/api/luis/versions/export) app and updated with the REST APIs or LUIS portal. Changing your app version settings resets your app training status to untrained. |
ai-services | Luis Reference Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-regions.md | Every authoring region has corresponding prediction regions that you can publish Single data residency means that the data doesn't leave the boundaries of the region. > [!Note]-> * Make sure to set `log=false` for [V3 APIs](/rest/api/cognitiveservices-luis/runtime/prediction/get-slot-prediction?view=rest-cognitiveservices-luis-runtime-v3.0&tabs=HTTP&preserve-view=true) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the runtime region. +> * Make sure to set `log=false` for [V3 APIs](/rest/api/luis/prediction/get-slot-prediction) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the runtime region. > * If `log=true`, data is returned to the authoring region for active learning. ## Publishing to Europe |
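To make the `log` behavior concrete, here's a hedged example of a V3 slot prediction request that sets `log=false`. The `/luis/prediction/v3.0/.../slots/production/predict` route and the parameter names are assumptions to confirm against the prediction reference.

```python
import requests

PREDICTION_ENDPOINT = "https://<your-prediction-resource>.cognitiveservices.azure.com"
PREDICTION_KEY = "<your-prediction-key>"
APP_ID = "<your-app-id>"

# Assumed V3 slot prediction route; "production" is the published slot name.
url = f"{PREDICTION_ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}/slots/production/predict"
params = {
    "query": "turn on the bedroom lights",
    "log": "false",          # keep utterances out of active learning and the authoring region
    "subscription-key": PREDICTION_KEY,
}

response = requests.get(url, params=params)
response.raise_for_status()
print(response.json()["prediction"]["topIntent"])
```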
ai-services | Luis Reference Response Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-response-codes.md | Last updated 01/19/2024 [!INCLUDE [deprecation notice](./includes/deprecation-notice.md)] -The [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) and [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) APIs return HTTP response codes. While response messages include information specific to a request, the HTTP response status code is general. +The [API](/rest/api/luis/operation-groups) returns HTTP response codes. While response messages include information specific to a request, the HTTP response status code is general. ## Common status codes-The following table lists some of the most common HTTP response status codes for the [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) and [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) APIs: +The following table lists some of the most common HTTP response status codes for the [API](/rest/api/luis/operation-groups): |Code|API|Explanation| |:--|--|--| The following table lists some of the most common HTTP response status codes for ## Next steps -* REST API [authoring](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true) and [endpoint](/rest/api/cognitiveservices-luis/runtime/operation-groups?view=rest-cognitiveservices-luis-runtime-v3.0&preserve-view=true) documentation +* [REST API documentation](/rest/api/luis/operation-groups) |
ai-services | Luis Tutorial Node Import Utterances Csv | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-tutorial-node-import-utterances-csv.md | The following code adds the entities to the LUIS app. Copy or [download](https:/ ## Add utterances-Once the entities and intents have been defined in the LUIS app, you can add the utterances. The following code uses the [Utterances_AddBatch](/rest/api/cognitiveservices-luis/authoring/examples/batch?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) API, which allows you to add up to 100 utterances at a time. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_upload.js) it, and save it into `_upload.js`. +Once the entities and intents have been defined in the LUIS app, you can add the utterances. The following code uses the [Utterances_AddBatch](/rest/api/luis/examples/batch) API, which allows you to add up to 100 utterances at a time. Copy or [download](https://github.com/Azure-Samples/cognitive-services-language-understanding/blob/master/examples/build-app-programmatically-csv/_upload.js) it, and save it into `_upload.js`. [!code-javascript[Node.js code for adding utterances](~/samples-luis/examples/build-app-programmatically-csv/_upload.js)] Once the script completes, you can sign in to [LUIS](luis-reference-regions.md) ## Additional resources This sample application uses the following LUIS APIs:-- [create app](/rest/api/cognitiveservices-luis/authoring/apps/add?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)-- [add intents](/rest/api/cognitiveservices-luis/authoring/features/add-intent-feature?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)-- [add entities](/rest/api/cognitiveservices-luis/authoring/features/add-entity-feature?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)-- [add utterances](/rest/api/cognitiveservices-luis/authoring/examples/add?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true)+- [create app](/rest/api/luis/apps/add) +- [add intents](/rest/api/luis/features/add-intent-feature) +- [add entities](/rest/api/luis/features/add-entity-feature) +- [add utterances](/rest/api/luis/examples/add) |
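As a rough illustration of the batch call (up to 100 labeled utterances per request), the sketch below posts two examples to an assumed v3.0-preview `examples` route. The payload fields (`text`, `intentName`, `entityLabels`) mirror the shapes used by the sample scripts but should be verified against the Utterances_AddBatch reference.

```python
import requests

AUTHORING_ENDPOINT = "https://<your-authoring-resource>.cognitiveservices.azure.com"
AUTHORING_KEY = "<your-authoring-key>"
APP_ID = "<your-app-id>"
VERSION_ID = "0.1"

url = f"{AUTHORING_ENDPOINT}/luis/authoring/v3.0-preview/apps/{APP_ID}/versions/{VERSION_ID}/examples"
headers = {"Ocp-Apim-Subscription-Key": AUTHORING_KEY}

# A batch of labeled utterances (maximum of 100 per request).
batch = [
    {
        "text": "order a large pepperoni pizza",
        "intentName": "OrderPizza",
        "entityLabels": [
            # "large" spans characters 8-12 of the utterance text.
            {"entityName": "Size", "startCharIndex": 8, "endCharIndex": 12},
        ],
    },
    {
        "text": "cancel my order",
        "intentName": "CancelOrder",
        "entityLabels": [],
    },
]

response = requests.post(url, headers=headers, json=batch)
response.raise_for_status()
print(response.json())
```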
ai-services | Luis User Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-user-privacy.md | Last updated 01/19/2024 Delete customer data to ensure privacy and compliance. ## Summary of customer data request featuresΓÇï-Language Understanding Intelligent Service (LUIS) preserves customer content to operate the service, but the LUIS user has full control over viewing, exporting, and deleting their data. This can be done through the LUIS web [portal](luis-reference-regions.md) or the [LUIS Authoring (also known as Programmatic) APIs](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true). +Language Understanding Intelligent Service (LUIS) preserves customer content to operate the service, but the LUIS user has full control over viewing, exporting, and deleting their data. This can be done through the LUIS web [portal](luis-reference-regions.md) or the [LUIS Authoring (also known as Programmatic) APIs](/rest/api/luis/operation-groups). [!INCLUDE [GDPR-related guidance](~/reusable-content/ce-skilling/azure/includes/gdpr-intro-sentence.md)] LUIS users have full control to delete any user content, either through the LUIS | | **User Account** | **Application** | **Example Utterance(s)** | **End-user queries** | | | | | | | | **Portal** | [Link](luis-concept-data-storage.md#delete-an-account) | [Link](how-to/sign-in.md) | [Link](luis-concept-data-storage.md#utterances-in-an-intent) | [Active learning utterances](how-to/improve-application.md)<br>[Logged Utterances](luis-concept-data-storage.md#disable-logging-utterances) |-| **APIs** | [Link](/rest/api/cognitiveservices-luis/authoring/azure-accounts/remove-from-app?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/apps/delete?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/examples/delete?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/versions/delete-unlabelled-utterance?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | +| **APIs** | [Link](/rest/api/luis/azure-accounts/remove-from-app) | [Link](/rest/api/luis/apps/delete) | [Link](/rest/api/luis/examples/delete) | [Link](/rest/api/luis/versions/delete-unlabelled-utterance) | ## Exporting customer data LUIS users have full control to view the data on the portal, however it must be | | **User Account** | **Application** | **Utterance(s)** | **End-user queries** | | | | | | |-| **APIs** | [Link](/rest/api/cognitiveservices-luis/authoring/azure-accounts/list-user-luis-accounts?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/versions/export?view=rest-cognitiveservices-luis-authoring-v2.0&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/examples/list?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | [Link](/rest/api/cognitiveservices-luis/authoring/apps/download-query-logs?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) | +| **APIs** | [Link](/rest/api/luis/azure-accounts/list-user-luis-accounts) | [Link](/rest/api/luis/versions/export) | [Link](/rest/api/luis/examples/list) | 
[Link](/rest/api/luis/apps/download-query-logs) | ## Location of active learning |
ai-services | Reference Pattern Syntax | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/reference-pattern-syntax.md | The words of the book title are not confusing to LUIS because LUIS knows where t ## Explicit lists -create an [Explicit List](/rest/api/cognitiveservices-luis/authoring/model/add-explicit-list-item?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) through the authoring API to allow the exception when: +create an [Explicit List](/rest/api/luis/model/add-explicit-list-item) through the authoring API to allow the exception when: * Your pattern contains a [Pattern.any](concepts/entities.md#patternany-entity) * And that pattern syntax allows for the possibility of an incorrect entity extraction based on the utterance. In the following utterances, the **subject** and **person** entity are extracted In the preceding table, the subject should be `the man from La Mancha` (a book title) but because the subject includes the optional word `from`, the title is incorrectly predicted. -To fix this exception to the pattern, add `the man from la mancha` as an explicit list match for the {subject} entity using the [authoring API for explicit list](/rest/api/cognitiveservices-luis/authoring/model/add-explicit-list-item?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true). +To fix this exception to the pattern, add `the man from la mancha` as an explicit list match for the {subject} entity using the [authoring API for explicit list](/rest/api/luis/model/add-explicit-list-item). ## Syntax to mark optional text in a template utterance |
ai-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/role-based-access-control.md | A user that should only be validating and reviewing LUIS applications, typically :::column-end::: :::column span=""::: All GET APIs under: - * [LUIS Programmatic v3.0-preview](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true) - * [LUIS Programmatic v2.0 APIs](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v2.0&preserve-view=true) + * [LUIS Programmatic v3.0-preview](/rest/api/luis/operation-groups?view=rest-luis-v3.0-preview) + * [LUIS Programmatic v2.0 APIs](/rest/api/luis/operation-groups?view=rest-luis-v2.0) All the APIs under: * LUIS Endpoint APIs v2.0- * [LUIS Endpoint APIs v3.0](/rest/api/cognitiveservices-luis/runtime/operation-groups?view=rest-cognitiveservices-luis-runtime-v3.0&preserve-view=true) + * [LUIS Endpoint APIs v3.0](/rest/api/luis/operation-groups?view=rest-luis-v3.0) All the Batch Testing Web APIs :::column-end::: :::row-end::: A user that is responsible for building and modifying LUIS application, as a col All POST, PUT and DELETE APIs under: - * [LUIS Programmatic v3.0-preview](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v3.0-preview&preserve-view=true) - * [LUIS Programmatic v2.0 APIs](/rest/api/cognitiveservices-luis/authoring/operation-groups?view=rest-cognitiveservices-luis-authoring-v2.0&preserve-view=true) + * [LUIS Programmatic v3.0-preview](/rest/api/luis/operation-groups?view=rest-luis-v3.0-preview) + * [LUIS Programmatic v2.0 APIs](/rest/api/luis/operation-groups?view=rest-luis-v2.0) Except for- * [Delete application](/rest/api/cognitiveservices-luis/authoring/apps/delete?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) + * [Delete application](/rest/api/luis/apps/delete) * Move app to another LUIS authoring Azure resource- * [Publish an application](/rest/api/cognitiveservices-luis/authoring/apps/publish?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) - * [Update application settings](/rest/api/cognitiveservices-luis/authoring/apps/update-settings?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) - * [Assign a LUIS azure accounts to an application](/rest/api/cognitiveservices-luis/authoring/azure-accounts/assign-to-app?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) - * [Remove an assigned LUIS azure accounts from an application](/rest/api/cognitiveservices-luis/authoring/azure-accounts/remove-from-app?view=rest-cognitiveservices-luis-authoring-v3.0-preview&tabs=HTTP&preserve-view=true) + * [Publish an application](/rest/api/luis/apps/publish) + * [Update application settings](/rest/api/luis/apps/update-settings) + * [Assign a LUIS azure accounts to an application](/rest/api/luis/azure-accounts/assign-to-app) + * [Remove an assigned LUIS azure accounts from an application](/rest/api/luis/azure-accounts/remove-from-app) :::column-end::: :::row-end::: |
ai-services | What Is Luis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/what-is-luis.md | Learn more about planning and building your application [here](concepts/applicat [bot-framework]: /bot-framework/ [flow]: /connectors/luis/ [authoring-apis]: https://go.microsoft.com/fwlink/?linkid=2092087-[endpoint-apis]: https://go.microsoft.com/fwlink/?linkid=2092356 +[endpoint-apis]: /rest/api/luis/operation-groups [qnamaker]: https://qnamaker.ai/ |
ai-services | Limits And Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/limits-and-quotas.md | There are two tiers of keys for the Custom Vision service. You can sign up for a |Min labeled images per Tag, Classification (50+ recommended) |5|5| |Min labeled images per Tag, Object Detection (50+ recommended)|15|15| |How long prediction images stored|30 days|30 days|-|[Prediction](https://go.microsoft.com/fwlink/?linkid=865445) operations with storage (Transactions Per Second)|2|10| -|[Prediction](https://go.microsoft.com/fwlink/?linkid=865445) operations without storage (Transactions Per Second)|2|20| +|[Prediction](/rest/api/customvision/predictions) operations with storage (Transactions Per Second)|2|10| +|[Prediction](/rest/api/customvision/predictions) operations without storage (Transactions Per Second)|2|20| |[TrainProject](https://go.microsoft.com/fwlink/?linkid=865446) (API calls Per Second)|2|10| |[Other API calls](https://go.microsoft.com/fwlink/?linkid=865446) (Transactions Per Second)|10|10| |Accepted image types|jpg, png, bmp, gif|jpg, png, bmp, gif| |
ai-services | Storage Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/storage-integration.md | The `"exportStatus"` field may be either `"ExportCompleted"` or `"ExportFailed"` In this guide, you learned how to copy and back up a project between Custom Vision resources. Next, explore the API reference docs to see what else you can do with Custom Vision. * [REST API reference documentation (training)](/rest/api/customvision/training/operation-groups?view=rest-customvision-training-v3.3)-* [REST API reference documentation (prediction)](/rest/api/customvision/prediction/operation-groups?view=rest-customvision-prediction-v3.1) +* [REST API reference documentation (prediction)](/rest/api/customvision/predictions) |
ai-services | Use Prediction Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/use-prediction-api.md | -> This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](/rest/api/customvision/prediction/operation-groups?view=rest-customvision-prediction-v3.1). +> This document demonstrates use of the .NET client library for C# to submit an image to the Prediction API. For more information and examples, see the [Prediction API reference](/rest/api/customvision/predictions). ## Setup |
ai-services | Developer Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/developer-guide.md | The question answering APIs enables you to use the [question answering](../quest As you use this API in your application, see the following reference documentation for additional information. -* [Prebuilt API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers-from-text) - Use the prebuilt runtime API to answer specified question using text provided by users. -* [Custom authoring API](/rest/api/cognitiveservices/questionanswering/question-answering-projects) - Create a knowledge base to answer questions. -* [Custom runtime API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) - Query and knowledge base to generate an answer. +* [Prebuilt API](/azure/ai-services/language-service/question-answering/how-to/prebuilt) - Use the prebuilt runtime API to answer specified question using text provided by users. +* [Custom authoring API](/azure/ai-services/language-service/question-answering/how-to/authoring) - Create a knowledge base to answer questions. |
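A hedged sketch of calling the prebuilt question answering runtime against user-provided text. The `:query-text` route, the API version, and the `records` payload are assumptions to check against the prebuilt API reference.

```python
import requests

LANGUAGE_ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
LANGUAGE_KEY = "<your-language-key>"

# Assumed prebuilt route: answer a question directly from supplied text records.
url = f"{LANGUAGE_ENDPOINT}/language/:query-text"
params = {"api-version": "2021-10-01"}
headers = {"Ocp-Apim-Subscription-Key": LANGUAGE_KEY}
body = {
    "question": "How long does the battery last?",
    "language": "en",
    "records": [
        {"id": "1", "text": "The battery lasts up to 12 hours on a single charge."},
        {"id": "2", "text": "Charging takes about 90 minutes with the bundled adapter."},
    ],
}

response = requests.post(url, params=params, headers=headers, json=body)
response.raise_for_status()
for answer in response.json().get("answers", []):
    print(answer.get("confidenceScore"), answer.get("answer"))
```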
ai-services | Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/role-based-access-control.md | A user that should only be validating and reviewing the Language apps, typically All GET APIs under: * [Language authoring conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring) * [Language authoring text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)- * [Question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) + * [Question answering projects](/rest/api/questionanswering/question-answering-projects) Only `TriggerExportProjectJob` POST operation under: * [Language authoring conversational language understanding export API](/rest/api/language/2023-04-01/text-analysis-authoring/export) * [Language authoring text analysis export API](/rest/api/language/2023-04-01/text-analysis-authoring/export) Only Export POST operation under: - * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export) + * [Question Answering Projects](/rest/api/questionanswering/question-answering-projects/export) All the Batch Testing Web APIs *[Language Runtime CLU APIs](/rest/api/language) *[Language Runtime Text Analysis APIs](https://go.microsoft.com/fwlink/?linkid=2239169) A user that is responsible for building and modifying an application, as a colla * All POST, PUT and PATCH APIs under: * [Language conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring) * [Language text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)- * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) + * [question answering projects](/rest/api/questionanswering/question-answering-projects) Except for * Delete deployment * Delete trained model These users are the gatekeepers for the Language applications in production envi All APIs available under: * [Language authoring conversational language understanding APIs](/rest/api/language/2023-04-01/conversational-analysis-authoring) * [Language authoring text analysis APIs](/rest/api/language/2023-04-01/text-analysis-authoring)- * [question answering projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects) + * [question answering projects](/rest/api/questionanswering/question-answering-projects) :::column-end::: :::row-end::: |
ai-services | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/concepts/best-practices.md | Question answering takes casing into account but it's intelligent enough to unde ### How are question answer pairs prioritized for multi-turn questions? -When a project has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other question answer pairs, for the next query we give slight preference to all the children question answer pairs, sibling question answer pairs, and grandchildren question answer pairs in that order. Along with any query, the [custom question answering REST API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) expects a `context` object with the property `previousQnAId`, which denotes the last top answer. Based on this previous `QnAID`, all the related `QnAs` are boosted. +When a project has hierarchical relationships (either added manually or via extraction) and the previous response was an answer related to other question answer pairs, for the next query we give slight preference to all the children question answer pairs, sibling question answer pairs, and grandchildren question answer pairs in that order. Along with any query, the [custom question answering REST API](/rest/api/questionanswering/question-answering/get-answers) expects a `context` object with the property `previousQnAId`, which denotes the last top answer. Based on this previous `QnAID`, all the related `QnAs` are boosted. ### How are accents treated? |
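To show how that `context` object rides along with a query, here's a hedged sketch of a custom question answering get-answers request that passes the previous top answer's ID. The `:query-knowledgebases` route, the project and deployment parameter names, and the `previousUserQuery` field are assumptions based on the runtime API described in the row above.

```python
import requests

LANGUAGE_ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
LANGUAGE_KEY = "<your-language-key>"
PROJECT_NAME = "<your-project-name>"

# Assumed custom question answering runtime route.
url = f"{LANGUAGE_ENDPOINT}/language/:query-knowledgebases"
params = {
    "projectName": PROJECT_NAME,
    "deploymentName": "production",
    "api-version": "2021-10-01",
}
headers = {"Ocp-Apim-Subscription-Key": LANGUAGE_KEY}
body = {
    "question": "How do I reset it?",
    "top": 3,
    # Boost children, sibling, and grandchildren QnAs of the previous top answer.
    "context": {
        "previousQnAId": 42,                        # ID of the last top answer
        "previousUserQuery": "Tell me about the router settings",
    },
}

response = requests.post(url, params=params, headers=headers, json=body)
response.raise_for_status()
print(response.json())
```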
ai-services | Authoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/authoring.md | Last updated 12/19/2023 The custom question answering Authoring API is used to automate common tasks like adding new question answer pairs, as well as creating, publishing, and maintaining projects. > [!NOTE]-> Authoring functionality is available via the REST API and [Authoring SDK (preview)](/dotnet/api/overview/azure/ai.language.questionanswering-readme). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects). +> Authoring functionality is available via the REST API and [Authoring SDK (preview)](/dotnet/api/overview/azure/ai.language.questionanswering-readme). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/questionanswering/question-answering-projects). ## Prerequisites |
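The article's own examples use cURL; as an equally hedged Python sketch, the call below lists question answering projects through the authoring API, assuming the `/language/query-knowledgebases/projects` route and the 2021-10-01 API version.

```python
import requests

LANGUAGE_ENDPOINT = "https://<your-language-resource>.cognitiveservices.azure.com"
LANGUAGE_KEY = "<your-language-key>"

# Assumed authoring route for listing custom question answering projects.
url = f"{LANGUAGE_ENDPOINT}/language/query-knowledgebases/projects"
params = {"api-version": "2021-10-01"}
headers = {"Ocp-Apim-Subscription-Key": LANGUAGE_KEY}

response = requests.get(url, params=params, headers=headers)
response.raise_for_status()
for project in response.json().get("value", []):
    print(project.get("projectName"), project.get("lastModifiedDateTime"))
```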
ai-services | Migrate Qnamaker To Question Answering | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/migrate-qnamaker-to-question-answering.md | Here are [detailed steps on migration scenario 2](https://github.com/Azure/azure Learn more about the [pre-built API](../../../QnAMaker/How-To/using-prebuilt-api.md) -Learn more about the [custom question answering Get Answers REST API](/rest/api/cognitiveservices/questionanswering/question-answering/get-answers) +Learn more about the [custom question answering Get Answers REST API](/rest/api/questionanswering/question-answering/get-answers) ## Migration steps |
ai-services | Smart Url Refresh | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/smart-url-refresh.md | You can trigger a URL refresh in Language Studio by opening your project, select :::image type="content" source="../media/question-answering/refresh-url.png" alt-text="screenshot of language studio with refresh URL button highlighted."::: -You can also trigger a refresh programmatically using the REST API. See the **[Update Sources](/rest/api/cognitiveservices/questionanswering/question-answering-projects/update-sources)** reference documentation for parameters and a sample request. +You can also trigger a refresh programmatically using the REST API. See the **[Update Sources](/rest/api/questionanswering/question-answering-projects/update-sources)** reference documentation for parameters and a sample request. ## Smart refresh behavior If these two QnA pairs have individual prompts attached to them (for example, Q1 ## Next steps * [Custom question answering quickstart](../quickstart/sdk.md?pivots=studio)-* [Update Sources API reference](/rest/api/cognitiveservices/questionanswering/question-answering-projects/update-sources) +* [Update Sources API reference](/rest/api/questionanswering/question-answering-projects/update-sources) |
ai-services | Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/quickstart/sdk.md | If you want to clean up and remove an Azure AI services subscription, you can de To learn about automating your custom question answering pipeline consult the REST API documentation. Currently authoring functionality is only available via REST API: -* [Authoring API reference](/rest/api/cognitiveservices/questionanswering/question-answering-projects) +* [Authoring API reference](/rest/api/questionanswering/question-answering-projects) * [Authoring API cURL examples](../how-to/authoring.md)-* [Runtime API reference](/rest/api/cognitiveservices/questionanswering/question-answering) +* [Runtime API reference](/rest/api/questionanswering/question-answering) ## Next steps |
ai-services | Assistants Reference Messages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference-messages.md | description: Learn how to use Azure OpenAI's Python & REST API messages with Ass Previously updated : 02/01/2024 Last updated : 07/25/2024 recommendations: false curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/mess -## List message files --```http -GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}/files?api-version=2024-05-01-preview -``` --Returns a list of message files. --|Parameter| Type | Required | Description | -||||| -|`thread_id` | string | Required | The ID of the thread that the message and files belong to. | -|`message_id`| string | Required | The ID of the message that the files belong to. | --**Query Parameters** --|Name | Type | Required | Description | -| | | | | -| `limit` | integer | Optional - Defaults to 20 |A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.| -| `order` | string | Optional - Defaults to desc |Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.| -| `after` | string | Optional | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.| -| `before` | string | Optional | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.| --### Returns --A list of [message file](#message-file-object) objects --### Example list message files request --# [Python 1.x](#tab/python) --```python -from openai import AzureOpenAI - -client = AzureOpenAI( - api_key=os.getenv("AZURE_OPENAI_API_KEY"), - api_version="2024-05-01-preview", - azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") - ) --message_files = client.beta.threads.messages.files.list( - thread_id="thread_abc123", - message_id="msg_abc123" -) -print(message_files) --``` --# [REST](#tab/rest) --```console -curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/files?api-version=2024-05-01-preview \ - -H "api-key: $AZURE_OPENAI_API_KEY" \ - -H 'Content-Type: application/json' -``` --- ## Retrieve message ```http The [message](#message-object) object matching the specified ID. ```python from openai import AzureOpenAI- + client = AzureOpenAI( api_key=os.getenv("AZURE_OPENAI_API_KEY"), api_version="2024-05-01-preview", curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/mess -## Retrieve message file +## Modify message ```http-GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}/files/{file_id}?api-version=2024-05-01-preview +POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-05-01-preview ``` -Retrieves a message file. +Modifies a message. **Path parameters** |Parameter| Type | Required | Description | |||||-|`thread_id` | string | Required | The ID of the thread, which the message and file belongs to. | -|`message_id`| string | Required | The ID of the message that the file belongs to. 
| -|`file_id` | string | Required | The ID of the file being retrieved. | +|`thread_id` | string | Required | The ID of the thread to which the message belongs. | +|`message_id`| string | Required | The ID of the message to modify. | ++**Request body** -**Returns** +|Parameter| Type | Required | Description | +||||| +| metadata | map| Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.| -The [message file](#message-file-object) object. +### Returns -### Example retrieve message file request +The modified [message](#message-object) object. # [Python 1.x](#tab/python) client = AzureOpenAI( azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") ) -message_files = client.beta.threads.messages.files.retrieve( - thread_id="thread_abc123", - message_id="msg_abc123", - file_id="assistant-abc123" +message = client.beta.threads.messages.update( + message_id="msg_abc12", + thread_id="thread_abc123", + metadata={ + "modified": "true", + "user": "abc123", + }, )-print(message_files) +print(message) ``` # [REST](#tab/rest) ```console-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}/files/{file_id}?api-version=2024-05-01-preview +curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-05-01-preview ``` \ -H "api-key: $AZURE_OPENAI_API_KEY" \- -H 'Content-Type: application/json' + -H 'Content-Type: application/json' \ + -d '{ + "metadata": { + "modified": "true", + "user": "abc123" + } + }' + ``` -## Modify message +## Delete message + ```http-POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-05-01-preview +DELETE https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-05-01-preview ``` -Modifies a message. +Deletes a message. **Path parameters** Modifies a message. |`thread_id` | string | Required | The ID of the thread to which the message belongs. | |`message_id`| string | Required | The ID of the message to modify. | -**Request body** --|Parameter| Type | Required | Description | -||||| -| metadata | map| Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.| - ### Returns -The modified [message](#message-object) object. +The deletion status of the [message](#message-object) object. 
# [Python 1.x](#tab/python) ```python from openai import AzureOpenAI- client = AzureOpenAI( api_key=os.getenv("AZURE_OPENAI_API_KEY"), api_version="2024-05-01-preview", azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") ) -message = client.beta.threads.messages.update( +deleted_message = client.beta.threads.messages.delete( message_id="msg_abc12", thread_id="thread_abc123",- metadata={ - "modified": "true", - "user": "abc123", - }, )-print(message) +print(deleted_message) ``` # [REST](#tab/rest) ```console-curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-05-01-preview -``` \ +curl -x DELETE https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-05-01-preview \ -H "api-key: $AZURE_OPENAI_API_KEY" \- -H 'Content-Type: application/json' \ - -d '{ - "metadata": { - "modified": "true", - "user": "abc123" - } - }' - + -H 'Content-Type: application/json' ``` Represents a message within a thread. | `run_id` | string or null |If applicable, the ID of the run associated with the authoring of this message.| | `file_ids` | array |A list of file IDs that the assistant should use. Useful for tools like retrieval and code_interpreter that can access files. A maximum of 10 files can be attached to a message.| | `metadata` | map |Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|--## Message file object --A list of files attached to a message. --|Name | Type | Description | -| | | | -| `id`| string | The identifier, which can be referenced in API endpoints.| -|`object`|string| The object type, which is always `thread.message.file`.| -|`created_at`|integer | The Unix timestamp (in seconds) for when the message file was created.| -|`message_id`| string | The ID of the message that the File is attached to.| |
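The request/response pairs above map directly onto the Python SDK's `client.beta.threads.messages` helpers. The following is a minimal end-to-end sketch, assuming the `2024-05-01-preview` API version shown above and placeholder environment variables for the key and endpoint; the message text and metadata values are illustrative only. (Note that the REST delete example's `-x DELETE` flag should be the uppercase `-X` method switch in curl.)

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-05-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

# Create a thread and add a user message to it.
thread = client.beta.threads.create()
message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize the quarterly report.",  # placeholder prompt
)

# Modify the message by attaching metadata (up to 16 key-value pairs).
message = client.beta.threads.messages.update(
    thread_id=thread.id,
    message_id=message.id,
    metadata={"modified": "true", "user": "abc123"},
)

# Delete the message and inspect the deletion status.
deleted = client.beta.threads.messages.delete(
    thread_id=thread.id,
    message_id=message.id,
)
print(deleted)
```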
ai-services | Assistants Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference.md | description: Learn how to use Azure OpenAI's Python & REST API with Assistants. Previously updated : 06/13/2024 Last updated : 07/25/2024 recommendations: false curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2 -## Create assistant file --```http -POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files?api-version=2024-05-01-preview -``` --Create an assistant file by attaching a `File` to an `assistant`. --**Path parameters** --|Parameter| Type | Required | Description | -||||| -|`assistant_id`| string | Required | The ID of the assistant that the file should be attached to. | --**Request body** --| Name | Type | Required | Description | -|||| -| file_id | string | Required | A File ID (with purpose="assistants") that the assistant should use. Useful for tools like code_interpreter that can access files. | --### Returns --An [assistant file](#assistant-file-object) object. --### Example create assistant file request --# [Python 1.x](#tab/python) --```python -from openai import AzureOpenAI - -client = AzureOpenAI( - api_key=os.getenv("AZURE_OPENAI_API_KEY"), - api_version="2024-05-01-preview", - azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") - ) --assistant_file = client.beta.assistants.files.create( - assistant_id="asst_abc123", - file_id="assistant-abc123" -) -print(assistant_file) -``` --# [REST](#tab/rest) --```console -curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files?api-version=2024-05-01-preview \ - -H "api-key: $AZURE_OPENAI_API_KEY" \ - -H 'Content-Type: application/json' \ - -d '{ - "file_id": "assistant-abc123" - }' -``` --- ## List assistants ```http curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2 -## List assistant files --```http -GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files?api-version=2024-05-01-preview -``` --Returns a list of assistant files. --**Path parameters** --|Parameter| Type | Required | Description | -||||| -| assistant_id | string | Required | The ID of the assistant the file belongs to. | --**Query parameters** --|Parameter| Type | Required | Description | -||||| -| `limit` | integer | Optional | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.| -| `order` | string | Optional - Defaults to desc | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.| -| `after` | string | Optional | A cursor for use in pagination. `after` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. | -|`before`| string | Optional | A cursor for use in pagination. `before` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. 
| --### Returns --A list of [assistant file](#assistant-file-object) objects --### Example list assistant files --# [Python 1.x](#tab/python) --```python -from openai import AzureOpenAI - -client = AzureOpenAI( - api_key=os.getenv("AZURE_OPENAI_API_KEY"), - api_version="2024-05-01-preview", - azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") - ) --assistant_files = client.beta.assistants.files.list( - assistant_id="asst_abc123" -) -print(assistant_files) -``` --# [REST](#tab/rest) --```console -curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant-id}/files?api-version=2024-05-01-preview \ - -H "api-key: $AZURE_OPENAI_API_KEY" \ - -H 'Content-Type: application/json' -``` -- ## Retrieve assistant curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant-id -## Retrieve assistant file --```http -GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files/{file-id}?api-version=2024-05-01-preview -``` --Retrieves an Assistant file. --**Path parameters** --|Parameter| Type | Required | Description | -|||| -| assistant_id | string | Required | The ID of the assistant the file belongs to. | -|file_id| string | Required | The ID of the file we're getting | --### Returns --The [assistant file](#assistant-file-object) object matching the specified ID --### Example retrieve assistant file --# [Python 1.x](#tab/python) --```python -client = AzureOpenAI( - api_key=os.getenv("AZURE_OPENAI_API_KEY"), - api_version="2024-05-01-preview", - azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") - ) --assistant_file = client.beta.assistants.files.retrieve( - assistant_id="asst_abc123", - file_id="assistant-abc123" -) -print(assistant_file) -``` --# [REST](#tab/rest) --```console -curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant-id}/files/{file-id}?api-version=2024-05-01-preview \ - -H "api-key: $AZURE_OPENAI_API_KEY" \ - -H 'Content-Type: application/json' -``` --- ## Modify assistant ```http curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant-id -## Delete assistant file --```http -DELETE https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files/{file-id}?api-version=2024-05-01-preview -``` --Delete an assistant file. --**Path parameters** --|Parameter| Type | Required | Description | -||||| -| `assistant_id` | string | Required | The ID of the assistant the file belongs to. | -| `file_id` | string | Required | The ID of the file to delete | --**Returns** --File deletion status --### Example delete assistant file --# [Python 1.x](#tab/python) --```python -client = AzureOpenAI( - api_key=os.getenv("AZURE_OPENAI_API_KEY"), - api_version="2024-05-01-preview", - azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") - ) --deleted_assistant_file = client.beta.assistants.files.delete( - assistant_id="asst_abc123", - file_id="assistant-abc123" -) -print(deleted_assistant_file) --``` --# [REST](#tab/rest) --```console -curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files/{file-id}?api-version=2024-05-01-preview -``` \ - -H "api-key: $AZURE_OPENAI_API_KEY" \ - -H 'Content-Type: application/json' \ - -X DELETE -``` --- ## File upload API reference Assistants use the [same API for file upload as fine-tuning](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-05-01-preview&tabs=HTTP&preserve-view=true). 
When uploading a file you have to specify an appropriate value for the [purpose parameter](/rest/api/azureopenai/files/upload?view=rest-azureopenai-2024-05-01-preview&tabs=HTTP#purpose&preserve-view=true). Assistants use the [same API for file upload as fine-tuning](/rest/api/azureopen | `tools` | array | A list of tool enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, or function. A `function` description can be a maximum of 1,024 characters.| | `file_ids` | array | A list of file IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.| | `metadata` | map | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|--## Assistant file object --| Field | Type | Description | -|||| -| `id`| string | The identifier, which can be referenced in API endpoints.| -|`object`| string | The object type, which is always `assistant.file` | -|`created_at` | integer | The Unix timestamp (in seconds) for when the assistant file was created.| -|`assistant_id` | string | The assistant ID that the file is attached to. | |
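Because Assistants share the fine-tuning file upload API, a file intended for an assistant is uploaded with `purpose="assistants"` and then referenced by its ID (for example from the assistant's `file_ids` list described above). A minimal sketch, assuming a local file named `data.csv` and the same placeholder environment variables used in the other examples:

```python
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-05-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

# Upload a file for use with Assistants tools such as code_interpreter.
uploaded = client.files.create(
    file=open("data.csv", "rb"),
    purpose="assistants",
)
print(uploaded.id)

# Confirm the file is available before referencing it from an assistant.
print(client.files.retrieve(uploaded.id).status)
```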
ai-services | Use Your Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md | These estimates will vary based on the values set for the above parameters. For The estimates also depend on the nature of the documents and questions being asked. For example, if the questions are open-ended, the responses are likely to be longer. Similarly, a longer system message would contribute to a longer prompt that consumes more tokens, and if the conversation history is long, the prompt will be longer. -| Model | Max tokens for system message | Max tokens for model response | -|--|--|--| -| GPT-35-0301 | 400 | 1500 | -| GPT-35-0613-16K | 1000 | 3200 | -| GPT-4-0613-8K | 400 | 1500 | -| GPT-4-0613-32K | 2000 | 6400 | --The table above shows the maximum number of tokens that can be used for the [system message](#system-message) and the model response. Additionally, the following also consume tokens: +| Model | Max tokens for system message | +|--|--| +| GPT-35-0301 | 400 | +| GPT-35-0613-16K | 1000 | +| GPT-4-0613-8K | 400 | +| GPT-4-0613-32K | 2000 | +| GPT-35-turbo-0125 | 2000 | +| GPT-4-turbo-0409 | 4000 | +| GPT-4o | 4000 | +| GPT-4o-mini | 4000 | ++The table above shows the maximum number of tokens that can be used for the [system message](#system-message). To see the maximum tokens for the model response, see the [models article](./models.md#gpt-4-and-gpt-4-turbo-models). Additionally, the following also consume tokens: |
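The per-model system message limits in the table lend themselves to a quick pre-flight check. This is a minimal sketch, assuming tiktoken's `cl100k_base` encoding is an acceptable approximation of the deployed model's tokenizer; the limit values are taken from the table above.

```python
import tiktoken

# Maximum system message tokens, taken from the table above.
SYSTEM_MESSAGE_LIMITS = {
    "GPT-35-0301": 400,
    "GPT-35-0613-16K": 1000,
    "GPT-4-0613-8K": 400,
    "GPT-4-0613-32K": 2000,
    "GPT-35-turbo-0125": 2000,
    "GPT-4-turbo-0409": 4000,
    "GPT-4o": 4000,
    "GPT-4o-mini": 4000,
}

def system_message_fits(model: str, system_message: str) -> bool:
    """Return True if the system message is within the model's token budget."""
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(system_message)) <= SYSTEM_MESSAGE_LIMITS[model]

print(system_message_fits("GPT-4o", "Answer only from the retrieved documents."))
```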
ai-services | Azure Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/azure-resources.md | Use these keys when making requests to the service through APIs. |Name|Location|Purpose| |--|--|--|-|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the [QnA Maker management service APIs](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase). These APIs let you edit the questions and answers in your knowledge base, and publish your knowledge base. These keys are created when you create a new QnA Maker service.<br><br>Find these keys on the **Azure AI services** resource on the **Keys and Endpoint** page.| +|Authoring/Subscription key|[Azure portal](https://azure.microsoft.com/free/cognitive-services/)|These keys are used to access the [QnA Maker management service APIs](/rest/api/qnamaker/knowledgebase). These APIs let you edit the questions and answers in your knowledge base, and publish your knowledge base. These keys are created when you create a new QnA Maker service.<br><br>Find these keys on the **Azure AI services** resource on the **Keys and Endpoint** page.| |Query endpoint key|[QnA Maker portal](https://www.qnamaker.ai)|These keys are used to query the published knowledge base endpoint to get a response for a user question. You typically use this query endpoint in your chat bot or in the client application code that connects to the QnA Maker service. These keys are created when you publish your QnA Maker knowledge base.<br><br>Find these keys in the **Service settings** page. Find this page from the user's menu in the upper right of the page on the drop-down menu.| ### Find authoring keys in the Azure portal |
ai-services | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/best-practices.md | By default, QnA Maker searches through questions and answers. If you want to sea ### Use synonyms -While there is some support for synonyms in the English language, use case-insensitive word alterations via the [Alterations API](/rest/api/cognitiveservices/qnamaker/alterations/replace) to add synonyms to keywords that take different forms. Synonyms are added at the QnA Maker service-level and **shared by all knowledge bases in the service**. +While there is some support for synonyms in the English language, use case-insensitive word alterations via the [Alterations API](/rest/api/qnamaker/alterations/replace) to add synonyms to keywords that take different forms. Synonyms are added at the QnA Maker service-level and **shared by all knowledge bases in the service**. ### Use distinct words to differentiate questions QnA Maker's ranking algorithm, which matches a user query with a question in the knowledge base, works best if each question addresses a different need. Repetition of the same word set between questions reduces the likelihood that the right answer is chosen for a given user query with those words. |
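Word alterations are replaced service-wide with a single PUT call. The sketch below is illustrative only: the resource name, authoring key, and alteration words are placeholders, and the `v4.0` path assumes the commonly documented shape of the Alterations replace operation.

```python
import requests

# Placeholders - substitute your QnA Maker resource endpoint and authoring key.
endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
subscription_key = "<your-authoring-key>"

payload = {
    "wordAlterations": [
        {"alterations": ["qnamaker", "qna maker"]},
        {"alterations": ["botframework", "bot framework"]},
    ]
}

# Replace the full set of word alterations shared by all knowledge bases in the service.
response = requests.put(
    f"{endpoint}/qnamaker/v4.0/alterations",
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
    json=payload,
)
response.raise_for_status()
print(response.status_code)
```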
ai-services | Confidence Score | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/confidence-score.md | Set the threshold score as a property of the [GenerateAnswer API JSON body](../h From the bot framework, set the score as part of the options object with [C#](../how-to/metadata-generateanswer-usage.md?#use-qna-maker-with-a-bot-in-c) or [Node.js](../how-to/metadata-generateanswer-usage.md?#use-qna-maker-with-a-bot-in-nodejs). ## Improve confidence scores-To improve the confidence score of a particular response to a user query, you can add the user query to the knowledge base as an alternate question on that response. You can also use case-insensitive [word alterations](/rest/api/cognitiveservices/qnamaker/alterations/replace) to add synonyms to keywords in your KB. +To improve the confidence score of a particular response to a user query, you can add the user query to the knowledge base as an alternate question on that response. You can also use case-insensitive [word alterations](/rest/api/qnamaker/alterations/replace) to add synonyms to keywords in your KB. ## Similar confidence scores |
ai-services | Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Concepts/plan.md | QnA Maker uses _active learning_ to improve your knowledge base by suggesting al ### Providing a default answer -If your knowledge base doesn't find an answer, it returns the _default answer_. This answer is configurable on the **Settings** page in the QnA Maker portal or in the [APIs](/rest/api/cognitiveservices/qnamaker/knowledgebase/update#request-body). +If your knowledge base doesn't find an answer, it returns the _default answer_. This answer is configurable on the **Settings** page in the QnA Maker portal or in the [APIs](/rest/api/qnamaker/knowledgebase/update#request-body). This default answer is different from the Azure bot default answer. You configure the default answer for your Azure bot in the Azure portal as part of configuration settings. It's returned when the score threshold isn't met. |
ai-services | Change Default Answer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/change-default-answer.md | There are two types of default answer in your knowledge base. It is important to |Types of default answers|Description of answer| |--|--|-|KB answer when no answer is determined|`No good match found in KB.` - When the [GenerateAnswer API](/rest/api/cognitiveservices/qnamakerruntime/runtime/generateanswer) finds no matching answer to the question, the `DefaultAnswer` setting of the App service is returned. All knowledge bases in the same QnA Maker resource share the same default answer text.<br>You can manage the setting in the Azure portal, via the App service, or with the REST APIs for [getting](/rest/api/appservice/webapps/listapplicationsettings) or [updating](/rest/api/appservice/webapps/updateapplicationsettings) the setting.| -|Follow-up prompt instruction text|When using a follow-up prompt in a conversation flow, you may not need an answer in the QnA pair because you want the user to select from the follow-up prompts. In this case, set specific text by setting the default answer text, which is returned with each prediction for follow-up prompts. The text is meant to display as instructional text to the selection of follow-up prompts. An example for this default answer text is `Please select from the following choices`. This configuration is explained in the next few sections of this document. Can also set as part of knowledge base definition of `defaultAnswerUsedForExtraction` using [REST API](/rest/api/cognitiveservices/qnamaker/knowledgebase/create).| +|KB answer when no answer is determined|`No good match found in KB.` - When the [GenerateAnswer API](/rest/api/qnamaker/runtime/generate-answer) finds no matching answer to the question, the `DefaultAnswer` setting of the App service is returned. All knowledge bases in the same QnA Maker resource share the same default answer text.<br>You can manage the setting in the Azure portal, via the App service, or with the REST APIs for [getting](/rest/api/appservice/webapps/listapplicationsettings) or [updating](/rest/api/appservice/webapps/updateapplicationsettings) the setting.| +|Follow-up prompt instruction text|When using a follow-up prompt in a conversation flow, you may not need an answer in the QnA pair because you want the user to select from the follow-up prompts. In this case, set specific text by setting the default answer text, which is returned with each prediction for follow-up prompts. The text is meant to display as instructional text to the selection of follow-up prompts. An example for this default answer text is `Please select from the following choices`. This configuration is explained in the next few sections of this document. Can also set as part of knowledge base definition of `defaultAnswerUsedForExtraction` using [REST API](/rest/api/qnamaker/knowledgebase/create).| ### Client application integration |
ai-services | Metadata Generateanswer Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/metadata-generateanswer-usage.md | To get the predicted answer to a user's question, use the GenerateAnswer API. Wh ## Get answer predictions with the GenerateAnswer API -You use the [GenerateAnswer API](/rest/api/cognitiveservices/qnamakerruntime/runtime/generateanswer) in your bot or application to query your knowledge base with a user question, to get the best match from the question and answer pairs. +You use the [GenerateAnswer API](/rest/api/qnamaker/runtime/generate-answer) in your bot or application to query your knowledge base with a user question, to get the best match from the question and answer pairs. > [!NOTE] > This documentation does not apply to the latest release. To learn about using the latest question answering APIs consult the [question answering quickstart guide](../../language-service/question-answering/quickstart/sdk.md). You use the [GenerateAnswer API](/rest/api/cognitiveservices/qnamakerruntime/run ## Publish to get GenerateAnswer endpoint -After you publish your knowledge base, either from the [QnA Maker portal](https://www.qnamaker.ai), or by using the [API](/rest/api/cognitiveservices/qnamaker/knowledgebase/publish), you can get the details of your GenerateAnswer endpoint. +After you publish your knowledge base, either from the [QnA Maker portal](https://www.qnamaker.ai), or by using the [API](/rest/api/qnamaker/knowledgebase/publish), you can get the details of your GenerateAnswer endpoint. To get your endpoint details: 1. Sign in to [https://www.qnamaker.ai](https://www.qnamaker.ai). You call GenerateAnswer with an HTTP POST request. For sample code that shows ho The POST request uses: -* Required [URI parameters](/rest/api/cognitiveservices/qnamakerruntime/runtime/train#uri-parameters) +* Required [URI parameters](/rest/api/qnamaker/runtime/train#uri-parameters) * Required header property, `Authorization`, for security-* Required [body properties](/rest/api/cognitiveservices/qnamakerruntime/runtime/train#feedbackrecorddto). +* Required [body properties](/rest/api/qnamaker/runtime/train#feedbackrecorddto). The GenerateAnswer URL has the following format: The previous JSON requested only answers that are at 30% or above the threshold ## GenerateAnswer response properties -The [response](/rest/api/cognitiveservices/qnamakerruntime/runtime/generateanswer#successful-query) is a JSON object including all the information you need to display the answer and the next turn in the conversation, if available. +The [response](/rest/api/qnamaker/runtime/generate-answer#successful-query) is a JSON object including all the information you need to display the answer and the next turn in the conversation, if available. ```json { |
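For reference, calling the published GenerateAnswer endpoint from Python looks roughly like the following sketch. The host name, knowledge base ID, and endpoint key are placeholders taken from the publish details described above, and the 30 percent threshold mirrors the example in the article.

```python
import requests

# Placeholders from your published knowledge base details.
runtime_host = "https://<your-app-service-name>.azurewebsites.net"
knowledge_base_id = "<your-knowledge-base-id>"
endpoint_key = "<your-endpoint-key>"

body = {
    "question": "How do I reset my password?",
    "top": 3,
    "scoreThreshold": 30,
}

response = requests.post(
    f"{runtime_host}/qnamaker/knowledgebases/{knowledge_base_id}/generateAnswer",
    headers={"Authorization": f"EndpointKey {endpoint_key}"},
    json=body,
)
response.raise_for_status()

# Each answer includes its text, confidence score, and any follow-up prompts.
for answer in response.json().get("answers", []):
    print(answer["score"], answer["answer"])
```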
ai-services | Multi Turn | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/multi-turn.md | When you add a hierarchical document, QnA Maker determines follow-up prompts fro ## Create knowledge base with multi-turn prompts with the Create API -You can create a knowledge case with multi-turn prompts using the [QnA Maker Create API](/rest/api/cognitiveservices/qnamaker/knowledgebase/create). The prompts are adding in the `context` property's `prompts` array. +You can create a knowledge base with multi-turn prompts using the [QnA Maker Create API](/rest/api/qnamaker/knowledgebase/create). The prompts are added in the `context` property's `prompts` array. ## Show questions and answers with context If you are building a custom application, in the initial question's response, an ## Display order is supported in the Update API -The [display text and display order](/rest/api/cognitiveservices/qnamaker/knowledgebase/update#promptdto), returned in the JSON response, is supported for editing by the [Update API](/rest/api/cognitiveservices/qnamaker/knowledgebase/update). +The [display text and display order](/rest/api/qnamaker/knowledgebase/update#promptdto), returned in the JSON response, is supported for editing by the [Update API](/rest/api/qnamaker/knowledgebase/update). ## Add or delete multi-turn prompts with the Update API -You can add or delete multi-turn prompts using the [QnA Maker Update API](/rest/api/cognitiveservices/qnamaker/knowledgebase/update). The prompts are adding in the `context` property's `promptsToAdd` array and the `promptsToDelete` array. +You can add or delete multi-turn prompts using the [QnA Maker Update API](/rest/api/qnamaker/knowledgebase/update). The prompts are added in the `context` property's `promptsToAdd` array and the `promptsToDelete` array. ## Export knowledge base for version control |
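The `context.prompts` array referenced above carries one entry per follow-up prompt. A hypothetical QnA pair payload for the Create or Update API might be shaped like this sketch; the field names follow the PromptDTO linked above, while the IDs and text are placeholders.

```python
# Sketch of a QnA pair with follow-up prompts for the Create/Update APIs.
qna_pair = {
    "id": 0,
    "answer": "You can manage your subscription from the account portal.",
    "questions": ["How do I manage my subscription?"],
    "metadata": [],
    "context": {
        "isContextOnly": False,
        "prompts": [
            {
                "displayOrder": 1,
                "displayText": "Change billing details",
                "qnaId": 11,  # links the prompt to an existing QnA pair
            },
            {
                "displayOrder": 2,
                "displayText": "Cancel subscription",
                "qnaId": 12,
            },
        ],
    },
}
```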
ai-services | Use Active Learning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/use-active-learning.md | QnA Maker needs explicit feedback about which of the answers was the best answer * Business logic, such as determining an acceptable score range. * A combination of both user feedback and business logic. -Use the [Train API](/rest/api/cognitiveservices/qnamaker4.0/runtime/train) to send the correct answer to QnA Maker, after the user selects it. +Use the [Train API](/rest/api/qnamaker/runtime/train) to send the correct answer to QnA Maker, after the user selects it. ## Upgrade runtime version to use active learning |
ai-services | Using Prebuilt Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/using-prebuilt-api.md | We see that multiple answers are received as part of the API response. Each answ Visit the [Prebuilt API Limits](../limits.md#prebuilt-question-answering-limits) documentation ## Prebuilt API reference-Visit the [Prebuilt API reference](/rest/api/cognitiveservices-qnamaker/qnamaker5.0preview2/prebuilt/generateanswer) documentation to understand the input and output parameters required for calling the API. +Visit the [Prebuilt API reference](/rest/api/qnamaker/prebuilt/generate-answer) documentation to understand the input and output parameters required for calling the API. |
ai-services | Export Knowledge Base | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Tutorials/export-knowledge-base.md | You may want to create a copy of your knowledge base for several reasons: 1. On the **Settings** page, you have the options to export **QnAs**, **Synonyms**, or **Knowledge Base Replica**. You can choose to download the data in .tsv/.xlsx. - 1. **QnAs**: When exporting QnAs, all QnA pairs (with questions, answers, metadata, follow-up prompts, and the data source names) are downloaded. The QnA IDs that are exported with the questions and answers may be used to update a specific QnA pair using the [update API](/rest/api/cognitiveservices/qnamaker/knowledgebase/update). The QnA ID for a specific QnA pair remains unchanged across multiple export operations. + 1. **QnAs**: When exporting QnAs, all QnA pairs (with questions, answers, metadata, follow-up prompts, and the data source names) are downloaded. The QnA IDs that are exported with the questions and answers may be used to update a specific QnA pair using the [update API](/rest/api/qnamaker/knowledgebase/update). The QnA ID for a specific QnA pair remains unchanged across multiple export operations. 2. **Synonyms**: You can export Synonyms that have been added to the knowledge base. 4. **Knowledge Base Replica**: If you want to download the entire knowledge base with synonyms and other settings, you can choose this option. The export/import process is programmatically available using the following REST **Export** -* [Download knowledge base API](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase/download) +* [Download knowledge base API](/rest/api/qnamaker/knowledgebase/download) **Import** -* [Replace API (reload with same knowledge base ID)](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase/replace) -* [Create API (load with new knowledge base ID)](/rest/api/cognitiveservices/qnamaker4.0/knowledgebase/create) +* [Replace API (reload with same knowledge base ID)](/rest/api/qnamaker/knowledgebase/replace) +* [Create API (load with new knowledge base ID)](/rest/api/qnamaker/knowledgebase/create) ## Chat logs |
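Programmatic export can be scripted against the download API in a few lines. This is a sketch with placeholder values; it assumes the commonly documented `v4.0` download path with a `Test`/`Prod` environment segment and a response body that exposes a `qnaDocuments` collection.

```python
import requests

endpoint = "https://<your-resource-name>.cognitiveservices.azure.com"
subscription_key = "<your-authoring-key>"
knowledge_base_id = "<your-knowledge-base-id>"
environment = "Prod"  # or "Test" for the unpublished knowledge base

# Download all QnA pairs (including IDs, metadata, and follow-up prompts).
response = requests.get(
    f"{endpoint}/qnamaker/v4.0/knowledgebases/{knowledge_base_id}/{environment}/qna",
    headers={"Ocp-Apim-Subscription-Key": subscription_key},
)
response.raise_for_status()

for qna in response.json()["qnaDocuments"]:
    print(qna["id"], qna["questions"][0])
```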
ai-services | Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/limits.md | These represent the limits when Prebuilt API is used to *Generate response* or c > Support for unstructured file/content and is available only in question answering. ## Alterations limits-[Alterations](/rest/api/cognitiveservices/qnamaker/alterations/replace) do not allow these special characters: ',', '?', ':', ';', '\"', '\'', '(', ')', '{', '}', '[', ']', '-', '+', '.', '/', '!', '*', '-', '_', '@', '#' +[Alterations](/rest/api/qnamaker/alterations/replace) do not allow these special characters: ',', '?', ':', ';', '\"', '\'', '(', ')', '{', '}', '[', ']', '-', '+', '.', '/', '!', '*', '-', '_', '@', '#' ## Next steps |
ai-services | Reference App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/reference-app-service.md | Each version of QnA Maker uses a different set of Azure resources (services). Th ## App Service -QnA Maker uses the App Service to provide the query runtime used by the [generateAnswer API](/rest/api/cognitiveservices/qnamaker4.0/runtime/generateanswer). +QnA Maker uses the App Service to provide the query runtime used by the [generateAnswer API](/rest/api/qnamaker/runtime/generate-answer). These settings are available in the Azure portal, for the App Service. The settings are available by selecting **Settings**, then **Configuration**. |
ai-studio | Simulator Interaction Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop/simulator-interaction-data.md | from azure.identity import DefaultAzureCredential azure_ai_project = { "subscription_id": <sub_ID>, "resource_group_name": <resource_group_name>,- "workspace_name": <workspace_name>, + "project_name": <project_name>, "credential": DefaultAzureCredential(), } ``` |
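Assembled from environment variables, the corrected project configuration looks roughly like the following sketch. The environment variable names are illustrative, while the `project_name` key and `DefaultAzureCredential` come from the change above.

```python
import os
from azure.identity import DefaultAzureCredential

# Target Azure AI Studio project for the simulator; variable names are placeholders.
azure_ai_project = {
    "subscription_id": os.environ["AZURE_SUBSCRIPTION_ID"],
    "resource_group_name": os.environ["AZURE_RESOURCE_GROUP"],
    "project_name": os.environ["AZURE_AI_PROJECT_NAME"],
    "credential": DefaultAzureCredential(),
}
```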
aks | Advanced Network Observability Bring Your Own Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/advanced-network-observability-bring-your-own-cli.md | az aks get-credentials --name $CLUSTER_NAME --resource-group $RESOURCE_GROUP ```azurecli-interactive # Set environment variables- export HUBBLE_VERSION=0.11 + export HUBBLE_VERSION=v0.11.0 export HUBBLE_ARCH=amd64 # Install Hubble CLI |
aks | Advanced Network Observability Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/advanced-network-observability-cli.md | Install the Hubble CLI to access the data it collects using the following comman ```azurecli-interactive # Set environment variables-export HUBBLE_VERSION=0.11 +export HUBBLE_VERSION=v0.11.0 export HUBBLE_ARCH=amd64 #Install Hubble CLI |
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | az aks update --name $clusterName \ --network-plugin-mode overlay ``` -Since the cluster is already using a private CIDR for pods which doesn't overlap with the VNet IP space, you don't need to specify the `--pod-cidr` parameter and the Pod CIDR will remain the same. +Since the cluster already uses a private CIDR for pods that doesn't overlap with the VNet IP space, you don't need to specify the `--pod-cidr` parameter; if the parameter isn't specified, the Pod CIDR remains the same. > [!NOTE] > When upgrading from Kubenet to CNI Overlay, the route table will no longer be required for pod routing. If the cluster is using a customer provided route table, the routes which were being used to direct pod traffic to the correct node will automatically be deleted during the migration operation. If the cluster is using a managed route table (the route table was created by AKS and lives in the node resource group) then that route table will be deleted as part of the migration. |
aks | Azure Linux Aks Partner Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-linux-aks-partner-solutions.md | Microsoft collaborates with partners to ensure your build, test, deployment, con The third party partners featured in this article have introduction guides to help you start using their solutions with your applications running on Azure Linux Container Host on AKS. -| Solutions | Partners | -|--|| -| DevOps | [Advantech](#advantech) <br> [Akuity](#akuity) <br> [Anchore](#anchore) <br> [Hashicorp](#hashicorp) <br> [Kong](#kong) <br> [NetApp](#netapp) | -| Networking | [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Solo.io](#soloio) <br> [Tetrate](#tetrate) <br> [Tigera](#tigera-inc) | -| Observability | [Anchore](#anchore) <br> [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Dynatrace](#dynatrace) <br> [Solo.io](#soloio) <br> [Tigera](#tigera-inc) | -| Security | [Anchore](#anchore) <br> [Buoyant](#buoyant) <br> [Isovalent](#isovalent) <br> [Kong](#kong) <br> [Palo Alto Networks](#palo-alto-networks) <br> [Solo.io](#soloio) <br> [Tetrate](#tetrate) <br> [Tigera](#tigera-inc) <br> [Wiz](#wiz) | -| Storage | [Catalogic](#catalogic) <br> [Veeam](#veeam) | -| Config Management | [Corent](#corent) | -| Migration | [Catalogic](#catalogic) | +| | DevOps | Networking | Observability | Security | Storage | Config Management | Migration | +|-|--|||-||-|--| +| **Partners** | • [Advantech](#advantech) <br> • [Akuity](#akuity) <br> • [Anchore](#anchore) <br> • [Hashicorp](#hashicorp) <br> • [Kong](#kong) <br> • [NetApp](#netapp) | • [Buoyant](#buoyant) <br> • [Isovalent](#isovalent) <br> • [Solo.io](#soloio) <br> • [Tetrate](#tetrate) <br> • [Tigera](#tigera-inc) | • [Anchore](#anchore) <br> • [Buoyant](#buoyant) <br> • [Isovalent](#isovalent) <br> • [Dynatrace](#dynatrace) <br> • [Solo.io](#soloio) <br> • [Tigera](#tigera-inc) | • [Anchore](#anchore) <br> • [Buoyant](#buoyant) <br> • [Isovalent](#isovalent) <br> • [Kong](#kong) <br> • [Palo Alto Networks](#palo-alto-networks) <br> • [Qualys](#qualys) <br> • [Solo.io](#soloio) <br> • [Tetrate](#tetrate) <br> • [Tigera](#tigera-inc) <br> • [Wiz](#wiz) | • [Catalogic](#catalogic) <br> • [Veeam](#veeam) | • [Corent](#corent) | • [Catalogic](#catalogic) | ## DevOps With Prisma Cloud by Palo Alto Networks you get always on, real-time app visibil For more information, see [Palo Alto Networks Solutions](https://www.paloaltonetworks.com/prisma/environments/azure) and [Prisma Cloud Compute Edition on Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/paloaltonetworks.pcce_twistlock?tab=Overview). +### Qualys +++#### Qualys Cloud Agent ++| Solution | Categories | +|-|| +| Qualys Cloud Agent | Security | ++The Qualys Cloud Agent is a lightweight, modular software agent that enables continuous, real-time security and compliance monitoring across various environments, including SaaS platforms, on-premises systems, and cloud infrastructures. ++<details> <summary> See more </summary><br> ++It supports a wide range of operating systems and architectures, such as Windows, Linux, macOS, AIX, Solaris, and specialized systems like AWS Bottlerocket. The agent provides functionalities such as vulnerability management, patching, endpoint protection, and file integrity monitoring, allowing customers to activate and deactivate capabilities based on their needs. 
Designed for ease of use, it facilitates seamless integration and management of security protocols across complex IT landscapes. ++</details> ++For more information, see [Qualys Cloud Agent Solutions](https://www.qualys.com/cloud-agent/). ++#### Qualys Container Security ++| Solution | Categories | +|-|| +| Qualys Container Security | Security | ++Qualys K8s and the Container Security solution provide proactive, preventive, and reactive security for containerized applications. ++<details> <summary> See more </summary><br> ++It integrates into your DevOps workflows, offering continuous real-time security and compliance throughout the containerized application lifecycle. Key features include: ++* **Vulnerability management**: Identifies vulnerabilities in container images, registries, and running containers, prioritizes them, and helps mitigate the most critical vulnerabilities first. +* **Runtime protection**: eBPF-based runtime security monitors and protects containers in real-time, detecting and responding to malicious activities. +* **Compliance**: Ensures that Kubernetes configurations and container images adhere to best practices and compliance standards, preventing misconfigurations that might lead to security breaches. +* **File integrity monitoring**: Monitors changes to critical files within containers to detect and respond to unauthorized modifications. +* **Secret and malware detection**: Detects secrets and malware on the left side before container images are deployed in runtime, ensuring security from the development phase. ++</details> ++For more information, see [Qualys Container Security Solutions](https://www.qualys.com/apps/container-security/). + ### Tetrate :::image type="icon" source="./media/azure-linux-aks-partner-solutions/tetrate.png"::: |
aks | Azure Netapp Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md | Using a CSI driver to directly consume Azure NetApp Files volumes from AKS workl You can take advantage of Astra Trident's Container Storage Interface (CSI) driver for Azure NetApp Files to abstract underlying details and create, expand, and snapshot volumes on-demand. Also, using Astra Trident enables you to use [Astra Control Service][astra-control-service] built on top of Astra Trident. Using the Astra Control Service, you can backup, recover, move, and manage the application-data lifecycle of your AKS workloads across clusters within and across Azure regions to meet your business and service continuity needs. + ## Before you begin The following considerations apply when you use Azure NetApp Files: |
aks | Best Practices Cost | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices-cost.md | In this article, you learn about: It's important to evaluate the resource requirements of your application prior to deployment. Small development workloads have different infrastructure needs than large production ready workloads. While a combination of CPU, memory, and networking capacity configurations heavily influences the cost effectiveness of a SKU, consider the following VM types: - [**Azure Spot Virtual Machines**](/azure/virtual-machines/spot-vms) - [Spot node pools](./spot-node-pool.md) are backed by Azure Spot Virtual machine scale sets and deployed to a single fault domain with no high availability or SLA guarantees. Spot VMs allow you to take advantage of unutilized Azure capacity with significant discounts (up to 90% as compared to pay-as-you-go prices). If Azure needs capacity back, the Azure infrastructure evicts the Spot nodes. _Best for dev/test environments, workloads that can handle interruptions such as batch processing jobs, and workloads with flexible execution time._-- [**Ampere Altra Arm-based processors (ARM64)**](https://azure.microsoft.com/blog/now-in-preview-azure-virtual-machines-with-ampere-altra-armbased-processors/) - ARM64 VMs are power-efficient and cost effective but don't compromise on performance. With [AMR64 node pool support in AKS](./create-node-pools.md#arm64-node-pools), you can create ARM64 Ubuntu agent nodes and even mix Intel and ARM architecture nodes within a cluster. These ARM VMs are engineered to efficiently run dynamic, scalable workloads and can deliver up to 50% better price-performance than comparable x86-based VMs for scale-out workloads. _Best for web or application servers, open-source databases, cloud-native applications, gaming servers, and more._+- [**Ampere Altra Arm-based processors (ARM64)**](https://azure.microsoft.com/blog/now-in-preview-azure-virtual-machines-with-ampere-altra-armbased-processors/) - ARM64 VMs are power-efficient and cost effective but don't compromise on performance. With [ARM64 node pool support in AKS](./create-node-pools.md#arm64-node-pools), you can create ARM64 Ubuntu agent nodes and even mix Intel and ARM architecture nodes within a cluster. These ARM VMs are engineered to efficiently run dynamic, scalable workloads and can deliver up to 50% better price-performance than comparable x86-based VMs for scale-out workloads. _Best for web or application servers, open-source databases, cloud-native applications, gaming servers, and more._ - [**GPU optimized SKUs**](/azure/virtual-machines/sizes) - Depending on the nature of your workload, consider using compute optimized, memory optimized, storage optimized, or even graphical processing unit (GPU) optimized VM SKUs. GPU VM sizes are specialized VMs that are available with single, multiple, and fractional GPUs. _[GPU-enabled Linux node pools on AKS](./gpu-cluster.md) are best for compute-intensive workloads like graphics rendering, large model training and inferencing._ > [!NOTE] |
aks | Concepts Ai Ml Language Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-ai-ml-language-models.md | Kubernetes AI Toolchain Operator (KAITO) is an open-source operator that automat For more information, see [Deploy an AI model on AKS with the AI toolchain operator][ai-toolchain-operator]. To get started with a range of supported small and large language models for your inference workflows, see the [KAITO model GitHub repository][kaito-repo]. + ## Next steps To learn more about containerized AI and machine learning workloads on AKS, see the following articles: |
aks | Concepts Fine Tune Language Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-fine-tune-language-models.md | With KAITO version 0.3.0 or later, you can efficiently fine-tune supported MIT a For guidance on getting started with fine-tuning on KAITO, see the [Kaito Tuning Workspace API documentation][kaito-fine-tuning]. To learn more about deploying language models with KAITO in your AKS clusters, see the [KAITO model GitHub repository][kaito-repo]. + ## Next steps To learn more about containerized AI and machine learning workloads on AKS, see the following articles: |
aks | Create Postgresql Ha | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-postgresql-ha.md | In this section, you create a user-assigned managed identity (UAMI) to allow the echo "ClientId: $AKS_UAMI_WORKLOAD_CLIENTID" ``` -The object ID is a unique identifier for the client ID (also known as the application ID) that uniquely identifies a security principal of type *Application* within the Entra ID tenant. The resource ID is a unique identifier to manage and locate a resource in Azure. These values are required to enabled AKS workload identity. +The object ID is a unique identifier for the client ID (also known as the application ID) that uniquely identifies a security principal of type *Application* within the Microsoft Entra ID tenant. The resource ID is a unique identifier to manage and locate a resource in Azure. These values are required to enable AKS workload identity. The CNPG operator automatically generates a service account called *postgres* that you use later in the guide to create a federated credential that enables OAuth access from PostgreSQL to Azure Storage. The CNPG operator automatically generates a service account called *postgres* th > If you encounter the error message: `The request may be blocked by network rules of storage account. Please check network rule set using 'az storage account show -n accountname --query networkRuleSet'. If you want to change the default action to apply when no rule matches, please use 'az storage account update'`. Please verify user permissions for Azure Blob Storage and, if **necessary**, elevate your role to `Storage Blob Data Owner` using the commands provided below and then retry the [`az storage container create`][az-storage-container-create] command. ```bash- az role assignment list --scope $STORAGE_ACCOUNT_PRIMARY_RESOURCE_ID --output table - export USER_ID=$(az ad signed-in-user show --query id --output tsv) export STORAGE_ACCOUNT_PRIMARY_RESOURCE_ID=$(az storage account show \ The CNPG operator automatically generates a service account called *postgres* th --resource-group $RESOURCE_GROUP_NAME \ --query "id" \ --output tsv)+ + az role assignment list --scope $STORAGE_ACCOUNT_PRIMARY_RESOURCE_ID --output table az role assignment create \ --assignee-object-id $USER_ID \ |
aks | Deploy Postgresql Ha | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-postgresql-ha.md | In this article, you deploy a highly available PostgreSQL database on AKS. * If you haven't already created the required infrastructure for this deployment, follow the steps in [Create infrastructure for deploying a highly available PostgreSQL database on AKS][create-infrastructure] to get set up, and then you can return to this article. + ## Create secret for bootstrap app user 1. Generate a secret to validate the PostgreSQL deployment by interactive login for a bootstrap app user using the [`kubectl create secret`][kubectl-create-secret] command. |
aks | Image Cleaner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/image-cleaner.md | Once `eraser-controller-manager` is deployed, the following steps will be taken * It immediately starts the cleanup process and creates `eraser-aks-xxxxx` worker pods for each node. * There are three containers in each worker pod:- * A **collector**, which collects unused images + * A **collector**, which collects unused images. * A **trivy-scanner**, which leverages [trivy](https://github.com/aquasecurity/trivy) to scan image vulnerabilities. * A **remover**, which removes unused images with vulnerabilities. * After the cleanup process completes, the worker pod is deleted and the next scheduled cleanup happens according to the `--image-cleaner-interval-hours` you define. |
aks | Integrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/integrations.md | GitHub Actions help you automate your software development workflows from within There are many open-source and third-party integrations you can install on your AKS cluster. The [AKS support policy][aks-support-policy] doesn't support the following open-source and third-party integrations. + | Name | Description | More details | |||| | [Helm][helm] | An open-source packaging tool that helps you install and manage the lifecycle of Kubernetes applications. | [Quickstart: Develop on Azure Kubernetes Service (AKS) with Helm][helm-qs] | |
aks | Keda Integrations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/keda-integrations.md | You can also install external scalers to autoscale on other Azure > [!IMPORTANT] > External scalers *aren't supported as part of the add-on* and rely on community support. + ## Next steps * [Enable the KEDA add-on with an ARM template][keda-arm] |
aks | Kubernetes Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-portal.md | Title: Access Kubernetes resources from the Azure portal -description: Learn how to interact with Kubernetes resources to manage an Azure Kubernetes Service (AKS) cluster from the Azure portal. - Previously updated : 03/30/2023+ Title: Access Kubernetes resources using the Azure portal +description: Learn how to access Kubernetes resources to manage an Azure Kubernetes Service (AKS) cluster in the Azure portal. + Last updated : 07/26/2024+++ -# Access Kubernetes resources from the Azure portal +# Access Kubernetes resources using the Azure portal -The Azure portal includes a Kubernetes resource view for easy access to the Kubernetes resources in your Azure Kubernetes Service (AKS) cluster. Viewing Kubernetes resources from the Azure portal reduces context switching between the Azure portal and the `kubectl` command-line tool, streamlining the experience for viewing and editing your Kubernetes resources. The resource viewer currently includes multiple resource types, such as deployments, pods, and replica sets. +In this article, you learn how to access and manage your Azure Kubernetes Service (AKS) resources using the Azure portal. -The Kubernetes resource view from the Azure portal replaces the deprecated AKS dashboard add-on. --## Prerequisites +## Before you begin To view Kubernetes resources in the Azure portal, you need an AKS cluster. Any cluster is supported, but if you're using Microsoft Entra integration, your cluster must use [AKS-managed Microsoft Entra integration][aks-managed-aad]. If your cluster uses legacy Microsoft Entra ID, you can upgrade your cluster in the portal or with the [Azure CLI][cli-aad-upgrade]. You can also [use the Azure portal][aks-quickstart-portal] to create a new AKS cluster. ## View Kubernetes resources -To see the Kubernetes resources, navigate to your AKS cluster in the Azure portal. The navigation pane on the left is used to access your resources. The resources include: --- **Namespaces** displays the namespaces of your cluster. The filter at the top of the namespace list provides a quick way to filter and display your namespace resources.-- **Workloads** shows information about deployments, pods, replica sets, stateful sets, daemon sets, jobs, and cron jobs deployed to your cluster. The screenshot below shows the default system pods in an example AKS cluster.-- **Services and ingresses** shows all of your cluster's service and ingress resources.-- **Storage** shows your Azure storage classes and persistent volume information.-- **Configuration** shows your cluster's config maps and secrets.---### Deploy an application --In this example, we'll use our sample AKS cluster to deploy the Azure Vote application from the [AKS quickstart][aks-quickstart-portal]. --1. From the **Services and ingresses** resource view, select **Create** > **Starter application**. -2. Under **Create a basic web application**, select **Create**. -3. On the **Application details** page, select **Next**. -4. On the **Review YAML** page, select **Deploy**. --Once the application is deployed, the resource view shows the two Kubernetes --- **azure-vote-back**: The internal service.-- **azure-vote-front**: The external service, which includes a linked external IP address so you can view the application in your browser.---### Monitor deployment insights --AKS clusters with [Container insights][enable-monitor] enabled can quickly view deployment and other insights. 
From the Kubernetes resources view, you can see the live status of individual deployments, including CPU and memory usage. You can also go to Azure Monitor for more in-depth information about specific nodes and containers. --Here's an example of deployment insights from a sample AKS cluster: ---## Edit YAML --The Kubernetes resource view also includes a YAML editor. A built-in YAML editor means you can update or create services and deployments from within the portal and apply changes immediately. ---To edit a YAML file for one of your resources, see the following steps: --1. Navigate to your resource in the Azure portal. -2. Select **YAML** and make your desired edits. -3. Select **Review + save** > **Confirm manifest changes** > **Save**. -->[!WARNING] -> We don't recommend performing direct production changes via UI or CLI. Instead, you should leverage [continuous integration (CI) and continuous deployment (CD) best practices](kubernetes-action.md). The Azure portal Kubernetes management capabilities, such as the YAML editor, are built for learning and flighting new deployments in a development and testing setting. +1. In the [Azure portal](https://portal.azure.com), navigate to your AKS cluster resource. +2. On the left side menu, select **Kubernetes resources**. The Kubernetes resources list displays the following categories: ++ - **Namespaces** shows information about the namespaces of your cluster. + - **Workloads** shows information about deployments, pods, replica sets, stateful sets, daemon sets, jobs, and cron jobs deployed to your cluster. + - **Services and ingresses** shows all of your cluster's service and ingress resources. + - **Storage** shows your Azure storage classes and persistent volume information. + - **Configuration** shows your cluster's config maps and secrets. + - **Custom resources** shows any custom resources deployed to your cluster. + - **Events** shows all events related to your cluster. + - **Run command** allows you to remotely invoke commands, like `kubectl` and `helm`, on your cluster through the Azure API without directly connecting to the cluster. ++ :::image type="content" source="media/kubernetes-portal/kubernetes-resources.png" alt-text="Screenshot showing the Kubernetes resources displayed in the Azure portal." lightbox="media/kubernetes-portal/kubernetes-resources.png"::: ++## Deploy a sample application ++In this section, we deploy the Azure Store application from the [AKS quickstart][aks-quickstart-portal]. ++### Connect to your cluster ++To deploy the Azure Store application, you need to connect to your AKS cluster. Follow these steps to connect to your cluster using the Azure portal: ++1. From the **Overview** page of your AKS cluster, select **Connect**. +2. Follow the instructions to connect to your cluster using *Cloud Shell*, *Azure CLI*, or *Run command*. ++### Deploy the Azure Store application ++1. From the **Kubernetes resources** list, select **Services and ingresses**. +2. Select **Create** > **Apply a YAML**. +3. 
Copy and paste the following YAML into the editor: ++ ```yaml + apiVersion: apps/v1 + kind: Deployment + metadata: + name: rabbitmq + spec: + replicas: 1 + selector: + matchLabels: + app: rabbitmq + template: + metadata: + labels: + app: rabbitmq + spec: + nodeSelector: + "kubernetes.io/os": linux + containers: + - name: rabbitmq + image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine + ports: + - containerPort: 5672 + name: rabbitmq-amqp + - containerPort: 15672 + name: rabbitmq-http + env: + - name: RABBITMQ_DEFAULT_USER + value: "username" + - name: RABBITMQ_DEFAULT_PASS + value: "password" + resources: + requests: + cpu: 10m + memory: 128Mi + limits: + cpu: 250m + memory: 256Mi + volumeMounts: + - name: rabbitmq-enabled-plugins + mountPath: /etc/rabbitmq/enabled_plugins + subPath: enabled_plugins + volumes: + - name: rabbitmq-enabled-plugins + configMap: + name: rabbitmq-enabled-plugins + items: + - key: rabbitmq_enabled_plugins + path: enabled_plugins + + apiVersion: v1 + data: + rabbitmq_enabled_plugins: | + [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0]. + kind: ConfigMap + metadata: + name: rabbitmq-enabled-plugins + + apiVersion: v1 + kind: Service + metadata: + name: rabbitmq + spec: + selector: + app: rabbitmq + ports: + - name: rabbitmq-amqp + port: 5672 + targetPort: 5672 + - name: rabbitmq-http + port: 15672 + targetPort: 15672 + type: ClusterIP + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: order-service + spec: + replicas: 1 + selector: + matchLabels: + app: order-service + template: + metadata: + labels: + app: order-service + spec: + nodeSelector: + "kubernetes.io/os": linux + containers: + - name: order-service + image: ghcr.io/azure-samples/aks-store-demo/order-service:latest + ports: + - containerPort: 3000 + env: + - name: ORDER_QUEUE_HOSTNAME + value: "rabbitmq" + - name: ORDER_QUEUE_PORT + value: "5672" + - name: ORDER_QUEUE_USERNAME + value: "username" + - name: ORDER_QUEUE_PASSWORD + value: "password" + - name: ORDER_QUEUE_NAME + value: "orders" + - name: FASTIFY_ADDRESS + value: "0.0.0.0" + resources: + requests: + cpu: 1m + memory: 50Mi + limits: + cpu: 75m + memory: 128Mi + initContainers: + - name: wait-for-rabbitmq + image: busybox + command: ['sh', '-c', 'until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;'] + resources: + requests: + cpu: 1m + memory: 50Mi + limits: + cpu: 75m + memory: 128Mi + + apiVersion: v1 + kind: Service + metadata: + name: order-service + spec: + type: ClusterIP + ports: + - name: http + port: 3000 + targetPort: 3000 + selector: + app: order-service + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: product-service + spec: + replicas: 1 + selector: + matchLabels: + app: product-service + template: + metadata: + labels: + app: product-service + spec: + nodeSelector: + "kubernetes.io/os": linux + containers: + - name: product-service + image: ghcr.io/azure-samples/aks-store-demo/product-service:latest + ports: + - containerPort: 3002 + resources: + requests: + cpu: 1m + memory: 1Mi + limits: + cpu: 1m + memory: 7Mi + + apiVersion: v1 + kind: Service + metadata: + name: product-service + spec: + type: ClusterIP + ports: + - name: http + port: 3002 + targetPort: 3002 + selector: + app: product-service + + apiVersion: apps/v1 + kind: Deployment + metadata: + name: store-front + spec: + replicas: 1 + selector: + matchLabels: + app: store-front + template: + metadata: + labels: + app: store-front + spec: + nodeSelector: + "kubernetes.io/os": linux + 
containers: + - name: store-front + image: ghcr.io/azure-samples/aks-store-demo/store-front:latest + ports: + - containerPort: 8080 + name: store-front + env: + - name: VUE_APP_ORDER_SERVICE_URL + value: "http://order-service:3000/" + - name: VUE_APP_PRODUCT_SERVICE_URL + value: "http://product-service:3002/" + resources: + requests: + cpu: 1m + memory: 200Mi + limits: + cpu: 1000m + memory: 512Mi + + apiVersion: v1 + kind: Service + metadata: + name: store-front + spec: + ports: + - port: 80 + targetPort: 8080 + selector: + app: store-front + type: LoadBalancer + ``` ++4. Select **Add**. ++ Once the application finishes deploying, you see the following services in the *Services* list: ++ - **order-service** + - **product-service** + - **rabbitmq** + - **store-front** ++ :::image type="content" source="media/kubernetes-portal/portal-services.png" alt-text="Screenshot of the Azure Store application services displayed in the Azure portal." lightbox="media/kubernetes-portal/portal-services.png"::: ++## Monitor deployment insights ++### Enable the monitoring add-on on your AKS cluster ++AKS clusters with [Container Insights][enable-monitor] enabled can access various deployment insights in the Azure portal. If you don't have monitoring enabled on your cluster, you can enable it using the following steps: ++1. On the left side menu of your AKS cluster resource, select **Monitoring** > **Insights** > **Configure monitoring**. +2. On the *Configure Container Insights* page, select **Configure**. ++ It might take a few minutes for the monitoring solution to deploy and begin collecting data. ++### View deployment insights ++1. On the left side menu of your AKS cluster resource, select **Workloads**. +2. Select a deployment from the list to view deployment insights, such as CPU and memory usage. ++> [!NOTE] +> You can also select **Monitoring** > **Insights** to view more in-depth information about specific nodes and containers. ++## Clean up resources ++If you no longer need the Azure Store application, you can delete the services to avoid incurring Azure costs. ++1. From the **Kubernetes resources** list, select **Services and ingresses**. +2. Select the services you want to delete, and then select **Delete**. ## Troubleshooting -This section addresses common problems and troubleshooting steps. - ### Unauthorized access -To access the Kubernetes resources, you must have access to the AKS cluster, the Kubernetes API, and the Kubernetes objects. Ensure that you're either a cluster administrator or a user with the appropriate permissions to access the AKS cluster. For more information on cluster security, see [Access and identity options for AKS][concepts-identity]. -->[!NOTE] -> The Kubernetes resource view in the Azure portal is only supported by [managed-AAD enabled clusters](managed-azure-ad.md) or non-AAD enabled clusters. If you're using a managed-AAD enabled cluster, your Microsoft Entra user or identity needs to have the respective roles/role bindings to access the Kubernetes API and the permission to pull the [user `kubeconfig`](control-kubeconfig-access.md). +To access the Kubernetes resources, you need access to the AKS cluster, Kubernetes API, and Kubernetes objects. Make sure you're either a *cluster administrator* or a user with the appropriate permissions to access the AKS cluster. For more information, see [Access and identity options for AKS][concepts-identity]. ### Enable resource view -For existing clusters, you may need to enable the Kubernetes resource view. 
To enable the resource view, follow the prompts in the portal for your cluster. +You might need to enable the Kubernetes resource view for existing clusters. ++> [!TIP] +> You can add the AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) to limit API server access to only the firewall's public endpoint. Another option is to update the `--api-server-authorized-ip-ranges`/`-ApiServerAccessAuthorizedIpRange` to include access for a local client computer or the IP address range from which you're browsing the Azure portal. To allow this access, you need the computer's public IPv4 address. You can find this address using the following Azure CLI or Azure PowerShell commands, or you can search "what is my IP address" in your browser. ### [Azure CLI](#tab/azure-cli) -> [!TIP] -> You can add the AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) to limit API server access to only the firewall's public endpoint. Another option is to update the `-ApiServerAccessAuthorizedIpRange` to include access for a local client computer or IP address range (from which portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with the following command or you can search "what is my IP address" in your browser. +1. Retrieve your IP address using the following command: -```bash -# Retrieve your IP address -CURRENT_IP=$(dig +short myip.opendns.com @resolver1.opendns.com) -``` + ```bash + CURRENT_IP=$(dig +short myip.opendns.com @resolver1.opendns.com) + ``` -```azurecli -# Add to AKS approved list -az aks update -g $RG -n $AKSNAME --api-server-authorized-ip-ranges $CURRENT_IP/32 -``` +2. Add your IP address to the AKS approved list using the [`az aks update`][az-aks-update] command with the `--api-server-authorized-ip-ranges` parameter. -### [Azure PowerShell](#tab/azure-powershell) + ```azurecli-interactive + az aks update --resource-group <resource-group-name> --name <aks-cluster-name> --api-server-authorized-ip-ranges $CURRENT_IP/32 + ``` -> [!TIP] -> You can add the AKS feature for [**API server authorized IP ranges**](api-server-authorized-ip-ranges.md) to limit API server access to only the firewall's public endpoint. Another option is to update the `-ApiServerAccessAuthorizedIpRange` to include access for a local client computer or IP address range (from which portal is being browsed). To allow this access, you need the computer's public IPv4 address. You can find this address with the following command or you can search "what is my IP address" in your browser. +### [Azure PowerShell](#tab/azure-powershell) ++1. Retrieve your IP address using the following command: ++ ```azurepowershell-interactive + $CURRENT_IP = (Invoke-RestMethod -Uri http://ipinfo.io/json).ip + ``` -```azurepowershell -# Retrieve your IP address -$CURRENT_IP = (Invoke-RestMethod -Uri http://ipinfo.io/json).ip +2. Add your IP address to the AKS approved list using the [`Set-AzAksCluster`][set-az-aks-cluster] command with the `-ApiServerAccessAuthorizedIpRange` parameter. -# Add to AKS approved list -Set-AzAksCluster -ResourceGroupName $RG -Name $AKSNAME -ApiServerAccessAuthorizedIpRange $CURRENT_IP/32 -``` + ```azurepowershell-interactive + Set-AzAksCluster -ResourceGroupName <resource-group-name> -Name <aks-cluster-name> -ApiServerAccessAuthorizedIpRange $CURRENT_IP/32 + ``` ## Next steps -This article showed you how to access Kubernetes resources from the Azure portal. 
For more information on cluster resources, see [Deployments and YAML manifests][deployments]. +This article showed you how to access Kubernetes resources from the Azure portal. For more information about AKS, see [Core concepts for Azure Kubernetes Service (AKS)][core-concepts]. <!-- LINKS - internal --> [concepts-identity]: concepts-identity.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md-[deployments]: concepts-clusters-workloads.md#deployments-and-yaml-manifests +[core-concepts]: core-aks-concepts.md [aks-managed-aad]: managed-azure-ad.md [cli-aad-upgrade]: managed-azure-ad.md#migrate-a-legacy-azure-ad-cluster-to-integration [enable-monitor]: ../azure-monitor/containers/container-insights-enable-existing-clusters.md+[az-aks-update]: /cli/azure/aks#az-aks-update +[set-az-aks-cluster]: /powershell/module/az.aks/set-azakscluster |
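The Kubernetes resources view described in the entry above includes a **Run command** option for invoking `kubectl` and `helm` through the Azure API without connecting directly to the cluster. The following is an illustrative sketch of the Azure CLI equivalent, `az aks command invoke`; the resource group and cluster names are placeholders.

```azurecli
# Run a kubectl command on the cluster through the Azure API; no direct
# network line of sight to the API server is required.
az aks command invoke \
  --resource-group <resource-group-name> \
  --name <aks-cluster-name> \
  --command "kubectl get deployments,services -n default"
```

Local files, such as a manifest you want to apply, can be attached to the invocation with the `--file` parameter.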
aks | Node Updates Kured | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/node-updates-kured.md | This article shows you how to use the open-source [kured (KUbernetes REboot Daem > [!NOTE] > `Kured` is an open-source project in the Cloud Native Computing Foundation. Please direct issues to the [kured GitHub][kured]. Additional support can be found in the #kured channel on [CNCF Slack](https://slack.cncf.io). + ## Before you begin You need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. |
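The kured article above starts from an existing AKS cluster and the Azure CLI. As an illustrative sketch only, a typical Helm-based installation of the kured DaemonSet looks like the following; the chart repository URL, release name, and namespace are assumptions based on the upstream kured project rather than values taken from this article.

```bash
# Add the upstream kured chart repository (URL assumed) and install the DaemonSet.
helm repo add kubereboot https://kubereboot.github.io/charts
helm repo update

# Schedule kured only on Linux nodes; the reboot sentinel file it watches is Linux-specific.
helm install kured kubereboot/kured \
  --namespace kube-system \
  --set nodeSelector."kubernetes\.io/os"=linux
```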
aks | Postgresql Ha Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/postgresql-ha-overview.md | In this guide, you deploy a highly available PostgreSQL cluster that spans multi This article walks through the prerequisites for setting up a PostgreSQL cluster on [Azure Kubernetes Service (AKS)][what-is-aks] and provides an overview of the full deployment process and architecture. + ## Prerequisites * This guide assumes a basic understanding of [core Kubernetes concepts][core-kubernetes-concepts] and [PostgreSQL][postgresql]. |
aks | Use Flyte | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-flyte.md | This article shows you how to use Flyte on Azure Kubernetes Service (AKS). Flyte For more information, see [Introduction to Flyte][flyte]. ++ ## Flyte use cases Flyte can be used for a variety of use cases, including: |
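The Flyte entry above is truncated before its list of use cases. As a minimal sketch of what authoring a Flyte workload looks like, assuming the `flytekit` Python SDK is installed (the task, workflow, and value names are placeholders, not code from the article):

```python
from flytekit import task, workflow


@task
def normalize(value: float, maximum: float) -> float:
    """Scale a raw value into the 0-1 range."""
    return value / maximum


@workflow
def scoring_pipeline(value: float = 42.0, maximum: float = 100.0) -> float:
    """Chain tasks into a workflow; Flyte compiles this into a DAG."""
    return normalize(value=value, maximum=maximum)


if __name__ == "__main__":
    # Workflows can run locally for testing before being registered with a
    # Flyte deployment, such as one hosted on an AKS cluster.
    print(scoring_pipeline(value=7.0))
```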
api-center | Key Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/key-concepts.md | The following diagram shows the main entities in Azure API Center and how they r A top-level logical entity in Azure API Center, an API represents any real-world API that you want to track. An API center can include APIs of any type, including REST, GraphQL, gRPC, SOAP, WebSocket, and Webhook. -An API can be managed by any API management solution (such as Azure [API Management](../api-management/api-management-key-concepts.md) or solutions from other providers), or unmanaged. +An API in the inventory can be managed by any API management solution, such as Azure [API Management](../api-management/api-management-key-concepts.md), Apigee API Management, Kong Konnect, MuleSoft API Management, or another platform. An API represented in Azure API Center can also be unmanaged. The API inventory in Azure API Center is designed to be created and managed by API program managers or IT administrators. |
api-center | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/overview.md | With an API center, stakeholders throughout your organization - including API pr ## Benefits -* **Create and maintain an organizational inventory** - Organizations can build a **complete inventory of APIs** available in their organization. Foster communication and let API program managers and developers collaborate for increased API reuse, quality, security, compliance, and developer productivity. +* **Create and maintain an organizational inventory** - Organizations can build a **complete inventory of APIs** available in their organization. Register APIs managed in all of your API management solutions, including Azure API Management and platforms from other providers. Also include your unmanaged APIs and APIs under development. Foster communication and let API program managers and developers collaborate for increased API reuse, quality, security, compliance, and developer productivity. * **Govern your organization's APIs** - With more complete visibility into the APIs being produced and used within an organization, API program managers and IT administrators can govern this inventory to ensure it meets organizational standards by **defining custom metadata** and **analyzing API definitions** to enforce conformance to API style guidelines. |
app-service | Manage Move Across Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-move-across-regions.md | - Title: Move an app to another region -description: Learn how to move App Service resources from one region to another. --- Previously updated : 02/27/2020---#Customer intent: As an Azure service administrator, I want to move my App Service resources to another Azure region. ---# Move an App Service resource to another region --This article describes how to move App Service resources to a different Azure region. You might move your resources to another region for a number of reasons. For example, to take advantage of a new Azure region, to deploy features or services available in specific regions only, to meet internal policy and governance requirements, or in response to capacity planning requirements. --App Service resources are region-specific and can't be moved across regions. You must create a copy of your existing App Service resources in the target region, then move your content over to the new app. If your source app uses a custom domain, you can [migrate it to the new app in the target region](manage-custom-dns-migrate-domain.md) when you're finished. --To make copying your app easier, you can [clone an individual App Service app](app-service-web-app-cloning.md) into an App Service plan in another region, but it does have [limitations](app-service-web-app-cloning.md#current-restrictions), especially that it doesn't support Linux apps. --## Prerequisites --- Make sure that the App Service app is in the Azure region from which you want to move.-- Make sure that the target region supports App Service and any related service, whose resources you want to move.-<!- --## Prepare --Identify all the App Service resources that you're currently using. For example: --- App Service apps-- [App Service plans](overview-hosting-plans.md)-- [Deployment slots](deploy-staging-slots.md)-- [Custom domains purchased in Azure](manage-custom-dns-buy-domain.md)-- [TLS/SSL certificates](configure-ssl-certificate.md)-- [Azure Virtual Network integration](./overview-vnet-integration.md)-- [Hybrid connections](app-service-hybrid-connections.md).-- [Managed identities](overview-managed-identity.md)-- [Backup settings](manage-backup.md)--Certain resources, such as imported certificates or hybrid connections, contain integration with other Azure services. For information on how to move those resources across regions, see the documentation for the respective services. --## Move --1. [Create a back up of the source app](manage-backup.md). -1. [Create an app in a new App Service plan, in the target region](app-service-plan-manage.md#create-an-app-service-plan). -2. [Restore the back up in the target app](manage-backup.md) -2. If you use a custom domain, [bind it preemptively to the target app](manage-custom-dns-migrate-domain.md#2-create-the-dns-records) with `asuid.` and [enable the domain in the target app](manage-custom-dns-migrate-domain.md#3-enable-the-domain-for-your-app). -3. Configure everything else in your target app to be the same as the source app and verify your configuration. -4. When you're ready for the custom domain to point to the target app, [remap the domain name](manage-custom-dns-migrate-domain.md#4-remap-the-active-dns-name). --<!-- 1. Login to the [Azure portal](https://portal.azure.com) > **Resource Groups**. -2. Locate the Resource Group that contains the source App Service resources and click on it. -3. 
Select > **Settings** > **Export template**. -4. Choose **Deploy** in the **Export template** blade. -5. Click **TEMPLATE** > **Edit template** to open the template in the online editor. -6. Click inside the online editor and type Ctrl+F (or Γîÿ+F on a Mac) and type `"identity": {` to find any managed identity definition. The following is an example if you have a user-assigned managed identity. - ```json - "identity": { - "type": "UserAssigned", - "userAssignedIdentities": { - "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/<group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>": { - "principalId": "00000000-0000-0000-0000-000000000000", - "clientId": "00000000-0000-0000-0000-000000000000" - } - } - }, - ``` -6. Click inside the online editor and type Ctrl+F (or Γîÿ+F on a Mac) and type `"Microsoft.Web/sites/hostNameBindings` to find all hostname bindings. The following is an example if you have a user-assigned managed identity. - ```json - { - "type": "Microsoft.Web/sites/hostNameBindings", - "apiVersion": "2018-11-01", - "name": "[concat(parameters('sites_webapp_name'), '/', parameters('sites_webapp_name'), '.azurewebsites.net')]", - "location": "West Europe", - "dependsOn": [ - "[resourceId('Microsoft.Web/sites', parameters('sites_webapp_name'))]" - ], - "properties": { - "siteName": "<app-name>", - "hostNameType": "Verified" - } - }, - ``` -6. Click inside the online editor and type Ctrl+F (or Γîÿ+F on a Mac) and type `"Microsoft.Web/certificates` to find all hostname bindings. The following is an example if you have a user-assigned managed identity. - ```json - { - "type": "Microsoft.Web/certificates", - "apiVersion": "2018-11-01", - "name": "[parameters('certificates_test2_cephaslin_com_name')]", - "location": "West Europe", - "properties": { - "hostNames": [ - "[parameters('certificates_test2_cephaslin_com_name')]" - ], - "password": "[parameters('certificates_test2_cephaslin_com_password')]" - } - }, - ``` -7. Delete the entire JSON block. Click **Save** in the online editor. -8. Click **BASICS** > **Create new** to create a new resource group. Type the group name and click **OK**. -9. In **BASICS** > **Location**, select the region you want. --> --## Clean up source resources --Delete the source app and App Service plan. [An App Service plan in the non-free tier carries a charge, even if no app is running in it.](app-service-plan-manage.md#delete-an-app-service-plan) --## Next steps --[Azure App Service App Cloning Using PowerShell](app-service-web-app-cloning.md) |
app-service | Tutorial Python Postgresql App Fastapi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app-fastapi.md | + + Title: 'Tutorial: Deploy a Python FastAPI web app with PostgreSQL' +description: Create a FastAPI web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the FastAPI framework and the app is hosted on Azure App Service on Linux. +ms.devlang: python + Last updated : 7/24/2024++++zone_pivot_groups: app-service-portal-azd +++# Deploy a Python FastAPI web app with PostgreSQL in Azure ++In this tutorial, you deploy a data-driven Python web app (**[FastAPI](https://fastapi.tiangolo.com/)** ) to **[Azure App Service](./overview.md#app-service-on-linux)** with the **[Azure Database for PostgreSQL](../postgresql/index.yml)** relational database service. Azure App Service supports [Python](https://www.python.org/downloads/) in a Linux server environment. +++**To complete this tutorial, you'll need:** +++* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python). +* Knowledge of Python with FastAPI development ++++* An Azure account with an active subscription. If you don't have an Azure account, you [can create one for free](https://azure.microsoft.com/free/python). +* [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed. You can follow the steps with the [Azure Cloud Shell](https://shell.azure.com) because it already has Azure Developer CLI installed. +* Knowledge of Python with FastAPI development +++## Skip to the end ++With [Azure Developer CLI](/azure/developer/azure-developer-cli/install-azd) installed, you can skip to the end of the tutorial by running the following commands in an empty working directory: ++```bash +azd auth login +azd init --template msdocs-fastapi-postgresql-sample-app +azd up +``` ++## Sample application ++A sample Python application using FastAPI framework is provided to help you follow along with this tutorial. To deploy it without running it locally, skip this part. ++To run the application locally, make sure you have [Python 3.8 or higher](https://www.python.org/downloads/) and [PostgreSQL](https://www.postgresql.org/download/) installed locally. Then, clone the sample repository's `starter-no-infra` branch and change to the repository root. ++```bash +git clone -b starter-no-infra https://github.com/Azure-Samples/msdocs-fastapi-postgresql-sample-app +cd msdocs-fastapi-postgresql-sample-app +``` ++Create an *.env* file as shown below using the *.env.sample* file as a guide. Set the value of `DBNAME` to the name of an existing database in your local PostgreSQL instance. Set the values of `DBHOST`, `DBUSER`, and `DBPASS` as appropriate for your local PostgreSQL instance. ++``` +DBNAME=<database name> +DBHOST=<database-hostname> +DBUSER=<db-user-name> +DBPASS=<db-password> +``` ++Create a virtual environment for the app: +++Install the dependencies: ++```bash +python3 -m pip install -r src/requirements.txt +``` ++Install the app as an editable package: ++```bash +python3 -m pip install -e src +``` ++Run the sample application with the following commands: ++```bash +# Run database migration +python3 src/fastapi_app/seed_data.py +# Run the app at http://127.0.0.1:8000 +python3 -m uvicorn fastapi_app:app --reload --port=8000 +``` +++## 1. Create App Service and PostgreSQL ++In this step, you create the Azure resources. 
The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Database for PostgreSQL. For the creation process, you specify: ++* The **Name** for the web app. It's the name used as part of the DNS name for your webapp in the form of `https://<app-name>.azurewebsites.net`. +* The **Region** to run the app physically in the world. +* The **Runtime stack** for the app. It's where you select the version of Python to use for your app. +* The **Hosting plan** for the app. It's the pricing tier that includes the set of features and scaling capacity for your app. +* The **Resource Group** for the app. A resource group lets you group (in a logical container) all the Azure resources needed for the application. ++Sign in to the [Azure portal](https://portal.azure.com/) and follow these steps to create your Azure App Service resources. ++ :::column span="2"::: + **Step 1:** In the Azure portal: + 1. Enter "web app database" in the search bar at the top of the Azure portal. + 1. Select the item labeled **Web App + Database** under the **Marketplace** heading. + You can also navigate to the [creation wizard](https://portal.azure.com/?feature.customportal=false#create/Microsoft.AppServiceWebAppDatabaseV3) directly. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-create-app-postgres-1.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find the Web App + Database creation wizard (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-create-app-postgres-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** In the **Create Web App + Database** page, fill out the form as follows. + 1. *Resource Group* → Select **Create new** and use a name of **msdocs-python-postgres-tutorial**. + 1. *Region* → Any Azure region near you. + 1. *Name* → **msdocs-python-postgres-XYZ** where *XYZ* is any three random characters. This name must be unique across Azure. + 1. *Runtime stack* → **Python 3.12**. + 1. *Database* → **PostgreSQL - Flexible Server** is selected by default as the database engine. The server name and database name are also set by default to appropriate values. + 1. *Hosting plan* → **Basic**. When you're ready, you can [scale up](manage-scale-up.md) to a production pricing tier later. + 1. Select **Review + create**. + 1. After validation completes, select **Create**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-create-app-postgres-2.png" alt-text="A screenshot showing how to configure a new app and database in the Web App + Database wizard (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-create-app-postgres-2.png"::: + :::column-end::: + :::column span="2"::: + **Step 3:** The deployment takes a few minutes to complete. Once deployment completes, select the **Go to resource** button. You're taken directly to the App Service app, but the following resources are created: + - **Resource group** → The container for all the created resources. + - **App Service plan** → Defines the compute resources for App Service. A Linux plan in the *Basic* tier is created. + - **App Service** → Represents your app and runs in the App Service plan. + - **Virtual network** → Integrated with the App Service app and isolates back-end network traffic. 
+ - **Azure Database for PostgreSQL flexible server** → Accessible only from within the virtual network. A database and a user are created for you on the server. + - **Private DNS zone** → Enables DNS resolution of the PostgreSQL server in the virtual network. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-create-app-postgres-3.png" alt-text="A screenshot showing the deployment process completed (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-create-app-postgres-3.png"::: + :::column-end::: + :::column span="2"::: + **Step 4:** For FastAPI apps, you must enter a startup command so App service can start your app. On the App Service page: + 1. In the left menu, under **Settings**, select **Configuration**. + 1. In the **General settings** tab of the **Configuration** page, enter `src/entrypoint.sh` in the **Startup Command** field under **Stack settings**. + 1. Select **Save**. When prompted, select **Continue**. + To learn more about app configuration and startup in App Service, see [Configure a Linux Python app for Azure App Service](configure-language-python.md). + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-create-app-postgres-fastapi-4.png" alt-text="A screenshot showing adding a startup command (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-create-app-postgres-fastapi-4.png"::: + :::column-end::: ++## 2. Verify connection settings ++The creation wizard generated the connectivity variables for you already as [app settings](configure-common.md#configure-app-settings). App settings are one way to keep connection secrets out of your code repository. When you're ready to move your secrets to a more secure location, here's an [article on storing in Azure Key Vault](../key-vault/certificates/quick-create-python.md). ++ :::column span="2"::: + **Step 1:** In the App Service page, in the left menu, select **Environment variables**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-get-connection-string-fastapi-1.png" alt-text="A screenshot showing how to open the configuration page in App Service (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-get-connection-string-fastapi-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** In the **App settings** tab of the **Environment variables** page, verify that `AZURE_POSTGRESQL_CONNECTIONSTRING` is present. The connection string will be injected into the runtime environment as an environment variable. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-get-connection-string-fastapi-2.png" alt-text="A screenshot showing how to see the autogenerated connection string (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-get-connection-string-fastapi-2.png"::: + :::column-end::: ++## 3. Deploy sample code ++In this step, you configure GitHub deployment using GitHub Actions. It's just one of many ways to deploy to App Service, but also a great way to have continuous integration in your deployment process. By default, every `git push` to your GitHub repository will kick off the build and deploy action. ++ :::column span="2"::: + **Step 1:** In a new browser window: + 1. Sign in to your GitHub account. 
+ 1. Navigate to [https://github.com/Azure-Samples/msdocs-fastapi-postgresql-sample-app](https://github.com/Azure-Samples/msdocs-fastapi-postgresql-sample-app). + 1. Select **Fork**. + 1. Select **Create fork**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-1.png" alt-text="A screenshot showing how to create a fork of the sample GitHub repository (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** In the GitHub page, open Visual Studio Code in the browser by pressing the `.` key. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-2.png" alt-text="A screenshot showing how to open the Visual Studio Code browser experience in GitHub (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-2.png"::: + :::column-end::: + :::column span="2"::: + **Step 3:** In Visual Studio Code in the browser, open *src/fastapi/models.py* in the explorer. + See the environment variables being used in the production environment, including the app settings that you saw in the configuration page. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-3.png" alt-text="A screenshot showing Visual Studio Code in the browser and an opened file (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-3.png"::: + :::column-end::: + :::column span="2"::: + **Step 4:** Back in the App Service page, in the left menu, under **Deployment**, select **Deployment Center**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-4.png" alt-text="A screenshot showing how to open the deployment center in App Service (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-4.png"::: + :::column-end::: + :::column span="2"::: + **Step 5:** In the Deployment Center page: + 1. In **Source**, select **GitHub**. By default, **GitHub Actions** is selected as the build provider. + 1. Sign in to your GitHub account and follow the prompt to authorize Azure. + 1. In **Organization**, select your account. + 1. In **Repository**, select **msdocs-fastapi-postgresql-sample-app**. + 1. In **Branch**, select **main**. + 1. Keep the default option selected to **Add a workflow**. + 1. Under **Authentication type**, select **User-assigned identity**. + 1. In the top menu, select **Save**. App Service commits a workflow file into the chosen GitHub repository, in the `.github/workflows` directory. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-5.png" alt-text="A screenshot showing how to configure CI/CD using GitHub Actions (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-5.png"::: + :::column-end::: + :::column span="2"::: + **Step 6:** In the Deployment Center page: + 1. Select **Logs**. A deployment run is already started. + 1. In the log item for the deployment run, select **Build/Deploy Logs**. 
+ :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-6.png" alt-text="A screenshot showing how to open deployment logs in the deployment center (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-6.png"::: + :::column-end::: + :::column span="2"::: + **Step 7:** You're taken to your GitHub repository and see that the GitHub action is running. The workflow file defines two separate stages, build and deploy. Wait for the GitHub run to show a status of **Complete**. It takes about 5 minutes. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-7.png" alt-text="A screenshot showing a GitHub run in progress (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-deploy-sample-code-fastapi-7.png"::: + :::column-end::: ++Having issues? Check the [Troubleshooting guide](configure-language-python.md#troubleshooting). ++## 4. Generate database schema ++In previous section, you added *src/entrypoint.sh* as the startup command for your app. *entrypoint.sh* contains the following line: `python3 src/fastapi_app/seed_data.py`. This command migrates your database. In the sample app, it only ensures that the correct tables are created in your database. It doesn't populate these tables with any data. ++In this section, you'll run this command manually for demonstration purposes. With the PostgreSQL database protected by the virtual network, the easiest way to run the command is in an SSH session with the App Service container. ++ :::column span="2"::: + **Step 1:** Back in the App Service page, in the left menu, + 1. Select **SSH**. + 1. Select **Go**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-generate-db-schema-fastapi-1.png" alt-text="A screenshot showing how to open the SSH shell for your app from the Azure portal (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-generate-db-schema-fastapi-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** In the SSH terminal, run `python3 src/fastapi_app/seed_data.py`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations). + Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-generate-db-schema-fastapi-2.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-generate-db-schema-fastapi-2.png"::: + :::column-end::: ++## 5. Browse to the app ++ :::column span="2"::: + **Step 1:** In the App Service page: + 1. From the left menu, select **Overview**. + 1. Select the URL of your app. You can also navigate directly to `https://<app-name>.azurewebsites.net`. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-browse-app-1.png" alt-text="A screenshot showing how to launch an App Service from the Azure portal (FastAPI)." 
lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-browse-app-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** Add a few restaurants to the list. + Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-browse-app-2.png" alt-text="A screenshot of the FastAPI web app with PostgreSQL running in Azure showing restaurants and restaurant reviews (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-browse-app-2.png"::: + :::column-end::: ++## 6. Stream diagnostic logs ++The sample app uses the Python Standard Library logging module to help you diagnose issues with your application. The sample app includes calls to the logger as shown in the following code. +++ :::column span="2"::: + **Step 1:** In the App Service page: + 1. From the left menu, under **Monitoring**, select **App Service logs**. + 1. Under **Application logging**, select **File System**. + 1. In the top menu, select **Save**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-stream-diagnostic-logs-1-fastapi.png" alt-text="A screenshot showing how to enable native logs in App Service in the Azure portal." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-stream-diagnostic-logs-1-fastapi.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** From the left menu, select **Log stream**. You see the logs for your app, including platform logs and logs from inside the container. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-stream-diagnostic-logs-2-fastapi.png" alt-text="A screenshot showing how to view the log stream in the Azure portal." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-stream-diagnostic-logs-2-fastapi.png"::: + :::column-end::: ++Events can take several minutes to show up in the diagnostic logs. Learn more about logging in Python apps in the series on [setting up Azure Monitor for your Python application](/azure/azure-monitor/app/opencensus-python). ++## 7. Clean up resources ++When you're finished, you can delete all of the resources from your Azure subscription by deleting the resource group. ++ :::column span="2"::: + **Step 1:** In the search bar at the top of the Azure portal: + 1. Enter the resource group name. + 1. Select the resource group. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-clean-up-resources-1.png" alt-text="A screenshot showing how to search for and navigate to a resource group in the Azure portal." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-clean-up-resources-1.png"::: + :::column-end::: + :::column span="2"::: + **Step 2:** In the resource group page, select **Delete resource group**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-clean-up-resources-2.png" alt-text="A screenshot showing the location of the Delete Resource Group button in the Azure portal." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-clean-up-resources-2.png"::: + :::column-end::: + :::column span="2"::: + **Step 3:** + 1. 
Enter the resource group name to confirm your deletion. + 1. Select **Delete**. + :::column-end::: + :::column::: + :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-clean-up-resources-3.png" alt-text="A screenshot of the confirmation dialog for deleting a resource group in the Azure portal." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-clean-up-resources-3.png":::: + :::column-end::: +++## 1. Create Azure resources and deploy a sample app ++In this step, you create the Azure resources and deploy a sample app to App Service on Linux. The steps used in this tutorial create a set of secure-by-default resources that include App Service and Azure Database for PostgreSQL. ++1. If you haven't already, clone the sample repository's `starter-no-infra` branch in a local terminal. + + ```bash + git clone -b starter-no-infra https://github.com/Azure-Samples/msdocs-fastapi-postgresql-sample-app + cd msdocs-fastapi-postgresql-sample-app + ``` ++ This cloned branch is your starting point. It contains a simple data-drive FastAPI application. ++1. From the repository root, run `azd init`. ++ ```bash + azd init --template msdocs-fastapi-postgresql-sample-app + ``` ++1. When prompted, give the following answers: + + |Question |Answer | + ||| + |The current directory is not empty. Would you like to initialize a project here in '\<your-directory>'? | **Y** | + |What would you like to do with these files? | **Keep my existing files unchanged** | + |Enter a new environment name | Type a unique name. The azd template uses this name as part of the DNS name of your web app in Azure (`<app-name>.azurewebsites.net`). Alphanumeric characters and hyphens are allowed. | ++1. Run the `azd up` command to provision the necessary Azure resources and deploy the app code. If you aren't already signed-in to Azure, the browser will launch and ask you to sign-in. The `azd up` command will also prompt you to select the desired subscription and location to deploy to. ++ ```bash + azd up + ``` ++ The `azd up` command can take several minutes to complete. It also compiles and deploys your application code. While it's running, the command provides messages about the provisioning and deployment process, including a link to the deployment in Azure. When it finishes, the command also displays a link to the deploy application. ++ This azd template contains files (*azure.yaml* and the *infra* directory) that generate a secure-by-default architecture with the following Azure resources: ++ - **Resource group** → The container for all the created resources. + - **App Service plan** → Defines the compute resources for App Service. A Linux plan in the *B1* tier is specified. + - **App Service** → Represents your app and runs in the App Service plan. + - **Virtual network** → Integrated with the App Service app and isolates back-end network traffic. + - **Azure Database for PostgreSQL flexible server** → Accessible only from within the virtual network. A database and a user are created for you on the server. + - **Private DNS zone** → Enables DNS resolution of the PostgreSQL server in the virtual network. + - **Log Analytics workspace** → Acts as the target container for your app to ship its logs, where you can also query the logs. ++1. When the `azd up` command completes, note down the values for the **Subscription ID** (Guid), the **App Service**, and the **Resource Group** in the output. You use them in the following sections. 
Your output will look similar to the following (partial) output: ++ ```output + Subscription: Your subscription name (1111111-1111-1111-1111-111111111111) + Location: East US + + You can view detailed progress in the Azure Portal: + https://portal.azure.com/#view/HubsExtension/DeploymentDetailsBlade/~/overview/id/%2Fsubscriptions%2F1111111-1111-1111-1111-111111111111%2Fproviders%2FMicrosoft.Resources%2Fdeployments%2Fyourenv-1721867673 + + (Γ£ô) Done: Resource group: yourenv-rg + (Γ£ô) Done: Virtual Network: yourenv-e2najjk4vewf2-vnet + (Γ£ô) Done: App Service plan: yourenv-e2najjk4vewf2-service-plan + (Γ£ô) Done: Log Analytics workspace: yourenv-e2najjk4vewf2-workspace + (Γ£ô) Done: Application Insights: yourenv-e2najjk4vewf2-appinsights + (Γ£ô) Done: Portal dashboard: yourenv-e2najjk4vewf2-dashboard + (Γ£ô) Done: App Service: yourenv-e2najjk4vewf2-app-service + (Γ£ô) Done: Azure Database for PostgreSQL flexible server: yourenv-e2najjk4vewf2-postgres-server + (Γ£ô) Done: Cache for Redis: yourenv-e2najjk4vewf2-redisCache + (Γ£ô) Done: Private Endpoint: cache-privateEndpoint + + SUCCESS: Your application was provisioned in Azure in 32 minutes. + You can view the resources created under the resource group yourenv-rg in Azure Portal: + https://portal.azure.com/#@/resource/subscriptions/1111111-1111-1111-1111-111111111111/resourceGroups/yourenv-rg/overview + + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service web + - Endpoint: https://yourenv-e2najjk4vewf2-app-service.azurewebsites.net/ + + ``` ++## 2. Examine the database connection string ++The azd template generates the connectivity variables for you as [app settings](configure-common.md#configure-app-settings). App settings are one way to keep connection secrets out of your code repository. ++1. In the `infra/resources.bicep` file, find the app settings and find the setting for `AZURE_POSTGRESQL_CONNECTIONSTRING`. ++ :::code language="python" source="~/msdocs-fastapi-postgresql-sample-app/infra/resources.bicep" range="180-188" highlight="5"::: ++1. `AZURE_POSTGRESQL_CONNECTIONSTRING` contains the connection string to the Postgres database in Azure. You need to use it in your code to connect to it. You can find the code that uses this environment variable in *src/fastapi/models.py*: ++ :::code language="python" source="~/msdocs-fastapi-postgresql-sample-app/src/fastapi_app/models.py" range="13-40" highlight="4-16"::: ++## 3. Examine the startup command ++Azure App Service requires a startup command to run your FastAPI app. The azd template sets this command for you in your App Service instance. ++1. In the `infra/resources.bicep` file, find the declaration for your web site and then find the setting for `appCommandLine`. This is the setting for your startup command. ++ :::code language="python" source="~/msdocs-fastapi-postgresql-sample-app/infra/resources.bicep" range="160-178" highlight="12"::: ++1. The startup command runs the file *src/entrypoint.sh*. Examine the code in that file to understand the commands that App Service runs to start your app: ++ :::code language="python" source="~/msdocs-fastapi-postgresql-sample-app/src/entrypoint.sh" range="1-6"::: ++To learn more about app configuration and startup in App Service, see [Configure a Linux Python app for Azure App Service](configure-language-python.md). ++## 4. Generate database schema ++You might have noticed in the previous section that *entrypoint.sh* contains the following line: `python3 src/fastapi_app/seed_data.py`. This command migrates your database. 
In the sample app, it only ensures that the correct tables are created in your database. It doesn't populate these tables with any data. ++In this section, you'll run this command manually for demonstration purposes. With the PostgreSQL database protected by the virtual network, the easiest way to run the command is in an SSH session with the App Service container. ++1. Use the value of the **App Service** that you noted previously in the azd output and the template shown below, to construct the URL for the SSH session and navigate to it in the browser: ++ ``` + https://<app-name>.scm.azurewebsites.net/webssh/host + ``` ++1. In the SSH terminal, run `python3 src/fastapi_app/seed_data.py`. If it succeeds, App Service is [connecting successfully to the database](#i-get-an-error-when-running-database-migrations). ++ :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-generate-db-schema-fastapi-2.png" alt-text="A screenshot showing the commands to run in the SSH shell and their output (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-generate-db-schema-fastapi-2.png"::: ++ > [!NOTE] + > Only changes to files in `/home` can persist beyond app restarts. Changes outside of `/home` aren't persisted. + > ++## 5. Browse to the app ++1. In the azd output, find the URL of your app and navigate to it in the browser. The URL looks like this in the AZD output: ++ <pre> + Deploying services (azd deploy) + + (Γ£ô) Done: Deploying service web + - Endpoint: https://<app-name>.azurewebsites.net/ + </pre> ++2. Add a few restaurants to the list. ++ :::image type="content" source="./media/tutorial-python-postgresql-app-fastapi/azure-portal-browse-app-2.png" alt-text="A screenshot of the FastAPI web app with PostgreSQL running in Azure showing restaurants and restaurant reviews (FastAPI)." lightbox="./media/tutorial-python-postgresql-app-fastapi/azure-portal-browse-app-2.png"::: ++ Congratulations, you're running a web app in Azure App Service, with secure connectivity to Azure Database for PostgreSQL. ++## 6. Stream diagnostic logs ++Azure App Service can capture logs to help you diagnose issues with your application. For convenience, the azd template has already [enabled logging to the local file system](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer). ++The sample app uses the Python Standard Library logging module to output logs. The sample app includes calls to the logger as shown below. +++Use the values of the **Subscription ID** (Guid), **Resource Group**, and **App Service** that you noted previously in the azd output and the template shown below, to construct the URL to stream App Service logs and navigate to it in the browser. ++``` +https://portal.azure.com/#@/resource/subscriptions/<subscription-guid>/resourceGroups/<group-name>/providers/Microsoft.Web/sites/<app-name>/logStream +``` ++Events can take several minutes to show up in the diagnostic logs. Learn more about logging in Python apps in the series on [setting up Azure Monitor for your Python application](/azure/azure-monitor/app/opencensus-python). ++## 7. Clean up resources ++To delete all Azure resources in the current deployment environment, run `azd down`. ++```bash +azd down +``` ++## Troubleshooting ++Listed below are issues you might encounter while trying to work through this tutorial and steps to resolve them. ++#### I can't connect to the SSH session ++If you can't connect to the SSH session, then the app itself has failed to start. 
Check the [diagnostic logs](#6-stream-diagnostic-logs) for details. For example, if you see an error like `KeyError: 'AZURE_POSTGRESQL_CONNECTIONSTRING'`, it might mean that the environment variable is missing (you might have removed the app setting). ++#### I get an error when running database migrations ++If you encounter any errors related to connecting to the database, check if the app settings (`AZURE_POSTGRESQL_CONNECTIONSTRING`) have been changed. Without that connection string, the migrate command can't communicate with the database. ++## Frequently asked questions ++- [How much does this setup cost?](#how-much-does-this-setup-cost) +- [How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools?](#how-do-i-connect-to-the-postgresql-server-thats-secured-behind-the-virtual-network-with-other-tools) +- [How does local app development work with GitHub Actions?](#how-does-local-app-development-work-with-github-actions) ++#### How much does this setup cost? ++Pricing for the created resources is as follows: ++- The App Service plan is created in **Basic** tier and can be scaled up or down. See [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/linux/). +- The PostgreSQL flexible server is created in the lowest burstable tier **Standard_B1ms**, with the minimum storage size, which can be scaled up or down. See [Azure Database for PostgreSQL pricing](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/). +- The virtual network doesn't incur a charge unless you configure extra functionality, such as peering. See [Azure Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/). +- The private DNS zone incurs a small charge. See [Azure DNS pricing](https://azure.microsoft.com/pricing/details/dns/). ++#### How do I connect to the PostgreSQL server that's secured behind the virtual network with other tools? ++- For basic access from a command-line tool, you can run `psql` from the app's SSH terminal. +- To connect from a desktop tool, your machine must be within the virtual network. For example, it could be an Azure VM that's connected to one of the subnets, or a machine in an on-premises network that has a [site-to-site VPN](../vpn-gateway/vpn-gateway-about-vpngateways.md) connection with the Azure virtual network. +- You can also [integrate Azure Cloud Shell](../cloud-shell/private-vnet.md) with the virtual network. ++#### How does local app development work with GitHub Actions? ++Using the autogenerated workflow file from App Service as an example, each `git push` kicks off a new build and deployment run. From a local clone of the GitHub repository, you make the desired updates and push to GitHub. For example: ++```terminal +git add . +git commit -m "<some-message>" +git push origin main +``` ++## Next steps ++Advance to the next tutorial to learn how to secure your app with a custom domain and certificate. ++> [!div class="nextstepaction"] +>┬á[Secure with custom domain and certificate](tutorial-secure-domain-certificate.md) ++Learn how App Service runs a Python app: ++> [!div class="nextstepaction"] +> [Configure Python app](configure-language-python.md) |
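The diagnostic logs section of the tutorial above refers to logger calls in the sample app, but the code include isn't reproduced in this entry. The following is an illustrative sketch only; the logger name and messages are assumptions rather than the sample's actual code. It shows the kind of standard-library logging call that surfaces in the App Service log stream once file system application logging is enabled.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("fastapi_app")


def list_restaurants():
    # Messages written through the logging module end up in the container's
    # stdout/stderr, which App Service captures in the log stream.
    logger.info("Fetching restaurant list from PostgreSQL")
```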
application-gateway | V1 Retirement | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/v1-retirement.md | We announced the deprecation of Application Gateway V1 on **April 28, 2023**. Be - Follow the steps outlined in the [migration script](./migrate-v1-v2.md) to migrate from Application Gateway v1 to v2. Review [pricing](./understanding-pricing.md) before making the transition. -- Use the video guide for [Migrate Application Gateway from v1 to v2](https://learn.microsoft.com/_themes/docs.theme/master/en-us/_themes/global/video-embed.html?id=7ed01e33-80a9-4daa-9322-e771f963a2fe) to understand the migration steps.+- Use the Migrate Application Gateway from v1 to v2 video guide to understand the migration steps. ++> [!VIDEO 7ed01e33-80a9-4daa-9322-e771f963a2fe] - If your company/organization has partnered with Microsoft or works with Microsoft representatives (like cloud solution architects (CSAs) or customer success account managers (CSAMs)), work with them for migration. |
azure-arc | Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md | This article describes how Arc resource bridge is upgraded, and the two ways upg ## Private cloud providers Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider. -For **Arc-enabled VMware vSphere**, manual upgrade and cloud upgrade are available. Appliances on version 1.0.15 and higher are automatically opted-in to cloud-managed upgrade. In order for either upgrade option to work, [the upgrade prerequisites](#prerequisites) must be met. Microsoft may attempt to perform a cloud-managed upgrade of your Arc resource bridge at any time if your appliance will soon be out of support. While Microsoft offers cloud-managed upgrade, you’re still responsible for ensuring that your Arc resource bridge is within the supported n-3 versions. Disruptions could cause cloud-managed upgrade to fail and you may need to manual upgrade the Arc resource bridge. If you are close to being out of support, please manual upgrade to stay in supported versions. Do not wait for cloud-managed upgrade. Any appliances that are earlier than version 1.0.15 must be manually upgraded. +For **Arc-enabled VMware vSphere**, manual upgrade and cloud upgrade are available. Appliances on version 1.0.15 and higher are automatically opted-in to cloud-managed upgrade. In order for either upgrade option to work, [the upgrade prerequisites](#prerequisites) must be met. Microsoft may attempt to perform a cloud-managed upgrade of your Arc resource bridge at any time if your appliance will soon be out of support. While Microsoft offers cloud-managed upgrade, you’re still responsible for ensuring that your Arc resource bridge is within the supported n-3 versions. Disruptions could cause cloud-managed upgrade to fail and you may need to manually upgrade the Arc resource bridge. If you are close to being out of support, manually upgrade to stay in supported versions. For **Azure Arc VM management (preview) on Azure Stack HCI**, appliance version 1.0.15 or higher is only available on Azure Stack HCI build 23H2. In HCI 23H2, the LCM tool manages upgrades across all HCI, Arc resource bridge, and extension components as a "validated recipe" package. Any preview version of Arc resource bridge must be removed prior to updating from 22H2 to 23H2. Attempting to upgrade Arc resource bridge independent of other HCI environment components may cause problems in your environment that could result in a disaster recovery scenario. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq). -For **Arc-enabled System Center Virtual Machine Manager (SCVMM)**, the manual upgrade feature is available for appliance version 1.0.14 and higher. Appliances below version 1.0.14 need to perform the recovery option to get to version 1.0.15 or higher. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources. +For **Arc-enabled System Center Virtual Machine Manager (SCVMM)**, the manual upgrade feature is available for appliance version 1.0.15 and higher. 
Appliances below version 1.0.15 need to perform the recovery option to get to version 1.0.15 or higher. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery). This deploys a new resource bridge and reconnects pre-existing Azure resources. ## Prerequisites |
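Where cloud-managed upgrade can't run and a manual upgrade is needed, the flow is typically to check the appliance's current version and then run the provider-specific upgrade command from the `az arcappliance` CLI extension. The following sketch is for the VMware provider; the resource group, appliance name, and configuration-file path are placeholders, and the exact command and prerequisites should be confirmed against the upgrade article referenced above.

```azurecli
# Check the current version and status of the Arc resource bridge (appliance).
az arcappliance show --resource-group <resource-group> --name <appliance-name>

# Manually upgrade an Arc-enabled VMware vSphere appliance after the prerequisites are met.
az arcappliance upgrade vmware --config-file <path-to>/<appliance-name>-appliance.yaml
```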
azure-cache-for-redis | Cache Redis Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-redis-samples.md | The [Manage Azure Cache for Redis using Azure Management Libraries](https://gith The [Access Azure Cache for Redis Monitoring data](https://github.com/rustd/RedisSamples/tree/master/CustomMonitoring) sample demonstrates how to access monitoring data for your Azure Cache for Redis outside of the Azure portal. -## A Twitter-style clone written using PHP and Redis +## An X-style clone written using PHP and Redis -The [Retwis](https://github.com/SyntaxC4-MSFT/retwis) sample is the Redis Hello World. It's a minimal Twitter-style social network clone written using Redis and PHP with the [Predis](https://github.com/nrk/predis) client. The source code is designed to be simple and at the same time to show different Redis data structures. +The [Retwis](https://github.com/SyntaxC4-MSFT/retwis) sample is the Redis Hello World. It's a minimal X-style social network clone written using Redis and PHP with the [Predis](https://github.com/nrk/predis) client. The source code is designed to be simple and at the same time to show different Redis data structures. ## Bandwidth monitor |
azure-functions | Durable Functions Configure Durable Functions With Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-configure-durable-functions-with-credentials.md | Title: "Configure Durable Functions with Microsoft Entra ID" -description: Configure Durable Functions with Managed Identity Credentials and Client Secret Credentials. + Title: "Quickstart: Authenticate a Durable Functions app by using Microsoft Entra ID" +description: Authenticate a Durable Functions app in Azure Functions by using managed identity credentials or client secret credentials in Microsoft Entra ID. Previously updated : 02/01/2023 Last updated : 07/24/2024 -# Configure Durable Functions with Microsoft Entra ID +# Quickstart: Authenticate a Durable Functions app by using Microsoft Entra ID -[Microsoft Entra ID](../../active-directory/fundamentals/active-directory-whatis.md) (Microsoft Entra ID) is a cloud-based identity and access management service. Identity-based connections allow Durable Functions to make authorized requests against Microsoft Entra protected resources, like an Azure Storage account, without the need to manage secrets manually. Using the default Azure storage provider, Durable Functions needs to authenticate against an Azure storage account. In this article, we show how to configure a Durable Functions app to utilize two kinds of Identity-based connections: **managed identity credentials** and **client secret credentials**. +[Microsoft Entra ID](/entra/fundamentals/whatis) is a cloud-based identity and access management service. Identity-based connections allow Durable Functions, a feature of Azure Functions, to make authorized requests against Microsoft Entra-protected resources, such as an Azure Storage account, without using manually managed secrets. When Durable Functions uses the default Azure storage provider, it must authenticate against an Azure storage account. +In this quickstart, you complete steps to set up a Durable Functions app to use two different kinds of identity-based connections: -## Configure your app to use managed identity (recommended) +* Managed identity credentials (recommended) +* Client secret credentials ++If you don't have an Azure account, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ++## Prerequisites ++To complete this quickstart, you need: ++* An existing Durable Functions project created in the Azure portal or a local Durable Functions project deployed to Azure. +* Familiarity with running a Durable Functions app in Azure. ++If you don't have an existing Durable Functions project deployed in Azure, we recommend that you start with one of the following quickstarts: ++* [Create a Durable Functions app - C#](durable-functions-isolated-create-first-csharp.md) +* [Create a Durable Functions app - JavaScript](quickstart-js-vscode.md) +* [Create a Durable Functions app - Python](quickstart-python-vscode.md) +* [Create a Durable Functions app - PowerShell](quickstart-powershell-vscode.md) +* [Create a Durable Functions app - Java](quickstart-java.md) ++## Configure your app to use managed identity credentials ++Your app can use a [managed identity](../../app-service/overview-managed-identity.md) to easily access other Microsoft Entra-protected resources, such as an instance of Azure Key Vault. 
Managed identity access is supported in the [Durable Functions extension](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask) version 2.7.0 and later. -A [managed identity](../../app-service/overview-managed-identity.md) allows your app to easily access other Microsoft Entra protected resources such as Azure Key Vault. Managed identity is supported in [Durable Functions extension](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask) versions **2.7.0** and greater. > [!NOTE]-> Strictly speaking, a managed identity is only available to apps when executing on Azure. When configured to use identity-based connections, a locally executing app will utilize your **developer credentials** to authenticate with Azure resources. Then, when deployed on Azure, it will utilize your managed identity configuration instead. +> A managed identity is available to apps only when they execute in Azure. When an app is configured to use identity-based connections, a locally executing app instead uses your *developer credentials* to authenticate with Azure resources. Then, when the app is deployed in Azure, it uses your managed identity configuration. ++### Enable a managed identity ++To begin, enable a managed identity for your application. Your function app must have either a *system-assigned managed identity* or a *user-assigned managed identity*. To enable a managed identity for your function app, and to learn more about the differences between the two types of identities, see the [managed identity overview](../../app-service/overview-managed-identity.md). ++### Assign access roles to the managed identity -### Prerequisites +Next, in the Azure portal, [assign](/entra/identity/managed-identities-azure-resources/how-to-assign-access-azure-resource) three role-based access control (RBAC) roles to your managed identity resource: -The following steps assume that you're starting with an existing Durable Functions app and are familiar with how to operate it. -In particular, this quickstart assumes that you have already: +* Storage Queue Data Contributor +* Storage Blob Data Contributor +* Storage Table Data Contributor -* Created a Durable Functions project in the Azure portal or deployed a local Durable Functions to Azure. +### Configure the managed identity -If this isn't the case, we suggest you start with one of the following articles, which provides detailed instructions on how to achieve all the requirements above: +Before you can use your app's managed identity, make some changes to the app configuration: -- [Create your first durable function - C#](durable-functions-create-first-csharp.md)-- [Create your first durable function - JavaScript](quickstart-js-vscode.md)-- [Create your first durable function - Python](quickstart-python-vscode.md)-- [Create your first durable function - PowerShell](quickstart-powershell-vscode.md)-- [Create your first durable function - Java](quickstart-java.md)+1. In the Azure portal, on your function app resource menu under **Settings**, select **Configuration**. -### Enable managed identity +1. In the list of settings, select **AzureWebJobsStorage** and select the **Delete** icon. -Only one identity is needed for your function, either a **system assigned managed identity** or a **user assigned managed identity**. To enable a managed identity for your function and learn more about the differences between the two identities, read the detailed instructions [here](../../app-service/overview-managed-identity.md). 
+ :::image type="content" source="media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-01.png" alt-text="Screenshot that shows default storage settings and deleting AzureWebJobsStorage." lightbox="media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-01.png"::: -### Assign Role-based Access Controls (RBAC) to managed identity +1. Add a setting to link your Azure storage account to the application. -Navigate to your app's storage resource on the Azure portal. Follow [these instructions](../../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) to assign the following roles to your managed identity resource. + Use *one of the following methods* depending on the cloud that your app runs in: -* Storage Queue Data Contributor -* Storage Blob Data Contributor -* Storage Table Data Contributor + * **Azure cloud**: If your app runs in *public Azure*, add a setting that identifies an Azure storage account name: -### Add managed identity configuration in the Azure portal + * `AzureWebJobsStorage__<accountName>` -Navigate to your Azure function app’s **Configuration** page and perform the following changes: + Example: `AzureWebJobsStorage__mystorageaccount123` -1. Remove the default value "AzureWebJobsStorage". + * **Non-Azure cloud**: If your application runs in a cloud outside of Azure, you must add a specific service URI (an *endpoint*) for the storage account instead of an account name. - [ ![Screenshot of default storage setting.](./media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-01.png)](./media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-01.png#lightbox) + > [!NOTE] + > If you use [Azure Government](../../azure-government/documentation-government-welcome.md) or any other cloud that's separate from public Azure, you must use the option to provide a specific service URI. For more information about using Azure Storage with Azure Government, see [Develop by using the Storage API in Azure Government](../../azure-government/documentation-government-get-started-connect-to-storage.md). -2. Link your Azure storage account by adding **either one** of the following value settings: + * `AzureWebJobsStorage__<blobServiceUri>` - * **AzureWebJobsStorage__accountName**: For example: `mystorageaccount123` + Example: `AzureWebJobsStorage__https://mystorageaccount123.blob.core.windows.net/` - * **AzureWebJobsStorage__blobServiceUri**: Example: `https://mystorageaccount123.blob.core.windows.net/` + * `AzureWebJobsStorage__<queueServiceUri>` - **AzureWebJobsStorage__queueServiceUri**: Example: `https://mystorageaccount123.queue.core.windows.net/` + Example: `AzureWebJobsStorage__https://mystorageaccount123.queue.core.windows.net/` - **AzureWebJobsStorage__tableServiceUri**: Example: `https://mystorageaccount123.table.core.windows.net/` + * `AzureWebJobsStorage__<tableServiceUri>` - > [!NOTE] - > If you are using [Azure Government](../../azure-government/documentation-government-welcome.md) or any other cloud that's separate from global Azure, then you will need to use this second option to provide specific service URLs. The values for these settings can be found in the storage account under the **Endpoints** tab. 
For more information on using Azure Storage with Azure Government, see the [Develop with Storage API on Azure Government](../../azure-government/documentation-government-get-started-connect-to-storage.md) documentation. + Example: `AzureWebJobsStorage__https://mystorageaccount123.table.core.windows.net/` - ![Screenshot of endpoint sample.](media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-02.png) + You can get the values for these URI variables in the storage account information on the **Endpoints** tab. -3. Finalize your managed identity configuration: + :::image type="content" source="media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-02.png" alt-text="Screenshot that shows an example of an endpoint as a specific service URI."::: - * If **system-assigned identity** should be used, then specify nothing else. +1. Finish your managed identity configuration: - * If **user-assigned identity** should be used, then add the following app settings values in your app configuration: - * **AzureWebJobsStorage__credential**: managedidentity + * If you use a *system-assigned identity*, make no other changes. - * **AzureWebJobsStorage__clientId**: (This is a GUID value that you obtain from the Microsoft Entra admin center) + * If you use a *user-assigned identity*, add the following settings to your app configuration: - ![Screenshot of user identity client id.](media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-03.png) + * For **AzureWebJobsStorage__credential**, enter **managedidentity**. + * For **AzureWebJobsStorage__clientId**, get this GUID value from the Microsoft Entra admin center. + :::image type="content" source="media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-03.png" alt-text="Screenshot that shows the user identity client ID." lightbox="media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-03.png"::: ## Configure your app to use client secret credentials -Registering a client application in Microsoft Entra ID is another way you can configure access to an Azure service. In the following steps, you will learn how to use client secret credentials for authentication to your Azure Storage account. This method can be used by function apps both locally and on Azure. However, client secret credential is **less recommended** than managed identity as it's more complicated to configure and manage and it requires sharing a secret credential with the Azure Functions service. +Registering a client application in Microsoft Entra ID is another way you can configure access to an Azure service for your Durable Functions app. In the following steps, you use client secret credentials for authentication to your Azure Storage account. Function apps can use this method both locally and in Azure. Using a client secret credential is *less recommended* than using managed identity credentials because a client secret is more complex to set up and manage. A client secret credential also requires sharing a secret credential with the Azure Functions service. -### Prerequisites +<a name='register-a-client-application-on-azure-active-directory'></a> -The following steps assume that you're starting with an existing Durable Functions app and are familiar with how to operate it. 
-In particular, this quickstart assumes that you have already: +### Register the client application with Microsoft Entra ID -* Created a Durable Functions project on your local machine or in the Azure portal. +1. In the Azure portal, [register the client application](/entra/identity-platform/quickstart-register-app) with Microsoft Entra ID. +1. Create a client secret for your application. In your registered application, complete these steps: -<a name='register-a-client-application-on-azure-active-directory'></a> + 1. Select **Certificates & secrets** > **New client secret**. ++ 1. For **Description**, enter a unique description. ++ 1. For **Expires**, enter a valid time for the secret to expire. ++ 1. *Copy the secret value to use later*. ++ The secret's value doesn't appear again after you leave the pane, so be sure that you *copy the secret and save it*. ++ :::image type="content" source="media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-01.png" alt-text="Screenshot that shows the Add a client secret pane." lightbox="media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-01.png"::: ++### Assign access roles to your application ++Next, assign three RBAC roles to your client application: ++* Storage Queue Data Contributor +* Storage Blob Data Contributor +* Storage Table Data Contributor ++To add the roles: ++1. In the Azure portal, go to your function's storage account. ++1. On the resource menu, select **Access Control (IAM)**, and then select **Add role assignment**. ++ :::image type="content" source="media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-02.png" alt-text="Screenshot that shows the Access control pane with Add role assignment highlighted." lightbox="media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-02.png"::: ++1. Select a role to add, select **Next**, and then search for your application. Review the role assignment, and then add the role. ++ :::image type="content" source="media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-03.png" alt-text="Screenshot that shows the role assignment pane." lightbox="media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-03.png"::: ++### Configure the client secret -### Register a client application on Microsoft Entra ID -1. Register a client application under Microsoft Entra ID in the Azure portal according to [these instructions](../../healthcare-apis/register-application.md). +In the Azure portal, run and test the application. To run and test the app locally, specify the following settings in the function’s *local.settings.json* file. -2. Create a client secret for your client application. In your registered application: +1. In the Azure portal, on your function app resource menu under **Settings**, select **Configuration**. - 1. Select **Certificates & Secrets** and select **New client secret**. +1. In the list of settings, select **AzureWebJobsStorage** and select the **Delete** icon. - 2. Fill in a **Description** and choose secret valid time in the **Expires** field. +1. Add a setting to link your Azure storage account to the application. - 3. Copy and save the secret value carefully because it will not show up again after you leave the page. 
- - ![Screenshot of client secret page.](media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-01.png) + Use *one of the following methods* depending on the cloud that your app runs in: -### Assign Role-based Access Controls (RBAC) to the client application + * **Azure cloud**: If your app runs in *public Azure*, add a setting that identifies an Azure storage account name: -Assign these three roles to your client application with the following steps. + * `AzureWebJobsStorage__<accountName>` -* Storage Queue Data Contributor -* Storage Blob Data Contributor -* Storage Table Data Contributor + Example: `AzureWebJobsStorage__mystorageaccount123` -1. Navigate to your function’s storage account **Access Control (IAM)** page and add a new role assignment. + * **Non-Azure cloud**: If your application runs in a cloud outside of Azure, you must add a specific service URI (an *endpoint*) for the storage account instead of an account name. - ![Screenshot of access control page.](media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-02.png) + > [!NOTE] + > If you use [Azure Government](../../azure-government/documentation-government-welcome.md) or any other cloud that's separate from public Azure, you must use the option to provide a specific service URI. For more information about using Azure Storage with Azure Government, see [Develop by using the Storage API in Azure Government](../../azure-government/documentation-government-get-started-connect-to-storage.md). -2. Choose the required role, click next, then search for your application, review and add. + * `AzureWebJobsStorage__<blobServiceUri>` - ![Screenshot of role assignment page.](media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-03.png) + Example: `AzureWebJobsStorage__https://mystorageaccount123.blob.core.windows.net/` -### Add client secret configuration + * `AzureWebJobsStorage__<queueServiceUri>` -To run and test in Azure, specify the followings in your Azure function app’s **Configuration** page in the Azure portal. To run and test locally, specify the following in the function’s **local.settings.json** file. + Example: `AzureWebJobsStorage__https://mystorageaccount123.queue.core.windows.net/` -1. Remove the default value "AzureWebJobsStorage". + * `AzureWebJobsStorage__<tableServiceUri>` -2. Link Azure storage account by adding either one of the following value settings: + Example: `AzureWebJobsStorage__https://mystorageaccount123.table.core.windows.net/` - * **AzureWebJobsStorage__accountName**: For example: `mystorageaccount123` + You can get the values for these URI variables in the storage account information on the **Endpoints** tab. - * **AzureWebJobsStorage__blobServiceUri**: Example: `https://mystorageaccount123.blob.core.windows.net/` + :::image type="content" source="media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-02.png" alt-text="Screenshot that shows an example of an endpoint as a specific service URI." lightbox="media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-02.png"::: - **AzureWebJobsStorage__queueServiceUri**: Example: `https://mystorageaccount123.queue.core.windows.net/` +1. 
To add client secret credentials, set the following values: - **AzureWebJobsStorage__tableServiceUri**: Example: `https://mystorageaccount123.table.core.windows.net/` - - The values for these Uri variables can be found in the storage account under the **Endpoints** tab. - - ![Screenshot of endpoint sample.](media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-02.png) + * **AzureWebJobsStorage__clientId**: Get this GUID value on the Microsoft Entra application pane. -3. Add a client secret credential by specifying the following values: - * **AzureWebJobsStorage__clientId**: (this is a GUID value found in the Microsoft Entra application page) + * **AzureWebJobsStorage__ClientSecret**: The secret value that you generated in the Microsoft Entra admin center in an earlier step. - * **AzureWebJobsStorage__ClientSecret**: (this is the secret value generated in the Microsoft Entra admin center in a previous step) + * **AzureWebJobsStorage__tenantId**: The tenant ID that the Microsoft Entra application is registered in. Get this GUID value on the Microsoft Entra application pane. - * **AzureWebJobsStorage__tenantId**: (this is the tenant ID that the Microsoft Entra application is registered in) + The values to use for the client ID and the tenant ID appear on your client application Overview pane. The client secret value is the one that you saved in an earlier step. The client secret's value isn't available after the pane is refreshed. - The client ID and tenant ID values can be found on your client application’s overview page. The client secret value is the one that was carefully saved in the previous step. It will not be available after the page is refreshed. - - ![Screenshot of application's overview page.](media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-04.png) + :::image type="content" source="media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-04.png" alt-text="Screenshot that shows the tenant ID and client ID on a Microsoft Entra application pane." lightbox="media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-04.png"::: |
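To make the shape of these settings concrete, here's a minimal, hypothetical *local.settings.json* sketch for local testing with client secret credentials. Only the storage-related settings discussed above are shown, and the account name, IDs, and secret are placeholders to replace with your own values.

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage__accountName": "mystorageaccount123",
    "AzureWebJobsStorage__clientId": "<application-client-id>",
    "AzureWebJobsStorage__ClientSecret": "<client-secret-value>",
    "AzureWebJobsStorage__tenantId": "<tenant-id>"
  }
}
```

In Azure, the same keys go into the function app's application settings. For the user-assigned managed identity path described earlier, you would instead pair `AzureWebJobsStorage__accountName` (or the service URIs) with `AzureWebJobsStorage__credential` set to `managedidentity` and `AzureWebJobsStorage__clientId`.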
azure-functions | Durable Functions Create First Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-create-first-csharp.md | - Title: "Create your first durable function in Azure using C#" -description: Create and publish an Azure Durable Function using Visual Studio or Visual Studio Code. -- Previously updated : 06/15/2022--zone_pivot_groups: code-editors-set-one ----# Create your first durable function in C# --Durable Functions is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. ---In this article, you learn how to use Visual Studio Code to locally create and test a "hello world" durable function. This function orchestrates and chains together calls to other functions. You can then publish the function code to Azure. These tools are available as part of the Visual Studio Code [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions). ---## Prerequisites --To complete this tutorial: --* Install [Visual Studio Code](https://code.visualstudio.com/download). --* Install the following Visual Studio Code extensions: - * [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) - * [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) --* Make sure that you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). --* Durable Functions require an Azure storage account. You need an Azure subscription. --* Make sure that you have version 3.1 or a later version of the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. ---## <a name="create-an-azure-functions-project"></a>Create your local project --In this section, you use Visual Studio Code to create a local Azure Functions project. --1. In Visual Studio Code, press <kbd>F1</kbd> (or <kbd>Ctrl/Cmd+Shift+P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. -- :::image type="content" source="media/durable-functions-create-first-csharp/functions-vscode-create-project.png" alt-text="Screenshot of create a function project window."::: --1. Choose an empty folder location for your project and choose **Select**. --1. Follow the prompts and provide the following information: -- | Prompt | Value | Description | - | | -- | -- | - | Select a language for your function app project | C# | Create a local C# Functions project. | - | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | - | Select a template for your project's first function | Skip for now | | - | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. | --Visual Studio Code installs the Azure Functions Core Tools if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. --## Add functions to the app --The following steps use a template to create the durable function code in your project. --1. In the command palette, search for and select `Azure Functions: Create Function...`. 
--1. Follow the prompts and provide the following information: -- | Prompt | Value | Description | - | | -- | -- | - | Select a template for your function | DurableFunctionsOrchestration | Create a Durable Functions orchestration | - | Provide a function name | HelloOrchestration | Name of the class in which functions are created | - | Provide a namespace | Company.Function | Namespace for the generated class | --1. When Visual Studio Code prompts you to select a storage account, choose **Select storage account**. Follow the prompts and provide the following information to create a new storage account in Azure: -- | Prompt | Value | Description | - | | -- | -- | - | Select subscription | *name of your subscription* | Select your Azure subscription | - | Select a storage account | Create a new storage account | | - | Enter the name of the new storage account | *unique name* | Name of the storage account to create | - | Select a resource group | *unique name* | Name of the resource group to create | - | Select a location | *region* | Select a region close to you | --A class containing the new functions is added to the project. Visual Studio Code also adds the storage account connection string to *local.settings.json* and a reference to the [`Microsoft.Azure.WebJobs.Extensions.DurableTask`](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask) NuGet package to the *.csproj* project file. --Open the new *HelloOrchestration.cs* file to view the contents. This durable function is a simple function chaining example with the following methods: --| Method | FunctionName | Description | -| -- | | -- | -| **`RunOrchestrator`** | `HelloOrchestration` | Manages the durable orchestration. In this case, the orchestration starts, creates a list, and adds the result of three functions calls to the list. When the three function calls are complete, it returns the list. | -| **`SayHello`** | `HelloOrchestration_Hello` | The function returns a hello. It's the function that contains the business logic that is being orchestrated. | -| **`HttpStart`** | `HelloOrchestration_HttpStart` | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. | --Now that you've created your function project and a durable function, you can test it on your local computer. --## Test the function locally --Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio Code. --1. To test your function, set a breakpoint in the `SayHello` activity function code and press <kbd>F5</kbd> to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. -- > [!NOTE] - > For more information on debugging, see [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging). --1. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function. -- :::image type="content" source="media/durable-functions-create-first-csharp/functions-vscode-f5.png" alt-text="Screenshot of Azure local output window."::: --1. Use a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), and then send an HTTP POST request to the URL endpoint. -- The response is the HTTP function's initial result, letting us know that the durable orchestration has started successfully. It isn't yet the end result of the orchestration. 
The response includes a few useful URLs. For now, let's query the status of the orchestration. --1. Copy the URL value for `statusQueryGetUri`, paste it into the browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. -- The request will query the orchestration instance for the status. You must get an eventual response, which shows us that the instance has completed and includes the outputs or results of the durable function. It looks like: -- ```json - { - "name": "HelloOrchestration", - "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a", - "runtimeStatus": "Completed", - "input": null, - "customStatus": null, - "output": [ - "Hello Tokyo!", - "Hello Seattle!", - "Hello London!" - ], - "createdTime": "2020-03-18T21:54:49Z", - "lastUpdatedTime": "2020-03-18T21:54:54Z" - } - ``` --1. To stop debugging, press <kbd>Shift + F5</kbd> in Visual Studio Code. --After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. ----## Test your function in Azure --1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in the following format: -- `https://<functionappname>.azurewebsites.net/api/HelloOrchestration_HttpStart` --1. Paste this new URL for the HTTP request into your browser's address bar. You must get the same status response as before when using the published app. --## Next steps --You have used Visual Studio Code to create and publish a C# durable function app. --> [!div class="nextstepaction"] -> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) ----In this article, you learn how to use Visual Studio 2022 to locally create and test a "hello world" durable function. This function orchestrates and chains-together calls to other functions. You then publish the function code to Azure. These tools are available as part of the Azure development workload in Visual Studio 2022. ---## Prerequisites --To complete this tutorial: --* Install [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure that the **Azure development** workload is also installed. Visual Studio 2019 also supports Durable Functions development, but the UI and steps differ. --* Verify that you have the [Azurite Emulator](../../storage/common//storage-use-azurite.md) installed and running. ---## Create a function app project --The Azure Functions template creates a project that can be published to a function app in Azure. A function app lets you group functions as a logical unit for easier management, deployment, scaling, and sharing of resources. --1. In Visual Studio, select **New** > **Project** from the **File** menu. --1. In the **Create a new project** dialog, search for `functions`, choose the **Azure Functions** template, and then select **Next**. -- :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-new-project.png" alt-text="Screenshot of new project dialog to create a function in Visual Studio."::: --1. Enter a **Project name** for your project, and select **OK**. The project name must be valid as a C# namespace, so don't use underscores, hyphens, or nonalphanumeric characters. --1. Under **Additional information**, use the settings specified in the table that follows the image. 
-- :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-new-function.png" alt-text="Screenshot of create a new Azure Functions Application dialog in Visual Studio."::: -- | Setting | Suggested value | Description | - | | - |-- | - | **Functions worker** | .NET 6 | Creates a function project that supports .NET 6 and the Azure Functions Runtime 4.0. For more information, see [How to target Azure Functions runtime version](../functions-versions.md). | - | **Function** | Empty | Creates an empty function app. | - | **Storage account** | Storage Emulator | A storage account is required for durable function state management. | --1. Select **Create** to create an empty function project. This project has the basic configuration files needed to run your functions. --## Add functions to the app --The following steps use a template to create the durable function code in your project. --1. Right-click the project in Visual Studio and select **Add** > **New Azure Function**. -- :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-add-function.png" alt-text="Screenshot of Add new function."::: --1. Verify **Azure Function** is selected from the add menu, enter a name for your C# file, and then select **Add**. --1. Select the **Durable Functions Orchestration** template and then select **Add**. -- :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-select-durable-template.png" alt-text="Screenshot of Select durable template."::: --A new durable function is added to the app. Open the new *.cs* file to view the contents. This durable function is a simple function chaining example with the following methods: --| Method | FunctionName | Description | -| -- | | -- | -| **`RunOrchestrator`** | `<file-name>` | Manages the durable orchestration. In this case, the orchestration starts, creates a list, and adds the result of three functions calls to the list. When the three function calls are complete, it returns the list. | -| **`SayHello`** | `<file-name>_Hello` | The function returns a hello. It's the function that contains the business logic that is being orchestrated. | -| **`HttpStart`** | `<file-name>_HttpStart` | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. | --You can test it on your local computer now that you've created your function project and a durable function. --## Test the function locally --Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio. --1. To test your function, press <kbd>F5</kbd>. If prompted, accept the request from Visual Studio to download and install Azure Functions Core (CLI) tools. You may also need to enable a firewall exception so that the tools can handle HTTP requests. --1. Copy the URL of your function from the Azure Functions runtime output. -- :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-debugging.png" alt-text="Screenshot of Azure local runtime."::: --1. Paste the URL for the HTTP request into your browser's address bar and execute the request. 
The following shows the response in the browser to the local GET request returned by the function: -- :::image type="content" source="./media/durable-functions-create-first-csharp/functions-vs-status.png" alt-text="Screenshot of the browser window with statusQueryGetUri called out."::: -- The response is the HTTP function's initial result, letting us know that the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. --1. Copy the URL value for `statusQueryGetUri`, paste it into the browser's address bar, and execute the request. -- The request will query the orchestration instance for the status. You must get an eventual response that looks like the following. This output shows us the instance has completed and includes the outputs or results of the durable function. -- ```json - { - "name": "Durable", - "instanceId": "d495cb0ac10d4e13b22729c37e335190", - "runtimeStatus": "Completed", - "input": null, - "customStatus": null, - "output": [ - "Hello Tokyo!", - "Hello Seattle!", - "Hello London!" - ], - "createdTime": "2019-11-02T07:07:40Z", - "lastUpdatedTime": "2019-11-02T07:07:52Z" - } - ``` --1. To stop debugging, press <kbd>Shift + F5</kbd>. --After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. --## Publish the project to Azure --You must have a function app in your Azure subscription before publishing your project. You can create a function app right from Visual Studio. ---## Test your function in Azure --1. Copy the base URL of the function app from the Publish profile page. Replace the `localhost:port` portion of the URL you used when testing the function locally with the new base URL. -- The URL that calls your durable function HTTP trigger must be in the following format: -- `https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>_HttpStart` --2. Paste this new URL for the HTTP request into your browser's address bar. You must get the same status response as before when using the published app. --## Next steps --You have used Visual Studio to create and publish a C# durable function app. --> [!div class="nextstepaction"] -> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) - |
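For context, the "hello world" chaining sample that this retired quickstart generated looks roughly like the following in-process sketch. The function names follow the tables above; the actual template file may differ in detail, so treat this as an illustrative sketch rather than the exact generated code.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

namespace Company.Function
{
    public static class HelloOrchestration
    {
        [FunctionName("HelloOrchestration")]
        public static async Task<List<string>> RunOrchestrator(
            [OrchestrationTrigger] IDurableOrchestrationContext context)
        {
            // Chain three activity calls and collect their results.
            var outputs = new List<string>
            {
                await context.CallActivityAsync<string>("HelloOrchestration_Hello", "Tokyo"),
                await context.CallActivityAsync<string>("HelloOrchestration_Hello", "Seattle"),
                await context.CallActivityAsync<string>("HelloOrchestration_Hello", "London")
            };
            return outputs;
        }

        [FunctionName("HelloOrchestration_Hello")]
        public static string SayHello([ActivityTrigger] string name, ILogger log)
        {
            // The activity contains the business logic that the orchestrator coordinates.
            log.LogInformation("Saying hello to {name}.", name);
            return $"Hello {name}!";
        }

        [FunctionName("HelloOrchestration_HttpStart")]
        public static async Task<HttpResponseMessage> HttpStart(
            [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestMessage req,
            [DurableClient] IDurableOrchestrationClient starter,
            ILogger log)
        {
            // Start a new orchestration instance and return the check-status payload,
            // including the statusQueryGetUri used in the steps above.
            string instanceId = await starter.StartNewAsync("HelloOrchestration", null);
            log.LogInformation("Started orchestration with ID = '{instanceId}'.", instanceId);
            return starter.CreateCheckStatusResponse(req, instanceId);
        }
    }
}
```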
azure-functions | Durable Functions Create Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-create-portal.md | The [Durable Functions](durable-functions-overview.md) extension for Azure Funct > [!NOTE] > -> * If you are developing durable functions in C#, you should instead consider [Visual Studio 2019 development](durable-functions-create-first-csharp.md). +> * If you are developing durable functions in C#, you should instead consider [Visual Studio 2019 development](durable-functions-isolated-create-first-csharp.md). > * If you are developing durable functions in JavaScript, you should instead consider [Visual Studio Code development](./quickstart-js-vscode.md). ## Create a function app |
azure-functions | Durable Functions Isolated Create First Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-isolated-create-first-csharp.md | Title: "Create your first C# durable function running in the isolated worker" -description: Create and publish a C# Azure Durable Function running in the isolated worker using Visual Studio or Visual Studio Code. + Title: "Quickstart: Create a C# Durable Functions app" +description: Create and publish a C# Durable Functions app in Azure Functions by using Visual Studio or Visual Studio Code. Previously updated : 06/05/2024 Last updated : 07/24/2024 zone_pivot_groups: code-editors-set-one ms.devlang: csharp -# Create your first Durable Function in C# +# Quickstart: Create a C# Durable Functions app -Durable Functions is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. +Use Durable Functions, a feature of [Azure Functions](../functions-overview.md), to write stateful functions in a serverless environment. Durable Functions manages state, checkpoints, and restarts in your application. -Like Azure Functions, Durable Functions supports two process models for .NET class library functions: ---To learn more about the two processes, refer to [Differences between in-process and isolated worker process .NET Azure Functions](../dotnet-isolated-in-process-differences.md). +Like Azure Functions, Durable Functions supports two process models for .NET class library functions. To learn more about the two processes, see [Differences between in-process and isolated worker process .NET Azure Functions](../dotnet-isolated-in-process-differences.md). ::: zone pivot="code-editor-vscode" -In this article, you learn how to use Visual Studio Code to locally create and test a "hello world" durable function. This function orchestrates and chains together calls to other functions. You can then publish the function code to Azure. These tools are available as part of the Visual Studio Code [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions). +In this quickstart, you use Visual Studio Code to locally create and test a "hello world" Durable Functions app. The function app orchestrates and chains together calls to other functions. Then, you publish the function code in Azure. The tools you use are available via the Visual Studio Code [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions). ## Prerequisites -To complete this tutorial: +To complete this quickstart, you need: ++* [Visual Studio Code](https://code.visualstudio.com/download) installed. -* Install [Visual Studio Code](https://code.visualstudio.com/download). +* The following Visual Studio Code extensions installed: -* Install the following Visual Studio Code extensions: * [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) * [C#](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp) -* Make sure that you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). +* The latest version of [Azure Functions Core Tools](../functions-run-local.md) installed. -* Durable Functions require an Azure storage account. You need an Azure subscription. +* An Azure subscription. 
To use Durable Functions, you must have an Azure Storage account. -* Make sure that you have version 3.1 or a later version of the [.NET Core SDK](https://dotnet.microsoft.com/download) installed. +* [.NET Core SDK](https://dotnet.microsoft.com/download) version 3.1 or later installed. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] -## <a name="create-an-azure-functions-project"></a>Create your local project +## <a name="create-an-azure-functions-project"></a>Create an Azure Functions project -In this section, you use Visual Studio Code to create a local Azure Functions project. +In Visual Studio Code, create a local Azure Functions project. -1. In Visual Studio Code, press <kbd>F1</kbd> (or <kbd>Ctrl/Cmd+Shift+P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. +1. On the **View** menu, select **Command Palette** (or select Ctrl+Shift+P). - :::image type="content" source="media/durable-functions-create-first-csharp/functions-vscode-create-project.png" alt-text="Screenshot of create a function project window."::: +1. At the prompt (`>`), enter and then select **Azure Functions: Create New Project**. -1. Choose an empty folder location for your project and choose **Select**. + :::image type="content" source="media/durable-functions-create-first-csharp/functions-vscode-create-project.png" alt-text="Screenshot that shows the command to create a Functions project."::: -1. Follow the prompts and provide the following information: +1. Select **Browse**. In the **Select Folder** dialog, go to a folder to use for your project, and then choose **Select**. - | Prompt | Value | Description | +1. At the prompts, select or enter the following values: ++ | Prompt | Action | Description | | | -- | -- |- | Select a language for your function app project | C# | Create a local C# Functions project. | - | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | - | Select a .NET runtime | .NET 8.0 isolated | Creates a function project that supports .NET 8 running in isolated worker process and the Azure Functions Runtime 4.0. For more information, see [How to target Azure Functions runtime version](../functions-versions.md). | - | Select a template for your project's first function | Durable Functions Orchestration | Create a Durable Functions orchestration | - | Choose a durable storage type | Azure Storage | The default storage provider for Durable Functions. See [Durable Functions storage providers](./durable-functions-storage-providers.md) for more details. | - | Provide a function name | HelloOrchestration | Name of the orchestration function | - | Provide a namespace | Company.Function | Namespace for the generated class | - | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. | + | **Select a language for your function app project** | Select **C#**. | Creates a local C# Functions project. | + | **Select a version** | Select **Azure Functions v4**. | You see this option only when Core Tools isn't already installed. Core Tools is installed the first time you run the app. | + | **Select a .NET runtime** | Select **.NET 8.0 isolated**. | Creates a Functions project that supports .NET 8 running in an isolated worker process and the Azure Functions Runtime 4.0. 
For more information, see [How to target Azure Functions runtime version](../functions-versions.md). | + | **Select a template for your project's first function** | Select **Durable Functions Orchestration**. | Creates a Durable Functions orchestration. | + | **Choose a durable storage type** | Select **Azure Storage**. | The default storage provider for Durable Functions. For more information, see [Durable Functions storage providers](./durable-functions-storage-providers.md). | + | **Provide a function name** | Enter **HelloOrchestration**. | A name for the orchestration function. | + | **Provide a namespace** | Enter **Company.Function**. | A namespace for the generated class. | + | **Select how you would like to open your project** | Select **Open in current window**. | Opens Visual Studio Code in the folder you selected. | -Visual Studio Code installs the Azure Functions Core Tools if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. +Visual Studio Code installs Azure Functions Core Tools if it's required to create the project. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. -There's also a file called *HelloOrchestration.cs*, which contains the basic building blocks of a Durable Functions app: +Another file, *HelloOrchestration.cs*, contains the basic building blocks of a Durable Functions app: | Method | Description | | -- | -- |-| **`HelloOrchestration`** | Defines the durable orchestration. In this case, the orchestration starts, creates a list, and adds the result of three functions calls to the list. When the three function calls are complete, it returns the list. | -| **`SayHello`** | Simple function returning hello. It's the function containing the business logic that is being orchestrated. | -| **`HelloOrchestration_HttpStart`** | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. | +| `HelloOrchestration` | Defines the Durable Functions app orchestration. In this case, the orchestration starts, creates a list, and then adds the result of three functions calls to the list. When the three function calls finish, it returns the list. | +| `SayHello` | A simple function app that returns *hello*. This function contains the business logic that is orchestrated. | +| `HelloOrchestration_HttpStart` | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a *check status* response. | -You can find more details about these functions in [Durable Functions types and features](./durable-functions-types-features-overview.md). +For more information about these functions, see [Durable Functions types and features](./durable-functions-types-features-overview.md). ## Configure storage -You can use [Azurite](../../storage/common/storage-use-azurite.md?tabs=visual-studio-code), which is an emulator for Azure Storage, to test the function locally. Do this by setting `AzureWebJobStorage` in _local.settings.json_ to `UseDevelopmentStorage=true`: +You can use [Azurite](../../storage/common/storage-use-azurite.md?tabs=visual-studio-code), an emulator for Azure Storage, to test the function locally. 
In *local.settings.json*, set the value for `AzureWebJobsStorage` to `UseDevelopmentStorage=true` like in this example: ```json { "IsEncrypted": false, "Values": { "AzureWebJobsStorage": "UseDevelopmentStorage=true" } } ```-You can install the Azurite extension on Visual Studio Code and start it by running `Azurite: Start` in the command palette. -There are other storage options you can use for your Durable Functions app. See [Durable Functions storage providers](durable-functions-storage-providers.md) to learn more about different storage options and what benefits they provide. +To install and start running the Azurite extension in Visual Studio Code, in the command palette, enter **Azurite: Start** and select Enter. +You can use other storage options for your Durable Functions app. For more information about storage options and benefits, see [Durable Functions storage providers](durable-functions-storage-providers.md). ## Test the function locally -Azure Functions Core Tools lets you run an Azure Functions project locally. You're prompted to install these tools the first time you start a function from Visual Studio Code. +Azure Functions Core Tools gives you the capability to run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function in Visual Studio Code. -1. To test your function, set a breakpoint in the `SayHello` activity function code and press <kbd>F5</kbd> to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. +1. In Visual Studio Code, set a breakpoint in the `SayHello` activity function code, and then select F5 to start the function app project. The terminal panel displays output from Core Tools. > [!NOTE]- > For more information on debugging, see [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging). + > For more information about debugging, see [Durable Functions diagnostics](durable-functions-diagnostics.md#debugging). + > + > If the message *No job functions found* appears, [update your Azure Functions Core Tools installation to the latest version](./../functions-core-tools-reference.md). - > [!NOTE] - > If you encounter a "No job functions found" error, please [update your Azure Functions Core Tools installation to the latest version](./../functions-core-tools-reference.md). Older versions of core tools do not support .NET isolated. +1. In the terminal panel, copy the URL endpoint of your HTTP-triggered function. -1. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function. + :::image type="content" source="media/durable-functions-create-first-csharp/isolated-functions-vscode-debugging.png" alt-text="Screenshot of the Azure local output window." lightbox="media/durable-functions-create-first-csharp/isolated-functions-vscode-debugging.png"::: - :::image type="content" source="media/durable-functions-create-first-csharp/isolated-functions-vscode-debugging.png" alt-text="Screenshot of Azure local output window."::: 
The response includes a few useful URLs. + The response is the HTTP function's initial result. It lets you know that the Durable Functions app orchestration started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs. At this point, your breakpoint in the activity function should be hit because the orchestration has started. Step through it to get a response for the status of the orchestration. -1. Copy the URL value for `statusQueryGetUri`, paste it into the browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. +1. Copy the URL value for `statusQueryGetUri`, paste it in your browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. - The request will query the orchestration instance for the status. You should see that the instance has completed and includes the outputs or results of the durable function. It looks like: + The request queries the orchestration instance for the status. You should see that the instance finished and that it includes the outputs or results of the Durable Functions app like in this example: ```json { Azure Functions Core Tools lets you run an Azure Functions project locally. You' "lastUpdatedTime":"2023-01-31T18:48:56Z" } ```- - > [!NOTE] - > You can observe the [replay behavior](./durable-functions-orchestrations.md#reliability) of Durable Functions through breakpoints. Because this is an important concept to understand, it's highly recommended that you read the linked article. -1. To stop debugging, press <kbd>Shift + F5</kbd> in Visual Studio Code. + > [!TIP] + > Learn how you can observe the [replay behavior](./durable-functions-orchestrations.md#reliability) of a Durable Functions app through breakpoints. -After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. +1. To stop debugging, in Visual Studio Code, select Shift+F5. ++After you verify that the function runs correctly on your local computer, it's time to publish the project to Azure. [!INCLUDE [functions-sign-in-vs-code](../../../includes/functions-sign-in-vs-code.md)] After you've verified that the function runs correctly on your local computer, i ## Test your function in Azure -1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in the following format: +1. In the Visual Studio Code output panel, copy the URL of the HTTP trigger. The URL that calls your HTTP-triggered function must be in the following format: ++ `https://<function-app-name>.azurewebsites.net/api/HelloOrchestration_HttpStart` - `https://<functionappname>.azurewebsites.net/api/HelloOrchestration_HttpStart` +1. Paste the new URL for the HTTP request in your browser's address bar. You must get the same status response that you got when you tested locally when you use the published app. -1. Paste this new URL for the HTTP request into your browser's address bar. You must get the same status response as before when using the published app. +The C# Durable Functions app that you created and published by using Visual Studio Code is ready to use. -## Next steps +## Clean up resources -You have used Visual Studio Code to create and publish a C# durable function app. 
+If you no longer need the resources that you created to complete the quickstart, to avoid related costs in your Azure subscription, [delete the resource group](/azure/azure-resource-manager/management/delete-resource-group?tabs=azure-portal#delete-resource-group) and all related resources. -> [!div class="nextstepaction"] -> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) +## Related content ++* Learn about [common Durable Functions app patterns](durable-functions-overview.md#application-patterns). ::: zone-end ::: zone pivot="code-editor-visualstudio" -In this article, you will learn how to use Visual Studio 2022 to locally create and test a "hello world" durable function that run in the isolated worker process. This function orchestrates and chains-together calls to other functions. You then publish the function code to Azure. These tools are available as part of the Azure development workload in Visual Studio 2022. +In this quickstart, you use Visual Studio 2022 to locally create and test a "hello world" Durable Functions app. The function orchestrates and chains together calls to other functions. Then, you publish the function code in Azure. The tools you use are available via the *Azure development workload* in Visual Studio 2022. ## Prerequisites -To complete this tutorial: +To complete this quickstart, you need: ++* [Visual Studio 2022](https://visualstudio.microsoft.com/vs/) installed. -* Install [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure that the **Azure development** workload is also installed. Visual Studio 2019 also supports Durable Functions development, but the UI and steps differ. + Make sure that the **Azure development** workload is also installed. Visual Studio 2019 also supports Durable Functions development, but the UI and steps are different. -* Verify that you have the [Azurite Emulator](../../storage/common/storage-use-azurite.md) installed and running. +* The [Azurite emulator](../../storage/common/storage-use-azurite.md) installed and running. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] ## Create a function app project -The Azure Functions template creates a project that can be published to a function app in Azure. A function app lets you group functions as a logical unit for easier management, deployment, scaling, and sharing of resources. +The Azure Functions template creates a project that you can publish to a function app in Azure. You can use a function app to group functions as a logical unit to more easily manage, deploy, scale, and share resources. -1. In Visual Studio, select **New** > **Project** from the **File** menu. +1. In Visual Studio, on the **File** menu, select **New** > **Project**. -2. In the **Create a new project** dialog, search for `functions`, choose the **Azure Functions** template, and then select **Next**. +1. On **Create a new project**, search for **functions**, select the **Azure Functions** template, and then select **Next**. - :::image type="content" source="./media/durable-functions-create-first-csharp/functions-isolated-vs-new-project.png" alt-text="Screenshot of new project dialog in Visual Studio."::: + :::image type="content" source="./media/durable-functions-create-first-csharp/functions-isolated-vs-new-project.png" alt-text="Screenshot of the New project dialog in Visual Studio."::: -3. Enter a **Project name** for your project, and select **OK**. 
The project name must be valid as a C# namespace, so don't use underscores, hyphens, or nonalphanumeric characters. +1. For **Project name**, enter a name for your project, and then select **OK**. The project name must be valid as a C# namespace, so don't use underscores, hyphens, or nonalphanumeric characters. -4. Under **Additional information**, use the settings specified in the table that follows the image. +1. On **Additional information**, use the settings that are described in the next table. - :::image type="content" source="./media/durable-functions-create-first-csharp/functions-isolated-vs-new-function.png" alt-text="Screenshot of create a new Azure Functions Application dialog in Visual Studio."::: + :::image type="content" source="./media/durable-functions-create-first-csharp/functions-isolated-vs-new-function.png" alt-text="Screenshot of the Create a new Azure Functions Application dialog in Visual Studio."::: - | Setting | Suggested value | Description | + | Setting | Action | Description | | | - |-- |- | **Functions worker** | .NET 8 Isolated (Long Term Support) | Creates a function project that supports .NET 8 running in isolated worker process and the Azure Functions Runtime 4.0. For more information, see [How to target Azure Functions runtime version](../functions-versions.md). | - | **Function** | Durable Functions Orchestration | Creates a Durable Functions orchestration. | + | **Functions worker** | Select **.NET 8 Isolated (Long Term Support)**. | Creates an Azure Functions project that supports .NET 8 running in an isolated worker process and the Azure Functions Runtime 4.0. For more information, see [How to target the Azure Functions runtime version](../functions-versions.md). | + | **Function** | Enter **Durable Functions Orchestration**. | Creates a Durable Functions orchestration. | -> [!NOTE] -> If you don't see .NET 8 isolated in the Functions worker drop-down, it could be because you don't have the latest Azure Functions toolsets and templates. Go to Tools -> Options -> Projects and Solutions -> Azure Functions -> Check for updates to download the latest. + > [!NOTE] + > If **.NET 8 Isolated (Long Term Support)** doesn't appear in the **Functions worker** menu, you might not have the latest Azure Functions tool sets and templates. Go to **Tools** > **Options** > **Projects and Solutions** > **Azure Functions** > **Check for updates to download the latest**. -5. Make sure the box for _"Use Azurite for runtime storage account (AzureWebJobStorage)"_ is checked. This will use Azurite emulator. Select **Create** to create a function project with a Durable Functions orchestration template. This project has the basic configuration files needed to run your functions. +1. To use the Azurite emulator, make sure that the **Use Azurite for runtime storage account (AzureWebJobStorage)** checkbox is selected. To create a Functions project by using a Durable Functions orchestration template, select **Create**. The project has the basic configuration files that you need to run your functions. -> [!NOTE] -> There are other storage options you can use for your Durable Functions app. See [Durable Functions storage providers](durable-functions-storage-providers.md) to learn more about different storage options and what benefits they provide. + > [!NOTE] + > You can choose other storage options for your Durable Functions app. For more information, see [Durable Functions storage providers](durable-functions-storage-providers.md). 
-In your function app, you'll see a file called *Function1.cs* containing three functions, which are the basic building blocks of a Durable Functions: +In your app folder, a file named *Function1.cs* contains three functions. The three functions are the basic building blocks of a Durable Functions app: | Method | Description | | -- | -- |-| **`RunOrchestrator`** | Defines the durable orchestration. In this case, the orchestration starts, creates a list, and adds the result of three functions calls to the list. When the three function calls are complete, it returns the list. | -| **`SayHello`** | The function returns a hello. It's the function that contains the business logic that is being orchestrated. | -| **`HttpStart`** | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. | +| `RunOrchestrator` | Defines the Durable Functions app orchestration. In this case, the orchestration starts, creates a list, and then adds the result of three functions calls to the list. When the three function calls finish, it returns the list. | +| `SayHello` | A simple function app that returns *hello*. This function contains the business logic that is orchestrated. | +| `HttpStart` | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a *check status* response. | -You can find more details about these functions in [Durable Functions types and features](./durable-functions-types-features-overview.md). +For more information about these functions, see [Durable Functions types and features](./durable-functions-types-features-overview.md). ## Test the function locally -Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio. +Azure Functions Core Tools gives you the capability to run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function in Visual Studio Code. -1. To test your function, set a breakpoint in the `SayHello` activity function code and press <kbd>F5</kbd>. If prompted, accept the request from Visual Studio to download and install Azure Functions Core (CLI) tools. You may also need to enable a firewall exception so that the tools can handle HTTP requests. +1. In Visual Studio Code, set a breakpoint in the `SayHello` activity function code, and then select F5. If you're prompted, accept the request from Visual Studio to download and install Azure Functions Core (command-line) tools. You might also need to enable a firewall exception so that the tools can handle HTTP requests. -> [!NOTE] -> For more information on debugging, see [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging). + > [!NOTE] + > For more information about debugging, see [Durable Functions diagnostics](durable-functions-diagnostics.md#debugging). ++1. Copy the URL of your function from the Azure Functions runtime output. -2. Copy the URL of your function from the Azure Functions runtime output. + :::image type="content" source="./media/durable-functions-create-first-csharp/isolated-functions-vs-debugging.png" alt-text="Screenshot of the Azure local runtime." 
lightbox="media/durable-functions-create-first-csharp/isolated-functions-vs-debugging.png"::: - :::image type="content" source="./media/durable-functions-create-first-csharp/isolated-functions-vs-debugging.png" alt-text="Screenshot of Azure local runtime."::: +1. Paste the URL for the HTTP request in your browser's address bar and execute the request. The following screenshot shows the response to the local GET request that the function returns in the browser: -3. Paste the URL for the HTTP request into your browser's address bar and execute the request. The following shows the response in the browser to the local GET request returned by the function: + :::image type="content" source="./media/durable-functions-create-first-csharp/isolated-functions-vs-status.png" alt-text="Screenshot of the browser window with statusQueryGetUri called out." lightbox="media/durable-functions-create-first-csharp/isolated-functions-vs-status.png"::: - :::image type="content" source="./media/durable-functions-create-first-csharp/isolated-functions-vs-status.png" alt-text="Screenshot of the browser window with statusQueryGetUri called out."::: + The response is the HTTP function's initial result. It lets you know that the durable orchestration started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs. - The response is the HTTP function's initial result, letting us know that the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. - - At this point, your breakpoint in the activity function should be hit because the orchestration has started. Step through it to get a response for the status of the orchestration. + At this point, your breakpoint in the activity function should be hit because the orchestration started. Step through it to get a response for the status of the orchestration. -4. Copy the URL value for `statusQueryGetUri`, paste it into the browser's address bar, and execute the request. +1. Copy the URL value for `statusQueryGetUri`, paste it in your browser's address bar, and execute the request. - The request will query the orchestration instance for the status. You should see that the instance has completed and includes the outputs of your activity invocations. It looks like: + The request queries the orchestration instance for the status. You should see that the instance finished and that it includes the outputs or results of the durable function, like in this example: ```json { Azure Functions Core Tools lets you run an Azure Functions project on your local } ``` -> [!NOTE] -> You can observe the [replay behavior](./durable-functions-orchestrations.md#reliability) of Durable Functions through breakpoints. Because this is an important concept to understand, it's highly recommended that you read the linked article. + > [!TIP] + > Learn how you can observe the [replay behavior](./durable-functions-orchestrations.md#reliability) of a Durable Functions app through breakpoints. -5. To stop debugging, press <kbd>Shift + F5</kbd>. +1. To stop debugging, select Shift+F5. -After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. +After you verify that the function runs correctly on your local computer, it's time to publish the project to Azure. ## Publish the project to Azure -You must have a function app in your Azure subscription before publishing your project. 
You can create a function app right from Visual Studio. +You must have a function app in your Azure subscription before you publish your project. You can create a function app in Visual Studio. [!INCLUDE [Publish the project to Azure](../../../includes/functions-vstools-publish.md)] ## Test your function in Azure -1. Copy the base URL of the function app from the Publish profile page. Replace the `localhost:port` portion of the URL you used when testing the function locally with the new base URL. +1. On the **Publish profile** page, copy the base URL of the function app. Replace the `localhost:port` portion of the URL that you used when you tested the function locally with the new base URL. The URL that calls your durable function HTTP trigger must be in the following format: `https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>_HttpStart` -2. Paste this new URL for the HTTP request into your browser's address bar. You must get the same status response as before when using the published app. +1. Paste the new URL for the HTTP request in your browser's address bar. When you test the published app, you must get the same status response that you got when you tested locally. ++The C# Durable Functions app that you created and published by using Visual Studio is ready to use. ++## Clean up resources -## Next steps +If you no longer need the resources that you created to complete the quickstart, to avoid related costs in your Azure subscription, [delete the resource group](/azure/azure-resource-manager/management/delete-resource-group?tabs=azure-portal#delete-resource-group) and all related resources. -You have used Visual Studio to create and publish a C# Durable Functions app. +## Related content -> [!div class="nextstepaction"] -> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) +* Learn about [common Durable Functions app patterns](durable-functions-overview.md#application-patterns). ::: zone-end |
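Both quickstart variants above describe the same three generated functions (`RunOrchestrator`, `SayHello`, and `HttpStart`). The following is a minimal sketch of what that generated code looks like for the .NET isolated worker model; it's illustrative rather than the verbatim template output, and class names, namespaces, and logging details can differ by template version.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.DurableTask;
using Microsoft.DurableTask.Client;

public static class HelloOrchestration
{
    // Orchestrator: chains three activity calls and returns the collected results.
    [Function(nameof(RunOrchestrator))]
    public static async Task<List<string>> RunOrchestrator(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        var outputs = new List<string>
        {
            await context.CallActivityAsync<string>(nameof(SayHello), "Tokyo"),
            await context.CallActivityAsync<string>(nameof(SayHello), "Seattle"),
            await context.CallActivityAsync<string>(nameof(SayHello), "London"),
        };
        return outputs; // ["Hello Tokyo!", "Hello Seattle!", "Hello London!"]
    }

    // Activity: the unit of "real work" that the orchestration coordinates.
    [Function(nameof(SayHello))]
    public static string SayHello([ActivityTrigger] string name) => $"Hello {name}!";

    // Client: an HTTP-triggered function that starts the orchestration
    // and returns the check-status response with the management URLs.
    [Function("HelloOrchestration_HttpStart")]
    public static async Task<HttpResponseData> HttpStart(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post")] HttpRequestData req,
        [DurableClient] DurableTaskClient client)
    {
        string instanceId = await client.ScheduleNewOrchestrationInstanceAsync(nameof(RunOrchestrator));
        return await client.CreateCheckStatusResponseAsync(req, instanceId);
    }
}
```

Setting a breakpoint in `SayHello` and sending a POST request to `/api/HelloOrchestration_HttpStart`, as described in the testing steps, exercises all three functions in order.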
azure-functions | Durable Functions Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-monitor.md | The monitor pattern refers to a flexible *recurring* process in a workflow - for # [C#](#tab/csharp) -* [Complete the quickstart article](durable-functions-create-first-csharp.md) +* [Complete the quickstart article](durable-functions-isolated-create-first-csharp.md) * [Clone or download the samples project from GitHub](https://github.com/Azure/azure-functions-durable-extension/tree/main/samples/precompiled) # [JavaScript](#tab/javascript) The monitor pattern refers to a flexible *recurring* process in a workflow - for This sample monitors a location's current weather conditions and alerts a user by SMS when the skies are clear. You could use a regular timer-triggered function to check the weather and send alerts. However, one problem with this approach is **lifetime management**. If only one alert should be sent, the monitor needs to disable itself after clear weather is detected. The monitoring pattern can end its own execution, among other benefits: -* Monitors run on intervals, not schedules: a timer trigger *runs* every hour; a monitor *waits* one hour between actions. A monitor's actions will not overlap unless specified, which can be important for long-running tasks. +* Monitors run on intervals, not schedules: a timer trigger *runs* every hour; a monitor *waits* one hour between actions. A monitor's actions won't overlap unless specified, which can be important for long-running tasks. * Monitors can have dynamic intervals: the wait time can change based on some condition. * Monitors can terminate when some condition is met or be terminated by another process. * Monitors can take parameters. The sample shows how the same weather-monitoring process can be applied to any requested location and phone number. This sample monitors a location's current weather conditions and alerts a user b This sample involves using the Weather Underground API to check current weather conditions for a location. -The first thing you need is a Weather Underground account. You can create one for free at [https://www.wunderground.com/signup](https://www.wunderground.com/signup). Once you have an account, you will need to acquire an API key. You can do so by visiting [https://www.wunderground.com/weather/api](https://www.wunderground.com/weather/api/?MR=1), then selecting Key Settings. The Stratus Developer plan is free and sufficient to run this sample. +The first thing you need is a Weather Underground account. You can create one for free at [https://www.wunderground.com/signup](https://www.wunderground.com/signup). Once you have an account, you need to acquire an API key. You can do so by visiting [https://www.wunderground.com/weather/api](https://www.wunderground.com/weather/api/?MR=1), then selecting Key Settings. The Stratus Developer plan is free and sufficient to run this sample. Once you have an API key, add the following **app setting** to your function app. This article explains the following functions in the sample app: [!code-csharp[Main](~/samples-durable-functions/samples/precompiled/Monitor.cs?range=41-78,97-115)] -The orchestrator requires a location to monitor and a phone number to send a message to when the whether becomes clear at the location. This data is passed to the orchestrator as a strongly typed `MonitorRequest` object. 
+The orchestrator requires a location to monitor and a phone number to send a message to when the weather becomes clear at the location. This data is passed to the orchestrator as a strongly typed `MonitorRequest` object. # [JavaScript](#tab/javascript) The **E3_Monitor** function uses the standard *function.json* for orchestrator f :::code language="javascript" source="~/azure-functions-durable-js/samples/E3_Monitor/function.json"::: -Here is the code that implements the function: +Here's the code that implements the function: :::code language="javascript" source="~/azure-functions-durable-js/samples/E3_Monitor/index.js"::: Here is the code that implements the function: This orchestrator function performs the following actions: -1. Gets the **MonitorRequest** consisting of the *location* to monitor and the *phone number* to which it will send an SMS notification. +1. Gets the **MonitorRequest** consisting of the *location* to monitor and the *phone number* to which it sends an SMS notification. 2. Determines the expiration time of the monitor. The sample uses a hard-coded value for brevity. 3. Calls **E3_GetIsClear** to determine whether there are clear skies at the requested location. 4. If the weather is clear, calls **E3_SendGoodWeatherAlert** to send an SMS notification to the requested phone number. 5. Creates a durable timer to resume the orchestration at the next polling interval. The sample uses a hard-coded value for brevity. 6. Continues running until the current UTC time passes the monitor's expiration time, or an SMS alert is sent. -Multiple orchestrator instances can run simultaneously by calling the orchestrator function multiple times. The location to monitor and the phone number to send an SMS alert to can be specified. Finally, do note that the orchestrator function is *not* running while waiting for the timer, so you will not get charged for it. +Multiple orchestrator instances can run simultaneously by calling the orchestrator function multiple times. The location to monitor and the phone number to send an SMS alert to can be specified. Finally, do note that the orchestrator function isn't* running while waiting for the timer, so you won't get charged for it. ### E3_GetIsClear activity function As with other samples, the helper activity functions are regular functions that use the `activityTrigger` trigger binding. The **E3_GetIsClear** function gets the current weather conditions using the Weather Underground API and determines whether the sky is clear. The *function.json* is defined as follows: :::code language="javascript" source="~/azure-functions-durable-js/samples/E3_GetIsClear/function.json"::: -And here is the implementation. +And here's the implementation. 
:::code language="javascript" source="~/azure-functions-durable-js/samples/E3_GetIsClear/index.js"::: Its *function.json* is simple: :::code language="javascript" source="~/azure-functions-durable-js/samples/E3_SendGoodWeatherAlert/function.json"::: -And here is the code that sends the SMS message: +And here's the code that sends the SMS message: :::code language="javascript" source="~/azure-functions-durable-js/samples/E3_SendGoodWeatherAlert/index.js"::: RetryAfter: 10 {"id": "f6893f25acf64df2ab53a35c09d52635", "statusQueryGetUri": "https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a35c09d52635?taskHub=SampleHubVS&connection=Storage&code={systemKey}", "sendEventPostUri": "https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a35c09d52635/raiseEvent/{eventName}?taskHub=SampleHubVS&connection=Storage&code={systemKey}", "terminatePostUri": "https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a35c09d52635/terminate?reason={text}&taskHub=SampleHubVS&connection=Storage&code={systemKey}"} ``` -The **E3_Monitor** instance starts and queries the current weather conditions for the requested location. If the weather is clear, it calls an activity function to send an alert; otherwise, it sets a timer. When the timer expires, the orchestration will resume. +The **E3_Monitor** instance starts and queries the current weather conditions for the requested location. If the weather is clear, it calls an activity function to send an alert; otherwise, it sets a timer. When the timer expires, the orchestration resumes. You can see the orchestration's activity by looking at the function logs in the Azure Functions portal. You can see the orchestration's activity by looking at the function logs in the 2018-03-01T01:14:54.030 Function completed (Success, Id=561d0c78-ee6e-46cb-b6db-39ef639c9a2c, Duration=62ms) ``` -The orchestration completes once its timeout is reached or clear skies are detected. You can also use the `terminate` API inside another function or invoke the **terminatePostUri** HTTP POST webhook referenced in the 202 response above. To use the webhook, replace `{text}` with the reason for the early termination. The HTTP POST URL will look roughly as follows: +The orchestration completes once its timeout is reached or clear skies are detected. You can also use the `terminate` API inside another function or invoke the **terminatePostUri** HTTP POST webhook referenced in the preceding 202 response. To use the webhook, replace `{text}` with the reason for the early termination. The HTTP POST URL looks roughly as follows: ``` POST https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a35c09d52635/terminate?reason=Because&taskHub=SampleHubVS&connection=Storage&code={systemKey} POST https://{host}/runtime/webhooks/durabletask/instances/f6893f25acf64df2ab53a ## Next steps -This sample has demonstrated how to use Durable Functions to monitor an external source's status using [durable timers](durable-functions-timers.md) and conditional logic. The next sample shows how to use external events and [durable timers](durable-functions-timers.md) to handle human interaction. +This sample demonstrates how to use Durable Functions to monitor an external source's status using [durable timers](durable-functions-timers.md) and conditional logic. The next sample shows how to use external events and [durable timers](durable-functions-timers.md) to handle human interaction. 
> [!div class="nextstepaction"]-> [Run the human interaction sample](durable-functions-phone-verification.md) +> [Run the human interaction sample](durable-functions-phone-verification.md) |
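For readers who prefer C#, the following is a minimal sketch of the same monitor pattern written for the .NET isolated worker model. The activity names, input type, and intervals are illustrative stand-ins for the `E3_*` functions described above, not the sample code itself.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.DurableTask;

public static class WeatherMonitorSketch
{
    public record MonitorRequest(string Location, string PhoneNumber);

    [Function(nameof(MonitorWeather))]
    public static async Task MonitorWeather(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        MonitorRequest request = context.GetInput<MonitorRequest>()!;

        // Hard-coded expiration for brevity, as in the sample.
        DateTime expiryTime = context.CurrentUtcDateTime.AddHours(6);

        while (context.CurrentUtcDateTime < expiryTime)
        {
            // Poll the external condition through an activity function.
            bool isClear = await context.CallActivityAsync<bool>("GetIsClear", request.Location);
            if (isClear)
            {
                await context.CallActivityAsync("SendGoodWeatherAlert", request.PhoneNumber);
                break; // the monitor disables itself after sending the single alert
            }

            // Sleep until the next polling interval; the orchestrator isn't running
            // (and isn't billed for compute) while the durable timer is pending.
            DateTime nextCheck = context.CurrentUtcDateTime.AddMinutes(30);
            await context.CreateTimer(nextCheck, CancellationToken.None);
        }
    }
}
```

Because the wait happens between iterations rather than on a fixed schedule, the monitor's actions never overlap, and the polling interval could just as easily be computed dynamically inside the loop.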
azure-functions | Durable Functions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md | Durable Functions is designed to work with all Azure Functions programming langu > This article uses tabs to support multiple versions of the Python programming model. The v2 model is generally available and is designed to provide a more code-centric way for authoring functions through decorators. For more details about how the v2 model works, refer to the [Azure Functions Python developer guide](../functions-reference-python.md). ::: zone-end -Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md). +Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio](durable-functions-isolated-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md). ## Application patterns The async HTTP API pattern addresses the problem of coordinating the state of lo ![A diagram of the HTTP API pattern](./media/durable-functions-concepts/async-http-api.png) -Durable Functions provides **built-in support** for this pattern, simplifying or even removing the code you need to write to interact with long-running function executions. For example, the Durable Functions quickstart samples ([C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [TypeScript](quickstart-ts-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), and [Java](quickstart-java.md)) show a simple REST command that you can use to start new orchestrator function instances. After an instance starts, the extension exposes webhook HTTP APIs that query the orchestrator function status. +Durable Functions provides **built-in support** for this pattern, simplifying or even removing the code you need to write to interact with long-running function executions. For example, the Durable Functions quickstart samples ([C#](durable-functions-isolated-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [TypeScript](quickstart-ts-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), and [Java](quickstart-java.md)) show a simple REST command that you can use to start new orchestrator function instances. After an instance starts, the extension exposes webhook HTTP APIs that query the orchestrator function status. The following example shows REST commands that start an orchestrator and query its status. For clarity, some protocol details are omitted from the example. Durable Functions are billed the same as Azure Functions. For more information, You can get started with Durable Functions in under 10 minutes by completing one of these language-specific quickstart tutorials: -* [C# using Visual Studio 2019](durable-functions-create-first-csharp.md) +* [C# using Visual Studio 2019](durable-functions-isolated-create-first-csharp.md) * [JavaScript using Visual Studio Code](quickstart-js-vscode.md) * [TypeScript using Visual Studio Code](quickstart-ts-vscode.md) * [Python using Visual Studio Code](quickstart-python-vscode.md) |
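To make the async HTTP API pattern above concrete from the caller's side, the following sketch polls the `statusQueryGetUri` webhook returned by the HTTP starter until the orchestration reaches a terminal state. It's written against the documented status payload (`runtimeStatus`, `output`); the helper name and polling interval are arbitrary choices for illustration.

```csharp
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class OrchestrationStatusPoller
{
    // Polls the status webhook from the 202 response of the HTTP starter function.
    public static async Task<JsonElement> WaitForCompletionAsync(HttpClient http, string statusQueryGetUri)
    {
        while (true)
        {
            string json = await http.GetStringAsync(statusQueryGetUri);
            using JsonDocument doc = JsonDocument.Parse(json);

            string status = doc.RootElement.GetProperty("runtimeStatus").GetString()!;
            if (status is "Completed" or "Failed" or "Terminated")
            {
                // Clone so the element remains usable after the document is disposed.
                return doc.RootElement.Clone();
            }

            await Task.Delay(TimeSpan.FromSeconds(5)); // simple fixed polling interval for the sketch
        }
    }
}
```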
azure-functions | Durable Functions Sequence | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-sequence.md | -Function chaining refers to the pattern of executing a sequence of functions in a particular order. Often the output of one function needs to be applied to the input of another function. This article describes the chaining sequence that you create when you complete the Durable Functions quickstart ([C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [TypeScript](quickstart-ts-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), or [Java](quickstart-java.md)). For more information about Durable Functions, see [Durable Functions overview](durable-functions-overview.md). +Function chaining refers to the pattern of executing a sequence of functions in a particular order. Often the output of one function needs to be applied to the input of another function. This article describes the chaining sequence that you create when you complete the Durable Functions quickstart ([C#](durable-functions-isolated-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [TypeScript](quickstart-ts-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), or [Java](quickstart-java.md)). For more information about Durable Functions, see [Durable Functions overview](durable-functions-overview.md). [!INCLUDE [durable-functions-prerequisites](../../../includes/durable-functions-prerequisites.md)] |
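As a minimal illustration of the chaining pattern described above (shown here in the .NET isolated worker model), each activity's output feeds the next activity's input. The activity names `F1`–`F3` are placeholders rather than functions from the quickstart project.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.DurableTask;

public static class ChainingSketch
{
    [Function(nameof(RunChain))]
    public static async Task<int> RunChain(
        [OrchestrationTrigger] TaskOrchestrationContext context)
    {
        // Each step waits for the previous result and passes it forward.
        int x = await context.CallActivityAsync<int>("F1", 10);
        int y = await context.CallActivityAsync<int>("F2", x);
        int z = await context.CallActivityAsync<int>("F3", y);
        return z;
    }
}
```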
azure-functions | Durable Functions Types Features Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-types-features-overview.md | In addition to triggering orchestrator or entity functions, the *durable client* ## Next steps -To get started, create your first durable function in [C#](durable-functions-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), or [Java](quickstart-java.md). +To get started, create your first durable function in [C#](durable-functions-isolated-create-first-csharp.md), [JavaScript](quickstart-js-vscode.md), [Python](quickstart-python-vscode.md), [PowerShell](quickstart-powershell-vscode.md), or [Java](quickstart-java.md). > [!div class="nextstepaction"] > [Read more about Durable Functions orchestrations](durable-functions-orchestrations.md) |
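Beyond starting orchestrations, the durable client mentioned above can also manage running instances. The following hedged sketch shows one client function raising an external event and another terminating an instance. The routes and event name are illustrative, and the method names come from the `Microsoft.DurableTask.Client` package for the isolated worker model, so verify them against the package version you have installed.

```csharp
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Azure.Functions.Worker.Http;
using Microsoft.DurableTask.Client;

public static class ClientOperationsSketch
{
    // Deliver an external event that an orchestrator awaits, for example with
    // context.WaitForExternalEvent<bool>("ApprovalEvent").
    [Function(nameof(ApproveInstance))]
    public static async Task<HttpResponseData> ApproveInstance(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "instances/{instanceId}/approve")] HttpRequestData req,
        [DurableClient] DurableTaskClient client,
        string instanceId)
    {
        await client.RaiseEventAsync(instanceId, "ApprovalEvent", true);
        return req.CreateResponse(HttpStatusCode.Accepted);
    }

    // Stop a running instance outright.
    [Function(nameof(CancelInstance))]
    public static async Task<HttpResponseData> CancelInstance(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "instances/{instanceId}/cancel")] HttpRequestData req,
        [DurableClient] DurableTaskClient client,
        string instanceId)
    {
        await client.TerminateInstanceAsync(instanceId, "Canceled by operator");
        return req.CreateResponse(HttpStatusCode.Accepted);
    }
}
```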
azure-functions | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-java.md | Title: Create your first durable function in Azure using Java -description: Create an Azure Durable Function in Java + Title: "Quickstart: Create a Java Durable Functions app" +description: Create and publish a Java Durable Functions app in Azure Functions. Choose manual setup, Maven, or Visual Studio Code. Previously updated : 12/12/2022 Last updated : 07/24/2024 ms.devlang: java zone_pivot_groups: create-java-durable-options -# Create your first durable function in Java +# Quickstart: Create a Java Durable Functions app -_Durable Functions_ is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. +Use Durable Functions, a feature of [Azure Functions](../functions-overview.md), to write stateful functions in a serverless environment. Durable Functions manages state, checkpoints, and restarts in your application. -In this quickstart, you'll learn how to create and test a "Hello World" Durable Functions app in Java. The most basic Durable Functions app contains the following three functions: +In this quickstart, you create and test a "hello world" Durable Functions app in Java. -- _Orchestrator function_ - describes a workflow that orchestrates other functions.-- _Activity function_ - called by the orchestrator function, performs work, and optionally returns a value.-- _Client function_ - a regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function.+The most basic Durable Functions app has three functions: -This quickstart will show you how to create this "Hello World" app, which you can do in different ways. Use the selector above to choose your preferred approach. +* **Orchestrator function**: A workflow that orchestrates other functions. +* **Activity function**: A function that is called by the orchestrator function, performs work, and optionally returns a value. +* **Client function**: A regular function in Azure that starts an orchestrator function. This example uses an HTTP-triggered function. ++This quickstart describes different ways to create this "hello world" app. Use the selector at the top of the page to set your preferred approach. ## Prerequisites -To complete this tutorial, you need: +To complete this quickstart, you need: ++* The [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure) version 8 or later installed. -- The [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 or newer.+* [Apache Maven](https://maven.apache.org) version 3.0 or later installed. -- [Apache Maven](https://maven.apache.org), version 3.0 or newer.+* The latest version of [Azure Functions Core Tools](../functions-run-local.md). -- Latest version of the [Azure Functions Core Tools](../functions-run-local.md).- - For Azure Functions 4.x, Core Tools **v4.0.4915** or newer is required. + For Azure Functions _4.x_, Core Tools version 4.0.4915 or later is required. -- An Azure Storage account, which requires that you have an Azure subscription.+* An Azure subscription. To use Durable Functions, you must have an Azure Storage account. 
[!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] ## Add required dependencies and plugins to your project -Add the following to your `pom.xml`: +Add the following code to your _pom.xml_ file: ```xml <properties> Add the following to your `pom.xml`: </build> ``` -## Add required JSON files +## Add the required JSON files -Add a `host.json` file to your project directory. It should look similar to the following: +Add a _host.json_ file to your project directory. It should look similar to the following example: ```json { Add a `host.json` file to your project directory. It should look similar to the } } ```+ > [!NOTE] > It's important to note that only the Azure Functions v4 extension bundle currently has the necessary support for Durable Functions for Java. Durable Functions for Java is _not_ supported in v3 and early extension bundles. For more information on extension bundles, see the [extension bundles documentation](../functions-bindings-register.md#extension-bundles). -Durable Functions needs a storage provider to store runtime state. Add a `local.settings.json` file to your project directory to configure the storage provider. To use Azure Storage as the provider, set the value of `AzureWebJobsStorage` to the connection string of your Azure Storage account: +Durable Functions needs a storage provider to store runtime state. Add a _local.settings.json_ file to your project directory to configure the storage provider. To use Azure Storage as the provider, set the value of `AzureWebJobsStorage` to the connection string of your Azure Storage account: ```json { Durable Functions needs a storage provider to store runtime state. Add a `local. ## Create your functions -The sample code below shows a simple example of each: +The following sample code shows a basic example of each type of function: ```java import com.microsoft.azure.functions.annotation.*; public class DurableFunctionsSample { } /**- * This is the activity function that gets invoked by the orchestrator function. + * This is the activity function that is invoked by the orchestrator function. */ @FunctionName("Capitalize") public String capitalize(@DurableActivityTrigger(name = "name") String name, final ExecutionContext context) { public class DurableFunctionsSample { } ```+ ::: zone-end ::: zone pivot="create-option-maven-command" -## Create a local project with Maven command +## Create a local project by using the Maven command -1. Run the following command to generate a project with the basic functions of a Durable Functions app: +Run the following command to generate a project that contains the basic functions of a Durable Functions app: # [Bash](#tab/bash)+ ```bash mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DarchetypeVersion=1.51 -Dtrigger=durablefunctions ``` # [PowerShell](#tab/powershell)-```powershell ++```powershell mvn archetype:generate "-DarchetypeGroupId=com.microsoft.azure" "-DarchetypeArtifactId=azure-functions-archetype" "-DarchetypeVersion=1.51" "-Dtrigger=durablefunctions" ``` # [Cmd](#tab/cmd)-```cmd ++```cmd mvn archetype:generate "-DarchetypeGroupId=com.microsoft.azure" "-DarchetypeArtifactId=azure-functions-archetype" "-DarchetypeVersion=1.51" "-Dtrigger=durablefunctions" ```+ -2. 
Follow the prompts and provide the following information: +At the prompts, provide the following information: - | Prompt | Value | - | | -- | - | **groupId** | `com.function` | - | **artifactId** | `myDurableFunction` | - | **version** | `1.0-SNAPSHOT` | - | **package** | `com.function` | - | **Y** | Hit _enter_ to confirm | + | Prompt | Action | + | | -- | + | **groupId** | Enter **com.function**. | + | **artifactId** | Enter **myDurableFunction**. | + | **version** | Select **1.0-SNAPSHOT**. | + | **package** | Enter **com.function**. | + | **Y** | Enter **Y** and select Enter to confirm. | -Now you have a local project generated with the three functions that are needed for a basic Durable Functions app. +Now you have a local project that has the three functions that are in a basic Durable Functions app. -Please check to ensure you have `com.microsoft:durabletask-azure-functions` as a dependency in your `pom.xml`. +Check to ensure that `com.microsoft:durabletask-azure-functions` is set as a dependency in your _pom.xml_ file. -## Configure backend storage provider +## Configure the back-end storage provider -Durable Functions needs a storage provider to store runtime state. You can configure to use Azure Storage as the storage provider in `local.settings.json` by providing the connection string of your Azure Storage account as the value to `AzureWebJobsStorage`: +Durable Functions needs a storage provider to store runtime state. You can set Azure Storage as the storage provider in _local.settings.json_. Use the connection string of your Azure storage account as the value for `AzureWebJobsStorage` like in this example: ```json { Durable Functions needs a storage provider to store runtime state. You can confi "FUNCTIONS_WORKER_RUNTIME": "java" } }-``` +``` + ::: zone-end ::: zone pivot="create-option-vscode"-## Create your local project -1. In Visual Studio Code, press F1 (or Ctrl/Cmd+Shift+P) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. +## Create your local project ++1. In Visual Studio Code, select F1 (or select Ctrl/Cmd+Shift+P) to open the command palette. At the prompt (`>`), enter and then select **Azure Functions: Create New Project**. - ![Screenshot of create new functions project.](media/quickstart-js-vscode/functions-create-project.png) + :::image type="content" source="media/quickstart-js-vscode/functions-create-project.png" alt-text="Screenshot of the create new functions project command."::: -2. Choose an empty folder location for your project and choose **Select**. +1. Select **Browse**. In the **Select Folder** dialog, go to a folder to use for your project, and then choose **Select**. -3. Follow the prompts and provide the following information: +1. At the prompts, provide the following information: - |Prompt|Value| + | Prompt | Action | |--|--|- |**Select a language**| Choose `Java`.| - |**Select a version of Java**| Choose `Java 8` or newer, the Java version on which your functions run in Azure. Choose a Java version that you've verified locally. | - | **Provide a group ID** | `com.function`. | - | **Provide an artifact ID** | `myDurableFunction`. | - | **Provide a version** | `1.0-SNAPSHOT`. | - | **Provide a package name** | `com.function`. | - | **Provide an app name** | `myDurableFunction`. | - | **Select the build tool for Java project** | Choose `Maven`.| - |**Select how you would like to open your project**| Choose `Open in new window`.| --You now have a project with an example HTTP function. 
You can remove this function if you'd like because we'll be adding the basic functions of a Durable Functions app in the next step. + | **Select a language** | Select **Java**. | + | **Select a version of Java** | Select **Java 8** or later. Select the Java version that your functions run on in Azure, and one that you verified locally. | + | **Provide a group ID** | Enter **com.function**. | + | **Provide an artifact ID** | Enter **myDurableFunction**. | + | **Provide a version** | Enter **1.0-SNAPSHOT**. | + | **Provide a package name** | Enter **com.function**. | + | **Provide an app name** | Enter **myDurableFunction**. | + | **Select the build tool for Java project** | Select **Maven**. | + | **Select how you would like to open your project** | Select **Open in new window**. | ++You now have a project that has an example HTTP function. You can remove this function if you'd like to, because you add the basic functions of a Durable Functions app in the next step. ## Add functions to the project -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -2. Select `Change template filter` to `All`. +1. For **Change template filter**, select **All**. -3. Follow the prompts and provide the following information: +1. At the prompts, provide the following information: - | Prompt | Value | - | | -- | - | **Select a template for your function**| DurableFunctionsOrchestration | - | **Provide a package name** | `com.function` - | **Provide a function name** | `DurableFunctionsOrchestrator` | + | Prompt | Action | + | | -- | + | **Select a template for your function**| Select **DurableFunctionsOrchestration**. | + | **Provide a package name** | Enter **com.function**. | + | **Provide a function name** | Enter **DurableFunctionsOrchestrator**. | -4. Choose `Select storage account` on the pop-up window asking to set up storage account information and follow the prompts. +1. In the dialog, choose **Select storage account** to set up a storage account, and then follow the prompts. -You should now have the three basic functions for a Durable Functions app generated. +You should now have the three basic functions generated for a Durable Functions app. ## Configure pom.xml and host.json -Add the following dependency to your `pom.xml`: +Add the following dependency to your _pom.xml_ file: ```xml <dependency> Add the following dependency to your `pom.xml`: </dependency> ``` -Add the `extensions` property to your `host.json`: +Add the `extensions` property to your _host.json_ file: ```json "extensions": { "durableTask": { "hubName": "JavaTestHub" }} ``` ## Test the function locally -Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. +Azure Functions Core Tools gives you the capability to run an Azure Functions project on your local development computer. > [!NOTE]-> Durable Functions for Java requires Azure Functions Core Tools v4.0.4915 or newer. You can see which version is installed by running the `func --version` command from the terminal. +> Durable Functions for Java requires Azure Functions Core Tools version 4.0.4915 or later. You can see which version is installed by running the `func --version` command in the terminal. -1. If you are using Visual Studio Code, open a new terminal window and run the following commands to build the project: +1. 
If you're using Visual Studio Code, open a new terminal window and run the following commands to build the project: - ```bash - mvn clean package - ``` - - Then run the durable function: - - ```bash - mvn azure-functions:run - ``` + ```bash + mvn clean package + ``` ++ Then, run the durable function: -2. In the Terminal panel, copy the URL endpoint of your HTTP-triggered function. + ```bash + mvn azure-functions:run + ``` - ![Screenshot of Azure local output.](media/quickstart-java/maven-functions-run.png) +1. In the terminal panel, copy the URL endpoint of your HTTP-triggered function. -3. Using a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. You should get a response similar to the following: + :::image type="content" source="media/quickstart-java/maven-functions-run.png" alt-text="Screenshot of Azure local output."::: ++1. Use a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/) to send an HTTP POST request to the URL endpoint. The response should look similar to the following example: ```json { Azure Functions Core Tools lets you run an Azure Functions project on your local "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/d1b33a60-333f-4d6e-9ade-17a7020562a9/terminate?reason={text}&code=ACCupah_QfGKo..." } ```- - The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It is not yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. -4. Copy the URL value for `statusQueryGetUri` and paste it in the browser's address bar and execute the request. Alternatively you can also continue to use Postman or cURL to issue the GET request. + The response is the HTTP function's initial result. It lets you know that the durable orchestration started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs. For now, query the status of the orchestration. ++1. Copy the URL value for `statusQueryGetUri`, paste it in your browser's address bar, and execute the request. Alternatively, you can continue to use Postman to issue the GET request. - The request will query the orchestration instance for the status. You should get an eventual response, which shows us the instance has completed, and includes the outputs or results of the durable function. It looks like: + The request queries the orchestration instance for the status. You should see that the instance finished and that it includes the outputs or results of the durable function, like in this example: ```json { |
azure-functions | Quickstart Js Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-js-vscode.md | Title: Create your first durable function in Azure using JavaScript -description: Create and publish an Azure Durable Function in JavaScript using Visual Studio Code. + Title: "Quickstart: Create a JavaScript Durable Functions app" +description: Create and publish a JavaScript Durable Functions app in Azure Functions by using Visual Studio Code. Previously updated : 02/13/2023 Last updated : 07/24/2024 ms.devlang: javascript zone_pivot_groups: functions-nodejs-model -# Create your first durable function in JavaScript +# Quickstart: Create a JavaScript Durable Functions app -*Durable Functions* is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. +Use Durable Functions, a feature of [Azure Functions](../functions-overview.md), to write stateful functions in a serverless environment. You install Durable Functions by installing the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) in Visual Studio Code. The extension manages state, checkpoints, and restarts in your application. -In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function will orchestrate and chain together calls to other functions. You then publish the function code to Azure. +In this quickstart, you use the Durable Functions extension in Visual Studio Code to locally create and test a "hello world" Durable Functions app in Azure Functions. The Durable Functions app orchestrates and chains together calls to other functions. Then, you publish the function code to Azure. The tools you use are available via the Visual Studio Code extension. [!INCLUDE [functions-nodejs-model-pivot-description](../../../includes/functions-nodejs-model-pivot-description.md)] -![Screenshot of an Edge window. The window shows the output of invoking a simple durable function in Azure.](./media/quickstart-js-vscode/functions-vs-code-complete.png) ## Prerequisites -To complete this tutorial: +To complete this quickstart, you need: -* Install [Visual Studio Code](https://code.visualstudio.com/download). +* [Visual Studio Code](https://code.visualstudio.com/download) installed. ::: zone pivot="nodejs-model-v3"-* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) VS Code extension ++* The Visual Studio Code extension [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed. + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) VS Code extension version `1.10.4` or above. ++* The Visual Studio Code extension [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) version 1.10.4 or later installed. + ::: zone-end ::: zone pivot="nodejs-model-v3"-* Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). ++* The latest version of [Azure Functions Core Tools](../functions-run-local.md) installed. 
+ ::: zone-end+ ::: zone pivot="nodejs-model-v4"-* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5382` or above. ++* [Azure Functions Core Tools](../functions-run-local.md) version 4.0.5382 or later installed. + ::: zone-end -* Durable Functions require an Azure storage account. You need an Azure subscription. +* An Azure subscription. To use Durable Functions, you must have an Azure Storage account. ::: zone pivot="nodejs-model-v3"-* Make sure that you have version 16.x+ of [Node.js](https://nodejs.org/) installed. ++* [Node.js](https://nodejs.org/) version 16.x+ installed. + ::: zone-end-* Make sure that you have version 18.x+ of [Node.js](https://nodejs.org/) installed. +++* [Node.js](https://nodejs.org/) version 18.x+ installed. + ::: zone-end [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] -## <a name="create-an-azure-functions-project"></a>Create your local project +## <a name="create-an-azure-functions-project"></a>Create your local project -In this section, you use Visual Studio Code to create a local Azure Functions project. +In this section, you use Visual Studio Code to create a local Azure Functions project. -1. In Visual Studio Code, press <kbd>F1</kbd> (or <kbd>Ctrl/Cmd+Shift+P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. +1. In Visual Studio Code, select F1 (or select Ctrl/Cmd+Shift+P) to open the command palette. At the prompt (`>`), enter and then select **Azure Functions: Create New Project**. - ![Screenshot of the Visual Studio Code command palette. The command titled "Azure Functions: Create New Project..." is highlighted.](media/quickstart-js-vscode/functions-create-project.png) + :::image type="content" source="media/quickstart-js-vscode/functions-create-project.png" alt-text="Screenshot that shows the Visual Studio Code command palette with the command Azure Functions Create New Project highlighted."::: -2. Choose an empty folder location for your project and choose **Select**. +2. Select **Browse**. In the **Select Folder** dialog, go to a folder to use for your project, and then choose **Select**. ::: zone pivot="nodejs-model-v3"-3. Following the prompts, provide the following information: - | Prompt | Value | Description | +3. At the prompts, provide the following information: ++ | Prompt | Action | Description | | | -- | -- |- | Select a language for your function app project | JavaScript | Create a local Node.js Functions project. | - | Select a JavaScript programming model | Model V3 | Choose the V3 programming model. | - | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | - | Select a template for your project's first function | Skip for now | | - | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | + | **Select a language for your function app project** | Select **JavaScript**. | Creates a local Node.js Functions project. | + | **Select a JavaScript programming model** | Select **Model V3**. | Sets the v3 programming model. | + | **Select a version** | Select **Azure Functions v4**. | You see this option only when Core Tools isn't already installed. In this case, Core Tools is installed the first time you run the app. 
| + | **Select a template for your project's first function** | Select **Skip for now**. | | + | **Select how you would like to open your project** | Select **Open in current window**. | Opens Visual Studio Code in the folder you selected. | ::: zone-end+ ::: zone pivot="nodejs-model-v4"-3. Following the prompts, provide the following information: - | Prompt | Value | Description | +3. At the prompts, provide the following information: ++ | Prompt | Action | Description | | | -- | -- |- | Select a language for your function app project | JavaScript | Create a local Node.js Functions project. | - | Select a JavaScript programming model | Model V4 | Choose the V4 programming model. | - | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | - | Select a template for your project's first function | Skip for now | | - | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | + | **Select a language for your function app project** | Select **JavaScript**. | Creates a local Node.js Functions project. | + | **Select a JavaScript programming model** | Select **Model V4**. | Choose the v4 programming model. | + | **Select a version** | Select **Azure Functions v4**. | You see this option only when Core Tools isn't already installed. In this case, Core Tools is installed the first time you run the app. | + | **Select a template for your project's first function** | Select **Skip for now**. | | + | **Select how you would like to open your project** | Select **Open in current window**. | Opens Visual Studio Code in the folder you selected. | ::: zone-end -Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. +Visual Studio Code installs Azure Functions Core Tools if it's required to create a project. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. -A `package.json` file is also created in the root folder. +A *package.json* file is also created in the root folder. ## Install the Durable Functions npm package -To work with Durable Functions in a Node.js function app, you use a library called `durable-functions`. +To work with Durable Functions in a Node.js function app, you use a library called *durable-functions*. + ::: zone pivot="nodejs-model-v4"-To use the V4 programming model, you need to install the preview `v3.x` version of `durable-functions`. ++To use the v4 programming model, you install the preview v3.x version of the durable-functions library. + ::: zone-end -1. Use the *View* menu or <kbd>Ctrl + Shift + `</kbd> to open a new terminal in VS Code. +1. Use the **View** menu or select Ctrl+Shift+` to open a new terminal in Visual Studio Code. ::: zone pivot="nodejs-model-v3"-2. Install the `durable-functions` npm package by running `npm install durable-functions` in the root directory of the function app. ++2. Install the durable-functions npm package by running `npm install durable-functions` in the root directory of the function app. + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-2. 
Install the `durable-functions` npm package preview version by running `npm install durable-functions@preview` in the root directory of the function app. ++2. Install the durable-functions npm package preview version by running `npm install durable-functions@preview` in the root directory of the function app. + ::: zone-end -## Creating your functions +## Create your functions -The most basic Durable Functions app contains three functions: +The most basic Durable Functions app has three functions: -* *Orchestrator function* - describes a workflow that orchestrates other functions. -* *Activity function* - called by the orchestrator function, performs work, and optionally returns a value. -* *Client function* - a regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function. +* **Orchestrator function**: A workflow that orchestrates other functions. +* **Activity function**: A function that is called by the orchestrator function, performs work, and optionally returns a value. +* **Client function**: A regular function in Azure that starts an orchestrator function. This example uses an HTTP-triggered function. ::: zone pivot="nodejs-model-v3" ### Orchestrator function -You use a template to create the durable function code in your project. +You use a template to create the Durable Functions app code in your project. -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Following the prompts, provide the following information: +2. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions orchestrator | Create a Durable Functions orchestration | - | Choose a durable storage type. | Azure Storage (Default) | Select the storage backend used for Durable Functions. | - | Provide a function name | HelloOrchestrator | Name of your durable function | + | **Select a template for your function** | Select **Durable Functions orchestrator**. | Creates a Durable Functions app orchestration. | + | **Choose a durable storage type** | Select **Azure Storage (Default)**. | Selects the storage back end that's used for your Durable Functions app. | + | **Provide a function name** | Enter **HelloOrchestrator**. | A name for your durable function. | -You've added an orchestrator to coordinate activity functions. Open *HelloOrchestrator/index.js* to see the orchestrator function. Each call to `context.df.callActivity` invokes an activity function named `Hello`. +You added an orchestrator to coordinate activity functions. Open *HelloOrchestrator/index.js* to see the orchestrator function. Each call to `context.df.callActivity` invokes an activity function named `Hello`. -Next, you'll add the referenced `Hello` activity function. +Next, add the referenced `Hello` activity function. ### Activity function -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Following the prompts, provide the following information: +2. 
At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions activity | Create an activity function | - | Provide a function name | Hello | Name of your activity function | + | **Select a template for your function** | Select **Durable Functions activity**. | Creates an activity function. | + | **Provide a function name** | Enter **Hello**. | A name for your activity function. | -You've added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/index.js* to see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow: work such as making a database call or performing some non-deterministic computation. +You added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/index.js* to see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow, such as making a database call or performing some nondeterministic computation. -Finally, you'll add an HTTP triggered function that starts the orchestration. +Finally, add an HTTP-triggered function that starts the orchestration. ### Client function (HTTP starter) -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Following the prompts, provide the following information: +2. At the prompts, provide the following information: - | Prompt | Value | Description | | | -- | -- |- | Select a template for your function | Durable Functions HTTP starter | Create an HTTP starter function | - | Provide a function name | DurableFunctionsHttpStart | Name of your activity function | - | Authorization level | Anonymous | For demo purposes, allow the function to be called without authentication | + | **Select a template for your function** | Select **Durable Functions HTTP starter**. | Creates an HTTP starter function. | + | **Provide a function name** | Enter **DurableFunctionsHttpStart**. | The name of your client function. | + | **Authorization level** | Select **Anonymous**. | For demo purposes, this value allows the function to be called without using authentication. | -You've added an HTTP triggered function that starts an orchestration. Open *DurableFunctionsHttpStart/index.js* to see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. +You added an HTTP-triggered function that starts an orchestration. Open *DurableFunctionsHttpStart/index.js* to see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response that contains URLs that you can use to monitor and manage the new orchestration. ++You now have a Durable Functions app that you can run locally and deploy to Azure. -You now have a Durable Functions app that can be run locally and deployed to Azure. ::: zone-end+ ::: zone pivot="nodejs-model-v4" -One of the benefits of the V4 Programming Model is the flexibility of where you write your functions. -In the V4 Model, you can use a single template to create all three functions in one file in your project.
+One of the benefits of the v4 programming model is the flexibility of where you write your functions. In the v4 model, you can use a single template to create all three functions in one file in your project. -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Following the prompts, provide the following information: +2. At the prompts, provide the following information: - | Prompt | Value | Description | | | -- | -- |- | Select a template for your function | Durable Functions orchestrator | Create a file with a Durable Functions orchestration, an Activity function, and a Durable Client starter function. | - | Choose a durable storage type | Azure Storage (Default) | Select the storage backend used for Durable Functions. | - | Provide a function name | hello | Name used for your durable functions | + | **Select a template for your function** | Select **Durable Functions orchestrator**. | Creates a file that has a Durable Functions app orchestration, an activity function, and a durable client starter function. | + | **Choose a durable storage type** | Select **Azure Storage (Default)**. | Sets the storage back end to use for your Durable Functions app. | + | **Provide a function name** | Enter **hello**. | The name of your durable function. | Open *src/functions/hello.js* to view the functions you created. -You've created an orchestrator called `helloOrchestrator` to coordinate activity functions. Each call to `context.df.callActivity` invokes an activity function called `hello`. +You created an orchestrator called `helloOrchestrator` to coordinate activity functions. Each call to `context.df.callActivity` invokes an activity function called `hello`. ++You also added the `hello` activity function that is invoked by the orchestrator. In the same file, you can see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow, such as making a database call or performing some nondeterministic computation. -You've also added the `hello` activity function that is invoked by the orchestrator. In the same file, you can see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow: work such as making a database call or performing some non-deterministic computation. +Finally, you also added an HTTP-triggered function that starts an orchestration. In the same file, you can see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response that contains URLs that you can use to monitor and manage the new orchestration. -Lastly, you've also added an HTTP triggered function that starts an orchestration. In the same file, you can see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. +You now have a Durable Functions app that you can run locally and deploy to Azure. -You now have a Durable Functions app that can be run locally and deployed to Azure. ::: zone-end ## Test the function locally -Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. 
You're prompted to install these tools the first time you start a function from Visual Studio Code. +Azure Functions Core Tools gives you the capability to run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function in Visual Studio Code. ::: zone pivot="nodejs-model-v3"-1. To test your function, set a breakpoint in the `Hello` activity function code (*Hello/index.js*). Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. ++1. To test your function, set a breakpoint in the `Hello` activity function code (in *Hello/index.js*). Select F5 or select **Debug: Start Debugging** in the command palette to start the function app project. Output from Core Tools appears in the terminal panel. ++ > [!NOTE] + > For more information about debugging, see [Durable Functions diagnostics](durable-functions-diagnostics.md#debugging). + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-1. To test your function, set a breakpoint in the `hello` activity function code (*src/functions/hello.js*). Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. - > [!NOTE] - > Refer to the [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging) for more information on debugging. +1. To test your function, set a breakpoint in the `hello` activity function code (in *src/functions/hello.js*). Select F5 or select **Debug: Start Debugging** in the command palette to start the function app project. Output from Core Tools appears in the terminal panel. -2. Durable Functions requires an Azure Storage account to run. When VS Code prompts you to select a storage account, choose **Select storage account**. + > [!NOTE] + > For more information about debugging, see [Durable Functions diagnostics](durable-functions-diagnostics.md#debugging). - ![Screenshot of a Visual Studio Code alert window. The window says "In order to debug, you must select a storage account for internal use by the Azure Functions runtime." The button titled "Select storage account" is highlighted.](media/quickstart-js-vscode/functions-select-storage.png) ++2. Durable Functions requires an Azure Storage account to run. When Visual Studio Code prompts you to select a storage account, choose **Select storage account**. -3. Following the prompts, provide the following information to create a new storage account in Azure. + ![Screenshot of a Visual Studio Code alert window. Select storage account is highlighted.](media/quickstart-js-vscode/functions-select-storage.png) ++3. At the prompts, provide the following information to create a new storage account in Azure: | Prompt | Value | Description | | | -- | -- | Azure Functions Core Tools lets you run an Azure Functions project on your local | Select a resource group | *unique name* | Name of the resource group to create | | Select a location | *region* | Select a region close to you | -4. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function. +4. In the terminal panel, copy the URL endpoint of your HTTP-triggered function. - ![Screenshot of the Visual Studio code terminal panel. The terminal shows the output of running an Durable Functions app locally. 
The table titled "terminal" and the URL of the HTTP starter function are highlighted.](media/quickstart-js-vscode/functions-f5.png) + ![Screenshot of the Visual Studio Code terminal panel. The terminal shows the output of running a Durable Functions app locally.](media/quickstart-js-vscode/functions-f5.png) ::: zone pivot="nodejs-model-v3"-5. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. ++5. Use your browser or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/) to send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. ++ The response is the HTTP function's initial result. It lets you know that the durable orchestration started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs. For now, query the status of the orchestration. + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-5. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`helloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/helloOrchestrator`. ++5. Use a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/) to send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`helloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/helloOrchestrator`. ++ The response is the HTTP function's initial result. It lets you know that the durable orchestration started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs. For now, query the status of the orchestration. + ::: zone-end - The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It is not yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. -6. Copy the URL value for `statusQueryGetUri` and paste it in the browser's address bar and execute the request. Alternatively you can also continue to use Postman to issue the GET request. +6. Copy the URL value for `statusQueryGetUri`, paste it in your browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. - The request queries the orchestration instance for the status. 
You should see that the instance finished and that it includes the outputs or results of the Durable Functions app, like in this example: - ::: zone pivot="nodejs-model-v3" ```json { "name": "HelloOrchestrator", Azure Functions Core Tools lets you run an Azure Functions project on your local "lastUpdatedTime": "2020-03-18T21:54:54Z" } ```- ::: zone-end - ::: zone pivot="nodejs-model-v4" ++++6. Copy the URL value for `statusQueryGetUri`, paste it in your browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. ++ The request queries the orchestration instance for the status. You should see that the instance finished and that it includes the outputs or results of the Durable Functions app, like in this example: + ```json { "name": "helloOrchestrator", Azure Functions Core Tools lets you run an Azure Functions project on your local "lastUpdatedTime": "2023-02-13T23:02:25Z" } ```+ ::: zone-end -7. To stop debugging, press **Shift + F5** in VS Code. +7. In Visual Studio Code, select Shift+F5 to stop debugging. -After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. +After you verify that the function runs correctly on your local computer, it's time to publish the project to Azure. [!INCLUDE [functions-create-function-app-vs-code](../../../includes/functions-sign-in-vs-code.md)] After you've verified that the function runs correctly on your local computer, i ## Test your function in Azure ::: zone pivot="nodejs-model-v4"+ > [!NOTE]-> To use the V4 node programming model, make sure your app is running on at least version 4.25 of the Azure Functions runtime. +> To use the v4 Node.js programming model, make sure that your app is running on at least version 4.25 of the Azure Functions runtime. +> + ::: zone-end ::: zone pivot="nodejs-model-v3"-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator` ++1. On the output panel, copy the URL of the HTTP trigger. The URL that calls your HTTP-triggered function should be in this format: ++ `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator` + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/helloOrchestrator` ++1. On the output panel, copy the URL of the HTTP trigger. The URL that calls your HTTP-triggered function should be in this format: ++ `https://<functionappname>.azurewebsites.net/api/orchestrators/helloOrchestrator` + ::: zone-end -2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app. +2. Paste the new URL for the HTTP request in your browser's address bar. When you use the published app, you can expect to get the same status response that you got when you tested locally. ++The JavaScript Durable Functions app that you created and published in Visual Studio Code is ready to use. 
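If you prefer to work from a terminal instead of a browser, you can exercise the published app the same way that you tested it locally. The following sketch uses cURL and assumes the v3 model function name that's used in this quickstart (for the v4 model, substitute `helloOrchestrator`); replace `<functionappname>` with the name of your function app:

```bash
# Start a new orchestration instance in the published app.
curl -X POST "https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator"

# The JSON response includes a statusQueryGetUri value.
# Query that URL to check the status and the output of the orchestration.
curl "<statusQueryGetUri value from the previous response>"
```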
++## Clean up resources -## Next steps +If you no longer need the resources that you created to complete the quickstart, to avoid related costs in your Azure subscription, [delete the resource group](/azure/azure-resource-manager/management/delete-resource-group?tabs=azure-portal#delete-resource-group) and all related resources. -You have used Visual Studio Code to create and publish a JavaScript durable function app. +## Related content -> [!div class="nextstepaction"] -> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) +* Learn about [common Durable Functions app patterns](durable-functions-overview.md#application-patterns). |
azure-functions | Quickstart Mssql | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-mssql.md | Title: Configure storage provider - Microsoft SQL Server (MSSQL) -description: Configure a Durable Functions app to use MSSQL + Title: "Quickstart: Configure a storage provider by using MSSQL" +description: Configure a Durable Functions app to use the Microsoft SQL Server (MSSQL) storage provider in Azure Functions. Previously updated : 11/14/2022 Last updated : 07/24/2024 -# Configure Durable Functions with the Microsoft SQL Server (MSSQL) storage provider +# Quickstart: Set a Durable Functions app to use the MSSQL storage provider -Durable Functions supports several [storage providers](durable-functions-storage-providers.md), also known as _backends_, for storing orchestration and entity runtime state. By default, new projects are configured to use the [Azure Storage provider](durable-functions-storage-providers.md#azure-storage). In this article, we walk through how to configure a Durable Functions app to utilize the [MSSQL storage provider](durable-functions-storage-providers.md#mssql). +Use Durable Functions, a feature of [Azure Functions](../functions-overview.md), to write stateful functions in a serverless environment. Durable Functions manages state, checkpoints, and restarts in your application. -> [!NOTE] -> The MSSQL backend was designed to maximize application portability and control over your data. It uses [Microsoft SQL Server](https://www.microsoft.com/sql-server/) to persist all task hub state so that users get the benefits of modern, enterprise-grade DBMS infrastructure. To learn more about when to use the MSSQL storage provider, see the [storage providers](durable-functions-storage-providers.md) documentation. --## Note on data migration +Durable Functions supports several [storage providers](durable-functions-storage-providers.md), also known as _back ends_, for storing orchestration and entity runtime state. By default, new projects are configured to use the [Azure Storage provider](durable-functions-storage-providers.md#azure-storage). In this quickstart, you configure a Durable Functions app to use the [Microsoft SQL Server (MSSQL) storage provider](durable-functions-storage-providers.md#mssql). -Migration of [Task Hub data](durable-functions-task-hubs.md) across storage providers isn't currently supported. Function apps with existing runtime data will start with a fresh, empty task hub after switching to the MSSQL backend. Similarly, the task hub contents created with MSSQL can't be preserved when switching to a different storage provider. +> [!NOTE] +> +> - The MSSQL back end was designed to maximize application portability and control over your data. It uses [Microsoft SQL Server](https://www.microsoft.com/sql-server/) to persist all task hub data so that users get the benefits of a modern, enterprise-grade database management system (DBMS) infrastructure. To learn more about when to use the MSSQL storage provider, see the [storage providers overview](durable-functions-storage-providers.md). +> +> - Migrating [task hub data](durable-functions-task-hubs.md) across storage providers currently isn't supported. Function apps that have existing runtime data start with a fresh, empty task hub after they switch to the MSSQL back end. Similarly, the task hub contents that are created by using MSSQL can't be preserved if you switch to a different storage provider. 
## Prerequisites -The following steps assume that you're starting with an existing Durable Functions app and are familiar with how to operate it. +The following steps assume that you have an existing Durable Functions app and that you're familiar with how to operate it. -In particular, this quickstart assumes that you have already: -1. Created an Azure Functions project on your local machine. -2. Added Durable Functions to your project with an [orchestrator function](durable-functions-bindings.md#orchestration-trigger) and a [client function](durable-functions-bindings.md#orchestration-client) that triggers it. -3. Configured the project for local debugging. +Specifically, this quickstart assumes that you have already: -If this isn't the case, we suggest you start with one of the following articles, which provides detailed instructions on how to achieve all the requirements above: +- Created an Azure Functions project on your local computer. +- Added Durable Functions to your project with an [orchestrator function](durable-functions-bindings.md#orchestration-trigger) and a [client function](durable-functions-bindings.md#orchestration-client) that triggers the Durable Functions app. +- Configured the project for local debugging. -- [Create your first durable function - C#](durable-functions-create-first-csharp.md)-- [Create your first durable function - JavaScript](quickstart-js-vscode.md)-- [Create your first durable function - Python](quickstart-python-vscode.md)-- [Create your first durable function - PowerShell](quickstart-powershell-vscode.md)-- [Create your first durable function - Java](quickstart-java.md)+If you don't meet these prerequisites, we recommend that you begin with one of the following quickstarts: ++- [Create a Durable Functions app - C#](durable-functions-isolated-create-first-csharp.md) +- [Create a Durable Functions app - JavaScript](quickstart-js-vscode.md) +- [Create a Durable Functions app - Python](quickstart-python-vscode.md) +- [Create a Durable Functions app - PowerShell](quickstart-powershell-vscode.md) +- [Create a Durable Functions app - Java](quickstart-java.md) ## Add the Durable Task MSSQL extension (.NET only) > [!NOTE]-> If your app uses [Extension Bundles](../functions-bindings-register.md#extension-bundles), you should ignore this section as Extension Bundles removes the need for manual Extension management. --You need to install the latest version of the MSSQL storage provider Extension on NuGet, which for .NET means adding a reference to it in your `.csproj` file and building the project. You can also use the [`dotnet add package`](/dotnet/core/tools/dotnet-add-package) command to add extension packages. +> If your app uses [Extension Bundles](../functions-bindings-register.md#extension-bundles), skip this section. Extension Bundles removes the need for manual extension management. 
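If you're not sure whether your project uses Extension Bundles, check *host.json*. The following sketch shows what a typical bundle reference looks like; the exact version range in your project might differ:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
```

If this `extensionBundle` section is present, the Functions host manages extensions for you, and you can skip the manual installation steps that follow.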
-The Extension package to install depends on the .NET worker you're using: -- For the _in-process_ .NET worker, install [`Microsoft.DurableTask.SqlServer.AzureFunctions`](https://www.nuget.org/packages/Microsoft.DurableTask.SqlServer.AzureFunctions).-- For the _isolated_ .NET worker, install [`Microsoft.Azure.Functions.Worker.Extensions.DurableTask.SqlServer`](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.SqlServer).+First, install the latest version of the [Microsoft.Azure.Functions.Worker.Extensions.DurableTask.SqlServer](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.SqlServer) MSSQL storage provider extension from NuGet. For .NET, you add a reference to the extension in your _.csproj_ file and then build the project. You can also use the [dotnet add package](/dotnet/core/tools/dotnet-add-package) command to add extension packages. -You can install the Extension using the following [Azure Functions Core Tools CLI](../functions-run-local.md#install-the-azure-functions-core-tools) command +You can install the extension by using the following [Azure Functions Core Tools CLI](../functions-run-local.md#install-the-azure-functions-core-tools) command: ```cmd func extensions install --package <package name depending on your worker model> --version <latest version> ``` -For more information on installing Azure Functions Extensions via the Core Tools CLI, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install). +For more information about installing Azure Functions extensions via the Core Tools CLI, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install). -## Set up your Database +## Set up your database > [!NOTE]-> If you already have an MSSQL-compatible database, you may skip this section and its sub-section on setting up a Docker-based local database. +> If you already have an MSSQL-compatible database, you can skip this section and skip the next section on setting up a Docker-based local database. ++Because the MSSQL back end is designed for portability, you have several options to set up your backing database. For example, you can set up an on-premises SQL Server instance, use a fully managed instance of [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview), or use any other SQL Server-compatible hosting option. -As the MSSQL backend is designed for portability, you have several options to set up your backing database. For example, you can set up an on-premises SQL Server instance, use a fully managed [Azure SQL DB](/azure/azure-sql/database/sql-database-paas-overview), or use any other SQL Server-compatible hosting option. +You can also do local, offline development by using [SQL Server Express](https://www.microsoft.com/sql-server/sql-server-downloads) on your local Windows computer or use a [SQL Server Docker image](https://hub.docker.com/_/microsoft-mssql-server) running in a Docker container. -You can also do local, offline development with [SQL Server Express](https://www.microsoft.com/sql-server/sql-server-downloads) on your local Windows machine or use [SQL Server Docker image](https://hub.docker.com/_/microsoft-mssql-server) running in a Docker container. For ease of setup, this article focuses on the latter. +This quickstart focuses on using a SQL Server Docker image. 
-### Set up your local Docker-based SQL Server +### Set up your local Docker-based SQL Server instance -To run these steps, you need a [Docker](https://www.docker.com/products/docker-desktop/) installation on your local machine. Below are PowerShell commands that you can use to set up a local SQL Server database on Docker. Note that PowerShell can be installed on Windows, macOS, or Linux using the installation instructions [here](/powershell/scripting/install/installing-powershell). +To run these steps, you need a [Docker](https://www.docker.com/products/docker-desktop/) installation on your local computer. You can use the following PowerShell commands to set up a local SQL Server database on Docker. You can install PowerShell on [Windows, macOS, or Linux](/powershell/scripting/install/installing-powershell). ```powershell # primary parameters $collation = "Latin1_General_100_BIN2_UTF8" # pull the image from the Microsoft container registry docker pull mcr.microsoft.com/mssql/server:$tag -# run the image, providing some basic setup parameters +# run the image and provide some basic setup parameters docker run --name mssql-server -e 'ACCEPT_EULA=Y' -e "SA_PASSWORD=$pw" -e "MSSQL_PID=$edition" -p ${port}:1433 -d mcr.microsoft.com/mssql/server:$tag # wait a few seconds for the container to start... docker run --name mssql-server -e 'ACCEPT_EULA=Y' -e "SA_PASSWORD=$pw" -e "MSSQL docker exec -d mssql-server /opt/mssql-tools/bin/sqlcmd -S . -U sa -P "$pw" -Q "CREATE DATABASE [$dbname] COLLATE $collation" ``` -After running these commands, you should have a local SQL Server running on Docker and listening on port `1443`. If port `1443` conflicts with another service, you can rerun these commands after changing the variable `$port` to a different value. +After you run these commands, you should have a local SQL Server running on Docker and listening on port 1433. If port 1433 conflicts with another service, you can rerun these commands after you change the variable `$port` to a different value. > [!NOTE]-> To stop and delete a running container, you may use `docker stop <containerName>` and `docker rm <containerName>` respectively. You may use these commands to re-create your container, and to stop if after you're done with this quickstart. For more assistance, try `docker --help`. +> To stop and delete a running container, you can use `docker stop <containerName>` and `docker rm <containerName>` respectively. You can use these commands to re-create your container and to stop the container when you finish this quickstart. For more assistance, run `docker --help`. -To validate your database installation, you can query for your new SQL database using the following Docker command: +To validate your database installation, use this Docker command to query your new SQL database: ```powershell docker exec -it mssql-server /opt/mssql-tools/bin/sqlcmd -S . -U sa -P "$pw" -Q "SELECT name FROM sys.databases" ``` -If the database setup completed successfully, you should see the name of your created database (for example, `DurableDB`) in the command-line output. +If the database setup completed successfully, the name of your database (for example, **DurableDB**) appears in the command-line output: ```bash name DurableDB ### Add your SQL connection string to local.settings.json -The MSSQL backend needs a connection string to your database. How to obtain a connection string largely depends on your specific MSSQL Server provider. 
Review the documentation of your specific provider for information on how to obtain a connection string. +The MSSQL back end needs a connection string to access your database. How to obtain a connection string depends primarily on your specific MSSQL server provider. For information about how to obtain a connection string, review the documentation of your specific provider. -Using the previous Docker commands, without changing any parameters, your connection string should be: +If you use the preceding Docker commands without changing any parameters, your connection string is: -``` +```cmd Server=localhost,1433;Database=DurableDB;User Id=sa;Password=yourStrong(!)Password; ``` -After obtaining your connection string, add it to a variable in `local.settings.json` so it can be used during local development. --Below is an example `local.settings.json` assigning the default Docker-based SQL Server's connection string to the variable `SQLDB_Connection`. +After you get your connection string, add it to a variable in _local.settings.json_ to use it during local development. +Here's an example _local.settings.json_ that assigns the default Docker-based SQL Server instance connection string to the variable `SQLDB_Connection`: ```json { Below is an example `local.settings.json` assigning the default Docker-based SQL ``` > [!NOTE]-> The value of `FUNCTIONS_WORKER_RUNTIME` is dependent on your programming language of choice. For more information, please see its [reference docs](../functions-app-settings.md#functions_worker_runtime). +> The value of `FUNCTIONS_WORKER_RUNTIME` depends on the programming language you use. For more information, see the [runtime reference](../functions-app-settings.md#functions_worker_runtime). ### Update host.json -Edit the storage provider section of the `host.json` file so it sets the `type` to `mssql`. You must also specify the connection string variable name, `SQLDB_Connection`, under `connectionStringName`. Set `createDatabaseIfNotExists` to `true`; this setting creates a database named `DurableDB` if one doesn't already exist, with collation `Latin1_General_100_BIN2_UTF8`. +Edit the storage provider section of the _host.json_ file to set `type` to `mssql`. You must also specify the connection string variable name, `SQLDB_Connection`, under `connectionStringName`. Set `createDatabaseIfNotExists` to `true`. This setting creates a database named **DurableDB** if one doesn't already exist, with the collation `Latin1_General_100_BIN2_UTF8`. ```json { Edit the storage provider section of the `host.json` file so it sets the `type` } ``` -The snippet above is a fairly *minimal* `host.json` example. Later, you may want to consider [other parameters](https://microsoft.github.io/durabletask-mssql/#/quickstart?id=hostjson-configuration). +This code sample is a relatively basic _host.json_ example. Later, you might want to [add parameters](https://microsoft.github.io/durabletask-mssql/#/quickstart?id=hostjson-configuration). ### Test locally -Your app is now ready for local development: You can start the Function app to test it. One way to do this is to run `func host start` on your application's root and executing a simple orchestrator Function. +Your app is now ready for local development. You can start the function app to test it. One way to start the app is to run `func host start` on your application's root and execute a basic orchestrator function. -While the function app is running, it updates runtime state in the configured SQL database. 
You can test this is working as expected using your SQL query interface. For example, in our docker-based local SQL server container, you can view the state of your orchestration instances with the following `docker` command: +While the function app is running, it updates runtime state in the configured SQL database. You can test it's working as expected by using your SQL query interface. For example, in your Docker-based local SQL Server container, you can view the state of your orchestration instances by using the following Docker command: ```bash docker exec -it mssql-server /opt/mssql-tools/bin/sqlcmd -S . -d $dbname -U sa -P "$pw" -Q "SELECT TOP 5 InstanceID, RuntimeStatus, CreatedTime, CompletedTime FROM dt.Instances" ``` -After running an orchestration, the previous query should return something like this: +After you run an orchestration, the query returns results that look like this example: -``` +```cmd InstanceID RuntimeStatus CreatedTime CompletedTime -- -- - 9fe1ea9d109341ff923621c0e58f215c Completed 2022-11-16 21:42:39.1787277 2022-11-16 21:42:42.3993899 ``` -## Run your app on Azure +## Run your app in Azure -To run your app in Azure, you'll need a publicly accessible SQL Server instance. You can obtain one by creating an Azure SQL database. +To run your app in Azure, you need a publicly accessible SQL Server instance. You can get one by creating an Azure SQL database. ### Create an Azure SQL database > [!NOTE]-> If you already have an Azure SQL database, or some other publicly accessible SQL Server you would like to use, you may skip to the next section. +> If you already have an Azure SQL database or another publicly accessible SQL Server instance that you would like to use, you can go to the next section. -You can follow [these](/azure/azure-sql/database/single-database-create-quickstart) instructions to create an Azure SQL database on the portal. When configuring the database, make sure to set the *Database collation* (under _Additional settings_) to `Latin1_General_100_BIN2_UTF8`. +In the Azure portal, you can [create an Azure SQL database](/azure/azure-sql/database/single-database-create-quickstart). When you configure the database, make sure that you set the value for _Database collation_ (under _Additional settings_) to `Latin1_General_100_BIN2_UTF8`. > [!NOTE] > Microsoft offers a [12-month free Azure subscription account](https://azure.microsoft.com/free/) if youΓÇÖre exploring Azure for the first time. -You may obtain your Azure SQL database's connection string by navigating to the database's blade in the Azure portal. Then, under **Settings**, select **Connection strings** and obtain the **ADO.NET** connection string. Make sure to provide your password in the template provided. +You can get your Azure SQL database's connection string by going to the database's overview pane in the Azure portal. Then, under **Settings**, select **Connection strings** and get the **ADO.NET** connection string. Make sure that you provide your password in the template that's provided. -Below is an example of the portal view for obtaining the Azure SQL connection string. +Here's an example of how to get the Azure SQL connection string in the portal: -![An Azure connection string as found in the portal](./media/quickstart-mssql/mssql-azure-db-connection-string.png) -In the Azure portal, the connection string has the database's password removed: it's replaced with `{your_password}`. Replace that segment with the password you used to create the database earlier in this section. 
If you forgot your password, you may reset it by navigating to the database's blade in the Azure portal, selecting your *Server name* in the **Essentials** view, and also selecting **Reset password** in the resulting page. Below are some guiding images. +In the Azure portal, the connection string has the database's password removed: it's replaced with `{your_password}`. Replace that placeholder with the password that you used to create the database earlier in this section. If you forgot your password, you can reset it by going to the database overview pane in the Azure portal. In the **Essentials** view, select your server name. Then, select **Reset password**. For examples, see the following screenshots. -![The Azure SQL database view, with the Server name option highlighted](./media/quickstart-mssql/mssql-azure-reset-pass-1.png) -![The SQL server view, where the Reset password is visible](./media/quickstart-mssql/mssql-azure-reset-pass-2.png) +### Add the connection string as an application setting -### Add connection string as an application setting +Next, add your database's connection string as an application setting. To add it in the Azure portal, first go to your Azure Functions app view. Under **Configuration**, select **New application setting**. Assign **SQLDB_Connection** to map to a publicly accessible connection string. For examples, see the following screenshots. -You need to add your database's connection string as an application setting. To do this through the Azure portal, first go to your Azure Functions App view. Then under **Configuration**, select **New application setting**, where you assign **SQLDB_Connection** to map to a publicly accessible connection string. Below are some guiding images. -![On the DB blade, go to Configuration, then click new application setting.](./media/quickstart-mssql/mssql-azure-environment-variable-1.png) -![Enter your connection string setting name, and its value.](./media/quickstart-mssql/mssql-azure-environment-variable-2.png) ### Deploy -You can now deploy your function app to Azure and run your tests or workload on it. To validate the MSSQL backend is correctly configured, you can query your database for Task Hub data. +You can now deploy your function app to Azure and run your tests or workload on it. To validate that the MSSQL back end is correctly configured, you can query your database for task hub data. -For example, with Azure SQL database you can query for your orchestration instances by navigating to your SQL database's blade, clicking Query Editor, authenticating, and then running the following query: +For example, you can query your orchestration instances on your SQL database's overview pane. Select **Query Editor**, authenticate, and then run the following query: ```sql SELECT TOP 5 InstanceID, RuntimeStatus, CreatedTime, CompletedTime FROM dt.Instances ``` -After running a simple orchestrator, you should see at least one result, as shown below: +After you run a simple orchestrator, you should see at least one result, as shown in this example: + -![Azure SQL Query editor results for the SQL query provided.](./media/quickstart-mssql/mssql-azure-db-check.png) +## Related content -For more information about the Durable Task MSSQL backend architecture, configuration, and workload behavior, see the [MSSQL storage provider documentation](https://microsoft.github.io/durabletask-mssql/). 
+- For more information about the Durable Functions app task MSSQL back-end architecture, configuration, and workload behavior, see the [MSSQL storage provider documentation](https://microsoft.github.io/durabletask-mssql/). |
azure-functions | Quickstart Netherite | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-netherite.md | Title: Configure storage provider - Netherite -description: Configure a Durable Functions app to use Netherite + Title: "Quickstart: Configure a storage provider by using Netherite" +description: Configure a Durable Functions app to use the Netherite storage provider in Azure Functions. Previously updated : 11/14/2022 Last updated : 07/24/2024 -# Configure Durable Functions with the Netherite storage provider +# Quickstart: Set a Durable Functions app to use the Netherite storage provider -Durable Functions offers several [storage providers](durable-functions-storage-providers.md), also called "backends", for storing orchestration and entity runtime state. By default, new projects are configured to use the [Azure Storage provider](durable-functions-storage-providers.md#azure-storage). In this article, we walk through how to configure a Durable Functions app to utilize the [Netherite storage provider](durable-functions-storage-providers.md#netherite). +Use Durable Functions, a feature of [Azure Functions](../functions-overview.md), to write stateful functions in a serverless environment. Durable Functions manages state, checkpoints, and restarts in your application. -> [!NOTE] -> Netherite was designed and developed by [Microsoft Research](https://www.microsoft.com/research) for [high throughput](https://microsoft.github.io/durabletask-netherite/#/scenarios) scenarios. In some [benchmarks](https://microsoft.github.io/durabletask-netherite/#/throughput?id=multi-node-throughput), throughput increased by over an order of magnitude compared to the default Azure Storage provider. To learn more about when to use the Netherite storage provider, see the [storage providers](durable-functions-storage-providers.md) documentation. --## Note on data migration +Durable Functions offers several [storage providers](durable-functions-storage-providers.md), also called *back ends*, for storing orchestration and entity runtime state. By default, new projects are configured to use the [Azure Storage provider](durable-functions-storage-providers.md#azure-storage). In this quickstart, you configure a Durable Functions app to use the [Netherite storage provider](durable-functions-storage-providers.md#netherite). -Migration of [Task Hub data](durable-functions-task-hubs.md) across storage providers isn't currently supported. Function apps with existing runtime data will start with a fresh, empty task hub after switching to the Netherite backend. Similarly, the task hub contents created with Netherite can't be preserved when switching to a different storage provider. +> [!NOTE] +> +> - Netherite was designed and developed by [Microsoft Research](https://www.microsoft.com/research) for [high throughput](https://microsoft.github.io/durabletask-netherite/#/scenarios) scenarios. In some [benchmarks](https://microsoft.github.io/durabletask-netherite/#/throughput?id=multi-node-throughput), throughput increased by more than an order of magnitude compared to the default Azure Storage provider. To learn more about when to use the Netherite storage provider, see the [storage providers](durable-functions-storage-providers.md) documentation. +> +> - Migrating [task hub data](durable-functions-task-hubs.md) across storage providers currently isn't supported. 
Function apps that have existing runtime data start with a fresh, empty task hub after they switch to the Netherite back end. Similarly, the task hub contents that are created by using Netherite can't be preserved if you switch to a different storage provider. ## Prerequisites -The following steps assume that you are starting with an existing Durable Functions app and are familiar with how to operate it. +The following steps assume that you're starting with an existing Durable Functions app and are familiar with how to operate it. ++Specifically, this quickstart assumes that you have already: -In particular, this quickstart assumes that you have already: -1. Created an Azure Functions project on your local machine. -2. Added Durable Functions to your project with an [orchestrator function](durable-functions-bindings.md#orchestration-trigger) and a [client function](durable-functions-bindings.md#orchestration-client) that triggers it. -3. Configured the project for local debugging. -4. Learned how to deploy an Azure Functions project to Azure. +- Created an Azure Functions project on your local computer. +- Added Durable Functions to your project with an [orchestrator function](durable-functions-bindings.md#orchestration-trigger) and a [client function](durable-functions-bindings.md#orchestration-client) that triggers it. +- Configured the project for local debugging. +- Learned how to deploy an Azure Functions project to Azure. -If this isn't the case, we suggest you start with one of the following articles, which provides detailed instructions on how to achieve all the requirements above: +If you don't meet these prerequisites, we recommend that you start with one of the following quickstarts: -- [Create your first durable function - C#](durable-functions-create-first-csharp.md)-- [Create your first durable function - JavaScript](quickstart-js-vscode.md)-- [Create your first durable function - Python](quickstart-python-vscode.md)-- [Create your first durable function - PowerShell](quickstart-powershell-vscode.md)-- [Create your first durable function - Java](quickstart-java.md)+- [Create a Durable Functions app - C#](durable-functions-isolated-create-first-csharp.md) +- [Create a Durable Functions app - JavaScript](quickstart-js-vscode.md) +- [Create a Durable Functions app - Python](quickstart-python-vscode.md) +- [Create a Durable Functions app - PowerShell](quickstart-powershell-vscode.md) +- [Create a Durable Functions app - Java](quickstart-java.md) ## Add the Netherite extension (.NET only) > [!NOTE]-> If your app uses [Extension Bundles](../functions-bindings-register.md#extension-bundles), you should ignore this section as Extension Bundles removes the need for manual Extension management. +> If your app uses [Extension Bundles](../functions-bindings-register.md#extension-bundles), skip this section. Extension Bundles removes the need for manual extension management. -You need to install the latest version of the Netherite Extension on NuGet. This usually means including a reference to it in your `.csproj` file and building the project. +First, install the latest version of the [Microsoft.Azure.Functions.Worker.Extensions.DurableTask.Netherite](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.Netherite) storage provider extension from NuGet. For .NET, you usually include a reference to it in your *.csproj* file and build the project. 
-The Extension package to install depends on the .NET worker you are using: -- For the _in-process_ .NET worker, install [`Microsoft.Azure.DurableTask.Netherite.AzureFunctions`](https://www.nuget.org/packages/Microsoft.Azure.DurableTask.Netherite.AzureFunctions).-- For the _isolated_ .NET worker, install [`Microsoft.Azure.Functions.Worker.Extensions.DurableTask.Netherite`](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.DurableTask.Netherite).--You can install the Extension using the following [Azure Functions Core Tools CLI](../functions-run-local.md#install-the-azure-functions-core-tools) command +You can install the extension by using the following [Azure Functions Core Tools CLI](../functions-run-local.md#install-the-azure-functions-core-tools) command: ```cmd func extensions install --package <package name depending on your worker model> --version <latest version> ``` -For more information on installing Azure Functions Extensions via the Core Tools CLI, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install). +For more information about installing Azure Functions extensions via the Core Tools CLI, see [func extensions install](../functions-core-tools-reference.md#func-extensions-install). ## Configure local.settings.json for local development -The Netherite backend requires a connection string to [Event Hubs](https://azure.microsoft.com/products/event-hubs/) to run on Azure. However, for local development, providing the string `"SingleHost"` bypasses the need for Event Hubs. +The Netherite back end requires a connection string to [Azure Event Hubs](https://azure.microsoft.com/products/event-hubs/) to run on Azure. However, for local development, providing the string `"SingleHost"` bypasses the need to use Event Hubs. -In `local.settings.json`, set the value of `EventHubsConnection` to `SingleHost` as shown below: +In *local.settings.json*, set the value of `EventHubsConnection` to `SingleHost`: ```json { In `local.settings.json`, set the value of `EventHubsConnection` to `SingleHost` ``` > [!NOTE]-> The value of `FUNCTIONS_WORKER_RUNTIME` is dependent on your programming language of choice. For more information, please see its [reference docs](../functions-app-settings.md#functions_worker_runtime). +> The value of `FUNCTIONS_WORKER_RUNTIME` depends on the programming language you use. For more information, see the [runtime reference](../functions-app-settings.md#functions_worker_runtime). ## Update host.json -Edit the storage provider section of the `host.json` file so it sets the `type` to `Netherite`. +Edit the storage provider section of the *host.json* file to set `type` to `Netherite`: ```json { Edit the storage provider section of the `host.json` file so it sets the `type` } ``` -The snippet above is just a *minimal* configuration. Later, you may want to consider [other parameters](https://microsoft.github.io/durabletask-netherite/#/settings?id=typical-configuration). -+This code snippet is a basic configuration. Later, you might want to [add parameters](https://microsoft.github.io/durabletask-netherite/#/settings?id=typical-configuration). ## Test locally -Your app is now ready for local development: You can start the Function app to test it. One way to do this is to run `func host start` on your application's root and executing a simple orchestrator Function. +Your app is now ready for local development. You can start the function app to test it. 
One way to start the app is to run `func host start` on your application's root, and then execute a basic orchestrator function. -While the function app is running, Netherite will publish load information about its active partitions to an Azure Storage table named "DurableTaskPartitions". You can use [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) to check that it's working as expected. If Netherite is running correctly, the table won't be empty; see the example below. +While the function app is running, Netherite publishes load information about its active partitions to an Azure Storage table named **DurableTaskPartitions**. You can use [Azure Storage Explorer](../../vs-azure-tools-storage-manage-with-storage-explorer.md) to verify that it's working as expected. If Netherite is running correctly, the table isn't empty. For an example, see the following screenshot. -![Data on the "DurableTaskPartitions" table in the Azure Storage Explorer.](./media/quickstart-netherite/partition-table.png) -> [!NOTE] -> For more information on the contents of this table, see the [Partition Table](https://microsoft.github.io/durabletask-netherite/#/ptable) article. +For more information about the contents of the **DurableTaskPartitions** table, see [Partition Table](https://microsoft.github.io/durabletask-netherite/#/ptable). > [!NOTE]-> If you are using local storage emulation on a Windows OS, please ensure you're using the [Azurite](../../storage/common/storage-use-azurite.md) storage emulator and not the legacy "Azure Storage Emulator" component. Local storage emulation with Netherite is only supported via Azurite. +> If you use local storage emulation on a Windows OS, ensure that you're using the [Azurite](../../storage/common/storage-use-azurite.md) storage emulator and not the earlier *Azure Storage Emulator* component. Local storage emulation with Netherite is supported only via Azurite. -## Run your app on Azure +## Run your app in Azure -You need to create an Azure Functions app on Azure. To do this, follow the instructions in the **Create a function app** section of [these instructions](../functions-create-function-app-portal.md). +To run your app in Azure, [create an Azure Functions app](../functions-create-function-app-portal.md). ### Set up Event Hubs -You need to set up an Event Hubs namespace to run Netherite on Azure. You can also set it up if you prefer to use Event Hubs during local development. +You need to set up an Event Hubs namespace to run Netherite in Azure. You can also set it up if you prefer to use Event Hubs during local development. > [!NOTE] > An Event Hubs namespace incurs an ongoing cost, whether or not it is being used by Durable Functions. Microsoft offers a [12-month free Azure subscription account](https://azure.microsoft.com/free/) if youΓÇÖre exploring Azure for the first time. #### Create an Event Hubs namespace -Follow [these steps](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace) to create an Event Hubs namespace on the Azure portal. When creating the namespace, you may be prompted to: +Complete the steps to [create an Event Hubs namespace](../../event-hubs/event-hubs-create.md#create-an-event-hubs-namespace) in the Azure portal. When you create the namespace, you might be prompted to: -1. Choose a *resource group*: Use the same resource group as the Function app. -2. Choose a *plan* and provision *throughput units*. Select the defaults, this setting can be changed later. -3. 
Choose the *retention* time: Select the default, this setting has no effect on Netherite. +- Select a *resource group*. Use the same resource group that the function app uses. +- Select a *plan* and provision *throughput units*. Select the defaults. You can change this setting later. +- Select a *retention* time. Select the default. This setting has no effect on Netherite. -#### Obtain the Event Hubs connection string +#### Get the Event Hubs connection string -To obtain the connection string for your Event Hubs namespace, go to your Event Hubs namespace in the Azure portal, select **Shared access policies**, and then select **RootManagedSharedAccessKey**. This should reveal a field named **Connection string-primary key** and that field's value is the connection string. +To get the connection string for your Event Hubs namespace, go to your Event Hubs namespace in the Azure portal. Select **Shared access policies**, and then select **RootManagedSharedAccessKey**. A field named **Connection string-primary key** appears, and the field's value is the connection string. -Below are guiding screenshots on how to find this data in the portal: -![Find the connection string primary key on the portal"](./media/quickstart-netherite/namespace-connection-string.png) +### Add the connection string as an application setting -### Add connection string as an application setting +Next, add your connection string as an application setting in your function app. To add it in the Azure portal, go to your function app view, select **Configuration**, and then select **New application setting**. You can assign `EventHubsConnection` to map to your connection string. The following screenshots show some examples. -You need to add your connection string as an application setting in your function app. To do this through the Azure portal, go to your function app view, select **Configuration**, and then select **New application setting**. This is where you can assign `EventHubsConnection` to map to your connection string. Below are some guiding images. -![In the Function App view, go to "configuration" and select "new application setting."](./media/quickstart-netherite/add-configuration.png) -![Enter `EventHubsConnection` as the name, and the connection string as its value.](./media/quickstart-netherite/enter-configuration.png) ### Enable runtime scaling (Elastic Premium only) > [!NOTE] > Skip this section if your app is not in the Elastic Premium plan. -If your app is running on the Elastic Premium Plan, it is recommended that you enable runtime scale monitoring for better scaling. To do this, go to **Configuration**, select **Function runtime settings** and toggle **Runtime Scale Monitoring** to On. +If your app is running on the Elastic Premium plan, we recommend that you enable runtime scale monitoring for better scaling. Go to **Configuration**, select **Function runtime settings**, and set **Runtime Scale Monitoring** to **On**. -![How to enable Runtime Scale Monitoring in the portal.](./media/quickstart-netherite/runtime-scale-monitoring.png) -### Ensure your app is using a 64-bit architecture (Windows only) +### Ensure that your app is using a 64-bit architecture (Windows only) > [!NOTE]-> Skip this section if your app is running on Linux. +> Skip this section if your app runs on Linux. -Netherite requires a 64-bit architecture to work. Starting on Functions V4, this should be the default. 
You can usually validate this in the portal: under **Configuration**, select **General Settings** and then ensure the **Platform** field is set to **64 Bit**. If you don't see this option in the portal, then it's possible you're already running on a 64-bit platform. For example, Linux apps won't show this setting because they only support 64-bit. +Netherite requires a 64-bit architecture. Beginning with Azure Functions V4, 64-bit should be the default. You can usually validate this setting in the Azure portal. Under **Configuration**, select **General Settings**, and then ensure that **Platform** is set to **64 Bit**. If you don't see this option in the portal, then you might already run on a 64-bit platform. For example, Linux apps don't show this setting because they support only 64-bit architecture. -![Configure runtime to use 64 bit in the portal.](./media/quickstart-netherite/ensure-64-bit-architecture.png) ## Deploy You can now deploy your code to the cloud and run your tests or workload on it. To validate that Netherite is correctly configured, you can review the metrics for Event Hubs in the portal to ensure that there's activity. > [!NOTE]-> For guidance on deploying your project to Azure, review the deployment instructions in the article for your programming language of choice in the [prerequisites section](#prerequisites). +> For information about how to deploy your project to Azure, review the deployment instructions for your programming language in [Prerequisites](#prerequisites). ++## Related content -For more information about the Netherite architecture, configuration, and workload behavior, including performance benchmarks, we recommend you take a look at the [Netherite documentation](https://microsoft.github.io/durabletask-netherite/#/). +- For more information about the Netherite architecture, configuration, and workload behavior, including performance benchmarks, we recommend that you take a look at the [Netherite documentation](https://microsoft.github.io/durabletask-netherite/#/). |
azure-functions | Quickstart Powershell Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-powershell-vscode.md | Title: Create your first durable function in Azure Functions using PowerShell -description: Create and publish an Azure Durable Function in PowerShell using Visual Studio Code. + Title: "Quickstart: Create a PowerShell Durable Functions app" +description: Create and publish a PowerShell Durable Functions app in Azure Functions by using Visual Studio Code. Previously updated : 06/22/2022 Last updated : 07/24/2024 ms.devlang: powershell -# Create your first durable function in PowerShell +# Quickstart: Create a PowerShell Durable Functions app -*Durable Functions* is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. +Use Durable Functions, a feature of [Azure Functions](../functions-overview.md), to write stateful functions in a serverless environment. You install Durable Functions by installing the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) in Visual Studio Code. The extension manages state, checkpoints, and restarts in your application. -In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function will orchestrate and chain together calls to other functions. You then publish the function code to Azure. +In this quickstart, you use the Durable Functions extension in Visual Studio Code to locally create and test a "hello world" Durable Functions app in Azure Functions. The Durable Functions app orchestrates and chains together calls to other functions. Then, you publish the function code to Azure. The tools you use are available via the Visual Studio Code extension. -![Running durable function in Azure](./media/quickstart-js-vscode/functions-vs-code-complete.png) +![Running a Durable Functions app in Azure.](./media/quickstart-js-vscode/functions-vs-code-complete.png) ## Prerequisites -To complete this tutorial: +To complete this quickstart, you need: -* Install [Visual Studio Code](https://code.visualstudio.com/download). +* [Visual Studio Code](https://code.visualstudio.com/download) installed. -* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) VS Code extension +* The Visual Studio Code extension [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed. -* Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). +* The latest version of [Azure Functions Core Tools](../functions-run-local.md) installed. -* Durable Functions require an Azure storage account. You need an Azure subscription. +* An Azure subscription. To use Durable Functions, you must have an Azure Storage account. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] -## <a name="create-an-azure-functions-project"></a>Create your local project +## <a name="create-an-azure-functions-project"></a>Create your local project -In this section, you use Visual Studio Code to create a local Azure Functions project. +In this section, you use Visual Studio Code to create a local Azure Functions project. -1. 
In Visual Studio Code, press F1 (or Ctrl/Cmd+Shift+P) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. +1. In Visual Studio Code, select F1 (or select Ctrl/Cmd+Shift+P) to open the command palette. At the prompt (`>`), enter and then select **Azure Functions: Create New Project**. - ![Create function](media/quickstart-js-vscode/functions-create-project.png) + :::image type="content" source="media/quickstart-js-vscode/functions-create-project.png" alt-text="Screenshot that shows the Create a function command."::: -1. Choose an empty folder location for your project and choose **Select**. +1. Select **Browse**. In the **Select Folder** dialog, go to a folder to use for your project, and then choose **Select**. -1. Following the prompts, provide the following information: +1. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a language for your function app project | PowerShell | Create a local PowerShell Functions project. | - | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | - | Select a template for your project's first function | Skip for now | | - | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | + | **Select a language for your function app project** | Select **PowerShell**. | Creates a local PowerShell Functions project. | + | **Select a version** | Select **Azure Functions v4**. | You see this option only when Core Tools isn't already installed. In this case, Core Tools is installed the first time you run the app. | + | **Select a template for your project's first function** | Select **Skip for now**. | | + | **Select how you would like to open your project** | Select **Open in current window**. | Opens Visual Studio Code in the folder you selected. | -Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. +Visual Studio Code installs Azure Functions Core Tools if it's required to create a project. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. -A package.json file is also created in the root folder. +A *package.json* file is also created in the root folder. -### Configure function app to use PowerShell 7 +### Configure the function app to use PowerShell 7 -Open the *local.settings.json* file and confirm that a setting named `FUNCTIONS_WORKER_RUNTIME_VERSION` is set to `~7`. If it is missing or set to another value, update the contents of the file. +Open the *local.settings.json* file and confirm that a setting named `FUNCTIONS_WORKER_RUNTIME_VERSION` is set to `~7`. If it's missing or if it's set to another value, update the contents of the file. 
```json { Open the *local.settings.json* file and confirm that a setting named `FUNCTIONS_ ## Create your functions -The most basic Durable Functions app contains three functions: +The most basic Durable Functions app has three functions: -* *Orchestrator function* - describes a workflow that orchestrates other functions. -* *Activity function* - called by the orchestrator function, performs work, and optionally returns a value. -* *Client function* - a regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function. +* **Orchestrator function**: A workflow that orchestrates other functions. +* **Activity function**: A function that is called by the orchestrator function, performs work, and optionally returns a value. +* **Client function**: A regular function in Azure that starts an orchestrator function. This example uses an HTTP-triggered function. ### Orchestrator function -You use a template to create the durable function code in your project. +Use a template to create the Durable Functions app code in your project. -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Following the prompts, provide the following information: +1. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions orchestrator | Create a Durable Functions orchestration | - | Provide a function name | HelloOrchestrator | Name of your durable function | + | **Select a template for your function** | Enter **Durable Functions orchestrator**. | Creates a Durable Functions app orchestration. | + | **Provide a function name** | Enter **HelloOrchestrator**. | A name for your durable function. | -You've added an orchestrator to coordinate activity functions. Open *HelloOrchestrator/run.ps1* to see the orchestrator function. Each call to the `Invoke-ActivityFunction` cmdlet invokes an activity function named `Hello`. +You added an orchestrator to coordinate activity functions. Open *HelloOrchestrator/run.ps1* to see the orchestrator function. Each call to the Invoke-ActivityFunction cmdlet invokes an activity function named `Hello`. -Next, you'll add the referenced `Hello` activity function. +Next, you add the referenced `Hello` activity function. ### Activity function -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Following the prompts, provide the following information: +1. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions activity | Create an activity function | - | Provide a function name | Hello | Name of your activity function | + | **Select a template for your function** | Select **Durable Functions activity**. | Creates an activity function. | + | **Provide a function name** | Enter **Hello**. | The name of your activity function. | -You've added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/run.ps1* to see that it's taking a name as input and returning a greeting. An activity function is where you'll perform actions such as making a database call or performing a computation. 
+You added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/run.ps1* to see that it's taking a name as input and returning a greeting. An activity function is where you perform actions such as making a database call or performing a computation. -Finally, you'll add an HTTP triggered function that starts the orchestration. +Finally, you add an HTTP-triggered function that starts the orchestration. ### Client function (HTTP starter) -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Following the prompts, provide the following information: +1. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions HTTP starter | Create an HTTP starter function | - | Provide a function name | HttpStart | Name of your activity function | - | Authorization level | Anonymous | For demo purposes, allow the function to be called without authentication | + | **Select a template for your function** | Select **Durable Functions HTTP starter**. | Creates an HTTP starter function. | + | **Provide a function name** | Enter **HttpStart**. | The name of your activity function. | + | **Authorization level** | Select **Anonymous**. | For demo purposes, this value allows the function to be called without using authentication. | -You've added an HTTP triggered function that starts an orchestration. Open *HttpStart/run.ps1* to see that it uses the `Start-NewOrchestration` cmdlet to start a new orchestration. Then it uses the `New-OrchestrationCheckStatusResponse` cmdlet to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. +You added an HTTP-triggered function that starts an orchestration. Open *HttpStart/run.ps1* to check that it uses the Start-NewOrchestration cmdlet to start a new orchestration. Then it uses the New-OrchestrationCheckStatusResponse cmdlet to return an HTTP response that contains URLs that can be used to monitor and manage the new orchestration. -You now have a Durable Functions app that can be run locally and deployed to Azure. +You now have a Durable Functions app that you can run locally and deploy to Azure. > [!NOTE]-> The next version of the DF PowerShell is now in preview and may be downloaded from the PowerShell Gallery. -> Learn about it and how to try it out in the [guide to the standalone PowerShell SDK](./durable-functions-powershell-v2-sdk-migration-guide.md). -> You may follow the guide's [installation section](./durable-functions-powershell-v2-sdk-migration-guide.md#install-and-enable-the-sdk) for instructions compatible with this quickstart on how to enable it. +> The next version of the Durable Functions PowerShell application is now in preview. You can download it from the PowerShell Gallery. Learn more about it and learn how to try it out in the [guide to the standalone PowerShell SDK](./durable-functions-powershell-v2-sdk-migration-guide.md). You can follow the guide's [installation section](./durable-functions-powershell-v2-sdk-migration-guide.md#install-and-enable-the-sdk) for instructions that are compatible with this quickstart to enable it. ## Test the function locally -Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. 
You're prompted to install these tools the first time you start a function app from Visual Studio Code. +Azure Functions Core Tools gives you the capability to run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function in Visual Studio. -1. To test your function, set a breakpoint in the `Hello` activity function code (*Hello/run.ps1*). Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. +1. To test your function, set a breakpoint in the `Hello` activity function code (in *Hello/run.ps1*). Select F5 or select **Debug: Start Debugging** in the command palette to start the function app project. Output from Core Tools appears in the terminal panel. - > [!NOTE] - > Refer to the [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging) for more information on debugging. + > [!NOTE] + > For more information about debugging, see [Durable Functions diagnostics](durable-functions-diagnostics.md#debugging). -1. Durable Functions requires an Azure Storage account to run. When VS Code prompts you to select a storage account, choose **Select storage account**. +1. Durable Functions requires an Azure storage account to run. When Visual Studio Code prompts you to select a storage account, choose **Select storage account**. - ![Create storage account](media/quickstart-js-vscode/functions-select-storage.png) + :::image type="content" source="media/quickstart-js-vscode/functions-select-storage.png" alt-text="Screenshot that shows the Create storage account command."::: -1. Following the prompts, provide the following information to create a new storage account in Azure. +1. At the prompts, provide the following information to create a new storage account in Azure. - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select subscription | *name of your subscription* | Select your Azure subscription | - | Select a storage account | Create a new storage account | | - | Enter the name of the new storage account | *unique name* | Name of the storage account to create | - | Select a resource group | *unique name* | Name of the resource group to create | - | Select a location | *region* | Select a region close to you | + | **Select subscription** | Select the name of your subscription. | Your Azure subscription. | + | **Select a storage account** | Select **Create a new storage account**. | | + | **Enter the name of the new storage account** | Enter a unique name. | The name of the storage account to create. | + | **Select a resource group** | Enter a unique name. | The name of the resource group to create. | + | **Select a location** | Select an Azure region. | Select a region that is close to you. | -1. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function. +1. In the terminal panel, copy the URL endpoint of your HTTP-triggered function. - ![Azure local output](media/quickstart-js-vscode/functions-f5.png) + :::image type="content" source="media/quickstart-js-vscode/functions-f5.png" alt-text="Screenshot of Azure local output."::: -1. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). 
The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. +1. Use a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/) to send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. - The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It is not yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. + The response is the HTTP function's initial result. It lets you know that the durable orchestration started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs. For now, query the status of the orchestration. -1. Copy the URL value for `statusQueryGetUri` and paste it in the browser's address bar and execute the request. Alternatively you can also continue to use Postman to issue the GET request. +1. Copy the URL value for `statusQueryGetUri`, paste it in the browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. - The request will query the orchestration instance for the status. You should get an eventual response, which shows us the instance has completed, and includes the outputs or results of the durable function. It looks like: + The request queries the orchestration instance for the status. You must get an eventual response, which shows the instance completed and includes the outputs or results of the durable function. It looks like this example: ```json { Azure Functions Core Tools lets you run an Azure Functions project on your local } ``` -1. To stop debugging, press **Shift + F5** in VS Code. +1. To stop debugging, in Visual Studio Code, select Shift+F5. -After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. +After you verify that the function runs correctly on your local computer, it's time to publish the project to Azure. [!INCLUDE [functions-sign-in-vs-code](../../../includes/functions-sign-in-vs-code.md)] After you've verified that the function runs correctly on your local computer, i ## Test your function in Azure -1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator` +1. Copy the URL of the HTTP trigger from the output panel. The URL that calls your HTTP-triggered function should be in this format: -2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app. + `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator` -## Next steps +1. Paste the new URL for the HTTP request in your browser's address bar. When you use the published app, you can expect to get the same status response that you got when you tested locally. -You have used Visual Studio Code to create and publish a PowerShell durable function app. +The PowerShell Durable Functions app that you created and published by using Visual Studio Code is ready to use. 
-> [!div class="nextstepaction"] -> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) +## Clean up resources ++If you no longer need the resources that you created to complete the quickstart, to avoid related costs in your Azure subscription, [delete the resource group](/azure/azure-resource-manager/management/delete-resource-group?tabs=azure-portal#delete-resource-group) and all related resources. ++## Related content ++* Learn about [common Durable Functions app patterns](durable-functions-overview.md#application-patterns). |
azure-functions | Quickstart Python Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-python-vscode.md | Title: Create your first durable function in Azure using Python -description: Create and publish an Azure Durable Function in Python using Visual Studio Code. + Title: "Quickstart: Create a Python Durable Functions app" +description: Create and publish a Python Durable Functions app in Azure Functions by using Visual Studio Code. Previously updated : 06/15/2022 Last updated : 07/24/2024 ms.devlang: python zone_pivot_groups: python-mode-functions -# Create your first durable function in Python +# Quickstart: Create a Python Durable Functions app -Durable Functions is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. +Use Durable Functions, a feature of [Azure Functions](../functions-overview.md), to write stateful functions in a serverless environment. You install Durable Functions by installing the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) in Visual Studio Code. The extension manages state, checkpoints, and restarts in your application. -In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function will orchestrate and chains together calls to other functions. You can then publish the function code to Azure. +In this quickstart, you use the Durable Functions extension in Visual Studio Code to locally create and test a "hello world" Durable Functions app in Azure Functions. The Durable Functions app orchestrates and chains together calls to other functions. Then, you publish the function code to Azure. The tools you use are available via the Visual Studio Code extension. ## Prerequisites -To complete this tutorial: +To complete this quickstart, you need: -* Install [Visual Studio Code](https://code.visualstudio.com/download). +* [Visual Studio Code](https://code.visualstudio.com/download) installed. -* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) Visual Studio Code extension. +* The Visual Studio Code extension [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed. -* Make sure that you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). +* The latest version of [Azure Functions Core Tools](../functions-run-local.md) installed. -* Durable Functions require an Azure storage account. You need an Azure subscription. +* An Azure subscription. To use Durable Functions, you must have an Azure Storage account. -* Make sure that you have version 3.7, 3.8, 3.9, or 3.10 of [Python](https://www.python.org/) installed. +* [Python](https://www.python.org/) version 3.7, 3.8, 3.9, or 3.10 installed. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] To complete this tutorial: In this section, you use Visual Studio Code to create a local Azure Functions project. -1. In Visual Studio Code, press <kbd>F1</kbd> (or <kbd>Ctrl/Cmd+Shift+P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. +1. 
In Visual Studio Code, select F1 (or select Ctrl/Cmd+Shift+P) to open the command palette. At the prompt (`>`), enter and then select **Azure Functions: Create New Project**. :::image type="content" source="media/quickstart-python-vscode/functions-create-project.png" alt-text="Screenshot of Create function window."::: -1. Choose an empty folder location for your project and choose **Select**. +2. Select **Browse**. In the **Select Folder** dialog, go to a folder to use for your project, and then choose **Select**. ::: zone pivot="python-mode-configuration" -1. Follow the prompts and provide the following information: +3. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a language for your function app project | Python | Create a local Python Functions project. | - | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | - | Python version | Python 3.7, 3.8, 3.9, or 3.10 | Visual Studio Code will create a virtual environment with the version you select. | - | Select a template for your project's first function | Skip for now | | - | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. | + | **Select a language for your function app project** | Select **Python**. | Creates a local Python Functions project. | + | **Select a version** | Select **Azure Functions v4**. | You see this option only when Core Tools isn't already installed. In this case, Core Tools is installed the first time you run the app. | + | **Python version** | Select **Python 3.7**, **Python 3.8**, **Python 3.9**, or **Python 3.10**. | Visual Studio Code creates a virtual environment by using the version you select. | + | **Select a template for your project's first function** | Select **Skip for now**. | | + | **Select how you would like to open your project** | Select **Open in current window**. | Opens Visual Studio Code in the folder you selected. | + ::: zone-end -1. Follow the prompts and provide the following information: +3. At the prompts, provide the following information: | Prompt | Value | Description | | | -- | -- |- | Select a language | Python (Programming Model V2) | Create a local Python Functions project using the V2 programming model. | - | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | - | Python version | Python 3.7, 3.8, 3.9, or 3.10 | Visual Studio Code will create a virtual environment with the version you select. | - | Select how you would like to open your project | Open in current window | Reopens Visual Studio Code in the folder you selected. | + | **Select a language** | Select **Python (Programming Model V2)**. | Creates a local Python Functions project by using the V2 programming model. | + | **Select a version** | Select **Azure Functions v4**. | You see this option only when Core Tools isn't already installed. In this case, Core Tools is installed the first time you run the app. | + | **Python version** | Select **Python 3.7**, **Python 3.8**, **Python 3.9**, or **Python 3.10**. | Visual Studio Code creates a virtual environment by using the version you select. | + | **Select how you would like to open your project** | Select **Open in current window**. 
| Opens Visual Studio Code in the folder you selected. | -Visual Studio Code installs the Azure Functions Core Tools if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. +Visual Studio Code installs Azure Functions Core Tools if it's required to create a project. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. A *requirements.txt* file is also created in the root folder. It specifies the Python packages required to run your function app. ## Install azure-functions-durable from PyPI -When you've created the project, the Azure Functions Visual Studio Code extension automatically creates a virtual environment with your selected Python version. You then need to activate the virtual environment in a terminal and install some dependencies required by Azure Functions and Durable Functions. +When you create the project, the Azure Functions Visual Studio Code extension automatically creates a virtual environment with your selected Python version. You then need to activate the virtual environment in a terminal and install some dependencies required by Azure Functions and Durable Functions. 1. Open the *requirements.txt* in the editor and change its content to the following code: - ``` + ```txt azure-functions azure-functions-durable ``` -1. Open the editor's integrated terminal in the current folder (<kbd>Ctrl+Shift+`</kbd>). +2. In the current folder, open the editor's integrated terminal (Ctrl+Shift+`). -1. In the integrated terminal, activate the virtual environment in the current folder, depending on your operating system: +3. In the integrated terminal, activate the virtual environment in the current folder, depending on your operating system. - # [Linux](#tab/linux) + # [Linux](#tab/linux) - ```bash - source .venv/bin/activate - ``` - # [MacOS](#tab/macos) + ```bash + source .venv/bin/activate + ``` - ```bash - source .venv/bin/activate - ``` + # [macOS](#tab/macos) - # [Windows](#tab/windows) + ```bash + source .venv/bin/activate + ``` - ```powershell - .venv\scripts\activate - ``` - - + # [Windows](#tab/windows) -1. In the integrated terminal where the virtual environment is activated, use pip to install the packages you defined. + ```powershell + .venv\scripts\activate + ``` - ```bash - python -m pip install -r requirements.txt - ``` +++Then, in the integrated terminal where the virtual environment is activated, use pip to install the packages you defined. ++```bash +python -m pip install -r requirements.txt +``` ## Create your functions -A basic Durable Functions app contains three functions: +The most basic Durable Functions app has three functions: -* *Orchestrator function*: Describes a workflow that orchestrates other functions. -* *Activity function*: It's called by the orchestrator function, performs work, and optionally returns a value. -* *Client function*: It's a regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function. +* **Orchestrator function**: A workflow that orchestrates other functions. +* **Activity function**: A function that is called by the orchestrator function, performs work, and optionally returns a value. 
+* **Client function**: A regular function in Azure that starts an orchestrator function. This example uses an HTTP-triggered function. ::: zone pivot="python-mode-configuration" ### Orchestrator function -You use a template to create the durable function code in your project. +You use a template to create the Durable Functions app code in your project. -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Follow the prompts and provide the following information: +2. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions orchestrator | Create a Durable Functions orchestration | - | Provide a function name | HelloOrchestrator | Name of your durable function | + | **Select a template for your function** | Select **Durable Functions orchestrator**. | Creates a Durable Functions app orchestration. | + | **Provide a function name** | Select **HelloOrchestrator**. | A name for your durable function. | -You've added an orchestrator to coordinate activity functions. Open *HelloOrchestrator/\_\_init__.py* to see the orchestrator function. Each call to `context.call_activity` invokes an activity function named `Hello`. +You added an orchestrator to coordinate activity functions. Open *HelloOrchestrator/\_\_init__.py* to see the orchestrator function. Each call to `context.call_activity` invokes an activity function named `Hello`. -Next, you'll add the referenced `Hello` activity function. +Next, you add the referenced `Hello` activity function. ### Activity function -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Follow the prompts and provide the following information: +2. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions activity | Create an activity function | - | Provide a function name | Hello | Name of your activity function | + | **Select a template for your function** | Select **Durable Functions activity**. | Creates an activity function. | + | **Provide a function name** | Enter **Hello**. | The name of your activity function. | -You've added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/\_\_init__.py* to see that it takes a name as input and returns a greeting. An activity function is where you'll perform actions such as making a database call or performing a computation. +You added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/\_\_init__.py* to see that it takes a name as input and returns a greeting. An activity function is where you perform actions such as making a database call or performing a computation. -Finally, you'll add an HTTP triggered function that starts the orchestration. +Finally, you add an HTTP-triggered function that starts the orchestration. ### Client function (HTTP starter) -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Follow the prompts and provide the following information: +2. 
At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions HTTP starter | Create an HTTP starter function | - | Provide a function name | DurableFunctionsHttpStart | Name of your client function | - | Authorization level | Anonymous | For demo purposes, allow the function to be called without authentication | + | **Select a template for your function** | Select **Durable Functions HTTP starter**. | Creates an HTTP starter function. | + | **Provide a function name** | Enter **DurableFunctionsHttpStart**. | The name of your client function | + | **Authorization level** | Select **Anonymous**. | For demo purposes, this value allows the function to be called without using authentication. | -You've added an HTTP triggered function that starts an orchestration. Open *DurableFunctionsHttpStart/\_\_init__.py* to see that it uses `client.start_new` to start a new orchestration. Then it uses `client.create_check_status_response` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. +You added an HTTP-triggered function that starts an orchestration. Open *DurableFunctionsHttpStart/\_\_init__.py* to see that it uses `client.start_new` to start a new orchestration. Then it uses `client.create_check_status_response` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. You now have a Durable Functions app that can be run locally and deployed to Azure.+ ::: zone-end ## Requirements Version 2 of the Python programming model requires the following minimum versions: -- [Azure Functions Runtime](../functions-versions.md) v4.16+-- [Azure Functions Core Tools](../functions-run-local.md) v4.0.5095+ (if running locally)-- [azure-functions-durable](https://pypi.org/project/azure-functions-durable/) v1.2.4++* [Azure Functions Runtime](../functions-versions.md) v4.16+ +* [Azure Functions Core Tools](../functions-run-local.md) v4.0.5095+ (if running locally) +* [azure-functions-durable](https://pypi.org/project/azure-functions-durable/) v1.2.4+ ++## Enable the v2 programming model -## Enable v2 programming model +The following application setting is required to run the v2 programming model: -The following application setting is required to run the v2 programming model: -- Name: `AzureWebJobsFeatureFlags`-- Value: `EnableWorkerIndexing`+* **Name**: `AzureWebJobsFeatureFlags` +* **Value**: `EnableWorkerIndexing` -If you're running locally using [Azure Functions Core Tools](../functions-run-local.md), you should add this setting to your `local.settings.json` file. If you're running in Azure, follow these steps with the tool of your choice: +If you're running locally by using [Azure Functions Core Tools](../functions-run-local.md), add this setting to your *local.settings.json* file. If you're running in Azure, complete these steps by using a relevant tool: # [Azure CLI](#tab/azure-cli-set-indexing-flag) Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your function app and resource group, respectively. 
-```azurecli +```azurecli az functionapp config appsettings set --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME> --settings AzureWebJobsFeatureFlags=EnableWorkerIndexing ``` Replace `<FUNCTION_APP_NAME>` and `<RESOURCE_GROUP_NAME>` with the name of your Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -AppSetting @{"AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"} ``` -# [VS Code](#tab/vs-code-set-indexing-flag) +# [Visual Studio Code](#tab/vs-code-set-indexing-flag) -1. Make sure you have the [Azure Functions extension for VS Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed -1. Press <kbd>F1</kbd> to open the command palette. In the command palette, search for and select `Azure Functions: Add New Setting...`. -1. Choose your subscription and function app when prompted -1. For the name, type `AzureWebJobsFeatureFlags` and press <kbd>Enter</kbd>. -1. For the value, type `EnableWorkerIndexing` and press <kbd>Enter</kbd>. +1. Make sure that you have the [Azure Functions extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed. +2. Select F1 to open the command palette. At the prompt (`>`), enter and then select **Azure Functions: Add New Setting**. +3. Select your subscription and function app when you are prompted. +4. For the name, enter **AzureWebJobsFeatureFlags**, and then select Enter. +5. For the value, enter **EnableWorkerIndexing**, and then select Enter. -To create a basic Durable Functions app using these 3 function types, replace the contents of `function_app.py` with the following Python code. +To create a basic Durable Functions app by using these three function types, replace the contents of *function_app.py* with the following Python code: -```Python +```python import azure.functions as func import azure.durable_functions as df myApp = df.DFApp(http_auth_level=func.AuthLevel.ANONYMOUS) -# An HTTP-Triggered Function with a Durable Functions Client binding +# An HTTP-triggered function with a Durable Functions client binding @myApp.route(route="orchestrators/{functionName}") @myApp.durable_client_input(client_name="client") async def http_start(req: func.HttpRequest, client): def hello(city: str): return f"Hello {city}" ``` -Review the table below for an explanation of each function and its purpose in the sample. +Review the following table for an explanation of each function and its purpose in the sample: | Method | Description | | -- | -- |-| **`hello_orchestrator`** | The orchestrator function, which describes the workflow. In this case, the orchestration starts, invokes three functions in a sequence, and returns the ordered results of all 3 functions in a list. | -| **`hello`** | The activity function, which performs the work being orchestrated. The function returns a simple greeting to the city passed as an argument. | -| **`http_start`** | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a check status response. | +| `hello_orchestrator` | The orchestrator function, which describes the workflow. In this case, the orchestration starts, invokes three functions in a sequence, and then returns the ordered results of all three functions in a list. | +| `hello` | The activity function, which performs the work that is orchestrated. The function returns a simple greeting to the city passed as an argument. 
| +| `http_start` | An [HTTP-triggered function](../functions-bindings-http-webhook.md) that starts an instance of the orchestration and returns a `check status` response. | > [!NOTE]-> Durable Functions also supports Python V2's [blueprints](../functions-reference-python.md#blueprints). To use them, you will need to register your blueprint functions using the [`azure-functions-durable`](https://pypi.org/project/azure-functions-durable) `Blueprint` class, as -> shown [here](https://github.com/Azure/azure-functions-durable-python/blob/dev/samples-v2/blueprint/durable_blueprints.py). The resulting blueprint can then be registered as normal. See our [sample](https://github.com/Azure/azure-functions-durable-python/tree/dev/samples-v2/blueprint) for an example. +> Durable Functions also supports Python v2 [blueprints](../functions-reference-python.md#blueprints). To use blueprints, register your blueprint functions by using the [azure-functions-durable](https://pypi.org/project/azure-functions-durable) `Blueprint` [class](https://github.com/Azure/azure-functions-durable-python/blob/dev/samples-v2/blueprint/durable_blueprints.py). You can register the resulting blueprint as usual. You can use our [sample](https://github.com/Azure/azure-functions-durable-python/tree/dev/samples-v2/blueprint) as an example. ::: zone-end ## Test the function locally -Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. If you don't have it installed, you're prompted to install these tools the first time you start a function from Visual Studio Code. +Azure Functions Core Tools gives you the capability to run an Azure Functions project on your local development computer. If it isn't installed, you're prompted to install these tools the first time you start a function in Visual Studio Code. ::: zone pivot="python-mode-configuration" -1. To test your function, set a breakpoint in the `Hello` activity function code (*Hello/\_\_init__.py*). Press <kbd>F5</kbd> or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. +1. To test your function, set a breakpoint in the `Hello` activity function code (in *Hello/\_\_init__.py*). Select F5 or select **Debug: Start Debugging** in the command palette to start the function app project. Output from Core Tools appears in the terminal panel. ++ > [!NOTE] + > For more information about debugging, see [Durable Functions diagnostics](durable-functions-diagnostics.md#debugging). ::: zone-end -1. To test your function, set a breakpoint in the `hello` activity function code. Press <kbd>F5</kbd> or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. +1. To test your function, set a breakpoint in the `hello` activity function code. Select F5 or select **Debug: Start Debugging** in the command palette to start the function app project. Output from Core Tools appears in the terminal panel. + > [!NOTE] + > For more information about debugging, see [Durable Functions diagnostics](durable-functions-diagnostics.md#debugging). -> [!NOTE] -> For more information on debugging, see [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging). -2. Durable Functions require an Azure storage account to run. When Visual Studio Code prompts you to select a storage account, select **Select storage account**. +2. 
Durable Functions requires an Azure storage account to run. When Visual Studio Code prompts you to select a storage account, select **Select storage account**. :::image type="content" source="media/quickstart-python-vscode/functions-select-storage.png" alt-text="Screenshot of how to create a storage account."::: -3. Follow the prompts and provide the following information to create a new storage account in Azure: +3. At the prompts, provide the following information to create a new storage account in Azure. - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select subscription | *name of your subscription* | Select your Azure subscription | - | Select a storage account | Create a new storage account | | - | Enter the name of the new storage account | *unique name* | Name of the storage account to create | - | Select a resource group | *unique name* | Name of the resource group to create | - | Select a location | *region* | Select a region close to you | + | **Select subscription** | Select the name of your subscription. | Your Azure subscription. | + | **Select a storage account** | Select **Create a new storage account**. | | + | **Enter the name of the new storage account** | Enter a unique name. | The name of the storage account to create. | + | **Select a resource group** | Enter a unique name. | The name of the resource group to create. | + | **Select a location** | Select an Azure region. | Select a region that is close to you. | -4. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function. +4. In the terminal panel, copy the URL endpoint of your HTTP-triggered function. :::image type="content" source="media/quickstart-python-vscode/functions-f5.png" alt-text="Screenshot of Azure local output."::: ::: zone pivot="python-mode-configuration"-5. Use your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL must be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. - The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. --5. Use your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`hello_orchestrator`). The URL must be similar to `http://localhost:7071/api/orchestrators/hello_orchestrator`. +5. In your browser or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL must be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. ++ The response is the HTTP function's initial result. It lets you know that the durable orchestration has started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs. For now, query the status of the orchestration. ++6. Copy the URL value for `statusQueryGetUri`, paste it in your browser's address bar, and execute the request. 
Alternatively, you can also continue to use Postman to issue the GET request. ++ The request queries the orchestration instance for the status. You should see that the instance finished and that it includes the outputs or results of the durable function. It looks similar to this example: ++ ```json + { + "name": "HelloOrchestrator", + "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a", + "runtimeStatus": "Completed", + "input": null, + "customStatus": null, + "output": [ + "Hello Tokyo!", + "Hello Seattle!", + "Hello London!" + ], + "createdTime": "2020-03-18T21:54:49Z", + "lastUpdatedTime": "2020-03-18T21:54:54Z" + } + ``` - The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It isn't yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. ::: zone-end -2. Copy the URL value for `statusQueryGetUri`, paste it in the browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. -- The request will query the orchestration instance for the status. You must get an eventual response, which shows the instance has completed and includes the outputs or results of the durable function. It looks like: +5. Use a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/) to send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`hello_orchestrator`). The URL must be similar to `http://localhost:7071/api/orchestrators/hello_orchestrator`. ++ The response is the HTTP function's initial result. It lets you know that the durable orchestration has started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs. For now, query the status of the orchestration. ++6. Copy the URL value for `statusQueryGetUri`, paste it in your browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. ++ The request queries the orchestration instance for the status. You should see that the instance finished and that it includes the outputs or results of the durable function. It looks similar to this example: ++ ```json + { + "name": "hello_orchestrator", + "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a", + "runtimeStatus": "Completed", + "input": null, + "customStatus": null, + "output": [ + "Hello Tokyo!", + "Hello Seattle!", + "Hello London!" + ], + "createdTime": "2020-03-18T21:54:49Z", + "lastUpdatedTime": "2020-03-18T21:54:54Z" + } + ``` -```json -{ - "name": "HelloOrchestrator", - "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a", - "runtimeStatus": "Completed", - "input": null, - "customStatus": null, - "output": [ - "Hello Tokyo!", - "Hello Seattle!", - "Hello London!" - ], - "createdTime": "2020-03-18T21:54:49Z", - "lastUpdatedTime": "2020-03-18T21:54:54Z" -} -``` -```json -{ - "name": "hello_orchestrator", - "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a", - "runtimeStatus": "Completed", - "input": null, - "customStatus": null, - "output": [ - "Hello Tokyo!", - "Hello Seattle!", - "Hello London!" - ], - "createdTime": "2020-03-18T21:54:49Z", - "lastUpdatedTime": "2020-03-18T21:54:54Z" -} -``` ::: zone-end -7. To stop debugging, press <kbd>Shift+F5</kbd> in Visual Studio Code. +7. To stop debugging, in Visual Studio Code, select Shift+F5. 
-After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. +After you verify that the function runs correctly on your local computer, it's time to publish the project to Azure. [!INCLUDE [functions-create-function-app-vs-code](../../../includes/functions-sign-in-vs-code.md)] After you've verified that the function runs correctly on your local computer, i ## Test your function in Azure ::: zone pivot="python-mode-configuration"-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator` ++1. Copy the URL of the HTTP trigger from the output panel. The URL that calls your HTTP-triggered function must be in this format: ++ `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator` + ::: zone-end+ ::: zone pivot="python-mode-decorators"-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function must be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/hello_orchestrator` ++1. Copy the URL of the HTTP trigger from the output panel. The URL that calls your HTTP-triggered function must be in this format: ++ `https://<functionappname>.azurewebsites.net/api/orchestrators/hello_orchestrator` + ::: zone-end +2. Paste the new URL for the HTTP request in your browser's address bar. When you use the published app, you can expect to get the same status response that you got when you tested locally. ++The Python Durable Functions app that you created and published by using Visual Studio Code is ready to use. -1. Paste this new URL for the HTTP request in your browser's address bar. You must get the same status response as before when using the published app. +## Clean up resources -## Next steps +If you no longer need the resources that you created to complete the quickstart, to avoid related costs in your Azure subscription, [delete the resource group](/azure/azure-resource-manager/management/delete-resource-group?tabs=azure-portal#delete-resource-group) and all related resources. -You have used Visual Studio Code to create and publish a Python durable function app. +## Related content -> [!div class="nextstepaction"] -> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) +* Learn about [common Durable Functions app patterns](durable-functions-overview.md#application-patterns). |
azure-functions | Quickstart Ts Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-ts-vscode.md | Title: Create your first durable function in Azure using TypeScript -description: Create and publish an Azure Durable Function in TypeScript using Visual Studio Code. + Title: "Quickstart: Create a TypeScript Durable Functions app" +description: Create and publish a TypeScript Durable Functions app in Azure Functions by using Visual Studio Code. Previously updated : 02/13/2023 Last updated : 07/24/2024 ms.devlang: typescript zone_pivot_groups: functions-nodejs-model -# Create your first durable function in TypeScript +# Quickstart: Create a TypeScript Durable Functions app -*Durable Functions* is an extension of [Azure Functions](../functions-overview.md) that lets you write stateful functions in a serverless environment. The extension manages state, checkpoints, and restarts for you. +Use Durable Functions, a feature of [Azure Functions](../functions-overview.md), to write stateful functions in a serverless environment. You install Durable Functions by installing the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) in Visual Studio Code. The extension manages state, checkpoints, and restarts in your application. -In this article, you learn how to use the Visual Studio Code Azure Functions extension to locally create and test a "hello world" durable function. This function will orchestrate and chain together calls to other functions. You then publish the function code to Azure. +In this quickstart, you use the Durable Functions extension in Visual Studio Code to locally create and test a "hello world" Durable Functions app in Azure Functions. The Durable Functions app orchestrates and chains together calls to other functions. Then, you publish the function code to Azure. The tools you use are available via the Visual Studio Code extension. [!INCLUDE [functions-nodejs-model-pivot-description](../../../includes/functions-nodejs-model-pivot-description.md)] -![Screenshot of an Edge window. The window shows the output of invoking a simple durable function in Azure.](./media/quickstart-js-vscode/functions-vs-code-complete.png) +![Screenshot of an Edge window. The window shows the output of invoking a simple Durable Functions app in Azure.](./media/quickstart-js-vscode/functions-vs-code-complete.png) ## Prerequisites -To complete this tutorial: +To complete this quickstart, you need: -* Install [Visual Studio Code](https://code.visualstudio.com/download). +* [Visual Studio Code](https://code.visualstudio.com/download) installed. ::: zone pivot="nodejs-model-v3"-* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) VS Code extension ++* The Visual Studio Code extension [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) installed. + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-* Install the [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) VS Code extension version `1.10.4` or above. ++* The Visual Studio Code extension [Azure Functions](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) version 1.10.4 or later installed. 
+ ::: zone-end ::: zone pivot="nodejs-model-v3"-* Make sure you have the latest version of the [Azure Functions Core Tools](../functions-run-local.md). ++* The latest version of [Azure Functions Core Tools](../functions-run-local.md) installed. + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-* Make sure you have [Azure Functions Core Tools](../functions-run-local.md) version `v4.0.5382` or above. ++* [Azure Functions Core Tools](../functions-run-local.md) version 4.0.5382 or later installed. + ::: zone-end -* Durable Functions require an Azure storage account. You need an Azure subscription. +* An Azure subscription. To use Durable Functions, you must have an Azure Storage account. ::: zone pivot="nodejs-model-v3"-* Make sure that you have version 16.x+ of [Node.js](https://nodejs.org/) installed. ++* [Node.js](https://nodejs.org/) version 16.x+ installed. + ::: zone-end-* Make sure that you have version 18.x+ of [Node.js](https://nodejs.org/) installed. +++* [Node.js](https://nodejs.org/) version 18.x+ installed. + ::: zone-end-* Make sure that you have [TypeScript](https://www.typescriptlang.org/) v4.x+ installed. ++* [TypeScript](https://www.typescriptlang.org/) version 4.x+ installed. [!INCLUDE [quickstarts-free-trial-note](~/reusable-content/ce-skilling/azure/includes/quickstarts-free-trial-note.md)] ## <a name="create-an-azure-functions-project"></a>Create your local project -In this section, you use Visual Studio Code to create a local Azure Functions project. +In this section, you use Visual Studio Code to create a local Azure Functions project. -1. In Visual Studio Code, press <kbd>F1</kbd> (or <kbd>Ctrl/Cmd + Shift + P</kbd>) to open the command palette. In the command palette, search for and select `Azure Functions: Create New Project...`. +1. In Visual Studio Code, select F1 (or select Ctrl/Cmd+Shift+P) to open the command palette. At the prompt (`>`), enter and then select **Azure Functions: Create New Project**. - ![Screenshot of the Visual Studio Code command palette. The command titled "Azure Functions: Create New Project..." is highlighted.](media/quickstart-js-vscode/functions-create-project.png) + :::image type="content" source="media/quickstart-js-vscode/functions-create-project.png" alt-text="Screenshot that shows the Visual Studio Code command palette, with Azure Functions Create New Project highlighted."::: -2. Choose an empty folder location for your project and choose **Select**. +2. Select **Browse**. In the **Select Folder** dialog, go to a folder to use for your project, and then choose **Select**. ::: zone pivot="nodejs-model-v3"-3. Following the prompts, provide the following information: - | Prompt | Value | Description | +3. At the prompts, provide the following information: ++ | Prompt | Action | Description | | | -- | -- |- | Select a language for your function app project | TypeScript | Create a local Node.js Functions project using TypeScript. | - | Select a JavaScript programming model | Model V3 | Choose the V3 programming model. | - | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | - | Select a template for your project's first function | Skip for now | | - | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | + | **Select a language for your function app project** | Select **TypeScript**. 
| Creates a local Node.js Functions project by using TypeScript. | + | **Select a JavaScript programming model** | Select **Model V3**. | Sets the v3 programming model. | + | **Select a version** | Select **Azure Functions v4**. | You see this option only when Core Tools isn't already installed. In this case, Core Tools is installed the first time you run the app. | + | **Select a template for your project's first function** | Select **Skip for now**. | | + | **Select how you would like to open your project** | Select **Open in current window**. | Opens Visual Studio Code in the folder you selected. | ::: zone-end+ ::: zone pivot="nodejs-model-v4"-3. Following the prompts, provide the following information: - | Prompt | Value | Description | +3. At the prompts, provide the following information: ++ | Prompt | Action | Description | | | -- | -- |- | Select a language for your function app project | TypeScript | Create a local Node.js Functions project using TypeScript. | - | Select a JavaScript programming model | Model V4 | Choose the V4 programming model. | - | Select a version | Azure Functions v4 | You only see this option when the Core Tools aren't already installed. In this case, Core Tools are installed the first time you run the app. | - | Select a template for your project's first function | Skip for now | | - | Select how you would like to open your project | Open in current window | Reopens VS Code in the folder you selected. | + | **Select a language for your function app project** | Select **TypeScript**. | Creates a local Node.js Functions project by using TypeScript. | + | **Select a JavaScript programming model** | Select **Model V4**. | Sets the v4 programming model. | + | **Select a version** | Select **Azure Functions v4**. | You see this option only when Core Tools isn't already installed. In this case, Core Tools is installed the first time you run the app. | + | **Select a template for your project's first function** | Select **Skip for now**. | | + | **Select how you would like to open your project** | Select **Open in current window**. | Opens Visual Studio Code in the folder you selected. | ::: zone-end -Visual Studio Code installs the Azure Functions Core Tools, if needed. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. +Visual Studio Code installs Azure Functions Core Tools if it's required to create a project. It also creates a function app project in a folder. This project contains the [host.json](../functions-host-json.md) and [local.settings.json](../functions-develop-local.md#local-settings-file) configuration files. -A `package.json` file and a `tsconfig.json` file are also created in the root folder. +A *package.json* file and a *tsconfig.json* file are also created in the root folder. ## Install the Durable Functions npm package -To work with Durable Functions in a Node.js function app, you use a library called `durable-functions`. +To work with Durable Functions in a Node.js function app, you use a library called *durable-functions*. + ::: zone pivot="nodejs-model-v4"-To use the V4 programming model, you need to install the preview `v3.x` version of `durable-functions`. ++To use the v4 programming model, you need to install the preview v3.x version of the durable-functions library. + ::: zone-end -1. Use the *View* menu or <kbd>Ctrl + Shift + `</kbd> to open a new terminal in VS Code. +1. 
Use the **View** menu or select Ctrl+Shift+` to open a new terminal in Visual Studio Code. ::: zone pivot="nodejs-model-v3"-2. Install the `durable-functions` npm package by running `npm install durable-functions` in the root directory of the function app. ++2. Install the durable-functions npm package by running `npm install durable-functions` in the root directory of the function app. + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-2. Install the `durable-functions` npm package preview version by running `npm install durable-functions@preview` in the root directory of the function app. ++2. Install the durable-functions npm package preview version by running `npm install durable-functions@preview` in the root directory of the function app. + ::: zone-end -## Creating your functions +## Create your functions -The most basic Durable Functions app contains three functions: +The most basic Durable Functions app has three functions: -* *Orchestrator function* - describes a workflow that orchestrates other functions. -* *Activity function* - called by the orchestrator function, performs work, and optionally returns a value. -* *Client function* - a regular Azure Function that starts an orchestrator function. This example uses an HTTP triggered function. +* **Orchestrator function**: A workflow that orchestrates other functions. +* **Activity function**: A function that is called by the orchestrator function, performs work, and optionally returns a value. +* **Client function**: A regular function in Azure that starts an orchestrator function. This example uses an HTTP-triggered function. ::: zone pivot="nodejs-model-v3" ### Orchestrator function -You use a template to create the durable function code in your project. +You use a template to create the Durable Functions code in your project. -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Following the prompts, provide the following information: +2. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions orchestrator | Create a Durable Functions orchestration | - | Choose a durable storage type. | Azure Storage (Default) | Select the storage backend used for Durable Functions. | - | Provide a function name | HelloOrchestrator | Name of your durable function | + | **Select a template for your function** | Select **Durable Functions orchestrator**. | Creates a Durable Functions orchestration. | + | **Choose a durable storage type** | Select **Azure Storage (Default)**. | Sets the storage back end to use for your Durable Functions app. | + | **Provide a function name** | Enter **HelloOrchestrator**. | The name of your function. | -You've added an orchestrator to coordinate activity functions. Open *HelloOrchestrator/index.ts* to see the orchestrator function. Each call to `context.df.callActivity` invokes an activity function named `Hello`. +You added an orchestrator to coordinate activity functions. Open *HelloOrchestrator/index.ts* to see the orchestrator function. Each call to `context.df.callActivity` invokes an activity function named `Hello`. -Next, you'll add the referenced `Hello` activity function. +Next, you add the referenced `Hello` activity function. ### Activity function -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. 
In the command palette, enter and then select **Azure Functions: Create Function**. -1. Following the prompts, provide the following information: +2. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions activity | Create an activity function | - | Provide a function name | Hello | Name of your activity function | + | **Select a template for your function** | Select **Durable Functions activity**. | Creates an activity function. | + | **Provide a function name** | Enter **Hello**. | A name for your activity function. | -You've added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/index.ts* to see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow: work such as making a database call or performing some non-deterministic computation. +You added the `Hello` activity function that is invoked by the orchestrator. Open *Hello/index.ts* to see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow, such as making a database call or performing some nondeterministic computation. -Finally, you'll add an HTTP triggered function that starts the orchestration. +Finally, you add an HTTP-triggered function that starts the orchestration. ### Client function (HTTP starter) -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select `Azure Functions: Create Function`. -1. Following the prompts, provide the following information: +2. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions HTTP starter | Create an HTTP starter function | - | Provide a function name | DurableFunctionsHttpStart | Name of your activity function | - | Authorization level | Anonymous | For demo purposes, allow the function to be called without authentication | + | **Select a template for your function** | Select **Durable Functions HTTP starter**. | Creates an HTTP starter function. | + | **Provide a function name** | Select **DurableFunctionsHttpStart**. | The name of your activity function. | + | **Authorization level** | Select **Anonymous**. | For demo purposes, this value allows the function to be called without using authentication. | -You've added an HTTP triggered function that starts an orchestration. Open *DurableFunctionsHttpStart/index.ts* to see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. +You added an HTTP-triggered function that starts an orchestration. Open *DurableFunctionsHttpStart/index.ts* to see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. ++You now have a Durable Functions app that you can run locally and deploy to Azure. -You now have a Durable Functions app that can be run locally and deployed to Azure. 
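Before you run it, it can help to see roughly what the orchestrator template generated. The following is a condensed, illustrative sketch of the kind of code the **Durable Functions orchestrator** template places in *HelloOrchestrator/index.ts*; the exact output can vary by template version, so treat the file generated in your own project as the source of truth.

```typescript
import * as df from "durable-functions";

// Orchestrator: chains three calls to the "Hello" activity and returns
// the collected greetings after all of the activity calls finish.
const orchestrator = df.orchestrator(function* (context) {
    const outputs = [];
    outputs.push(yield context.df.callActivity("Hello", "Tokyo"));
    outputs.push(yield context.df.callActivity("Hello", "Seattle"));
    outputs.push(yield context.df.callActivity("Hello", "London"));
    return outputs;
});

export default orchestrator;
```

Each `yield` hands control back to the Durable Functions runtime, which checkpoints the orchestration's progress before the next activity call runs.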
::: zone-end+ ::: zone pivot="nodejs-model-v4" -One of the benefits of the V4 Programming Model is the flexibility of where you write your functions. -In the V4 Model, you can use a single template to create all three functions in one file in your project. +One of the benefits of the v4 programming model is the flexibility of where you write your functions. In the v4 model, you can use a single template to create all three functions in one file in your project. -1. In the command palette, search for and select `Azure Functions: Create Function...`. +1. In the command palette, enter and then select **Azure Functions: Create Function**. -1. Following the prompts, provide the following information: +2. At the prompts, provide the following information: - | Prompt | Value | Description | + | Prompt | Action | Description | | | -- | -- |- | Select a template for your function | Durable Functions orchestrator | Create a file with a Durable Functions orchestration, an Activity function, and a Durable Client starter function. | - | Choose a durable storage type | Azure Storage (Default) | Select the storage backend used for Durable Functions. | - | Provide a function name | hello | Name used for your durable functions | + | **Select a template for your function** | Select **Durable Functions orchestrator**. | Creates a file that has a Durable Functions app orchestration, an activity function, and a durable client starter function. | + | **Choose a durable storage type** | Select **Azure Storage (Default)**. | Sets the storage back end to use for your Durable Function. | + | **Provide a function name** | Enter **Hello**. | A name for your durable function. | Open *src/functions/hello.ts* to view the functions you created. -You've created an orchestrator called `helloOrchestrator` to coordinate activity functions. Each call to `context.df.callActivity` invokes an activity function called `hello`. +You created an orchestrator called `helloOrchestrator` to coordinate activity functions. Each call to `context.df.callActivity` invokes an activity function called `hello`. -You've also added the `hello` activity function that is invoked by the orchestrator. In the same file, you can see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow: work such as making a database call or performing some non-deterministic computation. +You also added the `hello` activity function that is invoked by the orchestrator. In the same file, you can see that it's taking a name as input and returning a greeting. An activity function is where you perform "the real work" in your workflow, such as making a database call or performing some nondeterministic computation. -Lastly, you've also added an HTTP triggered function that starts an orchestration. In the same file, you can see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. +Finally, you added an HTTP-triggered function that starts an orchestration. In the same file, you can see that it uses `client.startNew` to start a new orchestration. Then it uses `client.createCheckStatusResponse` to return an HTTP response containing URLs that can be used to monitor and manage the new orchestration. ++You now have a Durable Functions app that you can run locally and deploy to Azure. 
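For orientation, here's a condensed, illustrative sketch of the kind of code the single-file template places in *src/functions/hello.ts* for the v4 model. The registration calls and names shown here are indicative only; the generated file in your project is authoritative.

```typescript
import { app, HttpHandler, HttpRequest, HttpResponse, InvocationContext } from "@azure/functions";
import * as df from "durable-functions";
import { OrchestrationContext, OrchestrationHandler } from "durable-functions";

// Orchestrator: chains three calls to the "hello" activity.
const helloOrchestrator: OrchestrationHandler = function* (context: OrchestrationContext) {
    const outputs = [];
    outputs.push(yield context.df.callActivity("hello", "Tokyo"));
    outputs.push(yield context.df.callActivity("hello", "Seattle"));
    outputs.push(yield context.df.callActivity("hello", "Cairo"));
    return outputs;
};
df.app.orchestration("helloOrchestrator", helloOrchestrator);

// Activity: performs the actual work for a single input value.
df.app.activity("hello", {
    handler: (input: string): string => `Hello, ${input}`,
});

// HTTP starter: uses the durable client to start an orchestration and
// returns the check-status response that contains the management URLs.
const httpStart: HttpHandler = async (request: HttpRequest, context: InvocationContext): Promise<HttpResponse> => {
    const client = df.getClient(context);
    const instanceId: string = await client.startNew(request.params.orchestratorName);
    context.log(`Started orchestration with ID = '${instanceId}'.`);
    return client.createCheckStatusResponse(request, instanceId);
};

app.http("durableHttpStart", {
    route: "orchestrators/{orchestratorName}",
    extraInputs: [df.input.durableClient()],
    handler: httpStart,
});
```

Registering all three functions from one file is the flexibility the quickstart describes for the v4 model; you could equally split them into separate files under *src/functions/*.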
-You now have a Durable Functions app that can be run locally and deployed to Azure. ::: zone-end ## Test the function locally -Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function from Visual Studio Code. +Azure Functions Core Tools gives you the capability to run an Azure Functions project on your local development computer. You're prompted to install these tools the first time you start a function in Visual Studio. ::: zone pivot="nodejs-model-v3" -1. To test your function, set a breakpoint in the `Hello` activity function code (*Hello/index.ts*). Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. +1. To test your function, set a breakpoint in the `Hello` activity function code (in *Hello/index.ts*). Select F5 or select **Debug: Start Debugging** in the command palette to start the function app project. Output from Core Tools appears in the terminal panel. ++ > [!NOTE] + > For more information about debugging, see [Durable Functions diagnostics](durable-functions-diagnostics.md#debugging). + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-1. To test your function, set a breakpoint in the `hello` activity function code (*src/functions/hello.ts*). Press F5 or select `Debug: Start Debugging` from the command palette to start the function app project. Output from Core Tools is displayed in the **Terminal** panel. - > [!NOTE] - > Refer to the [Durable Functions Diagnostics](durable-functions-diagnostics.md#debugging) for more information on debugging. +1. To test your function, set a breakpoint in the `hello` activity function code (in *src/functions/hello.ts*). Select F5 or select **Debug: Start Debugging** in the command palette to start the function app project. Output from Core Tools appears in the terminal panel. ++ > [!NOTE] + > For more information about debugging, see [Durable Functions diagnostics](durable-functions-diagnostics.md#debugging). -2. Durable Functions requires an Azure Storage account to run. When VS Code prompts you to select a storage account, choose **Select storage account**. - ![Screenshot of a Visual Studio Code alert window. The window says "In order to debug, you must select a storage account for internal use by the Azure Functions runtime." The button titled "Select storage account" is highlighted.](media/quickstart-js-vscode/functions-select-storage.png) +2. Durable Functions requires an Azure Storage account to run. When Visual Studio Code prompts you to select a storage account, select **Select storage account**. -3. Following the prompts, provide the following information to create a new storage account in Azure. + ![Screenshot of a Visual Studio Code alert window. Select storage account is highlighted.](media/quickstart-js-vscode/functions-select-storage.png) - | Prompt | Value | Description | +3. At the prompts, provide the following information to create a new storage account in Azure. 
++ | Prompt | Action | Description | | | -- | -- |- | Select subscription | *name of your subscription* | Select your Azure subscription | - | Select a storage account | Create a new storage account | | - | Enter the name of the new storage account | *unique name* | Name of the storage account to create | - | Select a resource group | *unique name* | Name of the resource group to create | - | Select a location | *region* | Select a region close to you | + | **Select subscription** | Select the name of your subscription. | Your Azure subscription. | + | **Select a storage account** | Select **Create a new storage account**. | | + | **Enter the name of the new storage account** | Enter a unique name. | The name of the storage account to create. | + | **Select a resource group** | Enter a unique name. | The name of the resource group to create. | + | **Select a location** | Select an Azure region. | Select a region that is close to you. | -4. In the **Terminal** panel, copy the URL endpoint of your HTTP-triggered function. +4. In the terminal panel, copy the URL endpoint of your HTTP-triggered function. - ![Screenshot of the Visual Studio code terminal panel. The terminal shows the output of running an Durable Functions app locally. The table titled "terminal" and the URL of the HTTP starter function are highlighted.](media/quickstart-js-vscode/functions-f5.png) + :::image type="content" source="media/quickstart-js-vscode/functions-f5.png" alt-text="Screenshot that shows the Visual Studio Code terminal panel. The URL of the HTTP starter function is highlighted." lightbox="media/quickstart-js-vscode/functions-f5.png"::: ::: zone pivot="nodejs-model-v3"-5. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. ++5. Use a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/) to send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`HelloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/HelloOrchestrator`. ++ The response is the HTTP function's initial result. It lets you know that the durable orchestration started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs. For now, query the status of the orchestration. + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-5. Using your browser, or a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/), send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`helloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/helloOrchestrator`. ++5. Use a tool like [Postman](https://www.getpostman.com/) or [cURL](https://curl.haxx.se/) to send an HTTP POST request to the URL endpoint. Replace the last segment with the name of the orchestrator function (`helloOrchestrator`). The URL should be similar to `http://localhost:7071/api/orchestrators/helloOrchestrator`. ++ The response is the HTTP function's initial result. It lets you know that the durable orchestration started successfully. It doesn't yet display the end result of the orchestration. The response includes a few useful URLs. 
For now, query the status of the orchestration. ++++6. Copy the URL value for `statusQueryGetUri`, paste it in your browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. ++ The request queries the orchestration instance for the status. You should see that the instance finished and that it includes the outputs or results of the durable function. It looks similar to this example: ++ ```json + { + "name": "HelloOrchestrator", + "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a", + "runtimeStatus": "Completed", + "input": null, + "customStatus": null, + "output": [ + "Hello Tokyo!", + "Hello Seattle!", + "Hello London!" + ], + "createdTime": "2020-03-18T21:54:49Z", + "lastUpdatedTime": "2020-03-18T21:54:54Z" + } + ``` + ::: zone-end - The response is the initial result from the HTTP function letting you know the durable orchestration has started successfully. It is not yet the end result of the orchestration. The response includes a few useful URLs. For now, let's query the status of the orchestration. --6. Copy the URL value for `statusQueryGetUri` and paste it in the browser's address bar and execute the request. Alternatively you can also continue to use Postman to issue the GET request. -- The request queries the orchestration instance for the status. You should get an eventual response, which shows us the instance has completed, and includes the outputs or results of the durable function. It looks like: -- ::: zone pivot="nodejs-model-v3" - ```json - { - "name": "HelloOrchestrator", - "instanceId": "9a528a9e926f4b46b7d3deaa134b7e8a", - "runtimeStatus": "Completed", - "input": null, - "customStatus": null, - "output": [ - "Hello Tokyo!", - "Hello Seattle!", - "Hello London!" - ], - "createdTime": "2020-03-18T21:54:49Z", - "lastUpdatedTime": "2020-03-18T21:54:54Z" - } - ``` - ::: zone-end - ::: zone pivot="nodejs-model-v4" - ```json - { - "name": "helloOrchestrator", - "instanceId": "6ba3f77933b1461ea1a3828c013c9d56", - "runtimeStatus": "Completed", - "input": "", - "customStatus": null, - "output": [ - "Hello, Tokyo", - "Hello, Seattle", - "Hello, Cairo" - ], - "createdTime": "2023-02-13T23:02:21Z", - "lastUpdatedTime": "2023-02-13T23:02:25Z" - } - ``` - ::: zone-end --7. To stop debugging, press **Shift + F5** in VS Code. --After you've verified that the function runs correctly on your local computer, it's time to publish the project to Azure. ++6. Copy the URL value for `statusQueryGetUri`, paste it in your browser's address bar, and execute the request. Alternatively, you can also continue to use Postman to issue the GET request. ++ The request queries the orchestration instance for the status. You should see that the instance finished and that it includes the outputs or results of the Durable Functions app. It looks similar to this example: ++ ```json + { + "name": "helloOrchestrator", + "instanceId": "6ba3f77933b1461ea1a3828c013c9d56", + "runtimeStatus": "Completed", + "input": "", + "customStatus": null, + "output": [ + "Hello, Tokyo", + "Hello, Seattle", + "Hello, Cairo" + ], + "createdTime": "2023-02-13T23:02:21Z", + "lastUpdatedTime": "2023-02-13T23:02:25Z" + } + ``` +++7. To stop debugging, in Visual Studio Code, select Shift+F5. ++After you verify that the function runs correctly on your local computer, it's time to publish the project to Azure. 
[!INCLUDE [functions-create-function-app-vs-code](../../../includes/functions-sign-in-vs-code.md)] After you've verified that the function runs correctly on your local computer, i ## Test your function in Azure ::: zone pivot="nodejs-model-v4"+ > [!NOTE]-> To use the V4 node programming model, make sure your app is running on at least version 4.25 of the Azure Functions runtime. +> To use the v4 node programming model, make sure that your app is running on at least version 4.25 of the Azure Functions runtime. + ::: zone-end ::: zone pivot="nodejs-model-v3"-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator` ++1. Copy the URL of the HTTP trigger from the output panel. The URL that calls your HTTP-triggered function should be in this format: ++ `https://<functionappname>.azurewebsites.net/api/orchestrators/HelloOrchestrator` + ::: zone-end+ ::: zone pivot="nodejs-model-v4"-1. Copy the URL of the HTTP trigger from the **Output** panel. The URL that calls your HTTP-triggered function should be in this format: `https://<functionappname>.azurewebsites.net/api/orchestrators/helloOrchestrator` ++1. Copy the URL of the HTTP trigger from the output panel. The URL that calls your HTTP-triggered function should be in this format: ++ `https://<functionappname>.azurewebsites.net/api/orchestrators/helloOrchestrator` + ::: zone-end -2. Paste this new URL for the HTTP request into your browser's address bar. You should get the same status response as before when using the published app. +2. Paste the new URL for the HTTP request in your browser's address bar. When you use the published app, you can expect to get the same status response that you got when you tested locally. ++The TypeScript Durable Functions app that you created and published by using Visual Studio Code is ready to use. ++## Clean up resources -## Next steps +If you no longer need the resources that you created to complete the quickstart, to avoid related costs in your Azure subscription, [delete the resource group](/azure/azure-resource-manager/management/delete-resource-group?tabs=azure-portal#delete-resource-group) and all related resources. -You have used Visual Studio Code to create and publish a JavaScript durable function app. +## Related content -> [!div class="nextstepaction"] -> [Learn about common durable function patterns](durable-functions-overview.md#application-patterns) +* Learn about [common Durable Functions app patterns](durable-functions-overview.md#application-patterns). |
azure-functions | Flex Consumption Plan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-plan.md | Title: Azure Functions Flex Consumption plan hosting description: Running your function code in the Azure Functions Flex Consumption plan provides virtual network integration, dynamic scale (to zero), and reduced cold starts. Previously updated : 06/15/2024 Last updated : 07/26/2024 # Customer intent: As a developer, I want to understand the benefits of using the Flex Consumption plan so I can get the scalability benefits of Azure Functions without having to pay for resources I don't need. Keep these other considerations in mind when using Flex Consumption plan during + Continuous deployment using Azure DevOps Tasks (`AzureFunctionApp@2`) + Continuous deployment using GitHub Actions (`functions-action@v1`) + **Scale**: The lowest maximum scale in preview is `40`. The highest currently supported value is `1000`.-+ **Authorization**: EasyAuth isn't currently supported. Unauthenticated callers currently aren't blocked when EasyAuth is enabled in a Flex Consumption plan app. -+ **CORS**: [Cross-origin resource sharing (CORS) settings](functions-how-to-use-azure-function-app-settings.md#cors) are currently ignored for Flex Consumption apps. + **Managed dependencies**: [Managed dependencies in PowerShell](functions-reference-powershell.md#dependency-management) aren't supported by Flex Consumption. You must instead [define your own custom modules](functions-reference-powershell.md#custom-modules). ## Related articles |
azure-functions | Functions Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scenarios.md | Functions is often the compute component in a serverless workflow topology, such ::: zone pivot="programming-language-csharp" + Tutorial: [Create a function to integrate with Azure Logic Apps](./functions-twitter-email.md)-+ Quickstart: [Create your first durable function in Azure using C#](./durable/durable-functions-create-first-csharp.md) ++ Quickstart: [Create your first durable function in Azure using C#](./durable/durable-functions-isolated-create-first-csharp.md) + Training: [Deploy serverless APIs with Azure Functions, Logic Apps, and Azure SQL Database](/training/modules/deploy-backend-apis/) ::: zone-end |
azure-government | Documentation Government Csp List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md | Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Apollo Information Systems Corp.](https://www.apollo-is.com/)| |[Approved Contact, LLC](https://approvedcontact.com)| |[Apps4Rent](https://www.apps4rent.com)|-|[Apptus](https://apttus.com)| +|[Apptus](https://conga.com/)| |[ArcherPoint, Inc.](https://www.archerpoint.com)| |[Arctic IT](https://arcticit.com/)| |[Ardalyst Federal LLC](https://ardalyst.com)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Sieena, Inc.](https://siennatech.com/)| |[Simeon Networks](https://simeonnetworks.com)| |[SimpleHelix](https://simplehelix.com/)|-|[Simons Advisors, LLC](https://simonsadvisors.com/)| |[Sirius Computer Solutions, Inc.](https://www.siriuscom.com/)| |[SKY SOLUTIONS LLC](https://www.skysolutions.com/)| |[SKY Terra Technologies LLC](https://www.skyterratech.com)| |
azure-linux | Tutorial Azure Linux Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/tutorial-azure-linux-migration.md | There are several settings that can block the OS SKU migration request. To ensur * We recommend that you ensure your workloads configure and run successfully on the Azure Linux container host before attempting to use the OS SKU migration feature by [deploying an Azure Linux cluster](./quickstart-azure-cli.md) in dev/prod and verifying your service remains healthy. * Ensure the migration feature is working for you in test/dev before using the process on a production cluster. * Ensure that your pods have enough [Pod Disruption Budget](../aks/operator-best-practices-scheduler.md#plan-for-availability-using-pod-disruption-budgets) to allow AKS to move pods between VMs during the upgrade.-* You need Azure CLI version [2.61.0](https://learn.microsoft.com/cli/azure/release-notes-azure-cli#may-21-2024) or higher. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). +* You need Azure CLI version [2.61.0](/cli/azure/release-notes-azure-cli#may-21-2024) or higher. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). * If you are using Terraform, you must have [v3.111.0](https://github.com/hashicorp/terraform-provider-azurerm/releases/tag/v3.111.0) or greater of the AzureRM Terraform module. ### [Azure CLI](#tab/azure-cli) |
azure-monitor | Azure Monitor Agent Network Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-network-configuration.md | Azure Virtual network service tags can be used to define network access controls ## Firewall endpoints The following table provides the endpoints that firewalls need to provide access to for different clouds. Each is an outbound connection to port 443. +> [!IMPORTANT] +> For all endpoints, HTTPS inspection must be disabled. + |Endpoint |Purpose | Example | |:--|:--|:--| | `global.handler.control.monitor.azure.com` |Access control service - | |
azure-monitor | Azure Monitor Agent Supported Operating Systems | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-supported-operating-systems.md | This article lists the operating systems supported by [Azure Monitor Agent](./az | CentOS Linux 8 | ✓ | ✓ | | CentOS Linux 7 | ✓<sup>2</sup> | ✓ | | CBL-Mariner 2.0 | ✓<sup>2,3</sup> | |+| Debian 12 | ✓ | | | Debian 11 | ✓<sup>2</sup> | ✓ | | Debian 10 | ✓ | ✓ | | Debian 9 | ✓ | ✓ | |
azure-monitor | Data Collection Log Json | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-log-json.md | Use the following ARM template to create a DCR for collecting text log files. In "type": "string", "metadata": { "description": "Unique name for the DCR. "- }, + } }, "location": { "type": "string", "metadata": { "description": "Region for the DCR. Must be the same location as the Log Analytics workspace. "+ } }, "filePatterns": { "type": "string", "metadata": { "description": "Path on the local disk for the log file to collect. May include wildcards.Enter multiple file patterns separated by commas (AMA version 1.26 or higher required for multiple file patterns on Linux)."- }, + } }, "tableName": { "type": "string", "metadata": { "description": "Name of destination table in your Log Analytics workspace. "- }, + } }, "workspaceResourceId": { "type": "string", "metadata": { "description": "Resource ID of the Log Analytics workspace with the target table."- }, - } + } + }, + "dataCollectionEndpointResourceId": { + "type": "string", + "metadata": { "description": "Resource ID of the Data Collection Endpoint to be used with this rule." + } + } }, "variables": {- "tableOutputStream": "['Custom-',concat(parameters('tableName'))]" + "tableOutputStream": "[concat('Custom-', parameters('tableName'))]]" }, "resources": [ { Use the following ARM template to create a DCR for collecting text log files. In }, { "name": "FilePath",- "type": "String" + "type": "string" }, { "name": "MyStringColumn", Use the following ARM template to create a DCR for collecting text log files. In }, { "name": "MyBooleanColumn",- "type": "bool" + "type": "boolean" } ] } Use the following ARM template to create a DCR for collecting text log files. In "dataFlows": [ { "streams": [- "Custom-Json-dataSource" + "Custom-JSONLog-stream" ], "destinations": [ "workspace" Use the following ARM template to create a DCR for collecting text log files. In "outputStream": "[variables('tableOutputStream')]" } ]+ "dataCollectionEndpointId" : "[parameters('dataCollectionEndpointResourceId')]" } } ] |
azure-monitor | Monitor Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md | Title: Monitor applications running on Azure Functions with Application Insights description: Azure Monitor integrates with your Azure Functions application, allowing performance monitoring and quickly identifying problems. Previously updated : 07/10/2023 Last updated : 08/24/2024 On the function app **Overview** pane, go to **Application Insights**. Under **C > [!div class="mx-imgBorder"] :::image type="content" source="./media//functions/collection-level.jpg" lightbox="./media//functions/collection-level.jpg" alt-text="Screenshot that shows the how to enable the AppInsights Java Agent."::: +### Configuration ++To configure this feature for an Azure Function App not on a consumption plan, add environment variables in App settings. To review available configurations, see [Configuration options: Azure Monitor Application Insights for Java](../app/java-standalone-config.md). ++For Azure Functions on a consumption plan, the available configuration options are limited to WEBSITE_SITE_NAME, APPLICATIONINSIGHTS_INSTRUMENTATION_LOGGING_LEVEL, and APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL in order to take advantage of the consumption plan warmup pool. For more configurations on a consumption plan Function, deploy your own agent and see [Distributed Tracing for Java Functions](https://github.com/Azure/azure-functions-java-worker/wiki/Distributed-Tracing-for-Java-Azure-Functions#customize-distribute-agent). ++Deploying your own agent results in a longer cold start implication for consumption plan Functions. + ### Troubleshooting Your Java functions might have slow startup times if you adopted this feature before February 2023. From the function app **Overview** pane, go to **Configuration** in the left-hand side navigation menu. Then select **Application settings** and use the following steps to fix the issue. To view more data from your Node Azure Functions applications than is [collected ## Distributed tracing for Python function apps -To collect telemetry from services such as Requests, urllib3, httpx, PsycoPG2, and more, use the [Azure Monitor OpenTelemetry Distro](./opentelemetry-enable.md?tabs=python). Tracked incoming requests coming into your Python application hosted in Azure Functions will not be automatically correlated with telemetry being tracked within it. You can manually achieve trace correlation by extract the TraceContext directly as shown below: +To collect telemetry from services such as Requests, urllib3, `httpx`, PsycoPG2, and more, use the [Azure Monitor OpenTelemetry Distro](./opentelemetry-enable.md?tabs=python). Tracked incoming requests coming into your Python application hosted in Azure Functions aren't automatically correlated with telemetry being tracked within it. You can manually achieve trace correlation by extracting the TraceContext directly as follows: <!-- TODO: Remove after Azure Functions implements this automatically --> |
azure-monitor | Data Platform Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md | The diagram and table below compare the Analytics, Basic, and Auxiliary table pl :::image type="content" source="media/data-platform-logs/azure-monitor-logs-data-plans.png" lightbox="media/data-platform-logs/azure-monitor-logs-data-plans.png" alt-text="Diagram that presents an overview of the capabilities provided by the Analytics, Basic, and Auxiliary table plans."::: -| | Analytics | Basic | Auxiliary (Preview) | +| Features | Analytics | Basic | Auxiliary (Preview) | | | | | | | Best for | High-value data used for continuous monitoring, real-time detection, and performance analytics. | Medium-touch data needed for troubleshooting and incident response. | Low-touch data, such as verbose logs, and data required for auditing and compliance. | | Supported [table types](../logs/manage-logs-tables.md) | All table types | [Azure tables that support Basic logs](basic-logs-azure-tables.md) and DCR-based custom tables | DCR-based custom tables | |
azure-resource-manager | Monitor Resource Manager Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/monitor-resource-manager-reference.md | + + Title: Monitoring data reference for Azure Resource Manager +description: This article contains important reference material you need when you monitor Azure Resource Manager. Last updated : 07/25/2024++++++++# Azure Resource Manager monitoring data reference +++See [Monitor Azure Resource Manager](monitor-resource-manager.md) for details on the data you can collect for Azure Resource Manager and how to use it. +++### Supported metrics for microsoft.resources/subscriptions ++The following table lists the metrics available for the microsoft.resources/subscriptions resource type. ++++++| Dimension name | Description | +|:-- |:-- | +| IsCustomerOriginated | | +| Microsoft.SubscriptionId | | +| Method | The HTTP method used in the request made to Azure Resource Manager. Possible values are: <br/>- GET<br/>- HEAD<br/>- PUT<br/>- POST<br/>- PATCH<br/>- DELETE | +| Namespace | The namespace for the Resource Provider, in all caps, like *MICROSOFT.COMPUTE*. | +| RequestRegion | The Azure Resource Manager region where your control plane requests land, like *EastUS2*. This region isn't the resource's location. | +| ResourceType | Any resource type in Azure that you created or sent a request to, in all caps, like *VIRTUALMACHINES*. | +| StatusCode | Response type from Azure Resource Manager for your control plane request. Possible values are (but not limited to): <br/>- 0<br/>- 200<br/>- 201<br/>- 400<br/>- 404<br/>- 429<br/>- 500<br/>- 502 | +| StatusCodeClass | The class for the status code returned from Azure Resource Manager. Possible values are: <br/>- 2xx<br/>- 4xx<br/>- 5xx | +++- [Microsoft.Resources resource provider operations](/azure/role-based-access-control/resource-provider-operations#management-and-governance) ++## Related content ++- See [Monitor Azure Resource Manager](monitor-resource-manager.md) for a description of monitoring Resource Manager. +- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. |
azure-resource-manager | Monitor Resource Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/monitor-resource-manager.md | + + Title: Monitor Azure Resource Manager +description: Start here to learn how to monitor Azure Resource Manager. Learn about Traffic and latency observability for subscription-level control plane requests. Last updated : 07/25/2024++++++++# Monitor Azure Resource Manager ++++For more information, see [Monitor Azure Monitor Resource Group insights](resource-group-insights.md). +++For more information about the resource types for Resource Manager, see [Azure Resource Manager monitoring data reference](monitor-resource-manager-reference.md). ++++For a list of available metrics for Resource Manager, see [Azure Resource Manager monitoring data reference](monitor-resource-manager-reference.md#metrics). ++When you create and manage resources in Azure, your requests are orchestrated through Azure's [control plane](./control-plane-and-data-plane.md), Azure Resource Manager. This article describes how to monitor the volume and latency of control plane requests made to Azure. ++With these metrics, you can observe traffic and latency for control plane requests throughout your subscriptions. For example, you can now figure out when your requests have been throttled by [examining throttled requests](#examining-throttled-requests). Determine if they failed by filtering for specific status codes and [examining server errors](#examining-server-errors). ++The metrics are available for up to three months (93 days) and only track synchronous requests. For a scenario like a virtual machine creation, the metrics don't represent the performance or reliability of the long running asynchronous operation. ++### Accessing Azure Resource Manager metrics ++You can access control plane metrics by using the Azure Monitor REST APIs, SDKs, and the Azure portal by selecting the **Azure Resource Manager** metric. For an overview on Azure Monitor, see [Azure Monitor Metrics](../../azure-monitor/data-platform.md). ++There's no opt-in or sign-up process to access control plane metrics. ++For guidance on how to retrieve a bearer token and make requests to Azure, see [Azure REST API reference](/rest/api/azure/#create-the-request). ++### Metric definition ++The definition for Azure Resource Manager metrics in Azure Monitor is only accessible through the 2017-12-01-preview API version. To retrieve the definition, you can run the following snippet. Replace `00000000-0000-0000-0000-000000000000` with your subscription ID. ++```bash +curl --location --request GET 'https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/microsoft.insights/metricDefinitions?api-version=2017-12-01-preview&metricnamespace=microsoft.resources/subscriptions' \ +--header 'Authorization: bearer {{bearerToken}}' +``` ++This snippet returns the definition for the metrics schema. Notably, this schema includes [the dimensions you can filter on with the Monitor API](monitor-resource-manager-reference.md#metric-dimensions). ++### Metrics examples ++Here are some scenarios that can help you explore Azure Resource Manager metrics. 
++#### Query traffic and latency control plane metrics with Azure portal ++First, navigate to the Azure Monitor page within the [portal](https://portal.azure.com): +++After selecting **Explore Metrics**, select a single subscription and then select the **Azure Resource Manager** metric: +++Then, after selecting **Apply**, you can visualize your Traffic or Latency control plane metrics with custom filtering and splitting: +++#### Query traffic and latency control plane metrics with REST API ++After you authenticate with Azure, you can make a request to retrieve control plane metrics for your subscription. In the script, replace `00000000-0000-0000-0000-000000000000` with your subscription ID. The script retrieves the average request latency, in seconds, and the total request count for the two day timespan, broken down by one day intervals: ++```bash +curl --location --request GET "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/microsoft.insights/metrics?api-version=2021-05-01&interval=P1D&metricnames=Latency&metricnamespace=microsoft.resources/subscriptions&region=global&aggregation=average,count&timespan=2021-11-01T00:00:00Z/2021-11-03T00:00:00Z" \ +--header "Authorization: bearer {{bearerToken}}" +``` ++For Azure Resource Manager metrics, you can retrieve the traffic count by using the Latency metric and including the 'count' aggregation. You see a JSON response for the request: ++```Json +{ + "cost": 5758, + "timespan": "2021-11-01T00:00:00Z/2021-11-03T00:00:00Z", + "interval": "P1D", + "value": [ + { + "id": "subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Insights/metrics/Latency", + "type": "Microsoft.Insights/metrics", + "name": { + "value": "Latency", + "localizedValue": "Latency" + }, + "displayDescription": "Latency data for all requests to Azure Resource Manager", + "unit": "Seconds", + "timeseries": [ + { + "metadatavalues": [], + "data": [ + { + "timeStamp": "2021-11-01T00:00:00Z", + "count": 1406.0, + "average": 0.19345163584637273 + }, + { + "timeStamp": "2021-11-02T00:00:00Z", + "count": 1517.0, + "average": 0.28294792353328935 + } + ] + } + ], + "errorCode": "Success" + } + ], + "namespace": "microsoft.resources/subscriptions", + "resourceregion": "global" +} +``` ++If you want to retrieve only the traffic count, then you can use the Traffic metric with the `count` aggregation: ++```bash +curl --location --request GET 'https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/microsoft.insights/metrics?api-version=2021-05-01&interval=P1D&metricnames=Traffic&metricnamespace=microsoft.resources/subscriptions&region=global&aggregation=count&timespan=2021-11-01T00:00:00Z/2021-11-03T00:00:00Z' \ +--header 'Authorization: bearer {{bearerToken}}' +``` ++The response for the request is: ++```Json +{ + "cost": 2879, + "timespan": "2021-11-01T00:00:00Z/2021-11-03T00:00:00Z", + "interval": "P1D", + "value": [ + { + "id": "subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Insights/metrics/Traffic", + "type": "Microsoft.Insights/metrics", + "name": { + "value": "Traffic", + "localizedValue": "Traffic" + }, + "displayDescription": "Traffic data for all requests to Azure Resource Manager", + "unit": "Count", + "timeseries": [ + { + "metadatavalues": [], + "data": [ + { + "timeStamp": "2021-11-01T00:00:00Z", + "count": 1406.0 + }, + { + "timeStamp": "2021-11-02T00:00:00Z", + "count": 1517.0 + } + ] + } + ], + "errorCode": "Success" + } + ], + "namespace": 
"microsoft.resources/subscriptions", + "resourceregion": "global" +} +``` ++For the metrics supporting dimensions, you need to specify the dimension value to see the corresponding metrics values. For example, if you want to focus on the **Latency** for successful requests to Resource Manager, you need to filter the **StatusCodeClass** dimension with **2XX**. ++If you want to look at the number of requests made in your subscription for Networking resources, like Virtual Networks and Load Balancers, you would need to filter the **Namespace** dimension for **MICROSOFT.NETWORK**. ++#### Examining Throttled Requests ++To view only your throttled requests, you need to filter for 429 status code responses only. For REST API calls, filtering is accomplished by using the [$filter property](/rest/api/monitor/Metrics/List#uri-parameters) and the StatusCode dimension by appending: `$filter=StatusCode eq '429'` as seen at the end of the request in the following snippet: ++```bash +curl --location --request GET 'https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/microsoft.insights/metrics?api-version=2021-05-01&interval=P1D&metricnames=Latency&metricnamespace=microsoft.resources/subscriptions&region=global&aggregation=count,average&timespan=2021-11-01T00:00:00Z/2021-11-03T00:00:00Z&$filter=StatusCode%20eq%20%27429%27' \ +--header 'Authorization: bearer {{bearerToken}}' +``` ++You can also filter directly in portal: ++#### Examining Server Errors ++Similar to looking at throttled requests, you view *all* requests that returned a server error response code by filtering 5xx responses only. For REST API calls, filtering is accomplished by using the [$filter property](/rest/api/monitor/Metrics/List#uri-parameters) and the StatusCodeClass dimension by appending: $filter=StatusCodeClass eq '5xx' as seen at the end of the request in the following snippet: ++```bash +curl --location --request GET 'https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/providers/microsoft.insights/metrics?api-version=2021-05-01&interval=P1D&metricnames=Latency&metricnamespace=microsoft.resources/subscriptions&region=global&aggregation=count,average&timespan=2021-11-01T00:00:00Z/2021-11-03T00:00:00Z&$filter=StatusCodeClass%20eq%20%275xx%27' \ +--header 'Authorization: bearer {{bearerToken}}' +``` ++You can also accomplish generic server errors filtering within portal by setting the filter property to `StatusCodeClass` and the value to `5xx`, similar to what was done in the throttling example. +++++++++### Resource Manager alert rules ++You can set alerts for any metric, log entry, or activity log entry listed in the [Azure Resource Manager monitoring data reference](monitor-resource-manager-reference.md). +++## Related content ++- See [Azure Resource Manager monitoring data reference](monitor-resource-manager-reference.md) for a reference of the metrics, logs, and other important values created for Resource Manager. +- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources. |
azure-signalr | Signalr Concept Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-azure-functions.md | Using Azure Functions to integrate with Azure Cosmos DB is an example of utilizi ### Authentication and users -SignalR Service allows you to broadcast messages to all or a subset of clients, such as those belonging to a single user. You can combine the SignalR Service bindings for Azure Functions with App Service authentication to authenticate users with providers such as Microsoft Entra ID, Facebook, and Twitter. You can then send messages directly to these authenticated users. +SignalR Service allows you to broadcast messages to all or a subset of clients, such as those belonging to a single user. You can combine the SignalR Service bindings for Azure Functions with App Service authentication to authenticate users with providers such as Microsoft Entra ID, Facebook, and X. You can then send messages directly to these authenticated users. ## Next steps |
azure-signalr | Signalr Concept Serverless Development Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-serverless-development-config.md | Configure your SignalR clients to use the API Management URL. ### Using App Service Authentication -Azure Functions has built-in authentication, supporting popular providers such as Facebook, Twitter, Microsoft Account, Google, and Microsoft Entra ID. This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that is authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID. +Azure Functions has built-in authentication, supporting popular providers such as Facebook, X, Microsoft Account, Google, and Microsoft Entra ID. This feature can be integrated with the `SignalRConnectionInfo` binding to create connections to Azure SignalR Service that is authenticated to a user ID. Your application can send messages using the `SignalR` output binding that are targeted to that user ID. In the Azure portal, in your Function app's _Platform features_ tab, open the _Authentication/authorization_ settings window. Follow the documentation for [App Service Authentication](../app-service/overview-authentication-authorization.md) to configure authentication using an identity provider of your choice. |
azure-signalr | Signalr Howto Configure Application Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-configure-application-firewall.md | -# Application Firewall for Azure SignalR Service +# Application Firewall (Preview) for Azure SignalR Service The Application Firewall provides sophisticated control over client connections in a distributed system. Before diving into its functionality and setup, let's clarify what the Application Firewall does not do: |
azure-signalr | Signalr Tutorial Authenticate Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-tutorial-authenticate-azure-functions.md | The `--publish-local-settings` option publishes your local settings from the _lo ### Enable App Service authentication -Azure Functions supports authentication with Microsoft Entra ID, Facebook, Twitter, Microsoft account, and Google. You'll use Microsoft as the identity provider for this tutorial. +Azure Functions supports authentication with Microsoft Entra ID, Facebook, X, Microsoft account, and Google. You'll use Microsoft as the identity provider for this tutorial. 1. In the Azure portal, go to the resource page of your function app. 1. Select **Settings** > **Authentication**. For more information about the supported identity providers, see the following a - [Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md) - [Facebook](../app-service/configure-authentication-provider-facebook.md)-- [Twitter](../app-service/configure-authentication-provider-twitter.md)+- [X](../app-service/configure-authentication-provider-twitter.md) - [Microsoft account](../app-service/configure-authentication-provider-microsoft.md) - [Google](../app-service/configure-authentication-provider-google.md) |
azure-vmware | Extended Security Updates Windows Sql Server | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/extended-security-updates-windows-sql-server.md | To find the SQL Server configuration from the Azure portal: 1. In the Azure VMware Solution portal, go to **vCenter Server Inventory** and **Virtual Machines** by clicking through one of the Azure Arc-enabled VMs. The **Machine-Azure Arc (AVS)** page appears. 1. On the left pane, under **Operations**, select **SQL Server Configuration**.-1. Follow the steps in the section [Configure SQL Server enabled by Azure Arc - Modify SQL Server configuration](https://learn.microsoft.com/sql/sql-server/azure-arc/manage-configuration?view=sql-server-ver16&tabs=azure#modify-sql-server-configuration). This section also provides syntax to configure by using Azure PowerShell or the Azure CLI. +1. Follow the steps in the section [Configure SQL Server enabled by Azure Arc - Modify SQL Server configuration](/sql/sql-server/azure-arc/manage-configuration?view=sql-server-ver16&tabs=azure#modify-sql-server-configuration). This section also provides syntax to configure by using Azure PowerShell or the Azure CLI. #### View ESU subscription status For machines that run SQL Server where guest management is enabled, the Azure Ex - Use Azure Resource Graph queries: - - You can use the query [List Arc-enabled SQL Server instances subscribed to ESU](https://learn.microsoft.com/sql/sql-server/azure-arc/manage-configuration?view=sql-server-ver16&tabs=azure&branch=main#list-arc-enabled-sql-server-instances-subscribed-to-esu) as an example to show how you can view eligible SQL Server ESU instances and their ESU subscription status. + - You can use the query [List Arc-enabled SQL Server instances subscribed to ESU](/sql/sql-server/azure-arc/manage-configuration?view=sql-server-ver16&tabs=azure&branch=main#list-arc-enabled-sql-server-instances-subscribed-to-esu) as an example to show how you can view eligible SQL Server ESU instances and their ESU subscription status. ### Windows Server When you contact Support, raise the ticket under the Azure VMware Solution categ - Customer name and tenant ID - Number of VMs you want to register - OS versions-- ESU year of coverage (for example, Year 1, Year 2, or Year 3). See [ESU Availability and End Dates](https://learn.microsoft.com/lifecycle/faq/extended-security-updates?msclkid=65927660d02011ecb3792e8849989799#esu-availability-and-end-dates) for ESU End Date and Year. The support ticket provides you with ESU keys for one year. You'll need to raise a new support request for other years. It's recommended to raise a new request as your current ESU End Date Year date is approaching.+- ESU year of coverage (for example, Year 1, Year 2, or Year 3). See [ESU Availability and End Dates](/lifecycle/faq/extended-security-updates?msclkid=65927660d02011ecb3792e8849989799#esu-availability-and-end-dates) for ESU End Date and Year. The support ticket provides you with ESU keys for one year. You'll need to raise a new support request for other years. It's recommended to raise a new request as your current ESU End Date Year date is approaching. > [!WARNING] > If you create ESU licenses for Windows through Azure Arc, you're charged for the ESUs. |
azure-vmware | Migrate Sql Server Always On Availability Group | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-always-on-availability-group.md | For details about configuring and managing the quorum, see [Failover Clustering - [Microsoft SQL Server 2022 Documentation](/sql/sql-server/) - [Windows Server Technical Documentation](/windows-server/) - [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)-- [Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf) - [VMware KB 100 2951 ΓÇô Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951) - [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf) - [Architecting Microsoft SQL Server on VMware vSphere ΓÇô Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf) |
azure-vmware | Migrate Sql Server Failover Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-failover-cluster.md | Check the connectivity to SQL Server from other systems and applications in your - [Microsoft SQL Server 2022 Documentation](/sql/sql-server/?view=sql-server-ver16&preserve-view=true) - [Windows Server Technical Documentation](/windows-server/) - [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)-- [Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf) - [VMware KB 100 2951 ΓÇô Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951) - [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf) - [Architecting Microsoft SQL Server on VMware vSphere ΓÇô Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf) |
azure-vmware | Migrate Sql Server Standalone Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/migrate-sql-server-standalone-cluster.md | Check the connectivity to SQL Server from other systems and applications in your - [Microsoft SQL Server 2022 Documentation](/sql/sql-server/?view=sql-server-ver16&preserve-view=true) - [Windows Server Technical Documentation](/windows-server/) - [Planning Highly Available, Mission Critical SQL Server Deployments with VMware vSphere](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/vmware-vsphere-highly-available-mission-critical-sql-server-deployments.pdf)-- [Microsoft SQL Server on VMware vSphere Availability and Recovery Options](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-availability-and-recovery-options.pdf) - [VMware KB 100 2951 ΓÇô Tips for configuring Microsoft SQL Server in a virtual machine](https://kb.vmware.com/s/article/1002951) - [Microsoft SQL Server 2019 in VMware vSphere 7.0 Performance Study](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/vsphere7-sql-server-perf.pdf) - [Architecting Microsoft SQL Server on VMware vSphere ΓÇô Best Practices Guide](https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/solutions/sql-server-on-vmware-best-practices-guide.pdf) |
azure-web-pubsub | Howto Configure Application Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-configure-application-firewall.md | -# Application Firewall for Azure Web PubSub Service +# Application Firewall (Preview) for Azure Web PubSub Service The Application Firewall provides sophisticated control over client connections in a distributed system. Before diving into its functionality and setup, let's clarify what the Application Firewall does not do: |
azure-web-pubsub | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/overview.md | Title: What is Azure Web PubSub service? -description: Better understand what typical use case scenarios to use Azure Web PubSub, and learn the key benefits of Azure Web PubSub. --+description: Better understand what typical use cases and app scenarios Azure Web PubSub service enables, and learn the key benefits of the service. ++ Previously updated : 07/12/2024 Last updated : 07/26/2024 # What is Azure Web PubSub service? -The Azure Web PubSub Service makes it easy to build real-time messaging web applications using WebSockets and the publish-subscribe pattern. This real-time functionality allows publishing content updates between server and connected clients (for example, a single page web application or mobile application). The clients don't need to poll for the latest updates, or submit new HTTP requests for updates. +Azure Web PubSub Service makes it easy to build web applications where server and clients need to exchange data in real-time. Real-time data exchange is the bedrock of certain time-sensitive apps developers build and maintain. Developers have used the service in a variety of applications and industries, for exmaple, in chat apps, real-time dashboards, multi-player games, online auctions, multi-user collaborative apps, location tracking, notifications, and more. -This article provides an overview of the Azure Web PubSub service. +With the recent surge in interest in AI, Web PubSub has become an invaluable tool to developers building AI-enabled applications for token streaming. The service is battle-tested to scale to tens of millions of concurrent connections and offers ultra-low latency. ++When an app's usage is small, developers typically opt for a polling mechanism to provide real-time communication between server and clients - clients send repeated HTTP requests to server over a time interval. However, developers often report that while polling mechanism is straightforward to implement, it suffers three important drawbacks. +- Outdated data. +- Inconsistent data. +- Wasted bandwidth and compute resources. ++These drawbacks are the primary motivations that drive developers to look for alternatives. This article provides an overview of Azure Web PubSub service and how developers can use it to build real-time communication channel fast and at scale. ## What is Azure Web PubSub service used for? -Any scenario that requires real-time publish-subscribe messaging between the server and clients or among clients, can use the Azure Web PubSub service. Traditional real-time features that often require polling from the server or submitting HTTP requests, can also use the Azure Web PubSub service. +Any app scenario where updates at the data resource need to be delivered to other components across network can benefit from using Azure Web PubSub. As the name suggests, the service facilities the communication between a publisher and subscribers. A publisher is a component that publishes data updates. A subscriber is a component that subscribes to data updates. -The Azure Web PubSub service can be used in any application type that requires real-time content updates. We list some examples that are good to use the Azure Web PubSub service: +Azure Web PubSub service is used in a multitude of industries and app scenarios where data is time-sensitive. Here's a partial list of some common use cases. 
-* **High frequency data updates:** gaming, voting, polling, auction. -* **Live dashboards and monitoring:** company dashboard, financial market data, instant sales update, multi-player game leader board, and IoT monitoring. -* **Cross-platform live chat:** live chat room, chat bot, on-line customer support, real-time shopping assistant, messenger, in-game chat, and so on. -* **Real-time location on map:** logistic tracking, delivery status tracking, transportation status updates, GPS apps. -* **Real-time targeted ads:** personalized real-time push ads and offers, interactive ads. -* **Collaborative apps:** coauthoring, whiteboard apps and team meeting software. -* **Push instant notifications:** social network, email, game, travel alert. -* **Real-time broadcasting:** live audio/video broadcasting, live captioning, translating, events/news broadcasting. -* **IoT and connected devices:** real-time IoT metrics, remote control, real-time status, and location tracking. -* **Automation:** real-time trigger from upstream events. +|Use case |Example applications | +|-|-| +|High frequency data updates | Multi-player games, social media voting, opinion polling, online auctioning | +|Live dashboards and monitoring | Company dashboard, financial market data, instant sales update, game leaderboard, IoT monitoring | +|Cross-platform chat| Live chat room, AI-assisted chatbot, online customer support, real-time shopping assistant, messenger, in-game chat | +|Location tracking | Vehicle asset tracking, delivery status tracking, transportation status updates, ride-hailing apps | +|Multi-user collaborative apps | coauthoring, collaborative whiteboard and team meeting apps | +|Cross-platform push notifications | Social media, email, game status, travel alert | +|IoT and connected devices | Real-time IoT metrics, managing charging network for electric vehicles, live concert engagement | +|Automation | Real-time trigger from upstream events | ## What are the benefits using Azure Web PubSub service? **Built-in support for large-scale client connections and highly available architectures:** -The Azure Web PubSub service is designed for large-scale real-time applications. The service allows multiple instances to work together and scale to millions of client connections. Meanwhile, it also supports multiple global regions for sharding, high availability, or disaster recovery purposes. +Azure Web PubSub service is designed for large-scale, real-time applications. With a single Web PubSub resource, it can scale to 1 million concurrent connections, which is sufficient for most cases. When multiple resources are used together, the service allows you to scale beyond 1 million concurrent connections. Meanwhile, it also supports multiple global regions for sharding, high availability, or disaster recovery purposes. **Support for a wide variety of client SDKs and programming languages:** -The Azure Web PubSub service works with a broad range of clients. These clients include web and mobile browsers, desktop apps, mobile apps, server processes, IoT devices, and game consoles. Since this service supports the standard WebSocket connection with publish-subscribe pattern, it's easy to use any standard WebSocket client SDK in different languages with this service. +Azure Web PubSub service works with a broad range of clients. These clients include web and mobile browsers, desktop apps, mobile apps, server processes, IoT devices, and game consoles. 
Server and client SDKs are available for mainstream programming languages, C#, Java, JavaScript, and Python, making it easy to consume the APIs offered by the service. Since the service supports standard WebSocket protocol, you can use any REST capable programming languages to call Web PubSub's APIs directly if SDKs aren't available in your programming language of choice. **Offer rich APIs for different messaging patterns:** -Azure Web PubSub service is a bi-directional messaging service that allows different messaging patterns among server and clients, for example: --* The server sends messages to individual clients, all clients, or groups of clients that are associated with a specific user or categorized into arbitrary groups. -* The client sends messages to clients that belong to an arbitrary group. -* The clients send messages to server. +Azure Web PubSub service offers real-time, bi-directional communication between server and clients for data exchange. The service offers features to allow you to finely control how a message should be delivered and to whom. Here's a list of supported messaging patterns. +|Messaging pattern |Details | +|-|--| +|Broadcast to all clients | A server sends data updates to all connected clients. | +|Broadcast to a subset of clients | A server sends data updates to a subset of clients arbitrarily defined by you. | +|Broadcast to all clients owned by a specific human user | A human user can have multiple browser tabs or device open, you can broadcast to the user so that all the web clients used by the user are synchronized. | +|Client pub/sub | A client sends messages to clients that are in a group arbitrarily defined by you without your server's involvement.| +|Clients to server | Clients send messages to server at low latency. | ## How to use the Azure Web PubSub service? There are many different ways to program with Azure Web PubSub service, as some of the samples listed here: - **Build serverless real-time applications**: Use Azure Functions' integration with Azure Web PubSub service to build serverless real-time applications in languages such as JavaScript, C#, Java and Python. -- **Use WebSocket subprotocol to do client-side only Pub/Sub** - Azure Web PubSub service provides WebSocket subprotocols to empower authorized clients to publish to other clients in a convenience manner.+- **Use WebSocket subprotocol to do client-side only Pub/Sub** - Azure Web PubSub service provides WebSocket subprotocols to empower authorized clients to publish to other clients in a convenient manner. - **Use provided SDKs to manage the WebSocket connections in self-host app servers** - Azure Web PubSub service provides SDKs in C#, JavaScript, Java and Python to manage the WebSocket connections easily, including broadcast messages to the connections, add connections to some groups, or close the connections, etc. - **Send messages from server to clients via REST API** - Azure Web PubSub service provides REST API to enable applications to post messages to clients connected, in any REST capable programming languages. |
azure-web-pubsub | Quickstart Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md | Here we choose `Microsoft` as identify provider, which uses `x-ms-client-princip - [Microsoft Entra ID](../app-service/configure-authentication-provider-aad.md) - [Facebook](../app-service/configure-authentication-provider-facebook.md) - [Google](../app-service/configure-authentication-provider-google.md)-- [Twitter](../app-service/configure-authentication-provider-twitter.md)+- [X](../app-service/configure-authentication-provider-twitter.md) ## Try the application |
backup | Backup Azure Mars Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-mars-troubleshoot.md | This section explains the process to troubleshoot errors that you might encounte | Causes | Recommended actions | | | | | **Vault credentials aren't valid** <br/> <br/> Vault credential files might be corrupt, might have expired, or they might have a different file extension than `.vaultCredentials`. (For example, they might have been downloaded more than 10 days before the time of registration.) | [Download new credentials](backup-azure-file-folder-backup-faq.yml#where-can-i-download-the-vault-credentials-file-) from the Recovery Services vault on the Azure portal. Then take these steps, as appropriate: <br><br>- If you've already installed and registered MARS, open the Microsoft Azure Backup Agent MMC console. Then select **Register Server** in the **Actions** pane to complete the registration with the new credentials. <br> - If the new installation fails, try reinstalling with the new credentials. <br><br> **Note**: If multiple vault credential files have been downloaded, only the latest file is valid for the next 10 days. We recommend that you download a new vault credential file. <br><br> - To prevent errors during vault registration, ensure that the MARS agent version 2.0.9249.0 or above is installed. If not, we recommend you to install it [from here](https://aka.ms/azurebackup_agent).|-| **Proxy server/firewall is blocking registration** <br/>Or <br/>**No internet connectivity** <br/><br/> If your machine has limited internet access, and you don't ensure the firewall, proxy, and network settings allow access to the FQDNS and public IP addresses, the registration will fail.| Follow these steps:<br/> <br><br>- Work with your IT team to ensure the system has internet connectivity.<br>- If you don't have a proxy server, ensure the proxy option isn't selected when you register the agent. [Check your proxy settings](#verifying-proxy-settings-for-windows).<br>- If you do have a firewall/proxy server, work with your networking team to allow access to the following FQDNs and public IP addresses. Access to all of the URLs and IP addresses listed below uses the HTTPS protocol on port 443.<br/> <br> **URLs**<br> `*.microsoft.com` <br> `*.windowsazure.com` <br> `*.microsoftonline.com` <br> `*.windows.net` <br> `*blob.core.windows.net` <br> `*queue.core.windows.net` <br> `*blob.storage.azure.net`<br><br><br>- If you're a US Government customer, ensure that you have access to the following URLs:<br><br> `www.msftncsi.com` <br> `*.microsoft.com` <br> `*.windowsazure.us` <br> `*.microsoftonline.us` <br> `*.windows.net` <br> `*.usgovcloudapi.net` <br> `*blob.core.windows.net` <br> `*queue.core.windows.net` <br> `*blob.storage.azure.net` <br><br> Try registering again after you complete the preceding troubleshooting steps.<br></br> If your connection is via Azure ExpressRoute, make sure the settings are configured as described in Azure [ExpressRoute support](../backup/backup-support-matrix-mars-agent.md#azure-expressroute-support). <br/> <br/> If you are using the [Entra Tenant Restrictions](https://learn.microsoft.com/entra/identity/enterprise-apps/tenant-restrictions) feature with your proxy, ensure that the tenant id of Recovery Services Vault used to register the MARS agent is added to the list of allowed tenants in the `Restrict-Access-To-Tenants` header. This tenant id is unique per Azure region. 
You can find the tenant id by opening the vault credential file and locating the `<AadTenantId>` element.| +| **Proxy server/firewall is blocking registration** <br/>Or <br/>**No internet connectivity** <br/><br/> If your machine has limited internet access, and you don't ensure the firewall, proxy, and network settings allow access to the FQDNS and public IP addresses, the registration will fail.| Follow these steps:<br/> <br><br>- Work with your IT team to ensure the system has internet connectivity.<br>- If you don't have a proxy server, ensure the proxy option isn't selected when you register the agent. [Check your proxy settings](#verifying-proxy-settings-for-windows).<br>- If you do have a firewall/proxy server, work with your networking team to allow access to the following FQDNs and public IP addresses. Access to all of the URLs and IP addresses listed below uses the HTTPS protocol on port 443.<br/> <br> **URLs**<br> `*.microsoft.com` <br> `*.windowsazure.com` <br> `*.microsoftonline.com` <br> `*.windows.net` <br> `*blob.core.windows.net` <br> `*queue.core.windows.net` <br> `*blob.storage.azure.net`<br><br><br>- If you're a US Government customer, ensure that you have access to the following URLs:<br><br> `www.msftncsi.com` <br> `*.microsoft.com` <br> `*.windowsazure.us` <br> `*.microsoftonline.us` <br> `*.windows.net` <br> `*.usgovcloudapi.net` <br> `*blob.core.windows.net` <br> `*queue.core.windows.net` <br> `*blob.storage.azure.net` <br><br> Try registering again after you complete the preceding troubleshooting steps.<br></br> If your connection is via Azure ExpressRoute, make sure the settings are configured as described in Azure [ExpressRoute support](../backup/backup-support-matrix-mars-agent.md#azure-expressroute-support). <br/> <br/> If you are using the [Entra Tenant Restrictions](/entra/identity/enterprise-apps/tenant-restrictions) feature with your proxy, ensure that the tenant id of Recovery Services Vault used to register the MARS agent is added to the list of allowed tenants in the `Restrict-Access-To-Tenants` header. This tenant id is unique per Azure region. You can find the tenant id by opening the vault credential file and locating the `<AadTenantId>` element.| | **Antivirus software is blocking registration** | If you have antivirus software installed on the server, add the exclusion rules to the antivirus scan for: <br><br> - Every file and folder under the *scratch* and *bin* folder locations - `<InstallPath>\Scratch\*` and `<InstallPath>\Bin\*`. <br> - cbengine.exe | #### Additional recommendations |
batch | Monitor Batch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/monitor-batch.md | Title: Monitor Azure Batch description: Start here to learn how to monitor Azure Batch. Previously updated : 03/28/2024 Last updated : 07/19/2024 You can use the Batch APIs to create list queries for Batch jobs, tasks, compute Or, instead of potentially time-consuming list queries that return detailed information about large collections of tasks or nodes, you can use the [Get Task Counts](/rest/api/batchservice/job/gettaskcounts) and [List Pool Node Counts](/rest/api/batchservice/account/listpoolnodecounts) operations to get counts for Batch tasks and compute nodes. For more information, see [Monitor Batch solutions by counting tasks and nodes by state](batch-get-resource-counts.md). ++### Application Insights + You can integrate Application Insights with your Azure Batch applications to instrument your code with custom metrics and tracing. For a detailed walkthrough of how to add Application Insights to a Batch .NET solution, instrument application code, monitor the application in the Azure portal, and build custom dashboards, see [Monitor and debug an Azure Batch .NET application with Application Insights](monitor-application-insights.md) and accompanying [code sample](https://github.com/Azure/azure-batch-samples/tree/master/CSharp/ArticleProjects/ApplicationInsights). ## Related content |
certification | Edge Secured Core Devices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/edge-secured-core-devices.md | Title: Edge Secured-core certified devices description: List of devices that have passed the Edge Secured-core certifications--++ Previously updated : 01/26/2024 Last updated : 07/22/2024 # Edge Secured-core certified devices -This page contains a list of devices that have successfully passed the Edge Secured-core certification. +This page contains a list of devices that have successfully passed the Edge Secured-core certification. The listed models may not be shipped as secure-by-default. When ordering, please request specifically for Edge Secured-core devices. |Manufacturer|Device Name|OS|Last Updated| ||| |AAEON|[SRG-TG01](https://newdata.aaeon.com.tw/DOWNLOAD/2014%20datasheet/Systems/SRG-TG01.pdf)|Windows 10 IoT Enterprise|2022-06-14|+|Advantech|[ITA-580](https://www.advantech.com/en-eu/products/5130beef-2b81-41f7-a89b-2c43c1f2b6e9/ita-580/mod_bf7b0383-e6b2-49d7-9181-b6fc752e188b)|Windows 10 IoT Enterprise|2024-07-08| |Asus|[PE200U](https://www.asus.com/networking-iot-servers/aiot-industrial-solutions/embedded-computers-edge-ai-systems/pe200u/)|Windows 10 IoT Enterprise|2022-04-20| |Asus|[PN64-E1 vPro](https://www.asus.com/ca-en/displays-desktops/mini-pcs/pn-series/asus-expertcenter-pn64-e1/)|Windows 10 IoT Enterprise|2023-08-08| |Asus|[NUC13L3Hv7](https://www.asus.com/us/displays-desktops/nucs/nuc-mini-pcs/asus-nuc-13-pro/)|Windows 10 IoT Enterprise|2023-04-28| This page contains a list of devices that have successfully passed the Edge Secu |Asus|[NUC12WSKV7](https://www.asus.com/us/displays-desktops/nucs/nuc-kits/asus-nuc-12-pro/)|Windows 10 IoT Enterprise|2022-10-31| |Asus|BELM12HBv716W+CMB1AB|Windows 10 IoT Enterprise|2022-10-25| |Asus|[NUC11TNHv5000](https://www.asus.com/us/displays-desktops/nucs/nuc-kits/nuc-11-pro-kit/)|Windows 10 IoT Enterprise|2022-06-14|-|Lenovo|[ThinkEdge SE30](https://www.lenovo.com/us/en/p/desktops/thinkedge/thinkedge-se30/len102c0004)|Windows 10 IoT Enterprise|2022-04-06| +|Lenovo|[ThinkEdge SE30](https://www.lenovo.com/us/en/p/desktops/thinkedge/thinkedge-se30/len102c0004)|Windows 10 IoT Enterprise|2022-04-06| |
communication-services | Calling Sdk Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md | The Azure Communication Services Calling SDK supports the following streaming co | Limit | Web | Windows/Android/iOS | | - | | -- | | **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video and 1 screen sharing | 1 video + 1 screen sharing |-| **Maximum # of incoming remote streams that can be rendered simultaneously** | 9 videos + 1 screen sharing on desktop browsers*, 4 videos + 1 screen sharing on web mobile browsers | 9 videos + 1 screen sharing | +| **Maximum # of incoming remote streams that can be rendered simultaneously** | 16 videos + 1 screen sharing on desktop browsers*, 4 videos + 1 screen sharing on web mobile browsers | 9 videos + 1 screen sharing | ++ \* Starting from Azure Communication Services Web Calling SDK version [1.16.3](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1163-stable-2023-08-24)-While the Calling SDK doesn't enforce these limits, your users might experience performance degradation if they're exceeded. Use the API of [Optimal Video Count](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#remote-video-quality) to determine how many current incoming video streams your web environment can support. +While the Calling SDK doesn't enforce these limits, your users might experience performance degradation if they're exceeded. Use the API of [Optimal Video Count](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#remote-video-quality) to determine how many current incoming video streams your web environment can support. To properly support 16 incoming videos the computer should have a mimimum of 16GB RAM and a 4-core or greater CPU that is no older than 3 years old ## Supported video resolutions The Azure Communication Services Calling SDK automatically adjusts resolutions of video and screen share streams during the call. The Azure Communication Services Calling SDK supports sending following video re ## Number of participants on a call support - Up to **350** users can join a group call, Room or Teams + ACS call. - Once the call size reaches 100+ participants in a call, only the top 4 most dominant speakers that have their video camera turned can be seen.-- When the number of people on the call is 100+, the viewable number of incoming video renders automatically decreases from 3x3 (9 incoming videos) down to 2x2 (4 incoming videos).-- When the number of users goes below 100, the number of supported incoming videos goes back up to 3x3 (9 incoming videos).+- When the number of people on the call is 100+, the viewable number of incoming video renders automatically decreases from 4x4 (16 incoming videos) down to 2x2 (4 incoming videos). +- When the number of users goes below 100, the number of supported incoming videos goes back up to 4x4 (16 incoming videos). ## Calling SDK timeouts The following timeouts apply to the Communication Services Calling SDKs: |
communication-services | Manage Video | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/manage-video.md | Title: Manage video during calls description: Use Azure Communication Services SDKs to manage video calls.--++ Previously updated : 08/10/2021 Last updated : 07/25/2024 zone_pivot_groups: acs-plat-web-ios-android-windows |
copilot | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/capabilities.md | Use Microsoft Copilot in Azure to perform many basic tasks in the Azure portal o - Work smarter with Azure - [Deploy virtual machines effectively](deploy-vms-effectively.md) - [Build infrastructure and deploy workloads](build-infrastructure-deploy-workloads.md)- - [Create resources using guided deployments](use-guided-deployments.md) + - [Create resources using interactive deployments](use-guided-deployments.md) - [Work with AKS clusters efficiently](work-aks-clusters.md) - [Get information about Azure Monitor metrics and logs](get-monitoring-information.md) - [Work smarter with Azure Stack HCI](work-smarter-edge.md) |
copilot | Use Guided Deployments | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/use-guided-deployments.md | Title: Create resources using guided deployments from Microsoft Copilot in Azure -description: Learn how Microsoft Copilot in Azure (preview) can provide one-click or step-by-step deployment assistance. Previously updated : 05/28/2024+ Title: Create resources using interactive deployments from Microsoft Copilot in Azure +description: Learn how Microsoft Copilot in Azure (preview) can provide quick or guided deployment assistance. Last updated : 07/24/2024 -# Create resources using guided deployments from Microsoft Copilot in Azure +# Create resources using interactive deployments from Microsoft Copilot in Azure -Microsoft Copilot in Azure (preview) can help you deploy certain resources and workloads by providing one-click or step-by-step deployment assistance. +Microsoft Copilot in Azure (preview) can help you deploy certain resources and workloads by providing quick or guided deployment assistance. -Guided deployments are currently available for select workloads. For other types of deployments, Copilot in Azure helps by [providing links to templates](#template-suggestions) that you can customize and deploy, often with various deployment options such as Azure CLI, Terraform, Bicep, or ARM. If a template isn't available for your scenario, Copilot in Azure provides information to help you choose and deploy the best resources for your scenario. +Interactive deployments are currently available for select workloads. For other types of deployments, Copilot in Azure helps by [providing links to templates](#template-suggestions) that you can customize and deploy, often with various deployment options such as Azure CLI, Terraform, Bicep, or ARM. If a template isn't available for your scenario, Copilot in Azure provides information to help you choose and deploy the best resources for your scenario. [!INCLUDE [scenario-note](includes/scenario-note.md)] Guided deployments are currently available for select workloads. For other types ## Deploy a LEMP stack on an Azure Linux VM -Copilot in Azure can help you deploy an NGINX web server, Azure MySQL Flexible Server, and PHP (the [LEMP stack](/azure/virtual-machines/linux/tutorial-lemp-stack)) on an Ubuntu Linux VM in Azure. To see the LEMP server in action, you can also install and configure a WordPress site. You can choose either a one-click deployment, or step-by-step assistance. +Copilot in Azure can help you deploy an NGINX web server, Azure MySQL Flexible Server, and PHP (the [LEMP stack](/azure/virtual-machines/linux/tutorial-lemp-stack)) on an Ubuntu Linux VM in Azure. To see the LEMP server in action, you can also install and configure a WordPress site. You can choose either a quick deployment or a guided deployment that provides step-by-step assistance. ### LEMP stack sample prompts Copilot in Azure can help you deploy an NGINX web server, Azure MySQL Flexible S ### LEMP stack example -You can say "**I want to deploy a LEMP stack on a Ubuntu VM**". Copilot in Azure checks for deployment experiences, and presents you with two deployment options: **Step-by-Step** or **One-Click**. +You can say "**I want to deploy a LEMP stack on a Ubuntu VM**". Copilot in Azure checks for deployment experiences, and presents you with two deployment options: **Guided deployment** or **Quick deployment**. 
:::image type="content" source="media/use-guided-deployments/lemp-stack-deployment.png" alt-text="Screenshot showing Copilot in Azure presenting deployment options for a LEMP stack on Ubuntu."::: -If you choose **Step-by-step deployment** and select a subscription, Copilot in Azure launches a guided experience that walks you through each step of the deployment. +If you choose **Guided deployment** and select a subscription, Copilot in Azure launches a guided experience that walks you through each step of the deployment. :::image type="content" source="media/use-guided-deployments/lemp-stack-step-start.png" lightbox="media/use-guided-deployments/lemp-stack-step-start.png" alt-text="Screenshot showing the start of the step-by-step guided deployment for a LEMP stack on Ubuntu."::: After you complete your deployment, you can check and browse the WordPress websi ## Create a Linux virtual machine and connect via SSH -Copilot in Azure can help you [create a Linux VM and connect to it via SSH](/azure/virtual-machines/linux/quick-create-cli). You can choose either a one-click deployment, or step-by-step assistance to handle the necessary tasks, including installing the latest Ubuntu image, provisioning the VM, generating a private key, and establishing the SSH connection. +Copilot in Azure can help you [create a Linux VM and connect to it via SSH](/azure/virtual-machines/linux/quick-create-cli). You can choose either a quick deployment or a guided deployment that provides step-by-step assistance to handle the necessary tasks. These tasks include installing the latest Ubuntu image, provisioning the VM, generating a private key, and establishing the SSH connection. ### Linux VM sample prompts Copilot in Azure can help you [create a Linux VM and connect to it via SSH](/azu ### Linux VM example -You can say "**How do I create a Linux VM and SSH into it?**". You'll see two deployment options: **Step-by-Step deployment** or **One-Click**. If you choose the one-click option and select a subscription, you can run the script to deploy the infrastructure. While the deployment is running, don't close or refresh the page. You'll see progress as each step of the deployment is completed. +You can say "**How do I create a Linux VM and SSH into it?**" You'll see two deployment options: **Guided deployment** or **Quick deployment**. If you choose the quick option and select a subscription, you can run the script to deploy the infrastructure. While the deployment is running, don't close or refresh the page. You'll see progress as each step of the deployment is completed. ## Create an AKS cluster with a custom domain and HTTPS -Copilot in Azure can help you [create an Azure Kubernetes Service (AKS) cluster](/azure/aks/learn/quick-kubernetes-deploy-cli) with an NGINX ingress controller and a custom domain. As with the other deployments, you can choose either a one-click deployment or step-by-step assistance. +Copilot in Azure can help you [create an Azure Kubernetes Service (AKS) cluster](/azure/aks/learn/quick-kubernetes-deploy-cli) with an NGINX ingress controller and a custom domain. As with the other deployments, you can choose either a quick or guided deployment. ### AKS cluster sample prompts Copilot in Azure can help you [create an Azure Kubernetes Service (AKS) cluster] ### AKS cluster example -When you say "**Seamless deployment for AKS cluster on Azure**", Microsoft Copilot in Azure presents you with two deployment options: **Step-by-Step** or **One-Click**. 
In this example, the one-click deployment option is selected. As with the other examples, you see progress as each step of the deployment is completed. +When you say "**Seamless deployment for AKS cluster on Azure**", Microsoft Copilot in Azure presents you with two deployment options: **Guided deployment** or **Quick deployment**. In this example, the quick deployment option is selected. As with the other examples, you see progress as each step of the deployment is completed. ## Template suggestions -If a guided deployment isn't available, Copilot in Azure checks to see if there's a template available to help with your scenario. Where possible, multiple deployment options are provided, such as Azure CLI, Terraform, Bicep, or ARM. You can then download and customize the templates as desired. +If an interactive deployment isn't available, Copilot in Azure checks to see if there's a template available to help with your scenario. Where possible, multiple deployment options are provided, such as Azure CLI, Terraform, Bicep, or ARM. You can then download and customize the templates as desired. If a template isn't available, Copilot in Azure provides information to help you achieve your goal. You can also revise your prompt to be more specific or ask if there are any related templates you could start from. |
cosmos-db | Audit Restore Continuous | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/audit-restore-continuous.md | -# Audit the point in time restore action for continuous backup mode in Azure Cosmos DB +# Audit the point-in-time restore action for continuous backup mode in Azure Cosmos DB [!INCLUDE[NoSQL, MongoDB, Gremlin, Table](includes/appliesto-nosql-mongodb-gremlin-table.md)] -Azure Cosmos DB provides you the list of all the point in time restores for continuous mode that were performed on an Azure Cosmos DB account using [Activity Logs](../azure-monitor/essentials/activity-log.md). Activity logs can be viewed for any Azure Cosmos DB account from the **Activity Logs** page in the Azure portal. The Activity Log shows all the operations that were triggered on the specific account. When a point in time restore is triggered, it shows up as `Restore Database Account` operation on the source account as well as the target account. The Activity Log for the source account can be used to audit restore events, and the activity logs on the target account can be used to get the updates about the progress of the restore. +Azure Cosmos DB provides you with a list of all point-in-time restores for continuous mode that were performed on an Azure Cosmos DB account using [activity logs](monitor.md#activity-log). Activity logs can be viewed for any Azure Cosmos DB account from the **Activity Logs** page in the Azure portal. The activity log shows all the operations that were triggered on the specific account. When a point-in-time restore is triggered, it shows up as `Restore Database Account` operation on the source account as well as the target account. The activity log for the source account can be used to audit restore events, and the activity logs on the target account can be used to get the updates about the progress of the restore. ## Audit the restores that were triggered on a live database account |
cosmos-db | Autoscale Per Partition Region | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/autoscale-per-partition-region.md | This feature is available for new Azure Cosmos DB accounts. To enable this featu :::image type="content" source="media/autoscale-per-partition-region/enable-feature.png" lightbox="media/autoscale-per-partition-region/enable-feature.png" alt-text="Screenshot of the 'Per Region and Per Partition Autoscale' feature in the Azure portal."::: -> [!IMPORTANT] -> The feature is enabled at the account level, so all containers within the account will automatically have this capability applied. The feature is available for both shared throughput databases and containers with dedicated throughput. Provisioned throughput accounts must switch over to autoscale and then enable this feature, if interested. --## Metrics + > [!IMPORTANT] + > The feature is enabled at the account level, so all containers within the account will automatically have this capability applied. The feature is available for both shared throughput databases and containers with dedicated throughput. Provisioned throughput accounts must switch over to autoscale and then enable this feature, if interested. -Use Azure Monitor to analyze how the new autoscaling is being applied across partitions and regions. Filter to your desired database account and container, then filter or split by the `PhysicalPartitionID` metric. This metric shows all partitions across their various regions. +1. Use [Azure Monitor metrics](monitor-reference.md#supported-metrics-for-microsoftdocumentdbdatabaseaccounts) to analyze how the new autoscaling is applied across partitions and regions. Filter to your desired database account and container, then filter or split by the `PhysicalPartitionID` metric. This metric shows all partitions across their various regions. -Then, use `NormalizedRUConsumption' to see which partitions are scaling indpendently and which regions are scaling independently if applicable. You can use the 'ProvisionedThroughput' metric to see what throughput value is getting emmitted to our billing service. + Then, use `NormalizedRUConsumption` to see which partitions and regions scale independently. You can use the `ProvisionedThroughput` metric to see what throughput value is emitted to our billing service. ## Requirements/Limitations |
cosmos-db | Error Codes Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/error-codes-solution.md | -Log Analytics is a tool in the Azure portal that helps you run server diagnostics on your API for Cassandra account. Run log queries from data collected by Azure Monitor Logs and interactively analyze their results. Records retrieved from Log Analytics queries help provide various insights into your data. +Log Analytics is a tool in the Azure portal that helps you run server diagnostics on your API for Cassandra account. ## Prerequisites -- Create a [Log Analytics Workspace](../../azure-monitor/logs/quick-create-workspace.md).-- Create [Diagnostic Settings](../monitor-resource-logs.md).+- Create a [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). +- Create [diagnostic settings](../monitor-resource-logs.md). - Start [log analytics](../../azure-monitor/logs/log-analytics-overview.md) on your API for Cassandra account. ## Use Log Analytics After you've completed the log analytics setup, you can begin to explore your logs to gain more insights. ### Explore Data Plane Operations-Use the CDBCassandraRequests table to see data plane operations specifically for your API for Cassandra account. A sample query to see the topN(10) consuming request and get detailed information on each request made. +Use the [CDBCassandraRequests table](/azure/azure-monitor/reference/tables/cdbcassandrarequests) to see data plane operations specifically for your API for Cassandra account. A sample query to see the topN(10) consuming request and get detailed information on each request made. ```Kusto CDBCassandraRequests CDBCassandraRequests | take 10 ``` -#### Error Codes and Possible Solutions -|Status Code | Error Code | Description | -||-|--| -| 200 | -1 | Successful | -| 400 | 8704 | The query is correct but an invalid syntax. | -| 400 | 8192 | The submitted query has a syntax error. Review your query. | -| 400 | 8960 | The query is invalid because of some configuration issue. | -| 401 |8448 | The logged user does not have the right permissions to perform the query. | -| 403 | 8448 | Forbidden response as the user may not have the necessary permissions to carry out the request. | -| 404 | 5376 | A non-timeout exception during a write request as a result of response not found. | -| 405 | 0 | Server-side Cassandra error. The error rarely occurs, open a support ticket. | -| 408 | 4608 | Timeout during a read request. | -| 408 | 4352 | Timeout exception during a write serviceRequest. | -| 409 | 9216 | Attempting to create a keyspace or table that already exist. | -| 412 | 5376 | Precondition failure. To ensure data integrity, we ensure that the write request based on the read response is true. A non-timeout write request exception is returned. | -| 413 | 5376 | This non-timeout exception during a write request is because of payload maybe too large. Currently, there is a limit of 2MB per row. | -| 417 | 9472 | The exception is thrown when a prepared statement is not cached on the server node. It should be transient/non-blocking. | -| 423 | 5376 | There is a lock because a write request that is currently processing. | -| 429 | 4097| Overload exception is as a result of RU shortage or high request rate. Probably need more RU to handle the higher volume request. In, native Cassandra this can be interpreted as one of the VMs not having enough CPU. 
We advise reviewing current data model to ensure that you do not have excessive skews that might be causing hot partitions. | -| 449 | 5376 | Concurrent execution exception. This occurs to ensure only one write update at a time for a given row. | -| 500 | 0 | Server cassandraError: something unexpected happened. This indicates a server-side bug. | -| 503 | 4096 | Service unavailable. | -| | 256 | This may be because of invalid connection credentials. Please check your connection credentials. | -| | 10 | A client message triggered protocol violation. An example is query message sent before a startup one has been sent. | +For a list of error codes and their possible solutions, see [Error codes](../monitor-reference.md#error-codes-for-cassandra). ### Troubleshoot Query Consumption-The CDBPartitionKeyRUConsumption table contains details on request unit (RU) consumption for logical keys in each region within each of their physical partitions. +The [CDBPartitionKeyRUConsumption table](/azure/azure-monitor/reference/tables/cdbpartitionkeyruconsumption) contains details on request unit (RU) consumption for logical keys in each region within each of their physical partitions. ```Kusto CDBPartitionKeyRUConsumption CDBPartitionKeyRUConsumption ``` ### Explore Control Plane Operations-The CBDControlPlaneRequests table contains details on control plane operations, specifically for API for Cassandra accounts. +The [CBDControlPlaneRequests table](/azure/azure-monitor/reference/tables/cdbcontrolplanerequests) contains details on control plane operations, specifically for API for Cassandra accounts. ```Kusto CDBControlPlaneRequests |
cosmos-db | Monitor Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/monitor-insights.md | Exceeding provisioned throughput could be one of the reasons. Enable [Server Sid ## System and management operations-The system view helps show metadata requests count by primary partition. It also helps identify throttled requests. The management operation shows the account activities such as creation, deletion, key, network and replication settings. Request volume per status code over a time period. +The system view helps show metadata requests count by primary partition. It also helps identify throttled requests. The management operation shows the account activities such as creation, deletion, key, network, and replication settings. Request volume per status code over a time period. :::image type="content" source="./media/monitor-insights/metadata-requests-status-code.png" alt-text="Screenshot showing request status code based on metadata."::: - Metric chart for account diagnostic, network and replication settings over a specified period and filtered based on a Keyspace. - Metric chart to view account key rotation. You can view changes to primary or secondary password for your API for Cassandra account. ## Storage Storage distribution for raw and index storage. Also a count of documents in the API for Cassandra account. Maximum request units consumption for an account over a defined time period. The chart below shows if your applicationΓÇÖs high RU consumption is because of :::image type="content" source="./media/monitor-insights/normalized-ru-pk-rangeid.png" alt-text="Screenshot showing normalized request unit consumption by partition key range ID."::: -The chart below shows a breakdown of requests by different status code. Understand the meaning of the different codes for your [API for Cassandra codes](./error-codes-solution.md). +The chart below shows a breakdown of requests by different status code. Understand the meaning of the different codes for your [API for Cassandra codes](../monitor-reference.md#error-codes-for-cassandra). ## Next steps |
cosmos-db | Concepts Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/concepts-limits.md | An Azure Cosmos DB container (or shared throughput database) using manual throug The current and minimum throughput of a container or a database can be retrieved from the Azure portal or the SDKs. For more information, see [Allocate throughput on containers and databases](set-throughput.md). -The actual minimum RU/s may vary depending on your account configuration. You can use [Azure Monitor metrics](monitor.md#view-operation-level-metrics-for-azure-cosmos-db) to view the history of provisioned throughput (RU/s) and storage on a resource. +The actual minimum RU/s might vary depending on your account configuration. You can use [Azure Monitor metrics](monitor.md#analyze-azure-cosmos-db-metrics) to view the history of provisioned throughput (RU/s) and storage on a resource. #### Minimum throughput on container The following table lists the limits specific to MongoDB feature support. Other | Maximum execution time for MongoDB operations (for 3.6 and 4.0 server version)| 60 seconds| | Maximum level of nesting for embedded objects / arrays on index definitions | 6 | | Idle connection timeout for server side connection closure ² | 30 minutes |-| Time limit for MongoDB shell in the Azure Portal | 120 minutes in a 24hr period | +| Time limit for MongoDB shell in the Azure portal | 120 minutes in a 24hr period | -¹ Large document sizes up to 16 MB require feature enablement in Azure portal. Read the [feature documentation](../cosmos-db/mongodb/feature-support-42.md#data-types) to learn more. +¹ Large document sizes up to 16 MB require feature enablement in the Azure portal. Read the [feature documentation](../cosmos-db/mongodb/feature-support-42.md#data-types) to learn more. ² We recommend that client applications set the idle connection timeout in the driver settings to 2-3 minutes because the [default timeout for Azure LoadBalancer is 4 minutes](../load-balancer/load-balancer-tcp-idle-timeout.md). This timeout ensures that an intermediate load balancer doesn't close idle connections between the client machine and Azure Cosmos DB. |
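As a minimal sketch of the idle-timeout recommendation above, the following C# snippet uses the MongoDB .NET driver to close connections that sit idle for more than two minutes; the connection string is a placeholder.

```csharp
// A minimal sketch of the recommendation above: set the driver-side idle
// connection timeout to about 2 minutes so connections are recycled before
// the load balancer's 4-minute idle timeout. Assumes the MongoDB.Driver
// package; the connection string is a placeholder.
using System;
using MongoDB.Driver;

class MongoIdleTimeoutSketch
{
    static void Main()
    {
        var settings = MongoClientSettings.FromConnectionString(
            "<azure-cosmos-db-for-mongodb-connection-string>"); // placeholder

        // Close connections that have been idle for more than 2 minutes.
        settings.MaxConnectionIdleTime = TimeSpan.FromMinutes(2);

        var client = new MongoClient(settings);
        Console.WriteLine($"Idle connection timeout: {settings.MaxConnectionIdleTime}");
    }
}
```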
cosmos-db | How To Choose Offer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-choose-offer.md | Use the Azure Cosmos DB [capacity calculator](estimate-ru-with-capacity-planner. ### Existing applications ### -If you have an existing application using standard (manual) provisioned throughput, you can use [Azure Monitor metrics](insights-overview.md) to determine if your traffic pattern is suitable for autoscale. +If you have an existing application using standard (manual) provisioned throughput, you can use [Azure Monitor metrics](monitor-reference.md#metrics) to determine if your traffic pattern is suitable for autoscale. -First, find the [normalized request unit consumption metric](monitor-normalized-request-units.md#view-the-normalized-request-unit-consumption-metric) of your database or container. Normalized utilization is a measure of how much you are currently using your standard (manual) provisioned throughput. The closer the number is to 100%, the more you are fully using your provisioned RU/s. [Learn more](monitor-normalized-request-units.md#view-the-normalized-request-unit-consumption-metric) about the metric. +First, find the [normalized request unit consumption metric](monitor-normalized-request-units.md#view-the-normalized-request-unit-consumption-metric) of your database or container. Next, determine how the normalized utilization varies over time. Find the highest normalized utilization for each hour. Then, calculate the average normalized utilization across all hours. If you see that your average utilization is less than 66%, consider enabling autoscale on your database or container. In contrast, if the average utilization is greater than 66%, it's recommended to remain on standard (manual) provisioned throughput. If you see that your traffic pattern is variable, but you are over or under prov Autoscale bills for the highest RU/s scaled to in an hour. When analyzing the normalized RU consumption over time, it is important to use the highest utilization per hour when calculating the average. To calculate the average of the highest utilization across all hours:-1. Set the **Aggregation** on the Noramlized RU Consumption metric to **Max**. +1. Set the **Aggregation** on the Normalized RU Consumption metric to **Max**. 1. Select the **Time granularity** to 1 hour. 1. Navigate to **Chart options**. 1. Select the bar chart option. To calculate the average of the highest utilization across all hours: :::image type="content" source="media/how-to-choose-offer/variable-workload-highest-util-by-hour.png" alt-text="To see normalized RU consumption by hour, 1) Select time granularity to 1 hour; 2) Edit chart settings; 3) Select bar chart option; 4) Under Share, select Download to Excel option to calculate average across all hours. "::: ## Measure and monitor your usage-Over time, after you've chosen the throughput type, you should monitor your application and make adjustments as needed. +Over time, after you've chosen the throughput type, you should monitor your application and make adjustments as needed. -When using autoscale, use Azure Monitor to see the provisioned autoscale max RU/s (**Autoscale Max Throughput**) and the RU/s the system is currently scaled to (**Provisioned Throughput**). Below is an example of a variable or unpredictable workload using autoscale. Note when there isn't any traffic, the system scales the RU/s to the minimum of 10% of the max RU/s, which in this case is 5000 RU/s and 50,000 RU/s, respectively. 
+When using autoscale, use Azure Monitor to see the provisioned autoscale max RU/s (**Autoscale Max Throughput**) and the RU/s the system is currently scaled to (**Provisioned Throughput**). +The following example shows a variable or unpredictable workload using autoscale. Note when there isn't any traffic, the system scales the RU/s to the minimum of 10% of the max RU/s, which in this case is 5,000 RU/s and 50,000 RU/s, respectively. -> [!NOTE] -> When you use standard (manual) provisioned throughput, the **Provisioned Throughput** metric refers to what you as a user have set. When you use autoscale throughput, this metric refers to the RU/s the system is currently scaled to. ## Next steps * Use [RU calculator](https://cosmos.azure.com/capacitycalculator/) to estimate throughput for new workloads. |
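As a small illustration of the decision rule above (average the highest normalized RU consumption per hour and compare it with roughly 66%), the following C# sketch uses made-up hourly values; the data and threshold are illustrative only.

```csharp
// A small sketch of the decision rule above: average the highest normalized RU
// consumption observed in each hour and compare it to the ~66% threshold.
// The hourly values are illustrative placeholders.
using System;
using System.Linq;

class AutoscaleDecisionSketch
{
    static void Main()
    {
        // Highest normalized RU consumption (%) per hour, for example exported
        // from the metric chart described above.
        double[] hourlyMaxUtilization = { 20, 35, 90, 15, 10, 55, 70, 25 };

        double average = hourlyMaxUtilization.Average();
        Console.WriteLine($"Average of hourly maxima: {average:F1}%");

        Console.WriteLine(average < 66
            ? "Consider enabling autoscale."
            : "Standard (manual) provisioned throughput is likely more cost-effective.");
    }
}
```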
cosmos-db | Insights Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/insights-overview.md | This feature doesn't require you to enable or configure anything. These Azure Co >[!NOTE] >There's no charge to access this feature. You'll only be charged for the Azure Monitor essential features you configure or enable, as described on the [Azure Monitor pricing details](https://azure.microsoft.com/pricing/details/monitor/) page. +## View insights from Azure portal ++1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account. ++1. You can view your account metrics either from the **Metrics** pane or the **Insights** pane. ++ * **Metrics:** This pane provides numerical metrics that are collected at regular intervals and describe some aspect of a system at a particular time. For example, you can view and monitor the [server side latency metric](monitor-server-side-latency.md), [normalized request unit usage metric](monitor-normalized-request-units.md), etc. ++ * **Insights:** This pane provides a customized monitoring experience for Azure Cosmos DB. Insights use the same metrics and logs that are collected in Azure Monitor and show an aggregated view for your account. ++1. Open the **Insights** pane. By default, the Insights pane shows the throughput, requests, storage, availability, latency, system, and management operations metrics for every container in your account. You can select the **Time Range**, **Database**, and **Container** for which you want to view insights. The **Overview** tab shows RU/s usage, data usage, index usage, throttled requests, and normalized RU/s consumption for the selected database and container. ++ :::image type="content" source="./media/use-metrics/performance-metrics.png" alt-text="Screenshot of Azure Cosmos DB performance metrics in the Azure portal." lightbox="./media/use-metrics/performance-metrics.png" ::: ++1. The following metrics are available from the **Insights** pane: ++ * **Throughput**. This tab shows the total number of request units consumed or failed (429 response code) because the throughput or storage capacity provisioned for the container has been exceeded. ++ * **Requests**. This tab shows the total number of requests processed by status code, by operation type, and the count of failed requests (429 response code). Requests fail when the throughput or storage capacity provisioned for the container is exceeded. ++ * **Storage**. This tab shows the size of data and index usage over the selected time period. ++ * **Availability**. This tab shows the percentage of successful requests over the total requests per hour. The Azure Cosmos DB SLAs define the success rate. ++ * **Latency**. This tab shows the read and write latency observed by Azure Cosmos DB in the region where your account is operating. You can visualize latency across regions for a geo-replicated account. You can also view server-side latency by different operations. This metric doesn't represent the end-to-end request latency. ++ * **System**. This tab shows how many metadata requests the primary partition serves. It also helps to identify the throttled requests. ++ * **Management Operations**. This tab shows the metrics for account management activities such as account creation, deletion, key updates, network and replication settings. ++ ## View utilization and performance metrics for Azure Cosmos DB To view the utilization and performance of your storage accounts across all your subscriptions: |
cosmos-db | Integrated Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/integrated-cache.md | The integrated cache has a limited storage capacity determined by the dedicated ## Metrics -It's helpful to monitor some key metrics for the integrated cache. These metrics include: --- `DedicatedGatewayCPUUsage` - CPU usage with Avg, Max, or Min Aggregation types for data across all dedicated gateway nodes.-- `DedicatedGatewayAverageCPUUsage` - (Deprecated) Average CPU usage across all dedicated gateway nodes.-- `DedicatedGatewayMaximumCPUUsage` - (Deprecated) Maximum CPU usage across all dedicated gateway nodes.-- `DedicatedGatewayMemoryUsage` - Memory usage with Avg, Max, or Min Aggregation types for data across all dedicated gateway nodes. -- `DedicatedGatewayAverageMemoryUsage` - (Deprecated) Average memory usage across all dedicated gateway nodes.-- `DedicatedGatewayRequests` - Total number of dedicated gateway requests across all dedicated gateway nodes.-- `IntegratedCacheEvictedEntriesSize` – The average amount of data evicted from the integrated cache due to LRU across all dedicated gateway nodes. This value doesn't include data that expired due to exceeding the `MaxIntegratedCacheStaleness` time.-- `IntegratedCacheItemExpirationCount` - The average number of items that are evicted from the integrated cache due to cached point reads exceeding the `MaxIntegratedCacheStaleness` time across all dedicated gateway nodes. -- `IntegratedCacheQueryExpirationCount` - The average number of queries that are evicted from the integrated cache due to cached queries exceeding the `MaxIntegratedCacheStaleness` time across all dedicated gateway nodes.-- `IntegratedCacheItemHitRate` – The proportion of point reads that used the integrated cache (out of all point reads routed through the dedicated gateway with session or eventual consistency). This value is an average of integrated cache instances across all dedicated gateway nodes.-- `IntegratedCacheQueryHitRate` – The proportion of queries that used the integrated cache (out of all queries routed through the dedicated gateway with session or eventual consistency). This value is an average of integrated cache instances across all dedicated gateway nodes.+It's helpful to monitor some key `DedicatedGateway` and `IntegratedCache` metrics for the integrated cache. To learn about these metrics, see [Supported metrics for Microsoft.DocumentDB/DatabaseAccounts](monitor-reference.md#supported-metrics-for-microsoftdocumentdbdatabaseaccounts). All existing metrics are available, by default, from **Metrics** in the Azure portal (not Metrics classic): |
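As a hedged sketch of reading one of the integrated cache metrics named above programmatically, the following C# snippet assumes the `Azure.Monitor.Query` and `Azure.Identity` packages (not mentioned in the article) and queries `IntegratedCacheItemHitRate` for an account; the resource ID is a placeholder.

```csharp
// A hedged sketch (not from the article): read the IntegratedCacheItemHitRate
// metric with the Azure.Monitor.Query metrics client. Assumes the
// Azure.Monitor.Query and Azure.Identity packages; the resource ID is a placeholder.
using System;
using System.Threading.Tasks;
using Azure;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

class IntegratedCacheMetricsSketch
{
    static async Task Main()
    {
        var client = new MetricsQueryClient(new DefaultAzureCredential());

        string resourceId =
            "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/databaseAccounts/<account>"; // placeholder

        Response<MetricsQueryResult> result = await client.QueryResourceAsync(
            resourceId,
            new[] { "IntegratedCacheItemHitRate" },
            new MetricsQueryOptions { TimeRange = new QueryTimeRange(TimeSpan.FromHours(1)) });

        foreach (MetricResult metric in result.Value.Metrics)
        {
            foreach (MetricTimeSeriesElement series in metric.TimeSeries)
            {
                foreach (MetricValue point in series.Values)
                {
                    Console.WriteLine($"{point.TimeStamp}: {point.Average}");
                }
            }
        }
    }
}
```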
cosmos-db | How To Monitor Diagnostics Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-monitor-diagnostics-logs.md | Azure's diagnostic logs are essential to capture Azure resource logs for an Azur ## Create diagnostic settings -Platform metrics and Activity logs are gathered automatically. To collect resource logs and route them externally from Azure Monitor, you must establish a diagnostic setting. When you activate diagnostic settings for Azure Cosmos DB accounts, you must choose to route them to either a Log Analytics workspace or an Azure Storage account. --### [Log Analytics workspace](#tab/log-analytics) --1. Create shell variables for `clusterName` and `resourceGroupName`. -- ```azurecli - # Variable for API for MongoDB vCore cluster resource - clusterName="<resource-name>" -- # Variable for resource group - resourceGroupName="<resource-group-name>" - ``` --1. Create shell variables for `workspaceName` and `diagnosticSettingName`, -- ```azurecli - # Variable for workspace name - workspaceName="<storage-account-name>" -- # Variable for diagnostic setting name - diagnosticSettingName="<diagnostic-setting-name>" - ``` -- > [!NOTE] - > For example, if the Log Analytics workspace's name is `test-workspace` and the diagnostic settings' name is `test-setting`: - > - > ```azurecli - > workspaceName="test-workspace" - > diagnosticSettingName:"test-setting" - > ``` - > --1. Get the resource identifier for the API for MongoDB vCore cluster. -- ```azurecli - az cosmosdb mongocluster show \ - --resource-group $resourceGroupName \ - --cluster-name $clusterName -- clusterResourceId=$(az cosmosdb mongocluster show \ - --resource-group $resourceGroupName \ - --cluster-name $clusterName \ - --query "id" \ - --output "tsv" \ - ) - ``` --1. Get the resource identifier for the Log Analytics workspace. -- ```azurecli - az monitor log-analytics workspace show \ - --resource-group $resourceGroupName \ - --name $workspaceName -- workspaceResourceId=$(az monitor log-analytics workspace show \ - --resource-group $resourceGroupName \ - --name $workspaceName \ - --query "id" \ - --output "tsv" \ - ) - ``` --1. Use `az monitor diagnostic-settings create` to create the setting. -- ```azurecli - az monitor diagnostic-settings create \ - --resource-group $resourceGroupName \ - --name $diagnosticSettingName \ - --resource $clusterResourceId \ - --export-to-resource-specific true \ - --logs '[{category:vCoreMongoRequests,enabled:true,retention-policy:{enabled:false,days:0}}]' \ - --workspace $workspaceResourceId - ``` -- > [!IMPORTANT] - > By enabling the `--export-to-resource-specific true` setting, you ensure that the API for MongoDB vCore request log events are efficiently ingested into the `vCoreMongoRequests` table specifically designed with a dedicated schema. - > - > In contrast, neglecting to configure `--export-to-resource-specific true` would result in the API for MongoDB vCore request log events being routed to the general `AzureDiagnostics` table. - > - > It's important to note that when creating the diagnostic setting through the Portal, log events will currently flow to the `AzureDiagnostics` table. For customers who prefer exporting logs to the resource-specific `VCoreMongoRequests` table, utilizing the Azure CLI with the `--export-to-resource-specific true` option is recommended. - > --### [Azure Storage account](#tab/azure-storage) --1. Create shell variables for `clusterName` and `resourceGroupName`. 
-- ```azurecli - # Variable for API for MongoDB vCore cluster resource - clusterName="<resource-name>" -- # Variable for resource group - resourceGroupName="<resource-group-name>" - ``` --1. Create shell variables for `storageAccountName` and `diagnosticSettingName`, -- ```azurecli - # Variable for storage account name - storageAccountName="<storage-account-name>" -- # Variable for diagnostic setting name - diagnosticSettingName="<diagnostic-setting-name>" - ``` -- > [!NOTE] - > For example, if the Azure Storage account's name is `teststorageaccount02909` and the diagnostic settings' name is `test-setting`: - > - > ```azurecli - > storageAccountName="teststorageaccount02909" - > diagnosticSettingName:"test-setting" - > ``` - > --1. Get the resource identifier for the API for MongoDB vCore cluster. -- ```azurecli - az cosmosdb mongocluster show \ - --resource-group $resourceGroupName \ - --cluster-name $clusterName -- clusterResourceId=$(az cosmosdb mongocluster show \ - --resource-group $resourceGroupName \ - --cluster-name $clusterName \ - --query "id" \ - --output "tsv" \ - ) - ``` --1. Get the resource identifier for the Log Analytics workspace. -- ```azurecli - az storage account show \ - --resource-group $resourceGroupName \ - --name $storageAccountName -- storageResourceId=$(az storage account show \ - --resource-group $resourceGroupName \ - --name $storageAccountName \ - --query "id" \ - --output "tsv" \ - ) - ``` --1. Use `az monitor diagnostic-settings create` to create the setting. -- ```azurecli - az monitor diagnostic-settings create \ - --resource-group $resourceGroupName \ - --name $diagnosticSettingName \ - --resource $clusterResourceId \ - --logs '[{category:vCoreMongoRequests,enabled:true,retention-policy:{enabled:false,days:0}}]' \ - --storage-account $storageResourceId - ``` --+Platform metrics and activity logs are gathered automatically. To collect resource logs and route them externally from Azure Monitor, you must establish a diagnostic setting. To learn how, see [Create diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/create-diagnostic-settings?tabs=cli). ## Manage diagnostic settings |
cosmos-db | Rag | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/rag.md | OpenAI is a leader in AI research, providing various models for language generat ### Embedding models vs. Language generation models -| | **Text Embedding Model** | **Language Model** | +| **Category** | **Text Embedding Model** | **Language Model** | ||-|| | **Purpose** | Converting text into vector embeddings. | Understanding and generating natural language. | | **Function** | Transforms textual data into high-dimensional arrays of numbers, capturing the semantic meaning of the text. | Comprehends and produces human-like text based on given input. | |
cosmos-db | Monitor Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-reference.md | For a list of all Azure Monitor supported metrics, including Azure Cosmos DB, se ### Supported metrics for Microsoft.DocumentDB/DatabaseAccounts The following table lists the metrics available for the Microsoft.DocumentDB/DatabaseAccounts resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] ### Supported metrics for Microsoft.DocumentDB/cassandraClusters The following table lists the metrics available for the Microsoft.DocumentDB/cassandraClusters resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] ### Supported metrics for Microsoft.DocumentDB/mongoClusters The following table lists the metrics available for the Microsoft.DocumentDB/mongoClusters resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] ### Metrics by category The following tables list Azure Cosmos DB metrics categorized by metric type. - CassandraRequestCharges (Cassandra Request Charges) - CassandraConnectionClosures (Cassandra Connection Closures) +### Error codes for Cassandra ++The following table lists error codes for your API for Cassandra account. For sample queries, see [Server diagnostics for Azure Cosmos DB for Apache Cassandra](cassandr) ++|Status code | Error code | Description | +||-|--| +| 200 | -1 | Successful | +| 400 | 8704 | The query is correct but an invalid syntax. | +| 400 | 8192 | The submitted query has a syntax error. Review your query. | +| 400 | 8960 | The query is invalid because of some configuration issue. | +| 401 |8448 | The logged user does not have the right permissions to perform the query. | +| 403 | 8448 | Forbidden response as the user may not have the necessary permissions to carry out the request. | +| 404 | 5376 | A non-timeout exception during a write request as a result of response not found. | +| 405 | 0 | Server-side Cassandra error. The error rarely occurs, open a support ticket. | +| 408 | 4608 | Timeout during a read request. | +| 408 | 4352 | Timeout exception during a write serviceRequest. | +| 409 | 9216 | Attempting to create a keyspace or table that already exist. | +| 412 | 5376 | Precondition failure. To ensure data integrity, we ensure that the write request based on the read response is true. A non-timeout write request exception is returned. | +| 413 | 5376 | This non-timeout exception during a write request is because of payload maybe too large. Currently, there is a limit of 2MB per row. | +| 417 | 9472 | The exception is thrown when a prepared statement is not cached on the server node. It should be transient/non-blocking. | +| 423 | 5376 | There is a lock because a write request that is currently processing. | +| 429 | 4097| Overload exception is as a result of RU shortage or high request rate. Probably need more RU to handle the higher volume request. In, native Cassandra this can be interpreted as one of the VMs not having enough CPU. We advise reviewing current data model to ensure that you do not have excessive skews that might be causing hot partitions. | +| 449 | 5376 | Concurrent execution exception. 
This occurs to ensure only one write update at a time for a given row. | +| 500 | 0 | Server cassandraError: something unexpected happened. This indicates a server-side bug. | +| 503 | 4096 | Service unavailable. | +| | 256 | This may be because of invalid connection credentials. Please check your connection credentials. | +| | 10 | A client message triggered protocol violation. An example is query message sent before a startup one has been sent. | + [!INCLUDE [horz-monitor-ref-metrics-dimensions-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions-intro.md)] [!INCLUDE [horz-monitor-ref-metrics-dimensions](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions.md)] The following tables list Azure Cosmos DB metrics categorized by metric type. [!INCLUDE [horz-monitor-ref-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-resource-logs.md)] ### Supported resource logs for Microsoft.DocumentDB/DatabaseAccounts ### Supported resource logs for Microsoft.DocumentDB/cassandraClusters ### Supported resource logs for Microsoft.DocumentDB/mongoClusters [!INCLUDE [horz-monitor-ref-logs-tables](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-logs-tables.md)] |
cosmos-db | Monitor Resource Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md | -Diagnostic settings in Azure are used to collect resource logs. Resources emit Azure resource Logs and provide rich, frequent data about the operation of that resource. These logs are captured per request and are referred to as "data plane logs." Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type. +Diagnostic settings in Azure are used to collect resource logs. Resources emit Azure resource Logs and provide rich, frequent data about the operation of that resource. These logs are captured per request and they're also referred to as *data plane logs*. Some examples of the data plane operations include delete, insert, and readFeed. The content of these logs varies by resource type. -Platform metrics and the Activity logs are collected automatically, whereas you must create a diagnostic setting to collect resource logs or forward them outside of Azure Monitor. You can turn on diagnostic setting for Azure Cosmos DB accounts and send resource logs to the following sources: --- Azure Monitor Log Analytics workspaces- - Data sent to Log Analytics can be written into **Azure Diagnostics (legacy)** or **Resource-specific (preview)** tables -- Event hub-- Storage Account+To learn more about diagnostic settings, see [Diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/diagnostic-settings). > [!NOTE]-> We recommend creating the diagnostic setting in resource-specific mode (for all APIs except API for Table) [following our instructions for creating diagnostics setting via REST API](monitor-resource-logs.md). This option provides additional cost-optimizations with an improved view for handling data. +> We recommend creating the diagnostic setting in resource-specific mode (for all APIs except API for Table) following the instructions in the *REST API* tab. This option provides additional cost-optimizations with an improved view for handling data. ## Prerequisites |
cosmos-db | Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor.md | You can monitor diagnostic logs from your Azure Cosmos DB account and create das For the available resource log categories, their associated Log Analytics tables, and the logs schemas for Azure Cosmos DB, see [Azure Cosmos DB monitoring data reference](monitor-reference.md#resource-logs). +<a name="activity-log"></a> [!INCLUDE [horz-monitor-activity-log](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-activity-log.md)] +### Audit restore actions for continuous backup mode ++By using activity logs, you can list all the point-in-time restores for continuous mode that were performed on an Azure Cosmos DB account. To learn how to view these operations in the Azure portal, see [Audit the point-in-time restore action for continuous backup mode](audit-restore-continuous.md). + [!INCLUDE [horz-monitor-analyze-data](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-analyze-data.md)] [!INCLUDE [horz-monitor-external-tools](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-external-tools.md)] See the following articles for more information about working with Azure Monitor Prior to using Log Analytics to issue Kusto queries, you must [enable diagnostic logs for control plane operations](audit-control-plane-logs.md#enable-diagnostic-logs-for-control-plane-operations). When you enable diagnostic logs, you select between storing your data in [resource-specific tables](/azure/azure-monitor/essentials/resource-logs#resource-specific) or the single [AzureDiagnostics table (legacy)](/azure/azure-monitor/essentials/resource-logs#azure-diagnostics-mode). The exact text of Kusto queries depends on the [collection mode](/azure/azure-monitor/essentials/resource-logs#select-the-collection-mode) you select. +- See [Troubleshoot issues with diagnostics queries](monitor-logs-basic-queries.md) for simple queries to help troubleshoot issues with your Azure Cosmos DB. +- See [Troubleshoot issues with advanced diagnostics queries with Azure Cosmos DB for NoSQL](nosql/diagnostic-queries.md) for more advanced queries to help troubleshoot issues with your Azure Cosmos DB account by using diagnostics logs sent to Azure Diagnostics (legacy) and resource-specific (preview) tables. + Here are some queries that you can enter into the **Log search** search bar to help you monitor your Azure Cosmos DB resources. ### [Resource-specific table](#tab/resource-specific-diagnostics) |
cosmos-db | Change Feed Modes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-modes.md | During the preview, the following methods to read the change feed are available | **Method to read change feed** | **.NET** | **Java** | **Python** | **Node.js** | | | | | | | | [Change feed pull model](change-feed-pull-model.md) | [>= 3.32.0-preview](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.32.0-preview) | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.37.0) | No | No |-| [Change feed processor](change-feed-processor.md) | No | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.42.0) | No | No | +| [Change feed processor](change-feed-processor.md) | [>= 3.40.0-preview.0](https://www.nuget.org/packages/Microsoft.Azure.Cosmos/3.40.0-preview.0) | [>= 4.42.0](https://mvnrepository.com/artifact/com.azure/azure-cosmos/4.42.0) | No | No | | Azure Functions trigger | No | No | No | No | > [!NOTE] |
cosmos-db | Change Feed Processor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-processor.md | Each range is read in parallel. A range's progress is maintained separately from ### [.NET](#tab/dotnet) -The change feed processor in .NET is currently available only for [latest version mode](change-feed-modes.md#latest-version-change-feed-mode). The point of entry is always the monitored container. In a `Container` instance, you call `GetChangeFeedProcessorBuilder`: +The change feed processor in .NET is available for [latest version mode](change-feed-modes.md#latest-version-change-feed-mode) and [all versions and deletes mode](change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview). All versions and deletes mode is in preview and is supported for the change feed processor beginning in version `3.40.0-preview.0`. The point of entry for both modes is always the monitored container. ++To read using latest version mode, in a `Container` instance, you call `GetChangeFeedProcessorBuilder`: [!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=DefineProcessor)] -The first parameter is a distinct name that describes the goal of this processor. The second name is the delegate implementation that handles changes. +To read using all versions and deletes mode, call `GetChangeFeedProcessorBuilderWithAllVersionsAndDeletes` from the `Container` instance: ++[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeedAllVersionsAndDeletes/Program.cs?name=BasicInitialization)] ++For both modes, the first parameter is a distinct name that describes the goal of this processor. The second parameter is the delegate implementation that handles changes. -Here's an example of a delegate: +Here's an example of a delegate for latest version mode: [!code-csharp[Main](~/samples-cosmosdb-dotnet-change-feed-processor/src/Program.cs?name=Delegate)] +Here's an example of a delegate for all versions and deletes mode: ++[!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeedAllVersionsAndDeletes/Program.cs?name=Delegate)] + Afterward, you define the compute instance name or unique identifier by using `WithInstanceName`. The compute instance name should be unique and different for each compute instance you're deploying. You set the container to maintain the lease state by using `WithLeaseContainer`. Calling `Build` gives you the processor instance that you can start by calling `StartAsync`. +>[!NOTE] +> The preceding code snippets are taken from samples in GitHub. You can get the sample for [latest version mode](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed) or [all versions and deletes mode](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeedAllVersionsAndDeletes). + ## Processing life cycle The normal life cycle of a host instance is: You can connect the change feed processor to any relevant event in its [life cyc * Register a handler for `WithLeaseReleaseNotification` to be notified when the current host releases a lease and stops processing it. * Register a handler for `WithErrorNotification` to be notified when the current host encounters an exception during processing.
You need to be able to distinguish whether the source is the user delegate (an unhandled exception) or an error that the processor encounters when it tries to access the monitored container (for example, networking issues). +Life cycle notifications are available in both change feed modes. Here's an example of life cycle notifications in latest version mode: + [!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartWithNotifications)] ## Deployment unit The change feed processor is initialized, and it starts reading changes from the > [!NOTE] > These customization options work only to set up the starting point in time of the change feed processor. After the lease container is initialized for the first time, changing these options has no effect.+> +> Customizing the starting point is only available for latest version change feed mode. When using all versions and deletes mode you must start reading from the time the processor is started, or resume from a prior lease state that is within the [continuous backup](../continuous-backup-restore-introduction.md) retention period of your account. ### [Java](#tab/java) |
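As a minimal sketch of the latest version mode flow described above (build with `GetChangeFeedProcessorBuilder`, set `WithInstanceName` and `WithLeaseContainer`, then `Build` and `StartAsync`), the following C# example uses a placeholder `Item` type, processor name, and instance name; it is not the referenced GitHub sample.

```csharp
// A minimal sketch of the flow described above for latest version mode:
// build a processor with GetChangeFeedProcessorBuilder, name the compute
// instance, point it at a lease container, then StartAsync. The Item type
// and names are placeholders.
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class Item
{
    public string id { get; set; }
}

public static class ProcessorSketch
{
    public static async Task<ChangeFeedProcessor> StartAsync(
        Container monitoredContainer, Container leaseContainer)
    {
        ChangeFeedProcessor processor = monitoredContainer
            .GetChangeFeedProcessorBuilder<Item>(
                processorName: "itemProcessor",
                onChangesDelegate: HandleChangesAsync)
            .WithInstanceName("consoleHost-01") // unique per compute instance
            .WithLeaseContainer(leaseContainer)
            .Build();

        await processor.StartAsync();
        return processor;
    }

    static Task HandleChangesAsync(
        IReadOnlyCollection<Item> changes, CancellationToken cancellationToken)
    {
        foreach (Item item in changes)
        {
            Console.WriteLine($"Changed item: {item.id}");
        }

        return Task.CompletedTask;
    }
}
```

Stopping is symmetric: call `StopAsync` on the returned processor when the host shuts down.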
cosmos-db | Change Feed Pull Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/change-feed-pull-model.md | FeedIterator<User> InteratorWithPOCOS = container.GetChangeFeedIterator<User>(Ch > [!TIP] > Prior to version `3.34.0`, latest version mode can be used by setting `ChangeFeedMode.Incremental`. Both `Incremental` and `LatestVersion` refer to latest version mode of the change feed and applications that use either mode will see the same behavior. -All versions and deletes mode is in preview and can be used with preview .NET SDK versions >= `3.32.0-preview`. Here's an example for obtaining `FeedIterator` in all versions and deletes mode that returns dynamic objects: +All versions and deletes mode is in preview and can be used with preview .NET SDK versions >= `3.32.0-preview`. Here's an example for obtaining `FeedIterator` in all versions and deletes mode that returns `User` objects: ```csharp-FeedIterator<dynamic> InteratorWithDynamic = container.GetChangeFeedIterator<dynamic>(ChangeFeedStartFrom.Now(), ChangeFeedMode.AllVersionsAndDeletes); +FeedIterator<ChangeFeedItem<User>> InteratorWithPOCOS = container.GetChangeFeedIterator<ChangeFeedItem<User>>(ChangeFeedStartFrom.Now(), ChangeFeedMode.AllVersionsAndDeletes); ``` > [!NOTE] > In latest version mode, you receive objects that represent the item that changed, with some [extra metadata](change-feed-modes.md#parse-the-response-object). All versions and deletes mode returns a different data model. For more information, see [Parse the response object](change-feed-modes.md#parse-the-response-object-1).+> +> You can get the complete sample for [latest version mode](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/CFPullModelLatestVersionMode) or [all versions and deletes mode](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/CFPullModelAllVersionsAndDeletesMode). ### Consume the change feed via streams |
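As a minimal sketch of draining a change feed iterator obtained as shown above (latest version mode), the following C# example waits when a page comes back with status `NotModified`; the `User` type and polling delay are placeholders.

```csharp
// A minimal sketch: drain a change feed FeedIterator obtained as shown above
// (latest version mode). When there are no new changes, the page comes back
// with status NotModified, so the loop waits before polling again. The User
// type and polling delay are placeholders.
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class User
{
    public string id { get; set; }
}

public static class PullModelSketch
{
    public static async Task ReadChangesAsync(Container container)
    {
        FeedIterator<User> iterator = container.GetChangeFeedIterator<User>(
            ChangeFeedStartFrom.Beginning(), ChangeFeedMode.LatestVersion);

        while (iterator.HasMoreResults)
        {
            FeedResponse<User> page = await iterator.ReadNextAsync();

            if (page.StatusCode == HttpStatusCode.NotModified)
            {
                // Caught up; wait before polling for more changes.
                await Task.Delay(TimeSpan.FromSeconds(5));
                continue;
            }

            foreach (User user in page)
            {
                Console.WriteLine($"Changed user: {user.id}");
            }
        }
    }
}
```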
cosmos-db | How To Use Change Feed Estimator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-use-change-feed-estimator.md | The change feed processor acts as a pointer that moves forward across your [chan Your change feed processor deployment can process changes at a particular rate based on its available resources like CPU, memory, network, and so on. -If this rate is slower than the rate at which your changes happen in your Azure Cosmos DB container, your processor will start to lag behind. +If this rate is slower than the rate at which your changes happen in your Azure Cosmos DB container, your processor starts to lag behind. Identifying this scenario helps understand if we need to scale our change feed processor deployment. Identifying this scenario helps understand if we need to scale our change feed p #### As a push model for automatic notifications -Like the [change feed processor](./change-feed-processor.md), the change feed estimator can work as a push model. The estimator will measure the difference between the last processed item (defined by the state of the leases container) and the latest change in the container, and push this value to a delegate. The interval at which the measurement is taken can also be customized with a default value of 5 seconds. +Like the [change feed processor](./change-feed-processor.md), the change feed estimator can work as a push model. The estimator measures the difference between the last processed item (defined by the state of the leases container) and the latest change in the container, and pushes this value to a delegate. The interval at which the measurement is taken can also be customized with a default value of 5 seconds. -As an example, if your change feed processor is defined like this: +As an example, if your change feed processor is using latest version mode and is defined like this: [!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=StartProcessorEstimator)] The correct way to initialize an estimator to measure that processor would be us Where both the processor and the estimator share the same `leaseContainer` and the same name. -The other two parameters are the delegate, which will receive a number that represents **how many changes are pending to be read** by the processor, and the time interval at which you want this measurement to be taken. +The other two parameters are the delegate, which receives a number that represents **how many changes are pending to be read** by the processor, and the time interval at which you want this measurement to be taken. An example of a delegate that receives the estimation is: And whenever you want it, with the frequency you require, you can obtain the det [!code-csharp[Main](~/samples-cosmosdb-dotnet-v3/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs?name=GetIteratorEstimatorDetailed)] -Each `ChangeFeedProcessorState` will contain the lease and lag information, and also who is the current instance owning it. +Each `ChangeFeedProcessorState` contains the lease and lag information, and also who is the current instance owning it. #### Estimator deployment -The change feed estimator does not need to be deployed as part of your change feed processor, nor be part of the same project. We recommend deploying the estimator on an independent and completely different instance from your processors. 
A single estimator instance can track the progress for the all the leases and instances in your change feed processor deployment. +The change feed estimator doesn't need to be deployed as part of your change feed processor, nor be part of the same project. We recommend deploying the estimator on an independent instance from your processors. A single estimator instance can track the progress for the all the leases and instances in your change feed processor deployment. -Each estimation will consume [request units](../request-units.md) from your [monitored and lease containers](change-feed-processor.md#components-of-the-change-feed-processor). A frequency of 1 minute in-between is a good starting point, the lower the frequency, the higher the request units consumed. +Each estimation consumes [request units](../request-units.md) from your [monitored and lease containers](change-feed-processor.md#components-of-the-change-feed-processor). A frequency of 1 minute in-between is a good starting point, the lower the frequency, the higher the request units consumed. ### [Java](#tab/java) -The provided example represents a sample Java application that demonstrates the implementation of the Change Feed Processor with the estimation of the lag in processing change feed events. In the application - documents are being inserted into one container (the "feed container"), and meanwhile another worker thread or worker application is pulling inserted documents from the feed container's Change Feed and operating on them in some way. +This example uses the change feed processor in latest version mode with the estimation of the lag in processing change feed events. In the application - documents are being inserted into one container (the "feed container"), and meanwhile another worker thread or worker application is pulling inserted documents from the feed container's change feed and operating on them in some way. The change Feed Processor is built and started like this: [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedEstimator.java?name=ChangeFeedProcessorBuilder)] -Change Feed Processor lag checking can be performed on a separate application (like a health monitor) as long as the same input containers (feedContainer and leaseContainer) and the exact same lease prefix (```CONTAINER_NAME + "-lease"```) are used. The estimator code requires that the Change Feed Processor had an opportunity to fully initialize the leaseContainer's documents. +Change feed processor lag checking can be performed on a separate application (like a health monitor) as long as the same input containers (feedContainer and leaseContainer) and the exact same lease prefix (```CONTAINER_NAME + "-lease"```) are used. The estimator code requires that the change feed processor had an opportunity to fully initialize the leaseContainer's documents. -The estimator calculates the accumulated lag by retrieving the current state of the Change Feed Processor and summing up the estimated lag values for each event being processed: +The estimator calculates the accumulated lag by retrieving the current state of the change feed processor and summing up the estimated lag values for each event being processed: [!code-java[](~/azure-cosmos-java-sql-api-samples/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedEstimator.java?name=EstimatedLag)] The total lag initially should be zero and finally should be greater or equal to the number of documents created. 
The total lag value can be logged or used for further analysis, allowing to monitor the performance of the change feed processing and identify any potential bottlenecks or delays in the system: An example of a delegate that receives changes and handles them with a lag is: +## Supported change feed modes ++The change feed estimator can be used for both [latest version mode](./change-feed-modes.md#latest-version-change-feed-mode) and [all versions and deletes mode](./change-feed-modes.md#all-versions-and-deletes-change-feed-mode-preview). In both modes, the estimate provided isn't guaranteed to be an exact count of outstanding changes to process. + ## Additional resources * [Azure Cosmos DB SDK](sdk-dotnet-v3.md)-* [Usage samples on GitHub (.NET)](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed) +* [Usage samples on GitHub (.NET latest version)](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed) +* [Usage samples on GitHub (.NET all versions and deletes)](https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeedAllVersionsAndDeletes) * [Usage samples on GitHub (Java)](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/changefeed) * [Additional samples on GitHub](https://github.com/Azure-Samples/cosmos-dotnet-change-feed-processor) ## Next steps -You can now proceed to learn more about change feed processor in the following articles: +You can now proceed to learn more about change feed processor in the following article: * [Overview of change feed processor](change-feed-processor.md) |
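As a minimal sketch of the push-style estimator described above, the following C# example shares the processor's name and lease container and logs the estimated number of pending changes every five seconds; the processor name and period are placeholders.

```csharp
// A minimal sketch of the push-style estimator described above. It shares the
// processor name and lease container with the processor it measures, and the
// delegate receives the estimated number of changes still to be read.
// The processor name and 5-second period are placeholders.
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public static class EstimatorSketch
{
    public static async Task<ChangeFeedProcessor> StartEstimatorAsync(
        Container monitoredContainer, Container leaseContainer)
    {
        ChangeFeedProcessor estimator = monitoredContainer
            .GetChangeFeedEstimatorBuilder(
                "itemProcessor", // must match the processor being measured
                HandleEstimationAsync,
                TimeSpan.FromSeconds(5))
            .WithLeaseContainer(leaseContainer)
            .Build();

        await estimator.StartAsync();
        return estimator;
    }

    static Task HandleEstimationAsync(long estimatedPendingChanges, CancellationToken cancellationToken)
    {
        Console.WriteLine($"Estimated changes pending to be read: {estimatedPendingChanges}");
        return Task.CompletedTask;
    }
}
```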
cosmos-db | Tutorial Create Notebook Vscode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-create-notebook-vscode.md | In this section, you'll create the Azure Cosmos database, container, and import 1. Add a new code cell -1. Within the code cell, add the following code to upload data from this url: <https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json>. +1. Within the code cell, add the following code to upload data from this url: ``<https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json>``. ```python import urllib.request import json In this section, you'll create the Azure Cosmos database, container, and import 1. Add a new code cell. -1. Within the code cell, add the following code to upload data from this url: <https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json>. +1. Within the code cell, add the following code to upload data from this url: ``<https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json>``. ```csharp using System.Net.Http; using System.Text.Json; |
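As a small C# sketch of the upload step described above, the following snippet downloads `websiteData.json` and deserializes it before the items are written to a container; it assumes the payload is a JSON array and omits error handling.

```csharp
// A small sketch of the upload step described above: download websiteData.json
// and deserialize it before the items are written to a container. Assumes the
// payload is a JSON array; error handling is omitted.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

class WebsiteDataDownloadSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        string json = await http.GetStringAsync(
            "https://cosmosnotebooksdata.blob.core.windows.net/notebookdata/websiteData.json");

        List<JsonElement> documents = JsonSerializer.Deserialize<List<JsonElement>>(json);
        Console.WriteLine($"Downloaded {documents.Count} documents.");
        // Each document can then be written with container.CreateItemAsync(...)
        // as the tutorial's notebook cells do.
    }
}
```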
cosmos-db | Optimize Cost Throughput | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/optimize-cost-throughput.md | -You can start with a minimum throughput of 400 RU/sec and scale up to tens of millions of requests per second or even more. Each request you issue against your Azure Cosmos DB container or database, such as a read request, write request, query request, stored procedures have a corresponding cost that is deducted from your provisioned throughput. If you provision 400 RU/s and issue a query that costs 40 RUs, you will be able to issue 10 such queries per second. Any request beyond that will get rate-limited and you should retry the request. If you are using client drivers, they support the automatic retry logic. +You can start with a minimum throughput of 400 RU/sec and scale up to tens of millions of requests per second or even more. Each request you issue against your Azure Cosmos DB container or database, such as a read request, write request, query request, or stored procedure execution, has a corresponding cost that is deducted from your provisioned throughput. If you provision 400 RU/s and issue a query that costs 40 RUs, you'll be able to issue 10 such queries per second. Any request beyond that gets rate-limited and you should retry the request. If you're using client drivers, they support the automatic retry logic. You can provision throughput on databases or containers and each strategy can help you save on costs depending on the scenario. The following are some guidelines to decide on a provisioned throughput strategy 1. You have a few dozen Azure Cosmos DB containers and want to share throughput across some or all of them. -2. You are migrating from a single-tenant database designed to run on IaaS-hosted VMs or on-premises, for example, NoSQL or relational databases to Azure Cosmos DB. And if you have many collections/tables/graphs and you do not want to make any changes to your data model. Note, you might have to compromise some of the benefits offered by Azure Cosmos DB if you are not updating your data model when migrating from an on-premises database. It's recommended that you always reassess your data model to get the most in terms of performance and also to optimize for costs. +2. You're migrating from a single-tenant database designed to run on IaaS-hosted VMs or on-premises, for example, NoSQL or relational databases to Azure Cosmos DB. And if you have many collections/tables/graphs and you don't want to make any changes to your data model. Note, you might have to compromise some of the benefits offered by Azure Cosmos DB if you aren't updating your data model when migrating from an on-premises database. It's recommended that you always reassess your data model to get the most in terms of performance and also to optimize for costs. 3. You want to absorb unplanned spikes in workloads by virtue of pooled throughput at the database level. The following are some guidelines to decide on a provisioned throughput strategy **Consider provisioning throughput on an individual container if:** -1. You have a few Azure Cosmos DB containers. Because Azure Cosmos DB is schema-agnostic, a container can contain items that have heterogeneous schemas and does not require customers to create multiple container types, one for each entity. It is always an option to consider if grouping separate say 10-20 containers into a single container makes sense.
With a 400 RUs minimum for containers, pooling all 10-20 containers into one could be more cost effective. +1. You have a few Azure Cosmos DB containers. Because Azure Cosmos DB is schema-agnostic, a container can contain items that have heterogeneous schemas and doesn't require customers to create multiple container types, one for each entity. It's always worth considering whether grouping, say, 10-20 separate containers into a single container makes sense. With a 400 RUs minimum for containers, pooling all 10-20 containers into one could be more cost effective. 2. You want to control the throughput on a specific container and get the guaranteed throughput on a given container backed by SLA. **Consider a hybrid of the above two strategies:** -1. As mentioned earlier, Azure Cosmos DB allows you to mix and match the above two strategies, so you can now have some containers within Azure Cosmos DB database, which may share the throughput provisioned on the database as well as, some containers within the same database, which may have dedicated amounts of provisioned throughput. +1. As mentioned earlier, Azure Cosmos DB allows you to mix and match the above two strategies, so you can now have some containers within an Azure Cosmos DB database, which might share the throughput provisioned on the database, as well as some containers within the same database, which might have dedicated amounts of provisioned throughput. 2. You can apply the above strategies to come up with a hybrid configuration, where you have both database level provisioned throughput and some containers having dedicated throughput. HTTP Status 429, The native SDKs (.NET/.NET Core, Java, Node.js and Python) implicitly catch this response, respect the server-specified retry-after header, and retry the request. Unless your account is accessed concurrently by multiple clients, the next retry will succeed. -If you have more than one client cumulatively operating consistently above the request rate, the default retry count, which is currently set to 9, may not be sufficient. In such cases, the client throws a `RequestRateTooLargeException` with status code 429 to the application. The default retry count can be changed by setting the `RetryOptions` on the ConnectionPolicy instance. By default, the `RequestRateTooLargeException` with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the max retry count, be it the default of 9 or a user-defined value. +If you have more than one client cumulatively operating consistently above the request rate, the default retry count, which is currently set to 9, might not be sufficient. In such cases, the client throws a `RequestRateTooLargeException` with status code 429 to the application. The default retry count can be changed by setting the `RetryOptions` on the ConnectionPolicy instance. By default, the `RequestRateTooLargeException` with status code 429 is returned after a cumulative wait time of 30 seconds if the request continues to operate above the request rate. This occurs even when the current retry count is less than the max retry count, be it the default of 9 or a user-defined value.
[MaxRetryAttemptsOnThrottledRequests](/dotnet/api/microsoft.azure.documents.client.retryoptions.maxretryattemptsonthrottledrequests) is set to 3, so in this case, if a request operation is rate limited by exceeding the reserved throughput for the container, the request operation retries three times before throwing the exception to the application. [MaxRetryWaitTimeInSeconds](/dotnet/api/microsoft.azure.documents.client.retryoptions.maxretrywaittimeinseconds#Microsoft_Azure_Documents_Client_RetryOptions_MaxRetryWaitTimeInSeconds) is set to 60, so in this case if the cumulative retry wait time in seconds since the first request exceeds 60 seconds, the exception is thrown. connectionPolicy.RetryOptions.MaxRetryWaitTimeInSeconds = 60; ## Partitioning strategy and provisioned throughput costs -Good partitioning strategy is important to optimize costs in Azure Cosmos DB. Ensure that there is no skew of partitions, which are exposed through storage metrics. Ensure that there is no skew of throughput for a partition, which is exposed with throughput metrics. Ensure that there is no skew towards particular partition keys. Dominant keys in storage are exposed through metrics but the key will be dependent on your application access pattern. It's best to think about the right logical partition key. A good partition key is expected to have the following characteristics: +Good partitioning strategy is important to optimize costs in Azure Cosmos DB. Ensure that there is no skew of partitions, which are exposed through storage metrics. Ensure that there is no skew of throughput for a partition, which is exposed with throughput metrics. Ensure that there's no skew towards particular partition keys. Dominant keys in storage are exposed through metrics but the key is dependent on your application access pattern. It's best to think about the right logical partition key. A good partition key is expected to have the following characteristics: * Choose a partition key that spreads workload evenly across all partitions and evenly over time. In other words, you shouldn't have some keys to with majority of the data and some keys with less or no data. Good partitioning strategy is important to optimize costs in Azure Cosmos DB. En * Choose a partition key that has a wide range of values. -The basic idea is to spread the data and the activity in your container across the set of logical partitions, so that resources for data storage and throughput can be distributed across the logical partitions. Candidates for partition keys may include the properties that appear frequently as a filter in your queries. Queries can be efficiently routed by including the partition key in the filter predicate. With such a partitioning strategy, optimizing provisioned throughput will be a lot easier. +The basic idea is to spread the data and the activity in your container across the set of logical partitions, so that resources for data storage and throughput can be distributed across the logical partitions. Candidates for partition keys might include the properties that appear frequently as a filter in your queries. Queries can be efficiently routed by including the partition key in the filter predicate. With such a partitioning strategy, optimizing provisioned throughput is a lot easier. ### Design smaller items for higher throughput -The request charge or the request processing cost of a given operation is directly correlated to the size of the item. Operations on large items will cost more than operations on smaller items. 
+The request charge or the request processing cost of a given operation is directly correlated to the size of the item. Operations on large items cost more than operations on smaller items. ## Data access patterns -It is always a good practice to logically separate your data into logical categories based on how frequently you access the data. By categorizing it as hot, medium, or cold data you can fine-tune the storage consumed and the throughput required. Depending on the frequency of access, you can place the data into separate containers (for example, tables, graphs, and collections) and fine-tune the provisioned throughput on them to accommodate to the needs of that segment of data. +It's always a good practice to logically separate your data into logical categories based on how frequently you access the data. By categorizing it as hot, medium, or cold data you can fine-tune the storage consumed and the throughput required. Depending on the frequency of access, you can place the data into separate containers (for example, tables, graphs, and collections) and fine-tune the provisioned throughput on them to accommodate to the needs of that segment of data. -Furthermore, if you're using Azure Cosmos DB, and you know you are not going to search by certain data values or will rarely access them, you should store the compressed values of these attributes. With this method you save on storage space, index space, and provisioned throughput and result in lower costs. +Furthermore, if you're using Azure Cosmos DB, and you know you aren't going to search by certain data values or will rarely access them, you should store the compressed values of these attributes. With this method you save on storage space, index space, and provisioned throughput and result in lower costs. ## Optimize by changing indexing policy -By default, Azure Cosmos DB automatically indexes every property of every record. This is intended to ease development and ensure excellent performance across many different types of ad hoc queries. If you have large records with thousands of properties, paying the throughput cost for indexing every property may not be useful, especially if you only query against 10 or 20 of those properties. As you get closer to getting a handle on your specific workload, our guidance is to tune your index policy. Full details on Azure Cosmos DB indexing policy can be found [here](index-policy.md). +By default, Azure Cosmos DB automatically indexes every property of every record. This is intended to ease development and ensure excellent performance across many different types of ad hoc queries. If you have large records with thousands of properties, paying the throughput cost for indexing every property might not be useful, especially if you only query against 10 or 20 of those properties. As you get closer to getting a handle on your specific workload, our guidance is to tune your index policy. Full details on Azure Cosmos DB indexing policy can be found [here](index-policy.md). -## Monitoring provisioned and consumed throughput +## Monitor provisioned and consumed throughput -You can monitor the total number of RUs provisioned, number of rate-limited requests as well as the number of RUs youΓÇÖve consumed in the Azure portal. The following image shows an example usage metric: +You can monitor the total number of request units provisioned, number of rate-limited requests, and the number of RUs youΓÇÖve consumed in the Azure portal. 
To learn more, see [Analyze Azure Cosmos DB metrics](monitor.md#analyze-azure-cosmos-db-metrics). The following image shows an example usage metric: :::image type="content" source="./media/optimize-cost-throughput/monitoring.png" alt-text="Monitor request units in the Azure portal"::: -You can also set alerts to check if the number of rate-limited requests exceeds a specific threshold. See [How to monitor Azure Cosmos DB](use-metrics.md) article for more details. These alerts can send an email to the account administrators or call a custom HTTP Webhook or an Azure Function to automatically increase provisioned throughput. +You can also set alerts to check if the number of rate-limited requests exceeds a specific threshold. To learn more about alerts, see [Azure Monitor alerts](monitor.md#alerts). ## Scale your throughput elastically and on-demand -Since you are billed for the throughput provisioned, matching the provisioned throughput to your needs can help you avoid the charges for the unused throughput. You can scale your provisioned throughput up or down any time, as needed. If your throughput needs are very predictable you can use Azure Functions and use a Timer Trigger to [increase or decrease throughput on a schedule](scale-on-schedule.md). +Since you're billed for the throughput provisioned, matching the provisioned throughput to your needs can help you avoid the charges for the unused throughput. You can scale your provisioned throughput up or down any time, as needed. If your throughput needs are very predictable you can use Azure Functions and use a Timer Trigger to [increase or decrease throughput on a schedule](scale-on-schedule.md). -* Monitoring the consumption of your RUs and the ratio of rate-limited requests may reveal that you do not need to keep provisioned throughout constant throughout the day or the week. You may receive less traffic at night or during the weekend. By using either Azure portal or Azure Cosmos DB native SDKs or REST API, you can scale your provisioned throughput at any time. Azure Cosmos DBΓÇÖs REST API provides endpoints to programmatically update the performance level of your containers making it straightforward to adjust the throughput from your code depending on the time of the day or the day of the week. The operation is performed without any downtime, and typically takes effect in less than a minute. +* Monitoring the consumption of your RUs and the ratio of rate-limited requests might reveal that you don't need to keep provisioned throughout constant throughout the day or the week. You might receive less traffic at night or during the weekend. By using either Azure portal or Azure Cosmos DB native SDKs or REST API, you can scale your provisioned throughput at any time. Azure Cosmos DBΓÇÖs REST API provides endpoints to programmatically update the performance level of your containers making it straightforward to adjust the throughput from your code depending on the time of the day or the day of the week. The operation is performed without any downtime, and typically takes effect in less than a minute. * One of the areas you should scale throughput is when you ingest data into Azure Cosmos DB, for example, during data migration. Once you have completed the migration, you can scale provisioned throughput down to handle the solutionΓÇÖs steady state. -* Remember, the billing is at the granularity of one hour, so you will not save any money if you change your provisioned throughput more often than one hour at a time. 
+* Remember, the billing is at the granularity of one hour, so you don't save any money if you change your provisioned throughput more often than one hour at a time. ## Determine the throughput needed for a new workload To determine the provisioned throughput for a new workload, you can use the foll 2. It's recommended to create the containers with higher throughput than expected and then scaling down as needed. -3. It's recommended to use one of the native Azure Cosmos DB SDKs to benefit from automatic retries when requests get rate-limited. If youΓÇÖre working on a platform that is not supported and use Azure Cosmos DBΓÇÖs REST API, implement your own retry policy using the `x-ms-retry-after-ms` header. +3. It's recommended to use one of the native Azure Cosmos DB SDKs to benefit from automatic retries when requests get rate-limited. If youΓÇÖre working on a platform that isn't supported and use Azure Cosmos DBΓÇÖs REST API, implement your own retry policy using the `x-ms-retry-after-ms` header. 4. Make sure that your application code gracefully supports the case when all retries fail. To determine the provisioned throughput for a new workload, you can use the foll 6. Use monitoring to understand your traffic pattern, so you can consider the need to dynamically adjust your throughput provisioning over the day or a week. -7. Monitor your provisioned vs. consumed throughput ratio regularly to make sure you have not provisioned more than required number of containers and databases. Having a little over provisioned throughput is a good safety check. +7. Monitor your provisioned vs. consumed throughput ratio regularly to make sure you haven't provisioned more than required number of containers and databases. Having a little over provisioned throughput is a good safety check. ### Best practices to optimize provisioned throughput The following steps help you to make your solutions highly scalable and cost-eff 1. If you have significantly over provisioned throughput across containers and databases, you should review RUs provisioned Vs consumed RUs and fine-tune the workloads. -2. One method for estimating the amount of reserved throughput required by your application is to record the request unit RU charge associated with running typical operations against a representative Azure Cosmos DB container or database used by your application and then estimate the number of operations you anticipate to perform each second. Be sure to measure and include typical queries and their usage as well. To learn how to estimate RU costs of queries programmatically or using portal see [Optimizing the cost of queries](./optimize-cost-reads-writes.md). +2. One method for estimating the amount of reserved throughput required by your application is to record the request unit RU charge associated with running typical operations against a representative Azure Cosmos DB container or database used by your application and then estimate the number of operations you anticipate performing each second. Be sure to measure and include typical queries and their usage as well. To learn how to estimate RU costs of queries programmatically or using portal see [Optimizing the cost of queries](./optimize-cost-reads-writes.md). 3. Another way to get operations and their costs in RUs is by enabling Azure Monitor logs, which will give you the breakdown of operation/duration and the request charge. 
Azure Cosmos DB provides request charge for every operation, so every operation charge can be stored back from the response and then used for analysis. The following steps help you to make your solutions highly scalable and cost-eff 5. You can add and remove regions associated with your Azure Cosmos DB account as you need and control costs. -6. Make sure you have even distribution of data and workloads across logical partitions of your containers. If you have uneven partition distribution, this may cause to provision higher amount of throughput than the value that is needed. If you identify that you have a skewed distribution, we recommend redistributing the workload evenly across the partitions or repartition the data. +6. Make sure you have even distribution of data and workloads across logical partitions of your containers. If you have uneven partition distribution, this might cause to provision higher amount of throughput than the value that is needed. If you identify that you have a skewed distribution, we recommend redistributing the workload evenly across the partitions or repartition the data. -7. If you have many containers and these containers do not require SLAs, you can use the database-based offer for the cases where the per container throughput SLAs do not apply. You should identify which of the Azure Cosmos DB containers you want to migrate to the database level throughput offer and then migrate them by using a change feed-based solution. +7. If you have many containers and these containers do not require SLAs, you can use the database-based offer for the cases where the per container throughput SLAs don't apply. You should identify which of the Azure Cosmos DB containers you want to migrate to the database level throughput offer and then migrate them by using a change feed-based solution. 8. Consider using the ΓÇ£Azure Cosmos DB Free TierΓÇ¥ (free for one year), Try Azure Cosmos DB (up to three regions) or downloadable Azure Cosmos DB emulator for dev/test scenarios. By using these options for test-dev, you can substantially lower your costs. |
cosmos-db | Priority Based Execution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/priority-based-execution.md | query = list(container.query_items("Select * from c", partition_key="pk1", prior -## Monitoring Priority-based execution --You can monitor the behavior of requests with low and high priority using Azure monitor metrics in Azure portal. --- Monitor **Total Requests (preview)** metric to observe the HTTP status codes and volume of low and high priority requests.-- Monitor the RU/s consumption of low and high priority requests using **Total Request Units (preview)** metric in Azure portal.+## Monitor priority-based execution +You can monitor the behavior of requests with low and high priority using Azure Monitor metrics in the Azure portal. +To learn more about metrics, see [Azure Monitor metrics](monitor-reference.md#metrics). ## Change default priority level of a Cosmos DB account |
cosmos-db | Use Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-metrics.md | Azure Cosmos DB provides insights for throughput, storage, consistency, availabi This article walks through common use cases and how Azure Cosmos DB insights can be used to analyze and debug these issues. By default, the metric insights are collected every five minutes and are kept for seven days. -## View insights from Azure portal --1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to your Azure Cosmos DB account. --1. You can view your account metrics either from the **Metrics** pane or the **Insights** pane. -- * **Metrics:** This pane provides numerical metrics that are collected at regular intervals and describes some aspect of a system at a particular time. For example, you can view and monitor the [server side latency metric](monitor-server-side-latency.md), [normalized request unit usage metric](monitor-normalized-request-units.md), etc. -- * **Insights:** This pane provides a customized monitoring experience for Azure Cosmos DB. Insights use the same metrics and logs that are collected in Azure Monitor and show an aggregated view for your account. --1. Open the **Insights** pane. By default, the Insights pane shows the throughput, requests, storage, availability, latency, system, and management operations metrics for every container in your account. You can select the **Time Range**, **Database**, and **Container** for which you want to view insights. The **Overview** tab shows RU/s usage, data usage, index usage, throttled requests, and normalized RU/s consumption for the selected database and container. -- :::image type="content" source="./media/use-metrics/performance-metrics.png" alt-text="Screenshot of Azure Cosmos DB performance metrics in the Azure portal." lightbox="./media/use-metrics/performance-metrics.png" ::: --1. The following metrics are available from the **Insights** pane: -- * **Throughput**. This tab shows the total number of request units consumed or failed (429 response code) because the throughput or storage capacity provisioned for the container has exceeded. -- * **Requests**. This tab shows the total number of requests processed by status code, by operation type, and the count of failed requests (429 response code). Requests fail when the throughput or storage capacity provisioned for the container exceeds. -- * **Storage**. This tab shows the size of data and index usage over the selected time period. -- * **Availability**. This tab shows the percentage of successful requests over the total requests per hour. The Azure Cosmos DB SLAs defines the success rate. -- * **Latency**. This tab shows the read and write latency observed by Azure Cosmos DB in the region where your account is operating. You can visualize latency across regions for a geo-replicated account. You can also view server-side latency by different operations. This metric doesn't represent the end-to-end request latency. -- * **System**. This tab shows how many metadata requests that the primary partition serves. It also helps to identify the throttled requests. -- * **Management Operations**. This tab shows the metrics for account management activities such as account creation, deletion, key updates, network and replication settings. - The following sections explain common scenarios where you can use Azure Cosmos DB metrics. ## Understand how many requests are succeeding or causing errors |
cost-management-billing | Cannot Create Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/cannot-create-vm.md | For other assistance, follow these links: * [How to manage an Azure support request](../../azure-portal/supportability/how-to-manage-azure-support-request.md) * [Azure support ticket REST API](/rest/api/support)-* Engage with us on [Twitter](https://twitter.com/azuresupport) +* Engage with us on [X](https://x.com/azuresupport) * Get help from your peers in the [Microsoft question and answer](/answers/products/azure) * Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq) |
cost-management-billing | How To Create Azure Support Request Ea | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/how-to-create-azure-support-request-ea.md | Follow these links to learn more: * [How to manage an Azure support request](../../azure-portal/supportability/how-to-manage-azure-support-request.md) * [Azure support ticket REST API](/rest/api/support)-* Engage with us on [Twitter](https://twitter.com/azuresupport) +* Engage with us on [X](https://x.com/azuresupport) * Get help from your peers in the [Microsoft Q&A question page](/answers/products/azure) * Learn more in [Azure Support FAQ](https://azure.microsoft.com/support/faq) |
cost-management-billing | Troubleshoot Azure Sign Up | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/troubleshoot-azure-sign-up.md | Other troubleshooting articles for Azure Billing and Subscriptions ## Contact us for help - Get answers in [Azure forums](https://azure.microsoft.com/support/forums/).-- Connect with [@AzureSupport](https://twitter.com/AzureSupport)- answers, support, experts.+- Connect with [@AzureSupport](https://x.com/azuresupport)- answers, support, experts. - If you have a support plan, [open a support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). ## Next steps |
data-factory | Connector Amazon Marketplace Web Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-marketplace-web-service.md | -# Copy data from Amazon Marketplace Web Service using Azure Data Factory or Synapse Analytics +# Copy data from Amazon Marketplace Web Service using Azure Data Factory or Synapse Analytics (Deprecated) [!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] -This article outlines how to use the Copy Activity in an Azure Data Factory or Synapse Analytics pipeline to copy data from Amazon Marketplace Web Service. It builds on the [copy activity overview](copy-activity-overview.md) article that presents a general overview of copy activity. +## Deprecation -## Supported capabilities +>[!Note] +>This connector is deprecated because Amazon Marketplace Web Service is no longer available since **March 31, 2024**. For more information, see [Amazon Marketplace Web Service website](https://docs.developer.amazonservices.com/en_US/dev_guide/https://docsupdatetracker.net/index.html). -This Amazon Marketplace Web Service connector is supported for the following capabilities: --| Supported capabilities|IR | -|| --| -|[Copy activity](copy-activity-overview.md) (source/-)|① ②| -|[Lookup activity](control-flow-lookup-activity.md)|① ②| --*① Azure integration runtime ② Self-hosted integration runtime* -- For a list of data stores that are supported as sources/sinks, see the [Supported data stores](connector-overview.md#supported-data-stores) table. --The service provides a built-in driver to enable connectivity, therefore you don't need to manually install any driver using this connector. --## Getting started ---## Create a linked service to Amazon Marketplace Web Service using UI --Use the following steps to create a linked service to Amazon Marketplace Web Service in the Azure portal UI. --1. Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New: -- # [Azure Data Factory](#tab/data-factory) -- :::image type="content" source="media/doc-common-process/new-linked-service.png" alt-text="Screenshot of creating a new linked service with Azure Data Factory UI."::: -- # [Azure Synapse](#tab/synapse-analytics) -- :::image type="content" source="media/doc-common-process/new-linked-service-synapse.png" alt-text="Screenshot of creating a new linked service with Azure Synapse UI."::: --2. Search for Amazon and select the Amazon Marketplace Web Service connector. -- :::image type="content" source="media/connector-amazon-marketplace-web-service/amazon-marketplace-web-service-connector.png" alt-text="Screenshot of the Amazon Marketplace Web Service connector."::: ---1. Configure the service details, test the connection, and create the new linked service. -- :::image type="content" source="media/connector-amazon-marketplace-web-service/configure-amazon-marketplace-web-service-linked-service.png" alt-text="Screenshot of linked service configuration for Amazon Marketplace Web Service."::: --## Connector configuration details --The following sections provide details about properties that are used to define Data Factory entities specific to Amazon Marketplace Web Service connector. 
--## Linked service properties --The following properties are supported for Amazon Marketplace Web Service linked service: --| Property | Description | Required | -|: |: |: | -| type | The type property must be set to: **AmazonMWS** | Yes | -| endpoint | The endpoint of the Amazon MWS Server (that is, mws.amazonservices.com) | Yes | -| marketplaceID | The Amazon Marketplace ID you want to retrieve data from. To retrieve data from multiple Marketplace IDs, separate them with a comma (`,`). (that is, A2EUQ1WTGCTBG2) | Yes | -| sellerID | The Amazon seller ID. | Yes | -| mwsAuthToken | The Amazon MWS authentication token. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes | -| accessKeyId | The access key ID used to access data. | Yes | -| secretKey | The secret key used to access data. Mark this field as a SecureString to store it securely, or [reference a secret stored in Azure Key Vault](store-credentials-in-key-vault.md). | Yes | -| useEncryptedEndpoints | Specifies whether the data source endpoints are encrypted using HTTPS. The default value is true. | No | -| useHostVerification | Specifies whether to require the host name in the server's certificate to match the host name of the server when connecting over TLS. The default value is true. | No | -| usePeerVerification | Specifies whether to verify the identity of the server when connecting over TLS. The default value is true. | No | --**Example:** --```json -{ - "name": "AmazonMWSLinkedService", - "properties": { - "type": "AmazonMWS", - "typeProperties": { - "endpoint" : "mws.amazonservices.com", - "marketplaceID" : "A2EUQ1WTGCTBG2", - "sellerID" : "<sellerID>", - "mwsAuthToken": { - "type": "SecureString", - "value": "<mwsAuthToken>" - }, - "accessKeyId" : "<accessKeyId>", - "secretKey": { - "type": "SecureString", - "value": "<secretKey>" - } - } - } -} -``` --## Dataset properties --For a full list of sections and properties available for defining datasets, see the [datasets](concepts-datasets-linked-services.md) article. This section provides a list of properties supported by Amazon Marketplace Web Service dataset. --To copy data from Amazon Marketplace Web Service, set the type property of the dataset to **AmazonMWSObject**. The following properties are supported: --| Property | Description | Required | -|: |: |: | -| type | The type property of the dataset must be set to: **AmazonMWSObject** | Yes | -| tableName | Name of the table. | No (if "query" in activity source is specified) | --**Example** --```json -{ - "name": "AmazonMWSDataset", - "properties": { - "type": "AmazonMWSObject", - "typeProperties": {}, - "schema": [], - "linkedServiceName": { - "referenceName": "<AmazonMWS linked service name>", - "type": "LinkedServiceReference" - } - } -} --``` --## Copy activity properties --For a full list of sections and properties available for defining activities, see the [Pipelines](concepts-pipelines-activities.md) article. This section provides a list of properties supported by Amazon Marketplace Web Service source. --### Amazon MWS as source --To copy data from Amazon Marketplace Web Service, set the source type in the copy activity to **AmazonMWSSource**. The following properties are supported in the copy activity **source** section: --| Property | Description | Required | -|: |: |: | -| type | The type property of the copy activity source must be set to: **AmazonMWSSource** | Yes | -| query | Use the custom SQL query to read data. 
For example: `"SELECT * FROM Orders where Amazon_Order_Id = 'xx'"`. | No (if "tableName" in dataset is specified) | --**Example:** --```json -"activities":[ - { - "name": "CopyFromAmazonMWS", - "type": "Copy", - "inputs": [ - { - "referenceName": "<AmazonMWS input dataset name>", - "type": "DatasetReference" - } - ], - "outputs": [ - { - "referenceName": "<output dataset name>", - "type": "DatasetReference" - } - ], - "typeProperties": { - "source": { - "type": "AmazonMWSSource", - "query": "SELECT * FROM Orders where Amazon_Order_Id = 'xx'" - }, - "sink": { - "type": "<sink type>" - } - } - } -] -``` --## Lookup activity properties --To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md). ## Related content-For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats). +For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats). |
data-factory | Connector Snowflake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-snowflake.md | To determine the version of the Snowflake connector used in your existing Snowfl The V2 version offers several enhancements over the legacy version, including: -Autoscaling: Automatically adjusts resources based on traffic load. -Multi-Availability Zone Operation: Provides resilience by operating across multiple availability zones. +Autoscaling: Automatically adjusts resources based on traffic load.<br/> ++Multi-Availability Zone Operation: Provides resilience by operating across multiple availability zones.<br/> + Static IP Support: Enhances security by allowing the use of static IP addresses. ## Related content |
data-factory | Connector Troubleshoot Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-guide.md | The errors below are general to the copy activity and could occur with any conne #### Error code: 11775 -- **Message**: `Failed to connect to your instance of Azure Database for PostgreSQL flexible server.`+- **Message**: `Failed to connect to your instance of Azure Database for PostgreSQL flexible server. '%'` -- **Cause**: User or password provided are incorrect. The encryption method selected is not compatible with the configuration of the server. The network connectivity method configured for your instance doesn't allow connections from the Integration Runtime selected.+- **Cause**: Exact cause depends on the text returned in `'%'`. If it is **The operation has timed out**, it can be because the instance of PostgreSQL is stopped or because the network connectivity method configured for your instance doesn't allow connections from the Integration Runtime selected. User or password provided are incorrect. If it is **28P01: password authentication failed for user "*youruser*"**, it means that the user provided doesn't exist in the instance or that the password is incorrect. If it is **28000: no pg_hba.conf entry for host "*###.###.###.###*", user "*youruser*", database "*yourdatabase*", no encryption**, it means that the encryption method selected is not compatible with the configuration of the server. - **Recommendation**: Confirm that the user provided exists in your instance of PostgreSQL and that the password corresponds to the one currently assigned to that user. Make sure that the encryption method selected is accepted by your instance of PostgreSQL, based on its current configuration. If the network connectivity method of your instance is configured for Private access (VNet integration), use a Self-Hosted Integration Runtime (IR) to connect to it. If it is configured for Public access (allowed IP addresses), it is recommended to use an Azure IR with managed virtual network and deploy a managed private endpoint to connect to your instance. When it is configured for Public access (allowed IP addresses) a less recommended alternative consists in creating firewall rules in your instance to allow traffic originating on the IP addresses used by the Azure IR you're using. |
dev-box | Tutorial Configure Multiple Monitors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/tutorial-configure-multiple-monitors.md | To complete this tutorial, you must [install the Remote desktop app](tutorial-co When you connect to your cloud-hosted developer machine in Microsoft Dev Box by using a remote desktop app, you can take advantage of a multi-monitor setup. Microsoft Remote Desktop for Windows and Microsoft Remote Desktop for Mac both support up to 16 monitors. +> [!IMPORTANT] +> The Windows Store version of Microsoft Remote Desktop doesn't support multiple monitors. For more information, see [Get started with the Microsoft Store client](/windows-server/remote/remote-desktop-services/clients/windows). + Use the following steps to configure Remote Desktop to use multiple monitors. # [Microsoft Remote Desktop app](#tab/windows-app) |
devtest-labs | Devtest Lab Auto Shutdown | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-auto-shutdown.md | The following example shows you how to use Logic Apps to configure an auto shutd ### Create a logic app that sends email notifications -Logic Apps provides many connectors that make it easy to integrate a service with other clients, like Office 365 and Twitter. At a high level, the steps to set up a Logic App for email notification are: +Logic Apps provides many connectors that make it easy to integrate a service with other clients, like Office 365 and X. At a high level, the steps to set up a Logic App for email notification are: 1. Create a logic app. 1. Configure the built-in template. |
event-grid | Receive Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/receive-events.md | Finally, test that your function can now handle your custom event type: }] ``` -You can also test this functionality live by [sending a custom event with CURL from the Portal](./custom-event-quickstart-portal.md) or by [posting to a custom topic](./post-to-custom-topic.md) using any service or application that can POST to an endpoint such as [Postman](https://www.getpostman.com/). Create a custom topic and an event subscription with the endpoint set as the Function URL. +You can also test this functionality live by [sending a custom event with CURL from the Portal](./custom-event-quickstart-portal.md) or by [posting to a custom topic](./post-to-custom-topic.md) using any service or application that can POST to an endpoint. Create a custom topic and an event subscription with the endpoint set as the Function URL. [!INCLUDE [message-headers](./includes/message-headers.md)] |
event-grid | Troubleshoot Subscription Validation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/troubleshoot-subscription-validation.md | Last updated 09/28/2021 # Troubleshoot Azure Event Grid subscription validations During event subscription creation, if you're seeing an error message such as `The attempt to validate the provided endpoint https://your-endpoint-here failed. For more details, visit https://aka.ms/esvalidation`, it indicates that there's a failure in the validation handshake. To resolve this error, verify the following aspects: -- Do an HTTP POST to your webhook url with a [sample SubscriptionValidationEvent](webhook-event-delivery.md#validation-details) request body using Postman or curl or similar tool.+- Do an HTTP POST to your webhook url with a [sample SubscriptionValidationEvent](webhook-event-delivery.md#validation-details) request body using curl or similar tool. - If your webhook is implementing synchronous validation handshake mechanism, verify that the ValidationCode is returned as part of the response. - If your webhook is implementing asynchronous validation handshake mechanism, verify that you're the HTTP POST is returning 200 OK. - If your webhook is returning `403 (Forbidden)` in the response, check if your webhook is behind an Azure Application Gateway or Web Application Firewall. If it is, then your need to disable these firewall rules and do an HTTP POST again: During event subscription creation, if you're seeing an error message such as `T > [!IMPORTANT] > For detailed information on endpoint validation for webhooks, see [Webhook event delivery](webhook-event-delivery.md). -The following sections show you how to validate an event subscriptions using Postman and Curl. --## Validate Event Grid event subscription using Postman -Here's an example of using Postman for validating a webhook subscription of an Event Grid event: --![Event grid event subscription validation using Postman](./media/troubleshoot-subscription-validation/event-subscription-validation-postman.png) --Here is a sample **SubscriptionValidationEvent** JSON: +Here is a sample **SubscriptionValidationEvent** JSON you can send using a tool such as CURL: ```json [ Here is the sample successful response: } ``` -To learn more about Event Grid event validation for webhooks, see [Endpoint validation with event grid events](webhook-event-delivery.md#endpoint-validation-with-event-grid-events). 
- ## Validate Event Grid event subscription using Curl Here's the sample Curl command for validating a webhook subscription of an Event Grid event: Here's the sample Curl command for validating a webhook subscription of an Event curl -X POST -d '[{"id": "2d1781af-3a4c-4d7c-bd0c-e34b19da4e66","topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx","subject": "","data": {"validationCode": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"},"eventType": "Microsoft.EventGrid.SubscriptionValidationEvent","eventTime": "2018-01-25T22:12:19.4556811Z", "metadataVersion": "1","dataVersion": "1"}]' -H 'Content-Type: application/json' https://{your-webhook-url.com} ``` -## Validate cloud event subscription using Postman -Here's an example of using Postman for validating a webhook subscription of a cloud event: -![Cloud event subscription validation using Postman](./media/troubleshoot-subscription-validation/cloud-event-subscription-validation-postman.png) +To learn more about Event Grid event validation for webhooks, see [Endpoint validation with event grid events](webhook-event-delivery.md#endpoint-validation-with-event-grid-events). ++## Validate cloud event subscription Use the **HTTP OPTIONS** method for validation with cloud events. To learn more about cloud event validation for webhooks, see [Endpoint validation with cloud events](webhook-event-delivery.md#endpoint-validation-with-event-grid-events). ## Troubleshoot event subscription validation |
expressroute | Expressroute Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md | If you're remote and don't have fiber connectivity, or you want to explore other | Provider | Exchange | | | | | **[CyrusOne](https://www.cyrusone.com/cloud-solutions/microsoft-azure)** | Megaport<br/>PacketFabric |-| **[Cyxtera](https://www.cyxtera.com/data-center-services/interconnection)** | Megaport<br/>PacketFabric | +| **[Cyxtera](https://centersquaredc.com/products)** | Megaport<br/>PacketFabric | | **[Databank](https://www.databank.com/platforms/connectivity/cloud-direct-connect/)** | Megaport | | **[DataFoundry](https://www.datafoundry.com/services/cloud-connect/)** | Megaport | | **[Digital Realty](https://www.digitalrealty.com/platform-digital/connectivity)** | IX Reach<br/>Megaport PacketFabric | |
governance | General | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/troubleshoot/general.md | channels for more support: - Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).-- Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure+- Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. - If you need more help, you can file an Azure support incident. Go to the |
governance | General | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/management-groups/troubleshoot/general.md | channels for more support: - Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).-- Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure+- Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. - If you need more help, you can file an Azure support incident. Go to the |
governance | General | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/troubleshoot/general.md | Regardless of the scenario, Azure policy retains the last known policy on the cl If your problem isn't listed in this article or you can't resolve it, get support by visiting one of the following channels: - Get answers from experts through [Microsoft Q&A](/answers/topics/azure-policy.html).-- Connect with [@AzureSupport](https://twitter.com/azuresupport). This official Microsoft Azure resource on Twitter helps improve the customer experience by connecting the Azure community to the right answers, support, and experts.+- Connect with [@AzureSupport](https://x.com/azuresupport). This official Microsoft Azure resource on X helps improve the customer experience by connecting the Azure community to the right answers, support, and experts. - If you still need help, go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Submit a support ticket**. |
governance | General | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/troubleshoot/general.md | Include at least one subscription in the subscription list that the customer run If you didn't see your problem or are unable to solve your issue, visit one of the following channels for more support: - Get answers from Azure experts through [Azure Forums](https://azure.microsoft.com/support/forums/).-- Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts.+- Connect with [@AzureSupport](https://x.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts. - If you need more help, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select **Get Support**. |
hdinsight | Hdinsight Management Ip Addresses | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-management-ip-addresses.md | Allow traffic from the following IP addresses for Azure HDInsight health and man Allow traffic from the IP addresses listed for the Azure HDInsight health and management services in the specific Azure region where your resources are located, refer the following note: > [!IMPORTANT] -> We recommend to use [service tag](hdinsight-service-tags.md) feature for network security groups. If you require region specific service tags, please refer the [Azure IP Ranges and Service Tags ΓÇô Public Cloud](https://download.microsoft.com/download/7/1/D/71D86715-5596-4529-9B13-DA13A5DE5B63/ServiceTags_Public_20240624.json) +> We recommend to use [service tag](hdinsight-service-tags.md) feature for network security groups. If you require region specific service tags, please refer the [Azure IP Ranges and Service Tags ΓÇô Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519&msockid=2a4184eaec6960de1d4297c1ed7b6126) For information on the IP addresses to use for Azure Government, see the [Azure Government Intelligence + Analytics](../azure-government/compare-azure-government-global-azure.md) document. |
lab-services | Connect Virtual Machine Chromebook Remote Desktop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-chromebook-remote-desktop.md | To connect to the lab VM by using RDP, use the Microsoft Remote Desktop app. To install the Microsoft Remote Desktop app: -1. Open the Play Store on your Chromebook, and search for **Microsoft Remote Desktop**. +1. In the Google Play store, open the Microsoft [Remote Desktop](https://play.google.com/store/apps/details?id=com.microsoft.rdc.androidx&pli=1) page, or search for **Microsoft Remote Desktop**. - :::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/install-remote-desktop-chromebook.png" alt-text="Screenshot of the Microsoft Remote Desktop app in the app store." lightbox="./media/connect-virtual-machine-chromebook-remote-desktop/install-remote-desktop-chromebook.png"::: + :::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/google-play.png" alt-text="Screenshot of the Microsoft Remote Desktop app in the app store." lightbox="./media/connect-virtual-machine-chromebook-remote-desktop/google-play.png"::: ++1. Verify that the app is available for your device. ++ :::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/google-play-verify.png" alt-text="Screenshot of the Microsoft Remote Desktop app in the app store with the app availability message highlighted." lightbox="./media/connect-virtual-machine-chromebook-remote-desktop/google-play-verify.png"::: ++1. Select **Install** to install the app. If prompted, select the device on which to install the app. ++ :::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/install-select-device.png" alt-text="Screenshot of the Microsoft Remote Desktop app select device dialog." lightbox="./media/connect-virtual-machine-chromebook-remote-desktop/install-select-device.png"::: -1. Select **Install** to install the latest version of the Remote Desktop application by Microsoft Corporation. ## Access the VM from your Chromebook using RDP Connect to the lab VM by using the remote desktop application. You can retrieve > [!NOTE] > The Microsoft Remote Desktop app is the recommended client for connecting to Azure Lab Services VMs. While you can connect to a lab VM from a Chromebook using RDP clients like Chrome Remote Desktop, third-party apps often need software installation and configuration on the VM. Coordinate with your lab administrator to confirm third-party app usage is permitted. +For more information about Microsoft Remote Desktop app, see: +- [What's new in the Remote Desktop client for Android and Chrome OS](/windows-server/remote/remote-desktop-services/clients/android-whatsnew) +- [Connect to Azure Virtual Desktop with the Remote Desktop client for Android and Chrome OS](../virtual-desktop/users/connect-android-chrome-os.md) ++ ## Related content - As an educator, [configure RDP for Linux VMs](how-to-enable-remote-desktop-linux.md) |
load-balancer | Load Balancer Tcp Reset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-reset.md | Azure Load Balancer has a 4 minutes to 100-minutes timeout range for Load Balanc When the connection is closed, your client application can receive the following error message: "The underlying connection was closed: A connection that was expected to be kept alive was closed by the server." +If TCP RSTs are enabled, and it's missed for any reason, RSTs will be sent for any subsequent packets. If the TCP RST option isn't enabled, then packets will be silently dropped. + A common practice is to use a TCP keep-alive. This practice keeps the connection active for a longer period. For more information, see these [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive). With keep-alive enabled, packets are sent during periods of inactivity on the connection. Keep-alive packets ensure the idle timeout value isn't reached and the connection is maintained for a long period. The setting works for inbound connections only. To avoid losing the connection, configure the TCP keep-alive with an interval less than the idle timeout setting or increase the idle timeout value. To support these scenarios, support for a configurable idle timeout is available. |
machine-learning | Convert Word To Vector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference/convert-word-to-vector.md | This section contains tips and answers to frequently asked questions. + Difference between online-training and pretrained model: - In this Convert Word to Vector component, we provided three different strategies: two online-training models and one pretrained model. The online-training models use your input dataset as training data, and generate vocabulary and word vectors during training. The pretrained model is already trained by a much larger text corpus, such as Wikipedia or Twitter text. The pretrained model is actually a collection of word/embedding pairs. + In this Convert Word to Vector component, we provided three different strategies: two online-training models and one pretrained model. The online-training models use your input dataset as training data, and generate vocabulary and word vectors during training. The pretrained model is already trained by a much larger text corpus, such as Wikipedia or X text. The pretrained model is actually a collection of word/embedding pairs. The GloVe pre-trained model summarizes a vocabulary from the input dataset and generates an embedding vector for each word from the pretrained model. Without online training, the use of a pretrained model can save training time. It has better performance, especially when the input dataset size is relatively small. |
machine-learning | How To Identity Based Service Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-service-authentication.md | Azure Machine Learning is composed of multiple Azure services. There are multipl ## Azure Container Registry and identity types -The following table lists the support matrix when authenticating to __Azure Container Registry__, depending on the authentication method and the __Azure Container Registry's__ [public network access configuration](/azure/container-registry/container-registry-access-selected-networks). +This table lists the support matrix when authenticating to __Azure Container Registry__, depending on the authentication method and the __Azure Container Registry's__ [public network access configuration](/azure/container-registry/container-registry-access-selected-networks). | Authentication method | Public network access</br>disabled | Azure Container Registry</br>Public network access enabled | | - | :-: | :-: | except Exception: ml_client.compute.begin_create_or_update(compute) ``` - # [Studio](#tab/azure-studio) For information on configuring managed identity when creating a compute cluster in studio, see [Set up managed identity](how-to-create-attach-compute-cluster.md#set-up-managed-identity). You can now use the environment in a [training job](how-to-train-cli.md). ### Build Azure Machine Learning managed environment into base image from private ACR for training or inference +> [!NOTE] +> Connecting to a private ACR using user-assigned managed identity is not currently supported. **Admin key** is the only auth type supported for private ACR. ++<!-- 20240725: this commented block will be restored at a later date TBD . . . + [!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)] In this scenario, Azure Machine Learning service builds the training or inference environment on top of a base image you supply from a private ACR. Because the image build task happens on the workspace ACR using ACR Tasks, you must perform more steps to allow access. In this scenario, Azure Machine Learning service builds the training or inferenc image: <acr url>/pytorch/pytorch:latest description: Environment created from private ACR. ```-+--> ## Next steps * Learn more about [enterprise security in Azure Machine Learning](concept-enterprise-security.md) |
machine-learning | How To Manage Inputs Outputs Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-inputs-outputs-pipeline.md | az ml job download --output-name <OUTPUT_PORT_NAME> -n <JOB_NAME> -g <RESOURCE_G ``` # [Python SDK](#tab/python) -Before we dive in the code, you need a way to reference your workspace. You create `ml_client` for a handle to the workspace. Refer to [Create handle to workspace](./tutorial-explore-data.md#create-handle-to-workspace) to initialize `ml_client`. +Before we dive in the code, you need a way to reference your workspace. You create `ml_client` for a handle to the workspace. Refer to [Create handle to workspace](./tutorial-explore-data.md#create-a-handle-to-the-workspace) to initialize `ml_client`. ```python # Download all the outputs of the job az ml job download --all -n <JOB_NAME> -g <RESOURCE_GROUP_NAME> -w <WORKSPACE_NA # [Python SDK](#tab/python) -Before we dive in the code, you need a way to reference your workspace. You create `ml_client` for a handle to the workspace. Refer to [Create handle to workspace](./tutorial-explore-data.md#create-handle-to-workspace) to initialize `ml_client`. +Before we dive in the code, you need a way to reference your workspace. You create `ml_client` for a handle to the workspace. Refer to [Create handle to workspace](./tutorial-explore-data.md#create-a-handle-to-the-workspace) to initialize `ml_client`. ```python # List all child jobs in the job |
machine-learning | How To Troubleshoot Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md | If the error message mentions `"failed to communicate with the workspace's conta Image build timeouts are often due to an image becoming too large to be able to complete building within the timeframe of deployment creation. To verify if this is your issue, check your image build logs at the location that the error may specify. The logs are cut off at the point that the image build timed out. -To resolve this, please [build your image separately](https://learn.microsoft.com/azure/devops/pipelines/ecosystems/containers/publish-to-acr?view=azure-devops&tabs=javascript%2Cportal%2Cmsi) so that the image only needs to be pulled during deployment creation. +To resolve this, please [build your image separately](/azure/devops/pipelines/ecosystems/containers/publish-to-acr?view=azure-devops&tabs=javascript%2Cportal%2Cmsi) so that the image only needs to be pulled during deployment creation. #### Generic image build failure |
machine-learning | Reference Yaml Component Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-component-command.md | |
machine-learning | Reference Yaml Core Syntax | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-core-syntax.md | |
machine-learning | Reference Yaml Job Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md | |
machine-learning | Reference Yaml Job Pipeline | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-pipeline.md | |
machine-learning | Reference Yaml Job Sweep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-sweep.md | |
machine-learning | Tutorial Explore Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-explore-data.md | Title: "Tutorial: Upload, access and explore your data" + Title: "Tutorial: upload, access, and explore your data" -description: Upload data to cloud storage, create an Azure Machine Learning data asset, create new versions for data assets, use the data for interactive development +description: Upload data to cloud storage, create an Azure Machine Learning data asset, create new versions for data assets, and use the data for interactive development -# Tutorial: Upload, access and explore your data in Azure Machine Learning +# Tutorial: Upload, access, and explore your data in Azure Machine Learning [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)] In this tutorial you learn how to: > * Access your data in a notebook for interactive development > * Create new versions of data assets -The start of a machine learning project typically involves exploratory data analysis (EDA), data-preprocessing (cleaning, feature engineering), and the building of Machine Learning model prototypes to validate hypotheses. This _prototyping_ project phase is highly interactive. It lends itself to development in an IDE or a Jupyter notebook, with a _Python interactive console_. This tutorial describes these ideas. +A machine learning project typically starts with exploratory data analysis (EDA), data-preprocessing (cleaning, feature engineering), and building Machine Learning model prototypes to validate hypotheses. This _prototyping_ project phase is highly interactive. It lends itself to development in an IDE or a Jupyter notebook, with a _Python interactive console_. This tutorial describes these ideas. -This video shows how to get started in Azure Machine Learning studio so that you can follow the steps in the tutorial. The video shows how to create a notebook, clone the notebook, create a compute instance, and download the data needed for the tutorial. The steps are also described in the following sections. +This video shows how to get started in Azure Machine Learning studio, so that you can follow the steps in the tutorial. The video shows how to create a notebook, clone the notebook, create a compute instance, and download the data needed for the tutorial. The steps are also described in the following sections. > [!VIDEO https://learn-video.azurefd.net/vod/player?id=514a29e2-0ae7-4a5d-a537-8f10681f5545] This video shows how to get started in Azure Machine Learning studio so that you * [!INCLUDE [new notebook](includes/prereq-new-notebook.md)] * Or, open **tutorials/get-started-notebooks/explore-data.ipynb** from the **Samples** section of studio. [!INCLUDE [clone notebook](includes/prereq-clone-notebook.md)] <!-- nbstart https://raw.githubusercontent.com/Azure/azureml-examples/main/tutorials/get-started-notebooks/explore-data.ipynb --> - ## Download the data used in this tutorial -For data ingestion, the Azure Data Explorer handles raw data in [these formats](/azure/data-explorer/ingestion-supported-formats). This tutorial uses this [CSV-format credit card client data sample](https://azuremlexamples.blob.core.windows.net/datasets/credit_card/default_of_credit_card_clients.csv). We see the steps proceed in an Azure Machine Learning resource. In that resource, we'll create a local folder with the suggested name of **data** directly under the folder where this notebook is located. 
+For data ingestion, the Azure Data Explorer handles raw data in [these formats](/azure/data-explorer/ingestion-supported-formats). This tutorial uses this [CSV-format credit card client data sample](https://azuremlexamples.blob.core.windows.net/datasets/credit_card/default_of_credit_card_clients.csv). The steps proceed in an Azure Machine Learning resource. In that resource, we'll create a local folder, with the suggested name of **data**, directly under the folder where this notebook is located. > [!NOTE]-> This tutorial depends on data placed in an Azure Machine Learning resource folder location. For this tutorial, 'local' means a folder location in that Azure Machine Learning resource. +> This tutorial depends on data placed in an Azure Machine Learning resource folder location. For this tutorial, 'local' means a folder location in that Azure Machine Learning resource. 1. Select **Open terminal** below the three dots, as shown in this image: :::image type="content" source="media/tutorial-cloud-workstation/open-terminal.png" alt-text="Screenshot shows open terminal tool in notebook toolbar."::: -1. The terminal window opens in a new tab. -1. Make sure you `cd` to the same folder where this notebook is located. For example, if the notebook is in a folder named **get-started-notebooks**: +1. The terminal window opens in a new tab. +1. Make sure you `cd` (**Change Directory**) to the same folder where this notebook is located. For example, if the notebook is in a folder named **get-started-notebooks**: ```bash cd get-started-notebooks # modify this to the path where your notebook is located For data ingestion, the Azure Data Explorer handles raw data in [these formats]( ``` 1. You can now close the terminal window. +For more information about the data in the UC Irvine Machine Learning Repository, visit [this resource](https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients). -[Learn more about this data on the UCI Machine Learning Repository.](https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients) --## Create handle to workspace +## Create a handle to the workspace -Before we dive in the code, you need a way to reference your workspace. You'll create `ml_client` for a handle to the workspace. You'll then use `ml_client` to manage resources and jobs. +Before we explore the code, you need a way to reference your workspace. You'll create `ml_client` for a handle to the workspace. You then use `ml_client` to manage resources and jobs. In the next cell, enter your Subscription ID, Resource Group name and Workspace name. To find these values: -1. In the upper right Azure Machine Learning studio toolbar, select your workspace name. -1. Copy the value for workspace, resource group and subscription ID into the code. -1. You'll need to copy one value, close the area and paste, then come back for the next one. -+1. At the upper right Azure Machine Learning studio toolbar, select your workspace name. +1. Copy the value for workspace, resource group, and subscription ID into the code. +1. You must individually copy the values one at a time, close the area and paste, then continue to the next one. ```python from azure.ai.ml import MLClient ml_client = MLClient( ``` > [!NOTE]-> Creating MLClient will not connect to the workspace. The client initialization is lazy, it will wait for the first time it needs to make a call (this will happen in the next code cell). -+> Creation of MLClient will not connect to the workspace. The client initialization is lazy. 
It waits for the first time it needs to make a call. This happens in the next code cell. ## Upload data to cloud storage -Azure Machine Learning uses Uniform Resource Identifiers (URIs), which point to storage locations in the cloud. A URI makes it easy to access data in notebooks and jobs. Data URI formats look similar to the web URLs that you use in your web browser to access web pages. For example: +Azure Machine Learning uses Uniform Resource Identifiers (URIs), which point to storage locations in the cloud. A URI makes it easy to access data in notebooks and jobs. Data URIs have a format similar to the web URLs that you use in your web browser to access web pages. For example: * Access data from public https server: `https://<account_name>.blob.core.windows.net/<container_name>/<folder>/<file>` * Access data from Azure Data Lake Gen 2: `abfss://<file_system>@<account_name>.dfs.core.windows.net/<folder>/<file>` An Azure Machine Learning data asset is similar to web browser bookmarks (favorites). Instead of remembering long storage paths (URIs) that point to your most frequently used data, you can create a data asset, and then access that asset with a friendly name. -Data asset creation also creates a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk data source integrity. You can create Data assets from Azure Machine Learning datastores, Azure Storage, public URLs, and local files. +Data asset creation also creates a *reference* to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and you don't risk data source integrity. You can create Data assets from Azure Machine Learning datastores, Azure Storage, public URLs, and local files. > [!TIP]-> For smaller-size data uploads, Azure Machine Learning data asset creation works well for data uploads from local machine resources to cloud storage. This approach avoids the need for extra tools or utilities. However, a larger-size data upload might require a dedicated tool or utility - for example, **azcopy**. The azcopy command-line tool moves data to and from Azure Storage. Learn more about azcopy [here](../storage/common/storage-use-azcopy-v10.md). +> For smaller-size data uploads, Azure Machine Learning data asset creation works well for data uploads from local machine resources to cloud storage. This approach avoids the need for extra tools or utilities. However, a larger-size data upload might require a dedicated tool or utility - for example, **azcopy**. The azcopy command-line tool moves data to and from Azure Storage. For more information about azcopy, visit [this resource](../storage/common/storage-use-azcopy-v10.md). -The next notebook cell creates the data asset. The code sample uploads the raw data file to the designated cloud storage resource. +The next notebook cell creates the data asset. The code sample uploads the raw data file to the designated cloud storage resource. -Each time you create a data asset, you need a unique version for it. If the version already exists, you'll get an error. In this code, we're using the "initial" for the first read of the data. +Each time you create a data asset, you need a unique version for it. If the version already exists, you'll get an error. In this code, we use "initial" as the version for the first read of the data. 
If that version already exists, we don't recreate it. -You can also omit the **version** parameter, and a version number is generated for you, starting with 1 and then incrementing from there. --In this tutorial, we use the name "initial" as the first version. The [Create production machine learning pipelines](tutorial-pipeline-python-sdk.md) tutorial will also use this version of the data, so here we are using a value that you'll see again in that tutorial. +You can also omit the **version** parameter. In this case, a version number is generated for you, starting with 1 and then incrementing from there. +This tutorial uses the name "initial" as the first version. The [Create production machine learning pipelines](tutorial-pipeline-python-sdk.md) tutorial also uses this version of the data, so here we use a value that you'll see again in that tutorial. ```python from azure.ai.ml.entities import Data except: print(f"Data asset created. Name: {my_data.name}, version: {my_data.version}") ``` -You can see the uploaded data by selecting **Data** on the left. You'll see the data is uploaded and a data asset is created: +To examine the uploaded data, select **Data** on the left. The data is uploaded and a data asset is created: -This data is named **credit-card**, and in the **Data assets** tab, we can see it in the **Name** column. This data uploaded to your workspace's default datastore named **workspaceblobstore**, seen in the **Data source** column. +This data is named **credit-card**, and in the **Data assets** tab, we can see it in the **Name** column. An Azure Machine Learning datastore is a *reference* to an *existing* storage account on Azure. A datastore offers these benefits: -1. A common and easy-to-use API, to interact with different storage types (Blob/Files/Azure Data Lake Storage) and authentication methods. +1. A common and easy-to-use API, to interact with different storage types + + - Azure Data Lake Storage + - Blob + - Files ++ and authentication methods. 1. An easier way to discover useful datastores, when working as a team. 1. In your scripts, a way to hide connection information for credential-based data access (service principal/SAS/key). - ## Access your data in a notebook Pandas directly support URIs - this example shows how to read a CSV file from an Azure Machine Learning Datastore: import pandas as pd df = pd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>/datastores/<datastore_name>/paths/<folder>/<filename>.csv") ``` -However, as mentioned previously, it can become hard to remember these URIs. Additionally, you must manually substitute all **<_substring_>** values in the **pd.read_csv** command with the real values for your resources. +However, as mentioned previously, it can become hard to remember these URIs. Additionally, you must manually substitute all **<_substring_>** values in the **pd.read_csv** command with the real values for your resources. You'll want to create data assets for frequently accessed data. Here's an easier way to access the CSV file in Pandas: > [!IMPORTANT] > In a notebook cell, execute this code to install the `azureml-fsspec` Python library in your Jupyter kernel: - ```python %pip install -U azureml-fsspec ``` - ```python import pandas as pd df = pd.read_csv(data_asset.path) df.head() ``` -Read [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md) to learn more about data access in a notebook. 
+For more information about data access in a notebook, visit [Access data from Azure cloud storage during interactive development](how-to-access-data-interactive.md). ## Create a new version of the data asset -You might have noticed that the data needs a little light cleaning, to make it fit to train a machine learning model. It has: +The data needs some light cleaning, to make it fit to train a machine learning model. It has: * two headers * a client ID column; we wouldn't use this feature in Machine Learning * spaces in the response variable name -Also, compared to the CSV format, the Parquet file format becomes a better way to store this data. Parquet offers compression, and it maintains schema. Therefore, to clean the data and store it in Parquet, use: -+Also, compared to the CSV format, the Parquet file format becomes a better way to store this data. Parquet offers compression, and it maintains schema. To clean the data and store it in Parquet, use: ```python # read in data again, this time using the 2nd row as the header This table shows the structure of the data in the original **default_of_credit_c |X18-23 | Explanatory | Amount of previous payment (NT dollar) from April to September 2005. | |Y | Response | Default payment (Yes = 1, No = 0) | -Next, create a new _version_ of the data asset (the data automatically uploads to cloud storage). For this version, we'll add a time value, so that each time this code is run, a different version number will be created. --+Next, create a new _version_ of the data asset (the data automatically uploads to cloud storage). For this version, add a time value, so that each time this code runs, a different version number is created. ```python from azure.ai.ml.entities import Data print(f"Data asset created. Name: {my_data.name}, version: {my_data.version}") The cleaned parquet file is the latest version data source. This code shows the CSV version result set first, then the Parquet version: - ```python import pandas as pd print(v2df.head(5)) <!-- nbend --> --- ## Clean up resources If you plan to continue now to other tutorials, skip to [Next steps](#next-steps). ### Stop compute instance -If you're not going to use it now, stop the compute instance: +If you don't plan to use it now, stop the compute instance: 1. In the studio, in the left navigation area, select **Compute**. 1. In the top tabs, select **Compute instances** If you're not going to use it now, stop the compute instance: ## Next steps -Read [Create data assets](how-to-create-data-assets.md) for more information about data assets. +For more information about data assets, visit [Create data assets](how-to-create-data-assets.md). -Read [Create datastores](how-to-datastore.md) to learn more about datastores. +For more information about datastores, visit [Create datastores](how-to-datastore.md). -Continue with tutorials to learn how to develop a training script. +Continue with the next tutorial to learn how to develop a training script: > [!div class="nextstepaction"] > [Model development on a cloud workstation](tutorial-cloud-workstation.md) |
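To pull the steps from the row above into one place, the following is a minimal, hedged sketch (not taken from the tutorial itself) that creates the workspace handle, registers the credit-card CSV as a data asset, and reads it back with pandas. The subscription, resource group, workspace, and local path values are placeholders, and the sketch assumes the `azure-ai-ml`, `azure-identity`, and `azureml-fsspec` packages are installed in the notebook kernel.

```python
# Hedged sketch (not from the tutorial): register the credit-card CSV as a
# data asset and read it back. Subscription, resource group, workspace, and
# local path values are placeholders; assumes azure-ai-ml, azure-identity,
# and azureml-fsspec are installed in the notebook kernel.
import pandas as pd
from azure.ai.ml import MLClient
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Data
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<AML_WORKSPACE_NAME>",
)

my_data = Data(
    name="credit-card",
    version="initial",
    description="Credit card default data (CSV)",
    path="./data/default_of_credit_card_clients.csv",
    type=AssetTypes.URI_FILE,
)

try:
    # Creating the same name/version twice raises an error, so reuse it if present.
    data_asset = ml_client.data.get(name="credit-card", version="initial")
    print(f"Found existing data asset: {data_asset.name}, version {data_asset.version}")
except Exception:
    data_asset = ml_client.data.create_or_update(my_data)
    print(f"Data asset created: {data_asset.name}, version {data_asset.version}")

# azureml-fsspec lets pandas resolve the azureml:// URI returned by the asset.
df = pd.read_csv(data_asset.path)
print(df.head())
```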
machine-learning | How To Configure Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-private-link.md | |
machine-learning | How To Manage Workspace Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace-cli.md | |
machine-learning | How To Secure Training Vnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-training-vnet.md | |
machine-learning | Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/introduction.md | |
machine-learning | Reference Azure Machine Learning Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/reference-azure-machine-learning-cli.md | |
machine-learning | Samples Designer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/samples-designer.md | The sample datasets are available under **Datasets**-**Samples** category. You c |CRM Upselling Labels Shared|Labels from the KDD Cup 2009 customer relationship prediction challenge ([orange_large_train_upselling.labels](https://kdd.org/cupfiles/KDDCupData/2009/orange_small_train_upselling.labels)| |Flight Delays Data|Passenger flight on-time performance data taken from the TranStats data collection of the U.S. Department of Transportation ([On-Time](https://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time)).<br/>The dataset covers the time period April-October 2013. Before uploading to the designer, the dataset was processed as follows: <br/>- The dataset was filtered to cover only the 70 busiest airports in the continental US <br/>- Canceled flights were labeled as delayed by more than 15 minutes <br/>- Diverted flights were filtered out <br/>- The following columns were selected: Year, Month, DayofMonth, DayOfWeek, Carrier, OriginAirportID, DestAirportID, CRSDepTime, DepDelay, DepDel15, CRSArrTime, ArrDelay, ArrDel15, Canceled| |German Credit Card UCI dataset|The UCI Statlog (German Credit Card) dataset ([Statlog+German+Credit+Data](https://archive.ics.uci.edu/dataset/144/statlog+german+credit+data)), using the german.data file.<br/>The dataset classifies people, described by a set of attributes, as low or high credit risks. Each example represents a person. There are 20 features, both numerical and categorical, and a binary label (the credit risk value). High credit risk entries have label = 2, low credit risk entries have label = 1. The cost of misclassifying a low risk example as high is 1, whereas the cost of misclassifying a high risk example as low is 5.|-|IMDB Movie Titles|The dataset contains information about movies that were rated in Twitter tweets: IMDB movie ID, movie name, genre, and production year. There are 17K movies in the dataset. The dataset was introduced in the paper "S. Dooms, T. De Pessemier and L. Martens. MovieTweetings: a Movie Rating Dataset Collected From Twitter. Workshop on Crowdsourcing and Human Computation for Recommender Systems, CrowdRec at RecSys 2013."| -|Movie Ratings|The dataset is an extended version of the Movie Tweetings dataset. The dataset has 170K ratings for movies, extracted from well-structured tweets on Twitter. Each instance represents a tweet and is a tuple: user ID, IMDB movie ID, rating, timestamp, number of favorites for this tweet, and number of retweets of this tweet. The dataset was made available by A. Said, S. Dooms, B. Loni and D. Tikk for Recommender Systems Challenge 2014.| +|IMDB Movie Titles|The dataset contains information about movies that were rated in X tweets: IMDB movie ID, movie name, genre, and production year. There are 17K movies in the dataset. The dataset was introduced in the paper "S. Dooms, T. De Pessemier and L. Martens. MovieTweetings: a Movie Rating Dataset Collected From Twitter. Workshop on Crowdsourcing and Human Computation for Recommender Systems, CrowdRec at RecSys 2013."| +|Movie Ratings|The dataset is an extended version of the Movie Tweetings dataset. The dataset has 170K ratings for movies, extracted from well-structured tweets on X. Each instance represents a tweet and is a tuple: user ID, IMDB movie ID, rating, timestamp, number of favorites for this tweet, and number of retweets of this tweet. 
The dataset was made available by A. Said, S. Dooms, B. Loni and D. Tikk for Recommender Systems Challenge 2014.| |Weather Dataset|Hourly land-based weather observations from NOAA ([merged data from 201304 to 201310](https://az754797.vo.msecnd.net/data/WeatherDataset.csv)).<br/>The weather data covers observations made from airport weather stations, covering the time period April-October 2013. Before uploading to the designer, the dataset was processed as follows: <br/> - Weather station IDs were mapped to corresponding airport IDs <br/> - Weather stations not associated with the 70 busiest airports were filtered out <br/> - The Date column was split into separate Year, Month, and Day columns <br/> - The following columns were selected: AirportID, Year, Month, Day, Time, TimeZone, SkyCondition, Visibility, WeatherType, DryBulbFarenheit, DryBulbCelsius, WetBulbFarenheit, WetBulbCelsius, DewPointFarenheit, DewPointCelsius, RelativeHumidity, WindSpeed, WindDirection, ValueForWindCharacter, StationPressure, PressureTendency, PressureChange, SeaLevelPressure, RecordType, HourlyPrecip, Altimeter| |Wikipedia SP 500 Dataset|Data is derived from Wikipedia (https://www.wikipedia.org/) based on articles of each S&P 500 company, stored as XML data. <br/>Before uploading to the designer, the dataset was processed as follows: <br/> - Extract text content for each specific company <br/> - Remove wiki formatting <br/> - Remove non-alphanumeric characters <br/> - Convert all text to lowercase <br/> - Known company categories were added <br/>Note that for some companies an article could not be found, so the number of records is less than 500.| |Restaurant Feature Data| A set of metadata about restaurants and their features, such as food type, dining style, and location. <br/>**Usage**: Use this dataset, in combination with the other two restaurant datasets, to train and test a recommender system.<br/> **Related Research**: Bache, K. and Lichman, M. (2013). [UCI Machine Learning Repository](https://archive.ics.uci.edu/). Irvine, CA: University of California, School of Information and Computer Science.| |
managed-grafana | Concept Role Based Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/concept-role-based-access-control.md | + + Title: "Azure role-based access control - Azure Managed Grafana" ++description: This conceptual article introduces Azure role-based access control for Azure Managed Grafana resources. +#customer intent: As a Grafana user, I want to understand how Azure role-based access control (RBAC) works with Azure Managed Grafana so that I can manage access to Azure Managed Grafana workspaces. +++ Last updated : 06/28/2024++++# Azure role-based access control within Azure Managed Grafana ++Azure Managed Grafana supports [Azure role-based access control (RBAC)](../role-based-access-control/index.yml), an authorization system that lets you manage individual access to your Azure resources. ++Azure RBAC enables you to allocate varying permission levels to users, groups, service principals, or managed identities, for managing your Azure Managed Grafana resources. ++## Azure Managed Grafana roles ++The following built-in roles are available in Azure Managed Grafana, each providing different levels of access: ++> [!div class="mx-tableFixed"] +> | Built-in role | Description | ID | +> | | | | +> | <a name='grafana-admin'></a>[Grafana Admin](../role-based-access-control/built-in-roles/monitor.md#grafana-admin) | Perform all Grafana operations, including the ability to manage data sources, create dashboards, and manage role assignments within Grafana. | 22926164-76b3-42b3-bc55-97df8dab3e41 | +> | <a name='grafana-editor'></a>[Grafana Editor](../role-based-access-control/built-in-roles/monitor.md#grafana-editor) | View and edit a Grafana instance, including its dashboards and alerts. | a79a5197-3a5c-4973-a920-486035ffd60f | +> | <a name='grafana-viewer'></a>[Grafana Viewer](../role-based-access-control/built-in-roles/monitor.md#grafana-viewer) | View a Grafana instance, including its dashboards and alerts. | 60921a7e-fef1-4a43-9b16-a26c52ad4769 | ++To access the Grafana user interface, users must possess one of these roles. ++These permissions are included within the broader resource group Contributor and resource group Owner roles. If you're not a resource group Contributor, a resource group Owner, or a User Access Administrator, you need to ask a subscription Owner or resource group Owner to grant you one of the Grafana roles on the resource you want to access. ++## Adding a role assignment to an Azure Managed Grafana resource ++To add a role assignment to an Azure Managed Grafana instance, in your Azure Managed Grafana workspace, open the **Access control (IAM)** menu and select **Add** > **Add role assignment**. +++Assign a role, such as **Grafana Viewer**, to a user, group, service principal, or managed identity. For more information about assigning a role, go to [Grant access](../role-based-access-control/quickstart-assign-role-user-portal.md#grant-access). ++## Related content ++* [Configure Grafana teams](how-to-sync-teams-with-azure-ad-groups.md) +* [Set up authentication and permissions](how-to-authentication-permissions.md) |
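As a companion to the portal steps described in the row above, here's a hedged sketch of making the same role assignment programmatically with the `azure-mgmt-authorization` Python SDK. The subscription, resource group, workspace name, and principal object ID are placeholder assumptions; the role-definition GUID is the Grafana Viewer ID from the table in that article.

```python
# Hedged sketch: grant the built-in Grafana Viewer role on an Azure Managed
# Grafana workspace. All <PLACEHOLDER> values are assumptions you must replace.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<SUBSCRIPTION_ID>"
resource_group = "<RESOURCE_GROUP>"
grafana_name = "<GRAFANA_WORKSPACE_NAME>"
principal_object_id = "<USER_OR_GROUP_OBJECT_ID>"

# Scope the assignment to the Managed Grafana resource itself.
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Dashboard/grafana/{grafana_name}"
)

# Grafana Viewer built-in role ID, as listed in the table above.
grafana_viewer_role_id = "60921a7e-fef1-4a43-9b16-a26c52ad4769"
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    f"/roleDefinitions/{grafana_viewer_role_id}"
)

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
assignment = client.role_assignments.create(
    scope=scope,
    role_assignment_name=str(uuid.uuid4()),  # assignments are keyed by a new GUID
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id=principal_object_id,
        principal_type="User",  # or "Group" / "ServicePrincipal"
    ),
)
print(f"Created role assignment {assignment.name} at scope {scope}")
```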
migrate | Migrate Replication Appliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-replication-appliance.md | When you set up the replication appliance using the OVA template provided in the **Component** | **Requirement** | | **VMware VM appliance**-PowerCLI | [PowerCLI version 6.0](https://my.vmware.com/web/vmware/details?productId=491&downloadGroup=PCLI600R1) should be installed if the replication appliance is running on a VMware VM. +PowerCLI | [PowerCLI version 6.0](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.esxi.install.doc/GUID-F02D0C2D-B226-4908-9E5C-2E783D41FE2D.html) should be installed if the replication appliance is running on a VMware VM. NIC type | VMXNET3 (if the appliance is a VMware VM) | **Hardware settings** CPU cores | 8 |
migrate | Tutorial Assess Vmware Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-assess-vmware-azure-vmware-solution.md | Run an assessment as follows: | | | Target settings | **Target location** | The Azure region to which you want to migrate. Size and cost recommendations are based on the location that you specify. Target settings | **Storage type** | Defaulted to **vSAN**. This is the default storage type for an AVS private cloud.- Target settings | **Reserved instance** | Specify whether you want to use reserve instances for Azure VMware Solution nodes when you migrate your VMs. If you decide to use a reserved instance, you can't specify **Discount (%)**. [Learn more](https://learn.microsoft.com/azure/azure-vmware/reserved-instance) about reserved instances. + Target settings | **Reserved instance** | Specify whether you want to use reserve instances for Azure VMware Solution nodes when you migrate your VMs. If you decide to use a reserved instance, you can't specify **Discount (%)**. [Learn more](/azure/azure-vmware/reserved-instance) about reserved instances. VM size | **Node type** | Defaulted to **AV36**. Azure Migrate recommends the node needed to migrate the servers to AVS. VM size | **FTT setting, RAID level** | Select the Failure to Tolerate and RAID combination. The selected FTT option, combined with the on-premises server disk requirement, determines the total vSAN storage required in AVS. VM size | **CPU Oversubscription** | Specify the ratio of virtual cores associated with one physical core in the AVS node. Oversubscription of greater than 4:1 might cause performance degradation, but can be used for web server type workloads. |
network-watcher | Vnet Flow Logs Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-portal.md | You can configure and change a flow log after you create it. For example, you ca :::image type="content" source="./media/vnet-flow-logs-portal/change-flow-log.png" alt-text="Screenshot that shows how to edit flow log's settings in the Azure portal where you can change some virtual network flow log settings." lightbox="./media/vnet-flow-logs-portal/change-flow-log.png"::: -1. Select **Save** to apply the changes. +1. Select **Save** to apply the changes or **Cancel** to exit without saving them. ## List all flow logs You can view the details of a flow log in a subscription or a group of subscript :::image type="content" source="./media/vnet-flow-logs-portal/flow-log-settings.png" alt-text="Screenshot of Flow logs settings page in the Azure portal." lightbox="./media/vnet-flow-logs-portal/flow-log-settings.png"::: -1. Select **Discard** to close the settings page without making changes. +1. Select **Cancel** to close the settings page without making changes. ## Download a flow log |
open-datasets | Dataset San Francisco Safety | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/dataset-san-francisco-safety.md | This dataset is stored in the East US Azure region. Allocating compute resources | dateTime | timestamp | 6,496,563 | 2020-10-19 12:28:08 2020-07-28 06:40:26 | The date and time when the service request was made or when the fire call was received. | | latitude | double | 1,615,369 | 37.777624238929 37.786117211838 | Latitude of the location, using the WGS84 projection. | | longitude | double | 1,554,612 | -122.39998111124 -122.419854245692 | Longitude of the location, using the WGS84 projection. |-| source | string | 9 | Phone Mobile/Open311 | Mechanism or path by which the service request was received; typically "Phone", "Text/SMS", "Website", "Mobile App", "Twitter", etc. but terms may vary by system. | +| source | string | 9 | Phone Mobile/Open311 | Mechanism or path by which the service request was received; typically "Phone", "Text/SMS", "Website", "Mobile App", "X", etc. but terms may vary by system. | | status | string | 3 | Closed Open | A single-word indicator of the current state of the service request. (Note: GeoReport V2 only permits "open" and "closed") | | subcategory | string | 1,270 | Medical Incident Bulky Items | The human readable name of the service request subtype for 311 cases or call type for 911 fire calls. | |
openshift | Responsibility Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/responsibility-matrix.md | Microsoft and Red Hat are responsible for enabling changes to the cluster infras <li>Monitor utilization of control plane (master nodes) resources including Network, Storage and Compute capacity -<li>Scale and/or resize control plane nodes to maintain quality of service +<li>Proactively scale and/or resize control plane nodes to maintain quality of service </li> </ul> Microsoft and Red Hat are responsible for enabling changes to the cluster infras <li>Respond to Microsoft and Red Hat notifications regarding cluster resource requirements. </li>++<li>Ensure ample quota is available for larger control plane VMs in case of a scaling operation +</li> + </ul> </td> </tr> |
openshift | Support Policies V4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md | Certain configurations for Azure Red Hat OpenShift 4 clusters can affect your cl * Don't scale the cluster workers to zero, or attempt a cluster shutdown. Deallocating or powering down any virtual machine in the cluster resource group isn't supported. * If you're making use of infrastructure nodes, don't run any undesignated workloads on them as this can affect the Service Level Agreement and cluster stability. Also, it's recommended to have three infrastructure nodes; one in each availability zone. See [Deploy infrastructure nodes in an Azure Red Hat OpenShift (ARO) cluster](howto-infrastructure-nodes.md) for more information. * Non-RHCOS compute nodes aren't supported. For example, you can't use an RHEL compute node.-* Don't attempt to remove or replace a master node. That's a high risk operation that can cause issues with etcd, permanent network loss, and loss of access and manageability by ARO SRE. If you feel that a master node should be replaced or removed, contact support before making any changes. +* Don't attempt to remove, replace, add, or modify a master node. That's a high risk operation that can cause issues with etcd, permanent network loss, and loss of access and manageability by ARO SRE. If you feel that a master node should be replaced or removed, contact support before making any changes. +* Ensure ample VM quota is available in case control plane nodes need to be scaled up by keeping at least double your current control plane vCPU count available. ### Operators |
operational-excellence | Relocation App Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-app-service.md | + + Title: Relocate Azure App Services to another region +description: Learn how to relocate Azure App Services to another region +++ Last updated : 07/11/2024++++ - subject-relocation +#Customer intent: As an Azure service administrator, I want to move my App Service resources to another Azure region. +++# Relocate Azure App Services to another region +++This article describes how to move App Service resources to a different Azure region. ++++App Service resources are region-specific and can't be moved across regions. You must create a copy of your existing App Service resources in the target region, then relocate your content over to the new app. If your source app uses a custom domain, you can [migrate it to the new app in the target region](../app-service/manage-custom-dns-migrate-domain.md) after completion of the relocation. ++To make copying your app easier, you can [back up and restore an individual App Service app](../app-service/manage-backup.md?tabs=portal) into an App Service plan in another region. ++## Prerequisites ++- Make sure that the App Service app is in the Azure region from which you want to move. +- Make sure that the target region supports App Service and any related services whose resources you want to move. +- Validate that sufficient permissions exist to deploy App Service resources to the target subscription and region. +- Validate whether any Azure policy is assigned with a region restriction. +- Consider any operating costs, as Compute resource prices can vary from region to region. To estimate your possible costs, see [Pricing calculator](https://azure.microsoft.com/pricing/calculator/). ++## Prepare ++Identify all the App Service resources that you're currently using. For example: ++- App Service apps +- [App Service plans](../app-service/overview-hosting-plans.md) +- [Deployment slots](../app-service/deploy-staging-slots.md) +- [Custom domains purchased in Azure](../app-service/manage-custom-dns-buy-domain.md) +- [TLS/SSL certificates](../app-service/configure-ssl-certificate.md) +- [Azure Virtual Network integration](../app-service/overview-vnet-integration.md) +- [Hybrid connections](../app-service/app-service-hybrid-connections.md). +- [Managed identities](../app-service/overview-managed-identity.md) +- [Backup settings](../app-service/manage-backup.md) ++Certain resources, such as imported certificates or hybrid connections, contain integration with other Azure services. For information on how to move those resources across regions, see the [documentation for the respective services](overview-relocation.md). +++## Plan ++This section is a planning checklist in the following areas: ++- State, Storage and downstream dependencies +- Certificates +- Configuration +- VNet Connectivity / Custom Names / DNS +- Identities +- Service Endpoints ++++### State, storage, and downstream dependencies ++ - **Determine whether your App Service App is stateful or stateless.** Although we recommend that App Service Apps are stateless and the files on the `%HOME%\site` drive should be only those that are required to run the deployed application with any temporary files, it's still possible to store runtime application state on the `%HOME%\site` virtual drive. If your application writes state on the app shared storage path, make sure to plan how you're going to manage that state during a resource move. 
+ + >[!TIP] + >You can use Kudu, along with portal access, to provide a file access API (Virtual File System (VFS)) that can read/write files under the `%HOME%\site` directory. For more information, see [Kudu Wiki](https://github.com/projectkudu/kudu/wiki/REST-API#vfs). ++- **Check for internal caching and state** in application code. ++- **Disable session affinity setting.** Where possible, we recommend that you disable the session affinity setting. Disabling session affinity improves load balancing for a horizontal scale-out. Any internal state may impact the planning for cutting over a workload - particularly if zero downtime is a requirement. Where possible, it may be beneficial to refactor out any application state to make the application stateless in preparation for the move. ++- **Analyze database connection strings.** Database connection strings can be found in the App Settings. However, they may also be hard coded or managed in config files that are shipped with the application. Analyze and plan for data migration/replication as part of the higher level planning to move the workload. For chatty or latency-critical applications, it isn't performant for the application in the target region to reach back to data sources in the source region. ++- **Analyze external caching (for example Redis).** Application caches should be deployed as close as possible to the application. Analyze how caches are populated, expiry/eviction policies, and any impact a cold cache may have on the first users to access the workload after cut-over. ++- **Analyze and plan for API (or application) dependencies.** Cross-region communication is significantly less performant if the app in the target region reaches back to dependencies that are still in the source region. We recommend that you relocate all downstream dependencies as part of the workload relocation. However, *on-premises* resources are the exception, in particular those resources that are geographically closer to the target region (as may be the case for repatriation scenarios). ++ Azure Container Registry can be a downstream (runtime) dependency for App Service that's configured to run against Custom Container Images. It makes more sense for the Container Registry to be in the same region as the App itself. Consider uploading the required images to a new ACR in the target region. Otherwise, consider using the [geo-replication feature](../container-registry/container-registry-geo-replication.md) if you plan on keeping the images in the source region. ++- **Analyze and plan for regional services.** Application Insights and Log Analytics data are regional services. Consider the creation of new Application Insights and Log Analytics storage in the target region. For App Insights, a new resource also impacts the connection string that must be updated as part of the change in App Configuration. +++### Certificates ++There are a number of different types of certificates that need to be taken into consideration as you plan your App Service relocation: ++- A [Free Managed Certificate from App Service](../app-service/configure-ssl-certificate.md#import-an-app-service-certificate) isn't exportable. +- An [App Service Certificate through Azure Key Vault](../app-service/configure-ssl-certificate.md?tabs=apex#import-an-app-service-certificate) can be exported using PS1/CLI. +- A certificate that you manage outside of App Service. +- An App Service Certificate, not managed through Azure Key Vault, can be exported. 
+- App Service certificate resources can be moved to a new Resource Group or Subscription but not cross-region. Cross-region relocations are not supported by App Service Certificates. +- Certificates that you manage and store in Azure Key Vault would first need to be exported from the source Key Vault and re-imported to the Target Key Vault associated with the target app. +++Some further points to consider: ++- App Assigned Addresses, where an App Service app's SSL connection is bound to a specific app designated IP, can be used for allow-listing calls from third party networks into App Service. For example, a network / IT admin may want to lock down outbound calls from an on-premises network or VNet to use a static, well-known address. As such, if the App Assigned Address feature is in use, upstream firewall rules - such as internal, external, or third parties - for the callers into the app should be checked and informed of the new address. Firewall rules can be internal, external or third parties, such as partners or well-known customers. ++- Consider any upstream Network Virtual Appliance (NVA) or Reverse Proxy. The NVA config may need to change if you're rewriting the host header and/or terminating SSL. +++>[!NOTE] +>App Service Environment is the only App Service offering that allows downstream calls to downstream application dependencies over SSL, where the SSL relies on self-signed/PKI certificates built with [non-standard Root CA certificates](/azure/app-service/environment/overview-certificates#private-client-certificate). The multitenant service doesn't provide access for customers to upload to the trusted certificate store. +> +>App Service Environment today doesn't allow SSL certificate purchase, only Bring Your Own certificates. IP-SSL isn't possible (and doesn't make sense), but SNI is. Internal App Service Environment would not be associated with a public domain and therefore the SSL certs used must be provided by the customer and are therefore transportable, for example certs for internal use generated using PKI. App Service Environment v3 in external mode shares the same features as the regular multitenant App Service. +++### Configuration ++- Review App Configuration for environment- and region-specific settings that may need modification. Make sure your check includes disk file configuration, which may or may not be overridden with App Settings. ++- Consider that configuration may also be managed from a central (downstream) database dependency or a service like [Azure Application Configuration](/azure/azure-app-configuration/overview). ++- Recreate [App Service Key Vault references](/azure/app-service/app-service-key-vault-references?tabs=azure-cli). Key Vault references are related to the unique MSI assigned to the resource (which has KV data plane access) and the Key Vault itself most likely needs to be in the same source region. Az Key Vault content can't be exported across an Azure geographical boundary. +++### VNet Connectivity / Custom Names / DNS ++- App Service Environment is a VNet-Injected single tenant service. App Service Environment networking differs from the multitenant App Service, which requires one or both of "Private Endpoints" or "Regional VNet integration". Other options that may be in play include the legacy P2S VPN based VNet integration and Hybrid Connections (an Azure Relay service). 
++ >[!NOTE] + >ASEv3 Networking is simplified - the Azure Management traffic and the App Service Environment's own downstream dependencies are not visible to the customer Virtual Network, greatly simplifying the configuration required where the customer is using a force-tunnel for all traffic, or sending a subset of outbound traffic, through a Network Virtual Appliance/Firewall. + > + >Hybrid Connections (Azure Relay) are regional. If Hybrid Connections are used, although an Azure Relay Namespace can be moved to another region, it's simpler to redeploy the Hybrid Connection (ensure the Hybrid Connection is set up in the new region on deploy of the target resources) and re-link it to the Hybrid Connection Manager. The Hybrid Connection Manager location should be carefully considered. ++- **Follow the strategy for a warm standby region.** Ensure that core networking and connectivity, Hub network, domain controllers, DNS, VPN or Express Route, etc., are present and tested prior to the resource relocation. ++- **Validate any upstream or downstream network ACLs and configuration**. For example, consider an external downstream service that allowlists only your App traffic. A relocation to a new Application Plan for a multitenant App Service would then also be a change in outbound IP addresses. ++- In most cases, it's best to **ensure that the target region VNets have unique address space**. A unique address space facilitates VNet connectivity if it's required, for example, to replicate data. Therefore, in these scenarios there's an implicit requirement to change: ++ - Private DNS + - Any hard coded or external configuration that references resources by IP address (without a hostname) + - Network ACLs including Network Security Groups and Firewall configuration (consider the impact to any on-premises NVAs here too) + - Any routing rules, User Defined Route Tables + + Also, make sure to check configuration including region specific IP Ranges / Service Tags if carrying forward existing DevOps deployment resources. ++- Fewer changes are required for customer-deployed private DNS that is configured to forward to Azure for Azure domains and Azure DNS Private Zones. However, as Private Endpoints are based on a resource FQDN and this is often also the resource name (which can be expected to be different in the target region), remember to **cross check configuration to ensure that FQDNs referenced in configuration are updated accordingly**. ++- **Recreate Private Endpoints, if used, in the target region**. The same applies for Regional VNet integration. +++ - DNS for App Service Environment is typically managed via the customer's private custom DNS solution (there is a manual settings override available on a per-app basis). App Service Environment provides a load balancer for ingress/egress, while App Service itself filters on Host headers. Therefore, multiple custom names can be pointed towards the same App Service Environment ingress endpoint. App Service Environment doesn't require domain validation. ++ >[!NOTE] + >The Kudu endpoint for App Service Environment v3 is only available at ``{resourcename}.scm.{asename}.appserviceenvironment.net``. For more information on App Service Environment v3 DNS and networking, see [App Service Environment networking](/azure/app-service/environment/networking#dns). +++ For App Service Environment, the customer owns the routing and therefore the resources used for the cut-over. 
Wherever access is enabled to the App Service Environment externally - typically via a Layer 7 NVA or Reverse Proxy - Traffic Manager, or Azure Front Door/Other L7 Global Load Balancing Service can be used. ++- For the public multitenant version of the service, a default name `{resourcename}.azurewebsites.net` is provisioned for the data plane endpoints, along with a default name for the Kudu (SCM) endpoint. As the service provides a public endpoint by default, the binding must be verified to prove domain ownership. However, once the binding is in place, re-verification isn't required, nor is it required for public DNS records to point at the App Service endpoint. ++- If you use a custom domain, [bind it preemptively to the target app](/azure/app-service/manage-custom-dns-migrate-domain#bind-the-domain-name-preemptively). Verify and [enable the domain in the target app](/azure/app-service/manage-custom-dns-migrate-domain#enable-the-domain-for-your-app). +++### Identities ++- **Recreate App Service Managed Service Identities** in the new target region. ++- **Assign the new MSI credential downstream service access (RBAC)**. Typically, an automatically created Microsoft Entra ID App (one used by EasyAuth) defaults to the App resource name. Consideration may be required here for recreating a new resource in the target region. A user-defined Service Principal would be useful - as it can be applied to both source and target with extra access permissions to target deployment resources. ++- **Plan for relocating the Identity Provider (IDP) to the target region**. Although Microsoft Entra ID is a global service, some solutions rely on a local (or downstream on-premises) IDP. ++- **Update any resources to the App Service that may rely on Kudu FTP credentials.** +++### Service endpoints ++The virtual network service endpoints for Azure App Service restrict access to a specified virtual network. The endpoints can also restrict access to a list of IPv4 (internet protocol version 4) address ranges. Any user connecting to the App Service from outside those sources is denied access. If service endpoints were configured in the source region for the App Service resource, the same would need to be done in the target one. ++For a successful recreation of the Azure App Service to the target region, the VNet and Subnet must be created beforehand. In case the move of these two resources is being carried out with the Azure Resource Mover tool, the service endpoints won't be configured automatically. Hence, they need to be configured manually, which can be done through the [Azure portal](/azure/key-vault/general/quick-create-portal), the [Azure CLI](/azure/key-vault/general/quick-create-cli), or [Azure PowerShell](/azure/key-vault/general/quick-create-powershell). ++## Relocate ++To relocate App Service resources, you can use either Azure portal or Infrastructure as Code (IaC). ++### Relocate using Azure portal ++The greatest advantage of using Azure portal to relocate is its simplicity. The app, plan, and contents, as well as many settings, are cloned into the new App Service resource and plan. ++Keep in mind that for App Service Environment (Isolated) tiers, you need to redeploy the entire App Service Environment in another region first, and then you can start redeploying the individual plans in the new App Service Environment in the new region. ++**To relocate your App Service resources to a new region using Azure portal:** +++1. [Create a backup of the source app](../app-service/manage-backup.md). +1. 
[Create an app in a new App Service plan, in the target region](../app-service/app-service-plan-manage.md#create-an-app-service-plan). +1. [Restore the backup in the target app](../app-service/manage-backup.md). +1. If you use a custom domain, [bind it preemptively to the target app](../app-service/manage-custom-dns-migrate-domain.md#2-create-the-dns-records) with `asuid.` and [enable the domain in the target app](../app-service/manage-custom-dns-migrate-domain.md#3-enable-the-domain-for-your-app). +1. Configure everything else in your target app to be the same as the source app and verify your configuration. +1. When you're ready for the custom domain to point to the target app, [remap the domain name](../app-service/manage-custom-dns-migrate-domain.md#4-remap-the-active-dns-name). +++### Relocate using IaC ++Use IaC when an existing Continuous Integration and Continuous Delivery/Deployment (CI/CD) pipeline exists, or can be created. With a CI/CD pipeline in place, your App Service resource can be created in the target region by means of a deployment action or a Kudu zip deployment. ++SLA requirements should determine how much additional effort is required. For example: Is this a redeploy with limited downtime, or is a near real-time cut-over with minimal to no downtime required? ++The inclusion of external, global traffic routing edge services, such as Traffic Manager or Azure Front Door, helps facilitate cut-over for external users and applications. +++>[!TIP] +>It's possible to use Traffic Manager (ATM) when failing over App Services behind private endpoints. Although the private endpoints are not reachable by Traffic Manager Probes - if all endpoints are degraded, then ATM allows routing. For more information, see [Controlling Azure App Service traffic with Azure Traffic Manager](../app-service/web-sites-traffic-manager.md). ++## Validate +++Once the relocation is completed, test and validate Azure App Service with the recommended guidelines: ++- Once the Azure App Service is relocated to the target region, run a smoke and integration test. You can manually test or run a test through a script. Make sure to validate that all configurations and dependent resources are properly linked and that all configured data are accessible. ++- Validate all Azure App Service components and integration. ++- Perform integration testing on the target region deployment, including all formal regression testing. Integration testing should align with the usual Rhythm of Business deployment and test processes applicable to the workload. ++- In some scenarios, particularly where the relocation includes updates, changes to the applications or Azure Resources, or a change in usage profile, use load testing to validate that the new workload is fit for purpose. Load testing is also an opportunity to validate operations and monitoring coverage. For example, use load testing to validate that the required infrastructure and application logs are being generated correctly. Load tests should be measured against established workload performance baselines. +++>[!TIP] +>An App Service relocation is also an opportunity to re-assess Availability and Disaster Recovery planning. App Service and App Service Environment (App Service Environment v3) support [availability zones](/azure/reliability/availability-zones-overview), and it's recommended that you configure an availability zone configuration. Keep in mind the prerequisites for deployment, pricing, and limitations and factor these into the resource move planning. 
For more information on availability zones and App Service, see [Reliability in Azure App Service](/azure/reliability/reliability-app-service). +++## Clean up ++Delete the source app and App Service plan. [An App Service plan in the non-free tier carries a charge, even if no app is running in it.](../app-service/app-service-plan-manage.md#delete-an-app-service-plan) ++## Next steps ++[Azure App Service App Cloning Using PowerShell](../app-service/app-service-web-app-cloning.md) |
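As a hedged illustration of the "Relocate using IaC" path in the row above (not part of the article itself), this sketch uses the `azure-mgmt-web` Python SDK to provision a target App Service plan and an empty app in the new region; restoring content and configuration still follows the backup/restore and custom-domain steps described above. All names, the region, and the SKU are assumed placeholder values.

```python
# Hedged sketch: provision a target App Service plan and an empty web app in
# the new region; restore content/configuration into it afterwards. All names,
# the region, and the SKU are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import AppServicePlan, Site, SkuDescription

subscription_id = "<SUBSCRIPTION_ID>"
resource_group = "<TARGET_RESOURCE_GROUP>"
target_region = "westus3"                 # assumed target region
plan_name = "target-appservice-plan"      # hypothetical plan name
app_name = "my-relocated-app"             # must be globally unique

client = WebSiteManagementClient(DefaultAzureCredential(), subscription_id)

# Create the App Service plan in the target region.
plan = client.app_service_plans.begin_create_or_update(
    resource_group,
    plan_name,
    AppServicePlan(
        location=target_region,
        sku=SkuDescription(name="P1v3", tier="PremiumV3"),
        reserved=False,  # set True for a Linux plan
    ),
).result()

# Create the empty target app on that plan; restore the source backup into it next.
site = client.web_apps.begin_create_or_update(
    resource_group,
    app_name,
    Site(location=target_region, server_farm_id=plan.id),
).result()
print(f"Created {site.default_host_name} in {target_region}")
```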
operator-nexus | Concepts Nexus Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-nexus-networking.md | field. [vrf]: https://en.wikipedia.org/wiki/Virtual_routing_and_forwarding [isd]: ./howto-configure-isolation-domain.md [internal-net]: ./howto-configure-isolation-domain.md#create-internal-network-[vm-netattach]: https://learn.microsoft.com/rest/api/networkcloud/virtual-machines/create-or-update?view=rest-networkcloud-2023-07-01&tabs=HTTP#networkattachment -[attachednetconf]: https://learn.microsoft.com/rest/api/networkcloud/kubernetes-clusters/create-or-update?view=rest-networkcloud-2023-07-01&tabs=HTTP#attachednetworkconfiguration +[vm-netattach]: /rest/api/networkcloud/virtual-machines/create-or-update?view=rest-networkcloud-2023-07-01&tabs=HTTP#networkattachment +[attachednetconf]: /rest/api/networkcloud/kubernetes-clusters/create-or-update?view=rest-networkcloud-2023-07-01&tabs=HTTP#attachednetworkconfiguration ## Operator Nexus Kubernetes networking |
peering-service | Location Partners | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/location-partners.md | The following table provides information on the Peering Service connectivity par | [Dimension Data](https://www.dimensiondata.com/en-gb/about-us/our-partners/microsoft/)| Africa | | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/)| Asia, Europe, North America | | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) | Asia, North America |-| [France-IX](https://www.franceix.net/en/english-services/cloud-access/microsoft-azure-peering-service) | Europe | +| [France-IX](https://www.franceix.net/en/english-services/cloud/microsoft-azure-peering-service) | Europe | | [IIJ](https://www.iij.ad.jp/en/) | Japan | | [Intercloud](https://www.intercloud.com/partners/microsoft-azure)| Europe | | [Kordia](https://www.kordia.co.nz/cloudconnect) | Oceania | The following table provides information on the Peering Service connectivity par | Marseilles | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) | | Mumbai | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) | | New York | [DE-CIX](https://www.de-cix.net/services/microsoft-azure-peering-service/) |-| Paris | [France-IX](https://www.franceix.net/en/english-services/cloud-access/microsoft-azure-peering-service) | +| Paris | [France-IX](https://www.franceix.net/en/english-services/cloud/microsoft-azure-peering-service) | | San Jose | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) | | Santiago | [PIT Chile](https://www.pitchile.cl/wp/maps/) | | Seattle | [Equinix IX](https://www.equinix.com/interconnection-services/internet-exchange/) | |
postgresql | How To Manage Azure Ad Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-azure-ad-users.md | DROP ROLE rolename; ## Create a role using Microsoft Entra object identifier ```sql-pg_catalog.pgaadauth_create_principal(roleName text, objectId text, objectType text, isAdmin boolean, isMfa boolean) +pg_catalog.pgaadauth_create_principal_with_oid(roleName text, objectId text, objectType text, isAdmin boolean, isMfa boolean) ``` #### Arguments |
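For context on the corrected function name in the row above, here's a hedged Python sketch (using `psycopg2`) of calling `pgaadauth_create_principal_with_oid` while connected as a Microsoft Entra administrator. The connection values, role name, and object ID are placeholders, and the object type values shown are assumptions based on common usage.

```python
# Hedged sketch: call the corrected function while connected as a Microsoft
# Entra administrator. Connection values, role name, and object ID are
# placeholders; the password is an Entra access token for the admin user.
import psycopg2

conn = psycopg2.connect(
    host="<server-name>.postgres.database.azure.com",
    dbname="postgres",
    user="<entra-admin-user>",
    password="<entra-access-token>",
    sslmode="require",
)
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT * FROM pg_catalog.pgaadauth_create_principal_with_oid(%s, %s, %s, %s, %s);",
        (
            "my_sp_role",                            # roleName
            "00000000-0000-0000-0000-000000000000",  # objectId (placeholder)
            "service",                               # objectType: assumed "user", "group", or "service"
            False,                                   # isAdmin
            False,                                   # isMfa
        ),
    )
    print(cur.fetchall())
conn.close()
```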
postgresql | How To Restore Server Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-portal.md | description: This article describes how to perform restore operations in Azure D Previously updated : 04/27/2024 Last updated : 07/26/2024 Follow these steps to restore your Azure Database for PostgreSQL flexible server 5. Under **Server details**, for **Name**, provide a server name. For **Availability zone**, you can optionally choose an availability zone to restore to. - :::image type="content" source="./media/how-to-restore-server-portal/restore-custom-2.png" alt-text="Screenshot that shows selections for restoring to a custom restore point."::: + :::image type="content" source="./media/how-to-restore-server-portal/restore-custom.png" alt-text="Screenshot that shows selections for restoring to a custom restore point."::: 6. Select **OK**. A notification shows that the restore operation has started. If your source server is configured with geo-redundant backup, you can restore t 2. Select **Overview** from the left pane, and then select **Restore**. - :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-click.png" alt-text="Screenshot that shows the Restore button."::: + :::image type="content" source="./media/how-to-restore-server-portal/restore-overview.png" alt-text="Screenshot that shows the Restore button."::: -3. Under **Source details**, for **Geo-redundant restore**, select the **Restore to paired region** checkbox. +3. Under **Source details**, for **Geo-redundant restore (preview)**, select the **Restore to paired region** checkbox. :::image type="content" source="./media/how-to-restore-server-portal/geo-restore-choose-checkbox.png" alt-text="Screenshot that shows the option for restoring to a paired region for geo-redundant restore."::: |
search | Cognitive Search Debug Session | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-debug-session.md | Expression Evaluator gives you full interactive access for testing skill context :::image type="content" source="media/cognitive-search-debug/expression-evaluator.png" alt-text="Screenshot of Expression Evaluator."::: +## Limitations ++The debug sessions feature doesn't support the [SharePoint Online indexer](search-howto-index-sharepoint-online.md). + ## Next steps Now that you understand the elements of debug sessions, start your first debug session on an existing skillset. |
search | Search Howto Index Plaintext Blobs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-plaintext-blobs.md | api-key: [admin key] } ``` -By default, the `UTF-8` encoding is assumed. To specify a different encoding, use the `encoding` configuration property: +By default, the `UTF-8` encoding is assumed. To specify a different encoding, use the `encoding` configuration property. The supported encodings are listed under the **.NET 5 and later support** column in the [list of encodings](/dotnet/fundamentals/runtime-libraries/system-text-encoding#list-of-encodings). ```http { ... other parts of indexer definition- "parameters" : { "configuration" : { "parsingMode" : "text", "encoding" : "windows-1252" } } + "parameters" : { "configuration" : { "parsingMode" : "text", "encoding" : "iso-8859-1" } } } ``` |
search | Search Howto Managed Identities Data Sources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md | You can use a preview Management REST API instead of the portal to assign a user + "type" is the type of identity. Valid values are "SystemAssigned", "UserAssigned", or "SystemAssigned, UserAssigned" for both. A value of "None" clears any previously assigned identities from the search service. + "userAssignedIdentities" includes the details of the user assigned managed identity. This identity [must already exist](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) before you can specify it in the Update Service request.+ A custom skill targets the endpoint of an Azure function or app hosting custom c An Azure OpenAI embedding skill and vectorizer in AI Search target the endpoint of an Azure OpenAI service hosting an embedding model. The endpoint is specified in the [Azure OpenAI embedding skill definition](cognitive-search-skill-azure-openai-embedding.md) and/or in the [Azure OpenAI vectorizer definition](vector-search-how-to-configure-vectorizer.md). The system-managed identity is used if configured and if the "apikey" and "authIdentity" are empty. The "authIdentity" property is used for user-assigned managed identity only. +**System-managed identity example:** ```json { A custom skill targets the endpoint of an Azure function or app hosting custom c ] ``` +**User-assigned managed identity example:** ++```json +{ + "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill", + "description": "Connects a deployed embedding model.", + "resourceUri": "https://url.openai.azure.com/", + "deploymentId": "text-embedding-ada-002", + "modelName": "text-embedding-ada-002", + "inputs": [ + { + "name": "text", + "source": "/document/content" + } + ], + "outputs": [ + { + "name": "embedding" + } + ], + "authIdentity": { + "@odata.type": "#Microsoft.Azure.Search.DataUserAssignedIdentity", + "userAssignedIdentity": "/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user-assigned-managed-identity-name>" + } +} +``` ++```json + "vectorizers": [ + { + "name": "my_azure_open_ai_vectorizer", + "kind": "azureOpenAI", + "azureOpenAIParameters": { + "resourceUri": "https://url.openai.azure.com", + "deploymentId": "text-embedding-ada-002", + "modelName": "text-embedding-ada-002", + "authIdentity": { + "@odata.type": "#Microsoft.Azure.Search.DataUserAssignedIdentity", + "userAssignedIdentity": "/subscriptions/<subscription_id>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user-assigned-managed-identity-name>" + } + } + } + ] +``` + ## Check for firewall access If your Azure resource is behind a firewall, make sure there's an inbound rule that admits requests from your search service. |
sentinel | Better Mobile Threat Defense Mtd | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/better-mobile-threat-defense-mtd.md | This is autogenerated content. For changes, contact the solution provider. | | | | **Log Analytics table(s)** | BetterMTDIncidentLog_CL<br/> BetterMTDDeviceLog_CL<br/> BetterMTDAppLog_CL<br/> BetterMTDNetflowLog_CL<br/> | | **Data collection rules support** | Not currently supported |-| **Supported by** | [Better Mobile Security Inc.](https://www.better.mobi/about#contact-us) | +| **Supported by** | Better Mobile Security Inc. | ## Query samples |
sentinel | Trend Micro Deep Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-deep-security.md | This is autogenerated content. For changes, contact the solution provider. | **Kusto function url** | https://aka.ms/TrendMicroDeepSecurityFunction | | **Log Analytics table(s)** | CommonSecurityLog (TrendMicroDeepSecurity)<br/> | | **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |-| **Supported by** | [Trend Micro](https://success.trendmicro.com/dcx/s/?language=en_US) | +| **Supported by** | [Trend Micro](https://success.trendmicro.com/) | ## Query samples |
sentinel | Trend Micro Tippingpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-micro-tippingpoint.md | This is autogenerated content. For changes, contact the solution provider. | | | | **Log Analytics table(s)** | CommonSecurityLog (TrendMicroTippingPoint)<br/> | | **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |-| **Supported by** | [Trend Micro](https://success.trendmicro.com/dcx/s/contactus?language=en_US) | +| **Supported by** | [Trend Micro](https://success.trendmicro.com/) | ## Query samples |
sentinel | Trend Vision One | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/trend-vision-one.md | This is autogenerated content. For changes, contact the solution provider. | | | | **Log Analytics table(s)** | TrendMicro_XDR_WORKBENCH_CL<br/> TrendMicro_XDR_RCA_Task_CL<br/> TrendMicro_XDR_RCA_Result_CL<br/> TrendMicro_XDR_OAT_CL<br/> | | **Data collection rules support** | Not currently supported |-| **Supported by** | [Trend Micro](https://success.trendmicro.com/dcx/s/?language=en_US) | +| **Supported by** | [Trend Micro](https://success.trendmicro.com/) | ## Query samples |
sentinel | Vmware Carbon Black Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-carbon-black-cloud.md | -The [VMware Carbon Black Cloud](https://www.vmware.com/products/carbon-black-cloud.html) connector provides the capability to ingest Carbon Black data into Microsoft Sentinel. The connector provides visibility into Audit, Notification and Event logs in Microsoft Sentinel to view dashboards, create custom alerts, and to improve monitoring and investigation capabilities. +The [VMware Carbon Black Cloud](https://carbonblack.vmware.com/resource/carbon-black-cloud-audit-and-remediation-technical-overview) connector provides the capability to ingest Carbon Black data into Microsoft Sentinel. The connector provides visibility into Audit, Notification and Event logs in Microsoft Sentinel to view dashboards, create custom alerts, and to improve monitoring and investigation capabilities. This is autogenerated content. For changes, contact the solution provider. |
sentinel | Vmware Vcenter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/vmware-vcenter.md | -The [vCenter](https://www.vmware.com/in/products/vcenter-server.html) connector allows you to easily connect your vCenter server logs with Microsoft Sentinel. This gives you more insight into your organization's data centers and improves your security operation capabilities. +The [vCenter](https://www.vmware.com/products/cloud-infrastructure/vcenter) connector allows you to easily connect your vCenter server logs with Microsoft Sentinel. This gives you more insight into your organization's data centers and improves your security operation capabilities. This is autogenerated content. For changes, contact the solution provider. |
sentinel | Threat Intelligence Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/threat-intelligence-integration.md | To connect to Threat Intelligence Platform (TIP) feeds, see [connect Threat Inte ### Recorded Future Security Intelligence Platform -- [Recorded Future](https://www.recordedfuture.com/integrations/microsoft-azure/) makes use of Azure Logic Apps (playbooks) to connect to Microsoft Sentinel. See the [specialized instructions](https://go.recordedfuture.com/hubfs/partners/microsoft-azure-installation-guide.pdf) necessary to take full advantage of the complete offering.+- [Recorded Future](https://www.recordedfuture.com/integrations/microsoft-azure/) makes use of Azure Logic Apps (playbooks) to connect to Microsoft Sentinel. See the [specialized instructions](https://www.recordedfuture.com/integrations/microsoft) necessary to take full advantage of the complete offering. ### ThreatConnect Platform |
service-bus-messaging | Message Browsing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-browsing.md | Title: Azure Service Bus - Browse or peek messages description: Browse and peek Service Bus messages enables an Azure Service Bus client to enumerate all messages in a queue or subscription. Previously updated : 06/08/2023 Last updated : 07/25/2024 #customer intent: As a developer, I want to know how to browse or peek messages in a queue or a subscription, for diagnostic and debugging purposes. The Peek operation on a queue or a subscription returns at most the requested nu | Scheduled messages | Yes for queues. No for subscriptions | ## Dead-lettered messages-To peek into **Dead-lettered** messages of a queue or subscription, the peek operation should be run on the dead letter queue associated with the queue or subscription. For more information, see [accessing dead letter queues](service-bus-dead-letter-queues.md#path-to-the-dead-letter-queue). +To peek into **dead-lettered** messages of a queue or subscription, the peek operation should be run on the dead letter queue associated with the queue or subscription. For more information, see [accessing dead letter queues](service-bus-dead-letter-queues.md#path-to-the-dead-letter-queue). ## Expired messages Expired messages might be included in the results returned from the Peek operation. Consumed and expired messages are cleaned up by an asynchronous "garbage collection" run. This step might not necessarily occur immediately after messages expire. That's why, a peek operation might return messages that have already expired. These messages will be removed or dead-lettered when a receive operation is invoked on the queue or subscription the next time. Keep this behavior in mind when attempting to recover deferred messages from the queue. You can also pass a SequenceNumber to a peek operation. It's used to determine w You can specify the maximum number of messages that you want the peek operation to return. But, there's no way to guarantee a minimum size for the batch. The number of returned messages depends on several factors of which the most impactful is how quickly the network can stream messages to the client.  -Here's an example snippet for peeking all messages with the Python Service Bus SDK. The `sequence_number​` can be used to track the last peeked message and start browsing at the next message. ### [C#](#tab/csharp) +Here's an example snippet for peeking all messages with the .NET SDK. The `SequenceNumber​` can be used to track the last peeked message and start browsing at the next message. + ```csharp using Azure.Messaging.ServiceBus; Peek round complete ### [Python](#tab/python) +Here's an example snippet for peeking all messages with the Python Service Bus SDK. The `sequence_number​` can be used to track the last peeked message and start browsing at the next message. + ```python import os from azure.servicebus import ServiceBusClient |
service-bus-messaging | Message Deferral | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-deferral.md | Title: Azure Service Bus - message deferral description: This article explains how to defer delivery of Azure Service Bus messages. The message remains in the queue or subscription, but it's set aside. Previously updated : 06/08/2023 Last updated : 07/25/2024 # Message deferral |
service-connector | Tutorial Python Aks Sql Database Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-aks-sql-database-connection-string.md | + + Title: Connect an AKS app to Azure SQL Database ++description: Learn how to connect an app hosted on Azure Kubernetes Service (AKS) to Microsoft Azure SQL Database. +#customer intent: As a developer, I want to connect my application hosted on AKS to Azure SQL Database. ++++ Last updated : 07/23/2024+++# Tutorial: Connect an AKS app to Azure SQL Database (preview) ++In this tutorial, you learn how to connect an application deployed to AKS, to an Azure SQL Database, using service connector (preview). You complete the following tasks: ++> [!div class="checklist"] +> * Create an Azure SQL Database resource +> * Create a connection between the AKS cluster and the database with Service Connector. +> * Update your container +> * Update your application code +> * Clean up Azure resources. ++## Prerequisites ++* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). +* An application deployed to AKS. +* [!INCLUDE [azure-cli-prepare-your-environment.md](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] ++## Create an Azure SQL Database ++1. Create a resource group to store the Azure resources you create in this tutorial using the [`az group create`](/cli/azure/group#az_group_create) command. ++ ```azurecli-interactive + az group create \ + --name $RESOURCE_GROUP \ + --location eastus + ``` ++1. Follow the instructions to [create an Azure SQL Database](/azure/azure-sql/database/single-database-create-quickstart) in the resource group you created in the previous step. Make note of the server name, database name, and the database credentials for use throughout this tutorial. ++## Create a service connection in AKS with Service Connector (preview) ++### Register the Service Connector and Kubernetes Configuration resource providers ++Register the Service Connector and Kubernetes Configuration resource providers using the [`az provider register`](/cli/azure/provider#az-provider-register) command. ++```azurecli-interactive +az provider register --namespace Microsoft.ServiceLinker +``` ++```azurecli-interactive +az provider register --namespace Microsoft.KubernetesConfiguration +``` ++> [!TIP] +> You can check if these resource providers are already registered using the `az provider show --namespace "Microsoft.ServiceLinker" --query registrationState` and `az provider show --namespace "Microsoft.KubernetesConfiguration" --query registrationState` commands. If the output is `Registered`, then the service provider is already registered. +++### Create a new connection ++Create a service connection between your AKS cluster and your SQL database in the Azure portal or the Azure CLI. ++### [Azure portal](#tab/azure-portal) ++1. In the [Azure portal](https://portal.azure.com/), navigate to your AKS cluster resource. +2. Select **Settings** > **Service Connector (Preview)** > **Create**. +3. On the **Basics** tab, configure the following settings: ++ * **Kubernetes namespace**: Select **default**. + * **Service type**: Select **SQL Database**. + * **Connection name**: Use the connection name provided by Service Connector or enter your own connection name. + * **Subscription**: Select the subscription that includes the Azure SQL Database service. + * **SQL server**: Select your SQL server. 
+ * **SQL database**: Select your SQL database. + * **Client type**: The code language or framework you use to connect to the target service, such as **Python**. + + :::image type="content" source="media/tutorial-ask-sql/create-connection.png" alt-text="Screenshot of the Azure portal showing the form to create a new connection to a SQL database in AKS."::: ++4. Select **Next: Authentication**. On the **Authentication** tab, enter your database username and password. +5. Select **Next: Networking** > **Next: Review + create** >**Create**. +6. Once the deployment is successful, you can view information about the new connection in the **Service Connector** pane. ++### [Azure CLI](#tab/azure-cli) ++Create a service connection to the SQL database using the [`az aks connection create sql`](/cli/azure/aks/connection/create#az-aks-connection-create-sql) command. You can run this command in two different ways: + + * generate the new connection step by step. + + ```azurecli-interactive + az aks connection create sql + ``` + + * generate the new connection at once. Make sure you replace the following placeholders with your own information: `<source-subscription>`, `<source_resource_group>`, `<cluster>`, `<target-subscription>`, `<target_resource_group>`, `<server>`, `<database>`, and `<***>`. + + ```azurecli-interactive + az aks connection create sql \ + --source-id /subscriptions/<source-subscription>/resourceGroups/<source_resource_group>/providers/Microsoft.ContainerService/managedClusters/<cluster> \ + --target-id /subscriptions/<target-subscription>/resourceGroups/<target_resource_group>/providers/Microsoft.Sql/servers/<server>/databases/<database> \ + --secret name=<secret-name> secret=<secret> + ``` ++++## Update your container ++Now that you created a connection between your AKS cluster and the database, you need to retrieve the connection secrets and deploy them in your container. ++1. In the [Azure portal](https://portal.azure.com/), navigate to your AKS cluster resource and select **Service Connector (Preview)**. +1. Select the newly created connection, and then select **YAML snippet**. This action opens a panel displaying a sample YAML file generated by Service Connector. +1. To set the connection secrets as environment variables in your container, you have two options: + + * Directly create a deployment using the YAML sample code snippet provided. The snippet includes highlighted sections showing the secret object that will be injected as the environment variables. Select **Apply** to proceed with this method. ++ :::image type="content" source="media/tutorial-ask-sql/sample-yaml-snippet.png" alt-text="Screenshot of the Azure portal showing the sample YAML snippet to create a new connection to a SQL database in AKS."::: ++ * Alternatively, under **Resource Type**, select **Kubernetes Workload**, and then select an existing Kubernetes workload. This action sets the secret object of your new connection as the environment variables for the selected workload. After selecting the workload, select **Apply**. ++ :::image type="content" source="media/tutorial-ask-sql/kubernetes-snippet.png" alt-text="Screenshot of the Azure portal showing the Kubernetes snippet to create a new connection to a SQL database in AKS."::: ++## Update your application code ++As a final step, update your application code to use your environment variables, by [following these instructions](how-to-integrate-sql-database.md#connection-string). 
++## Clean up resources ++If you no longer need the resources you created when following this tutorial, you can remove them by deleting the Azure resource group. ++Delete your resource group using the [`az group delete`](/cli/azure/group#az_group_delete) command. ++```azurecli-interactive +az group delete --resource-group $RESOURCE_GROUP +``` ++## Related content ++Read the following articles to learn more about Service Connector concepts and how it helps AKS connect to Azure: ++* [Use Service Connector to connect AKS clusters to other cloud services](./how-to-use-service-connector-in-aks.md) +* [Learn about Service Connector concepts](./concept-service-connector-internals.md) |
service-fabric | How To Managed Cluster Deploy With Subnet Per Nodetype | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-deploy-with-subnet-per-nodetype.md | Title: Deploy a Service Fabric managed cluster with a subnet per NodeType -description: This article provides the configuration necessary to deploy a Service Fabric managed cluster with different subnets per secondary NodeType. +description: This article provides the configuration necessary to deploy a Service Fabric managed cluster with different subnets per NodeType. Last updated 05/17/2024 # Deploy a Service Fabric managed cluster with a subnet per NodeType -Service Fabric managed clusters now supports different subnets per secondary NodeType in a [Bring-Your-Own-Virtual-Network scenario](how-to-managed-cluster-networking.md#bring-your-own-virtual-network). With different subnets per NodeType, customers can have specific applications deployed to specific subnets and utilize traffic management via Network Security Group (NSG) rules. Customers can expect increased network isolation for their deployments through this configuration. +Service Fabric managed clusters now support different subnets per NodeType in a [Bring-Your-Own-Virtual-Network scenario](how-to-managed-cluster-networking.md#bring-your-own-virtual-network). With different subnets per NodeType, customers can have specific applications deployed to specific subnets and utilize traffic management via Network Security Group (NSG) rules. Customers can expect increased network isolation for their deployments through this configuration. This feature works on both primary and secondary NodeTypes. ++ ## Prerequisites Subnet per NodeType only works for Service Fabric API version `2022-10-01 previe ## Considerations and limitations -* **Only secondary NodeTypes** can support subnet per NodeType. * For existing clusters in a Bring-Your-Own-Virtual-Network configuration with a `subnetId` specified, enabling subnet per NodeType overrides the existing value for the current NodeType. * For new clusters, customers need to specify `useCustomVNet : true` at the cluster level. This setting indicates that the cluster uses Bring-Your-Own-Virtual-Network but that the subnet is specified at the NodeType level. For such clusters, a virtual network isn't created in the managed resource group. For such clusters, the `subnetId` property is required for NodeTypes. |
service-fabric | Service Fabric Application And Service Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-and-service-security.md | The first step to making API-level trust decisions is authentication. Authentica If services can be accessed directly, an authentication service like Microsoft Entra ID or a dedicated authentication microservice acting as a security token service (STS) can be used to authenticate users. Trust decisions are shared between services with security tokens or cookies. -For ASP.NET Core, the primary mechanism for [authenticating users](/dotnet/standard/microservices-architecture/secure-net-microservices-web-applications/) is the ASP.NET Core Identity membership system. ASP.NET Core Identity stores user information (including sign-in information, roles, and claims) in a data store configured by the developer. ASP.NET Core Identity supports two-factor authentication. External authentication providers are also supported, so users can sign in using existing authentication processes from providers like Microsoft, Google, Facebook, or Twitter. +For ASP.NET Core, the primary mechanism for [authenticating users](/dotnet/standard/microservices-architecture/secure-net-microservices-web-applications/) is the ASP.NET Core Identity membership system. ASP.NET Core Identity stores user information (including sign-in information, roles, and claims) in a data store configured by the developer. ASP.NET Core Identity supports two-factor authentication. External authentication providers are also supported, so users can sign in using existing authentication processes from providers like Microsoft, Google, Facebook, or X. ### Authorization After authentication, services need to authorize user access or determine what a user is able to do. This process allows a service to make APIs available to some authenticated users, but not to all. Authorization is orthogonal and independent from authentication, which is the process of ascertaining who a user is. Authentication may create one or more identities for the current user. |
site-recovery | Hyper V Azure Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/hyper-v-azure-troubleshoot.md | All Hyper-V replication events are logged in the Hyper-V-VMMS\Admin log, located This tool can help with advanced troubleshooting: - For VMM, perform Site Recovery log collection using the [Support Diagnostics Platform (SDP) tool](https://social.technet.microsoft.com/wiki/contents/articles/28198.asr-data-collection-and-analysis-using-the-vmm-support-diagnostics-platform-sdp-tool.aspx).-- For Hyper-V without VMM, [download this tool](https://answers.microsoft.com/en-us/windows/forum/all/unable-to-open-diagcab-files/e7f8e4e5-b442-4e53-af7a-90e74985a73f), and run it on the Hyper-V host to collect the logs.+- For Hyper-V without VMM, [download this tool](https://answers.microsoft.com/windows/forum/all/unable-to-open-diagcab-files/e7f8e4e5-b442-4e53-af7a-90e74985a73f), and run it on the Hyper-V host to collect the logs. |
site-recovery | Site Recovery Deployment Planner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-deployment-planner.md | The tool has two main phases: profiling and report generation. There is also a t | Server requirement | Description| |||-|Profiling and throughput measurement| <ul><li>Operating system: Windows Server 2016 or Windows Server 2012 R2<br>(ideally matching at least the [size recommendations for the configuration server](/en-in/azure/site-recovery/site-recovery-plan-capacity-vmware#size-recommendations-for-the-configuration-server))</li><li>Machine configuration: 8 vCPUs, 16 GB RAM, 300 GB HDD</li><li>[.NET Framework 4.5](https://aka.ms/dotnet-framework-45)</li><li>[VMware vSphere PowerCLI 6.0 R3](https://aka.ms/download_powercli)</li><li>[Visual C++ Redistributable for Visual Studio 2012](https://aka.ms/vcplusplus-redistributable)</li><li>Internet access to Azure (`*.blob.core.windows.net`) from this server, port 443<br>[This is optional. You can choose to provide the available bandwidth during Report Generation manually.]</li><li>Azure storage account</li><li>Administrator access on the server</li><li>Minimum 100 GB of free disk space (assuming 1,000 VMs with an average of three disks each, profiled for 30 days)</li><li>VMware vCenter statistics level settings can be 1 or higher level</li><li>Allow vCenter port (default 443): Site Recovery Deployment Planner uses this port to connect to the vCenter server/ESXi host</ul></ul>| -| Report generation | A Windows PC or Windows Server with Excel 2013 or later.<li>[.NET Framework 4.5](https://aka.ms/dotnet-framework-45)</li><li>[Visual C++ Redistributable for Visual Studio 2012](https://aka.ms/vcplusplus-redistributable)</li><li>[VMware vSphere PowerCLI 6.0 R3](https://aka.ms/download_powercli) is required only when you pass -User option in the report generation command to fetch the latest VM configuration information of the VMs. The Deployment Planner connects to vCenter server. Allow vCenter port (default 443) port to connect to vCenter server.</li>| +|Profiling and throughput measurement| <ul><li>Operating system: Windows Server 2016 or Windows Server 2012 R2<br>(ideally matching at least the [size recommendations for the configuration server](/en-in/azure/site-recovery/site-recovery-plan-capacity-vmware#size-recommendations-for-the-configuration-server))</li><li>Machine configuration: 8 vCPUs, 16 GB RAM, 300 GB HDD</li><li>[.NET Framework 4.5](https://aka.ms/dotnet-framework-45)</li><li>[VMware vSphere PowerCLI 6.0 R3](https://vdc-download.vmware.com/vmwb-repository/dcr-public/7569b8fd-f359-420e-abe7-84c9f2c8703b/a5f7dc07-6ffe-4809-a402-8290b15ff04d/powercli60r3-releasenotes.html)</li><li>[Visual C++ Redistributable for Visual Studio 2012](https://aka.ms/vcplusplus-redistributable)</li><li>Internet access to Azure (`*.blob.core.windows.net`) from this server, port 443<br>[This is optional. 
You can choose to provide the available bandwidth during Report Generation manually.]</li><li>Azure storage account</li><li>Administrator access on the server</li><li>Minimum 100 GB of free disk space (assuming 1,000 VMs with an average of three disks each, profiled for 30 days)</li><li>VMware vCenter statistics level settings can be 1 or higher level</li><li>Allow vCenter port (default 443): Site Recovery Deployment Planner uses this port to connect to the vCenter server/ESXi host</ul></ul>| +| Report generation | A Windows PC or Windows Server with Excel 2013 or later.<li>[.NET Framework 4.5](https://aka.ms/dotnet-framework-45)</li><li>[Visual C++ Redistributable for Visual Studio 2012](https://aka.ms/vcplusplus-redistributable)</li><li>[VMware vSphere PowerCLI 6.0 R3](https://vdc-download.vmware.com/vmwb-repository/dcr-public/7569b8fd-f359-420e-abe7-84c9f2c8703b/a5f7dc07-6ffe-4809-a402-8290b15ff04d/powercli60r3-releasenotes.html) is required only when you pass -User option in the report generation command to fetch the latest VM configuration information of the VMs. The Deployment Planner connects to vCenter server. Allow vCenter port (default 443) port to connect to vCenter server.</li>| | User permissions | Read-only permission for the user account that's used to access the VMware vCenter server/VMware vSphere ESXi host during profiling | > [!NOTE] |
spring-apps | How To Private Network Access Backend Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-private-network-access-backend-storage.md | There are two sets of private link resources deployed in the resource group, eac - A private endpoint that represents the backend storage account's private endpoint. - A network interface (NIC) that maintains a private IP address within the service runtime subnet.-- A private DNS zone that's deployed for your virtual network, with a DNS A record also created for the storage account within this DNS zone.+- A private DNS zone deployed for your virtual network, with a DNS A record also created for the storage account within this DNS zone. > [!IMPORTANT] > The resource groups are fully managed by the Azure Spring Apps service. Don't manually delete or modify any resource inside these resource groups. az spring update \ --enable-private-storage-access <true-or-false> ``` +## Use central DNS resolution ++A centralized DNS management architecture is documented in the hub and spoke network architecture in [Private Link and DNS integration at scale](/azure/cloud-adoption-framework/ready/azure-best-practices/private-link-and-dns-integration-at-scale). In this architecture, all private DNS zones are deployed and managed centrally in a different central virtual network than the Azure Spring Apps service instance. If you're using this architecture, you can enable central DNS resolution for private storage access by configuring the DNS settings appropriately. This setup ensures that: ++- When a private endpoint is created, the corresponding DNS records are automatically added to the centralized private DNS zone. +- DNS records are managed according to the lifecycle of the private endpoint, meaning they are automatically removed when the private endpoint is deleted. ++The following sections explain how to enable central DNS resolution for Azure Storage blobs by using [Azure Policy](/azure/governance/policy/overview), assuming you already have the private DNS zone `privatelink.blob.core.windows.net` set up in the central virtual network. The same principles apply to Azure Storage files and other Azure services that support Private Link. ++### Policy definition ++In addition to the private DNS zone, you need to create a custom Azure Policy definition. For more information, see [Tutorial: Create a custom policy definition](/azure/governance/policy/tutorials/create-custom-policy-definition). This definition automatically creates the required DNS record in the central private DNS zone when you create a private endpoint. ++The following policy is triggered when you create a private endpoint resource with a service-specific `groupId`. The `groupId` is the ID of the group obtained from the remote resource or service that this private endpoint should connect to. In this example, the `groupId` for Azure Storage blobs is `blob`. For more information on the `groupId` for other Azure services, see the tables in [Azure Private Endpoint private DNS zone values](../../private-link/private-endpoint-dns.md), under the **Subresource** column. ++The policy then triggers a deployment of a `privateDNSZoneGroup` within the private endpoint, which associates the private endpoint with the private DNS zone specified as the parameter. 
In the following example, the private DNS zone resource ID is `/subscriptions/<subscription-id>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/privateDnsZones/privatelink.blob.core.windows.net`: ++```json +{ + "mode": "Indexed", + "policyRule": { + "if": { + "allOf": [ + { + "field": "type", + "equals": "Microsoft.Network/privateEndpoints" + }, + { + "value": "[contains(resourceGroup().name, 'ap-res_')]", + "equals": "true" + }, + { + "count": { + "field": "Microsoft.Network/privateEndpoints/privateLinkServiceConnections[*].groupIds[*]", + "where": { + "field": "Microsoft.Network/privateEndpoints/privateLinkServiceConnections[*].groupIds[*]", + "equals": "blob" + } + }, + "greaterOrEquals": 1 + } + ] + }, + "then": { + "effect": "deployIfNotExists", + "details": { + "type": "Microsoft.Network/privateEndpoints/privateDnsZoneGroups", + "evaluationDelay": "AfterProvisioningSuccess", + "roleDefinitionIds": [ + "/providers/Microsoft.Authorization/roleDefinitions/4d97b98b-1d4f-4787-a291-c67834d212e7" + ], + "deployment": { + "properties": { + "mode": "incremental", + "template": { + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "privateDnsZoneId": { + "type": "string" + }, + "privateEndpointName": { + "type": "string" + }, + "location": { + "type": "string" + } + }, + "resources": [ + { + "name": "[concat(parameters('privateEndpointName'), '/deployedByPolicy')]", + "type": "Microsoft.Network/privateEndpoints/privateDnsZoneGroups", + "apiVersion": "2020-03-01", + "location": "[parameters('location')]", + "properties": { + "privateDnsZoneConfigs": [ + { + "name": "storageBlob-privateDnsZone", + "properties": { + "privateDnsZoneId": "[parameters('privateDnsZoneId')]" + } + } + ] + } + } + ] + }, + "parameters": { + "privateDnsZoneId": { + "value": "[parameters('privateDnsZoneId')]" + }, + "privateEndpointName": { + "value": "[field('name')]" + }, + "location": { + "value": "[field('location')]" + } + } + } + } + } + } + }, + "parameters": { + "privateDnsZoneId": { + "type": "String", + "metadata": { + "displayName": "privateDnsZoneId", + "description": null, + "strongType": "Microsoft.Network/privateDnsZones" + } + } + } +} +``` ++### Policy assignment ++After you deploy the policy definition, assign the policy at the subscription hosting the Azure Spring Apps service instances and specify the central private DNS zone as the parameter. ++The central private DNS zone and Azure Spring Apps service instance might be hosted in the different subscriptions. In this case, remember to assign the [Private DNS Zone Contributor role](/azure/dns/dns-protect-private-zones-recordsets) in the subscription and resource group where the private DNS zones are hosted to the managed identity created by the `DeployIfNotExists` policy assignment that's responsible to create and manage the private endpoint DNS record in the private DNS zone. For more information, see the [Configure the managed identity](../../governance/policy/how-to/remediate-resources.md?tabs=azure-portal#configure-the-managed-identity) section of [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md?tabs=azure-portal). ++After you finish the configurations, when you enable or disable the private storage access feature, the DNS records for private endpoints are automatically registered - and removed after a private endpoint is deleted - in the corresponding private DNS zone. 
+ ## Extra costs The Azure Spring Apps instance doesn't incur charges for this feature. However, you're billed for the private link resources hosted in your subscription that support this feature. For more information, see [Azure Private Link Pricing](https://azure.microsoft.com/pricing/details/private-link/) and [Azure DNS Pricing](https://azure.microsoft.com/pricing/details/dns/). The Azure Spring Apps instance doesn't incur charges for this feature. However, If you're using a custom domain name system (DNS) server and the Azure DNS IP `168.63.129.16` isn't configured as the upstream DNS server, you must manually bind all the DNS records of the private DNS zones shown in the resource group `ap-res_{service instance name}_{service instance region}` to resolve the private IP addresses. -## Next step +## Next steps -[Customer responsibilities for running Azure Spring Apps in a virtual network](vnet-customer-responsibilities.md) +* [Customer responsibilities for running Azure Spring Apps in a virtual network](vnet-customer-responsibilities.md) +* [Private Link and DNS integration at scale](/azure/cloud-adoption-framework/ready/azure-best-practices/private-link-and-dns-integration-at-scale) |
storage-mover | Service Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/service-overview.md | The [resource hierarchy article](resource-hierarchy.md) has more information abo ## Using Azure Storage Mover and Azure Data Box -When transitioning on-premises workloads to Azure Storage, reducing downtime and ensuring predictable periods of unavailability is crucial for users and business operations. For the initial bulk migration, you can use [Azure Data Box](https://learn.microsoft.com/azure/databox/) and combine it with Azure Storage Mover for online catch-up. +When transitioning on-premises workloads to Azure Storage, reducing downtime and ensuring predictable periods of unavailability are crucial for users and business operations. For the initial bulk migration, you can use [Azure Data Box](/azure/databox/) and combine it with Azure Storage Mover for online catch-up. Using Azure Data Box conserves significant network bandwidth. However, active workloads on your source storage might undergo changes while the Data Box is in transit to an Azure Data Center. The "online catch-up" phase involves updating your cloud storage with these changes before fully cutting over the workload to use the cloud data. This typically requires minimal bandwidth since most data already resides in Azure, and only the delta needs to be transferred. Azure Storage Mover excels in this task. |
storage | Storage Blob Copy Async Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-async-go.md | + + Title: Copy a blob with asynchronous scheduling using Go ++description: Learn how to copy a blob with asynchronous scheduling in Azure Storage by using the Go client module. +++ Last updated : 07/25/2024+++ms.devlang: golang ++++# Copy a blob with asynchronous scheduling using Go +++This article shows how to copy a blob with asynchronous scheduling using the [Azure Storage client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section-readme). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL. You can also abort a pending copy operation. ++The methods covered in this article use the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and can be used when you want to perform a copy with asynchronous scheduling. For most copy scenarios where you want to move data into a storage account and have a URL for the source object, see [Copy a blob from a source object URL with Go](storage-blob-copy-url-go.md). +++## Set up your environment +++#### Authorization ++The authorization mechanism must have the necessary permissions to perform a copy operation, or to abort a pending copy. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. To learn more, see the authorization guidance for [Copy Blob](/rest/api/storageservices/copy-blob#authorization) or [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob#authorization). ++## About copying blobs with asynchronous scheduling ++The `Copy Blob` operation can finish asynchronously and is performed on a best-effort basis, which means that the operation isn't guaranteed to start immediately or complete within a specified time frame. The copy operation is scheduled in the background and performed as the server has available resources. The operation can complete synchronously if the copy occurs within the same storage account. ++A `Copy Blob` operation can perform any of the following actions: ++- Copy a source blob to a destination blob with a different name. The destination blob can be an existing blob of the same blob type (block, append, or page), or it can be a new blob created by the copy operation. +- Copy a source blob to a destination blob with the same name, which replaces the destination blob. This type of copy operation removes any uncommitted blocks and overwrites the destination blob's metadata. +- Copy a source file in the Azure File service to a destination blob. The destination blob can be an existing block blob, or can be a new block blob created by the copy operation. Copying from files to page blobs or append blobs isn't supported. +- Copy a snapshot over its base blob. By promoting a snapshot to the position of the base blob, you can restore an earlier version of a blob. +- Copy a snapshot to a destination blob with a different name. The resulting destination blob is a writeable blob and not a snapshot. ++The source blob for a copy operation can be one of the following types: block blob, append blob, page blob, blob snapshot, or blob version. The copy operation always copies the entire source blob or file. Copying a range of bytes or set of blocks isn't supported. 
++If the destination blob already exists, it must be of the same blob type as the source blob, and the existing destination blob is overwritten. The destination blob can't be modified while a copy operation is in progress, and a destination blob can only have one outstanding copy operation. ++To learn more about the `Copy Blob` operation, including information about properties, index tags, metadata, and billing, see [Copy Blob remarks](/rest/api/storageservices/copy-blob#remarks). ++## Copy a blob with asynchronous scheduling ++This section gives an overview of methods provided by the Azure Storage client module for Go to perform a copy operation with asynchronous scheduling. ++The following methods wrap the [Copy Blob](/rest/api/storageservices/copy-blob) REST API operation, and begin an asynchronous copy of data from the source blob: ++- [StartCopyFromURL](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob#Client.StartCopyFromURL) ++## Copy a blob from a source within Azure ++If you're copying a blob within the same storage account, the operation can complete synchronously. Access to the source blob can be authorized via Microsoft Entra ID (recommended), a shared access signature (SAS), or an account key. For an alternative synchronous copy operation, see [Copy a blob from a source object URL with Go](storage-blob-copy-url-go.md). ++If the copy source is a blob in a different storage account, the operation can complete asynchronously. The source blob must either be public or authorized via SAS token. The SAS token needs to include the **Read ('r')** permission. To learn more about SAS tokens, see [Delegate access with shared access signatures](../common/storage-sas-overview.md). ++The following example shows a scenario for copying a source blob from a different storage account with asynchronous scheduling. In this example, we create a source blob URL with an appended user delegation SAS token. The example assumes you provide your own SAS. The example also shows how to lease the source blob during the copy operation to prevent changes to the blob from a different client. The `Copy Blob` operation saves the `ETag` value of the source blob when the copy operation starts. If the `ETag` value is changed before the copy operation finishes, the operation fails. We also set the access tier for the destination blob to `Cool` using the [StartCopyFromURLOptions](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob#StartCopyFromURLOptions) struct. +++The following example shows sample usage: +++> [!NOTE] +> User delegation SAS tokens offer greater security, as they're signed with Microsoft Entra credentials instead of an account key. To create a user delegation SAS token, the Microsoft Entra security principal needs appropriate permissions. For authorization requirements, see [Get User Delegation Key](/rest/api/storageservices/get-user-delegation-key#authorization). ++## Copy a blob from a source outside of Azure ++You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. 
The following example shows a scenario for copying a blob from an accessible source object URL: +++The following example shows sample usage: +++## Check the status of a copy operation ++To check the status of an asynchronous `Copy Blob` operation, you can poll the [GetProperties](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob#Client.GetProperties) method and check the copy status. ++The following code example shows how to check the status of a copy operation: +++## Abort a copy operation ++Aborting a pending `Copy Blob` operation results in a destination blob of zero length. However, the metadata for the destination blob has the new values copied from the source blob or set explicitly during the copy operation. To keep the original metadata from before the copy, make a snapshot of the destination blob before calling one of the copy methods. ++To abort a pending copy operation, call the following operation: ++- [AbortCopyFromURL](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob#Client.AbortCopyFromURL) ++This method wraps the [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) REST API operation, which cancels a pending `Copy Blob` operation. The following code example shows how to abort a pending `Copy Blob` operation: +++## Resources ++To learn more about copying blobs with asynchronous scheduling using the Azure Blob Storage client module for Go, see the following resources. ++### Code samples ++- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/blob-storage-devguide-go/blob/main/cmd/copy-blob-async/copy_blob_async.go) ++### REST API operations ++The Azure SDK for Go contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Go paradigms. The methods covered in this article use the following REST API operations: ++- [Copy Blob](/rest/api/storageservices/copy-blob) (REST API) +- [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) (REST API) + |
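The Go samples for this article are pulled in from a separate samples repository and don't appear in the change details above. As a minimal sketch of the pattern the article describes, the following Go snippet starts an asynchronous copy with `StartCopyFromURL` and then polls `GetProperties` until the copy status is no longer pending. The account, container, and blob names and the SAS-protected source URL are placeholders, not values taken from the article.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
)

func main() {
	// Placeholder names -- replace with your own storage account, container, and blob.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	client, err := azblob.NewClient("https://<destination-account>.blob.core.windows.net", cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Destination blob client for the copy operation.
	destBlob := client.ServiceClient().NewContainerClient("sample-container").NewBlobClient("destination-blob")

	// Source object URL; for a source in another account, append a SAS token with read permission.
	srcURL := "https://<source-account>.blob.core.windows.net/source-container/source-blob?<sas-token>"

	// Schedule the copy. The service performs it asynchronously on a best-effort basis.
	resp, err := destBlob.StartCopyFromURL(context.TODO(), srcURL, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Copy scheduled with ID %s, status: %s\n", *resp.CopyID, *resp.CopyStatus)

	// Poll the destination blob's properties until the copy is no longer pending.
	for {
		props, err := destBlob.GetProperties(context.TODO(), nil)
		if err != nil {
			log.Fatal(err)
		}
		if *props.CopyStatus != blob.CopyStatusTypePending {
			fmt.Printf("Copy finished with status: %s\n", *props.CopyStatus)
			break
		}
		fmt.Printf("Copy in progress: %s bytes\n", *props.CopyProgress)
		time.Sleep(2 * time.Second)
	}
}
```

The polling loop and error handling are deliberately simple; a production caller would typically add a timeout and, if cancellation is needed, pass the returned copy ID to the abort operation described in the article.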
storage | Storage Blob Copy Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-go.md | + + Title: Copy a blob with Go ++description: Learn how to copy a blob in Azure Storage by using the Go client library. ++++ Last updated : 07/25/2024+++ms.devlang: golang ++++# Copy a blob with Go +++This article provides an overview of copy operations using the [Azure Storage client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section-readme). ++## About copy operations ++Copy operations can be used to move data within a storage account, between storage accounts, or into a storage account from a source outside of Azure. When using the Blob Storage client libraries to copy data resources, it's important to understand the REST API operations behind the client library methods. The following table lists REST API operations that can be used to copy data resources to a storage account. The table also includes links to detailed guidance about how to perform these operations using the [Azure Storage client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section-readme). ++| REST API operation | When to use | Client library methods | Guidance | +| | | | | +| [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) | This operation is preferred for scenarios where you want to move data into a storage account and have a URL for the source object. This operation completes synchronously. | [UploadBlobFromURL](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob#Client.UploadBlobFromURL) | [Copy a blob from a source object URL with Go](storage-blob-copy-url-go.md) | +| [Put Block From URL](/rest/api/storageservices/put-block-from-url) | For large objects, you can use [Put Block From URL](/rest/api/storageservices/put-block-from-url) to write individual blocks to Blob Storage, and then call [Put Block List](/rest/api/storageservices/put-block-list) to commit those blocks to a block blob. This operation completes synchronously. | [StageBlockFromURL](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob#Client.StageBlockFromURL) | [Copy a blob from a source object URL with Go](storage-blob-copy-url-go.md) | +| [Copy Blob](/rest/api/storageservices/copy-blob) | This operation can be used when you want asynchronous scheduling for a copy operation. | [StartCopyFromURL](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob#Client.StartCopyFromURL) | [Copy a blob with asynchronous scheduling using Go](storage-blob-copy-async-go.md) | ++For append blobs, you can use the [Append Block From URL](/rest/api/storageservices/append-block-from-url) operation to commit a new block of data to the end of an existing append blob. The following client library method wraps this operation: ++- [AppendBlockFromURL](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/appendblob#Client.AppendBlockFromURL) ++For page blobs, you can use the [Put Page From URL](/rest/api/storageservices/put-page-from-url) operation to write a range of pages to a page blob where the contents are read from a URL. 
The following client library method wraps this operation: ++- [UploadPagesFromURL](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/pageblob#Client.UploadPagesFromURL) ++## Client library resources ++- [Client module reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section-readme) +- [Client module source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/storage/azblob) +- [Package (pkg.go.dev)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob) |
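The overview above mentions `AppendBlockFromURL` for append blobs without an accompanying sample. The following Go sketch shows one way it could be used to commit a block read from a source URL to the end of an append blob; the account, container, and blob names and the SAS-protected source URL are placeholders rather than values from the article.

```go
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	// Placeholder names -- replace with your own account, container, and blob names.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	client, err := azblob.NewClient("https://<destination-account>.blob.core.windows.net", cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Get a client for the destination append blob and create it as a new, zero-length blob.
	appendBlob := client.ServiceClient().NewContainerClient("sample-container").NewAppendBlobClient("log-blob")
	if _, err := appendBlob.Create(context.TODO(), nil); err != nil {
		log.Fatal(err)
	}

	// Commit a new block to the end of the append blob, reading the contents from a source URL.
	// The source must be publicly readable or carry a SAS token with read permission.
	srcURL := "https://<source-account>.blob.core.windows.net/source-container/source-blob?<sas-token>"
	if _, err := appendBlob.AppendBlockFromURL(context.TODO(), srcURL, nil); err != nil {
		log.Fatal(err)
	}
}
```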
storage | Storage Blob Copy Url Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy-url-go.md | + + Title: Copy a blob from a source object URL with Go ++description: Learn how to copy a blob from a source object URL in Azure Storage by using the Go client library. +++ Last updated : 07/25/2024+++ms.devlang: golang ++++# Copy a blob from a source object URL with Go +++This article shows how to copy a blob from a source object URL using the [Azure Storage client module for Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob#section-readme). You can copy a blob from a source within the same storage account, from a source in a different storage account, or from any accessible object retrieved via HTTP GET request on a given URL. ++The client library methods covered in this article use the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) and [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operations. These methods are preferred for copy scenarios where you want to move data into a storage account and have a URL for the source object. For copy operations where you want asynchronous scheduling, see [Copy a blob with asynchronous scheduling using Go](storage-blob-copy-async-go.md). +++## Set up your environment +++#### Authorization ++The authorization mechanism must have the necessary permissions to perform a copy operation. For authorization with Microsoft Entra ID (recommended), you need Azure RBAC built-in role **Storage Blob Data Contributor** or higher. To learn more, see the authorization guidance for [Put Blob From URL](/rest/api/storageservices/put-blob-from-url#authorization) or [Put Block From URL](/rest/api/storageservices/put-block-from-url#authorization). ++## About copying blobs from a source object URL ++The `Put Blob From URL` operation creates a new block blob where the contents of the blob are read from a given URL. The operation completes synchronously. ++The source can be any object retrievable via a standard HTTP GET request on the given URL. This includes block blobs, append blobs, page blobs, blob snapshots, blob versions, or any accessible object inside or outside Azure. ++When the source object is a block blob, all committed blob content is copied. The content of the destination blob is identical to the content of the source, but the list of committed blocks isn't preserved and uncommitted blocks aren't copied. ++The destination is always a block blob, either an existing block blob, or a new block blob created by the operation. The contents of an existing blob are overwritten with the contents of the new blob. ++The `Put Blob From URL` operation always copies the entire source blob. Copying a range of bytes or set of blocks isn't supported. To perform partial updates to a block blob's contents by using a source URL, use the [Put Block From URL](/rest/api/storageservices/put-block-from-url) API along with [`Put Block List`](/rest/api/storageservices/put-block-list). ++To learn more about the `Put Blob From URL` operation, including blob size limitations and billing considerations, see [Put Blob From URL remarks](/rest/api/storageservices/put-blob-from-url#remarks). ++## Copy a blob from a source object URL ++This section gives an overview of methods provided by the Azure Storage client library for Go to perform a copy operation from a source object URL. 
++The following method wraps the [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) REST API operation, and creates a new block blob where the contents of the blob are read from a given URL: ++- [UploadBlobFromURL](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob#Client.UploadBlobFromURL) ++This method is preferred for scenarios where you want to move data into a storage account and have a URL for the source object. ++For large objects, you might choose to work with individual blocks. The following method wraps the [Put Block From URL](/rest/api/storageservices/put-block-from-url) REST API operation. This method creates a new block to be committed as part of a blob where the contents are read from a source URL: ++- [StageBlockFromURL](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob#Client.StageBlockFromURL) ++## Copy a blob from a source within Azure ++If you're copying a blob from a source within Azure, access to the source blob can be authorized via Microsoft Entra ID (recommended), a shared access signature (SAS), or an account key. ++The following code example shows a scenario for copying a source blob within Azure. In this example, we also set the access tier for the destination blob to `Cool` using the [UploadBlobFromURLOptions](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob#UploadBlobFromURLOptions) struct. +++The following example shows sample usage: +++## Copy a blob from a source outside of Azure ++You can perform a copy operation on any source object that can be retrieved via HTTP GET request on a given URL, including accessible objects outside of Azure. The following code example shows a scenario for copying a blob from an accessible source object URL. +++The following example shows sample usage: +++## Resources ++To learn more about copying blobs using the Azure Blob Storage client library for Go, see the following resources. ++### Code samples ++- View [code samples](https://github.com/Azure-Samples/blob-storage-devguide-go/blob/main/cmd/copy-put-from-url/copy_put_from_url.go) from this article (GitHub) ++### REST API operations ++The Azure SDK for Go contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar Go paradigms. The client library methods covered in this article use the following REST API operations: ++- [Put Blob From URL](/rest/api/storageservices/put-blob-from-url) (REST API) +- [Put Block From URL](/rest/api/storageservices/put-block-from-url) (REST API) + |
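The article's code samples are included from the samples repository and don't appear in the change details above. As a rough illustration of the synchronous pattern it describes, the following Go sketch copies a source object into a destination block blob with `UploadBlobFromURL`; the account, container, and blob names and the source URL with its SAS token are placeholders, not values from the article.

```go
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob"
)

func main() {
	// Placeholder names -- replace with your own account, container, and blob names.
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	client, err := azblob.NewClient("https://<destination-account>.blob.core.windows.net", cred, nil)
	if err != nil {
		log.Fatal(err)
	}

	// Destination block blob that receives the copied contents.
	destBlob := client.ServiceClient().NewContainerClient("sample-container").NewBlockBlobClient("destination-blob")

	// Source object URL. Any object retrievable via HTTP GET works; for a private
	// source blob, append a SAS token with read permission.
	srcURL := "https://<source-account>.blob.core.windows.net/source-container/source-blob?<sas-token>"

	// Copy the source contents into the destination block blob. This wraps the
	// Put Blob From URL operation and completes synchronously.
	if _, err := destBlob.UploadBlobFromURL(context.TODO(), srcURL, nil); err != nil {
		log.Fatal(err)
	}
}
```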
stream-analytics | Sql Database Upsert | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-database-upsert.md | Update the `Device` class and mapping section to match your own schema: public DateTime Timestamp { get; set; } ``` -You can now test the wiring between the local function and the database by debugging (F5 in Visual Studio Code). The SQL database needs to be reachable from your machine. [SSMS](/sql/ssms/sql-server-management-studio-ssms) can be used to check connectivity. Then a tool like [Postman](https://www.postman.com/) can be used to issue POST requests to the local endpoint. A request with an empty body should return http 204. A request with an actual payload should be persisted in the destination table (in replace / update mode). Here's a sample payload corresponding to the schema used in this sample: +You can now test the wiring between the local function and the database by debugging (F5 in Visual Studio Code). The SQL database needs to be reachable from your machine. [SSMS](/sql/ssms/sql-server-management-studio-ssms) can be used to check connectivity. Then, send POST requests to the local endpoint. A request with an empty body should return http 204. A request with an actual payload should be persisted in the destination table (in replace / update mode). Here's a sample payload corresponding to the schema used in this sample: ```JSON [{"DeviceId":3,"Value":13.4,"Timestamp":"2021-11-30T03:22:12.991Z"},{"DeviceId":4,"Value":41.4,"Timestamp":"2021-11-30T03:22:12.991Z"}] Update the `sqltext` command building section to match your own schema (notice h $"WHEN NOT MATCHED BY TARGET THEN INSERT (DeviceId, Value, TimeStamp) VALUES (DeviceId, Value, Timestamp);"; ``` -You can now test the wiring between the local function and the database by debugging (F5 in VS Code). The SQL database needs to be reachable from your machine. [SSMS](/sql/ssms/sql-server-management-studio-ssms) can be used to check connectivity. Then a tool like [Postman](https://www.postman.com/) can be used to issue POST requests to the local endpoint. A request with an empty body should return http 204. A request with an actual payload should be persisted in the destination table (in accumulate / merge mode). Here's a sample payload corresponding to the schema used in this sample: +You can now test the wiring between the local function and the database by debugging (F5 in VS Code). The SQL database needs to be reachable from your machine. [SSMS](/sql/ssms/sql-server-management-studio-ssms) can be used to check connectivity. Then, issue POST requests to the local endpoint. A request with an empty body should return http 204. A request with an actual payload should be persisted in the destination table (in accumulate / merge mode). Here's a sample payload corresponding to the schema used in this sample: ```JSON [{"DeviceId":3,"Value":13.4,"Timestamp":"2021-11-30T03:22:12.991Z"},{"DeviceId":4,"Value":41.4,"Timestamp":"2021-11-30T03:22:12.991Z"}] |
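With the Postman reference dropped, any HTTP client can drive the local test described above. Here's a minimal sketch in Go that posts the sample payload; the endpoint URL is hypothetical (the local Functions host typically listens on port 7071, and the route segment depends on your function's name):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"strings"
)

func main() {
	// Hypothetical local endpoint; use the route printed by the Functions host for your function.
	endpoint := "http://localhost:7071/api/Device"

	// Sample payload matching the Device schema used in the article.
	payload := `[{"DeviceId":3,"Value":13.4,"Timestamp":"2021-11-30T03:22:12.991Z"},
	             {"DeviceId":4,"Value":41.4,"Timestamp":"2021-11-30T03:22:12.991Z"}]`

	resp, err := http.Post(endpoint, "application/json", strings.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// An empty body should return HTTP 204; a payload like the one above should land in the destination table.
	fmt.Println("status:", resp.StatusCode)
}
```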
synapse-analytics | Apache Spark Development Using Notebooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-development-using-notebooks.md | The IntelliSense features are at different levels of maturity for different lang <h3 id="code-snippets">Code Snippets</h3> -Synapse notebooks provide code snippets that make it easier to enter common used code patterns, such as configuring your Spark session, reading data as a Spark DataFrame, or drawing charts with matplotlib etc. +Synapse notebooks provide code snippets that make it easier to enter commonly used code patterns, such as configuring your Spark session, reading data as a Spark DataFrame, or drawing charts with matplotlib etc. Snippets appear in [Shortcut keys of IDE style IntelliSense](#ide-style-intellisense) mixed with other suggestions. The code snippet contents align with the code cell language. You can see available snippets by typing **Snippet** or any keyword that appears in the snippet title in the code cell editor. For example, by typing **read** you can see the list of snippets to read data from various data sources. The Outlines (Table of Contents) presents the first markdown header of any markd You can run the code cells in your notebook individually or all at once. The status and progress of each cell is represented in the notebook. +> [!NOTE] +> Deleting a notebook will not automatically cancel any jobs that are currently running. If you need to cancel a job, you should visit the Monitoring Hub and cancel it manually. + ### Run a cell There are several ways to run the code in a cell. You can select the **Variables** button on the notebook command bar to open or h ### Cell status indicator -A step-by-step cell execution status is displayed beneath the cell to help you see its current progress. Once the cell run is complete, an execution summary with the total duration and end time are shown and kept there for future reference. +A step-by-step cell execution status is displayed beneath the cell to help you see its current progress. Once the cell run is complete, an execution summary with the total duration and end time is shown and kept there for future reference. ![Screenshot of cell-status](./media/apache-spark-development-using-notebooks/synapse-cell-status.png) |
synapse-analytics | Apache Spark Development Using Notebooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/third-party-notices.md | in this repository under the [Creative Commons Attribution 4.0 International Pub Microsoft, Windows, Microsoft Azure and/or other Microsoft products and services referenced in the documentation may be either trademarks or registered trademarks of Microsoft in the United States and/or other countries. The licenses for this project do not grant you rights to use any Microsoft names, logos, or trademarks.-Microsoft's general trademark guidelines can be found at [https://go.microsoft.com/fwlink/?LinkID=254653](https://go.microsoft.com/fwlink/?LinkID=254653). +Microsoft's general trademark guidelines can be found at [Microsoft Trademark and Brand Guidelines](https://www.microsoft.com/legal/intellectualproperty/trademarks). Privacy information can be found at [https://privacy.microsoft.com/en-us/](https://privacy.microsoft.com/en-us/) |
time-series-insights | Concepts Model Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-model-overview.md | This article describes Time Series Model, the capabilities, and how to start bui > [!TIP] >-> * Go to the [Contoso Wind Farm demo](https://insights.timeseries.azure.com/preview/samples) environment for a live Time Series Model example. > * Learn [how to work with Time Series Model](./how-to-edit-your-model.md) using the Azure Time Series Insights Explorer. ## Summary Instances have descriptive information associated with them called *instance pro After an event source is configured for the Azure Time Series Insights Gen2 environment, instances are automatically discovered and created in a time series model. The instances can be created or updated via the Azure Time Series Insights Explorer by using Time Series Model queries. -The [Contoso Wind Farm demo](https://insights.timeseries.azure.com/preview/samples) provides several live instance examples. - [![Time Series Model instance example](media/v2-update-tsm/time-series-model-instance.png)](media/v2-update-tsm/time-series-model-instance.png#lightbox) ### Instance properties Time Series Model *hierarchies* organize instances by specifying property names You can configure multiple hierarchies in a given Azure Time Series Insights Gen2 environment. A Time Series Model instance can map to a single hierarchy or multiple hierarchies (many-to-many relationship). -The [Contoso Wind Farm demo](https://insights.timeseries.azure.com/preview/samples) displays a standard instance and type hierarchy. - [![Time Series Model hierarchy example](media/v2-update-tsm/time-series-model-hierarchies.png)](media/v2-update-tsm/time-series-model-hierarchies.png#lightbox) ### Hierarchy definition Time Series Model *types* help you define variables or formulas for doing comput A type can have one or more variables. For example, a Time Series Model instance might be of type *Temperature Sensor*, which consists of the variables *avg temperature*, *min temperature*, and *max temperature*. -The [Contoso Wind Farm demo](https://insights.timeseries.azure.com/preview/samples) visualizes several Time Series Model types associated with their respective instances. - [![Time Series Model type example](media/v2-update-tsm/time-series-model-types.png)](media/v2-update-tsm/time-series-model-types.png#lightbox) > [!TIP] |
time-series-insights | Concepts Ux Panels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/concepts-ux-panels.md | Last updated 01/22/2021 [!INCLUDE [retirement](../../includes/tsi-retirement.md)] -This article describes the various features and options available within the Azure Time Series Insights Gen2 [Demo environment](https://insights.timeseries.azure.com/preview/demo). +This article describes the various features and options available within the Azure Time Series Insights Gen2 Demo environment. ## Prerequisites Select the new **Share** icon to share a URL link with your team. * Displays your current Azure Time Series Insights Gen2 sign-in account information. * Use it to switch between the available themes.-* Use it to view the Gen2 [Demo environment](https://insights.timeseries.azure.com/preview/demo). +* Use it to view the Gen2 Demo environment. ### Theme selection |
time-series-insights | How To Api Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-api-migration.md | Users must migrate their environment's [Time Series Model variables](./concepts- ## Migrate Time Series Model and saved queries -To help users migrate their [Time Series Model variables](./concepts-variables.md) and saved queries, there is a built-in tool available through the [Azure Time Series Insights Explorer](https://insights.timeseries.azure.com). Navigate to the environment you wish to migrate and follow the steps below. **You can complete the migration partially and return to complete it at a later time, however, none of the updates can be reverted.** +To help users migrate their [Time Series Model variables](./concepts-variables.md) and saved queries, there is a built-in tool available through the Azure Time Series Insights Explorer. Navigate to the environment you wish to migrate and follow the steps below. **You can complete the migration partially and return to complete it at a later time, however, none of the updates can be reverted.** > [!NOTE] > You must be a Contributor to the environment to make updates to the Time Series Model and saved queries. If you are not a Contributor, you will only be able to migrate your personal saved queries. Please review [environment access policies](./concepts-access-policies.md) and your access level before proceeding. |
time-series-insights | How To Diagnose Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-diagnose-troubleshoot.md | This problem might occur if you don't have permissions to access the Time Series ## Problem: No data is seen in the Gen2 Explorer -There are several common reasons why your data might not appear in the [Azure Time Series Insights Gen2 Explorer](https://insights.timeseries.azure.com/preview). +There are several common reasons why your data might not appear in the Azure Time Series Insights Gen2 Explorer. - Your event source might not be receiving data. |
time-series-insights | How To Ingest Data Event Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-ingest-data-event-hub.md | To add a new consumer group in your event hub: - [Define data access policies](./concepts-access-policies.md) to secure the data. - [Send events](time-series-insights-send-events.md) to the event source.--- Access your environment in the [Azure Time Series Insights Explorer](https://insights.timeseries.azure.com). |
time-series-insights | How To Ingest Data Iot Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-ingest-data-iot-hub.md | To add a new consumer group to your IoT hub: * [Define data access policies](./concepts-access-policies.md) to secure the data. * [Send events](time-series-insights-send-events.md) to the event source.--* Access your environment in the [Azure Time Series Insight Explorer](https://insights.timeseries.azure.com). |
time-series-insights | How To Plan Your Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-plan-your-environment.md | You can now configure your Azure Time Series Insights environment's Time Series The model is dynamic, so it can be built at any time. To get started quickly, build and upload it prior to pushing data into Azure Time Series Insights. To build your model, read [Use the Time Series Model](./concepts-model-overview.md). -For many customers, the Time Series Model maps to an existing asset model or ERP system already in place. If you don't have an existing model, a prebuilt user experience is [provided](https://github.com/Microsoft/tsiclient) to get up and running quickly. To envision how a model might help you, view the [sample demo environment](https://insights.timeseries.azure.com/preview/demo). +For many customers, the Time Series Model maps to an existing asset model or ERP system already in place. If you don't have an existing model, a prebuilt user experience is [provided](https://github.com/Microsoft/tsiclient) to get up and running quickly. ## Shape your events |
time-series-insights | Quickstart Explore Tsi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/quickstart-explore-tsi.md | In this quickstart, you learn how to use Azure Time Series Insights Gen2 to find The Azure Time Series Insights Gen2 Explorer demonstrates historical data and root cause analysis. To get started: -1. Go to the [Contoso Wind Farm demo](https://insights.timeseries.azure.com/preview/samples) environment. +1. Go to the Contoso Wind Farm demo environment. 1. If you're prompted, sign in to the Azure Time Series Insights Gen2 Explorer by using your Azure account credentials. |
time-series-insights | Time Series Insights Diagnose And Solve Problems | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-diagnose-and-solve-problems.md | This article describes issues that you might encounter in your Azure Time Series ## Problem: No data is shown -If no data is appearing in the [Azure Time Series Insights explorer](https://insights.timeseries.azure.com), consider these common causes. +If no data is appearing in the Azure Time Series Insights explorer, consider these common causes. ### Cause A: Event source data isn't in JSON format If you connect an existing event source, it's likely that your IoT hub or event To fix the lag: -1. Increase the SKU capacity to the maximum allowed value (10, in this case). After you increase capacity, the ingress process starts to catch up much more quickly. You're charged for the increased capacity. To visualize how quickly you're catching up, you can view the availability chart in the [Azure Time Series Insights explorer](https://insights.timeseries.azure.com). +1. Increase the SKU capacity to the maximum allowed value (10, in this case). After you increase capacity, the ingress process starts to catch up much more quickly. You're charged for the increased capacity. To visualize how quickly you're catching up, you can view the availability chart in the Azure Time Series Insights explorer. 2. When the lag is caught up, decrease the SKU capacity to your normal ingress rate. |
time-series-insights | Time Series Insights Explorer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-explorer.md | Last updated 09/29/2020 > [!CAUTION] > This is a Gen1 article. -This article describes the features and options for the Azure Time Series Insights Gen1 [Explorer web app](https://insights.timeseries.azure.com/). The Azure Time Series Insights Explorer demonstrates the powerful data visualization capabilities provided by the service and can be accessed within your own environment. +This article describes the features and options for the Azure Time Series Insights Gen1 Explorer web app. The Azure Time Series Insights Explorer demonstrates the powerful data visualization capabilities provided by the service and can be accessed within your own environment. Azure Time Series Insights is a fully managed analytics, storage, and visualization service that makes it simple to explore and analyze billions of IoT events simultaneously. It gives you a global view of your data, which lets you quickly validate your IoT solution and avoid costly downtime to mission-critical devices. You can discover hidden trends, spot anomalies, and conduct root-cause analyses in near real time. Before you can use Azure Time Series Insights Explorer, you must: Within minutes of connecting your event source to your Azure Time Series Insights environment, you can explore and query your time series data. -1. To start, open the [Azure Time Series Insights Explorer](https://insights.timeseries.azure.com/) in your web browser. On the left side of the window, select an environment. All environments that you have access to are listed in alphabetical order. +1. To start, open the Azure Time Series Insights Explorer in your web browser. On the left side of the window, select an environment. All environments that you have access to are listed in alphabetical order. 1. After you select an environment, either use the **From** and **To** configurations at the top, or select and drag over the timespan you want. Select the magnifying glass in the upper-right corner, or right-click on the selected timespan and select **Search**. |
time-series-insights | Time Series Insights Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-get-started.md | Follow these steps to create an environment: Setting|Suggested value|Description ||- Environment name | A unique name | This name represents the environment in [time series Explorer](https://insights.timeseries.azure.com) + Environment name | A unique name | This name represents the environment in time series Explorer. Subscription | Your subscription | If you have multiple subscriptions, choose the subscription that contains your event source preferably. Azure Time Series Insights can automatically detect Azure IoT Hub and Event Hub resources existing in the same subscription. Resource group | Create a new or use existing | A resource group is a collection of Azure resources used together. You can choose an existing resource group, for example the one that contains your Event Hub or IoT Hub. Or you can make a new one if this resource is not related to the other resources. Location | Nearest your event source | Preferably, choose the same data center location that contains your event source data, in effort to avoid added cross-region and cross-zone bandwidth costs and added latency when moving data out of the region. Follow these steps to create an environment: * [Add an Event Hub event source](./how-to-ingest-data-event-hub.md) to your Azure Time Series Insights environment. -* [Send events](time-series-insights-send-events.md) to the event source. --* View your environment in [Azure Time Series Insights Explorer](https://insights.timeseries.azure.com). +* [Send events](time-series-insights-send-events.md) to the event source. |
time-series-insights | Time Series Insights Parameterized Urls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-parameterized-urls.md | Azure Time Series Insights Explorer supports URL query parameters to specify vie > [!TIP] >-> * View the free [Azure Time Series Insights demo](https://insights.timeseries.azure.com/samples). > * Read the accompanying [Azure Time Series Insights Explorer](./time-series-insights-explorer.md) documentation. ## Environment ID https://insights.timeseries.azure.com/classic/samples?environmentId=10000000-000 [![Azure Time Series Insights Explorer parameterized URL](media/parameterized-url/share-parameterized-url.png)](media/parameterized-url/share-parameterized-url.png#lightbox) -> [!TIP] -> See the Explorer live [using the URL](https://insights.timeseries.azure.com/classic/samples?environmentId=10000000-0000-0000-0000-100000000108&relativeMillis=3600000&timeSeriesDefinitions=[%7B%22name%22:%22F1PressureId%22,%22splitBy%22:%22Id%22,%22measureName%22:%22Pressure%22,%22predicate%22:%22%27Factory1%27%22%7D,%7B%22name%22:%22F2TempStation%22,%22splitBy%22:%22Station%22,%22measureName%22:%22Temperature%22,%22predicate%22:%22%27Factory2%27%22%7D,%7B%22name%22:%22F3VibrationPL%22,%22splitBy%22:%22ProductionLine%22,%22measureName%22:%22Vibration%22,%22predicate%22:%22%27Factory3%27%22%7D]) example above. - The URL above describes and displays the parameterized Azure Time Series Insights Explorer view. * The parameterized predicates. |
time-series-insights | Time Series Insights Send Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-send-events.md | In Azure Time Series Insights Gen2, you can add contextual data to incoming tele ## Next steps -* [View your environment](https://insights.timeseries.azure.com) in the Azure Time Series Insights Explorer. - * Read more about [IoT Hub device messages](../iot-hub/iot-hub-devguide-messages-construct.md) |
time-series-insights | Time Series Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-quickstart.md | This Azure Time Series Insights Explorer quickstart offers a guided tour of feat ## Prepare the demo environment -1. In your browser, go to the [Gen1 demo](https://insights.timeseries.azure.com/demo). +1. In your browser, go to the Gen1 demo. 1. If prompted, sign in to the Azure Time Series Insights Explorer by using your Azure account credentials. |
update-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/overview.md | -Update Manager is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your machines in Azure and on-premises/on other cloud platforms (connected by [Azure Arc](https://learn.microsoft.com/azure/azure-arc/)) from a single pane of management. You can also use Update Manager to make real-time updates or schedule them within a defined maintenance window. +Update Manager is a unified service to help manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your machines in Azure and on-premises/on other cloud platforms (connected by [Azure Arc](/azure/azure-arc/)) from a single pane of management. You can also use Update Manager to make real-time updates or schedule them within a defined maintenance window. You can use Update Manager in Azure to: You can use Update Manager in Azure to: - Enable [periodic assessment](https://aka.ms/umc-periodic-assessment-policy) to check for updates every 24 hours. - Use flexible patching options such as: - [Customer-defined maintenance schedules](https://aka.ms/umc-scheduled-patching) for both Azure and Arc-connected machines.- - [Automatic virtual machine (VM) guest patching](../virtual-machines/automatic-vm-guest-patching.md) and [hot patching](https://learn.microsoft.com/azure/automanage/automanage-hotpatch) for Azure VMs. + - [Automatic virtual machine (VM) guest patching](../virtual-machines/automatic-vm-guest-patching.md) and [hot patching](/azure/automanage/automanage-hotpatch) for Azure VMs. - Build custom reporting dashboards for reporting update status and [configure alerts](https://aka.ms/aum-alerts) on certain conditions.-- Oversee update compliance for your entire fleet of machines in Azure and on-premises/in other cloud environments connected by [Azure Arc](https://learn.microsoft.com/azure/azure-arc/) through a single pane. The different types of machines that can be managed are:- - [Hybrid machines](https://learn.microsoft.com/azure/azure-arc/servers/) - - [VMWare machines](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/) - - [SCVMM machines](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/) - - [Azure Stack HCI VMs](https://learn.microsoft.com/azure-stack/hci/) +- Oversee update compliance for your entire fleet of machines in Azure and on-premises/in other cloud environments connected by [Azure Arc](/azure/azure-arc/) through a single pane. The different types of machines that can be managed are: +- + - [Hybrid machines](/azure/azure-arc/servers/) + - [VMWare machines](/azure/azure-arc/vmware-vsphere/) + - [SCVMM machines](/azure/azure-arc/system-center-virtual-machine-manager/) + - [Azure Stack HCI VMs](/azure-stack/hci/) ## Key benefits Update Manager offers many new features and provides enhanced and native functio - Offers enhanced flexibility - Take immediate action either by [installing updates immediately](https://aka.ms/on-demand-patching) or [scheduling them for a later date](https://aka.ms/umc-scheduled-patching). 
- [Check updates automatically](https://aka.ms/aum-policy-support) or [on demand](https://aka.ms/on-demand-assessment).- - Secure machines with new ways of patching such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hot patching](https://learn.microsoft.com/azure/automanage/automanage-hotpatch) or [custom maintenance schedules](https://aka.ms/umc-scheduled-patching). + - Secure machines with new ways of patching such as [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hot patching](/azure/automanage/automanage-hotpatch) or [custom maintenance schedules](https://aka.ms/umc-scheduled-patching). - Sync patch cycles in relation to **patch Tuesday**, the unofficial term for Microsoft's scheduled security fix release on every second Tuesday of each month. - Reporting and alerting - Build custom reporting dashboards through [Azure Workbooks](manage-workbooks.md) to monitor the update compliance of your infrastructure. |
update-manager | Pre Post Events Schedule Maintenance Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/pre-post-events-schedule-maintenance-configuration.md | In the **Event Subscription Details** section, provide an appropriate name. #### [Using API](#tab/api) -1. Create a maintenance configuration by following the steps listed [here](https://learn.microsoft.com/rest/api/maintenance/maintenance-configurations/create-or-update?view=rest-maintenance-2023-09-01-preview&tabs=HTTP). +1. Create a maintenance configuration by following the steps listed [here](/rest/api/maintenance/maintenance-configurations/create-or-update?view=rest-maintenance-2023-09-01-preview&tabs=HTTP). 1. **# System topic creation [Learn more](/rest/api/eventgrid/controlplane/system-topics/create-or-update)** |
update-manager | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/prerequisites.md | Azure VM extensions and Azure Arc-enabled VM extensions are required to run on t To prepare your network to support Update Manager, you might need to configure some infrastructure components. For more information, see the [network requirements for Arc-enabled servers](../azure-arc/servers/network-requirements.md). -For Windows machines, you must allow traffic to any endpoints required by the Windows Update agent. You can find an updated list of required endpoints in [issues related to HTTP Proxy](https://learn.microsoft.com/troubleshoot/windows-client/installing-updates-features-roles/windows-update-issues-troubleshooting?toc=%2Fwindows%2Fdeployment%2Ftoc.json&bc=%2Fwindows%2Fdeployment%2Fbreadcrumb%2Ftoc.json#issues-related-to-httpproxy). If you have a local [WSUS](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/plan/plan-your-wsus-deployment) deployment, you must allow traffic to the server specified in your [WSUS key](https://learn.microsoft.com/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry). +For Windows machines, you must allow traffic to any endpoints required by the Windows Update agent. You can find an updated list of required endpoints in [issues related to HTTP Proxy](/troubleshoot/windows-client/installing-updates-features-roles/windows-update-issues-troubleshooting?toc=%2Fwindows%2Fdeployment%2Ftoc.json&bc=%2Fwindows%2Fdeployment%2Fbreadcrumb%2Ftoc.json#issues-related-to-httpproxy). If you have a local [WSUS](/windows-server/administration/windows-server-update-services/plan/plan-your-wsus-deployment) deployment, you must allow traffic to the server specified in your [WSUS key](/windows/deployment/update/waas-wu-settings#configuring-automatic-updates-by-editing-the-registry). For Red Hat Linux machines, see [IPs for the RHUI content delivery servers](../virtual-machines/workloads/redhat/redhat-rhui.md#the-ips-for-the-rhui-content-delivery-servers)for required endpoints. For other Linux distributions, see your provider documentation. ### Configure Windows Update client -Azure Update Manager relies on the [Windows Update client](https://learn.microsoft.com/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. For more information, see [configure Windows Update client](configure-wu-agent.md). +Azure Update Manager relies on the [Windows Update client](/windows/deployment/update/windows-update-overview) to download and install Windows updates. There are specific settings that are used by the Windows Update client when connecting to Windows Server Update Services (WSUS) or Windows Update. For more information, see [configure Windows Update client](configure-wu-agent.md). ## Next steps |
update-manager | Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md | Update Manager doesn't support driver updates. #### Extended Security Updates (ESU) for Windows Server -Using Azure Update Manager, you can deploy Extended Security Updates for your Azure Arc-enabled Windows Server 2012 / R2 machines. ESUs are available are default to Azure Virtual machines. To enroll in Windows Server 2012 Extended Security Updates on Arc connected machines, follow the guidance on [How to get Extended Security Updates (ESU) for Windows Server 2012 and 2012 R2 via Azure Arc](https://learn.microsoft.com/windows-server/get-started/extended-security-updates-deploy#extended-security-updates-enabled-by-azure-arc). +Using Azure Update Manager, you can deploy Extended Security Updates for your Azure Arc-enabled Windows Server 2012 / R2 machines. ESUs are available by default for Azure Virtual Machines. To enroll in Windows Server 2012 Extended Security Updates on Arc connected machines, follow the guidance on [How to get Extended Security Updates (ESU) for Windows Server 2012 and 2012 R2 via Azure Arc](/windows-server/get-started/extended-security-updates-deploy#extended-security-updates-enabled-by-azure-arc). #### Microsoft application updates on Windows Use one of the following options to perform the settings change at scale: #### [Windows](#tab/third-party-win) -Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](https://learn.microsoft.com/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](https://learn.microsoft.com/mem/configmgr/sum/tools/install-updates-publisher). +Update Manager relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/mem/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Manager to update machines that use Configuration Manager as their update repository with third party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/mem/configmgr/sum/tools/install-updates-publisher). #### [Linux](#tab/third-party-lin) |
update-manager | Workflow Update Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/workflow-update-manager.md | Azure Update Manager honors the update source settings on the machine and will f #### [Windows](#tab/update-win) -If the [Windows Update Agent (WUA)](https://learn.microsoft.com/windows/win32/wua_sdk/updating-the-windows-update-agent) is configured to fetch updates from Windows Update repository or Microsoft Update repository or [Windows Server Update Services](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) (WSUS), AUM will honor these settings. For more information, see how to [configure Windows Update client](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). By default, **it is configured to fetch updates from Windows Updates repository**. +If the [Windows Update Agent (WUA)](/windows/win32/wua_sdk/updating-the-windows-update-agent) is configured to fetch updates from Windows Update repository or Microsoft Update repository or [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus) (WSUS), AUM will honor these settings. For more information, see how to [configure Windows Update client](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus). By default, **it is configured to fetch updates from Windows Updates repository**. #### [Linux](#tab/update-lin) |
virtual-desktop | App Attach Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/app-attach-setup.md | In order to use MSIX app attach in Azure Virtual Desktop, you need to meet the p - Your session hosts need to run a [supported Windows client operating system](prerequisites.md#operating-systems-and-licenses) and at least one of them must be powered on. Windows Server isn't supported. -- Your host pool needs to be [configured as a validation environment](configure-validation-environment.md).- ::: zone pivot="app-attach" - Your session hosts need to be joined to Microsoft Entra ID or an Active Directory Domain Services (AD DS) domain. ::: zone-end |
virtual-desktop | Multimedia Redirection Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection-intro.md | The following sites work with video playback redirection: - Skillshare - The Guardian - Twitch- - Twitter - Udemy\* - UMU - U.S. News - Vidazoo - Vimeo - The Wall Street Journal+ - X - Yahoo - Yammer - YouTube (including sites with embedded YouTube videos). |
virtual-machine-scale-sets | Virtual Machine Scale Sets Automatic Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md | az vmss rolling-upgrade start --resource-group "myResourceGroup" --name "myScale ## Leverage Activity Logs for Upgrade Notifications and Insights -[Activity Log](https://learn.microsoft.com/azure/azure-monitor/essentials/activity-log?tabs=powershell) is a subscription log that provides insight into subscription-level events that have occurred in Azure. Customers are able to: +[Activity Log](/azure/azure-monitor/essentials/activity-log?tabs=powershell) is a subscription log that provides insight into subscription-level events that have occurred in Azure. Customers are able to: * See events related to operations performed on their resources in Azure portal * Create action groups to tune notification methods like email, sms, webhooks, or ITSM * Set up suitable alerts using different criteria using Portal, ARM resource template, PowerShell or CLI to be sent to action groups Customers will receive three types of notifications related to Automatic OS Upgr ### Setting up Action Groups for Activity log alerts -An [action group](https://learn.microsoft.com/azure/azure-monitor/alerts/action-groups) is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor and Service Health alerts use action groups to notify users that an alert has been triggered. +An [action group](/azure/azure-monitor/alerts/action-groups) is a collection of notification preferences defined by the owner of an Azure subscription. Azure Monitor and Service Health alerts use action groups to notify users that an alert has been triggered. 
Action groups can be created and managed using: -* [ARM Resource Manager](https://learn.microsoft.com/azure/azure-monitor/alerts/action-groups#create-an-action-group-with-a-resource-manager-template) -* [Portal](https://learn.microsoft.com/azure/azure-monitor/alerts/action-groups#create-an-action-group-in-the-azure-portal) +* [ARM Resource Manager](/azure/azure-monitor/alerts/action-groups#create-an-action-group-with-a-resource-manager-template) +* [Portal](/azure/azure-monitor/alerts/action-groups#create-an-action-group-in-the-azure-portal) * PowerShell:- * [New-AzActionGroup](https://learn.microsoft.com/powershell/module/az.monitor/new-azactiongroup?view=azps-12.0.0) - * [Get-AzActionGroup](https://learn.microsoft.com/powershell/module/az.monitor/get-azactiongroup?view=azps-12.0.0) - * [Remove-AzActionGroup](https://learn.microsoft.com/powershell/module/az.monitor/remove-azactiongroup?view=azps-12.0.0) -* [CLI](https://learn.microsoft.com/cli/azure/monitor/action-group?view=azure-cli-latest#az-monitor-action-group-create) + * [New-AzActionGroup](/powershell/module/az.monitor/new-azactiongroup?view=azps-12.0.0) + * [Get-AzActionGroup](/powershell/module/az.monitor/get-azactiongroup?view=azps-12.0.0) + * [Remove-AzActionGroup](/powershell/module/az.monitor/remove-azactiongroup?view=azps-12.0.0) +* [CLI](/cli/azure/monitor/action-group?view=azure-cli-latest#az-monitor-action-group-create) Customers can set up the following using action groups:-* [SMS and/or Email notifications](https://learn.microsoft.com/azure/azure-monitor/alerts/action-groups#email-azure-resource-manager) -* [Webhooks](https://learn.microsoft.com/azure/azure-monitor/alerts/action-groups#webhook) - Customers can attach webhooks to their automation runbooks and configure their action groups to trigger the runbooks. You can start a runbook from a [webhook](https://docs.microsoft.com/azure/automation/automation-webhooks) -* [ITSM Connections](https://learn.microsoft.com/azure/azure-monitor/alerts/itsmc-overview) +* [SMS and/or Email notifications](/azure/azure-monitor/alerts/action-groups#email-azure-resource-manager) +* [Webhooks](/azure/azure-monitor/alerts/action-groups#webhook) - Customers can attach webhooks to their automation runbooks and configure their action groups to trigger the runbooks. You can start a runbook from a [webhook](https://docs.microsoft.com/azure/automation/automation-webhooks) +* [ITSM Connections](/azure/azure-monitor/alerts/itsmc-overview) ## Investigate and Resolve Auto Upgrade Errors |
virtual-machines | Azure Hpc Vm Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-hpc-vm-images.md | |
virtual-machines | Capacity Reservation Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overview.md | From this example accumulation of Minutes Not Available, here's the calculation - At VM deployment for below VM Series for Capacity Reservation, Fault Domain (FD) count of 1 can be set using Virtual Machine Scale Sets. A deployment with more than 1 FD will fail to deploy against a Capacity Reservation: - NC-series, v3 - NCasT4_v3 series+ - NCADSA10_v4 series - NC_A100_v4 series - NV-series, v3 and newer - NVadsA10_v5 series+ - NGads V620_v1 series - Support for below VM Series for Capacity Reservation is in Public Preview: - M-series, v2 - M-series, v3 For example, lets say a Capacity Reservation with quantity reserved 2 has been c ![Capacity Reservation image 7.](./media/capacity-reservation-overview/capacity-reservation-7.jpg) -In the previous image, a Reserved VM Instance discount is applied to one of the unused instances and the cost for that instance is zeroed out. For the other instance, PAYG rate is charged for the VM size reserved. +In the previous image, a Reserved VM Instance discount is applied to one of the unused instances and the cost for that instance is zeroed out. For the other instance, pay-as-you-go rate is charged for the VM size reserved. When a VM is allocated against the Capacity Reservation, the other VM components such as disks, network, extensions, and any other requested components must also be allocated. In this state, the VM usage reflects one allocated VM and one unused capacity instance. The Reserved VM Instance will zero out the cost of either the VM or the unused capacity instance. The other charges for disks, networking, and other components associated with the allocated VM also appears on the bill. ![Capacity Reservation image 8.](./media/capacity-reservation-overview/capacity-reservation-8.jpg) -In the previous image, the VM Reserved Instance discount is applied to VM 0, which is only charged for other components such as disk and networking. The other unused instance is being charged at PAYG rate for the VM size reserved. +In the previous image, the VM Reserved Instance discount is applied to VM 0, which is only charged for other components such as disk and networking. The other unused instance is being charged at pay-as-you-go rate for the VM size reserved. ## Frequently asked questions |
virtual-machines | Configure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/configure.md | |
virtual-machines | Disks Redundancy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-redundancy.md | Except for more write latency, disks using ZRS are identical to disks using LRS, ## Next steps - To learn how to create a ZRS disk, see [Deploy a ZRS managed disk](disks-deploy-zrs.md).-- To convert an LRS disk to ZRS, see [Convert a disk from LRS to ZRS](disks-migrate-lrs-zrs.md).++- To convert an LRS disk to ZRS, see [Convert a disk from LRS to ZRS](disks-migrate-lrs-zrs.md). |
virtual-machines | Dlsv6 Dldsv6 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dlsv6-dldsv6-series.md | Dlsv6-series virtual machines run on 5<sup>th</sup> Generation Intel® Xeon® Pl Dlsv6-series virtual machines do not have any temporary storage thus lowering the price of entry. You can attach Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). -[Premium Storage](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Premium Storage caching](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Live Migration](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview <br>[Memory Preserving Updates](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported <br>[VM Generation Support](https://learn.microsoft.com/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](https://learn.microsoft.com/azure/virtual-network/create-vm-accelerated-networking-cli): Supported <br>[Ephemeral OS Disks](https://learn.microsoft.com/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](https://learn.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported +[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported <br>[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported <br>[Live Migration](/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview <br>[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported <br>[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli): Supported <br>[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported | **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBPS (RR)** | **Max temp storage throughput: IOPS/MBPS (RW)** | **Max** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max burst** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max burst** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Network bandwidth (Mbps)** | |||||||||||||| Dldsv6-series virtual machines run on the 5th Generation Intel® Xeon® Platinum Dldsv5-series virtual machines support Standard SSD, Standard HDD, and Premium SSD disk types. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). 
-[Premium Storage](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Premium Storage caching](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Live Migration](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview <br>[Memory Preserving Updates](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported <br>[VM Generation Support](https://learn.microsoft.com/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](https://learn.microsoft.com/azure/virtual-network/create-vm-accelerated-networking-cli): Supported <br>[Ephemeral OS Disks](https://learn.microsoft.com/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview <br>[Nested Virtualization](https://learn.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported +[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported <br>[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported <br>[Live Migration](/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview <br>[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported <br>[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli): Supported <br>[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported | **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBPS (RR)** | **Max temp storage throughput: IOPS/MBPS (RW)** | **Max** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max burst** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max burst** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Network bandwidth** | |||||||||||||| |
virtual-machines | Dsv6 Ddsv6 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dsv6-ddsv6-series.md | Applies to ✔️ Linux VMs ✔️ Windows VMs ✔️ Flexible scale sets ✔️ >[!NOTE] >Azure Virtual Machine Series Dsv6 and Ddsv6 are currently in **Preview**. See the [Preview Terms Of Use | Microsoft Azure](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. -The new Dsv6 and Ddsv6 Virtual Machine (VM) series only works on OS images that are tagged with NVMe support. If your current OS image is not supported for NVMe, an error message is shown. NVMe support is available in 50+ of the most popular OS images, and we continuously improve the OS image coverage. Refer to our up-to-date [**lists**](https://learn.microsoft.com/azure/virtual-machines/enable-nvme-interface) for information on which OS images are tagged as NVMe supported. For more information on NVMe enablement, see our [**FAQ**](https://learn.microsoft.com/azure/virtual-machines/enable-nvme-faqs). +The new Dsv6 and Ddsv6 Virtual Machine (VM) series only works on OS images that are tagged with NVMe support. If your current OS image is not supported for NVMe, an error message is shown. NVMe support is available in 50+ of the most popular OS images, and we continuously improve the OS image coverage. Refer to our up-to-date [**lists**](/azure/virtual-machines/enable-nvme-interface) for information on which OS images are tagged as NVMe supported. For more information on NVMe enablement, see our [**FAQ**](/azure/virtual-machines/enable-nvme-faqs). The new Dsv6 and Ddsv6 VM series virtual machines public preview is now available. To get more information or sign up for the preview, visit our announcement and follow the link to the sign-up form. This is an opportunity to experience our latest innovation. Dsv6-series virtual machines run on the 5<sup>th</sup> Generation Intel® Xeon® Dsv6-series virtual machines do not have any temporary storage thus lowering the price of entry. You can attach Standard SSDs, Standard HDDs, Premium SSDs, and Premium SSD V2 disk storage to these virtual machines. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. 
-[Premium Storage](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Premium Storage caching](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>Live Migration: Not Supported for Preview <br>Memory Preserving Updates: Supported <br>[VM Generation Support](https://learn.microsoft.com/azure/virtual-machines/generation-2): Generation 2 <br>[Accelerated Networking](https://learn.microsoft.com/azure/virtual-network/create-vm-accelerated-networking-cli): Supported <br>[Ephemeral OS Disks](https://learn.microsoft.com/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](https://learn.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported +[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported <br>[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported <br>Live Migration: Not Supported for Preview <br>Memory Preserving Updates: Supported <br>[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2 <br>[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli): Supported <br>[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported | **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBPS (RR)** | **Max temp storage throughput: IOPS/MBPS (RW)** | **Max** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max burst** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max burst** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Network bandwidth** | |||||||||||||| Ddsv6-series virtual machines run on the 5<sup>th</sup> Generation Intel® Xeon Ddsv6-series virtual machines offer local storage disks. You can attach Standard SSDs, Standard HDDs, Premium SSDs, and Premium SSD V2 disk storage to these virtual machines. You can also attach Ultra Disk storage based on its regional availability. Disk storage is billed separately from virtual machines. 
-[Premium Storage](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Premium Storage caching](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported <br>[Live Migration](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview <br>[Memory Preserving Updates](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported <br>[VM Generation Support](https://learn.microsoft.com/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](https://learn.microsoft.com/azure/virtual-network/create-vm-accelerated-networking-cli): Supported <br>[Ephemeral OS Disks](https://learn.microsoft.com/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview <br>[Nested Virtualization](https://learn.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported +[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported <br>[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported <br>[Live Migration](/azure/virtual-machines/maintenance-and-updates): Not Supported for Preview <br>[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported <br>[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli): Supported <br>[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported | **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBPS (RR)** | **Max temp storage throughput: IOPS/MBPS (RW)** | **Max** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max burst** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max burst** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Network bandwidth** | |||||||||||||| Disk throughput is measured in input/output operations per second (IOPS) and MBp Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to **ReadOnly** or **ReadWrite**. For uncached data disk operation, the host cache mode is set to **None**. -To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](https://learn.microsoft.com/azure/virtual-machines/disks-performance). +To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](/azure/virtual-machines/disks-performance). -**Expected network bandwidth** is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](https://learn.microsoft.com/azure/virtual-network/virtual-machine-network-throughput). +**Expected network bandwidth** is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](/azure/virtual-network/virtual-machine-network-throughput). -Upper limits aren't guaranteed. 
Limits offer guidance for selecting the right VM type for the intended application. Actual network performance depends on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](https://learn.microsoft.com/azure/virtual-network/virtual-network-optimize-network-bandwidth). To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](https://learn.microsoft.com/azure/virtual-network/virtual-network-bandwidth-testing). +Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance depends on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](/azure/virtual-network/virtual-network-optimize-network-bandwidth). To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](/azure/virtual-network/virtual-network-bandwidth-testing). |
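Before deploying one of these new Ddsv6 sizes, it can help to confirm regional availability and any zone restrictions first. A minimal Azure CLI sketch follows; the size name `Standard_D8ds_v6` and the `eastus` region are illustrative placeholders rather than values taken from this change summary.

```bash
# List the Ddsv6 SKUs offered in a region (size name and region are illustrative).
# Any restrictions, such as zones where the size isn't offered, appear in the output.
az vm list-skus \
  --location eastus \
  --size Standard_D8ds_v6 \
  --resource-type virtualMachines \
  --output table
```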
virtual-machines | Esv6 Edsv6 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/esv6-edsv6-series.md | These new Intel based VMs have two variants: Esv6 without local SSD and Edsv6 wi Esv6-series virtual machines run on the 5th Generation Intel® Xeon® Platinum 8473C (Emerald Rapids) processor reaching an all- core turbo clock speed of up to 3.0 GHz. These virtual machines offer up to 128 vCPU and 1024 GiB of RAM. Esv6-series virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage. -[Premium Storage](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Not Supported<br>[Premium Storage caching](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Not Supported<br>[Live Migration](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported<br>[Memory Preserving Updates](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported<br>[VM Generation Support](https://learn.microsoft.com/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](https://learn.microsoft.com/azure/virtual-network/create-vm-accelerated-networking-cli)<sup>1</sup>: Required<br>[Ephemeral OS Disks](https://learn.microsoft.com/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](https://learn.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported +[Premium Storage](/azure/virtual-machines/premium-storage-performance): Not Supported<br>[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Not Supported<br>[Live Migration](/azure/virtual-machines/maintenance-and-updates): Supported<br>[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported<br>[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli)<sup>1</sup>: Required<br>[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported | **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBPS (RR)** | **Max temp storage throughput: IOPS/MBPS (RW)** | **Max** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max burst** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max burst** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Network bandwidth** | |||||||||||||| Edsv6-series virtual machines run on the 5th Generation Intel® Xeon® Platinum Edsv6-series virtual machines support Standard SSD and Standard HDD disk types. To use Premium SSD or Ultra Disk storage, select Edsv6-series virtual machines. Disk storage is billed separately from virtual machines. [See pricing for disks](https://azure.microsoft.com/pricing/details/managed-disks/). 
-[Premium Storage](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported<br>[Premium Storage caching](https://learn.microsoft.com/azure/virtual-machines/premium-storage-performance): Supported<br>[Live Migration](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported<br>[Memory Preserving Updates](https://learn.microsoft.com/azure/virtual-machines/maintenance-and-updates): Supported<br>[VM Generation Support](https://learn.microsoft.com/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](https://learn.microsoft.com/azure/virtual-network/create-vm-accelerated-networking-cli)<sup>1</sup>: Required<br>[Ephemeral OS Disks](https://learn.microsoft.com/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](https://learn.microsoft.com/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported +[Premium Storage](/azure/virtual-machines/premium-storage-performance): Supported<br>[Premium Storage caching](/azure/virtual-machines/premium-storage-performance): Supported<br>[Live Migration](/azure/virtual-machines/maintenance-and-updates): Supported<br>[Memory Preserving Updates](/azure/virtual-machines/maintenance-and-updates): Supported<br>[VM Generation Support](/azure/virtual-machines/generation-2): Generation 2<br>[Accelerated Networking](/azure/virtual-network/create-vm-accelerated-networking-cli)<sup>1</sup>: Required<br>[Ephemeral OS Disks](/azure/virtual-machines/ephemeral-os-disks): Not Supported for Preview<br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Supported | **Size** | **vCPU** | **Memory: GiB** | **Temp storage (SSD) GiB** | **Max data disks** | **Max temp storage throughput: IOPS/MBPS (RR)** | **Max temp storage throughput: IOPS/MBPS (RW)** | **Max** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max burst** **uncached Premium SSD and Standard SSD/HDD disk throughput: IOPS/MBps** | **Max** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max burst** **uncached Ultra Disk and Premium SSD V2 disk throughput: IOPS/MBps** | **Max NICs** | **Network bandwidth** | |||||||||||||| Disk throughput is measured in input/output operations per second (IOPS) and MBp Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to **ReadOnly** or **ReadWrite**. For uncached data disk operation, the host cache mode is set to **None**. -To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](https://learn.microsoft.com/azure/virtual-machines/disks-performance). +To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](/azure/virtual-machines/disks-performance). -**Expected network bandwidth** is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](https://learn.microsoft.com/azure/virtual-network/virtual-machine-network-throughput). +**Expected network bandwidth** is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](/azure/virtual-network/virtual-machine-network-throughput). -Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. 
Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](https://learn.microsoft.com/azure/virtual-network/virtual-network-optimize-network-bandwidth). To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](https://learn.microsoft.com/azure/virtual-network/virtual-network-bandwidth-testing). +Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](/azure/virtual-network/virtual-network-optimize-network-bandwidth). To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](/azure/virtual-network/virtual-network-bandwidth-testing). |
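Because Accelerated Networking is listed as required for the Esv6 and Edsv6 sizes, the NIC attached to the VM needs it enabled. The following Azure CLI sketch shows one way to create such a NIC; all resource names are illustrative assumptions, not values from this change summary.

```bash
# Create a NIC with accelerated networking enabled before attaching it to an
# Esv6/Edsv6 VM (resource group, NIC, virtual network, and subnet names are illustrative).
az network nic create \
  --resource-group rg-demo \
  --name nic-demo \
  --vnet-name vnet-demo \
  --subnet subnet-demo \
  --accelerated-networking true
```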
virtual-machines | Enable Infiniband | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/enable-infiniband.md | |
virtual-machines | Hpc Compute Infiniband Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpc-compute-infiniband-linux.md | This extension supports the following OS distros, depending on driver support fo | Distribution | Version | InfiniBand NIC drivers | ||||-| Ubuntu | 18.04 LTS, 20.04 LTS | CX3-Pro, CX5, CX6 | +| Ubuntu | 18.04 LTS, 20.04 LTS, 22.04 LTS | CX3-Pro, CX5, CX6 | | CentOS | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8.2 | CX3-Pro, CX5, CX6 | | Red Hat Enterprise Linux | 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8.1, 8.2 | CX3-Pro, CX5, CX6 | |
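For reference, a hedged Azure CLI sketch of installing the InfiniBand driver extension on a VM running one of the supported distros is shown below. The resource names are illustrative, and the publisher and extension names are assumed from the extension's own documentation rather than from this change summary.

```bash
# Install the InfiniBand driver extension on an existing Linux VM
# (resource names are illustrative; publisher/type names are assumptions).
az vm extension set \
  --resource-group rg-demo \
  --vm-name vm-hpc \
  --publisher Microsoft.HpcCompute \
  --name InfiniBandDriverLinux
```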
virtual-machines | Hpc Compute Infiniband Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpc-compute-infiniband-windows.md | |
virtual-machines | Hpccompute Amd Gpu Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-amd-gpu-windows.md | |
virtual-machines | Hpccompute Gpu Linux | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-linux.md | |
virtual-machines | Hpccompute Gpu Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/hpccompute-gpu-windows.md | |
virtual-machines | Hb Hc Known Issues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-hc-known-issues.md | |
virtual-machines | Hb Series Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-series-overview.md | |
virtual-machines | Hb Series Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-series-performance.md | description: Learn about performance testing results for HB-series VM sizes in A Previously updated : 03/04/2023 Last updated : 07/25/2024 --++ # HB-series virtual machine sizes |
virtual-machines | Hb Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hb-series.md | description: Specifications for the HB-series VMs. Previously updated : 12/7/2023- Last updated : 07/25/2024++ # HB-series |
virtual-machines | Hbv2 Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-performance.md | |
virtual-machines | Hbv2 Series Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series-overview.md | |
virtual-machines | Hbv2 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv2-series.md | description: Specifications for the HBv2-series VMs. Previously updated : 12/7/2023- Last updated : 07/25/2024++ # HBv2-series |
virtual-machines | Hbv3 Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-performance.md | |
virtual-machines | Hbv3 Series Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-series-overview.md | Title: HBv3-series VM overview, architecture, topology - Azure Virtual Machines | Microsoft Docs description: Learn about the HBv3-series VM size in Azure. -+ Previously updated : 04/21/2023 Last updated : 07/25/2024 --++ # HBv3-series virtual machine overview |
virtual-machines | Hbv3 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv3-series.md | description: Specifications for the HBv3-series VMs. Previously updated : 12/7/2023 Last updated : 07/25/2024 + # HBv3-series |
virtual-machines | Hbv4 Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv4-performance.md | |
virtual-machines | Hbv4 Series Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv4-series-overview.md | |
virtual-machines | Hbv4 Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hbv4-series.md | |
virtual-machines | Hc Series Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hc-series-overview.md | |
virtual-machines | Hc Series Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hc-series-performance.md | description: Learn about performance testing results for HC-series VM sizes in A Previously updated : 03/04/2023 Last updated : 07/25/2024 -+ |
virtual-machines | Hc Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hc-series.md | description: Specifications for the HC-series VMs. Previously updated : 12/7/2023- Last updated : 07/25/2024++ # HC-series |
virtual-machines | Hx Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hx-performance.md | |
virtual-machines | Hx Series Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hx-series-overview.md | |
virtual-machines | Hx Series | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hx-series.md | |
virtual-machines | Run Command Managed | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/run-command-managed.md | The *updated* managed Run Command uses the same VM agent channel to execute scri ## Prerequisites +> [!IMPORTANT] +> The minimum supported Linux Guest Agent is version 2.4.0.2. +> Older versions don't support managed Run Command. + ### Linux Distros Supported | **Linux Distro** | **x64** | **ARM64** | |:--|:--:|:--:| |
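To check that a VM meets the agent prerequisite above before invoking managed Run Command, something like the following can be used. The resource names and script body are illustrative assumptions, and the exact CLI shape may differ by Azure CLI version.

```bash
# On the VM: confirm the Linux Guest Agent meets the documented minimum (2.4.0.2).
waagent --version

# From a management host: create a managed Run Command against the VM
# (resource group, VM name, run command name, and script are illustrative).
az vm run-command create \
  --resource-group rg-demo \
  --vm-name vm-demo \
  --name sampleRunCommand \
  --script "echo Hello from managed Run Command"
```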
virtual-machines | Overview Hb Hc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/overview-hb-hc.md | description: Learn about the features and capabilities of InfiniBand enabled HB- Previously updated : 03/10/2023 Last updated : 07/25/2024 |
virtual-machines | Set Up Hpc Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/set-up-hpc-vms.md | |
virtual-machines | Setup Mpi | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/setup-mpi.md | description: Learn how to set up MPI for HPC on Azure. Previously updated : 03/10/2023 Last updated : 07/25/2024 |
virtual-machines | Centos End Of Life | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/centos/centos-end-of-life.md | OpenLogic by Perforce Azure Marketplace offers: - [CentOS-based HPC](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-hpc?tab=Overview) -- [CentOS-based LVM](https://azuremarketplace.microsoft.com/marketplace/apps/openlogic.centos-lvm?tab=Overview)- These are the official / endorsed CentOS images in Azure, and don't have software billing information associated. They're candidates for an in-place conversion (after a backup and any necessary prerequisites and updates). **Other Azure Marketplace offers** |
virtual-network | How To Virtual Machine Mtu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/how-to-virtual-machine-mtu.md | + + Title: Configure MTU for virtual machines in Azure ++description: Get started with this how-to article to configure Maximum Transmission Unit (MTU) for Linux and Windows in Azure. ++++ Last updated : 07/22/2024++#customer intent: As a network administrator, I want to change the MTU for my Linux or Windows virtual machine so that I can optimize network performance. ++++# Configure Maximum Transmission Unit (MTU) for virtual machines in Azure ++The Maximum Transmission Unit (MTU) is the largest Ethernet frame (packet) that a network device or interface transmits. If a packet exceeds the largest size the device accepts, the packet is fragmented into multiple smaller packets and later reassembled at the destination. ++Fragmentation and reassembly can introduce performance and ordering issues, resulting in a suboptimal experience. Configuring a larger MTU can improve network throughput because it reduces the number of packets, and the header overhead, required to send a dataset. ++The MTU is a configurable setting in a virtual machine's operating system. The default MTU setting in Azure is 1,500 bytes. ++VMs in Azure can support an MTU larger than the 1,500-byte default only for traffic that stays within the virtual network. ++The following table shows the largest MTU size supported on the network interfaces available in Azure: ++| Operating System | Network Interface | Largest MTU for intra-virtual network traffic | +||-|--| +| Windows Server | Mellanox Cx-3, Cx-4, Cx-5 | 3900 <br> **When setting the MTU value with `Set-NetAdapterAdvancedProperty`, use the value `4088`.** **To persist reboots, the value returned by `Test-Connection` must also be set with `Set-NetIPInterface`.** | +| Windows Server | (Preview) Microsoft Azure Network Adapter (MANA) | 9000 <br> **When setting the MTU value with `Set-NetAdapterAdvancedProperty`, use the value `9014`.** **To persist reboots, the value returned by `Test-Connection` must also be set with `Set-NetIPInterface`.** | +| Linux | Mellanox Cx-3, Cx-4, Cx-5 | 3900 | +| Linux | (Preview) Microsoft Azure Network Adapter (MANA) | 9000 | ++## Prerequisites ++# [Linux](#tab/linux) ++- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). ++- Two Linux virtual machines in the same virtual network in Azure. For more information about creating a Linux virtual machine, see [Create a Linux virtual machine in the Azure portal](/azure/virtual-machines/linux/quick-create-portal). Remote access to the virtual machines is required to complete the article. For more information about connecting to Azure Virtual Machines securely, see [What is Azure Bastion?](/azure/bastion/bastion-overview) ++ - For the purposes of this article, the virtual machines are named **vm-1** and **vm-2**. Replace these values with your values. ++# [Windows](#tab/windows) ++- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
++- Two Windows Server virtual machines in the same virtual network in Azure. For more information about creating a Windows Server virtual machine, see [Create a Windows virtual machine in the Azure portal](/azure/virtual-machines/windows/quick-create-portal). Remote access to the virtual machines is required to complete the article. For more information about connecting to Azure Virtual Machines securely, see [What is Azure Bastion?](/azure/bastion/bastion-overview) ++ - For the purposes of this article, the virtual machines are named **vm-1** and **vm-2**. Replace these values with your values. ++- The newest version of PowerShell installed in the Windows Server virtual machines. The commands in this article don't work with the version of PowerShell included with Windows Server. For more information, see [Installing PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows). ++++## Resource examples ++The following resources are used as examples in this article. Replace these values with your values. ++| Resource | Name | IP Address | +|-|-|-| +| **Virtual Machine 1** | vm-1 | 10.0.0.4 | +| **Virtual Machine 2** | vm-2 | 10.0.0.5 | ++## Precautions ++- Virtual machines in Azure can support a larger MTU than the 1,500-byte default only for traffic that stays within the virtual network. A larger MTU isn't supported for scenarios outside of intra-virtual network VM-to-VM traffic. Traffic that traverses gateways or peerings, or that goes to the internet, might not be supported. Configuration of a larger MTU can result in fragmentation and reduced performance. For these scenarios, use the default 1,500-byte MTU unless testing confirms that a larger MTU is supported across the entire network path. ++- Optimal MTU is operating system, network, and application specific. The maximum supported MTU might not be optimal for your use case. ++- Always test MTU setting changes in a noncritical environment first before applying them broadly or to critical environments. ++## Path MTU Discovery ++It's important to understand the MTU supported across the network path that your application or machines use. Path MTU discovery is a means to find out the largest MTU supported between a source and destination address. Using a larger MTU than is supported between the source and destination address results in fragmentation, which could negatively affect performance. ++The examples in this article test the MTU path between two virtual machines. Subsequent tests can be performed from a virtual machine to any routable destination. ++Use the following steps to set a larger MTU size on a source and destination virtual machine. Verify the path MTU with a shell script for Linux or PowerShell for Windows. If the larger MTU isn't supported, the results shown in the path MTU discovery test differ from the settings configured on the source or destination virtual machine interface. ++# [Linux](#tab/linux) ++The shell script is available in the Azure samples gallery. Download the script for Linux from the following link and save it to **vm-1** and **vm-2**. ++- [GetPathMTU - Path MTU Discovery Sample Script](/samples/azure-samples/getpathmtu/getpathmtu/) ++Use the following steps to change the MTU size on a Linux virtual machine: ++1. Sign-in to **vm-1**. ++1. Use the `ip` command to show the current network interfaces and their MTU settings. Record the IP address for the subsequent steps. In this example, the IP address is **10.0.0.4** and the Ethernet interface is **eth0**.
++ ```bash + ip address show + ``` ++ ```output + azureuser@vm-1:~$ ip address show + 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host + valid_lft forever preferred_lft forever + 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 + link/ether 00:0d:3a:c5:f3:14 brd ff:ff:ff:ff:ff:ff + inet 10.0.0.4/24 metric 100 brd 10.0.0.255 scope global eth0 + valid_lft forever preferred_lft forever + inet6 fe80::20d:3aff:fec5:f314/64 scope link + valid_lft forever preferred_lft forever + 3: enP46433s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP group default qlen 1000 + link/ether 00:0d:3a:c5:f3:14 brd ff:ff:ff:ff:ff:ff + altname enP46433p0s2 + inet6 fe80::20d:3aff:fec5:f314/64 scope link + valid_lft forever preferred_lft forever + ``` ++1. Set the MTU value on **vm-1** to the highest value supported by the network interface. In this example, the name of the network interface is **eth0**. Replace this value with your value. ++ * For the Mellanox adapter, use the following example to set the MTU value to **3900**: ++ ```bash + echo '3900' | sudo tee /sys/class/net/eth0/mtu || echo "failed: $?" + ``` ++ * For the Microsoft Azure Network Adapter, use the following example to set the MTU value to **9000**: ++ ```bash + echo '9000' | sudo tee /sys/class/net/eth0/mtu || echo "failed: $?" + ``` ++ >[!IMPORTANT] + > The MTU changes made in the previous steps don't persist during a reboot. To make the changes permanent, consult the appropriate documentation for your Linux distribution. ++1. Use the `ip` command to verify that the MTU settings are applied to the network interface: ++ ```bash + ip address show + ``` ++ ```output + azureuser@vm-1:~$ ip address show + 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host + valid_lft forever preferred_lft forever + 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 3900 qdisc mq state UP group default qlen 1000 + link/ether 00:0d:3a:c5:f3:14 brd ff:ff:ff:ff:ff:ff + inet 10.0.0.4/24 metric 100 brd 10.0.0.255 scope global eth0 + valid_lft forever preferred_lft forever + inet6 fe80::20d:3aff:fec5:f314/64 scope link + valid_lft forever preferred_lft forever + 3: enP46433s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 3900 qdisc mq master eth0 state UP group default qlen 1000 + link/ether 00:0d:3a:c5:f3:14 brd ff:ff:ff:ff:ff:ff + altname enP46433p0s2 + inet6 fe80::20d:3aff:fec5:f314/64 scope link + valid_lft forever preferred_lft forever + ``` ++1. Sign-in to **vm-2** to repeat the previous steps to set the MTU value to the highest value supported by the network interface. ++1. Sign-in to **vm-1**. ++1. Use the following example to execute the Linux shell script to test the largest MTU size that can be used for a specific network path. Replace the value of the destination host with the IP address of **vm-2**. ++ ```bash + ./GetPathMtu.sh 10.0.0.5 + ``` ++1. The output is similar to the following example. If the script's output doesn't display the setting on the network interface, it indicates that the MTU size isn't set correctly. 
Alternatively, it could mean that a network device along the path only supports the MTU size returned by the GetPathMTU script. ++ ```output + azureuser@vm-1:~/GetPathMTU$ ./GetPathMtu.sh 10.0.0.5 + destination: 10.0.0.5 + startSendBufferSize: 1200 + interfaceName: Default interface + Test started .................................................................................................................................................................................................... + 3900 + ``` ++1. Verify the MTU size on the network interface using `PING`. For Linux, use the -M, -s, and -c flags. The -M option instructs ping to NOT fragment, -s sets the packet size, and -c sets the number of pings to send. To determine the packet size, subtract 28 from the MTU setting of 3900. + + ```bash + ping 10.0.0.5 -c 10 -M do -s 3872 + ``` + + ```output + azureuser@vm-1:~/GetPathMTU$ ping 10.0.0.5 -c 10 -M do -s 3872 + PING 10.0.0.5 (10.0.0.5) 3872(3900) bytes of data. + 3880 bytes from 10.0.0.5: icmp_seq=1 ttl=64 time=3.70 ms + 3880 bytes from 10.0.0.5: icmp_seq=2 ttl=64 time=1.08 ms + 3880 bytes from 10.0.0.5: icmp_seq=3 ttl=64 time=1.51 ms + 3880 bytes from 10.0.0.5: icmp_seq=4 ttl=64 time=1.25 ms + 3880 bytes from 10.0.0.5: icmp_seq=5 ttl=64 time=1.29 ms + 3880 bytes from 10.0.0.5: icmp_seq=6 ttl=64 time=1.05 ms + 3880 bytes from 10.0.0.5: icmp_seq=7 ttl=64 time=5.67 ms + 3880 bytes from 10.0.0.5: icmp_seq=8 ttl=64 time=1.92 ms + 3880 bytes from 10.0.0.5: icmp_seq=9 ttl=64 time=2.72 ms + 3880 bytes from 10.0.0.5: icmp_seq=10 ttl=64 time=1.20 ms ++ 10.0.0.5 ping statistics + 10 packets transmitted, 10 received, 0% packet loss, time 9014ms + rtt min/avg/max/mdev = 1.051/2.138/5.666/1.426 ms + ``` ++ An indication that there is a mismatch in settings between the source and destination displays as an error message in the output. In this case, the MTU isn't set on the source network interface. ++ ```output + azureuser@vm-1:~/GetPathMTU$ ping 10.0.0.5 -c 10 -M do -s 3872 + PING 10.0.0.5 (10.0.0.5) 3872(3900) bytes of data. + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 ++ 10.0.0.5 ping statistics + 10 packets transmitted, 0 received, +10 errors, 100% packet loss, time 9248ms + ``` ++1. Sign-in to **vm-2**. ++1. Use the following example to run the Linux shell script to test the largest MTU size that can be used for a specific network path: ++ ```bash + ./GetPathMtu.sh 10.0.0.4 + ``` ++1. The output is similar to the following example. If the script's output doesn't display the setting on the network interface, it indicates that the MTU size isn't set correctly. Alternatively, it could mean that a network device along the path only supports the MTU size returned by the GetPathMTU script. ++ ```output + azureuser@vm-1:~/GetPathMTU$ ./GetPathMtu.sh 10.0.0.4 + destination: 10.0.0.4 + startSendBufferSize: 1200 + interfaceName: Default interface + Test started .................................................................................................................................................................................................... + 3900 + ``` ++1. 
Verify the MTU size on the network interface using `PING`. For Linux, use the -M, -s, and -c flags. The -M option instructs ping to NOT fragment, -s sets the packet size, and -c sets the number of pings to send. To determine the packet size, subtract 28 from the MTU setting of 3900. + + ```bash + ping 10.0.0.4 -c 10 -M do -s 3872 + ``` + + ```output + azureuser@vm-2:~/GetPathMTU$ ping 10.0.0.4 -c 10 -M do -s 3872 + PING 10.0.0.4 (10.0.0.4) 3872(3900) bytes of data. + 3880 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=3.70 ms + 3880 bytes from 10.0.0.4: icmp_seq=2 ttl=64 time=1.08 ms + 3880 bytes from 10.0.0.4: icmp_seq=3 ttl=64 time=1.51 ms + 3880 bytes from 10.0.0.4: icmp_seq=4 ttl=64 time=1.25 ms + 3880 bytes from 10.0.0.4: icmp_seq=5 ttl=64 time=1.29 ms + 3880 bytes from 10.0.0.4: icmp_seq=6 ttl=64 time=1.05 ms + 3880 bytes from 10.0.0.4: icmp_seq=7 ttl=64 time=5.67 ms + 3880 bytes from 10.0.0.4: icmp_seq=8 ttl=64 time=1.92 ms + 3880 bytes from 10.0.0.4: icmp_seq=9 ttl=64 time=2.72 ms + 3880 bytes from 10.0.0.4: icmp_seq=10 ttl=64 time=1.20 ms ++ 10.0.0.4 ping statistics + 10 packets transmitted, 10 received, 0% packet loss, time 9014ms + rtt min/avg/max/mdev = 1.051/2.138/5.666/1.426 ms + ``` ++ An indication that there is a mismatch in settings between the source and destination displays as an error message in the output. In this case, the MTU isn't set on the source network interface. ++ ```output + azureuser@vm-2:~/GetPathMTU$ ping 10.0.0.4 -c 10 -M do -s 3872 + PING 10.0.0.4 (10.0.0.4) 3872(3900) bytes of data. + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 + ping: local error: message too long, mtu=1500 ++ 10.0.0.4 ping statistics + 10 packets transmitted, 0 received, +10 errors, 100% packet loss, time 9248ms + ``` ++# [Windows](#tab/windows) ++Use PowerShell to test the connection and MTU size between **vm-1** and **vm-2**. ++>[!IMPORTANT] +> You must have the newest version of PowerShell installed in the Windows Server virtual machines. The commands in this article don't work with the version of PowerShell included with Windows Server. For more information, see [Installing PowerShell on Windows](/powershell/scripting/install/installing-powershell-on-windows). ++Use the following steps to change the MTU size on a Windows Server virtual machine: ++1. Sign-in to **vm-1**. ++1. Open a PowerShell window as an administrator. ++1. Use `Get-NetIPAddress` to show the IP address of **vm-1**. Record the IP address for the subsequent steps. In this example, the IP address is **10.0.0.4**.
++ ```powershell + Get-NetIPAddress -AddressFamily IPv4 + ``` ++ ```output + PS C:\Users\azureuser> Get-NetIPAddress -AddressFamily IPv4 ++ IPAddress : 10.0.0.4 + InterfaceIndex : 7 + InterfaceAlias : Ethernet + AddressFamily : IPv4 + Type : Unicast + PrefixLength : 24 + PrefixOrigin : Dhcp + SuffixOrigin : Dhcp + AddressState : Preferred + ValidLifetime : Infinite ([TimeSpan]::MaxValue) + PreferredLifetime : Infinite ([TimeSpan]::MaxValue) + SkipAsSource : False + PolicyStore : ActiveStore ++ IPAddress : 127.0.0.1 + InterfaceIndex : 1 + InterfaceAlias : Loopback Pseudo-Interface 1 + AddressFamily : IPv4 + Type : Unicast + PrefixLength : 8 + PrefixOrigin : WellKnown + SuffixOrigin : WellKnown + AddressState : Preferred + ValidLifetime : Infinite ([TimeSpan]::MaxValue) + PreferredLifetime : Infinite ([TimeSpan]::MaxValue) + SkipAsSource : False + PolicyStore : ActiveStore + ``` ++1. Use `Get-NetAdapter` in the following example to display the current network interfaces. ++ ```powershell + Get-NetAdapter + ``` ++ ```output + PS C:\Users\azureuser> Get-NetAdapter ++ Name InterfaceDescription ifIndex Status MacAddress LinkSpeed + - -- - - + Ethernet 2 Mellanox ConnectX-5 Virtual Adapter 10 Up 60-45-BD-CC-77-01 100 Gbps + Ethernet Microsoft Hyper-V Network Adapter 6 Up 60-45-BD-CC-77-01 100 Gbps + ``` ++ The virtual machine has two network interfaces displayed in the output. ++1. Record the value of the MAC address of the network interface and the name. You'll need these values for the next step. For the purposes of this article, the example values are **60-45-BD-CC-77-01** and **Ethernet 2**. Replace the values with your own values. ++1. Use the following example to display the current MTU value for the network interface. ++ ```powershell + Get-NetAdapter -Name "Ethernet 2" | Format-List -Property MtuSize + ``` + + ```output + PS C:\Users\azureuser> Get-NetAdapter -Name "Ethernet 2" | Format-List -Property MtuSize ++ MtuSize : 1500 + ``` ++1. Windows virtual machines support both the Mellanox interface and the Microsoft Azure Network Adapter. + + * To set the value on the Mellanox interface, use the following example to set the MTU value to **4088**. Replace the value of the MAC address with your own value. ++ ```powershell + Get-NetAdapter | ? {$_.MacAddress -eq "60-45-BD-CC-77-01"} | Set-NetAdapterAdvancedProperty -RegistryKeyword "*JumboPacket" -RegistryValue 4088 + ``` ++ * To set the value on the Microsoft Azure Network Adapter, use the following example to set the MTU value to **9014**. Replace the value of the MAC address with your own value. ++ ```powershell + Get-NetAdapter | ? {$_.MacAddress -eq "60-45-BD-CC-77-01"} | Set-NetAdapterAdvancedProperty -RegistryKeyword "*JumboPacket" -RegistryValue 9014 + ``` ++1. Use the following example to verify that the MTU value is set on the network interface. ++ ```powershell + Get-NetAdapter -Name "Ethernet 2" | Format-List -Property MtuSize + ``` ++ ```output + PS C:\Users\azureuser> Get-NetAdapter -Name "Ethernet 2" | Format-List -Property MtuSize ++ MtuSize : 4074 + ``` ++1. Internet Control Message Protocol (ICMP) traffic is required between the source and destination to test the MTU size. Use the following example to enable ICMP traffic on **vm-1**: ++ ```powershell + Set-NetFirewallRule -DisplayName 'File and Printer Sharing (Echo Request - ICMPv4-In)' -enabled True + ``` ++1. Sign-in to **vm-2** to repeat the previous steps to set the MTU value to the highest value supported by the network interface. ++1. Sign-in to **vm-1**. ++1.
Open a PowerShell window as an administrator. ++1. Use the following example to execute the PowerShell command `Test-Connection` to test the network path. Replace the value of the destination host with the IP address of **vm-2**. ++ ```powershell + Test-Connection -TargetName 10.0.0.5 -MtuSize + ``` ++1. The output is similar to the following example. If the command's output doesn't display the setting on the network interface, it indicates that the MTU size isn't set correctly. Alternatively, it could mean that a network device along the path only supports the MTU size returned by the `Test-Connection` command. ++ ```output + PS C:\Users\azureuser> Test-Connection -TargetName 10.0.0.5 -MtuSize ++ Destination: 10.0.0.5 ++ Source Address Latency Status MtuSize + (ms) (B) + - - - + vm-1 10.0.0.5 1 Success 3892 + ``` ++1. Verify the MTU size on the network interface using `PING`. For Windows, use -f and -l. The -f option instructs ping to NOT fragment and -l sets the packet size. Use the value returned by the `Test-Connection` command for the MtuSize property. In this example, it's **3892**. ++ ```powershell + ping 10.0.0.5 -f -l 3892 + ``` + + ```output + PS C:\Users\azureuser> ping 10.0.0.5 -f -l 3892 ++ Pinging 10.0.0.5 with 3892 bytes of data: + Reply from 10.0.0.5: bytes=3892 time=1ms TTL=128 + Reply from 10.0.0.5: bytes=3892 time<1ms TTL=128 + Reply from 10.0.0.5: bytes=3892 time=1ms TTL=128 + Reply from 10.0.0.5: bytes=3892 time=1ms TTL=128 ++ Ping statistics for 10.0.0.5: + Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), + Approximate round trip times in milli-seconds: + Minimum = 0ms, Maximum = 1ms, Average = 0ms + ``` ++ An indication that there is a mismatch in settings between the source and destination displays as an error message in the output. In this case, the MTU isn't set on the source network interface. ++ ```output + PS C:\Users\azureuser> ping 10.0.0.5 -f -l 3892 ++ Pinging 10.0.0.5 with 3892 bytes of data: + Packet needs to be fragmented but DF set. + Packet needs to be fragmented but DF set. + Packet needs to be fragmented but DF set. + Packet needs to be fragmented but DF set. ++ Ping statistics for 10.0.0.5: + Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), + ``` ++1. Use `Get-NetIPInterface` to determine the interface alias and the current MTU value. ++ ```powershell + Get-NetIPInterface + ``` ++ ```output + PS C:\Users\azureuser> Get-NetIPInterface ++ ifIndex InterfaceAlias AddressFamily NlMtu(Bytes) InterfaceMetric Dhcp ConnectionState PolicyStore + - -- - - -- + 6 Ethernet IPv6 4074 10 Enabled Connected ActiveStore + 1 Loopback Pseudo-Interface 1 IPv6 4294967295 75 Disabled Connected ActiveStore + 6 Ethernet IPv4 4074 10 Enabled Connected ActiveStore + 1 Loopback Pseudo-Interface 1 IPv4 4294967295 75 Enabled Connected ActiveStore + ``` ++ In the example, the interface alias is **Ethernet** and the MTU value is **4074**. ++1. Use `Set-NetIPInterface` to set the MTU value for **vm-1** so that it persists reboots. For the MTU value, **3892** is used in this example. Replace this value with the value returned by the `Test-Connection` command. The interface alias is **Ethernet** in this example. Replace this value with your value. ++ * Mellanox interface: + + ```powershell + Set-NetIPInterface -InterfaceAlias "Ethernet" -NlMtuBytes 3892 + ``` + + * Microsoft Azure Network Adapter: + + ```powershell + Set-NetIPInterface -InterfaceAlias "Ethernet" -NlMtuBytes 9000 + ``` ++1. Use `Get-NetIPInterface` to verify that the MTU was set with `Set-NetIPInterface`.
++ ```powershell + Get-NetIPInterface -InterfaceAlias "Ethernet" + ``` ++ ```output + PS C:\Users\azureuser> Get-NetIPInterface -InterfaceAlias "Ethernet" ++ ifIndex InterfaceAlias AddressFamily NlMtu(Bytes) InterfaceMetric Dhcp ConnectionState PolicyStore + - -- - - -- + 6 Ethernet IPv6 3892 10 Enabled Connected ActiveStore + 6 Ethernet IPv4 3892 10 Enabled Connected ActiveStore + ``` ++1. Sign-in to **vm-2**. ++1. Open a PowerShell window as an administrator. ++1. Use the following example to execute the PowerShell command `Test-Connection` to test the network path. Replace the value of the destination host with the IP address of **vm-1**. ++ ```powershell + Test-Connection -TargetName 10.0.0.4 -MtuSize + ``` ++1. The output is similar to the following example. If the command's output doesn't display the setting on the network interface, it indicates that the MTU size isn't set correctly. Alternatively, it could mean that a network device along the path only supports the MTU size returned by the `Test-Connection` command. ++ ```output + PS C:\Users\azureuser> Test-Connection -TargetName 10.0.0.4 -MtuSize ++ Destination: 10.0.0.4 ++ Source Address Latency Status MtuSize + (ms) (B) + - - - + vm-2 10.0.0.4 1 Success 3892 + ``` ++1. Verify the MTU size on the network interface using `PING`. For Windows, use -f and -l. The -f option instructs ping to NOT fragment and -l sets the packet size. Use the value returned by the `Test-Connection` command for the MtuSize property. In this example, it's **3892**. ++ ```powershell + ping 10.0.0.4 -f -l 3892 + ``` + + ```output + PS C:\Users\azureuser> ping 10.0.0.4 -f -l 3892 ++ Pinging 10.0.0.4 with 3892 bytes of data: + Reply from 10.0.0.4: bytes=3892 time=1ms TTL=128 + Reply from 10.0.0.4: bytes=3892 time<1ms TTL=128 + Reply from 10.0.0.4: bytes=3892 time=1ms TTL=128 + Reply from 10.0.0.4: bytes=3892 time=1ms TTL=128 ++ Ping statistics for 10.0.0.4: + Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), + Approximate round trip times in milli-seconds: + Minimum = 0ms, Maximum = 1ms, Average = 0ms + ``` ++ An indication that there is a mismatch in settings between the source and destination displays as an error message in the output. In this case, the MTU isn't set on the source network interface. ++ ```output + PS C:\Users\azureuser> ping 10.0.0.4 -f -l 3892 ++ Pinging 10.0.0.4 with 3892 bytes of data: + Packet needs to be fragmented but DF set. + Packet needs to be fragmented but DF set. + Packet needs to be fragmented but DF set. + Packet needs to be fragmented but DF set. ++ Ping statistics for 10.0.0.4: + Packets: Sent = 4, Received = 0, Lost = 4 (100% loss), + ``` ++1. Use `Get-NetIPInterface` to determine the interface alias and the current MTU value. ++ ```powershell + Get-NetIPInterface + ``` ++ ```output + PS C:\Users\azureuser> Get-NetIPInterface ++ ifIndex InterfaceAlias AddressFamily NlMtu(Bytes) InterfaceMetric Dhcp ConnectionState PolicyStore + - -- - - -- + 6 Ethernet IPv6 4074 10 Enabled Connected ActiveStore + 1 Loopback Pseudo-Interface 1 IPv6 4294967295 75 Disabled Connected ActiveStore + 6 Ethernet IPv4 4074 10 Enabled Connected ActiveStore + 1 Loopback Pseudo-Interface 1 IPv4 4294967295 75 Enabled Connected ActiveStore + ``` ++ In the example, the interface alias is **Ethernet** and the MTU value is **4074**. ++1. Use `Set-NetIPInterface` to set the MTU value for **vm-2** so that it persists reboots. For the MTU value, **3892** is used in this example. Replace this value with the value returned by the `Test-Connection` command. The interface alias is **Ethernet** in this example.
Replace this value with your value. ++ * Mellanox interface: + + ```powershell + Set-NetIPInterface -InterfaceAlias "Ethernet" -NlMtuBytes 3892 + ``` + + * Microsoft Azure Network Adapter: + + ```powershell + Set-NetIPInterface -InterfaceAlias "Ethernet" -NlMtuBytes 9000 + ``` ++1. Use `Get-NetIPInterface` to verify that the MTU was set with `Set-NetIPInterface`. ++ ```powershell + Get-NetIPInterface -InterfaceAlias "Ethernet" + ``` ++ ```output + PS C:\Users\azureuser> Get-NetIPInterface -InterfaceAlias "Ethernet" ++ ifIndex InterfaceAlias AddressFamily NlMtu(Bytes) InterfaceMetric Dhcp ConnectionState PolicyStore + - -- - - -- + 6 Ethernet IPv6 3892 10 Enabled Connected ActiveStore + 6 Ethernet IPv4 3892 10 Enabled Connected ActiveStore + ``` +++## Revert changes ++To revert the changes made in this article, use the following steps: ++# [Linux](#tab/linux) ++1. Sign-in to **vm-1**. ++1. Use the following example to set the MTU value to the default value of **1500**: ++ ```bash + echo '1500' | sudo tee /sys/class/net/eth0/mtu || echo "failed: $?" + ``` ++ >[!IMPORTANT] + > The MTU changes made in the previous steps don't persist during a reboot. To make the changes permanent, consult the appropriate documentation for your Linux distribution. ++1. Use the `ip` command to verify that the MTU settings are applied to the network interface: ++ ```bash + ip address show + ``` ++ ```output + azureuser@vm-1:~$ ip address show + 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 + link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 + inet 127.0.0.1/8 scope host lo + valid_lft forever preferred_lft forever + inet6 ::1/128 scope host + valid_lft forever preferred_lft forever + 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000 + link/ether 00:0d:3a:c5:f3:14 brd ff:ff:ff:ff:ff:ff + inet 10.0.0.4/24 metric 100 brd 10.0.0.255 scope global eth0 + valid_lft forever preferred_lft forever + inet6 fe80::20d:3aff:fec5:f314/64 scope link + valid_lft forever preferred_lft forever + 3: enP46433s1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP group default qlen 1000 + link/ether 00:0d:3a:c5:f3:14 brd ff:ff:ff:ff:ff:ff + altname enP46433p0s2 + inet6 fe80::20d:3aff:fec5:f314/64 scope link + valid_lft forever preferred_lft forever + ``` ++1. Sign-in to **vm-2** to repeat the previous steps to set the MTU value to the default value of **1500**. ++# [Windows](#tab/windows) ++1. Sign-in to **vm-1**. ++1. Open a PowerShell window as an administrator. ++1. Use the following example to set the MTU value to the default value of **1500**: ++ ```powershell + Get-NetAdapter | ? {$_.MacAddress -eq "60-45-BD-CC-77-01"} | Set-NetAdapterAdvancedProperty -RegistryKeyword "*JumboPacket" -RegistryValue 1514 + ``` ++1. Use the following example to verify that the MTU value is set on the network interface: ++ ```powershell + Get-NetAdapter -Name "Ethernet 2" | Format-List -Property MtuSize + ``` ++ ```output + PS C:\Users\azureuser> Get-NetAdapter -Name "Ethernet 2" | Format-List -Property MtuSize ++ MtuSize : 1500 + ``` ++1. Use `Set-NetIPInterface` on **vm-1** so that the default MTU value persists reboots. ++ ```powershell + Set-NetIPInterface -InterfaceAlias "Ethernet" -NlMtuBytes 1500 + ``` ++1. Sign-in to **vm-2** to repeat the previous steps to set the MTU value to the default value of **1500**.
++++## Related content ++* [Microsoft Azure Network Adapter (MANA) overview](/azure/virtual-network/accelerated-networking-mana-overview). + |
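The article points out that Linux MTU changes made through `/sys` don't survive a reboot and defers to distribution documentation for making them permanent. As one hedged example, on a netplan-managed distribution such as Ubuntu a drop-in file can carry the MTU setting; the interface name, MTU value, and file name below are illustrative assumptions, and other distributions use different mechanisms.

```bash
# Persist the MTU on a netplan-managed distribution (for example, Ubuntu).
# The drop-in merges with the existing netplan configuration for eth0.
sudo tee /etc/netplan/99-custom-mtu.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      mtu: 3900
EOF
sudo netplan apply
```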
virtual-network | Virtual Network Encryption Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-encryption-overview.md | Virtual network encryption has the following requirements: | Type | VM Series | VM SKU | | | | |- | General purpose workloads | D-series V4 </br> D-series V5 | **[Dv4 and Dsv4-series](/azure/virtual-machines/dv4-dsv4-series)** </br> **[Ddv4 and Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series)** </br> **[Dav4 and Dasv4-series](/azure/virtual-machines/dav4-dasv4-series)** </br> **[Dv5 and Dsv5-series](/azure/virtual-machines/dv5-dsv5-series)** </br> **[Ddv5 and Ddsv5-series](/azure/virtual-machines/ddv5-ddsv5-series)** </br> **[Dlsv5 and Dldsv5-series](/azure/virtual-machines/dlsv5-dldsv5-series)** </br> **[Dasv5 and Dadsv5-series](/azure/virtual-machines/dasv5-dadsv5-series)** | - | General purpose and memory intensive workloads | E-series V4 </br> E-series V5 | **[Ev4 and Esv4-series](/azure/virtual-machines/ev4-esv4-series)** </br> **[Edv4 and Edsv4-series](/azure/virtual-machines/edv4-edsv4-series)** </br> **[Eav4 and Easv4-series](/azure/virtual-machines/eav4-easv4-series)** </br> **[Ev5 and Esv5-series](/azure/virtual-machines/ev5-esv5-series)** </br> **[Edv5 and Edsv5-series](/azure/virtual-machines/edv5-edsv5-series)** </br> **[Easv5 and Eadsv5-series](/azure/virtual-machines/easv5-eadsv5-series)** | + | General purpose workloads | D-series V4 </br> D-series V5 </br> D-series V6 | **[Dv4 and Dsv4-series](/azure/virtual-machines/dv4-dsv4-series)** </br> **[Ddv4 and Ddsv4-series](/azure/virtual-machines/ddv4-ddsv4-series)** </br> **[Dav4 and Dasv4-series](/azure/virtual-machines/dav4-dasv4-series)** </br> **[Dv5 and Dsv5-series](/azure/virtual-machines/dv5-dsv5-series)** </br> **[Ddv5 and Ddsv5-series](/azure/virtual-machines/ddv5-ddsv5-series)** </br> **[Dlsv5 and Dldsv5-series](/azure/virtual-machines/dlsv5-dldsv5-series)** </br> **[Dasv5 and Dadsv5-series](/azure/virtual-machines/dasv5-dadsv5-series)** </br> **[Dasv6 and Dadsv6-series](/azure/virtual-machines/dasv6-dadsv6-series)** </br> **[Dalsv6 and Daldsv6-series](/azure/virtual-machines/dalsv6-daldsv6-series)** | + | General purpose and memory intensive workloads | E-series V4 </br> E-series V5 </br> E-series V6 | **[Ev4 and Esv4-series](/azure/virtual-machines/ev4-esv4-series)** </br> **[Edv4 and Edsv4-series](/azure/virtual-machines/edv4-edsv4-series)** </br> **[Eav4 and Easv4-series](/azure/virtual-machines/eav4-easv4-series)** </br> **[Ev5 and Esv5-series](/azure/virtual-machines/ev5-esv5-series)** </br> **[Edv5 and Edsv5-series](/azure/virtual-machines/edv5-edsv5-series)** </br> **[Easv5 and Eadsv5-series](/azure/virtual-machines/easv5-eadsv5-series)** </br> **[Easv6 and Eadsv6-series](/azure/virtual-machines/easv6-eadsv6-series)** | | Storage intensive workloads | LSv3 | **[LSv3-series](/azure/virtual-machines/lsv3-series)** | | Memory intensive workloads | M-series | **[Mv2-series](/azure/virtual-machines/mv2-series)** </br> **[Msv2 and Mdsv2-series Medium Memory](/azure/virtual-machines/msv2-mdsv2-series)** </br> **[Msv3 and Mdsv3 Medium Memory Series](/azure/virtual-machines/msv3-mdsv3-medium-series)** | |
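For context, encryption can be enabled on an existing virtual network whose VMs already run one of the supported sizes listed above. The Azure CLI sketch below uses illustrative resource names, and the flag names are assumed from the `az network vnet` commands and may vary by CLI version.

```bash
# Enable virtual network encryption on an existing virtual network
# (resource group and virtual network names are illustrative).
az network vnet update \
  --resource-group rg-demo \
  --name vnet-demo \
  --enable-encryption true \
  --encryption-enforcement-policy allowUnencrypted
```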
vpn-gateway | Create Gateway Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-gateway-powershell.md | Active-active gateways differ from active-standby gateways in the following ways * The virtual network gateway SKU can't be Basic or Standard. For more information about active-active gateways, see [Highly Available cross-premises and VNet-to-VNet connectivity](vpn-gateway-highlyavailable.md).-For more information about availability zones and zone redundant gateways, see [What are availability zones](https://learn.microsoft.com/azure/reliability/availability-zones-overview?toc=%2Fazure%2Fvpn-gateway%2Ftoc.json&tabs=azure-cli#availability-zones)? +For more information about availability zones and zone redundant gateways, see [What are availability zones](/azure/reliability/availability-zones-overview?toc=%2Fazure%2Fvpn-gateway%2Ftoc.json&tabs=azure-cli#availability-zones)? ## Before you begin |